jes.ph

How I Use AI to Run My Daily Dev Workflow

March 14, 2026

I do not use AI as a replacement for thinking. I use it as structure. The biggest shift in my workflow happened when I stopped opening a blank chat window and started building a repeatable system around the kind of work I do every day.

Current Setup

Two tools, shared CLI, reusable skills

My current workflow is simple to describe: I usually lean on one tool more for orchestration and the other more for implementation, but I can swap them depending on the task. Both sit on top of a shared command line and a library of repeatable skills.

Why It Works

Less repeated setup

Better review consistency

Faster context switching

That system now helps me move from idea to implementation faster. It also reduces context switching, removes a lot of repeated setup, and gives me a cleaner handoff between planning and shipping.

I Use Two Different AI Roles

I currently pay for Claude and Codex, and my workflow got much better once I stopped treating them as interchangeable. I give each one a job that matches the way I actually like to work.

Claude

Orchestration

Choose where I am working

Pull in the right context

Load the right instructions

Start the session without repetition

Codex

Execution

Work close to the code

Read the repository carefully

Make changes and run checks

Tighten up the final result

Separating those roles sounds small, but it changes the feel of the day. One side helps me get oriented. The other helps me finish the work.

The Tools I Actually Use

I have been lucky to join a genuinely AI-first company, so at work we lean into AI properly and I have broad access to the tools I need. On the personal side, I keep pro subscriptions for the tools I use most. In both cases, having access to Claude and Codex does not feel like duplication. It feels like a useful split between planning and execution.

Planning Tool

Claude


I use Claude for shaping the session before code starts moving. It is the tool I reach for when I want to load instructions, navigate a workspace, reason through the next move, or kick off one of my command-driven flows.

Implementation Tool

Codex

I use Codex when I want a tool that stays closer to the repository. It is where I want careful file reads, code edits, checks, and a tighter loop between implementation and verification.

One thing I like is that both parts of the setup can lean on the same command-line foundation. In both apps, I use the GitHub CLI heavily, which means repository operations, pull request review, and follow-up actions can all stay close to the terminal instead of turning into browser work.
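As a rough sketch of what that terminal-first loop can look like, here are two hypothetical wrappers around real GitHub CLI commands. The function names and PR numbers are my own placeholders, not part of the actual setup:

```shell
# Hypothetical wrappers around the GitHub CLI; "$1" is a PR number.
inspect_pr() {
  gh pr view "$1"      # title, body, and review state
  gh pr diff "$1"      # the full diff, in the terminal
  gh pr checks "$1"    # CI status for the head commit
}

approve_pr() {
  # Approve a clean PR without opening the browser.
  gh pr review "$1" --approve --body "✅ LGTM"
}
```

With `gh` authenticated inside a repository checkout, something like `inspect_pr 123` covers most of a review pass before a single comment is written.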

Skills Beat Repeated Prompts

The most useful part of my setup is not a model. It is the layer of small written instructions I keep around the work. I treat those instructions like lightweight skills. Each one is focused on a specific task and describes the steps clearly enough that I do not need to reconstruct the process every time.

I keep these as plain markdown files because I want them to be easy to read, easy to update, and close enough to the codebase that they evolve with the work instead of becoming stale documentation.

In practice, that looks more like a small system than a prompt:

Workflow

Session Setup

Open the right repository, pick the right branch, and start from a predictable setup instead of rebuilding context by hand.

Quality

Pull Request Review

Run through the same checks for correctness, conventions, and test coverage every time, then classify findings as P1, P2, or P3 with a clear summary.

Operations

Business Admin

Handle recurring records with the right structure and formatting without stopping to remember every detail.

Codebase

Architecture Cleanup

Apply the same architectural rules across edits so refactors feel intentional rather than improvised.

This gives me a much better result than relying on memory. Instead of writing a clever prompt over and over, I write the workflow down once and let the assistant follow it.
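As an illustration, one of those skills can be nothing more than a markdown file checked in near the code. The path and checklist wording below are hypothetical, not my real skill file:

```shell
# Hypothetical skill file; the path and checklist wording are illustrative.
mkdir -p skills
cat > skills/pr-review.md <<'EOF'
# Skill: Pull Request Review

1. Read the PR description and the full diff.
2. Check correctness, project conventions, and test coverage.
3. Classify every finding:
   - P1: critical, must fix before merge
   - P2: major, should fix soon
   - P3: minor, nice to have
4. Leave inline comments where needed, then summarize the overall state.
5. If the review is clean, approve with ✅ LGTM.
EOF
```

Because it is plain markdown in the repository, updating the process is a normal commit rather than a prompt archaeology session.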

Favorite Skill

Structured PR Review

This is probably my favorite part of the whole setup. The review skill reads the pull request, checks the implementation carefully, classifies findings with priority badges, and leaves a summary that is actually useful.

P1 (critical) · P2 (major) · P3 (minor)

It turns review into a repeatable system instead of a fuzzy vibe check. That matters a lot on work projects where consistency is part of the job.

When everything looks good, it can also approve the pull request with a simple checkmark and LGTM, which keeps the clean cases fast without losing the structure.

Priority Model

Critical issues rise to the top, substantial fixes are clear, and lighter improvements stay lightweight.

Review Output

Inline comments where needed, a summary that explains the overall state of the PR, and a quick ✅ LGTM approval when the review is clean.

Team Signal


Valid comments get a positive reaction before the fix is made and resolved. Invalid comments get a negative reaction, a short explanation, and then the thread is closed.

AI Signature

All AI comments are signed so they are easy to distinguish from manual human feedback in the pull request conversation.

Commands Are Where the Momentum Comes From

I also keep a small set of commands around the workspace so I can start common flows quickly. These are not fancy. That is the point. They exist to remove friction.

One command helps me start a workspace session. Another helps me pick a repository, fetch branches, create or reuse a worktree, and open the right environment.
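A minimal sketch of that repository flow, assuming a layout where worktrees live next to the main clone. The function name and the "repo-branch" path convention are placeholders, not my exact commands:

```shell
# Hypothetical /open-repo flow: fetch, then create or reuse a worktree
# that lives next to the main clone as "<repo>-<branch>".
open_repo() {
  repo_dir="$1"
  branch="$2"
  cd "$repo_dir" || return 1
  git fetch --all --prune
  wt="../$(basename "$repo_dir")-$branch"
  if [ -d "$wt" ]; then
    echo "Reusing worktree $wt"
  else
    git worktree add "$wt" "$branch" || return 1
  fi
  cd "$wt"
}
```

The worktree keeps each branch in its own directory, so switching tasks never means stashing half-finished work.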

When the command layer is doing its job, I spend less time remembering setup steps and more time deciding what actually matters. That makes AI feel less like an interruption and more like part of the operating system of the project.

Command Layer

Fast paths for common flows

/start-workspace (Bootstrap)

Sync the workspace and get oriented quickly.

/open-repo (Context)

Choose a repo, fetch branches, and open a focused worktree.

/review-changes (Review)

Run a structured review flow instead of an ad hoc pass.

/clean-architecture (Refine)

Apply the project’s preferred architecture rules consistently.

The exact command names are less important than the idea behind them. I want common actions to be short, obvious, and easy to trust.

I also like wiring those commands into the assistants directly instead of keeping them in a separate mental drawer. When the commands are exposed through the tool I am already using, I reach for them more often and the workflow stays fast.

The GitHub CLI is a big part of that. It is the shared layer I use across both apps for PR inspection, diffs, metadata, and the follow-up work that happens after review.

A Good AI Workflow Is Mostly Good Environment Design

A lot of people talk about AI workflow as if the magic is all in the model. My experience has been the opposite. The quality comes from environment design.

For me, that means a few things:

Predictable workspace root
Plain-text instructions near the code
Shared command folders wired into the assistant
Dynamic paths that travel well between machines
Worktrees for clean branch isolation
Fast access to a new focused session
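The predictable-root and dynamic-path points can be as small as a few lines of shell. The variable names here are illustrative, not taken from my setup:

```shell
# Illustrative environment defaults.
# A predictable root that travels between machines via one env var.
WORKSPACE_ROOT="${WORKSPACE_ROOT:-$HOME/workspace}"

# Plain-text instructions live under the root, next to the work.
SKILLS_DIR="$WORKSPACE_ROOT/skills"

mkdir -p "$WORKSPACE_ROOT" "$SKILLS_DIR"
```

Because everything derives from one variable, the same scripts and skills resolve correctly on a laptop, a desktop, or a fresh machine.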

None of this is flashy, but it compounds. Once the environment starts carrying some of the load, the assistant can be more useful because it is entering a system instead of a blank room.

Why It Matters More on Work Projects

Work projects are where this kind of setup really pays off. There are more services, more conventions, more review expectations, and usually more hidden knowledge that slows people down.

In that kind of environment, the orchestration side helps load the right project context, branch strategy, and team instructions before any code gets touched. The execution side can then work inside those constraints instead of guessing.

In practice, that means AI becomes much more useful for things like:

Starting in the right repository, branch, and local environment
Following team-specific review and testing expectations
Applying architectural patterns consistently across changes
Reducing the overhead of switching between active projects
Making handoffs cleaner because more of the workflow is documented

A good example is code review. If the review flow already knows to check correctness, convention drift, test coverage, and whether feedback belongs as inline comments or a higher-level summary, the assistant can behave much more like a reliable teammate and much less like a generic assistant.

I also have a follow-up skill for handling review comments after they land. It reads through the open pull request comments, decides whether each one is valid, reacts to it, and then takes the next step with intention.

This works as a loop. If the comment is valid, the workflow reacts positively, updates the code, pushes the change, and resolves the thread. If the comment is not worth acting on, it reacts negatively, explains why it can be skipped, and resolves the thread without wasting time. The loop closes once the review is clean and the pull request is approved.
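One triage step of that loop could be sketched with the GitHub CLI as below. The function, arguments, and control flow are my own assumptions; the calls themselves use GitHub's standard REST reactions endpoint and GraphQL resolveReviewThread mutation:

```shell
# Illustrative triage step for a single review comment. The function name
# and arguments are assumptions; the API calls are standard GitHub ones.
react_and_resolve() {
  comment_id="$1"
  thread_id="$2"
  verdict="$3"   # "valid" or "invalid"

  if [ "$verdict" = "valid" ]; then
    reaction='+1'   # then: fix the code, push, and resolve below
  else
    reaction='-1'   # then: explain briefly and resolve below
  fi

  # React via the REST reactions endpoint ({owner}/{repo} is filled in by gh).
  gh api --method POST \
    "repos/{owner}/{repo}/pulls/comments/$comment_id/reactions" \
    -f content="$reaction"

  # Resolve the thread via the GraphQL resolveReviewThread mutation.
  gh api graphql -f id="$thread_id" -f query='
    mutation($id: ID!) {
      resolveReviewThread(input: {threadId: $id}) { thread { isResolved } }
    }'
}
```

Run over each open comment, this either fixes and closes a thread or documents why it was skipped, so nothing is left dangling when the loop ends.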

Cross-Check

Sometimes the tools review each other in a loop

The point is not ceremony. It is a fast second perspective that can keep cycling until the change is clean enough to approve.

Implementation: Codex implements the change.

Review: Claude reviews the pull request.

Workflow: Claude can also drive the workflow.

Verification: Codex checks the implementation.

Either tool can make the first move, the other can review it, and the loop keeps going until the PR is clean enough to approve.

The bigger the project gets, the less I want to rely on memory. That is why I keep coming back to skills, commands, and project-local instructions. They make AI more predictable, which is what I want when the work is shared with other people.

My Goal Is Not More AI

The goal is not to add AI to every possible step. The goal is to remove avoidable friction, keep context intact, and create enough structure that I can stay in motion.

That is the workflow I keep coming back to: default roles for planning and shipping, the freedom to swap tools when the task calls for it, and a layer of reusable skills and commands in the middle that makes the whole thing feel reliable.

Current Snapshot

This is my workflow right now

As of March 14, 2026

This setup is what works for me at the moment, but I do not expect it to stay frozen. AI tooling is moving quickly, the strengths of each product keep shifting, and my habits will keep evolving with them.

The important part is not memorizing this exact stack. It is building a workflow that can adapt as the tools change.