Why I Quit Pomodoro in the AI Era - Let AI Run on Autopilot While Humans Rest

Tadashi Shigeoka ·  Mon, April 13, 2026

25 minutes of focused work, 5 minutes of rest. As I wrote in “My Encounter with the Pomodoro Technique,” I used the Pomodoro Technique for years. It was excellent for maintaining focus, and every timer ring brought a small sense of accomplishment.

Recently, though, I noticed the timer kept ringing at moments that had nothing to do with natural work boundaries. The reason was clear: I now spend more time waiting for AI agents to finish writing code than writing code myself.

Why Pomodoro Stopped Working

The Pomodoro Technique assumes that the human is the one doing the work. You concentrate for 25 minutes, take a 5-minute break, and repeat. As long as you’re the one executing tasks, it works beautifully.

But as I wrote in “Harness Engineering — The New Discipline Powering Software Development in the AI Agent Era,” the engineer’s role in the AI era is shifting toward “humans steer, agents execute.” When the executor is an AI agent, following a 25-minute timer loses its purpose.

| Pomodoro’s Assumption | AI-Era Reality |
| --- | --- |
| Human focuses for 25 minutes | Human gives instructions; AI works for 30 minutes to several hours |
| A 5-minute break refreshes the brain | AI’s work time is a natural break window |
| The timer marks work boundaries | AI task completion marks work boundaries |
| Long break after 4 pomodoros (2 hours) | Extended free time while AI runs autonomously |

One day, I delegated a large refactoring task to Claude Code. When the 25-minute timer rang, it hit me: “I’m not doing anything right now. The AI is working. There’s no reason to follow this timer.”

The New Rhythm: Humans Rest While AI Works

After dropping Pomodoro, I adopted a rhythm that aligns human time with AI work cycles.

graph LR
    subgraph "Human Time"
        H1["Define intent & specs"]
        H2["Review results"]
    end

    subgraph "AI Time"
        A1["Autonomous implementation"]
    end

    H1 -- "Delegate task" --> A1
    A1 -- "Completion notification" --> H2
    H2 -- "Delegate next task" --> A1

In practice, it looks like this:

  1. Human defines intent and specs (5-15 minutes)
  2. Delegate to AI and let it run (30 minutes to several hours)
  3. While AI runs, human does something else
  4. Review AI’s output when notified (5-15 minutes)
  5. Back to step 1
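
Steps 2 and 4 are the only ones that need the terminal. As a minimal sketch of step 2, a one-shot delegation could look like this; the task text and notification command are illustrative, and it assumes Claude Code’s non-interactive -p/--print mode (the autonomy settings from the sections below would be added in practice):

# Hypothetical wrapper for step 2: delegate one task non-interactively,
# then fire a desktop notification so step 4 starts as soon as the AI is done
TASK="Refactor the payment module as described in specs/payment-refactor.md"

claude -p "$TASK" > /tmp/claude-run.log 2>&1
osascript -e 'display notification "Task finished - time to review" with title "Claude Code"'  # macOS; use notify-send on Linux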

The “something else” during AI runtime falls into two categories:

  • Work that only humans can do: writing specs, talking to stakeholders, making architecture decisions, conducting interviews, running 1-on-1s
  • Being human: going for a walk, grabbing coffee, having a meal, stretching, getting proper sleep

As I wrote in “What an AI Outage Taught Me: AI Has Become a Teammate,” AI is now a teammate. When a teammate is working on something, you work on something else or take a break. It’s that simple.

Getting AI to Run Longer

The key to this rhythm is getting AI to work autonomously for as long as possible. As I compared in “Giving AI Coding CLIs Full Permission to Run Autonomously,” both Claude Code and Codex have autonomous execution modes.

But enabling autonomous mode alone isn’t enough. If the AI loses direction or hits an ambiguous decision point, it stops. To keep AI running for extended periods, you need clear instructions and a well-prepared environment.

That’s why I built autopilot skills for each AI agent.

Claude Code’s Autopilot Skill

With Claude Code, I combine a skill file with the /loop command to achieve autopilot behavior.

The skill lives in .claude/skills/autopilot/:

<!-- .claude/skills/autopilot/SKILL.md -->
# Autopilot Skill
 
## Overview
A skill for executing development tasks with long-running autonomous operation.
Processes a task list sequentially without human intervention.
 
## Workflow
1. Receive a task list
2. For each task:
   - Read relevant files and understand current state
   - Implement or research
   - Run tests to verify
   - Commit the changes
3. Output a summary after all tasks are complete
 
## Constraints
- Each commit corresponds to one issue; reference the issue in the commit message
- If tests fail, attempt fixes. If unresolved, create an issue and move on to the next task
- Follow existing architecture patterns
- Adhere to CLAUDE.md conventions

This skill is used with --permission-mode auto and /loop:

# Launch in auto mode, then process the task list via the skill
claude --permission-mode auto
 
# Inside the session, use /loop for autonomous execution
> /loop Process the task list in order

/loop is Claude Code’s autonomous loop feature that continues processing until tasks are complete. Combined with auto mode, it enables extended autonomous runs without permission prompt interruptions.
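
The completion notification that closes each cycle can also be automated. A minimal sketch, assuming Claude Code’s hooks support and a macOS notification command; this would live in .claude/settings.json, and the exact schema may differ by version:

{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"Task complete - ready for review\" with title \"Claude Code\"'"
          }
        ]
      }
    ]
  }
}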

Codex’s Autopilot Skill

With Codex, I write skill-equivalent instructions in AGENTS.md and use the --full-auto flag:

<!-- AGENTS.md (excerpt) -->
## Autopilot Mode
 
When given a task list prefixed with `[autopilot]`:
1. Process tasks sequentially
2. Run tests after each change
3. Commit per issue, referencing the issue in the commit message
4. If a task fails, create an issue and move on
5. Output a summary when all tasks are complete

# Launch in full-auto mode
codex --full-auto
 
> [autopilot] Process the following tasks in order:
> 1. ...
> 2. ...
> 3. ...

Codex’s --full-auto mode auto-approves reads, writes, and command execution within the workspace. Access outside the workspace is still restricted, maintaining safety during autonomous operation.

Key Design Principles for Autopilot Skills

Three things I prioritize when designing autopilot skills:

| Principle | Rationale |
| --- | --- |
| Failure fallback | Prevents the AI from getting stuck: create an issue and move on to the next task |
| One commit per issue | Links each commit to an issue, making changes traceable |
| Automatic test execution | Lets the AI self-verify, reducing the human review burden |

In the “Harness Engineering” article, I noted that OpenAI reported single Codex runs lasting over 6 hours. Building an environment where AI can run for extended periods is Harness Engineering in practice.

What Humans Do During AI Runtime

While AI runs autonomously, I’m intentional about how I spend my time.

Work Only Humans Can Do

As I discussed in “The Evolution Toward AI-First Engineering Organizations,” the more AI handles, the clearer it becomes what humans should focus on.

  • Defining specs and intent: What to build and why. This is a human decision. As I explored in “Lessons from Block’s ‘From Hierarchy to Intelligence’,” even when AI handles information routing, final decision-making responsibility stays with humans
  • Stakeholder conversations: Customer calls, team 1-on-1s, hiring interviews. Human-to-human communication can’t be delegated to AI
  • Improving the harness: Every time AI makes a mistake, engineer it so it never happens again. Updating CLAUDE.md, AGENTS.md, adding tests, refining linter rules
  • Reviewing AI output: As discussed in “The ‘Workslop’ Trap in AI-Generated Code,” ensuring the quality of AI-generated code is a human responsibility

Being Human

This isn’t a joke. It’s a serious point.

AI agents don’t get tired. They run 24/7 at consistent quality. Humans need rest. The time AI spends running autonomously is the perfect rest window for humans.

While Claude Code spends 30 minutes on a refactoring task, I go for a walk. When I come back, I review the results and delegate the next task. This creates far better work-life rhythm than Pomodoro’s 25-on-5-off cycle ever did.

Investing Human Effort to Maximize AI Autonomy

Getting AI to run for extended periods requires upfront human investment.

1. Investing in CLAUDE.md / AGENTS.md

As I wrote in “Refactoring to AI-Friendly Code,” creating an environment that AI can easily understand directly improves AI productivity.

Project conventions, architecture constraints, test execution commands. Documenting these in CLAUDE.md or AGENTS.md lets AI run without hesitation.
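
As a rough illustration, a CLAUDE.md excerpt could look like the following; the conventions, paths, and commands are placeholders for whatever your project actually uses:

<!-- CLAUDE.md (hypothetical excerpt) -->
## Conventions
- TypeScript strict mode; avoid `any`
- Follow the existing layout: `src/domain`, `src/infra`, `src/api`

## Architecture Constraints
- Code in `src/domain` must not import from `src/infra`
- All external API calls go through the clients in `src/infra/clients`

## Testing
- Run `npm test` after every change
- Run `npm run lint` before committing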

2. Building Comprehensive Tests

With tests, AI can self-verify its changes. Without tests, AI either asks the human for confirmation or proceeds without verification.

Tests directly amplify AI’s autonomous capability.

3. Accumulating Skills

Recurring tasks get defined as skills. As with “Generating Meeting Minutes from Video with Gemini CLI” and “gh-security-scan,” defining a workflow as a skill means you can delegate the same pattern to AI repeatedly.

I maintain a growing collection in the oh-my-skills repository, supporting Claude Code, Codex, and Gemini CLI.

4. Running Multiple AIs in Parallel

The “Using Three AIs Like a MAGI System” approach extends to autopilot workflows. Delegate refactoring to Claude Code while Codex implements a separate feature. As I noted in “Comparing Multi-Platform Strategies: Claude Code, Codex, and Gemini,” each tool has distinct strengths. Running multiple AIs in parallel maximizes the productive use of human wait time.
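
A minimal sketch of what that can look like, using separate git worktrees so the agents’ changes stay isolated; the branch names, task descriptions, and Codex’s non-interactive exec invocation are assumptions to adapt to your own setup:

# Hypothetical parallel run: one worktree per agent so changes never collide
git worktree add -b refactor/payments ../repo-refactor
git worktree add -b feature/export-csv ../repo-feature

# Claude Code refactors in one worktree...
(cd ../repo-refactor && claude -p "Refactor the payment module per specs/payment-refactor.md") &

# ...while Codex implements a separate feature in the other
(cd ../repo-feature && codex exec --full-auto "Implement CSV export per specs/export-csv.md") &

wait  # review both when the completion notifications arrive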

Takeaways

The Pomodoro Technique is a brilliant method. But its core assumption was that the human is the task executor. Now that AI agents are the executors, there’s no reason to follow a 25-minute timer.

What I adopted instead is a simple rhythm: align the human’s schedule with AI work cycles.

  • Use autopilot skills and autonomous execution modes to keep AI running as long as possible
  • While AI runs, humans focus on work that only humans can do
  • Humans get tired. AI doesn’t. Rest when AI is working

In the “Harness Engineering” article, I cited Mitchell Hashimoto’s six-stage framework. The final stage is “always have agents running.” Once you achieve that state, your rhythm is no longer set by a timer but by AI task completion notifications.

Instead of a Pomodoro timer, I have AI task completion notifications. That’s my new Pomodoro.

That’s all from the front lines, where I’ve traded the Pomodoro timer for AI-driven development rhythms.

References