
Death by Approval Clicks

Andy Görnt
Modern AI coding workflows promise speed, but many teams are stuck in a loop of constant approval clicks. This article explores how excessive confirmations break developer flow and reduce code quality, and what it takes to move from command-level approvals to real human oversight in a modern product management process.

I remember the moment it became obvious that something was off. I had two Antigravity instances open, both running Gemini 3.1 Pro on high. Code was flowing, iterations were fast, and the system looked, on the surface, exactly like what modern AI-assisted development promises. Then I noticed what I was actually doing most of the time.

Click. Approve. Click. Approve.

Not reviewing. Not thinking. Just approving.

At first it felt like a minor annoyance. A few confirmations here and there, some friction that comes with powerful tools. But as the session went on, the pattern intensified. Every git commit, every small command, every agent action required confirmation. The system was supposed to be in auto mode. It wasn’t.

After a while, I stopped reading what I was approving.

That’s when it becomes dangerous.

The illusion of control

In product management, we often talk about control as a necessary counterweight to speed. Approval steps, review gates, structured workflows — they all exist to prevent mistakes and maintain quality. The assumption is simple: more control leads to better outcomes.

But in practice, control does not scale linearly with approvals. At some point, it flips.

Too many approvals don’t increase safety. They destroy attention.

I was approving two to three actions per minute. Most of them were trivial: git add, git commit, small automated steps that had no strategic importance. The system asked for confirmation anyway. And because it asked so often, the meaning of approval collapsed.

Approval became muscle memory.

In that state, the one approval that actually matters — a risky change, a flawed implementation, a security issue — becomes harder to detect, not easier.

How the real workflow actually looks

If you strip away the noise, the development loop I care about is simple:

  • create an idea
  • iterate on it
  • commit changes
  • open a PR
  • run QA (code quality, security, structure)
  • iterate based on feedback

This loop is where product discovery meets execution. It is where product strategy becomes concrete. Everything else is infrastructure.
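Sketched as code, the loop is just an ordered list of steps. The commands below are illustrative placeholders for my stages, not a fixed setup; the QA script name is invented for the example:

```python
import subprocess

# Illustrative placeholders for each stage of the loop; the real
# commands depend on your tooling (git, gh, your QA runner, etc.).
LOOP_STEPS = [
    ("commit changes", "git add -A && git commit -m 'iterate'"),
    ("open a PR",      "gh pr create --fill"),
    ("run QA",         "./qa.sh"),  # hypothetical QA script
]

def run_loop(dry_run: bool = True) -> list[str]:
    """Walk the loop once; with dry_run=True, only report the commands."""
    executed = []
    for name, cmd in LOOP_STEPS:
        if dry_run:
            executed.append(f"[{name}] {cmd}")
        else:
            subprocess.run(cmd, shell=True, check=True)
            executed.append(name)
    return executed
```

The point of writing it down this way: every step is mechanical. None of them, on their own, is a decision a human needs to confirm.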

To strengthen this loop, I introduced a second system: Claude Code. Not as a replacement, but as a counterbalance. A different model, a different perspective, like another engineer reviewing the work. Claude runs a QA and security pass on the PR, using a custom skill that spawns multiple agents.

Conceptually, this is powerful. Multiple agents collaborating, each with their own context, pushing the code towards higher quality.

Operationally, it broke down.

Not because of the models. Because of the approvals.

What actually breaks

When every step requires confirmation, three things start to degrade almost immediately.

First, flow disappears. You are no longer thinking in systems or outcomes. You are reacting to prompts. Each interruption resets context, even if only slightly. Over time, this compounds into real cognitive fatigue.

Second, speed becomes unpredictable. Not because the system is slow, but because the human becomes the bottleneck. Every micro-decision adds latency, and the loop that should feel continuous starts to stutter.

Third, and most importantly, judgment weakens. When approvals are frequent and low-value, attention drifts. The brain optimizes for throughput instead of evaluation. You stop asking, “Is this correct?” and start asking, “How fast can I get through this?”

Human oversight turns into human noise.

This is the opposite of what a good product management process should do. Instead of improving decision quality, the system obscures it.

The unexpected insight

One evening, after another session of mechanical approvals, I decided to approach the problem differently. Instead of trying to tweak settings manually, I asked Claude Code to solve it.

I created a new skill and let it investigate the issue itself.

What followed was not a configuration tweak. It was a reframing.

Claude didn’t assume the settings were wrong. It questioned the system. Together, we explored where the behavior actually originated. That led us outside the UI, into how commands were executed and how approvals were triggered internally.

The key insight was surprisingly specific.

Piped commands — anything using | — broke the auto-approval logic completely.

No matter what was configured in Antigravity, those commands forced manual approval. The UI suggested that auto mode was active, but under the hood, it wasn’t.
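To see why a pipe can defeat auto-approval, consider a minimal approver. This is a simplified sketch of my understanding of the failure mode, not Antigravity's actual implementation: an allowlist of safe command prefixes says nothing about what the rest of a compound command does, so anything containing a pipe has to escalate.

```python
import re

# Hypothetical auto-approve rules: prefix patterns for "safe" commands.
AUTO_APPROVE = [
    re.compile(r"^git (add|commit|status)\b"),
    re.compile(r"^ls\b"),
]

def needs_manual_approval(command: str) -> bool:
    """A naive approver: compound shell commands (pipes, &&, ;) always
    escalate, because a prefix match on the first word cannot vouch for
    the rest of the pipeline."""
    if any(tok in command for tok in ("|", "&&", ";")):
        return True
    return not any(p.match(command) for p in AUTO_APPROVE)
```

Under these rules, "git status" sails through, but "git log | head -5" forces a manual click, even though both are harmless. That mismatch is exactly what I was experiencing.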

At the same time, parts of Claude Code’s own interface reinforced the issue. The system looked automated, but still required confirmation in critical places.

This was not a user error.

It was a structural bug.

Fixing the system, not the symptom

Once the real cause was clear, the solution became straightforward.

I installed a small but effective tool that patches Antigravity’s behavior:

https://github.com/yazanbaker94/AntiGravity-AutoAccept

Then I let Claude take over the rest.

Instead of navigating the UI, it wrote and adjusted configuration files directly. Over two or three iterations, we aligned both systems until approvals behaved as expected.

The result was not partial improvement.

It was full auto mode.

No more meaningless confirmations. No more constant interruptions.

Just the loop.

Before and after

The difference was immediate and measurable.

Before, I was dealing with two to three approvals per minute. The work felt fragmented, and mental fatigue set in quickly. Even short sessions became exhausting because attention was constantly being redirected.

After, the system became quiet. Not silent, but focused. Approvals only appeared where they mattered. The development loop — idea, iteration, PR, QA — flowed without interruption.

More importantly, I started reviewing again.

Not commands. Outcomes.

The real trade-off

Full automation always comes with risk. Removing approvals blindly is not a solution. It simply shifts the problem.

What matters is where you place the boundary.

In my setup, I enforce strict global rules inside the agents. Certain actions — especially final implementations or major changes — require explicit human approval. Not because the system cannot proceed without it, but because those decisions carry real impact.

Everything else is delegated.

Approve decisions, not commands.

This distinction is subtle, but critical. It transforms the role of the human from operator to supervisor.
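One way to encode "approve decisions, not commands" is a thin policy layer in front of the agent. The categories below are illustrative assumptions, not a prescribed taxonomy; where the boundary sits is a judgment call for each team:

```python
from dataclasses import dataclass

# Illustrative impact categories; the real boundary is team-specific.
HIGH_IMPACT = {"final_implementation", "architecture_change", "product_direction"}

@dataclass
class Action:
    kind: str         # e.g. "git_commit", "architecture_change"
    description: str

def requires_human(action: Action) -> bool:
    """Escalate only actions whose *decision* carries real impact;
    everything else (commits, file edits, test runs) is delegated."""
    return action.kind in HIGH_IMPACT
```

A git commit never reaches a human under this policy. An architecture change always does. The approval volume drops by an order of magnitude, and each remaining approval is one worth reading.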

A broader pattern

What this experience highlights is a deeper issue in how we design modern development workflows.

We are increasingly working with systems where agents:

  • generate code
  • review code
  • iterate on feedback

Yet we still insert humans into the loop at the lowest possible level, asking them to approve individual commands instead of evaluating outcomes.

This creates a mismatch.

The system operates at the level of flows and iterations. The human is forced to operate at the level of clicks.

In product terms, this is a broken interface.

Towards a better model

If we take product strategy and product discovery seriously, we need to think of development not as a sequence of tasks, but as a connected system. A loop where feedback, iteration, and validation continuously shape the outcome.

In such a system, approvals should exist, but they should align with meaningful checkpoints:

  • architectural decisions
  • product direction changes
  • final implementations

Not with internal mechanics like git commit.

This is where the idea of a product operating system becomes relevant. A system that connects strategy, discovery, delivery, and feedback into a coherent flow, where humans focus on direction and agents handle execution.

The goal is not to remove humans from the loop.

It is to move them to the right level of abstraction.

Conclusion

“Death by approval clicks” is not a tooling problem. It is a design problem.

When systems ask for confirmation at the wrong level, they don’t create safety. They create friction, fatigue, and ultimately worse decisions.

The fix is not more approvals or fewer approvals.

It is better approvals.

Approvals that match how work actually happens.

Approvals that protect what matters.

Everything else should be automated.