Diffs vs Fast Apply

Why Fast Apply aligns with the bitter lesson by letting models code naturally

Posted by Tejas Bhakta

5 minute read


"Everything is Model[s]"

I completely agree with this philosophy. On the surface, Fast Apply seems like the antithesis of this approach - adding infrastructure where the bitter lesson says to trust the model.

But I think there's a deeper alignment here. Cursor and Continue feel amazing because they let Claude code the way Claude naturally thinks, and they build the infrastructure to make that happen. What we've found is that when models generate code, they don't naturally think in diffs or search/replace patterns - they think in complete, coherent code blocks.

Parallels

The rationale behind Fast Apply is quite simple: you should pick the fastest, least expensive way to do a job. A similar principle holds in human intelligence. Your most expensive, smartest compute should be used for high-level thinking, and your least expensive, low-skill compute should be used for low-level tasks. This seems to be a universally self-organizing principle of systems composed of intelligent agents, and Fast Apply is one of the first instances to arise as large models get increasingly large and capable. Should a 1T+ parameter model really be focused on diffs, using 2x the compute to apply them, or should it be focused on the logic of the code, handing off to a less intelligent model to apply it?
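
As a sketch of that division of labor - every name here is hypothetical, and both model calls are stubbed out, so this illustrates the routing rather than any real API:

    # The expensive model decides *what* to change; the cheap model does
    # the mechanical merge. Both calls are stubs: this is the shape of the
    # pipeline, not a real interface.

    def plan_edit(task: str, file_contents: str) -> str:
        """Frontier-model call: reason about the change and return a
        complete (possibly lazy) rewrite of the relevant region."""
        raise NotImplementedError("call your frontier model here")

    def apply_edit(file_contents: str, update: str) -> str:
        """Small-model call: merge the update into the original file."""
        raise NotImplementedError("call your apply model here")

    def edit_file(task: str, file_contents: str) -> str:
        update = plan_edit(task, file_contents)   # smart, expensive compute
        return apply_edit(file_contents, update)  # fast, cheap compute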

The Impedance Mismatch Problem

Forcing models to reshape their output into diff formats or search/replace operations introduces an impedance mismatch. It's like asking a painter to describe brush strokes instead of just painting.

When you ask Claude to make a code change, it naturally wants to:

  1. Show you the complete context
  2. Express the change as a coherent block
  3. Maintain the flow and structure of the code

Traditional approaches force models to:

  • Generate precise line-by-line diffs
  • Use search/replace patterns that break their natural flow
  • Allocate attention to syntactically correct file operations rather than code logic - the two formats are contrasted below
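
To make the mismatch concrete, here is the same one-line change written both ways. The file and edit are invented for illustration, and the search/replace syntax shown is one common convention rather than any particular tool's format. First, the way a model naturally writes it - a coherent block with an elision marker:

    def get_user(user_id):
        # ... existing code ...
        user = db.fetch(user_id)
        if user is None:
            raise UserNotFoundError(user_id)
        return user

And the same change as a search/replace operation, where the model must reproduce the old text exactly before it can say anything new:

    <<<<<<< SEARCH
        user = db.fetch(user_id)
        return user
    =======
        user = db.fetch(user_id)
        if user is None:
            raise UserNotFoundError(user_id)
        return user
    >>>>>>> REPLACE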

Why Diffs Feel Unnatural to Models

Models excel at understanding and generating complete, contextual code. When we force them into diff formats, we're asking them to:

  • Fragment their thinking: Instead of expressing a complete solution, they must break it into atomic operations
  • Lose semantic context: Diffs focus on textual changes rather than logical transformations
  • Handle edge cases manually: Line numbers, whitespace, and formatting become the model's responsibility (see the toy example after this list)
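
A toy illustration of that last point - exact-match editing fails silently when the model's memory of the file differs from the actual file by a single whitespace character:

    # The file indents with a tab, but the model reproduces the line with
    # spaces. The search string never matches, so the "edit" is a no-op.
    file_contents = "def greet(name):\n\treturn f'hello {name}'\n"

    search  = "    return f'hello {name}'"   # four spaces, not a tab
    replace = "    return f'hello {name}!'"

    patched = file_contents.replace(search, replace)
    print(patched == file_contents)  # True - the change was silently dropped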

This is why diff-based approaches fail 20% of the time: we're fighting against how models naturally express code changes. Recent improvements have made diffs more reliable in the sense that they error out less, but at some level, models are aware of their own capabilities and are biased toward diffs that they know won't break.

Taking This Logic to Its Extreme

We start to see the logic here when we push this to extremes. Let's go one step further: instead of asking the model to output a diff, let's ask it to output a binary executable that will modify the file.

Intuitively, we know that this will almost never work. Forcing a model into a format it's less familiar with will always produce worse results.

Once you start chasing the nines of reliability, infrastructure inherently comes into play.

Fast Apply: Infrastructure That Enables Natural Expression

The infrastructure isn't fighting the model - it's removing the friction between how models naturally express code changes and how those changes get applied.

Fast Apply lets models:

  • Generate complete, coherent code blocks
  • Focus on the logic rather than the mechanics
  • Express changes the way they naturally think - a sketch of such a call follows
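
In practice, that can look something like the following minimal sketch, assuming an OpenAI-compatible endpoint. The base URL, model name, and tag format here are illustrative assumptions - consult Morph's docs for the real interface:

    # Sketch of a Fast Apply call: the frontier model's lazy, natural edit
    # goes in; a fully merged file comes out. Endpoint, model name, and the
    # <code>/<update> tags are assumptions for illustration.
    from openai import OpenAI

    client = OpenAI(base_url="https://api.morphllm.com/v1", api_key="YOUR_KEY")

    original = open("users.py").read()   # the file being edited
    update = (
        "def get_user(user_id):\n"
        "    # ... existing code ...\n"
        "    if user is None:\n"
        "        raise UserNotFoundError(user_id)\n"
        "    return user\n"
    )

    merged = client.chat.completions.create(
        model="morph-v3-large",  # assumed model name
        messages=[{
            "role": "user",
            "content": f"<code>{original}</code>\n<update>{update}</update>",
        }],
    ).choices[0].message.content  # the fully merged file

The frontier model never has to leave its natural register: it writes the lazy block, and the apply model produces the merged file.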

This is why Cursor feels so fluid compared to traditional IDE experiences. It's not about adding complexity - it's about removing the artificial constraints we've imposed on how models interact with code.

The Bitter Lesson Applied

Fast Apply might actually be the most "bitter lesson" approach - letting models code exactly how they want to code, just making it happen at the speed of thought.

The bitter lesson teaches us:

  1. Scale beats clever algorithms
  2. Let the model do what it does best
  3. Remove human-imposed constraints

Fast Apply embodies this by:

  • Using a specialized model trained on millions of code change examples
  • Letting frontier models express changes naturally
  • Removing the artificial constraint of diff formats

Why This Matters for AI Coding

This framing positions Morph not as complexity fighting the model, but as infrastructure that enables the model's natural expression - which is very much in line with the "do the simple thing" philosophy when viewed from the model's perspective.

When models can express code changes naturally:

  • Higher success rates: No more failed diffs or malformed patches
  • Better semantic understanding: Models focus on logic, not formatting
  • Faster iteration: Changes happen at the speed of thought

The Future is Natural

The future of AI coding isn't about teaching models to use our tools better - it's about building tools that work the way models naturally think.

Fast Apply is just the beginning. As models get better at expressing their intent, our infrastructure should get better at understanding and executing that intent seamlessly.

Ready to let your models code naturally? Get started with Morph and experience the difference.