
What is Morph For?

Understanding Morph: The fastest way to apply code updates from AI

Posted by Tejas Bhakta

5 minute read



What is Morph?

Morph is a specialized LLM designed to seamlessly apply code changes suggested by frontier AI models (like Claude or GPT-4o) to your existing code files. It acts as the final step in the AI coding workflow:

  1. Your original code file
  2. The changes suggested by an AI assistant
  3. The final, correctly updated file

While this might sound simple, getting from step 2 to step 3 reliably is what makes or breaks the feel of an AI coding product. Here's why.

The Current AI Coding Experience

When you use AI coding assistants today, the workflow looks like this:

  1. You ask the AI to modify your code, and you bring in all the context you feel is relevant
  2. It gives you a snippet/snippets of updated code

Your options are:

  1. Have Claude/GPT-4o re-output the whole file - expensive and slow
  2. Ask Claude/GPT-4o to output a diff - models are not trained to do this, and it's extremely error-prone
  3. Use tree-sitter or a similar parser to extract the relevant chunks, then have the frontier model re-output that section in full - closer, but still error-prone, and it misses semantic understanding

This manual process is slow, error-prone, and breaks your flow. But why can't we just automate it with traditional software?
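
To make the problem concrete, here is a made-up example of the three pieces involved: the original file, the kind of lazy snippet a frontier model typically returns, and the merged file you actually want written to disk. The names and contents are hypothetical.

```python
# Illustrative only: hypothetical inputs and desired output of the "apply" step.

original_file = """\
def calculate_total(items):
    total = 0
    for item in items:
        total += item.price
    return total
"""

# What a frontier model typically returns: a partial snippet that elides
# unchanged code instead of repeating it.
ai_update_snippet = """\
def calculate_total(items):
    # ... existing code ...
    return round(total, 2)
"""

# What you actually want written back to disk.
desired_result = """\
def calculate_total(items):
    total = 0
    for item in items:
        total += item.price
    return round(total, 2)
"""
```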

Why Software Solutions Fall Short

The obvious solution might seem to be using diff/patch tools or writing deterministic matching algorithms. After all, we're just combining two pieces of code, right?

In his famous "Software 2.0" blog post, Andrej Karpathy argued that certain types of software should be written by training neural networks rather than through explicit programming. Code editing is a perfect example of this paradigm shift.


Traditional software approaches fail at least 30% of the time because the task requires deep semantic understanding that's difficult to encode in rules. The key insight is that for many real-world problems, it's far better to collect examples of desired behavior than to explicitly program rules. This is especially true for code editing, where we can gather millions of examples of code changes but struggle to write reliable rules.

Just as AI is eating traditional software, specialized AI models are now eating specific software tasks. Code editing is a perfect example where:

  1. Context is Non-Deterministic:

    • AI suggestions often include semantic changes that don't match exactly
    • Variable names might be different but mean the same thing
    • Code structure might be reorganized while preserving functionality
    • A model can learn these patterns from thousands of examples
  2. Large File Challenges:

    • Frontier models struggle with line numbers in large files
    • Traditional diff tools can't handle partial function updates (see the sketch after this list)
    • Context windows get overwhelmed with big files
    • Neural networks can learn to focus on relevant sections
  3. Subtle Dependencies:

    • Import statements need careful handling
    • Type definitions might need updates
    • Test files often need corresponding changes
    • Models can learn these relationships implicitly
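
To see the failure mode concretely, here is a minimal sketch (hypothetical code, and only one of several naive approaches) of what happens if you diff the partial snippet textually against the original: the elided body shows up as a deletion, and the renamed parameter shows up as a rewrite.

```python
# Minimal sketch of why naive textual diffing breaks down on partial snippets.
import difflib

original = [
    "def calculate_total(items):",
    "    total = 0",
    "    for item in items:",
    "        total += item.price",
    "    return total",
]

# The AI's partial snippet: body elided, parameter renamed.
snippet = [
    "def calculate_total(item_list):",
    "    # ... existing code ...",
    "    return round(total, 2)",
]

for line in difflib.unified_diff(original, snippet, lineterm=""):
    print(line)

# The diff marks the loop body as removed and the signature as rewritten,
# so applying it as a patch would silently drop working code. Hard-coding
# rules for "# ... existing code ..." helps, but every frontier model has
# its own quirks, which is exactly the part that's hard to capture in rules.
```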

Why You Need a Model

This is why we built Morph as a specialized model. We're hellbent on making this workflow the best, fastest, cheapest, and most reliable way to apply updates from AI.

  1. Semantic Understanding:

    • Morph understands code semantically, not just textually
    • It can match calculate_total(items) with calculate_total(item_list)
    • It preserves your code's style and structure
  2. Speculative Edits:

    • We use a variant of speculative decoding
    • Unlike traditional speculative decoding that uses a small model to predict drafts
    • We use the original code as a strong prior
    • This lets us process unchanged code in parallel, making it incredibly fast
  3. Streaming Updates:

    • Changes stream down as they're processed
    • You can render code changes before the full file has finished processing
    • No waiting for large files to process
  4. Context:

    • Re-outputting the original code along with the changes "guides" the model to make the edits in the correct place (a usage sketch follows this list)
    • This is critical for complex codebases with many files
  5. Trained on Frontier Model outputs

    • Frontier models are amazing at reasoning and code generation but fall short at code edits - they were just not trained for this
    • These models have quirks like // ... existing code ..., missing imports, etc.
    • Morph is trained on thousands of these outputs to understand the quirks and handle them gracefully
  6. Speed:

    • Morph is the fastest way to apply code updates from AI
    • It's 2x faster than the next best method (speculative decoding with the Qwen 2.5 Coder 7B model), and more accurate
    • It's 4x faster than re-outputting the entire file with Claude
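
Here is a rough usage sketch of what this looks like end to end. The endpoint, model name, and prompt tags below are placeholders for illustration (assumptions, not the documented API); the point is the shape of the workflow: send the original file plus the frontier model's snippet, and stream the merged file back.

```python
# Hypothetical usage sketch: the endpoint URL, model name, and message format
# are placeholders, not the documented Morph API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.morphllm.com/v1",  # assumed endpoint
    api_key="YOUR_MORPH_API_KEY",
)

original_file = open("billing.py").read()   # the file on disk
ai_update_snippet = "..."                   # the partial snippet from Claude/GPT-4o

# Send both pieces and stream the merged file back so the editor can
# render changes as they arrive.
stream = client.chat.completions.create(
    model="morph-apply",  # placeholder model name
    messages=[{
        "role": "user",
        "content": f"<code>{original_file}</code>\n<update>{ai_update_snippet}</update>",
    }],
    stream=True,
)

merged = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    merged += delta
    print(delta, end="", flush=True)  # render the merge as it streams down
```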

The Technical Magic

Here's how Morph achieves its speed:

  1. Inference Optimizations:

    • Smart inference-side KV caching
    • Speculative decoding with the original code as a strong prior (speculative edits; see the toy sketch after this list)
    • Custom attention heads for code editing
  2. Efficient Token Usage:

    • Minimizes tokens sent to expensive frontier models
    • Uses specialized models for implementation details
    • Enables faster, cheaper code updates
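
To make the speculative-edits idea concrete, here is a heavily simplified toy of the decoding loop. This is not Morph's implementation: `verify` and `generate` stand in for batched model calls, and the resynchronization after a divergence is deliberately naive (it assumes a one-for-one token substitution).

```python
# Toy sketch of speculative edits: the original file is the draft, so long
# runs of unchanged tokens are accepted in one parallel verification pass,
# and the model only decodes token-by-token where the edit diverges.

def speculative_edit(original_tokens, verify, generate, max_new=4096):
    """verify(prefix, draft) -> how many draft tokens the model agrees with;
    generate(prefix) -> the next token when the draft is rejected.
    Both are stand-ins for batched model calls."""
    output, cursor = [], 0
    while cursor < len(original_tokens) and len(output) < max_new:
        draft = original_tokens[cursor:]
        accepted = verify(output, draft)       # one parallel pass over the draft
        output.extend(draft[:accepted])
        cursor += accepted
        if cursor < len(original_tokens):
            output.append(generate(output))    # decode only at the divergence
            cursor += 1  # naive resync: real implementations realign the draft
    return output
```

Because most of a typical edit leaves the file untouched, most tokens are accepted in bulk rather than decoded one at a time, which is where the speedup comes from.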

What this process needs

  • A way to determine which snippets correspond to which files. Right now this responsibility falls on the user (typically via a Claude tool call; a hypothetical sketch follows this list). We are working on an API for this as well.
  • Morph API - Businesses should contact us for dedicated instances and access to larger models.
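
The first point deserves a concrete illustration: a common pattern today is to give the frontier model a file-editing tool, so every snippet arrives already paired with the file it targets. The tool name and schema below are made up for illustration.

```python
# Hypothetical tool definition (Anthropic-style schema) so each suggested
# edit comes back tagged with its target file. Names and fields are made up.
edit_file_tool = {
    "name": "edit_file",
    "description": "Propose an edit to a single file. The snippet may elide "
                   "unchanged code with '... existing code ...' markers.",
    "input_schema": {
        "type": "object",
        "properties": {
            "target_path": {"type": "string", "description": "File to modify"},
            "code_edit": {"type": "string", "description": "The partial snippet"},
        },
        "required": ["target_path", "code_edit"],
    },
}

# Each tool call yields a (target_path, code_edit) pair; the apply model is
# then invoked once per file to merge code_edit into the file at target_path.
```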

The Future of Code Updates

This approach opens up new possibilities:

  • Using frontier models for high-level planning
  • Specialized models for implementation and merging
  • Recursive application of changes at different abstraction levels

Getting Started

Ready to try Morph? Sign up now to get your API key and start using Morph in your workflow.