Rely or Adopt — A pre-AI rule for the age of AI-assisted coding

In my experience, it's fine to rely on code. And it's fine to adopt code. The problems come when we split the difference — the middle ground is painful.

I was using this rule before AI assistance, but the rise of AI-assisted coding has only reinforced it; I now believe it's more relevant than ever!

For Example

A couple of years ago, not wanting to reinvent the wheel, a group of us forked the google-research/bert repo (well, actually an IPU-specific fork of it, for Graphcore processors). Our changes were too tightly integrated to treat it as a library and rely on the code. But in order to get going quickly, we didn't properly adopt it as our own either. It wasn't fun. Debugging felt like shooting in the dark. Adding features felt like tiptoeing across thin ice. And stuff broke (a lot)!

In this post, I'll explain the rule: how to rely upon or adopt code, then discuss the "painful middle", before explaining how I think this works with AI assistance.

[AI-generated image (as if you couldn't tell): beautiful mountains on the left, rolling hills on the right, and a river of fire in between.] Beware the middle ground!

Rely or Adopt

When programming, my usual rule is to either rely on or adopt external code, but to avoid the mushy middle ground in between. Rely means I trust the abstraction that wraps the code; I don't need to read the implementation or put in much effort to understand exactly how it works, but I do need to know what it promises to do, through its interface. Adopt means I take the code itself and make it my own; I learn exactly how it works so that I can tweak it, patch it, and treat it as if I'd written it from scratch.

When to rely and when to adopt? First I ask whether I can rely on the code, because relying is cheaper and easier than adopting. The main criteria are:

  1. Does it have a well-defined, tight abstraction boundary for me to understand and trust?
  2. Is it reliable in itself: can I trust it?
  3. Is it free of additional complexity and negative trade-offs?

If the answer to any of these is "no", I'd consider adopting the code, in which case the main questions are:

  1. How much implementation complexity do I need to import?
  2. Can I trust myself to own and maintain this code: do I have the skills required to understand it and verify correctness?

For example, imagine I'm building a music player app. I want to make some neat frequency-spectrum animations by analysing the sound as it plays. I learn that this means I need a Fast Fourier Transform (FFT), which is a non-trivial algorithm in a performance-sensitive codepath. I've found a library that I could add as a dependency, and a simple tutorial implementation that I could adopt for my purposes. Depending on my goals, here are some advantages of each approach:

| Rely | Adopt |
| --- | --- |
| Faster to integrate | Could be combined with visualisation code |
| Use a battle-tested implementation | Simpler build: fewer dependencies |
| Benefit from fancy optimisation tricks | Smaller binary size: only required functionality |
| FFT permits a simple functional abstraction | Learn something new |
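To make the adopt route concrete, here's a minimal sketch of the kind of tutorial-style implementation I might take on (Python for illustration; it assumes power-of-two input lengths, and a real music app would want an optimised version):

```python
import cmath

def fft(xs):
    """Minimal recursive radix-2 Cooley-Tukey FFT.

    Input length must be a power of two; returns the complex spectrum.
    """
    n = len(xs)
    if n == 1:
        return list(xs)
    evens = fft(xs[0::2])
    odds = fft(xs[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
        out[k] = evens[k] + twiddle
        out[k + n // 2] = evens[k] - twiddle
    return out

def dft_naive(xs):
    """O(n^2) direct DFT: a reference for checking the adopted fft."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * j * k / n)
                for j, x in enumerate(xs))
            for k in range(n)]
```

Part of adopting this is trimming it to exactly my requirements and writing my own checks, for instance comparing `fft` against `dft_naive` on random inputs.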

But this article isn't about the choice of whether to rely on code or adopt it. It's about avoiding the middle ground. In this example, a "middle ground" solution would be to copy an FFT implementation into my codebase (as if adopting), skip unit tests (as if relying), not properly read/understand the implementation (as if relying), and hack it around when needed (as if adopting). This is a bad idea 🚩; in trying to get the best of both worlds, I get something that's worse than both.

The painful middle ground

It's fine to rely on code (I trust the interface). It's fine to adopt code (I own the implementation). But it's generally a bad idea to do something in between. Middle-ground solutions often involve copy-paste, forking, or patching/monkey-patching: I am implementation-aware, but I wouldn't say that I really own the implementation.

It's not immediately obvious why this is bad. What's the real difference between calling a library function and copy-pasting that library function into my codebase before calling it, after all? I think the root problem is that it adds maximum complexity to my codebase.

Everything in my source code is there for a reason.

One way in which the middle ground adds complexity is by breaking the assumption that "everything in my source code is there for a reason". While this is rarely 100% true, it's a useful property to aim for. It helps with refactoring, testing and debugging. And bringing code in without adopting it breaks this (one key step of adopting code is making it minimal for my requirements). The result: less confidence in my own code, as I don't know what's necessary and what's dross.

Relying on code (typically via libraries/tools) usually adds the least complexity to my source code, although it can add complexity to the build process, security processes and deployment. When reading my code, I should be able to use a simple mental model of what the dependency does based on its interface, without worrying about how it's actually written, and only dive into the dependency code itself if there's a bug or performance problem in it. Adopting code adds some complexity to my source code, but if I've done a good job, it's close to the fundamental complexity of the algorithm itself: as complex as it needs to be, just as if I'd written it myself.

If I copy-paste, fork or patch someone else's code, I inherit the implementation complexity as per adoption, but I also have a bunch of incidental complexity as well. Since I haven't fully adopted it, I've probably retained a bunch of functionality that I never need. It's probably written in a different coding style. Perhaps it had unit tests that I didn't copy over. It's easy to add complexity to my code anyway; it's very easy to add complexity by bringing in code without properly adopting it as my own. This is technical debt.

In the age of AI assistance...

AI assistance is changing how we write code. It makes it much cheaper to add code to your codebase, and to review and test that code. Does it change the rely or adopt rule? No! In my thinking, AI-generated code adds a new category of code to rely on, and provides tools to help with adopting code, but doesn't change the fact that the middle-ground is bad.

When using AI generation, there are two extremes:

  1. Vibe coding mode — don't read the code, just inspect the outputs.
  2. Reviewer mode — read, understand, and probably rewrite the generated code.

...which look quite like the rely/adopt modes we saw previously! In vibe coding "rely" mode, I ask the AI for an FFT implementation to meet my requirements. It generates one, along with some tests. I read the tests but only skim the code. I rely on the function signature, just as if I'd imported it from a human-authored external library. In reviewer mode, I ask for an implementation, then read it until I understand it, and make sure it looks like code I'd have written myself. I've adopted it as mine.
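In rely mode, what I actually read carefully are the tests, because they pin down the interface contract. A sketch of the kind of checks I'd want to see (here run against a stand-in naive DFT, hypothetically playing the role of the generated implementation):

```python
import cmath

# Stand-in for the AI-generated implementation. In rely mode I'd only skim
# this body; a naive O(n^2) DFT plays the role here.
def generated_fft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * j * k / n)
                for j, x in enumerate(xs))
            for k in range(n)]

# The part I *do* read: tests expressing what the function promises.

# An impulse at t=0 transforms to a flat spectrum of ones.
spectrum = generated_fft([1.0, 0.0, 0.0, 0.0])
assert all(abs(s - 1.0) < 1e-9 for s in spectrum)

# A pure complex tone at bin 1 concentrates all its energy in that bin.
n = 8
tone = [cmath.exp(2j * cmath.pi * t / n) for t in range(n)]
spectrum = generated_fft(tone)
assert abs(spectrum[1] - n) < 1e-9
assert all(abs(spectrum[k]) < 1e-9 for k in range(n) if k != 1)
```

If the tests nail the contract like this, I can trust the signature without owning the body, which is the whole point of rely mode.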

This interpretation adds a new category of code: AI-maintained code that lives in my codebase, but is treated as if it were an external library. For this to work, it must have clean, simple interfaces, just like an external library, and must not be tightly coupled to the truly "owned" parts. An example might be a custom op in a deep learning application, say a fused Triton or CUDA kernel. I can verify the implementation against a PyTorch reference, and use it just like the PyTorch code. It might as well be a one-function library that I import and use: loosely coupled, with a clear responsibility.
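That verification step boils down to "fast implementation must agree with trusted reference". A toy sketch of the pattern in pure Python, with a hypothetical one-pass "fused" softmax standing in for a Triton/CUDA kernel (the names and the numerical scheme here are illustrative, not from any particular library):

```python
import math
import random

def softmax_reference(xs):
    """Straightforward two-pass softmax: the trusted reference."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_fused(xs):
    """Stand-in for an AI-maintained fast op (think: a fused kernel).

    Computes the running max and normaliser in a single pass, then
    normalises. Treated as a black box behind the same interface.
    """
    m, total = float("-inf"), 0.0
    for x in xs:
        new_m = max(m, x)
        total = total * math.exp(m - new_m) + math.exp(x - new_m)
        m = new_m
    return [math.exp(x - m) / total for x in xs]

# The rely-mode contract: the fast op must agree with the reference.
random.seed(1)
for _ in range(100):
    xs = [random.uniform(-10.0, 10.0) for _ in range(16)]
    for a, b in zip(softmax_fused(xs), softmax_reference(xs)):
        assert abs(a - b) < 1e-9
```

As long as a check like this holds, I can treat the fast path exactly like the reference, never reading its internals unless something breaks.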

The Danger Zone

A substantial danger of AI code generation is that it encourages us to blur the lines between rely and adopt in our codebase. Should I have vibe-coded and adopted functions in the same source file? In the same codebase? How do I communicate to future-me which mode a function was written in?

Admission: I'm rubbish at vibe coding. I love AI coding assistants. But I can't vibe code, or indeed orchestrate agents, or whatever it is I'm supposed to be doing these days. Whenever I try to use the vibe coding "rely" pattern on a long-lived codebase, I eventually end up adopting. I've only truly vibe coded throwaway demos and throwaway scripts. I accept, however, that my experience doesn't generalise, which is why I've presented both modes 1 and 2 above (even though I am somewhat stuck in mode 2).

Conclusion

Absolute rules are usually wrong. With this in mind, the rule I've advocated ("rely or adopt, never take the middle ground") will have its exceptions, but it is generally helpful.

I struggle with complexity, and love code to be as simple as possible. In choosing either to rely on code (often refusing to look at the implementation) or to adopt it (insisting on reviewing/rewriting the implementation), I'm helping keep complexity under control in my codebase. Choosing which to do is hard, definitely a "senior software engineer" skill in my book, and I'm not sure how much I can help there.

AI coding assistance doesn't change this, but it does increase the risk of taking the middle ground. When using these tools, we can opt to "rely", verifying outputs, or to "adopt", getting used to reading and rewriting a lot of code. We'd better keep working at how to do this effectively!