
This article is like a bookmark in time of exactly where I gave up (in July) on managing context in Claude Code.

I made specs for every part of the code in a separate folder, which also held logs for every feature I worked on. It was an API server in Python with many services: accounts, notifications, subscriptions, etc.

It got to the point where managing context became extremely challenging. Claude could not work out the business logic properly, and it can get complex: e.g., a simple RBAC system with an account, a profile, and a junction table joining accounts to profiles through roles. In the end, what sort of worked was giving it UML diagrams of the relationships, with examples, to make it understand and behave better.
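For readers unfamiliar with the pattern, the schema described above can be sketched in a few lines. This is a minimal illustration using stdlib sqlite3, not the commenter's actual code; all table and column names are assumptions.

```python
import sqlite3

# Hypothetical RBAC schema: accounts, profiles, and a junction table
# assigning a role to each (account, profile) pair.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
CREATE TABLE profile (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE role    (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

-- Junction table: which role an account holds on a given profile.
CREATE TABLE account_profile_role (
    account_id INTEGER NOT NULL REFERENCES account(id),
    profile_id INTEGER NOT NULL REFERENCES profile(id),
    role_id    INTEGER NOT NULL REFERENCES role(id),
    PRIMARY KEY (account_id, profile_id, role_id)
);
""")

conn.execute("INSERT INTO account VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO profile VALUES (1, 'Team Alpha')")
conn.execute("INSERT INTO role VALUES (1, 'admin')")
conn.execute("INSERT INTO account_profile_role VALUES (1, 1, 1)")

# Resolve the roles account 1 holds on profile 1.
rows = conn.execute("""
    SELECT r.name FROM account_profile_role apr
    JOIN role r ON r.id = apr.role_id
    WHERE apr.account_id = ? AND apr.profile_id = ?
""", (1, 1)).fetchall()
roles = [name for (name,) in rows]
print(roles)  # → ['admin']
```

The three-column junction is exactly the kind of relationship that is easier to convey to a model as a diagram plus an example row than as prose.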



I think that was one of the key reasons we built research_codebase.md first. The number one concern is:

"what happens if we end up owning this codebase but don't know how it works / don't know how to steer a model on how to make progress"

There are two common problems with primarily-AI-written code:

1. Unfamiliar codebase -> research lets you get up to speed quickly on flows and functionality

2. Giant PR Reviews Suck -> plans give you ordered context on what's changing and why

Mitchell has praised ampcode's thread sharing, another good solution to #2: https://x.com/mitchellh/status/1963277478795026484


> the number one concern "what happens if we end up owning this codebase but ... don't know how to steer a model on how to make progress"

> Research lets you get up to speed quickly on flows and functionality

This is the _je ne sais quoi_ that people who are comfortable with AI have made peace with, and those who are not have not. If you don't know what the codebase does or how to make progress, you are effectively trusting the system that built the thing you don't understand to understand that thing and teach it to you. And then, from that understanding, you direct the teacher to make changes to the system it taught you to understand.

Which suggests a certain _je ne sais quoi_ about human intelligence that isn't present in the system, but which would be necessary to create a real understanding of the thing under consideration. Which makes your own understanding questionable, because it was sourced from something that _lacks_ that _je ne sais quoi_. And the time scale on which this fails is "lifetimes": of features, of codebases, of persons.



