I'm not sold on the idea - for most projects it makes sense that the author of the PR should ultimately have ownership of the code they're submitting. It doesn't matter if that's AI-generated, generated with the help of other humans, or typed up by a monkey.
> A computer can never be held accountable, therefore a computer must never make a management decision. - IBM Training Manual, 1979
Splitting out AI into its own entity invites a world of issues; AI cannot take ownership of the bugs it writes or the responsibility for the code to be good. That responsibility lies with the human "co-author", if you want to use that phrase.
I agree that accountability should always rest with the human submitting the PR. This isn't about deflecting ownership to AI. The goal is transparency, making it visible how code was produced, not who is accountable for it. These signals can help teams align on expectations, review depth, and risk tolerance, especially for beta or proof‑of‑concept code that may be rewritten later. It can also serve as a reminder to the author about which parts of the code were added with less scrutiny, without changing who ultimately owns the outcome.
> It doesn't matter if that's AI generated, generated with the help of other humans or typed up by a monkey.
However true this should be in principle, in practice there are significant slop issues on the ground that we can't ignore and have to deal with. Context and subtext matter. It's already reasonable in some cases to trust contributions from different people differently based on who they are.
> Splitting out AI into its own entity invites a world of issues; AI cannot take ownership of the bugs it writes
The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.
Isolating the AI is the next best thing. It's still an account that's facing consequences, even if it's anonymous. Yes, there are issues, but there's no perfect solution in a world where we can't have good things anymore.
It seems like something like this should be added to the commit object/message itself, instead of git notes. Maybe as an addition to the Co-Authored-By trailer.
This would make sure this data is part of the repository history (and the commit SHA). Additional tooling can still be used to visualize it.
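For illustration, the commit message itself could carry the information as trailers. A minimal sketch (the identity and the extra trailer below are placeholders, not an established convention):

    Add retry logic to the sync worker

    Rework the backoff handling and extend the tests.

    Co-authored-by: AI Assistant <ai-assistant@example.invalid>
    AI-Attribution: src/sync.c lines 120-180, tab completion

Because trailers live in the commit object, they survive clones and are covered by the commit SHA, and git interpret-trailers or git log --format="%(trailers)" can read them without extra tooling.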
Wouldn't the thing to do be to give them their own account id / email so we can use standard git blame tools?
Why do we need a plugin or new tools to accomplish this?
Don't know why this has been resubmitted and placed on the front of HN. (See the 2-day-old peer comment.) What feature of this post warrants special treatment?
> Wouldn't the thing to do be to give AI its own account id / email so we can use standard git blame tools?
That’s a reasonable idea and something I considered. The issue is that AI assistance is often inline and mixed with human edits within a single commit (tab completion, partial rewrites, refactors). Treating AI as a separate Git author would require artificial commit boundaries or constant context switching. That quickly becomes tedious and produces noisy or misleading history, especially once commits are squashed.
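To make the trade-off concrete, the separate-account approach would mean splitting every mixed change into AI-only and human-only commits, roughly like this (the identity string is a placeholder):

    # stage only the AI-generated hunks and commit them under a dedicated identity
    git add -p
    git commit --author="AI Assistant <ai-assistant@example.invalid>" -m "Generate parser skeleton"

    # stage the human edits separately under your own identity
    git add -p
    git commit -m "Fix edge cases in generated parser"

git blame would then surface the AI identity line by line, but every interleaved edit forces this split, and a later squash or rebase collapses the distinction again.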
> Why do we need a plugin or new tools to accomplish this?
There’s currently no friction‑less way to attribute AI‑assisted code, especially for non–turn‑based workflows like Copilot or Cursor completions. In those cases, human and machine edits are interleaved at the line level and collapse into a single author at commit time. Existing Git and blame tooling can’t express that distinction. This is an experiment to complement—not replace—existing contributor workflows.
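As a rough sketch of the mechanism (the notes ref name and payload shape here are illustrative, not necessarily what the plugin actually emits), git notes let attribution ride alongside a commit without changing its SHA:

    # attach attribution metadata to an existing commit
    git notes --ref=ai-attribution add -m '{"tool": "copilot", "lines": "src/sync.c:120-180"}' <commit-sha>

    # show it when reading history
    git log --notes=ai-attribution -1 <commit-sha>

    # notes refs are not pushed by default and need to be shared explicitly
    git push origin refs/notes/ai-attribution

That last point is also the flip side of the trailer suggestion above: notes can be added or amended after the fact, but they're easy to lose if a team doesn't push and fetch them deliberately.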
PS: I asked for a resubmission and was encouraged to try again :)
Many posts get resubmitted if someone finds them interesting and, if it's been a few days, they generally get "second-chance" treatment. That means they'll be able to make it to the front-page based on upvotes, if they didn't make it the first time.
There are a couple of paths to resubmission: the auto-dedup if it's close enough in time, vs. a fresh post/id. There are also instances where the HN team tilts the scale a bit (typically placing it on the front page, iirc).
I was curious which path this post took; OP answered in a peer comment.
I guess because 99% of generated code will likely need significant edits, you'd never want to commit direct "AI contributions" - you don't commit every time you take something from StackOverflow. Likewise, I wonder if people might start adding credit comments for LLMs?
How much is a solution like this going to cost you per seat?
On one hand, I would imagine companies like GitHub will not charge for agent accounts because they want to encourage their use and see the cost recouped by token usage. On the other hand, Microslop is greedy af and struggling to sell their AI products.
Because AI is really good at generating code that looks good on its own, on both first and second glance. It's only when you notice the cumulative effects of layers of such PRs that the cracks really show.
Humans are pretty terrible at reliable, high-quality code review. The only thing worse is all the other things we've tried.
> Because AI is really good at generating code that looks good on its own, on both first and second glance.
This is a good callout. AI really excels at making things which are coherent but nonsensical. It's almost as if it's a higher-order version of Chomsky's "colorless green ideas sleep furiously".
Because they can produce orders of magnitude more code than you can review. And personally, I don't want to review _any_ submitted AI code if I don't have a guarantee that the person who prompted it has reviewed it first.
It's just disrespectful. Why would anyone want to review the output of an LLM without any more context? If you really want to help, submit the prompt and the LLM's thinking tokens along with the final code. There are only nefarious reasons not to.