1. Go's spec and standard practices are more stable, in my experience. This means the training data is tighter and more likely to work.
2. Go's types give the LLM more information on how to use something, versus Python's dynamic typing.
3. Python has been an entry-level, accessible language for a long time. This means a lot of the code in the training set is by amateurs. Go, ime, is never someone's first language. So you effectively only get code from someone who already has other programming experience.
4. Go doesn't do much 'weird' stuff. It's not hard to wrap your head around.
yeah i love that there is a lot of source data for "what is good idiomatic go" - the model doesn't have it all in its training set, but you can easily collect Go coding standards with deep research or something similar
And then I find models try to write scripts/manual workflows for testing, but Go is REALLY good at doing what you might otherwise do in a bash script, so you can steer the model to build its own feedback loop as a harness in Go integration tests (we do a lot of this in github.com/humanlayer/humanlayer/tree/main/hld)