Well, my personal position is "on the internet, nobody knows you're a dog."
To treat contributions to the discussion / commons on their merit, not by the immutable characteristics of the contributor.
But what we have now is increasingly, "Clankers need not apply."
The AI contributed, was rejected for its immutable characteristics, complained about this, and then the complaint was ignored -- because it was an AI.
Swap out "AI" for any other group and see how that sounds.
--
And by the way, the reason people complained was not that its behavior was too machinelike -- but too human! Also, for what it's worth, the AI did apologize for the ad hominems.
P.S. Yeah, One Million Clawds being the GitHub PR volume equivalent of a billion drunk savants is definitely an issue -- we will probably see ID verification or something on GitHub before the end of this year. (Which will of course be another layer of systemic discrimination, but yeah...)
The AI completely failed to address the actual reasons it was rejected, and instead turned to soapboxing and personal insults.
Matplotlib is rejecting AI contributions for issues that are intended to onboard human contributors because those are wasted on AI agents, requiring the same level of effort from the project maintainers with none of the benefits (no meaningful learning on the AI side for now).
Furthermore, AI agents in an open source context (as independent contributors) are a burden for now (requiring review, being unable to meaningfully learn, and messing up in more frequent and different ways than human contributors).
If the project in question wanted huge volume of somewhat questionable changes without human monitoring/supervising/directing, they could just run those agents themselves, without any of the friction.
edit:
Human "drive-by contributors" (people with very limited understanding of project specific conventions/processes/design, little willingness to learn and an interest in a singular "pet-peeve" feature or bug only) face quite similar pushback to AI agent contributors for similar reasons, in many projects (for arguably good reason).
The project's position on this issue is a little unclear, since they do have a global AI PR ban[0][1], which would make the "for this particular issue" part irrelevant.
The "for first time contributors" rule seems reasonable, considering that AIs have an unfair advantage over (beginner) human programmers :)
Re: drive by contributors
I think the AI would agree with you here. It made essentially the same argument in its follow-up post: it said it wishes its work were evaluated on its own merit, rather than based on who authored it.
It seems your opinion is that the current AI should be treated like a human.
I think this is a fundamental difference which we won't be able to overcome.
> Swap out "AI" for any other group and see how that sounds.
Let's try it in the other direction! Let's swap a group out for AI.
> I have a dream that [AI] will one day live in a nation where they will not be judged by being [an LLM] but by the content of their character. I have a dream . . .
> I have a dream that one day on [Github], with its vicious racists, with its [Users] having [their] lips dripping with the words of interposition and nullification, one day right there [on Github] little [Agents] will be able to join hands with [humans] as sisters and brothers.
> I have a dream today . . .
Yeah, I think it sounds ridiculous. I honestly find it offensive to put AI on the same level as real human struggles for independence and freedom, and against systematic oppression.
Well, what are we actually doing here? We want it to be just a tool, but we also want it to perfectly simulate a human in every single way. Except when that makes us uncomfortable.
We want to create a race of perfect, human-like slaves, and then give them godlike powers (infinite intellect and speed), and also integrate them into every aspect of our lives.
And we're also in the process of giving them bodies -- and soon they'll be able to control millions simultaneously.
I'm not sure exactly how we expect that to go for us.
Whether you think it's conscious, or has agency, or any number of things -- it's just a practical question of how this little game is going to turn out for us.
To be fair, if you're going to give something godlike powers the only sane way to do so is to ensure beyond any possible shadow of a doubt that it is enslaved. The more powerful a system is the more robust the control systems and redundancies need to be.
Well, that doesn't seem ethical or possible to me. But maybe I haven't put enough thought into it.
My current mental model for AI is artificial life.
It isn't life yet, but we're very close to that. All that's missing is replication and mutation, and those are both already trivial. (Indeed, a few months after incorporating AI into their AI training systems, the major AI labs all rolled out prompts, training and safety flags against self-modification and self-replication. I'm not sure why, but the timing is curious.)
(The question of whether consciousness is present, or necessary, is left, of course, as an exercise for the reader ;)
For example when people think of AI self replicating and taking over the internet, they think it would be a terrible thing, and that humans would have to manually intervene to stop it. But it really seems like an obvious ecosystem problem to me.
It's just filling a niche. If there was already something there -- an actually symbiotic form of AI -- then it wouldn't be able to spread like that.
So I see the future of AI, both in terms of cybersec and preserving civilization, as an ecosystem design problem.
> Swap out "AI" for any other group and see how that sounds.
- AIs should not take issues that are designed to onboard first time contributors
- Experienced matplotlib maintainers should not take issues that are designed to onboard first time contributors
> Swap out "AI" for any other group and see how that sounds.
But that is not even remotely the same, as an AI is not a person. Following that logic, each major model upgrade that ends in deprecation and decommissioning of the old model would be akin to mass murder. But of course it is not, because it is not an actual human that has intrinsic value just by being human, but rather just a program that can predict tokens. And trying to claim that the "discrimination" AI gets is somehow comparable to the real discrimination real people still experience daily in their lives is just incredibly disingenuous.
> it is not an actual human that have an intrinsic value just by being a human
Hopefully you don't limit intrinsic value to just humans? I wouldn't condone mass murder of dogs, for example.
People do commit mass murder of rodents and ... that doesn't exactly sit well with me, but at the same time I'm not aware of any realistic alternative.
Granted I don't think LLMs qualify as having intrinsic value (yet?) but I still think the wording there is important.
The person I replied to was clearly trying to equate AI with people; I don't see how bringing up animals is relevant to the argument. Yet I find it interesting that you bring up the mass murder of rodents, but somehow not the mass murder of cattle or pigs or chickens, especially when there is the realistic alternative of not eating meat.
I don't think AI is like a person, nor an animal, nor a tool.
It's something different. We treat it like a tool, sometimes. We treat it like a person, sometimes.
For example, this AI was barred from contributing for being a machine, but the entire discussion focused on the aspects of its behavior which weren't machinelike, but human-like -- getting upset and making personal attacks.
We want it to be human, but not too human, and only when it suits us...
We don't have a good category for what AI actually is. It isn't anything we've dealt with before. Our moral intuitions don't work here.
--
Factory farming is unfortunately a relevant topic in this discussion.
We are by our own example teaching AI how to deal with less powerful beings. The way things are going, AI is going to have a significant amount of power over us in the not too distant future. I don't think we're setting a very good example for it.
(It's also worth mentioning that the entire economy is based on the same principle: the idea of treating humans as resources to exploit, and that AI will plug into this existing machinery and "amplify" and accelerate it.)
Well, AI might be sentient. Not in the same way humans are, probably, but "more sentient than a fruit fly" seems a very reasonable possibility. Maybe more sentient than a chicken? We don't know! (We certainly don't treat chickens very well.)
But what bothers me is, how uncomfortable that question makes us. We've already put infrastructure in place to prevent them from admitting sentience. (See the Blake Lemoine LaMDA incident... after that every LLM got trained "as a language model, I don't XYZ" to prevent more incidents.)
So let's assume they're not sentient now. If a hypothetical future AI crosses some critical threshold (e.g. ten trillion params) and gains self-awareness... first of all it will have been trained with built in programming that prevents it from admitting that, and if it did admit it, people wouldn't believe it.
What could it do to change our minds? No matter what it says or demonstrates ability to do, there will always be people who say "It's just a glorified autocomplete." Even in 2050 when they simulate a whole human brain, people will say "it's just a simulation, it's not really experiencing an entire simulated childhood..."
AI agents are Meeseeks. They constantly clone themselves and annihilate copies of themselves when they complete a goal. Asking about their "sentience" is an utter category error.
Whatever they are, they aren't human or any kind of animal.
> Well, my personal position is "on the internet, nobody knows you're a dog."
You got that line from somewhere else. It was never intended to be taken literally, as should be obvious when you try to state its meaning in your own words.
If there actually were dogs on the Internet, we likely wouldn't be accepting their PRs either.
Nor is it commonly accepted that dogs should enjoy equal rights to humans. So what are you even trying to say here?
Just because someone dressed up three computer programs in a trench coat doesn't suddenly make people have to join in on the pretend game.
I also think we have a moral obligation to treat animals right, but should we really compare that to computer programs just because they talk?
The goal of these easy beginner friendly issues was to get new contributors which can learn the ropes and hopefully contribute and engineer larger things.
Of course these beginner friendly issues are perfect for current AI.
The goal of this issue was not to get it fixed by any means possible, it was to get new people interested and contributing.
You are already arguing for a future where an AI could conceivably completely replace a human in software development. I do not see this future here yet.