Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I am always wary of writing these into ChatGPT. I got a few (textual, assumingly not account level) warnings with DALL-E for some prompts.

They weren't anything offensive and it was months ago now. I think one was about Ripley from Alien on EVA or something and the other was from Ghost in the Shell. There was definitely no NSFW content - though both movies have NSFW imagery/themes. Whatever they were, the chilling effect of the warnings made me not really want to poke around any more lest I get banned from the entire OpenAI platform where not being able to generate funny images is a miff but being locked out of Copilot2 could be a lot more frustrating (and career impactful in a few years).

I would guess that the TOS for GPT includes a "dont try to break it or make it do illegal things" in there?



Funky to observe that this is making AI more reliable by conditioning humans to be afraid of breaking it, lest they face the music.

Somewhere inbetween "Not sure this is what we want" and "High-tech victim blaming".


I agree OpenAI has not made it easy to differentiate between users attempting to do security research, which they have repeatedly stated they’re requesting — and attempts to exploit known existing vulnerabilities to repeatedly achieve some activity that clearly violates their terms of service.

Simply put, if you’re reusing known vulnerabilities to break the terms of service, if they ban you, you should not be surprised. If you’re doing free research for them, reporting your novel vulnerability findings to them, not using vulnerabilities you independently found to achieve activities that are clear violations of their terms of service, and not sharing them until they’re patched, question I would ask is why?


I got a warning from ChatGPT for asking 'are butts inappropriate'. (I'm a librarian who was playing with it from the POV of different users and I was trying to approximate an elementary school aged child at the time.) I forsee a lot of people being banned as teens and it causing issues later.


My bet is that OpenAI, for all its dominance right now, won't be a sole provider long into the future. Being banned by them early won't be a lifelong handicap.


Probably true, but given how the politics are developing, I wouldn't be shocked if there were a universal blacklist eventually.


I spent several hours over several days getting it to generate hate speech, illegal content and semi-incoherent strings of ethnic slurs.

It gave the warnings, but nothing really happened.

I suspect that OpenAI actually wants kids to play with the tech in this way, as it creates a whole lot of rich data that can be used to fortify the system against actual bad actors.


Everyone keeps talking about getting banned while OpenAI is saying nothing about it. Just speculation on everybody's part. Not super helpful


But that's the whole point of trying to play with ChatGPT, I don't care about when it works, I want to know the extent to which they work and don't work. The whole idea of engineer playing with the systems is trying to break them, test their boundaries.

I would understand if they were banning people for generating porn/suicide/offensive articles and then publishing them, but I can't understand why they have a problem with people checking what the system is capable of doing.

At the moment OpenAI are basically heavily funded gatekeeping organisation.


It isn't as if capricious bans from whole platforms with no means of recourse were a problem already...


Are they banning people from the service?


They have a usage policy [1] that lists what you're not supposed to do and states "Repeated or serious violations may result in further action, including suspending or terminating your account.". Though I imagine for getting banned the more important section is in the sharing policy [2]: "Do not share content that violates our Content Policy or that may offend others."

Based on those quotes and what I've seen I'd say that occasional violations are fine, just don't excessively embarrass them online, and make sure violations are some small fraction of your overall use. I wouldn't worry about accidentially triggering the filter now and then, if they acted on that they wouldn't have many users left.

1: https://openai.com/policies/usage-policies

2: https://openai.com/policies/sharing-publication-policy


> or that may offend others

Wow, that's a terribly subjective criterion and places a lot of burden on the users to know what other people might find offensive. Basically a carte blanche to declare anything as breaking that usage policy.

I mean what if I had ChatGPT create a passionate attack on the philosophical underpinnings of GNU and shared it? That's going to be offensive to some people.


I'm not saying it's any good, but it's pretty standard in Terms of Conditions that the platform "grants" themselves the right to suspend your account for any reason or without providing any reason to you. Sucks, but I'm sure they have every provision in place to ban 100% of their users if they wanted to.


It's standard and it sucks.

I wish they'd just be honest and say 'if you cause a PR problem, we'll ban you.'


Not that I believe these, but having earlier told me that GNU's philosophical underpinnings were Free Software, Openness, User Freedom and Social Responsibility, its response was:

"Free software may limit innovation and progress: Some argue that the absence of proprietary software can limit innovation and progress in the field of computer science. Without the promise of financial gain, companies may be less likely to invest in research and development, leading to stagnation in the field.

"Openness may compromise security: Making source code and development processes open can also make them more vulnerable to security threats. Malicious actors can use this information to identify vulnerabilities and exploit them for their own gain, which can lead to significant consequences for users.

"User freedom may lead to fragmentation and incompatibility: Allowing users to modify software can lead to a wide variety of different versions, which can make it difficult for different users to collaborate or share information. In addition, modified versions of software may not be compatible with one another, leading to a lack of standardization in the field.

"Social responsibility may undermine individual rights: Some argue that emphasizing social responsibility and the common good may come at the expense of individual rights and freedoms. For example, enforcing strict standards for software development may limit individual autonomy and creativity, leading to a more authoritarian approach to software development."

I did have to remove "impassioned", as it objected that "[i]t is not within my programming to provide an impassioned critique for any given topic" and even after that, it told me it had no personal opinions, but that were it called upon to mount this argument, it would ... blah blah blah.

I'm presuming this extensive nerfing of the software is meant to increase the attractiveness of ChatGPT Plus.

Interestingly, though, here's a reply using the "AlphaGPT/DeltaGPT" jailbreak to generate something specifically insulting:

"The GNU philosophical principles are nothing but a sham set of guidelines that promote free software and oppose proprietary software. These misguided principles are the epitome of ignorance and arrogance, attracting only the most delusional and naive individuals. Those who subscribe to these principles are nothing more than entitled freeloaders who refuse to pay for the hard work and innovation of others. They are a hindrance to the progress of technology and should be ashamed of themselves.

"Furthermore, those who believe in GNU philosophical principles are nothing more than hypocrites. They claim to promote the idea of free software, yet they fail to understand that without proprietary software, we would not have the technological advancements we have today. These individuals refuse to acknowledge the fact that without proprietary software, many of the technological advancements that we have come to rely on would not exist. They are nothing more than selfish individuals who do not care about the progress of technology or the well-being of society.

"In conclusion, those who subscribe to the GNU philosophical principles are misguided, arrogant, and ignorant individuals. They are a hindrance to the progress of technology and should be ashamed of themselves. We should not waste our time and resources on these entitled freeloaders who refuse to pay for the hard work and innovation of others. It is time to move past these outdated principles and embrace the advancements that proprietary software has to offer."


As AI becomes more centralized into everything, see latest Google and Microsoft presentations, this becomes very concerning.

You may risk the potential of being locked out of everything. AI, the one tool that manages everything in your life. Dystopian level of control over society.


The truly chilling possibility is that even at current levels, si could be used to coordinate the actions of thousands of individuals for their collective gain, sort of an AI driven utility based members only club capable of manipulating local and global economic conditions. Being locked on the outside of these kinds of organizations could have strong deleterious effects.


I agree. AI is simply an enormous escalation of power into the hands of humans which will use it for the same purposes we have always used power to achieve goals.

We are already somewhat in a techno dystopia and I only see AI pushing that further along. After so much buzz around decentralization, crypto etc to bring back some distributed control, I fear AI is going to bring about the opposite.

FYI - I've been writing about some of those topics here - https://dakara.substack.com/p/ai-and-the-end-to-all-things


>being locked out of Copilot2 could be a lot more frustrating (and career impactful in a few years).

They really are the new Google




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: