borisdayma / dalle-mini

DALL·E Mini - Generate images from a text prompt

Home Page: https://www.craiyon.com


Improvement: Restriction of nefarious purpose image generation

notJoon opened this issue

Hi,

Recently, people have become increasingly interested in this project. How about applying a filter that rejects image creation requests containing certain words (e.g., violence, abuse, or discriminatory content) to prevent unexpected accidents?
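For concreteness, the kind of filter being proposed could look something like the minimal sketch below. The blocklist, function name, and matching strategy are all hypothetical illustrations, not anything that exists in dalle-mini:

```python
# Hypothetical prompt blocklist sketch; BLOCKED_WORDS and is_prompt_allowed
# are illustrative names, not part of the dalle-mini codebase.
BLOCKED_WORDS = {"violence", "abuse"}  # example entries only

def is_prompt_allowed(prompt: str) -> bool:
    """Reject a prompt if any whitespace-separated token is on the blocklist."""
    tokens = prompt.lower().split()
    return not any(token in BLOCKED_WORDS for token in tokens)

print(is_prompt_allowed("a cat on a sofa"))    # True
print(is_prompt_allowed("violence in a city"))  # False
```

Even a sketch this small hints at the maintenance questions discussed below: who curates the word list, and a naive token match misses trivial rephrasings.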

There are already plenty of solutions out there that police content to that extent. The goal of this project is to put the freedom of AI art in more hands, and yes, it's going to be the wild west for a while, but we'll eventually see a normalization trend.

Censorship is a giant can of worms. Where do you start? Where do you stop? Who says what's too far? Is it the Jesus people, or the Atheists? Perhaps the Satanists should have the reins? Science-only folks then? Who pays for someone to go through and maintain the list?

Who prevents someone from checking out the code, commenting out the censoring function, and using it anyway? I agree that "it can be bypassed" is a weak reason not to do something, but this is literally an open-source project with instructions on how to recreate it locally. It's trivial to find a PR, issue, or Reddit thread removing a limiting function from an open-source project. The objection bears merit in this context.

This is not a topic for this project to tackle. It's a topic that needs to be addressed, but with people, not some codebase that's one of billions. The best we can do is encourage the team to train the model with appropriate data, and encourage awareness and positivity in the community.

I think I had a pretty narrow view. I didn't think of that. Thank you for your criticism.

I don't think your view is narrow. I'd like to understand it more, however. (I commented on something tangential here.)

@notJoon, what would it mean to you if the AI were really capable of generating photorealistic images and there was no filter? Let's say someone is generating violent content - are you worried that the content will stimulate them to go and carry out these actions in real life? Could the argument be made that if they are able to generate their violent imagery using an AI, they are less likely to act their fantasies out in real life?

Here (https://eml.berkeley.edu/~sdellavi/wp/moviescrime08-08-01Forthc.pdf) is a study that claims violent movies shown at movie theaters reduce violence. The theory is that people who may have violent tendencies go watch violent movies instead of going out attacking people (I'm trivializing the finding here, please go read it. Here's an excerpt: "... explained by the self-selection of violent individuals into violent movie attendance, leading to a substitution away from more volatile activities.")

If you assign any level of credibility to that theory, letting people generate violent imagery might actually lower the actual amount of violence in society. There are other topics which may or may not work the same way (e.g., pedophilia). Thus, the goal which (I'm assuming) you want to achieve - less violence in society - might actually be better served by letting people generate the very thing you despise. Imagine giving a steak to a violent dog to keep it busy, preventing it from attacking you (hey, I'm not necessarily saying people generating violent imagery are dogs, but the analogy communicates what I'm trying to say well enough!). What if you could say the same of pedophilia or whatever controversial topic you can come up with?

This is surely a matter of societal cognitive dissonance. On one hand, we may be able to save people from becoming actual victims - save them from being assaulted in various ways - save them from being traumatized. On the other hand, in doing so, we are in a sense giving potential murderers and pedophiles what they want - feeling that we are acting as if we were, I guess, "pedo-positive."

Do you want to feed a violent dog that attacks and even kills random people? No, you want it to not exist. So why would you feed it, sustaining its existence? Because you don't want to be attacked by it. Again, it's a matter of cognitive dissonance. If you're being rational, you don't really care whether the dog exists; you care whether someone is being attacked. I think this holds true for potential murderers and pedophiles as well (along with many, many other things that you, I, or someone else dislike). We want them to not exist - but in essence, what we don't want is for them to prey on other people. If the choice is between a violent dog eating steak and a violent dog attacking people, well... rather easy pick, I would think. The third alternative, a non-existent dog, is not provided for you to choose. Unless you kill it, at which point you become the violent dog instead. Cognitive dissonance ensues.

Does this clarify things a bit for you or help you think about the issue?

DISCLAIMER: I'm sure you can find studies coming to an opposite conclusion, but I still hope my post provides some form of tool for thinking about this.