8 Comments

Thanks for putting this together. It's much easier than attempting a retrospective on a very crowded Discord server.

My biggest takeaway is that releases like this should be much slower. I also don't know of any documentation on a hidden signature in SD (having been loosely involved in the launch), so I'm curious whether you find sources on that. Embedding permanent signatures in generated images is quite challenging, for many reasons.
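For context on how hard "permanent" is, here is a minimal sketch of frequency-domain watermarking using the open-source invisible-watermark package. The file names and payload below are placeholders, and this illustrates the general technique, not any documented SD signature:

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

payload = "SDV1-demo"  # hypothetical payload, not a documented SD signature

# Embed the payload into the image's DWT-DCT frequency coefficients.
bgr = cv2.imread("generated.png")  # placeholder input path
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", payload.encode("utf-8"))
marked = encoder.encode(bgr, "dwtDct")
cv2.imwrite("generated_wm.png", marked)

# Recover it later; the decoder needs the payload length in bits.
decoder = WatermarkDecoder("bytes", len(payload) * 8)
recovered = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
print(recovered.decode("utf-8"))
```

The catch is that a crop, resize, or aggressive re-encode can destroy the payload, which is exactly why truly permanent signatures are so hard.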

that's the biggest horseload of crap i've ever read.

cancel cars, since accidents happen. cancel driverless cars without 100% certainty that accidents won't happen. cancel Photoshop, which lets you generate deepfakes. cancel fire, since it can be used to burn people.

childish argument.

the question you posed in your "interview" re: suicide is a transparent disgrace.

what if someone had worked on Stable Diffusion and killed themselves because their model was censored? what does your math work out to there?

it's blatantly obvious you were fishing for a hot take to discredit the release, and that's a bad look.

What is actually wrong with you? What is this moral panic crap? You're an adult and you're freaking out about art on the internet. For the good of all humanity, go outside, touch grass, and think about where you went wrong in your life.

So SD is a tool that can easily produce disturbing (offensive) content if asked. The main question here seems to be: what do we do about that? Prohibit the tool, build protections against such content into the tool (self-censorship), or allow it but make such content easier to control and avoid accidentally?

The LAION-5B dataset appears to be better (https://waxy.org/2022/08/exploring-12-million-of-the-images-used-to-train-stable-diffusions-image-generator/) than LAION-400M (https://arxiv.org/abs/2110.01963), but ultimately, by design, SD can be trained on any content you want.
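To make that last point concrete, here is a condensed sketch of the standard noise-prediction fine-tuning step, assuming the Hugging Face diffusers layout of the v1-4 checkpoint (dataloader, device placement, and the rest of the training loop are omitted):

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"  # assumed checkpoint id
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

vae.requires_grad_(False)          # only the UNet is trained here
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def train_step(pixel_values, captions):
    # pixel_values: a batch of training images normalized to [-1, 1]
    latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device)
    noisy = noise_scheduler.add_noise(latents, noise, timesteps)
    ids = tokenizer(captions, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    cond = text_encoder(ids)[0]
    pred = unet(noisy, timesteps, cond).sample
    loss = F.mse_loss(pred, noise)  # standard denoising objective
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Point whatever dataset you like at train_step and the model will learn it; nothing in the architecture constrains the content.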

I think it's always good to think things through, and your article definitely captures the various angles on this. However, I suspect that more comprehensive solutions lie on the content-consumption side (user preferences, filters; see the sketch below) rather than on the content-production side.
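As one sketch of what a consumption-side filter could look like (the model choice, blocklist, and threshold here are all assumptions, not any shipping implementation): score images against a user-defined blocklist with CLIP and hide matches.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# User-side filter: the viewer, not the generator, decides what to hide.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def violates_preferences(image, blocked, neutral="a photo", threshold=0.8):
    # Compare the image against the blocked concepts plus a neutral anchor;
    # if probability mass concentrates on a blocked concept, filter it out.
    inputs = processor(text=blocked + [neutral], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return probs[:-1].max().item() > threshold

# Hypothetical usage:
# img = Image.open("generated.png")
# if violates_preferences(img, ["graphic violence", "gore"]): hide(img)
```

The same scoring could drive per-user preferences rather than a single global policy, which is the appeal of handling this on the consumption side.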

Sep 6, 2022 (edited)

"After SD’s release, we may see open source innovation, but this will be at the tooling/UI level not at the model level underpinning this argument now." You are wrong on this point. https://textual-inversion.github.io/ released not a week later and directly addresses the bias at the model level.

Aug 28, 2022 (edited)

While I absolutely support research and development in this area, as well as the development of methods for "democratizing" artistic production, these models are nevertheless fundamentally methods for automating plagiarism. There's no way around it. Such models are utterly incapable of anything without literally centuries of human art making and thus OWE the community of human artists literally millions of hours in back-pay! Haha... Yes, I realize that's absurd. But until our societies develop some deeper sense of the intrinsic value of human effort—in this case, artistic effort in particular—I think we really must be willing to make absurd suggestions for how to restore some kind of balance. The technological wizardry of these models is a piss-fragment of a drop in the ocean-size bucket of human ingenuity and effort that ACTUALLY powers them, and that effort was made entirely by artists. There isn't, and never will be, a single SD-generated (or AI-generated, in general) image that isn't fundamentally indebted to human artistic effort, and this DEBT needs to be acknowledged. I don't know how, but it does.

>Such models are utterly incapable of anything without literally centuries of human art making and thus OWE the community of human artists literally millions of hours in back-pay

Well, it's a good thing that it's free and open-source then...
