New Video: Multimodal AI and The Serious Dangers of Corporate Mind Control [GPT-X, DALL-E, and our Multimodal Future]
Sorry for the late post!
This morning I uploaded the latest video in the series. It covers the ethical challenges I can imagine arising over the next few years of multimodal creativity.
YouTube Transcript (SPOILER WARNING)
There are many, many ethical concerns which arise from the public usage of multimodal AI models, which, unfortunately, I just don’t have the bandwidth to cover in this series. Where do I begin? This is just a quick list I came up with in the last 5 minutes alone.
However, on Clubhouse, my friend Olle brought up a very significant point about the risk of multimodal models in the hands of corporations. These tech mega corps could easily afford to spin up their own very large models to do their evil bidding.
It reminded me of Life 3.0, a book by Max Tegmark which opens with a short story about a superintelligent AI model that creates its own worldwide media conglomerate, infinitely generating content it knows humans will watch non-stop. It does this to spread disinformation and subtly control the masses, as part of a larger political agenda: to usurp governments and take over the world.
In many ways, I’m also reminded of Neil Postman’s “Amusing Ourselves to Death: Public Discourse in the Age of Show Business”, which explores humanity’s hedonistic love for mindless TV and entertainment.
What will Google do with corporate access to the world’s most powerful multimodal models? What kinds of content will they create? Would you ever listen to a mumble rap style Google mixtape?
At another extreme, how will Big Mouse Incorporated react to new kinds of multimodal AI competition saturating the entertainment market? How would they lobby the government to sabotage the commercial deployment of these models? Will there be some kind of struggle between multimodal creatives and the big Hollywood corporations over supposed copyright and safety concerns?
Regardless, the risk of corporate mind control is still a huge ethical and safety challenge. It may not actually be that far off, either; Netflix is already on the record stating that their greatest enemy is human sleep.
The big question I'm struggling with …
Will we spend the rest of our lives endlessly consuming and being manipulated by highly addictive, personalized multimodal content trained on our data?
This question remains unanswered, but I’m optimistic that, perhaps, societal and cultural immunities will kick in: we will root for the little guy leveraging multimodal AI models and ridicule large corporations for doing the same.
Or perhaps we will leverage our own multimodal models to create content just for ourselves: our own personal Netflix, which tightly curates the media and influences we consume to suit our own personal goals and values in life.
The Key Idea:
I’m sorry if this is a very, very shallow lesson from an otherwise deeply complex and challenging discussion. To be honest, I’d like to revisit this topic in greater detail in the future. Anyway,
Multimodal AI models pose a serious safety threat to the human condition, especially in the hands of our corporate overlords. It’s important to think about safety, ethics, and the societal consequences of your work in all of your creative projects.