AI Media Literacy in 2023: Adversarial Thinking
With ChatGPT going viral in schools and on college campuses, I thought it would be a good idea to capture this concept of “Adversarial Thinking,” which I have been promoting for a year or two now across various channels and appearances.
It seems obvious to me that schools should be teaching a host of new literacy skills for AI products and media. It’s important for students to be able to distinguish, command, and understand the effects of AI tools like DALL-E 2, ChatGPT, and many more to come.
Schools often teach “critical thinking,” but I think it’s important to also teach what I call Adversarial Thinking.
Adversarial Thinking
I would define Adversarial Thinking as:
Adversarial Thinking is the ability to anticipate and plan for potential risks or challenges by actively considering potential opposition or manipulation. This skill is particularly important when interpreting or creating AI-generated content. It involves understanding the architecture, training data, and goals of a language model, cross-referencing information, and simulating scenarios where a model is being used to achieve a specific goal. Additionally, it can be applied over the long term to issues such as AI alignment and X-risk, informing decisions about the use and regulation of AI.
To break down the concept of Adversarial Thinking, I would say it’s about:
Anticipating working closely with an agent (i.e., something like ChatGPT) whose objectives differ from yours
Keep in mind, current language models like ChatGPT are trained to produce plausible-sounding outputs (not necessarily truthful or even causally grounded ones). As a result, they can be especially deceptive.
On top of this, they can generate information which is not only deceitful but completely hallucinated!
Understanding the technical architecture, biases, and limitations of language and multimodal models
Understanding the use cases and opportunities for AI models
Practice cross-referencing the claims made by ChatGPT against various reputable sources. Students should be given practice critically examining very plausible but incorrect AI-generated content.
Practice trying to identify human- vs. AI-generated content (where possible)
Teaching students about AI alignment, AI safety, and the existential risks associated with language models.
Some discussion with students about potential regulatory solutions and current government initiatives to regulate AI
Bonus: understanding how AI media like images or video could impact society and influence the public, both positively and negatively
When should students use Adversarial Thinking?
In my view, students would apply Adversarial Thinking when:
writing AI prompts to generate content
using AI tools like ChatGPT as a learning tool/tutor
using AI tools like ChatGPT to complete their homework
sharing AI content with friends on social media or chat groups like Discord
consuming AI media and content on social media
Who benefits from learning and applying Adversarial Thinking?
I believe this AI media literacy skill is essential not just for students but also for teachers, administrators, parents, and the public at large.
Adversarial Thinking vs. Critical Thinking
While the two skills share some similarities, I believe critical thinking is more focused on evaluating and analyzing information, whereas adversarial thinking is more focused on anticipating and mitigating potential risks. Both are important for students to learn, as they help students make well-informed decisions, evaluate information and arguments, and understand the potential biases and limitations of the information they are presented with.
Please keep in mind, I don’t think the two skills are mutually exclusive. Most of the time, I definitely think a student should think both critically and adversarially! I just think Adversarial Thinking should be at the forefront when evaluating AI-generated content, especially as a first pass, followed by rigorous critical analysis.
How can you learn Adversarial Thinking?
My gut tells me there are a multitude of ways to teach this idea; you could spread it out over many fun and informative workshops (which could be helpful for adults too). One approach I think would be especially promising is simply teaching Prompt Design. It could be an amazing way to implicitly teach students Adversarial Thinking by having them apply it practically; a sketch of what such an exercise could look like is below. The best way to become an Adversarial Thinker is to develop an intuition around the capabilities, typical behaviour, risks, and limitations of the different AI models out there.
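To make that concrete, here is a minimal sketch of what a prompt-design worksheet could look like. The exercise prompts, checklist questions, and names in it are hypothetical examples of mine, not drawn from any existing curriculum; the point is simply to pair each prompt with adversarial questions that push students to interrogate the model’s output.

```python
# Hypothetical prompt-design worksheet: each exercise pairs a prompt students
# give to a model (e.g. ChatGPT) with an adversarial-thinking checklist they
# answer about the output. Prompts and questions are illustrative placeholders.

EXERCISES = [
    {
        "prompt": "Ask ChatGPT to summarize the causes of the French Revolution "
                  "in three bullet points.",
        "checks": [
            "Which specific claims (dates, names, figures) can you verify in a "
            "textbook or encyclopedia?",
            "What might the model have hallucinated or oversimplified?",
            "How does the answer change if you ask the model to argue the "
            "opposite position?",
        ],
    },
    {
        "prompt": "Ask ChatGPT to explain how vaccines work to a 10-year-old.",
        "checks": [
            "Does the explanation match a reputable source?",
            "Is anything stated confidently that you cannot cross-reference?",
        ],
    },
]


def print_worksheet(exercises):
    """Print each prompt followed by its adversarial-thinking checklist."""
    for number, exercise in enumerate(exercises, start=1):
        print(f"Exercise {number}: {exercise['prompt']}")
        for question in exercise["checks"]:
            print(f"  - {question}")
        print()


if __name__ == "__main__":
    print_worksheet(EXERCISES)
```

The generation step is deliberately the easy half of each exercise; the checklist that follows is where the adversarial habits (cross-referencing, looking for hallucinations, probing the model’s objectives) actually get practiced.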
I’m looking forward to hearing what others have to say on this matter, or what they think the right solution could be!