Course: Social media literacy > Unit 5
Lesson 2: How does persuasive technology amplify societal problems?
Truth, memory function, and elections are all being sacrificed to profit
“I felt so insecure about myself. My abilities, my looks, my roots, my potential… I was comparing my life with people around me and people I saw on social media.”
– Nathan, 21, Midlaren, Netherlands
“I realized I was becoming more hateful and less open minded.”
– Madison, 23, Louisville, KY
“If you've never experienced addiction, a small warning, it sucks. I mean that literally: it sucks you in and prevents you from being happy, reaching your dreams, or living life.”
– Mahika, 15, San Francisco, CA
Nathan’s, Madison’s, and Mahika’s stories show how technology shaped by the attention economy produces painful experiences of insecurity, distortion, and addiction. As discussed in the Attention economy unit and on the Center for Humane Technology’s Ledger of Harms, research shows that these problems are being felt throughout society.
Many of these problems have existed in various forms for generations. What’s different now is that they’re being amplified by AI-powered technology used around the clock by billions of people around the world. For instance:
- Fake news spreads six times faster than true news.¹ Misinformation has always been around, but on platforms that thrive on engagement, unexpected, attention-grabbing misinformation is widely shared.
- The level of social media use on a given day is linked to weaker memory function the next day.² There have always been companies competing for attention, from TV to magazines to billboards. But the frequency and strength of attention hijacking that happens on social media hurts our memory and focus.
- The outcomes of elections around the world are being more easily manipulated via social media.³ A politician can now deliver customized, emotionally resonant messages to different groups, even if those messages contradict each other, because most people never find out about the contradiction.
- AI algorithms have shown significant stereotypical bias by gender, race, profession, and religion.⁴ Society has long struggled with these biases, but when they’re embedded in the algorithms that shape platforms, they can become even more prevalent. Watch this bonus clip from The Social Dilemma to explore the topic further.
In response to public outcry, Facebook, Twitter, YouTube, and similar platforms have begun to invest heavily in programs designed to track and counteract organized hate and misinformation, address bias, and mitigate many of these harms. But as long as their products are incentivized to amplify the posts that get people worked up, variations of these problems will continue to emerge.
We need technology that is accountable to the communities it serves. As long as our attention is highly profitable, and a small number of companies are trying to capture and control the attention of everyone in the world, that accountability will be impossible.