
The dying art of conversation – has technology killed our ability to talk face-to-face?


Melanie Chan, Senior Lecturer, Media, Communication and Culture, Leeds Beckett University

Disclosure statement

Melanie Chan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Leeds Beckett University provides funding as a member of The Conversation UK.


With FaceTime, Skype, WhatsApp and Snapchat, many people use face-to-face conversation less and less often.

These apps allow us to converse with each other quickly and easily – overcoming distances, time zones and countries. We can even talk to virtual assistants such as Alexa, Cortana or Siri – commanding them to play our favourite songs and films, or to tell us the weather forecast.

Often these ways of communicating reduce the need to speak to another human being, so some of the conversational snippets of our daily lives now take place mainly via technological devices. No longer do we need to talk with shop assistants, receptionists, bus drivers or even coworkers: we simply engage with a screen to communicate whatever it is we want to say.

In fact, in these scenarios, we tend to speak to other people only when the digital technology fails. Human contact occurs, for instance, when we call for an assistant to help us because an item is not recognised at the self-service checkout.

And when we can connect so quickly and easily with others using technological devices and software applications, it is easy to overlook the value of face-to-face conversation. It seems easier to text someone than to meet with them.

Bodily cues

My research into digital technologies indicates that phrases such as “word of mouth” or “keeping in touch” point to the importance of face-to-face conversation. Indeed, face-to-face conversation can strengthen social ties with our neighbours, friends, work colleagues and other people we encounter during our day.

It acknowledges their existence, their humanness, in ways that instant messaging and texting do not. Face-to-face conversation is a rich experience that involves drawing on memories, making connections and mental images, forming associations and choosing a response. It is also multisensory: it’s not just about sending or receiving pre-programmed trinkets such as likes, cartoon love hearts and grinning yellow emojis.


When having a conversation over video, you mainly see the other person’s face as a flat image on a screen. But when we have a face-to-face conversation in real life, we can look into someone’s eyes, reach out and touch them. We can also observe the other person’s body posture and the gestures they use when speaking – and interpret these accordingly. All these factors contribute to the sensory intensity and depth of the face-to-face conversations we have in daily life.

Speaking to machines

Sherry Turkle, professor of social studies of science and technology, warns that when we first “speak through machines, [we] forget how essential face-to-face conversation is to our relationships, our creativity, and our capacity for empathy”. But then “we take a further step and speak not just through machines but to machines”.

In many ways, our everyday lives now involve a blend of face-to-face and technologically mediated forms of communication. But in my teaching and research I explain how digital forms of communication can supplement, rather than replace, face-to-face conversation.

At the same time though, it is also important to acknowledge that some people value online communication because they can express themselves in ways they might find difficult through face-to-face conversation.

Look up from your phone

Gary Turk is a spoken word poet whose poem Look Up illustrates what is at stake when we become entranced by technological ways of communicating at the expense of connecting with others face-to-face.

Turk’s poem draws attention to the rich, sensory aspects of face-to-face communication, valuing bodily presence in relation to friendship, companionship and intimacy. The central idea running through Turk’s evocative poem is that screen-based devices consume our attention while distancing us from the bodily sense of being with others.

Ultimately the sound, touch, smell and observation of bodily cues we experience when having a face-to-face conversation cannot be fully replaced by our technological devices. Communicating and connecting with others through face-to-face discussion is valuable because it is not something that can be edited, paused or replayed.

So next time you’re deciding between human or machine at the supermarket checkout or whether to get up from your desk and walk to another office to talk to a colleague – rather than sending them an email – it might be worth following Turk’s advice and engaging with the human rather than the screen.



How Smartphones Are Killing Conversation

What happens when we become too dependent on our mobile phones? According to MIT sociologist Sherry Turkle, author of the new book Reclaiming Conversation, we lose our ability to have deeper, more spontaneous conversations with others, changing the nature of our social interactions in alarming ways.

Turkle has spent the last 20 years studying the impacts of technology on how we behave alone and in groups. Though initially excited by technology’s potential to transform society for the better, she has become increasingly worried about how new technologies, cell phones in particular, are eroding the social fabric of our communities.

In her previous book, the bestselling Alone Together, she articulated her fears that technology was making us feel more and more isolated, even as it promised to make us more connected. Since that book came out in 2012, technology has become even more ubiquitous and entwined with our modern existence. Reclaiming Conversation is Turkle’s call to take a closer look at the social effects of cell phones and to re-sanctify the role of conversation in our everyday lives in order to preserve our capacity for empathy, introspection, creativity, and intimacy.


I interviewed Turkle by phone to talk about her book and some of the questions it raises. Here is an edited version of our conversation.

Jill Suttie: Your new book warns that cell phones and other portable communication technology are killing the art of conversation. Why did you want to focus on conversation, specifically?

Sherry Turkle: Because conversation is the most human and humanizing thing that we do. It’s where empathy is born, where intimacy is born—because of eye contact, because we can hear the tones of another person’s voice, sense their body movements, sense their presence. It’s where we learn about other people. But, without meaning to, without having made a plan, we’ve actually moved away from conversation in a way that my research was showing is hurting us.

JS: How are cell phones and other technologies hurting us?

ST: Eighty-nine percent of Americans say that during their last social interaction, they took out a phone, and 82 percent said that it deteriorated the conversation they were in. Basically, we’re doing something that we know is hurting our interactions.

I’ll point to a study. If you put a cell phone into a social interaction, it does two things: First, it decreases the quality of what you talk about, because you talk about things where you wouldn’t mind being interrupted, which makes sense, and, secondly, it decreases the empathic connection that people feel toward each other.

So, even something as simple as going to lunch and putting a cell phone on the table decreases the emotional importance of what people are willing to talk about, and it decreases the connection that the two people feel toward one another. If you multiply that by all of the times you have a cell phone on the table when you have coffee with someone or are at breakfast with your child or are talking with your partner about how you’re feeling, we’re doing this to each other 10, 20, 30 times a day.

JS: So, why are humans so vulnerable to the allure of the cell phone, if it’s actually hurting our interactions?

ST: Cell phones make us promises that are like gifts from a benevolent genie—that we will never have to be alone, that we will never be bored, that we can put our attention wherever we want it to be, and that we can multitask, which is perhaps the most seductive of all. That ability to put your attention wherever you want it to be has become the thing people want most in their social interactions—that feeling that you don’t have to commit yourself 100 percent and you can avoid the terror that there will be a moment in an interaction when you’ll be bored.

Actually allowing yourself a moment of boredom is crucial to human interaction and it’s crucial to your brain as well. When you’re bored, your brain isn’t bored at all—it’s replenishing itself, and it needs that down time.

We’re very susceptible to cell phones, and we even get a neurochemical high from the constant stimulation that our phones give us.

I’ve spent the last 20 years studying how compelling technology is, but you know what? We can still change. We can use our phones in ways that are better for our kids, our families, our work, and ourselves. It’s the wrong analogy to say we’re addicted to our technology. It’s not heroin.

JS: One thing that struck me in your book was that many people who you interviewed talked about the benefits of handling conflict or difficult emotional issues online. They said they could be more careful with their responses and help decrease interpersonal tensions. That seems like a good thing. What’s the problem with that idea?

ST: It was a big surprise when I did the research for my book to learn how many people want to dial down fighting or dealing with difficult emotional issues with a partner or with their children by doing it online.

But let’s take the child example. If you do that with your child, if you only deal with them in this controlled way, you are basically playing into your child’s worst fear—that their truth, their rage, their unedited feelings, are something that you can’t handle. And that’s exactly what a parent shouldn’t be saying to a child. Your child doesn’t need to hear that you can’t take and accept and honor the intensity of their feelings.

People need to share their emotions—I feel very strongly about this. I understand why people avoid conflict, but people who use this method end up with children who think that the things they feel aren’t OK. There’s a variant of this, which is interesting, where parents give their children robots to talk to or want their children to talk to Siri, because somehow that will be a safer place to get out their feelings. Again, that’s exactly what your child doesn’t need.

JS: Some studies seem to show that increased social media use actually increases social interaction offline. I wonder how this squares with your thesis?

ST: How I interpret that data is that if you’re a social person, a socially active person, your use of social media becomes part of your social profile. And I think that’s great. My book is not anti-technology; it’s pro-conversation. So, if you find that your use of social media increases your number of face-to-face conversations, then I’m 100 percent for it.

Another person who might be helped by social media is someone who uses it for taking baby steps toward meeting people for face-to-face conversations. If you’re that kind of person, I’m totally supportive. 

I’m more concerned about people for whom social media becomes a kind of substitute, who literally post something on Facebook and just sit there and watch whether they get 100 likes on their picture, whose self-worth and focus becomes dictated by how they are accepted, wanted, and desired by social media.

And I’m concerned about the many other situations in which you and I are talking at a dinner party with six other people, and everyone is texting at the meal and applying the “three-person rule”—that three people have to have their heads up before anyone feels it’s safe to put their head down to text. In this situation, where everyone is both paying attention and not paying attention, you end up with nobody talking about what’s really on their minds in any serious, significant way, and we end up with trivial conversations, not feeling connected to one another.

JS: You also write about how conversation affects the workplace environment. Aren’t conversations just distractions to getting work done? Why support conversation at work?


ST: In the workplace, you need to create sacred spaces for conversation because, number one, conversation actually increases the bottom line. All the studies show that when people are allowed to talk to each other, they do better—they’re more collaborative, they’re more creative, they get more done.

It’s very important for companies to make space for conversation in the workplace. But if a manager doesn’t model to employees that it’s OK to be off of their email in order to have conversation, nothing is going to get accomplished. I went to one workplace that had cappuccino machines every 10 feet and tables the right size for conversation, where everything was built for conversation. But people were feeling that the most important way to show devotion to the company was answering their email immediately. You can’t have conversation if you have to be constantly on your email. Some of the people I interviewed were terrified to be away from their phones. That translates into bringing your cell phone to breakfast and not having breakfast with your kids.

JS: If technology is so ubiquitous yet problematic, what recommendations do you make for keeping it at a manageable level without getting so hooked?

ST: The path ahead is not a path where we do without technology, but of living in greater harmony with it. Among the first steps I see is to create sacred spaces—the kitchen, the dining room, the car—that are device-free and set aside for conversation. When you have lunch with a friend or colleague or family member, don’t put a phone on the table between you. Make meals a time when you are there to listen and be heard.

When we move in and out of conversations with our friends in the room and all the people we can reach on our phones, we miss out on the kinds of conversations where empathy is born and intimacy thrives. I met a wise college junior who spoke about the “seven-minute rule”: It takes seven minutes to know if a conversation is going to be interesting. And she admitted that she rarely was willing to put in her seven minutes. At the first “lull,” she went to her phone. But it’s when we stumble, hesitate, and have those “lulls” that we reveal ourselves most to each other.

So allow for those human moments, accept that life is not a steady “feed,” and learn to savor the pace of conversation—for empathy, for community, for creativity.

About the Author


Jill Suttie

Jill Suttie, Psy.D., is Greater Good’s former book review editor and now serves as a staff writer and contributing editor for the magazine. She received her doctorate of psychology from the University of San Francisco in 1998 and was a psychologist in private practice before coming to Greater Good.


Has technology killed face-to-face communication?


Most of us use our cell phones and computers to inform, make requests of, and collaborate with co-workers, clients and customers. The digital age has connected people across the world, making e-commerce and global networking a reality. But does this reliance on technology also mean we are losing the ability to communicate effectively with each other in person?

Ulrich Kellerer thinks so. He is a leadership expert, international speaker and author. According to Kellerer, “When it comes to effective business communication, over-reliance on technology at work can be a hindrance, especially when it ends up replacing face-to-face, human interaction.”

Carol Kinsey Goman: You were the founder and CEO of Faro Fashion in Munich, Germany. What did you discover about business communication in this role?

Ulrich Kellerer: The digital age has fundamentally changed the nature and function of business communication. It has blurred international boundaries allowing people to connect with each other across the world. Communication is mobilized and instantaneous, and it is easier than ever to access and share information on a global scale.

However, I’ve also seen the negative impact of digital communication on business, both internally and externally. While digital methods themselves are not detrimental – in fact, many devices help us boost productivity and inspire creativity – it is our intensifying relationship with the digital environment that leads to unhealthy habits that not only distract us from the “present,” but also negatively impact communication effectiveness.

Goman: In the midst of a digital age, I believe that face-to-face is still the most productive and powerful communication medium. An in-person meeting offers the best opportunity to engage others with empathy and impact. It builds and supports positive professional connections that we can’t replicate in a virtual environment. Would you agree?

Kellerer: Connection is critical to building business relationships. Anyone working in sales knows that personal interactions yield better results. According to Harvard research, face-to-face requests were 34 times more likely to garner positive responses than emails. Communication in sales is complicated. It requires courtesies and listening skills that are simply not possible on digital platforms.

Interpersonal communication is also vital for a business to function internally. While sending emails is efficient and fast, face-to-face communication drives productivity. In a recent survey, 67% of senior executives and managers said their organization’s productivity would increase if superiors communicated face-to-face more often.

Goman: In my research on the impact of body language on leadership effectiveness I’ve seen the same dynamic. In face-to-face meetings our brains process the continual cascade of nonverbal cues that we use as the basis for building trust and professional intimacy. As a communication medium, face-to-face interaction is information-rich. People are interpreting the meaning of what you say only partially from the words you use. They get most of your message (and all of the emotional nuance behind the words) from vocal tone, pacing, facial expressions and body language. And, consciously or unconsciously, you are processing the instantaneous nonverbal responses of others to help gauge how well your ideas are being accepted.

Kellerer: While digital communication is often the most convenient method, face-to-face interaction is still by far the most powerful way to achieve business goals. Having a personal connection builds trust and minimizes misinterpretation and misunderstanding. With no physical cues, facial expressions/gestures, or the ability to retract immediately, the risk of disconnection, miscommunication, and conflict is heightened.

Goman: Human beings are born with the innate capability to send and interpret nonverbal signals. In fact, our brains need and expect these more primitive and significant channels of information. When we are denied these interpersonal cues, the brain struggles and communication suffers. In addition, people remember much more of what they see than what they hear -- which is one reason why you tend to be more persuasive when you are both seen and heard.

In addition to eye contact, gestures, facial expressions and body postures, another powerful nonverbal component (and one that comes solely in face-to-face encounters) is touch. We are programmed to feel closer to someone who’s touched us. For example, a study on handshakes by the Income Center for Trade Shows showed that people are twice as likely to remember you if you shake hands with them.

Kellerer: Business leaders must create environments in which digital communication is used strategically and personal communication is practiced and prioritized. Technology is a necessary part of business today but incorporating the human touch is what will give businesses the competitive edge in the digital marketplace.

Goman: Agreed!

Carol Kinsey Goman, Ph.D.


Does social media kill communication skills?

Being of "that certain age," I notice changes in society and trends that sometimes are beneficial and others that seem to impair our human nature.

Conversing and speaking in complete sentences seems to have gone by the wayside for some. Why? This question, which may seem inane to some, begs to be pondered and answered.

With social media consuming so much of our time, conversing with one another face to face has suffered. The busyness of our lives and overscheduling also take a toll on the family. The percentage of families that sit down together to share dinner is steadily declining.

The dinner table used to be a place where those gathered shared their news of the day. Often it was where families discussed current events and had actual conversations. Fast food meant dinner was ready at the allotted time, not a bag of greasy takeout to be devoured in record time.

Email, Facebook, Instagram, Twitter and a host of other ways to communicate have often taken the place of talking and sharing. Yes, they are fast, available at all hours and easy, but they should never take the place of verbal discussions.

Listening to some people as they try to put together a complete sentence is uncomfortable and can be agonizing. Lots of "you knows" and "umm" and "well" as they try to put together a string of words that make sense.

Maybe it's time to practice good verbal skills by requiring a class specifically addressing conversation skills. The main requirement would be to put away the cell phones and learn about eye contact, how to listen and how to gather your thoughts into complete sentences.

Social media has its place but should never be an excuse to replace human contact. Tonight while having dinner, turn off the phones and begin the tradition of face-to-face communication. Take turns sharing your joys and concerns and practice the forgotten art of conversation.

ABOUT THE AUTHOR

Deb McMahon is a retired educator and political activist living in Des Moines.

Is the internet killing off language?

Emojis and micro-blog slang are changing the way we communicate


The internet is changing the way we communicate. LOL, awks, amazeballs, BRB, the use of emojis and emoticons – and even written facial expressions such as 'sad face' – have all become standard in digital communications. So ingrained, in fact, that they're changing the way we write and even talk.

"People are becoming less concerned with grammar, spelling and sentence structure, and more concerned with getting their message across," says Gavin Hammar, CEO and founder of Sendible, a UK-based social media dashboard for business.

There's no doubt that the consumption of abbreviated digital content is having a huge effect on language. "Over the last five years attention spans have shortened considerably, which is reflected in the contracted forms of language we see in social media," says Robin Kermode, founder of communications coaching consultancy Zone2 and author of the book 'Speak So Your Audience Will Listen: A practical guide for anyone who has to speak to another human being'.

However, some think that the internet has made us better communicators since we increasingly use much more streamlined language. "To get a message across using Twitter for example, it must be concise and must conform to the tone used there, which includes abbreviations, acronyms and emoticons," says Hammar.

What about emoticons and emojis?

The fastest-growing 'new language' in the world is that of emoticons (faces) and emojis (images of objects, which hail from Japan) – one of the biggest changes caused by digital communications. "Facial expressions, visual presence and body language have always been vital to being a confident speaker, but now emojis are blurring the lines between verbal and written communication," thinks Kermode, who adds that cavemen had early versions of emojis on the sides of their caves. "Pictures, cartoons or emojis are 'shortcuts' so we can be clear about what our message really means."

If you mainly use emojis, why not get a keyboard based around smiley faces and cartoon icons? That's exactly what Swyft Media recently created, and while it's more of a PR stunt the keyboards of the future will probably contain at least some emojis.

How emojis add meaning

Emoticons and emojis are arguably more meaningful than slang and shorthand, which can be too easily misunderstood. "I once witnessed a girl being dumped in a text, which consisted of a message with just five letters, 'U R MY X' – linguistically economic, but emotionally harsh," says Kermode. Trouble is, the sender had actually meant 'YOU ARE MINE. X'. "If he'd added three emojis – like a smiley face, a heart and a wedding ring – he might now be happily married!"

The same goes for a statement such as "I NEED TO SPEAK TO YOU RIGHT NOW", which needs a qualifying emoticon or emoji to give it meaning. "It could signal an angry meeting or a passionate meeting but add a coffee cup, a big smiley face or an angry face and it becomes clear what's really going on," says Kermode.

They may be derided by traditionalists, but emoticons and emojis used to describe mood are the body language add-on that the written word has always lacked. In most instances, these icons represent language evolution and progress, not regression.

Mood emoticons are the body language add-on the written word has never had

The web's positive effects on writing

Some think that the internet is actually sharpening up writing skills, particularly of professional writers, creating new niches and specialisms. "[The internet] lays bare the disparity between good and bad copy, which has resulted in writers and editors becoming better educated and more aware of global grammatical standards, raising the bar overall," says Paul Parreira, founder of digital content creation agency Company Cue, which has a network of 800 highly skilled writers and programming experts working in 32 languages.

He thinks that the internet is also driving language to become more globalised, with Americanisms such as 'road trip', 'what's up?' and 'like' (used as a conversational link) now ingrained into what's fast being called 'International English' or ELF (English as a Lingua Franca). It has nothing to do with where the language originated, and often those who use a basic form of ELF online can understand each other far more easily than native English speakers can.

However, online English has also spawned new specialisms and skills among professional, often native English-speaking writers. "Writing has become more idiosyncratic and unique," says Parreira, "creating new breeds of writers – those that specialise in short form and those that focus on long form … it's rare to find writers that can excel in both."


Jamie is a freelance tech, travel and space journalist based in the UK. He’s been writing regularly for TechRadar since it was launched in 2008 and also writes regularly for Forbes, The Telegraph, the South China Morning Post, Sky & Telescope and the Sky At Night magazine, as well as other Future titles T3, Digital Camera World, All About Space and Space.com. He also edits two of his own websites, TravGear.com and WhenIsTheNextEclipse.com, which reflect his obsession with travel gear and solar eclipse travel. He is the author of A Stargazing Program For Beginners (Springer, 2015).


Is Technology Killing Human Emotion?: How Computer-Mediated Communication Compares to Face-to-Face Interactions

  • September 2019
  • Conference: Mensch und Computer 2019

Anneli Eddy at Fachhochschule Salzburg


Combating Hate Speech Through Counterspeech

Daniel Jones

Susan Benesch

From misogyny and homophobia, to xenophobia and racism, online hate speech has become a topic of greater concern as the Internet matures, particularly as its offline impacts become more widely known. And with hate fueled tragedies across the US and New Zealand, 2019 has seen a continued rise in awareness of how social media and fringe websites are being used to spread hateful ideologies and instigate violence.

Through the Dangerous Speech Project, Berkman Klein Faculty Associate Susan Benesch studies the kinds of public speech that can catalyze intergroup violence, and explores the efforts to diminish such speech and its impacts while protecting the rights of freedom of expression. Like the Center’s own work examining the legal, platform-based, and international contours of harmful speech, the Dangerous Speech Project brings new research and framing to efforts to reduce online hate and its impacts.

This work often involves observing and cataloging extremely toxic speech on social media platforms, including explicit calls for violence against vulnerable populations around the world. But dangerous speech researchers also get to interact with practitioners of “counterspeech” – people who use social media to battle hateful and bigoted messaging and ideology.

The Dangerous Speech Project’s Senior Researcher Cathy Buerger convened a group of counterspeech practitioners at RightsCon 2019 to talk about the most effective counterspeech efforts. Here she reflects on these efforts, and how activists can better combat hate in online spaces and prevent its offline impacts.  

How has social media facilitated the proliferation of hatred/harmful speech? Do you think there is more hate today as a result of Internet-enabled communication, or is it just more visible and noticeable?

It’s hard to say if there is more hate in the world today or not. My instinct is no. At the Dangerous Speech Project, we’ve examined the speech used before incidents of mass violence in various historical periods, and the rhetorical patterns are remarkably similar. The hate that we see today is certainly nothing new.

But there are some new factors that impact the spread of this hate. First, social media makes it relatively simple to see speech produced in communities outside of one’s own. I’m an anthropologist, so I’m always thinking about how communities set and enforce norms. Different communities have divergent opinions about what kind of speech is considered “acceptable.” With social media, speech that might be seen as acceptable by its intended audience can easily be discovered and broadcast to a larger audience that doesn’t share the same speech norms. That audience may attempt to respond through counterspeech, which can be a positive outcome. But even if that doesn’t happen, at the very least, this speech becomes more visible than it otherwise would have been.

A second factor that is frequently discussed is how quickly harmful messages on social media can reach a large audience. This can potentially have horrifying consequences. Between January 2017 and June of 2018, for example, 33 people were killed by vigilante mobs in India following rumors that circulated on WhatsApp suggesting that men were coming to villages in order to kidnap children. The rumors were, of course, false. In an effort to battle these kinds of rumors, WhatsApp has since placed a limit on how many times a piece of content can be forwarded.
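A forwarding cap of this kind can be illustrated with a minimal sketch. Everything below – the `Message` class, the `forward` function and the cap of 5 – is a hypothetical illustration of the idea, not WhatsApp’s actual implementation:

```python
# Hypothetical sketch of a per-message forwarding cap; names and the
# cap value are illustrative, not WhatsApp's real code.

FORWARD_LIMIT = 5  # illustrative cap on how many times a message may be forwarded

class Message:
    def __init__(self, text, forward_count=0):
        self.text = text
        self.forward_count = forward_count

def forward(message, recipients):
    """Forward a message to recipients, refusing once the cap is reached."""
    if message.forward_count >= FORWARD_LIMIT:
        raise PermissionError("forwarding limit reached")
    copy = Message(message.text, message.forward_count + 1)
    return [(recipient, copy) for recipient in recipients]

# A rumor can hop through five chats; the sixth forward is refused.
msg = Message("unverified rumor")
for _ in range(FORWARD_LIMIT):
    msg = forward(msg, ["next chat"])[0][1]
try:
    forward(msg, ["next chat"])
except PermissionError:
    print("blocked")
```

Even a crude cap like this slows viral spread: each copy carries its hop count, so any single forwarding chain dies out after a fixed number of hops.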

These are just two of the ways that technology is affecting the spread and visibility of hateful messages. We need to understand this relationship, and the relationship between online speech and offline action, if we are going to develop effective policies and programs to counter harmful speech and prevent intergroup violence. 

You've spoken with a number of folks who work online to counter hateful speech. What are some of your favorite examples?

There are so many fascinating examples of people and organizations working to counter online hateful speech. One of my favorites is #Jagärhär, a Swedish group that collectively responds to hateful posts in the comment sections of news articles posted on Facebook. They have a very specific method of action. On the #Jagärhär Facebook page, group administrators post links to articles with hateful comments, directing their members to counterspeak there. Members tag their posts with #Jagärhär (which means, “I am here”), so that other members can find their posts and like them. Most of the news outlets have their comments ranked by what Facebook calls “relevance.” Relevance is, in part, determined by how much interaction (likes and replies) a comment receives. Liking the counterspeech posts, therefore, drives them up in relevance ranking, moving them to the top and ideally drowning out the hateful comments. 
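The dynamic the group relies on can be sketched in a few lines of Python. The scoring function and its weights are assumptions for illustration – Facebook’s actual “relevance” ranking is proprietary – but the principle is the same: interaction pushes a comment up the ordering.

```python
# Toy engagement-based comment ranking; the weights are invented,
# not Facebook's actual relevance formula.

def relevance(comment):
    # Likes and replies both count; replies weighted a bit higher.
    return comment["likes"] + 2 * comment["replies"]

def rank_comments(comments):
    # Most "relevant" (most interacted-with) comments first.
    return sorted(comments, key=relevance, reverse=True)

comments = [
    {"author": "troll", "likes": 3, "replies": 1},
    {"author": "counterspeaker", "likes": 40, "replies": 5},
]

# Coordinated liking lifts the counterspeech comment to the top.
for c in rank_comments(comments):
    print(c["author"], relevance(c))
```

With 50 points against 5, the counterspeech reply is shown first – which is exactly the effect #Jagärhär members create by liking each other’s posts.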

The group is huge – around 74,000 members, and the model has spread to 13 other countries as well. The name of each group is “#iamhere” in the local language (for example, #jesusilà in France and #somtu in Slovakia). I like this example because it demonstrates how powerful counterspeech can be when people work together. In the bigger groups (the groups range in size from 64 in #iamhereIndia to 74,274 in #Jagärhär), their posts regularly have the most interaction, and therefore become the most visible comments.

One of the questions that I am interested in right now is how counterspeaking as a group may serve as a sort of protective factor for group members. I’ve interviewed lots of counterspeakers, and most of them talk about how lonely and emotionally difficult the work is – not to mention the fact that they often become the targets of online attacks themselves. In the digital ethnography that I am working on right now, members of #iamhere groups frequently mention how working as a group makes them feel braver and more able to sustain their counterspeech work over time. 

I’m also very interested in efforts that try to counter hateful messages by sharing those messages more widely. The Instagram account Bye Felipe, for example, is dedicated to “calling out dudes who turn hostile when rejected or ignored.” The account allows users to submit screenshots of conversations they have had with men – often on dating sites – where the man has lashed out after being ignored or rejected. I interviewed Alexandra Tweten, who founded and runs the account, and she told me that although she started it mostly to make fun of the men in the interactions, she quickly realized that it could be a tool to spark a larger conversation about online harassment against women. A similar effort is the Twitter account @YesYoureRacist. Logan Smith, who runs the anti-racism account, retweets racist posts that he finds to his nearly 400,000 followers in an effort to make people aware that such racism exists.

Broadcasting hateful comments to a larger audience may seem somewhat counterintuitive because we are frequently so focused on deleting content. But by drawing the attention of a larger audience to a particular piece of speech, these efforts can serve as an educational tool – for example, showing men the type of harassment that women face online. By connecting a piece of speech with a larger audience, it is also likely that at least some members of that new audience will not share the same speech norms as the original author. Sometimes, this is primarily a source of amusement for the new audience. At other times, though, it can be a quick way to inspire counterspeech responses from members of that new audience.

Why do you think these efforts are effective? What can folks who work in counterspeech efforts learn from one another? 

Effectiveness is an interesting issue. The first thing we have to ask is “effective at doing what?” One of the findings from my research on those who are working on countering hatred online is that they don’t all have the same goal. We often think that counterspeakers are primarily trying to impact the behavior or the views of the hateful speakers to whom they are responding. But of the 40 or so people that I have interviewed who are involved in these efforts, most state that they are actually trying to do something different. They are trying to reach the larger reading audience or have a positive impact on the discourse within particular online spaces. The strategies that you use to accomplish goals like that are going to be very different from those you might use if you are trying to change the mind or behavior of someone posting hateful speech. The projects that are most effective are those that clearly know their audience and goals and choose their strategies accordingly.

Last November, we hosted a private meeting in Berlin of people who use various methods to respond to hateful or harmful speech online. This group of 15 counterspeakers from around the world discussed counterspeech best practices and the challenges that they face in their work. After the workshop, we heard from many of them about how useful the experience had been because they no longer felt as isolated. The work of responding to hatred online can be lonely work. Although some people do this work in groups – like those involved in the #iamhere groups – most people do it by themselves. So, of course, counterspeakers can learn a lot from each other in terms of what kinds of strategies might work in different contexts, but there is also tremendous potential benefit in getting to know one another simply because it reminds them that they are not alone in their efforts. 

What did your group at RightsCon learn from one another? Did any surprising or exciting ideas emerge?

One of the best parts about RightsCon is that it brings people together from different sectors, from all over the world, who are working on issues related to securing human rights in the digital age. During our session, which focused on online anti-hatred efforts, one of the topics that was raised by both the session participants and several audience members was just how hard this work can be – the toll it can take on a person’s personal and emotional life. At one point, an audience member asked Logan Smith (of @yesyoureracist) whether he had ever received a death threat. He answered “oh yeah.” People laughed, but it also really brought home the point. This is really tough work. It’s emotionally demanding. It can make you the target of online attacks. One seldom gets that perfect moment where someone who had posted something hateful says “oh, you’re right. Thank you so much for helping me see the light.” An online anti-hatred effort is successful if it can reach its goal, whether that goal is to reach the larger reading audience or to change the mind or behavior of the person posting hateful comments. But to do any of those things, it has to be sustainable. So I think that learning more about what helps counterspeakers avoid burnout and stay active is an important piece of better understanding what makes efforts effective in the long run.




Protecting Freedom of Expression Online


Questions around freedom of expression are once again in the air. While concern around the Internet’s role in the spread of disinformation and intolerance rises, so too do worries about how to maintain digital spaces for the free and open exchange of ideas. Within this context, countries have begun to re-think how they regulate online speech, including through mechanisms such as the principle of online intermediary immunity, arguably one of the main principles that has allowed the Internet to flourish as vibrantly as it has.

What is online intermediary immunity?

Laws that enact online intermediary immunity provide Internet platforms (e.g., Facebook, Twitter, YouTube) with legal protections against liability for content generated by third-party users.

Simply put, if a user posts illegal content, the host (i.e., intermediary) may not be held liable. An intermediary is understood as any actor other than the content creator. This includes large platforms such as Twitter where, for example, if a user posts an incendiary call to violence, Twitter may not be held liable for that post. It also holds for smaller platforms, such as a personal blog, where the blogger is protected from being held liable for comments left by readers. The same is true for the computer servers hosting the content.

These laws have multiple policy goals, ranging from promoting free expression and information access, to encouraging economic growth and technical innovation. But balancing these objectives against the risk of harm has proven complicated, as seen in debates about how to prevent online election disinformation campaigns, hate speech, and threats of violence.

There is also a growing public perception that large-scale Internet platforms need to be held accountable for the harms they enable. With the European Union reforming its major legislation on Internet regulation, the ongoing debate in the United States regarding similar reforms, and the recent January 6 attack on Capitol Hill, it is a propitious time to examine how different jurisdictions implement online intermediary liability laws and what that means for ensuring that the Web continues to allow deliberative democracy and civic participation.

The United States

Traditionally, the United States has provided some of the most rigorous protections for online intermediaries under section 230 of the Communications Decency Act (CDA) [.pdf], which bars platforms from being treated as the “publisher or speaker” of third-party content and establishes that platforms moderating content in good faith maintain their immunity from liability. However, there are increasing calls on both the left and right for this to change.

Republican Senator Josh Hawley of Missouri introduced two pieces of legislation in 2020 and 2019 respectively ― the Limiting Section 230 Immunity to Good Samaritans Act and the Ending Support for Internet Censorship Act ― to undercut the liability protections provided for in section 230 CDA. If passed, the Limiting Section 230 Immunity to Good Samaritans Act would limit liability protections to platforms that use value-neutral content moderation practices, meaning that content would have to be moderated with absolute neutrality, free from any set of values, to be protected. However, this is an unrealistic standard, given that all editorial decisions involve choices based on value, be it merely a question of how to sort that content (e.g., chronologically, alphabetically, etc.) or the editor’s own personal interests and taste. The Ending Support for Internet Censorship Act also seeks to remove liability protections for platforms that curate political information, the vagueness of which risks aggressively demotivating platforms from hosting politically sensitive conversations and chilling free speech online.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act [.pdf], introduced by Democratic Senator Brian Schatz of Hawaii and Republican Senator John Thune of South Dakota in 2020, would require platforms to disclose their content moderation practices, implement a user complaint system with an appeals process, and remove court-ordered illegal content within 24 hours. While a step in the right direction towards greater platform transparency, PACT could still endanger free speech on the Internet; it might motivate platforms to remove any content that might be found illegal rather than risk the costs of litigation, thereby taking down legitimate speech out of an abundance of caution. PACT would also entrench the already overwhelming power and influence of the largest platforms, such as Facebook and Google, by imposing onerous obligations that small-to-medium-sized platforms might find difficult to respect.

During his presidential campaign, Joe Biden even called for the outright repeal of section 230 CDA, with the goal of holding large platforms more accountable for the spread of disinformation and extremism. This remains a worrisome position and something that President Biden should reconsider, given the importance of section 230 CDA for prohibiting online censorship and allowing the Internet to flourish as an arena for public debate.

Questions around how to ensure the Internet remains a viable space for freedom of expression are particularly important in Canada, which does not currently have domestic statutory measures limiting the civil liability of online intermediaries. Although proposed with the laudable goals of combating disinformation, harassment, and the spread of hate, legislation that increases restrictions on freedom of speech, such as the reforms described above, should not be adopted in Canada. These types of measures risk incentivizing platforms to actively engage in censorship due to the prohibitive costs associated with the nearly impossible feat of preventing all objectionable content, especially for smaller providers. Instead, what is needed is national and international legislation that balances protecting users against harm while also safeguarding their right to freedom of expression.

One possible model forward for Canada can be found in the newly signed free trade agreement between Canada, the United States, and Mexico, known as the United States–Mexico–Canada Agreement (USMCA). Article 19.17 USMCA mirrors section 230 CDA by shielding online platforms from liability relating to content produced by third party users, but a difference in wording [1] suggests that under USMCA, individuals who have been harmed by online speech may be able to obtain non-monetary equitable remedies, such as restraining orders and injunctions.

It remains to be seen how courts will interpret the provision, but the text leaves room to allow platforms to continue to enjoy immunity from liability, while being required to take action against harmful content pursuant to a court order, such as taking down the objectionable material. Under this interpretation, platforms would be free to take down or leave up content based on their own terms of service, until ordered otherwise by a court. This would leave ultimate decision-making with courts and avoid incentivizing platforms to overzealously take down content out of fear of monetary penalties.

USMCA thus appears to balance providing redress for harms with protecting online platforms from liability related to user-generated content, and provides a valuable starting point for legislators considering how to reform Canada’s domestic online intermediary liability laws.

Going forward

The Internet has proven itself to be a phenomenally transformative tool for human expression, community building, and knowledge dissemination. That power, however, can also be used for the creation, spread, and amplification of hateful, anti-democratic groups and ideas.

Countries are now wrestling with how to balance the importance of freedom of expression with the importance of protecting vulnerable groups and democracy itself. Decisions taken today on how to regulate online intermediary liability will play a crucial role in determining whether the Web remains a place for the free and open exchange of ideas, or becomes a chilled and stagnant desert.

Although I remain sympathetic to the legitimate concerns that Internet platforms do too little to prevent their own misuse, I fear that removing online intermediary liability protections will result in the same platforms having too much power and incentive to monitor and censor speech, something that risks being equally harmful.

There are other possible ways forward. We could take the roadmap offered by article 19.17 USMCA. We could prioritize prosecuting individuals for unlawful behaviour on the web, such as peddling slander, threatening bodily violence or fomenting sedition. Ultimately, we need nuanced solutions that balance empowering freedom of expression with protecting individuals against harm. Only then can the Internet remain a place that fosters deliberative democracy and civic participation.

[1] CDA 230(c) provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” USMCA 19.17.2 instead provides that “No Party shall adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability [emphasis added] for harms related to information stored, processed, transmitted, distributed, or made available by the service, except to the extent the supplier or user has, in whole or in part, created or developed the information.”

About the writer

Rachel Zuroff, BCL/LLB’16

She resides in Montreal, where she continues to pursue her interests in human rights and legal pluralism.


Section 230, the internet law that’s under threat, explained

The pillar of internet free speech seems to be everyone’s target.

by Sara Morrison


You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created almost 30 years ago to protect internet platforms from liability for many of the things third parties say or do on them.

Decades later, it’s never been more controversial. People from both political parties and all three branches of government have threatened to reform or even repeal it. The debate centers around whether we should reconsider a law from the internet’s infancy that was meant to help struggling websites and internet-based companies grow. After all, these internet-based businesses are now some of the biggest and most powerful in the world, and users’ ability to speak freely on them bears much bigger consequences.

While President Biden pushes Congress to pass laws to reform Section 230, its fate may lie in the hands of the judicial branch, as the Supreme Court is considering two cases — one involving YouTube and Google, another targeting Twitter — that could significantly change the law and, therefore, the internet it helped create.

Section 230 says that internet platforms hosting third-party content are not liable for what those third parties post (with a few exceptions). That third-party content could include things like a news outlet’s reader comments, tweets on Twitter, posts on Facebook, photos on Instagram, or reviews on Yelp. If a Yelp reviewer were to post something defamatory about a business, for example, the business could sue the reviewer for libel, but thanks to Section 230, it couldn’t sue Yelp.

Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark. A repeal of Section 230 wouldn’t just affect the big platforms that seem to get all the negative attention, either. It could affect websites of all sizes and online discourse.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around, and anyone, including impressionable children, could easily find and see it. This alarmed some lawmakers. In an attempt to regulate this situation, in 1995 lawmakers introduced a bipartisan bill called the Communications Decency Act, which would extend laws governing obscene and indecent use of telephone services to the internet. This would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1994, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. The court ruled in Stratton Oakmont’s favor, saying that because Prodigy moderated posts on its forums, it exercised editorial control that made it just as liable for the speech on its platform as the people who actually made that speech. Meanwhile, Prodigy’s rival online service, Compuserve, was found not liable for a user’s speech in an earlier case because Compuserve didn’t moderate content.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks, and mindful of the Prodigy decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment to CDA that said “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden told Vox’s Emily Stewart in 2019. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

As the beginning of Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These are considered by some to be the 26 words that created the internet, but the law says more than that.

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that their Constitutional right to free speech has been violated — doesn’t apply. Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions about transmitting porn to minors were immediately challenged by civil liberty groups and struck down by the Supreme Court, which said they were too restrictive of free speech. Section 230 stayed, and so a law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. On the other hand, a website that didn’t moderate anything at all would quickly become a spam-filled cesspool that few people would want to swim in.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet to flourish. Others say it allows platforms that have become hugely influential and important to suppress and censor speech based on their own whims or supposed political biases. Depending on who you talk to, internet platforms are either using the sword too much or not enough. Either way, they’re hiding behind the shield to protect themselves from lawsuits while they do it. Though it has been a law for nearly three decades, Section 230’s existence may have never been as precarious as it is now.

The Supreme Court might determine Section 230’s fate

Justice Clarence Thomas has made no secret of his desire for the court to consider Section 230, saying in multiple opinions that he believes lower courts have interpreted it to give too-broad protections to what have become very powerful companies. He got his wish in February 2023, when the court heard two similar cases involving it. In both, plaintiffs argued that their family members were killed by terrorists who posted content on those platforms. In the first, Gonzalez v. Google, the family of a woman killed in a 2015 terrorist attack in France said YouTube promoted ISIS videos and sold advertising on them, thereby materially supporting ISIS. In Twitter v. Taamneh, the family of a man killed in a 2017 ISIS attack in Turkey said the platform didn’t go far enough to identify and remove ISIS content, which is in violation of the Justice Against Sponsors of Terrorism Act — and could then mean that Section 230 doesn’t apply to such content.

These cases give the Supreme Court the chance to reshape, redefine, or even repeal the foundational law of the internet, which could fundamentally change it. And while the Supreme Court chose to take these cases on, it’s not certain that they’ll rule in favor of the plaintiffs. In oral arguments in late February, several justices didn’t seem too convinced during the Gonzalez v. Google arguments that they could or should, especially considering the monumental possible consequences and impact of such a decision. In Twitter v. Taamneh , the justices focused more on if and how the Sponsors of Terrorism law applied to tweets than they did on Section 230. The rulings are expected in June.

In the meantime, don’t expect the original authors of Section 230 to go away quietly. Wyden and Cox submitted an amicus brief to the Supreme Court in the Gonzalez case, in which they said: “The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Given the enormous volume of content created by Internet users today, Section 230’s protection is even more important now than when the statute was enacted.”

Congress and presidents are getting sick of Section 230, too

In 2018, two bills, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), were signed into law, changing parts of Section 230. Under the updates, platforms can now be deemed responsible for prostitution ads posted by third parties. These changes were ostensibly meant to make it easier for authorities to go after websites used for sex trafficking, but they did so by carving out an exception to Section 230, which could open the door to even more exceptions in the future.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for violating rules against hateful content and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms (even Twitter under Elon Musk won’t let him back) is perhaps the best example of this.

In a 2018 op-ed, Sen. Ted Cruz (R-TX) claimed that Section 230 required the internet platforms it was designed to protect to be “neutral public forums.” The law doesn’t actually say that, but many Republican lawmakers have introduced legislation that would impose such a requirement. On the other side, Democrats have introduced bills that would hold social media platforms accountable if they didn’t do more to prevent harmful content or if their algorithms promoted it.

There are some bipartisan efforts to change Section 230, too. The EARN IT Act from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), for example, would remove Section 230 immunity from platforms that didn’t follow a set of best practices to detect and remove child sexual abuse material. The partisan bills haven’t really gotten anywhere in Congress. But EARN IT, which was introduced in the last two sessions, passed out of committee in the Senate and was ready for a floor vote. That vote never came, but Blumenthal and Graham have already signaled that they plan to reintroduce EARN IT this session for a third try.

In the executive branch, former President Trump became a very vocal critic of Section 230 in 2020 after Twitter and Facebook started deleting and tagging his posts that contained inaccuracies about Covid-19 and mail-in voting. He issued an executive order that said Section 230 protections should only apply to platforms that have “good faith” moderation, and then called on the FCC to make rules about what constituted good faith. This didn’t happen, and President Biden revoked the executive order months after taking office.

But Biden isn’t a fan of Section 230, either. During his presidential campaign, he said he wanted it repealed. As president, Biden has said he wants Congress to reform it. Until lawmakers can agree on what’s wrong with Section 230, however, it doesn’t look likely that they’ll pass a law that significantly changes it.

However, some Republican-led states have been making their own anti-Section 230 moves. In 2021, Florida passed the Stop Social Media Censorship Act, which prohibits certain social media platforms from banning politicians or media outlets. That same year, Texas passed HB 20, which forbids large platforms from removing or moderating content based on a user’s viewpoint.

Neither law is currently in effect. A federal judge blocked the Florida law in 2022 on the grounds that it likely violates both free speech protections and Section 230. The state has appealed to the Supreme Court. The Texas law has made a little more progress: a district court blocked it last year, and the Fifth Circuit controversially reversed that decision before staying the law to give the Supreme Court the chance to take the case. We’re still waiting to see if it does.

If Section 230 were to be repealed — or even significantly reformed — it really could change the internet as we know it. It remains to be seen if that’s for better or for worse.

Update, February 23, 2023, 3 pm ET: This story, originally published on May 28, 2020, has been updated several times, most recently with the latest news from the Supreme Court cases related to Section 230.


The Channels, the news site of Santa Barbara City College

Technology, social media kill our communication skills

I’m sitting at Starbucks with a friend, and during our hours of conversations I wonder when her face got replaced with the back of her phone.

Is this what today’s society has come to? Have people, somewhere along the rise of social media and the Internet, lost track of the real world and genuine conversation?

Jessika Karlsson

Don’t get me wrong—I do think that social media is a wonderful concept and that the basic idea of the Internet is astonishing. Mankind managed to develop a tool to share information in an easy way, to enhance globalization and to bring the world closer together.

Social media, which is known worldwide, has helped families and friends to stay connected, despite the fact that they live on different continents.

Not even 10 years ago, communicating with people in other countries or states wasn’t just complicated but also extremely expensive. Making a long-distance call was a time-consuming process, and the reception wasn’t anything to cheer about.

But here we are in the year 2014, and the situation looks completely different. Today’s society is privileged with several free services, accessible to anyone with a smartphone, for communicating with anyone, anywhere. Accessing information is just as simple, thanks to the Internet.

All of these capabilities are what make the Internet unique: its whole concept and purpose is to connect, communicate and inform.

Somehow it feels like today’s society has taken the cyber world too far and our usage of the Internet is damaging everyday contact and interpersonal communication.

As the Internet progressed over the years, a new worldwide phenomenon was born: social media saw its first rays of light.

It is close to impossible to meet someone today who has managed to dodge the pull of networks such as Facebook, Instagram or Twitter. Social media has many amazing features, but it also comes at a high price.

People of many generations use these networks every day, but the networks are slowly taking over their real lives.

There are numerous stories of neglected children whose parents would rather update their Facebook pages than pay attention to their own flesh and blood.

Somewhere along the way, people started using social media more than they interacted with each other. Personal conversations, in which people actually socialize and communicate face to face, seem to be a dying art in 2014.

Another problem with the easy spread of information over the Internet is that it gives people a simple way to express hatred, share inappropriate material and engage in cyberbullying.

Cyberbullying in particular, especially among children, is a growing problem that many people struggle with. Mean comments and personal attacks spread like wildfire across social networks. In some cases, the outcome has been suicide attempts and even deaths.

The concept of the Internet and social media is a wonderful thing when used properly. But if we don’t wake up from this cyber daze and realize that there is a darker side to it, the social world we once knew will change forever.

The enjoyment of a cup of coffee and a conversation with a friend that doesn’t involve the back of a phone might be lost forever.



Hate Speech on Social Media: Global Comparisons


  • Hate speech online has been linked to a global increase in violence toward minorities, including mass shootings, lynchings, and ethnic cleansing.
  • Policies used to curb hate speech risk limiting free speech and are inconsistently enforced.
  • Countries such as the United States grant social media companies broad powers in managing their content and enforcing hate speech rules. Others, including Germany, can force companies to remove posts within certain time periods.

Introduction

A mounting number of attacks on immigrants and other minorities has raised new concerns about the connection between inflammatory speech online and violent acts, as well as the role of corporations and the state in policing speech. Analysts say trends in hate crimes around the world echo changes in the political climate, and that social media can magnify discord. At their most extreme, rumors and invective disseminated online have contributed to violence ranging from lynchings to ethnic cleansing.

The response has been uneven, and the task of deciding what to censor, and how, has largely fallen to the handful of corporations that control the platforms on which much of the world now communicates. But these companies are constrained by domestic laws. In liberal democracies, these laws can serve to defuse discrimination and head off violence against minorities. But such laws can also be used to suppress minorities and dissidents.

How widespread is the problem?


Incidents have been reported on nearly every continent. Much of the world now communicates on social media, with nearly a third of the world’s population active on Facebook alone. As more and more people have moved online, experts say, individuals inclined toward racism, misogyny, or homophobia have found niches that can reinforce their views and goad them to violence. Social media platforms also offer violent actors the opportunity to publicize their acts.

[Figure: bar chart of the share agreeing that “people should be able to make statements that are offensive to minority groups publicly,” with the United States at 67 percent.]

Social scientists and others have observed how social media posts, and other online speech, can inspire acts of violence:

  • In Germany, a correlation was found between anti-refugee Facebook posts by the far-right Alternative for Germany party and attacks on refugees. Scholars Karsten Müller and Carlo Schwarz observed that upticks in attacks, such as arson and assault, followed spikes in hate-mongering posts.
  • In the United States, perpetrators of recent white supremacist attacks have circulated among racist communities online, and also embraced social media to publicize their acts. Prosecutors said the Charleston church shooter , who killed nine black clergy and worshippers in June 2015, engaged in a “ self-learning process ” online that led him to believe that the goal of white supremacy required violent action.
  • The 2018 Pittsburgh synagogue shooter was a participant in the social media network Gab , whose lax rules have attracted extremists banned by larger platforms. There, he espoused the conspiracy that Jews sought to bring immigrants into the United States, and render whites a minority, before killing eleven worshippers at a refugee-themed Shabbat service. This “great replacement” trope, which was heard at the white supremacist rally in Charlottesville, Virginia, a year prior and originates with the French far right , expresses demographic anxieties about nonwhite immigration and birth rates.
  • The great replacement trope was in turn espoused by the perpetrator of the 2019 New Zealand mosque shootings, who killed forty-nine Muslims at prayer and sought to broadcast the attack on YouTube.
  • In Myanmar, military leaders and Buddhist nationalists used social media to slur and demonize the Rohingya Muslim minority ahead of and during a campaign of ethnic cleansing . Though Rohingya comprised perhaps 2 percent of the population, ethnonationalists claimed that Rohingya would soon supplant the Buddhist majority. The UN fact-finding mission said, “Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet [PDF].”
  • In India, lynch mobs and other types of communal violence, in many cases originating with rumors on WhatsApp groups , have been on the rise since the Hindu-nationalist Bharatiya Janata Party (BJP) came to power in 2014.
  • Sri Lanka has similarly seen vigilantism inspired by rumors spread online, targeting the Tamil Muslim minority. During a spate of violence in March 2018, the government blocked access to Facebook and WhatsApp, as well as the messaging app Viber, for a week, saying that Facebook had not been sufficiently responsive during the emergency.

Does social media catalyze hate crimes?

The same technology that allows social media to galvanize democracy activists can be used by hate groups seeking to organize and recruit. It also allows fringe sites, including peddlers of conspiracies, to reach audiences far broader than their core readership. Online platforms’ business models depend on maximizing reading or viewing times. Since Facebook and similar platforms make their money by enabling advertisers to target audiences with extreme precision, it is in their interests to let people find the communities where they will spend the most time.

Users’ experiences online are mediated by algorithms designed to maximize their engagement, which often inadvertently promote extreme content. Some web watchdog groups say YouTube’s autoplay function, in which the player, at the end of one video, tees up a related one, can be especially pernicious. The algorithm drives people to videos that promote conspiracy theories or are otherwise “ divisive, misleading or false ,” according to a Wall Street Journal investigative report. “YouTube may be one of the most powerful radicalizing instruments of the 21st century,” writes sociologist Zeynep Tufekci .

YouTube said in June 2019 that changes to its recommendation algorithm made in January had halved views of videos deemed “borderline content” for spreading misinformation. At that time, the company also announced that it would remove neo-Nazi and white supremacist videos from its site. Yet the platform faced criticism that its efforts to curb hate speech do not go far enough. For instance, critics note that rather than removing videos that provoked homophobic harassment of a journalist, YouTube instead cut off the offending user from sharing in advertising revenue.  

How do platforms enforce their rules?

Social media platforms rely on a combination of artificial intelligence, user reporting, and staff known as content moderators to enforce their rules regarding appropriate content. Moderators, however, are burdened by the sheer volume of content and the trauma that comes from sifting through disturbing posts , and social media companies don’t evenly devote resources across the many markets they serve.

A ProPublica investigation found that Facebook’s rules are opaque to users and inconsistently applied by its thousands of contractors charged with content moderation. (Facebook says there are fifteen thousand.) In many countries and disputed territories, such as the Palestinian territories, Kashmir, and Crimea, activists and journalists have found themselves censored , as Facebook has sought to maintain access to national markets or to insulate itself from legal liability. “The company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities,” ProPublica found.


Facing the challenge of navigating varying legal systems and standards around the world, and under investigation by several governments, Facebook CEO Mark Zuckerberg called for global regulations to establish baseline standards for content, electoral integrity, privacy, and data.

Problems also arise when platforms’ artificial intelligence is poorly adapted to local languages and companies have invested little in staff fluent in them. This was particularly acute in Myanmar, where, Reuters reported, Facebook employed just two Burmese speakers as of early 2015. After a series of anti-Muslim violence began in 2012, experts warned of the fertile environment ultranationalist Buddhist monks found on Facebook for disseminating hate speech to an audience newly connected to the internet after decades under a closed autocratic system.

Facebook admitted it had done too little after seven hundred thousand Rohingya were driven to Bangladesh and a UN human rights panel singled out the company in a report saying Myanmar’s security forces should be investigated for genocidal intent. In August 2018, it banned military officials from the platform and pledged to increase the number of moderators fluent in the local language.

How do countries regulate hate speech online?

In many ways, the debates confronting courts, legislatures, and publics about how to reconcile the competing values of free expression and nondiscrimination have been around for a century or longer. Democracies have varied in their philosophical approaches to these questions, as rapidly changing communications technologies have raised technical challenges of monitoring and responding to incitement and dangerous disinformation.

United States. Social media platforms have broad latitude [PDF], each establishing its own standards for content and methods of enforcement. Their broad discretion stems from the Communications Decency Act . The 1996 law exempts tech platforms from liability for actionable speech by their users. Magazines and television networks, for example, can be sued for publishing defamatory information they know to be false; social media platforms cannot be found similarly liable for content they host.

[Figure: data points on Americans’ level of concern over online hate speech, including that 59 percent believe online hate and harassment make hate crimes more common.]

Recent congressional hearings have highlighted the chasm between Democrats and Republicans on the issue. House Judiciary Committee Chairman Jerry Nadler convened a hearing in the aftermath of the New Zealand attack, saying the internet has aided white nationalism’s international proliferation. “The President’s rhetoric fans the flames with language that—whether intentional or not—may motivate and embolden white supremacist movements,” he said, a charge Republicans on the panel disputed. The Senate Judiciary Committee, led by Ted Cruz, held a nearly simultaneous hearing in which he alleged that major social media companies’ rules disproportionately censor conservative speech , threatening the platforms with federal regulation. Democrats on that panel said Republicans seek to weaken policies  dealing with hate speech and disinformation that instead ought to be strengthened.

European Union. The bloc’s twenty-eight members all legislate the issue of hate speech on social media differently, but they adhere to some common principles. Unlike in the United States, it is not only speech that directly incites violence that comes under scrutiny; so too does speech that incites hatred or denies or minimizes genocide and crimes against humanity. Backlash against the millions of predominantly Muslim migrants and refugees who have arrived in Europe in recent years has made this a particularly salient issue, as has an uptick in anti-Semitic incidents in countries including France, Germany, and the United Kingdom.

In a bid to preempt bloc-wide legislation, major tech companies agreed to a code of conduct with the European Union in which they pledged to review posts flagged by users and take down those that violate EU standards within twenty-four hours. In a February 2019 review, the European Commission found that social media platforms were meeting this requirement in three-quarters of cases .

The Nazi legacy has made Germany especially sensitive to hate speech. A 2018 law requires large social media platforms to take down posts that are “manifestly illegal” under criteria set out in German law within twenty-four hours. Human Rights Watch raised concerns that the threat of hefty fines would encourage the social media platforms to be “overzealous censors.”

New regulations under consideration by the bloc’s executive arm would extend a model similar to Germany’s across the EU, with the intent of “preventing the dissemination of terrorist content online .” Civil libertarians have warned against the measure for its “ vague and broad ” definitions of prohibited content, as well as for making private corporations, rather than public authorities, the arbiters of censorship.

India. Under new social media rules, the government can order platforms to take down posts within twenty-four hours based on a wide range of offenses, as well as to obtain the identity of the user. As social media platforms have made efforts to stanch the sort of speech that has led to vigilante violence, lawmakers from the ruling BJP have accused them of censoring content in a politically discriminatory manner, disproportionately suspending right-wing accounts, and thus undermining Indian democracy . Critics of the BJP accuse it of deflecting blame from party elites to the platforms hosting them. As of April 2018, the New Delhi–based Association for Democratic Reforms had identified fifty-eight lawmakers facing hate speech cases, including twenty-seven from the ruling BJP. The opposition has expressed unease with potential government intrusions into privacy.

Japan. Hate speech has become a subject of legislation and jurisprudence in Japan in the past decade [PDF], as anti-racism activists have challenged ultranationalist agitation against ethnic Koreans. This attention to the issue attracted a rebuke from the UN Committee on the Elimination of Racial Discrimination in 2014 and inspired a national ban on hate speech in 2016, with the government adopting a model similar to Europe’s. Rather than specify criminal penalties, however, it delegates to municipal governments the responsibility “to eliminate unjust discriminatory words and deeds against People from Outside Japan.” A handful of recent cases concerning ethnic Koreans could pose a test: in one, the Osaka government ordered a website containing videos deemed hateful taken down , and in Kanagawa and Okinawa Prefectures courts have fined individuals convicted of defaming ethnic Koreans in anonymous online posts.

What are the prospects for international prosecution?

Cases of genocide and crimes against humanity could be the next frontier of social media jurisprudence, drawing on precedents set in Nuremberg and Rwanda. The Nuremberg trials in post-Nazi Germany convicted the publisher of the newspaper Der Stürmer; the 1948 Genocide Convention subsequently included “direct and public incitement to commit genocide” as a crime. During the UN International Criminal Tribunal for Rwanda, two media executives were convicted on those grounds. As prosecutors look ahead to potential genocide and war crimes tribunals for cases such as Myanmar, social media users with mass followings could be found similarly criminally liable.

Recommended Resources

Andrew Sellars sorts through attempts to define hate speech .

Columbia University compiles relevant case law from around the world.

The U.S. Holocaust Memorial Museum lays out the legal history of incitement to genocide.

Kate Klonick describes how private platforms have come to govern public speech .

Timothy McLaughlin chronicles Facebook’s role in atrocities against Rohingya in Myanmar.

Adrian Chen reports on the psychological toll of content moderation on contract workers.

Tarleton Gillespie discusses the politics of content moderation .



The Telemarketing Company

We need to talk: has digital killed the art of conversation?


Technological advancements have changed every aspect of the way we communicate and work in 2020; they have dramatically influenced the way we make decisions as consumers and business decision makers. But, are we better for it? We take a closer look at the impact of mobile technology and digital communications, particularly how they have affected our ability to connect with each other, both socially and in the world of work.

Instant everything

Whilst, once, it might have just been email notifications and the occasional phone call, our everyday working lives are now filled with interruptions, alerts and distractions. Internal communication channels, task list reminders and meeting alerts come at us all day, each with differing levels of urgency and varying requirements. In our personal lives we have instant messages, emails, app notifications and voice assistants in our hands, demanding instant attention or diverting our focus until we are lost in absent-minded scrolling.

As a result, we are generally less patient and more easily frustrated if we don’t get prompt answers, instant communications and immediate results. In our capacity as consumers, this means being able to search online and get immediate results, or receive instant replies from customer service channels to solve our problems. And now, our experience as consumers has influenced our expectations in the business environment, where online research, web chat interactions and self-serve customer support are the norm when seeking information to feed our decision-making process.

Whilst this means we are essentially capable of doing more in less time, it’s interesting to consider how this affects the way we interact with others in direct or social situations: how many people rely on email or instant messaging as opposed to making calls or speaking face to face, and how does this impact working environments?

Are we losing the connection?

Has technology killed face-to-face communication? As Forbes puts it, our intensifying relationship with the digital environment leads to unhealthy habits that not only distract us from the “present” but also negatively impact communication effectiveness.

The role of technology

Our thirst for instant information and our expectation of a seamless customer experience create a massive challenge, which technologies such as AI, machine learning, marketing automation and bots can help address. Marketers today need to be able to adapt their messaging, not just via the appropriate channels for their target audiences, but also in the correct medium, to ensure they capture and hold consumers’ attention for long enough to engage.

Technology has an important role to play in helping marketers address this challenge and respond to buyers’ instantaneous needs, but this should not come at the exclusion of human interaction. In any context, human contact creates a genuine connection, allows us to communicate sensitive or complex information and ideas, and builds trust and understanding in a way that can’t be achieved through digital channels alone. The growing consensus is that we need to find the right balance between digital and human interactions, in both our personal and work lives.

The importance of a human connection

Relationships in the 21st century: the forgotten foundation of mental health and wellbeing Relationships are one of the most important aspects of our lives, yet we can often forget just how crucial our connections with other people are for our physical and mental health and wellbeing. Mental Health Foundation


The Evolving Free-Speech Battle Between Social Media and the Government


Earlier this month, a federal judge in Louisiana issued a ruling that restricted various government agencies from communicating with social-media companies. The plaintiffs, which include the attorneys general of Missouri and Louisiana, argued that the federal government was coercing social-media companies into limiting speech on topics such as vaccine skepticism. The judge wrote, in a preliminary injunction, “If the allegations made by plaintiffs are true, the present case arguably involves the most massive attack against free speech in United States’ history. The plaintiffs are likely to succeed on the merits in establishing that the government has used its power to silence the opposition.” The injunction prevented agencies such as the Department of Health and Human Services and the F.B.I. from communicating with Facebook, Twitter, or other platforms about removing or censoring content. (The Biden Administration appealed the injunction and, on Friday, the Fifth Circuit paused it. A three-judge panel will soon decide whether it will be reinstated as the case proceeds.) Critics have expressed concern that such orders will limit the ability of the government to fight disinformation.

To better understand the issues at stake, I recently spoke by phone with Genevieve Lakier, a professor of law at the University of Chicago Law School who focusses on issues of social media and free speech. (We spoke before Friday’s pause.) During our conversation, which has been edited for length and clarity, we discussed why the ruling was such a radical departure from the way that courts generally handle these issues, how to apply concepts like free speech to government actors, and why some of the communication between the government and social-media companies was problematic.

In a very basic sense, what does this decision actually do?

Well, in practical terms, it prevents a huge swath of the executive branch of the federal government from essentially talking to social-media platforms about what they consider to be bad or harmful speech on the platforms.

There’s an injunction and then there’s an order, and both are important. The order is the justification for the injunction, but the injunction itself is what actually has effects on the world. And the injunction is incredibly broad. It says all of these defendants—and we’re talking about the President, the Surgeon General, the White House press secretary, the State Department, the F.B.I.—may not urge, encourage, pressure, or induce in any manner the companies to do something different than what they might otherwise do about harmful speech. This is incredibly broad language. It suggests, and I think is likely to be interpreted to mean, that, basically, if you’re a member of one of the agencies or if you’re named in this injunction, you just cannot speak to the platforms about harmful speech on the platform until, or unless, the injunction ends.

But one of the puzzling things about the injunction is that there are these very significant carve-outs. For example, my favorite is that the injunction says, basically, “On the other hand, you may communicate with the platforms about threats to public safety or security of the United States.” Now, of course, the defendants in the lawsuit would say, “That’s all we’ve been doing. When we talk to you, when we talk to the platforms about election misinformation or health misinformation, we are alerting them to threats to the safety and security of the United States.”

So, read one way, the injunction chills an enormous amount of speech. Read another way, it doesn’t really change anything at all. But, of course, when you get an injunction like this from a federal court, it’s better to be safe than sorry. I imagine that all of the agencies and government officials listed in the injunction are going to think, We’d better shut up.

And the reason that specific people, jobs, and agencies are listed in the injunction is because the plaintiffs say that these entities were communicating with social-media companies, correct?

Correct. And communicating in these coercive or harmful, unconstitutional ways. The presumption of the injunction is that if they’ve been doing it in the past, they’re probably going to keep doing it in the future. And let’s stop continuing violations of the First Amendment.

As someone who’s not an expert on this issue, I find the idea that you could tell the White House press secretary that he or she cannot get up at the White House podium and say that Twitter should take down COVID misinformation—

Does this injunction raise issues on two fronts: freedom of speech and separation of powers?

Technically, when the press secretary is operating as the press secretary, she’s not a First Amendment-rights holder. The First Amendment limits the government, constrains the government, but protects private people. And so when she’s a private citizen, she has all her ordinary-citizen rights. Government officials technically don’t have First Amendment rights.

That said, it’s absolutely true that, when thinking about the scope of the First Amendment, courts take very seriously the important democratic and expressive interests in government speech. And so government speakers don’t have First Amendment rights, but they have a lot of interests that courts consider. A First Amendment advocate would say that this injunction constrains and has negative effects on really important government speech interests.

More colloquially, I would just say the irony of this injunction is that in the name of freedom of speech it is chilling a hell of a lot of speech. That is how complicated these issues are. Government officials using their bully pulpit can have really powerful speech-oppressive effects. They can chill a lot of important speech. But one of the problems with the way the district court approaches the analysis is that it doesn’t seem to be taking into account the interest on the other side. Just as we think that the government can go too far, we also think it’s really important for the government to be able to speak.

And what about separation-of-powers issues? Or is that not relevant here?

I think the way that the First Amendment is interpreted in this area is an attempt to protect some separation of powers. Government actors may not have First Amendment rights, but they’re doing important business, and it’s important to give them a lot of freedom to do that business, including to do things like express opinions about what private citizens are doing or not doing. Courts generally recognize that government actors, legislators, and executive-branch officials are doing important business. The courts do not want to second-guess everything that they’re doing.

So what exactly does this order say was illegal?

The lawsuit was very ambitious. It claimed that government officials in a variety of positions violated the First Amendment by inducing or encouraging or incentivizing the platforms to take down protected speech. And by coercing or threatening them into taking down protected speech. And by collaborating with them to take down protected speech. These are the three prongs that you can use in a First Amendment case to show that the decision to take down speech that looks like it’s directly from a private actor is actually the responsibility of the government. The plaintiffs claimed all three. What’s interesting about that district-court order is that it agreed with all three. It says, Yeah, there was encouragement, there was coercion, and there was joint action or collaboration.

And what sort of examples are they providing? What would be an example of the meat of what the plaintiffs argued, and what the judge found to violate the First Amendment?

A huge range of activities—some that I find troubling and some that don’t seem to be troubling. Public statements by members of the White House or the executive branch expressing dissatisfaction with what the platforms are doing. For instance, President Biden’s famous statement that the platforms are killing people. Or the Surgeon General’s warning that there is a health crisis caused by misinformation, and his urging the platforms to do something about it. That’s one bucket.

There is another bucket in which the platforms were going to agencies like the C.D.C. to ask them for information about the COVID pandemic and the vaccine—what’s true and what’s false, or what’s good and what’s bad information—and then using that to inform their content-moderation rules.

Very different and much more troubling, I think, are these e-mails that they found in discovery between White House officials and the platforms in which the officials more or less demand that the platforms take down speech. There is one e-mail from someone in the White House who asked Twitter to remove a parody account that was linked to President Biden’s granddaughter, and said that he “cannot stress the degree to which this needs to be resolved immediately”—and within forty-five minutes, Twitter takes it down. That’s a very different thing than President Biden saying, “Hey, platforms, you’re doing a bad job with COVID misinformation.”

The second bucket seems full of the normal give-and-take you’d expect between the government and private actors in a democratic society, right?

Yeah. Threats and government coercion on private platforms seem the most troubling from a First Amendment perspective. And traditionally that is the kind of behavior that these cases have been most worried about.

This is not the first case to make claims of this kind. This is actually one of dozens of cases that have been filed in federal court over the last years alleging that the Biden Administration or members of the government had put pressure on or encouraged platforms to take down vaccine-skeptical speech and speech about election misinformation. What is unusual about this case is the way that the district court responded to these claims. Before this case, courts had, for the most part, thrown these cases out. I think this was largely because they thought that there was insufficient evidence of coercion, and coercion is what we’re mostly worried about. They have found that this kind of behavior only violates the First Amendment if there is some kind of explicit threat, such as “If you don’t do X, we will do Y,” or if the government actors have been directly involved in the decision to take down the speech.

In this case, the court rejects that and has a much broader test, where it says, basically, that government officials violate the First Amendment if they significantly encourage the platforms to act. And that may mean just putting pressure on them through rhetoric or through e-mails on multiple occasions—there’s a campaign of pressure, and that’s enough to violate the First Amendment. I cannot stress enough how significant a departure that is from the way courts have looked at the issue before.

So, in this case, you’re saying that the underlying behavior may constitute something bad that the Biden Administration did, that voters should know about it and judge them on it, but that it doesn’t rise to the level of being a First Amendment issue?

Yes. I think that this opinion goes too far. It’s insufficiently attentive to the interests on the other side. But I think the prior cases have been too stingy. They’ve been too unwilling to find a problem—they don’t want to get involved because of this concern with separation of powers.

The platforms are incredibly powerful speech regulators. We have largely handed over control of the digital public sphere to these private companies. I think there is this recognition that when the government criticizes the platforms or puts pressure on the platforms to change their policies, that’s some form of political or democratic oversight, a way to promote public welfare. And those kinds of democratic and public-welfare concerns are pretty significant. The courts have wanted to give the government a lot of room to move.

But you think that, in the past, the courts have been too willing to give the government space? How could they develop a better approach?

Yeah. So, for example, the e-mails that are identified in this complaint—I think that’s the kind of pressure that is inappropriate for government actors in a democracy to be employing against private-speech platforms. I’m not at all convinced that, if this had come up in a different court, those would have been found to be a violation of the First Amendment. But there need to be some rules of the road.

On the one hand, I was suggesting that there are important democratic interests in not having too broad a rule. But, on the other hand, I think part of what’s going on here—part of what the facts that we see in this complaint are revealing—is that, in the past, we’ve thought about this kind of government pressure on private platforms, which is sometimes called jawboning, as episodic. There’s a local sheriff or there’s an agency head who doesn’t like a particular policy, and they put pressure on the television station, or the local bookseller, to do something about it. Today, what we’re seeing is that there’s just this pervasive, increasingly bureaucratized communication between the government and the platforms. The digital public theatre has fewer gatekeepers; journalists are not playing the role of leading and determining the news that is fit to print or not fit to print. And so there’s a lot of stuff, for good or for ill, that is circulating in public. You can understand why government officials and expert agencies want to be playing a more significant role in informing, influencing, and persuading the platforms to operate one way or the other. But it does raise the possibility of abuse, and I’m worried about that.

That was a fascinating response, but you didn’t totally answer the question. How should a court step in here without going too far?

The traditional approach that courts have taken, until now, has been to say that there’s only going to be a First Amendment violation if the coercion, encouragement, or collaboration is so strong that, essentially, the platform had no choice but to act. It had no alternatives; there was no private discretion. Because then we can say, Oh, yes, it was the government actor, not the platform, that ultimately was responsible for the decision.

I think that that is too restrictive a standard. Platforms are vulnerable to pressure from the government that’s a lot less severe. They’re in the business of making money by disseminating a lot of speech. They don’t particularly care about any particular tweet or post or speech act. And their economic incentives will often mean that they want to curry favor with the government and with advertisers by being able to continue to circulate a lot of speech. If that means that they have to break some eggs, that they have to suppress particular kinds of posts or tweets, they will do that. It’s economically rational for them to do so.

The challenge for courts is to develop rules of the road for how government officials can interact with platforms. It has to be the case that some forms of communication are protected, constitutionally O.K., and even democratically good. I want expert agencies such as the C.D.C. to be able to communicate to the platforms. And I want that kind of expert information to be constitutionally unproblematic to deliver. On the other hand, I don’t think that White House officials should be writing to platforms and saying, “Hey, take this down immediately.”

I never thought about threatening companies as a free-speech issue that courts would get involved with. Let me give you an example. If you had told me four years ago that the White House press secretary had got up and said, “I have a message from President Trump. If CNN airs one more criticism of me, I am going to try and block its next merger,” I would’ve imagined that there would be a lot of outrage about that. What I could not have imagined was a judge releasing an injunction saying that people who worked for President Trump were not allowed to pass on the President’s message from the White House podium. It would be an issue for voters to decide. Or, I suppose, CNN, during the merger decision, could raise the issue and say, “See, we didn’t get fair treatment because of what President Trump said,” and courts could take that into account. But the idea of blocking the White House press secretary from saying anything seems inconceivable to me.

I’ll say two things in response. One is that there is a history of this kind of First Amendment litigation, but it’s usually about private speech. We might think that public speech has a different status because there is more political accountability. I don’t know. I find this question really tricky, because I think that the easiest cases from a First Amendment perspective, and the easiest reason for courts to get involved, is when the communication is secret, because there isn’t political accountability.

You mentioned the White House press secretary saying something in public. O.K., that’s one thing. But what about if she says it in private? We might think, Well, then the platforms are going to complain. But often regulated parties do not want to say that they have been coerced by the government into doing something against their interests, or that they were threatened. There’s often a conspiracy of silence.

In those cases, it doesn’t seem to me as if there’s democratic accountability. But, even when it is public, we’ve seen over the past year that government officials are writing letters to the platforms: public letters criticizing them, asking for information, badgering them, pestering them about their content-moderation policies. And we might think, Sure, people know that that’s happening. Maybe the government officials will face political accountability if it’s no good. But we might worry that, even then, if the behavior is sufficiently serious, if it’s repeated, it might give the officials too much power to shape the content-moderation policies of the platforms. From a First Amendment perspective, I don’t know why that’s off the table.

Now, from a practical perspective, you’re absolutely right. Courts have not wanted to get involved. But that’s really worrying. I think this desire to just let the political branches work it out has meant that, certainly with the social-media platforms, it’s been like the Wild West. There are no rules of the road. We have no idea what’s O.K. or not for someone in the White House to e-mail to a platform. One of the benefits of the order and the injunction is that it’s opening up this debate about what’s O.K. and what’s not. It might be the case that the way to establish rules of the road will not be through First Amendment-case litigation. Maybe we need Congress to step in and write the rules, or there needs to be some kind of agency self-regulation. But I think it’s all going to have to ultimately be viewed through a First Amendment lens. This order and injunction go way too far, but I think the case is at least useful in starting a debate. Because up until now we’ve been stuck in this arena where there are important free-speech values that are at stake and no one is really doing much to protect them. ♦

More New Yorker Conversations

Naomi Klein sees uncanny doubles in our politics .

Olivia Rodrigo considers the meanings of “Guts.”

Isabel Allende’s vision of history .

Julia Fox didn’t want to be famous, but she knew she would be .

John Waters is ready for his Hollywood closeup .

Patrick Stewart boldly goes there .

Support The New Yorker’s award-winning journalism. Subscribe today .

The Radicalization of Israel’s Military

AI struggles to recognize toxic speech on social media. Here’s why.


Facebook says its artificial intelligence models identified and pulled down 27 million pieces of hate speech in the final three months of 2020. In 97 percent of the cases, the systems took action before humans had even flagged the posts.

That’s a huge advance, and all the other major social media platforms are using AI-powered systems in similar ways. Given that people post hundreds of millions of items every day, from comments and memes to articles, there’s no real alternative. No army of human moderators could keep up on its own.

But a team of human-computer interaction and AI researchers at Stanford sheds new light on why automated speech police can score highly on technical tests yet provoke a lot of dissatisfaction from humans with their decisions. The main problem: there is a huge difference between evaluating more traditional AI tasks, like recognizing spoken language, and the much messier task of identifying hate speech, harassment, or misinformation, especially in today's polarized environment.

“It appears as if the models are getting almost perfect scores, so some people think they can use them as a sort of black box to test for toxicity,’’ says Mitchell Gordon, a PhD candidate in computer science who worked on the project. “But that’s not the case. They’re evaluating these models with approaches that work well when the answers are fairly clear, like recognizing whether ‘java’ means coffee or the computer language, but these are tasks where the answers are not clear.”

The team hopes their study will illuminate the gulf between what developers think they’re achieving and the reality — and perhaps help them develop systems that grapple more thoughtfully with the inherent disagreements around toxic speech.

Too Much Disagreement

There are no simple solutions, because there will never be unanimous agreement on highly contested issues. Making matters more complicated, people are often ambivalent and inconsistent about how they react to a particular piece of content.

In one study, for example, human annotators rarely reached agreement when they were asked to label tweets that contained words from a lexicon of hate speech. Only 5 percent of the tweets were acknowledged by a majority as hate speech, while only 1.3 percent received unanimous verdicts. In a study on recognizing misinformation, in which people were given statements about purportedly true events, only 70 percent agreed on whether most of the events had or had not occurred.

Despite this challenge for human moderators, conventional AI models achieve high scores on recognizing toxic speech — .95 “ROCAUC” — a popular metric for evaluating AI models in which 0.5 means pure guessing and 1.0 means perfect performance. But the Stanford team found that the real score is much lower — at most .73 — if you factor in the disagreement among human annotators.
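The gap between the headline score and the disagreement-adjusted one can be seen with a toy calculation. The sketch below is illustrative only: the labels, scores, and `roc_auc` helper are invented for this example, not taken from the Stanford study. ROC AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one, so 0.5 is chance and 1.0 is perfect.

```python
# Illustrative only: invented labels and scores, not the Stanford study's data.
def roc_auc(labels, scores):
    """ROC AUC via pairwise comparison: fraction of positive/negative
    pairs where the positive example receives the higher score."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

model_scores = [0.95, 0.90, 0.80, 0.30, 0.20, 0.10]

# Against majority-vote labels, the model looks perfect.
print(roc_auc([1, 1, 1, 0, 0, 0], model_scores))  # 1.0

# Against a dissenting annotator's labels, the same predictions drop.
print(roc_auc([1, 0, 1, 1, 0, 0], model_scores))  # 7/9, about 0.78
```

The model's predictions never changed; only the ground truth did, which is the sense in which factoring in annotator disagreement lowers the "real" score.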

Reassessing the Models

In a new study, the Stanford team re-assesses the performance of today’s AI models by getting a more accurate measure of what people truly believe and how much they disagree among themselves.

The study was overseen by Michael Bernstein and Tatsunori Hashimoto, associate and assistant professors of computer science and faculty members of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). In addition to Gordon, Bernstein, and Hashimoto, the paper’s co-authors include Kaitlyn Zhou, a PhD candidate in computer science, and Kayur Patel, a researcher at Apple Inc.

To get a better measure of real-world views, the researchers developed an algorithm to filter out the "noise" (ambivalence, inconsistency, and misunderstanding) from how people label things like toxicity, leaving an estimate of the amount of true disagreement. They focused on how consistently each annotator labeled the same kind of language in the same way. The most consistent or dominant responses became what the researchers call "primary labels," which they then used as a more precise dataset that captures more of the true range of opinions about potentially toxic content.
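As a rough illustration of the "primary label" idea, one could keep each annotator's most frequent response when they rate the same content repeatedly. This is a simplified invention for explanation, not the researchers' actual algorithm; the function name and data are made up.

```python
# Simplified illustration of the "primary label" concept, not the
# Stanford team's actual algorithm.
from collections import Counter

def primary_label(repeated_ratings):
    """Return an annotator's most frequent rating for an item,
    filtering out one-off slips and inconsistency."""
    return Counter(repeated_ratings).most_common(1)[0][0]

# One annotator rated the same post three times, slipping once;
# their primary label is still "toxic".
print(primary_label(["toxic", "toxic", "not_toxic"]))  # toxic
```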

The team then used that approach to refine datasets that are widely used to train AI models in spotting toxicity, misinformation, and pornography. By applying existing AI metrics to these new “disagreement-adjusted” datasets, the researchers revealed dramatically less confidence about decisions in each category. Instead of getting nearly perfect scores on all fronts, the AI models achieved only .73 ROCAUC in classifying toxicity and 62 percent accuracy in labeling misinformation. Even for pornography — as in, “I know it when I see it” — the accuracy was only .79.

Someone Will Always Be Unhappy. The Question Is Who?

Gordon says AI models, which must ultimately make a single decision, will never assess hate speech or cyberbullying to everybody’s satisfaction. There will always be vehement disagreement. Giving human annotators more precise definitions of hate speech may not solve the problem either, because people end up suppressing their real views in order to provide the “right” answer.

But if social media platforms have a more accurate picture of what people really believe, as well as which groups hold particular views, they can design systems that make more informed and intentional decisions.

In the end, Gordon suggests, annotators as well as social media executives will have to make value judgments with the knowledge that many decisions will always be controversial.

“Is this going to resolve disagreements in society? No,” says Gordon. “The question is what can you do to make people less unhappy. Given that you will have to make some people unhappy, is there a better way to think about whom you are making unhappy?”



Study Highlights Challenges in Detecting Violent Speech Aimed at Asian Communities

Aug 09, 2024


A research group is calling for internet and social media moderators to strengthen their detection and intervention protocols for violent speech. 

Their study of language detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. Left unchecked, threats of violence online can go unnoticed and turn into real-world attacks. 

Researchers from Georgia Tech and the Anti-Defamation League (ADL) teamed up for the study. They made their discovery while testing natural language processing (NLP) models trained on data crowdsourced from Asian communities.

“The Covid-19 pandemic brought attention to how dangerous violence-provoking speech can be. There was a clear increase in reports of anti-Asian violence and hate crimes,” said Gaurav Verma, a Georgia Tech Ph.D. candidate who led the study.

“Such speech is often amplified on social platforms, which in turn fuels anti-Asian sentiments and attacks.”

Violence-provoking speech differs from more commonly studied forms of harmful speech, like hate speech. While hate speech denigrates or insults a group, violence-provoking speech implicitly or explicitly encourages violence against targeted communities.

Humans can define and characterize violent speech as a subset of hateful speech. However, computer models struggle to tell the difference due to subtle cues and implications in language.

The researchers tested five different NLP classifiers and analyzed their F1 scores, a standard measure of a model's performance. The classifiers scored 0.89 at detecting hate speech but only 0.69 at detecting violence-provoking speech, a gap that highlights how much less accurate and reliable these tools are on the harder task.
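For readers unfamiliar with the metric, F1 is the harmonic mean of precision (how many flagged posts were truly violence-provoking) and recall (how many truly violence-provoking posts were flagged). A minimal sketch, with made-up labels rather than the study's data, shows how misses and false alarms pull the score down:

```python
# Illustrative only: invented labels, not the Georgia Tech study's data.
def f1(true_labels, predictions):
    """F1 score: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(true_labels, predictions) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

truth = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = violence-provoking
preds = [1, 1, 0, 0, 1, 0, 0, 0]  # two misses, one false alarm

# precision = 2/3, recall = 1/2, so F1 = 4/7, about 0.57
print(f1(truth, preds))
```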

The study stresses the importance of developing more refined methods for detecting violence-provoking speech. Internet misinformation and inflammatory rhetoric escalate tensions that lead to real-world violence. 

The Covid-19 pandemic exemplified how public health crises intensify this behavior, helping inspire the study. The group cited that anti-Asian crime across the U.S. increased by 339% in 2021 due to malicious content blaming Asians for the virus. 

The researchers believe their findings show the effectiveness of community-centric approaches to problems dealing with harmful speech. These approaches would enable informed decision-making between policymakers, targeted communities, and developers of online platforms.

Along with stronger models for detecting violence-provoking speech, the group discusses a direct solution: a tiered penalty system on online platforms. Tiered systems align penalties with severity of offenses, acting as both deterrent and intervention to different levels of harmful speech. 

“We believe that we cannot tackle a problem that affects a community without involving people who are directly impacted,” said Jiawei Zhou, a Ph.D. student who studies human-centered computing at Georgia Tech.

“By collaborating with experts and community members, we ensure our research builds on front-line efforts to combat violence-provoking speech while remaining rooted in real experiences and needs of the targeted community.”

The researchers trained their tested NLP classifiers on a dataset crowdsourced from a survey of 120 participants who self-identified as Asian community members. In the survey, the participants labeled 1,000 posts from X (formerly Twitter) as containing either violence-provoking speech, hateful speech, or neither.

Since characterizing violence-provoking speech is not universal, the researchers created a specialized codebook for survey participants. The participants studied the codebook before their survey and used an abridged version while labeling. 

To create the codebook, the group used an initial set of anti-Asian keywords to scan posts on X from January 2020 to February 2023. This tactic yielded 420,000 posts containing harmful, anti-Asian language. 

The researchers then filtered the batch through new keywords and phrases. This refined the sample to 4,000 posts that potentially contained violence-provoking content. Keywords and phrases were added to the codebook while the filtered posts were used in the labeling survey.
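The two-stage funnel described above can be sketched as a simple keyword filter. This is a minimal illustration assuming case-insensitive substring matching; the placeholder terms (`broad_term`, `narrow_phrase`) and toy posts stand in for the researchers' actual codebook keywords and X data.

```python
# Minimal sketch of the two-stage keyword funnel: a broad scan followed
# by a refined pass. Keywords and posts here are purely illustrative.

def keyword_filter(posts, keywords):
    """Keep posts containing at least one keyword (case-insensitive)."""
    keywords = [k.lower() for k in keywords]
    return [p for p in posts if any(k in p.lower() for k in keywords)]

posts = [
    "Totally harmless post about lunch",
    "Post containing BROAD_TERM only",
    "Post containing BROAD_TERM and NARROW_PHRASE",
]

# Stage 1: broad keyword scan (420,000 posts in the study)
stage1 = keyword_filter(posts, ["broad_term"])
# Stage 2: refined keywords and phrases (4,000 candidate posts in the study)
stage2 = keyword_filter(stage1, ["narrow_phrase"])
```

The design choice is the funnel itself: a cheap, high-recall first pass over millions of posts, then a narrower second pass so that human labelers only see a tractable set of likely candidates.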

The team validated its codebook through discussion and pilot testing, in which pilot participants labeled 100 posts from X to ensure the sound design of the Asian community survey. The group also sent the codebook to the ADL for review and incorporated the organization’s feedback.

“One of the major challenges in studying violence-provoking content online is effective data collection and funneling down because most platforms actively moderate and remove overtly hateful and violent material,” said Tech alumnus Rynaa Grover (M.S. CS 2024).

“To address the complexities of this data, we developed an innovative pipeline that deals with the scale of this data in a community-aware manner.”

Emphasis on community input extended into collaboration within Georgia Tech’s College of Computing. Faculty members Srijan Kumar and Munmun De Choudhury oversaw the research that their students spearheaded.

Kumar, an assistant professor in the School of Computational Science and Engineering, advises Verma and Grover. His expertise is in artificial intelligence, data mining, and online safety.

De Choudhury is an associate professor in the School of Interactive Computing and advises Zhou. Their research connects societal mental health and social media interactions.

The Georgia Tech researchers partnered with the ADL, a leading non-governmental organization that combats real-world hate and extremism. ADL researchers Binny Mathew and Jordan Kraemer co-authored the paper.

The group will present its paper at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), which takes place in Bangkok, Thailand, Aug. 11-16.

ACL 2024 accepted 40 papers written by Georgia Tech researchers. Of the 12 Georgia Tech faculty who authored papers accepted at the conference, nine are from the College of Computing, including Kumar and De Choudhury.

“It is great to see that the peers and research community recognize the importance of community-centric work that provides grounded insights about the capabilities of leading language models,” Verma said.

“We hope the platform encourages more work that presents community-centered perspectives on important societal problems.” 

Visit https://sites.gatech.edu/research/acl-2024/ for news and coverage of Georgia Tech research presented at ACL 2024.



Is Internet Language a Destroyer to Communication?

  • Conference paper
  • First Online: 25 July 2023


Chan Eang Teng & Tang Mui Joo

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 693)

Included in the following conference series:

  • International Congress on Information and Communication Technology


Internet language is the new form of language used on social media by Internet users. Because it is so widely used on social media, it influences users’ behaviour, and there are fears that it undermines the authenticity of the original language. Language is a vitally important communication tool for everyone, and hence human communication has been studied extensively. As the Internet has grown rapidly, language has been affected by the invention of Internet language, also known as Internet slang, which is now widely used in daily communication. However, Internet language causes several problems. For example, people who seldom use the Internet may not understand it, which can lead to communication breakdowns, loss of language authenticity, and a generation gap. As research on Internet language is limited, several questions remain open, such as the communication habits of Internet language users, their level of understanding of the original language, and how Internet language leaves elders out of touch with contemporary society. A quantitative research method, an online survey, is used to study generation Z, born from 1997 to 2012, and baby boomers, born from 1955 to 1964. These samples were selected in order to investigate the generation gap that Internet language creates between gen Z and baby boomers. The research investigates how Internet language affects human communication habits, and it has found that Internet language has indeed affected those habits, because it has become part and parcel of users’ communication style.



Acknowledgements

The authors acknowledge the raw materials provided by Alice Tan, Hor Yan, Wern Jing, and Sherwyn Yap.

Author information

Authors and Affiliations

Tunku Abdul Rahman University of Management and Technology, 53300, Kuala Lumpur, Malaysia

Chan Eang Teng & Tang Mui Joo


Corresponding author

Correspondence to Chan Eang Teng.

Editor information

Editors and Affiliations

Department of Design Engineering and Mathematics, Middlesex University London, London, UK

Xin-She Yang

Department of Biomedical Engineering, University of Reading, England, UK

R. Simon Sherratt

Department of Computer Science and Engineering, Techno International Newtown, Chakpachuria, West Bengal, India

Nilanjan Dey

Global Knowledge Research Foundation, Ahmedabad, India


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper.

Teng, C.E., Joo, T.M. (2023). Is Internet Language a Destroyer to Communication?. In: Yang, XS., Sherratt, R.S., Dey, N., Joshi, A. (eds) Proceedings of Eighth International Congress on Information and Communication Technology. ICICT 2023. Lecture Notes in Networks and Systems, vol 693. Springer, Singapore. https://doi.org/10.1007/978-981-99-3243-6_42


DOI: https://doi.org/10.1007/978-981-99-3243-6_42

Published: 25 July 2023

Publisher Name: Springer, Singapore

Print ISBN: 978-981-99-3242-9

Online ISBN: 978-981-99-3243-6

eBook Packages: Engineering, Engineering (R0)


Speech on Internet

Very good morning to all. Today, I am here to present a speech on the internet. Someone has rightly said that the world is a small place; with the advent of the internet, this saying seems realistic. The internet has truly brought the world together, and the distance between two people is hardly a distance today. We all know about the technological advancements happening in the world, and one of the major drivers of this advancement is the internet. Today the internet is easily available to many individuals, and it is rapidly changing the way we work, travel, learn and entertain ourselves.


Evolution of Internet

Many of you are aware of what the internet is. Still, I would like to highlight some of its aspects. The internet is a facility wherein two devices are connected through signals, so that information can be exchanged between them.

The history of the internet dates back about 40 years, to its first use in the United States of America; its inventors were Robert E. Kahn and Vint Cerf. Earlier, the internet was only used to send emails between two computers. Today it has reached all distant parts of the globe, with billions of users who rely on it for exchanging information, entertainment, money transfers, and more.


Pros of the Internet

The internet facility has many advantages and it has proved to be a milestone in the technical advancement of humankind. It allows users to exchange and communicate information. Two users who are sitting in distant corners of the world can easily communicate through mails, chats, and video conferencing by using the internet.

It provides information of all kinds to its users. It also provides entertainment through services for watching movies, listening to music, and playing games. Various day-to-day activities, such as travel ticket bookings, banking, and shopping, can be done easily through the internet.

Nowadays the internet also offers various dating and matrimonial websites through which one can find a prospective soul mate.

The internet also offers its users a way to earn online by means of blogs and video blogs. These are some of the major benefits, but the internet has a dark side as well.

Cons of the Internet

Many people misuse information for fraud and illegal work. With excessive use of the internet in the wrong hands, a number of cybercrimes are occurring, which erodes people’s trust in the internet.

Abuse over social media is also prevalent, with people of a negative mentality abusing others on the basis of caste, race, colour, appearance, and more. Addiction to online games is one of the major problems parents face today, as children become addicted to them and neglect their studies and outdoor activities.

The internet has nowadays become such an important part of people’s lives that it is hardly possible to spend even a day without using it. Thus, having seen its negatives, it is still not practical to avoid the internet completely. However, we can place time limits or restrictions on its use, especially for children.

Parents and teachers can monitor the online activities of their children and guide them in the proper use of the internet. We should also educate people about online cybercrime and fraud. Through proper precautions and safety measures, the internet can prove to be a boon for the development of human society.


Britain’s Violent Riots: What We Know

Officials had braced for more unrest on Wednesday, but the night’s anti-immigration protests were smaller, with counterprotesters dominating the streets instead.


By Lynsey Chutel

After days of violent rioting set off by disinformation around a deadly stabbing rampage, the authorities in Britain had been bracing for more unrest on Wednesday. But by nightfall, large-scale anti-immigration demonstrations had not materialized, and only a few arrests had been made nationwide.

Instead, streets in cities across the country were filled with thousands of antiracism protesters, including in Liverpool, where by late evening, the counterdemonstration had taken on an almost celebratory tone.

Over the weekend, the anti-immigration protests, organized by far-right groups, had devolved into violence in more than a dozen towns and cities. And with messages on social media calling for wider protests and counterprotests on Wednesday, the British authorities were on high alert.

With tensions running high, Prime Minister Keir Starmer’s cabinet held emergency meetings to discuss what has become the first crisis of his recently elected government. Some 6,000 specialist public-order police officers were mobilized nationwide to respond to any disorder, and the authorities in several cities and towns stepped up patrols.

Wednesday was not trouble-free, however.

In Bristol, the police said there was one arrest after a brick was thrown at a police vehicle and a bottle was thrown. In the southern city of Portsmouth, police officers dispersed a small group of anti-immigration protesters who had blocked a roadway. And in Belfast, Northern Ireland, where there have been at least four nights of unrest, disorder continued, and the police service said it would bring in additional officers.

But overall, many expressed relief that the fears of wide-scale violence had not been realized.

Here’s what we know about the turmoil in Britain.


Online hate analysts are calling for greater eSafety powers after study finds rise in anti-Semitism and Islamophobia


Australian analysts tracking offensive online comments since the current Israel-Gaza conflict have found anti-Semitic and Islamophobic posts have skyrocketed.

They say the national eSafety Commissioner needs greater power to rein in online hate and there should be funding to train police to tackle it.


Researchers of online hate are calling for the remit of Australia's online regulator to be expanded, amid a "significant" increase in anti-Semitism and Islamophobia.

The Australian-based Online Hate Prevention Institute tracked offensive posts globally on 10 social media platforms for three months from the beginning of the war in Gaza on October 7 last year.

The research has already been used for international police training, and the authors say it highlights a flaw in how the issue is tackled by the eSafety Commissioner.

The study, conducted with the Online Hate Task Force in Belgium, involved analysts working in one-hour blocks, seeking out anti-Semitism or Islamophobia.

"It's rough," an Australian researcher who worked on the project told the ABC.

"All of us who do the social media monitoring work, we all struggle with it from time to time."


There can be personal risk for those involved in the work, so she has asked for her name not to be used.

The team has used a "snowball" methodology where they find an offensive post and then click through to those interacting with it, working for an hour at a time to avoid becoming stuck in "an echo chamber".

"Having real people look at this means we see things that artificial intelligence won't," she said.

"There's a lot of dog whistles that are used, coded language, things like that."
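The "snowball" approach described above can be sketched as a breadth-first walk over an interaction graph. This is a minimal illustration under stated assumptions: the toy graph and post names are invented, and the fixed collection budget stands in for the analysts' manual, time-boxed one-hour blocks.

```python
from collections import deque

# Toy stand-in for real data: each post maps to posts by users who
# replied to or shared it. Names are purely illustrative.
interactions = {
    "seed": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": ["d"],
}

def snowball(start, graph, budget=3):
    """Collect up to `budget` posts reachable from `start`, in BFS order."""
    seen, queue, collected = {start}, deque([start]), []
    while queue and len(collected) < budget:
        post = queue.popleft()
        collected.append(post)
        for nxt in graph.get(post, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return collected
```

Capping the traversal per session mirrors the analysts' practice of working in one-hour blocks: it bounds exposure to distressing material and stops the sample from collapsing into a single echo chamber around one seed post.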

Online hate speech has ballooned in past year

The lead author, Andre Oboler, who is also the Online Hate Prevention Institute's CEO, said hate speech targeting both groups was "up significantly on every single platform".

"It's the mainstream, it's the extreme — everything is just up," he said.

The institute's report into Islamophobia and racism against Palestinians and Arabs found 1,169 offensive posts over 160 hours of searching, from October to February.


It divided the hate into 11 categories, including 'inciting violence against Muslims', 'Muslims as a cultural threat', 'demonising or dehumanising', 'xenophobia' and 'anti-Muslim jokes'.

The institute earlier released a report into anti-Semitism, which found 2,898 offensive items.

The posts were sorted into 27 groups, under four broad categories: 'traditional anti-Semitism', 'incitement to violence', 'Holocaust related content', and 'anti-Semitism related to Israel or Israelis'.

A third report comparing the two datasets will be released in coming months.


The institute had already been working on an anti-Semitism project before the Hamas attacks on Israel on October 7 last year and the ensuing Gaza war, and so was able to compare the data.

It found the volume of offensive posts increased more than five-fold.

The institute did not have a dataset on Islamophobia for a comparison but said "through comparisons with other data we can state with certainty that religious vilification against Muslims has increased substantially".


The report estimated that such posts had quadrupled.

The report into anti-Semitism was produced in partnership with the Executive Council of Australian Jewry and the report into Islamophobia was produced partly with funding from the Australian government's Safe and Together Community Grants Program.

The institute is seeking funding to run the project again starting in October, to see if rates have changed one year on.

"If we don't have that measurement, we don't have the data to guide government policy, to guide the community, to support the community organisations that need to respond," Dr Oboler said.

It’s the latest in a series of measures showing an increase in discriminatory incidents.

In the weeks after the conflict began , Islamophobia Register Australia recorded a 13-fold increase in incidents compared to the previous year, while the Executive Council of Australian Jewry reported a six-fold increase in anti-Semitic incidents, saying more had occurred following the start of the war than in the entire previous year.


'People should be able to disagree peacefully'

In Victoria, police have established 'Operation Park' to investigate offences associated with the Middle East conflict.

They have investigated 88 incidents relating to anti-Semitism and 16 related to Islamophobia, mostly involving things like graffiti and verbal abuse.

Queensland Police said it did not categorise the offences in a searchable way and flagged "police aren’t always involved" in discriminatory incidents.

NSW Police was unable to provide data, and Western Australia also raised an issue with gathering data but said there had been "no incidents involving violence" in the state.

ACT Police said there had been no incidents there, and South Australia said it had "not observed an increase" in reports of racially motivated incidents. The Northern Territory and Tasmanian police forces did not respond.

The federal government's recently appointed Special Envoy on Social Cohesion, Peter Khalil, said his role would "absolutely" be looking at rises in hate on social media.


His appointment follows the announcement of a special envoy on anti-Semitism, and the promise of one on Islamophobia, although some Muslim leaders have questioned the worth of such an appointment .

Mr Khalil said the appointments were needed to help respond to "the challenges we're facing at the moment" but he hoped "if we do our work effectively, we navigate through what is a difficult period now, those roles won't be necessary in the future".

Mr Khalil said he and the government supported the right to political expression "100 per cent".

"People should be able to disagree peacefully on issues without resorting to the personal attack on someone based on their identity," he said.

"The terrible cost of war, the deep pain and anguish many Australians have felt of what they're seeing overseas, should not mean that Jewish Australians or Muslim Australians should be vilified or attacked, because of their faith and their background."


Grassroots efforts to combat the impacts

Amid the rise in discrimination, members of affected communities are standing up to support one another.

Heshy Adelist, who owns an outdoor cleaning company, has been voluntarily removing anti-Semitic graffiti in Melbourne.

After being alerted to a spray-painted message to "kill Jews" in November last year, he immediately left the job he was on to go and clean it off.

"We already have so much hate in this world already," he said.


"If it had have said kill all Muslims, or kill all Christians, I would have gone out and cleaned it, it's more about the hate.

"But when it said kill the Jews, since I am Jewish, it was a bit more personal."

He has since cleaned off plenty more hate speech, including Nazi symbols, for free and is part of an informal message group where people can report and respond to anti-Semitism.


"People at work, people at their homes, being targeted, online — just because they're Jewish."

Also in Melbourne, Abdurrafi Suwarno has been offering emotional and legal support to victims of Islamophobia, who he said were often women of colour.

"There are many online incidents, there's a lot of Islamophobic graffiti that happens, there is a lot of workplace incidents," he said.

"It's been extremely intense and horrible, it feels for our entire community that we've been going through a collective and unending trauma, and while we're crying for our brothers and sisters overseas, we're also seeing the direct impacts on the ground here."


Mr Suwarno works with the Islamophobia Support Service which is run by the Islamic Council of Victoria, and feels a responsibility to hear the often-untold stories.

"I wouldn't say it feels good, I would say it feels necessary," he said.

"I want to see a future where the women in our community, the future generations and children in our community, can have a fair and equitable society where they can contribute and feel free to go about their business, and not have to believe that something will happen to them."

Training for police and calls to increase eSafety powers

The Online Hate Prevention Institute and the Belgium-based Online Hate Task Force have used their latest research to run training for police from across the world.

The session was staged in Brussels and online, with contributions from the European Commission Coordinator on combating anti-Muslim hatred, Marion Lalisse, and the Office of the EU Coordinator on combating anti-Semitism and fostering Jewish life.

Dr Andre Oboler said some Australian officers, including some members of the Federal Police, had taken part.

"The biggest piece of feedback was a view that this sort of training really needs to be rolled out to the grass roots," he said.

"The police we had were generally those working on biased hate crimes, or in the counter-terrorism space, but their view was this is really training that the cops on the beat need."

Dr Oboler has spent more than a decade at the Online Hate Prevention Institute and was previously co-chair of the Israeli government's Global Forum for Combating Anti-Semitism.

He said there was potential for more Australian officers to get access to the training.

"We've already had contact with some of the police forces that are interested — again, one of the difficulties comes back to funding."

The institute's latest report has recommended expanding the remit of Australia's eSafety Commissioner, so it can deal with group-based hate.

"At the moment, it can only deal with hate targeting individuals," Dr Oboler said.

"But if it's attacking the entire community right now, it's out of eSafety's remit, they have no power to deal with that."

The research found the rate at which offensive posts were removed varied "significantly" between social media platforms. Overall, 18 per cent of the anti-Semitic posts were removed, while 32 per cent of the Islamophobic posts were taken down.

"We need eSafety to be able to take down such content as well," Dr Oboler said.

"They need to be able to issue a notice, and they can't do that unless the government updates the legislation and actually gives them that power."

An eSafety spokesperson said the government had commissioned an independent review of Australia's Online Safety Act and the final report was expected by the end of October.

"We will also continue to work closely with the Australian government to ensure the Online Safety Act and related enabling legislation remains fit for purpose and adequately reflects Australians' needs and expectations," the spokesperson said.

Middle East latest: Israel bracing for attack after Hamas leader killed - as Britons in Lebanon told: 'Leave now'

Israel is bracing itself for a potential multiday attack from Iran and Hezbollah, officials have said, as fears grow over a wider escalation of conflict in the Middle East. Meanwhile, the UK and US are among the countries urging citizens to leave Lebanon on "any ticket available".

Monday 5 August 2024 17:20, UK

  • Israel bracing for attack as 'wave of missiles' expected
  • Iran says it doesn't want regional escalation but must 'punish' Israel
  • Flights from Lebanon to London double in price as Britons told: 'Leave now'
  • Eyewitness: Inside the inhumane 'safe zone' where Palestinians are crammed in
  • Alistair Bunkall: Iran purposefully delaying retaliation against Israel
  • Alex Crawford: Inevitable audacious assassinations will expand the war zone

We're pausing our live coverage for the day, but we'll return if there are any major developments later.

Before we go, here is a recap of what's been happening today as tensions rise in the Middle East:

  • Israel is bracing for a potential multiday attack by Iran and Hezbollah, an Israeli official has told our US partner network NBC;
  • The UK, US, Australia, France, Canada, Japan, Jordan, South Korea and Saudi Arabia have all recommended their nationals vacate Lebanon as soon as possible;
  • Ultra-Orthodox Jews have clashed with Israeli police over a supreme court ruling saying their previous draft exemptions were illegal;
  • Lebanon has received emergency medical supplies to equip its hospitals for possible war injuries. The World Health Organisation has delivered 32 tonnes of medical supplies;
  • Israel returned the bodies of 84 Palestinians to Gaza, though Hamas says none of them are identifiable because they were left in a "state of complete decomposition";
  • Israel says it has killed Abdul Fattah Al-Zurai'i - an official in the Hamas-run government in Gaza who it says was involved in militant activities.

Israel hasn't ruled out sending its government to a command bunker that sits underneath the Jerusalem hills amid fear of attacks from Iran and Hezbollah.

The bunker, known as the National Management Centre, was built after the end of the Second Lebanon War in 2006.

It can hold hundreds of people and can reportedly sustain hits from a range of existing weaponry, while it is connected to the headquarters of Israel's defence ministry.

A spokesman for Israel's government was asked today whether the prime minister's office had activated the bunker amid rising tensions in the Middle East.

"I think, like almost every single government or any responsible government around the world, Israel does have procedures to protect the government of this country in times of difficulty," said David Mencer.

"Israel is, of course, no different, and we've had those things in place for many years now. 

"This is a tough neighbourhood to live in, perhaps the toughest neighbourhood in the entire world to live in. 

"We have unfortunately become used to genocidal terrorist regimes, organisations, but also genocidal countries like Iran that seek our destruction."

Israel is hoping it can assemble another coalition of countries to help defend it from whatever attack Iran has planned.

There are fears in Israel that Iran is planning an attack with "waves of missiles and drones" fired over several days into the country (see 08.39am post).

The last time Iran openly attacked Israel was in April, when a coalition of countries including Britain, the US, France, and allied Arab states helped intercept missiles fired into the country.

Prior to that attack, Iran provided warning of what was to follow, giving Israel and its allies time to get military preparations in place.

But according to our Middle East correspondent Alistair Bunkall, it doesn't look like Iran will be so generous this time around.

"They are probably more likely to launch an attack without any forewarning," he says.

"Last time in April, they took about 13 days before launching an attack that kept Israelis on edge. That is happening again. This time, the country is very tense awaiting that."

This video report has more...

Earlier, we brought you news that Israel had returned the bodies of 84 Palestinians to Gaza (see 2.00pm post).

Hamas has since released a statement on its Telegram channel saying that none of the bodies are identifiable as they are in a "state of complete decomposition".

This, the group says, is an attempt from Israel to "double the suffering" of the families of the deceased who want to know the fate of their loved ones.

Hamas has called on the international community to "reject and denounce" what it has labelled as "heinous inhuman practices".

Foreign nationals have been told to leave Lebanon, with many expecting the Hezbollah group based there to be involved in the Iranian response to Israel over the killing of Ismail Haniyeh in Tehran last week. 

The UK, US, Australia, France, Canada, Japan, Jordan, South Korea and Saudi Arabia have recommended their nationals vacate the country as soon as possible.

"Tensions are high, and the situation could deteriorate rapidly," David Lammy, the foreign secretary, said on Saturday. 

A group focused on the safe return of Israeli hostages has released an angry statement in response to Palestinian reports that Israel returned more than 80 bodies to Gaza.

Weam Fares, a spokesperson for the Nasser Hospital in southern Gaza, said today that 84 bodies were handed over at the Kerem Shalom crossing and were taken directly for burial.

"How can it be that the state of Israel gives 80 bodies and receives zero in return?" the Hostages and Missing Families Forum said.

"How is it possible that the state of Israel, under the leadership of Netanyahu, is returning bodies that are not part of a deal? What about our family members, how long will they be held captive by Hamas in Gaza?

"The prime minister actually shows maximum determination and efficiency in returning the bodies of the Gazans to their families."

There are 115 hostages believed to be held in Gaza, including 41 whose deaths have been confirmed by Israeli authorities. Of the total, 111 were abducted on 7 October.

Last week, the families of hostages held in Gaza held a march and rally in Tel Aviv, where the Hebrew words "300 days in abandonment" were lit up in flames on the beach.

Iran is purposefully delaying its retaliation against Israel to sow fear and give itself more time to coordinate, our Middle East correspondent Alistair Bunkall says.

Israel has been blamed by Iran for killing Hamas's political leader in Tehran last week, with Ayatollah Ali Khamenei vowing revenge and saying it was Iran's "duty" to avenge the assassination.

Iran is expected to retaliate with a multiday attack on Israel. This will come from multiple fronts, including from Hezbollah in Lebanon to the north, the Houthis in Yemen to the south and proxies loyal to Iran in Syria and Iraq to the east.

Organising such an attack undoubtedly takes time and strict coordination, which is one reason why a large attack has not yet come.

Another reason is that Iran is able to sow fear into Israelis and the Lebanese, who are anxiously speculating over what might be on the horizon.

"The wait for Iran's response is in part deliberate, I think," says Bunkall.

"The Iranians know that it sort of plays into the psychology of Israelis as they speculate what might be coming. 

"It's also because I think the Iranians are trying to decide exactly what their response is going to be, how to coordinate it with Hezbollah and other proxies. 

"The delay does give particularly the Americans time to get more military assets into the region, to provide a defensive layer for Israel too."

Nasser Kanaani, a spokesman for the Iranian foreign ministry, said today that Tehran was not seeking a wider conflict in the region, but added that "punishing Israel is necessary".

Despite Iran's attempt to downplay the idea it would spark an all-out regional war, Bunkall says that decision might now be out of politicians' hands.

"It would not take a very big Iranian retaliation, or from Hezbollah, to force the region into an uncontrollable war," he added.

"And even though that's not what everybody wants, sometimes these events have a habit of running out of the control of the politicians and the commanders who try to orchestrate them."

Israel says it has killed an official in the Hamas-run government in Gaza who was involved in militant activities.

Hamas confirmed that Abdul Fattah Al-Zurai'i was killed alongside his mother in an airstrike yesterday. The statement identified him as the undersecretary of its economy ministry, with no reference to any militant roles.

Israel's Defence Forces identified him as the economy minister and said he also worked in the manufacturing department of Hamas's armed wing.

The IDF also said he had a "significant role" in directing efforts to seize control of humanitarian aid entering Gaza and that he was responsible for the distribution of fuel, gas and funds for "terrorist purposes".

In its own statement announcing his death, Hamas said the killing would not deter it from "performing our national duty towards our Palestinian people".

Lebanon has received emergency medical supplies today to equip its hospitals for possible war injuries.

Tensions in the region have spiralled in the past week after the killings of Hamas's political leader in Tehran and a senior Hezbollah commander in Beirut.

Iran and Hezbollah have vowed to retaliate against Israel for the killings, prompting concerns that violence could escalate into a full-blown regional war.

Hospitals in southern Lebanon, where most of the exchanges between Hezbollah and the Israeli military have taken place, have struggled to cope with wounded patients over the past 10 months.

Today, the World Health Organisation delivered 32 tonnes of medical supplies to Lebanon's health ministry, including at least 1,000 trauma kits.

"The goal is to get these supplies and medicines to various hospitals and to the health sector in Lebanon, especially in the places most exposed [to hostilities] so that we can be ready to deal with any emergency," said Lebanon's health minister Firass Abiad.

Israel's finance minister says that blocking humanitarian aid to Gaza might be "justified and moral" even if it causes two million civilians to die of hunger.

Bezalel Smotrich, a prominent leader of the nationalist-religious bloc within Benjamin Netanyahu's government, complained that international pressure meant Israel had "no choice" but to bring in aid.

Speaking at a news conference in Israel, he said the main factor extending the war was the aid sustaining Hamas.

"We can't, in the current global reality, manage a war," said Mr Gallant.

"Nobody will let us cause two million civilians to die of hunger, even though it might be justified and moral, until our hostages are returned. 

"Humanitarian in exchange for humanitarian is morally justified, but what can we do? We live today in a certain reality, we need international legitimacy for this war."

COMMENTS

  1. The dying art of conversation

    Speaking to machines. Sherry Turkle, professor of social studies of science and technology, warns that when we first "speak through machines, [we] forget how essential face-to-face conversation ...

  2. Internet kills communication

    Here's Namrata Motwani speaking on the topic "Internet kills communication". Speak UP 4.0 is a Speech Competition event organized by the Tachyons. It is an Op...

  3. How Smartphones Are Killing Conversation

    According to MIT sociologist Sherry Turkle, author of the new book Reclaiming Conversation, we lose our ability to have deeper, more spontaneous conversations with others, changing the nature of our social interactions in alarming ways. Sherry Turkle. Turkle has spent the last 20 years studying the impacts of technology on how we behave alone ...

  4. Has Technology Killed Face-To-Face Communication?

    While sending emails is efficient and fast, face-to-face communication drives productivity. In a recent survey, 67% of senior executives and managers said their organization's productivity would ...

  5. Does social media kill communication skills?

    Yes, they are fast, available at all hours and easy, but they should never take the place of verbal discussions. Listening to some people as they try to put together a complete sentence is uncomfortable ...

  6. Is the internet killing off language?

    The fastest growing 'new language' in the world is emoticons (faces) and emojis (images of objects, which hail from Japan), which are one of the biggest changes caused by digital communications ...

  7. Is Technology Killing Human Emotion?: How Computer-Mediated

    Technology is a vital part of day-to-day life for people all around the globe, but the effects of taking our communication online remain unclear, especially in terms of interpersonal communication.

  8. Combating Hate Speech Through Counterspeech

    Combating Hate Speech Through Counterspeech. Aug 9, 2019. Media, Democracy, & Public Discourse. Daniel Jones. Susan Benesch. Share To. From misogyny and homophobia, to xenophobia and racism, online hate speech has become a topic of greater concern as the Internet matures, particularly as its offline impacts become more widely known.

  9. Protecting Freedom of Expression Online

    Questions around freedom of expression are once again in the air. While concern around the Internet's role in the spread of disinformation and intolerance rises, so too do worries about how to maintain digital spaces for the free and open exchange of ideas. Within this context, countries have begun to re-think how they regulate online speech, including through mechanisms such as the ...

  10. What is Section 230? The internet free speech law before the Supreme

    You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created almost 30 years ago to protect internet platforms from ...

  11. PDF Is the Internet Killing Communication

    Well, frankly speaking, the internet offers an expedient method to communicate, but it still kills the transmission of messages, for the reason that the internet segregates one from another. For example, our feelings may not be fully transmitted through words and recipients may interpret them wrongly. Thus, the conveying of the message is unsuccessful.

  12. Technology, social media kill our communication skills

    Social media, which is known worldwide, has helped families and friends to stay connected, despite the fact that they live on different continents. Story continues below advertisement. Not even 10 years ago, communication with people from other countries or states, wasn't only complicated, but also extremely expensive.

  13. Hate Speech on Social Media: Global Comparisons

    Summary. Hate speech online has been linked to a global increase in violence toward minorities, including mass shootings, lynchings, and ethnic cleansing. Policies used to curb hate speech risk ...

  14. We need to talk: has digital killed the art of conversation?

    Today's younger generations have never known a life without the internet. Research by Ofcom into children's use of media shows that children are becoming more digitally independent at younger ages, with 55% of 5-15 year-olds using a mobile phone to access the internet. Whilst the report highlights that 55% of parents of children aged 5-15 ...

  15. The Rutherford Institute :: Digital Kill Switches: How Tyrannical

    Communications kill switches have become tyrannical tools of domination and oppression to stifle political dissent, shut down resistance, forestall election losses, reinforce military coups, and keep the populace isolated, disconnected and in the dark, literally and figuratively. ... The internet kill switch is just one piece of the government ...

  16. Internet kills communication by Shannon Grega on Prezi

    Benefits of Internet in Communication. 1. The act or process of communicating; fact of being communicated. 2. The imparting or interchange of thoughts, opinions, or information by speech, writing, or signs. 3. Something imparted, interchanged, or transmitted. 4.

  17. Supreme Court Poised to Reconsider Key Tenets of Online Speech

    David McCabe, who is based in Washington, has reported for five years on the policy debate over online speech. Jan. 19, 2023. For years, giant social networks like Facebook, Twitter and Instagram ...

  18. Perspective

    Internet culture often categorizes hate speech as "trolling," but the severity and viciousness of these comments has evolved into something much more sinister in recent years, said Whitney ...

  19. The Evolving Free-Speech Battle Between Social Media and the Government

    The judge wrote, in a preliminary injunction, "If the allegations made by plaintiffs are true, the present case arguably involves the most massive attack against free speech in United States ...

  20. AI struggles to recognize toxic speech on social media. Here's why

    In one study, for example, human annotators rarely reached agreement when they were asked to label tweets that contained words from a lexicon of hate speech. Only 5 percent of the tweets were acknowledged by a majority as hate speech, while only 1.3 percent received unanimous verdicts.

  21. Study Highlights Challenges in Detecting Violent Speech Aimed at Asian

    A research group is calling for internet and social media moderators to strengthen their detection and intervention protocols for violent speech. Their study of language detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. Left unchecked, threats of violence online can go unnoticed and turn into real-world attacks.

  22. Internet Essay Examples

    Pages: 4. Words: 1177. Rating: 4,8. Internet has been the fastest growing medium, as more and more people are becoming a part of the internet fraternity; it becomes more difficult to…. Internet Cyber Bullying Cyber Crime Cyber Security Virtual Reality ⏳ Social Issues. View full sample.

  23. Is Internet Language a Destroyer to Communication?

    Table 7 also shows that Internet language makes the communication funny and humorous. It helps to improve the communication speed, gives space to cultivate creativity, eases the process of learning language, and builds some bonding in the community. Table 6 Negative impact of Internet language. Full size table.

  24. Speech on Internet for Students and Children

    Speech for Students. Very good morning to all. Today, I am here to present a speech on the internet. Someone has rightly said that the world is a small place. With the advent of the internet, this saying seems realistic. The internet has really brought the world together and the distance between two persons is really not a distance today.

  25. Riots Break Out Across UK: What to Know

    Three children were killed, ... Britain and other democracies have found that policing the internet is legally murky terrain, with individual rights and free speech protections balanced against a ...

  26. Online hate analysts are calling for greater eSafety powers after study

    Online hate speech has ballooned in past year The lead author, Andre Oboler, who is also the Online Hate Prevention Institute's CEO, said hate speech targeting both groups was "up significantly on ...

  27. China's Proposed Digital ID System Stokes Fears of Overreach

    China's plan to introduce a nationwide digital identification system has been met with criticism of government overreach in a country that already closely monitors and censors speech.

  28. Middle East latest: Israel bracing for attack after Hamas leader killed

    Israel is bracing itself for a potential multiday attack from Iran and Hezbollah, officials have said, as fears grow over a wider escalation of conflict in the Middle East. Meanwhile, the UK and ...

  29. Nigeria's Tinubu Says Protests Seek to Undermine Government

    Tinubu, in his first public speech since the protests began on Aug. 1, called for a suspension of demonstrations and for those taking part to embrace dialog and not allow themselves to be used to ...

  30. FBI details shooter's search history before Trump assassination attempt

    The gunman who tried to kill former president Donald Trump conducted internet searches related to power plants, mass shooting events and the attempted assassination this year of Slovakia's prime ...