
Systematic Literature Review on the Spread of Health-related Misinformation on Social Media

Affiliations.

  • 1 Centre for Research on Health and Social Care, Department of Social and Political Science, Bocconi University, Italy. Electronic address: [email protected].
  • 2 London School of Hygiene and Tropical Medicine, United Kingdom.
  • 3 Centre for Research on Health and Social Care, Department of Social and Political Science, Bocconi University, Italy.
  • 4 Department of Social and Political Science, Bocconi University, Italy.
  • PMID: 31561111
  • PMCID: PMC7117034
  • DOI: 10.1016/j.socscimed.2019.112552

Contemporary commentators describe the current period as "an era of fake news" in which misinformation, generated intentionally or unintentionally, spreads rapidly. Although it affects all areas of life, it poses particular problems in the health arena, where it can delay or prevent effective care, in some cases threatening the lives of individuals. While examples of the rapid spread of misinformation date back to the earliest days of scientific medicine, the internet, by allowing instantaneous communication and powerful amplification, has brought about a quantum change. In democracies where ideas compete in the marketplace for attention, accurate scientific information, which may be difficult to comprehend and even dull, is easily crowded out by sensationalized news. To uncover the current evidence and better understand the mechanisms of misinformation spread, we report a systematic review of the nature and potential drivers of health-related misinformation. We searched PubMed, Cochrane, Web of Science, Scopus and Google databases to identify relevant methodological and empirical articles published between 2012 and 2018. A total of 57 articles were included for full-text analysis. Overall, we observe an increasing trend in published articles on health-related misinformation and the role of social media in its propagation. The most extensively studied topics involving misinformation relate to vaccination, Ebola and Zika virus, although others, such as nutrition, cancer, fluoridation of water and smoking, also featured. Studies adopted theoretical frameworks from psychology and network science, while co-citation analysis revealed potential for greater collaboration across fields. Most studies employed content analysis, social network analysis or experiments, drawing on disparate disciplinary paradigms.
Future research should examine the susceptibility of different sociodemographic groups to misinformation and seek to understand the role of belief systems in the intention to spread misinformation. Further interdisciplinary research is also warranted to identify effective and tailored interventions to counter the spread of health-related misinformation online.

Keywords: Fake news; Health; Misinformation; Social media.

Copyright © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.


Figures: PRISMA flow diagram; numbers of potentially eligible articles; topic categories; co-citation analysis (citation data extracted from Scopus).

Cited by

  • Rolls K, Massey D. Social media is a source of health-related misinformation. Evid Based Nurs. 2021 Apr;24(2):46. doi: 10.1136/ebnurs-2019-103222. Epub 2020 Feb 11. PMID: 32046968. No abstract available.
  • O'Connor C, Murphy M. Going viral: doctors must tackle fake news in the covid-19 pandemic. BMJ. 2020 Apr 24;369:m1587. doi: 10.1136/bmj.m1587. PMID: 32332066. No abstract available.

Similar articles

  • Suarez-Lledo V, Alvarez-Galvez J. Prevalence of Health Misinformation on Social Media: Systematic Review. J Med Internet Res. 2021 Jan 20;23(1):e17187. doi: 10.2196/17187. PMID: 33470931. Free PMC article.
  • Giotakos O. Fake news in the age of COVID-19: evolutional and psychobiological considerations. Psychiatriki. 2022 Sep 19;33(3):183-186. doi: 10.22365/jpsych.2022.087. Epub 2022 Jul 19. PMID: 35947862. English, Greek, Modern.
  • Yeung AWK, Tosevska A, Klager E, Eibensteiner F, Tsagkaris C, Parvanov ED, Nawaz FA, Völkl-Kernstock S, Schaden E, Kletecka-Pulker M, Willschke H, Atanasov AG. Medical and Health-Related Misinformation on Social Media: Bibliometric Study of the Scientific Literature. J Med Internet Res. 2022 Jan 25;24(1):e28152. doi: 10.2196/28152. PMID: 34951864. Free PMC article.
  • Loeb S, Taylor J, Borin JF, Mihalcea R, Perez-Rosas V, Byrne N, Chiang AL, Langford A. Fake News: Spread of Misinformation about Urological Conditions on Social Media. Eur Urol Focus. 2020 May 15;6(3):437-439. doi: 10.1016/j.euf.2019.11.011. Epub 2019 Dec 23. PMID: 31874796. Review.
  • Pennycook G, Rand DG. The Psychology of Fake News. Trends Cogn Sci. 2021 May;25(5):388-402. doi: 10.1016/j.tics.2021.02.007. Epub 2021 Mar 15. PMID: 33736957. Review.
  • Kaya Kaçar H, Kaçar ÖF, McCullough F. Nutrition Messaging by Healthcare Students: A Mixed-Methods Study Exploring Social Media Usage and Digital Competence. Nutrients. 2024 May 10;16(10):1440. doi: 10.3390/nu16101440. PMID: 38794678. Free PMC article.
  • Arcaro P, Nachira L, Pattavina F, Campo E, Mancini R, Pascucci D, Damiani G, Carducci B, Spadea A, Lanzone A, Bruno S, Laurenti P. Assessing the Impact of the COVID-19 Pandemic on Pregnant Women's Attitudes towards Childhood Vaccinations: A Cross-Sectional Study. Vaccines (Basel). 2024 Apr 29;12(5):473. doi: 10.3390/vaccines12050473. PMID: 38793724. Free PMC article.
  • Geiger S, Esser AJ, Marsall M, Muehlbauer T, Skoda EM, Teufel M, Bäuerle A. Association between eHealth literacy and health outcomes in German athletes using the GR-eHEALS questionnaire: a validation and outcome study. BMC Sports Sci Med Rehabil. 2024 May 24;16(1):117. doi: 10.1186/s13102-024-00902-9. PMID: 38790069. Free PMC article.
  • Kvalsvik F, Øgaard T, Jensen Ø. Environmental factors that impact the eating behavior of home-living older adults. Int J Nurs Stud Adv. 2021 Oct 12;3:100046. doi: 10.1016/j.ijnsa.2021.100046. eCollection 2021 Nov. PMID: 38746717. Free PMC article.
  • Vlaanderen F, Mughini-Gras L, Bourgonje C, van der Giessen J. Attitudes towards zoonotic disease risk vary across sociodemographic, communication and health-related factors: A general population survey on literacy about zoonoses in the Netherlands. One Health. 2024 Apr 8;18:100721. doi: 10.1016/j.onehlt.2024.100721. eCollection 2024 Jun. PMID: 38699437. Free PMC article.


Grants and funding

  • WT_/Wellcome Trust/United Kingdom


Health misinformation on social media: A literature review

  • School of Business

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

Health misinformation on social media is considered a major public concern. This study evaluates the current state of this issue by conducting a systematic literature review. Based on a stepwise literature search and selection procedure, we identified 21 articles relevant to the topic of health misinformation on social media. We find that health misinformation on social media is a new and emerging topic in multiple disciplines. One very important insight of this review is that most studies are theoretical and exploratory in nature; only a small number have solid theoretical foundations. Finally, we discuss the implications of the literature review for future research.

Original language: English
Title of host publication: PACIS 2019 Proceedings
Publisher: Association for Information Systems
Publication status: Published - Jul 2019
Conference: 23rd Pacific Asia Conference on Information Systems, PACIS 2019
Country/Territory: China
City: Xi'an
Period: 8 Jul 2019 - 12 Jul 2019

Scopus Subject Areas

  • Information Systems

User-Defined Keywords

  • Health misinformation
  • Literature analysis
  • Social media

Access to Document

  • https://aisel.aisnet.org/pacis2019/194/


T1 - Health misinformation on social media

T2 - 23rd Pacific Asia Conference on Information Systems, PACIS 2019

AU - Li, Yang Jun

AU - Cheung, Christy M K

AU - Shen, Xiao Liang

AU - Lee, Matthew K.O.

N1 - Publisher Copyright: © Proceedings of the 23rd Pacific Asia Conference on Information Systems: Secure ICT Platform for the 4th Industrial Revolution, PACIS 2019. Copyright: Copyright 2020 Elsevier B.V., All rights reserved.

PY - 2019/7

Y1 - 2019/7


KW - Health misinformation

KW - Literature analysis

KW - Rumors

KW - Social media

UR - http://www.scopus.com/inward/record.url?scp=85084429705&partnerID=8YFLogxK

M3 - Conference proceeding

AN - SCOPUS:85084429705

BT - PACIS 2019 Proceedings

PB - Association for Information Systems

Y2 - 8 July 2019 through 12 July 2019


  • Volume 24, Issue 2
  • Social media is a source of health-related misinformation

  • http://orcid.org/0000-0003-1807-6620 Kaye Rolls 1 ,
  • Debbie Massey 2
  • 1 Centre for Applied Nursing Research , Western Sydney University , Penrith South , New South Wales , Australia
  • 2 School of Health and Human Services , Southern Cross University , Bilinga , New South Wales , Australia
  • Correspondence to Dr Kaye Rolls, Centre for Applied Nursing Research, Western Sydney University, Penrith South 2170, New South Wales, Australia; kaye.rolls{at}westernsydney.edu.au

https://doi.org/10.1136/ebnurs-2019-103222


  • world wide web technology
  • primary care

Commentary on: Wang Y, McKee M, Torbica A, et al. Systematic review on the spread of health-related misinformation on social media. Soc Sci Med. 2019;240:112552. doi: 10.1016/j.socscimed.2019.112552. [Epub ahead of print 18 Sep 2019].

Implications for practice and research

When nurses and midwives encounter misinformation on social media, they should provide or direct individuals to sources of accurate information.

Cross-disciplinary research to understand factors that influence the uptake of health-related (mis)information is required.

Over the past 25 years, the Internet and social media have rapidly become ubiquitous in daily life, and despite improved access to information there are increasing concerns that these social channels are also spreading health-related false information or misinformation. 1 2

Of the 57 papers identified, the largest category of subjects linked to misinformation was communicable diseases (n=30), followed by a mixed group (n=10), non-communicable diseases (n=6), risk factors (n=6) and general health (n=5). Studies examining vaccine-related misinformation (n=8) were the most common individual topic. The most frequent research methods were observational or exploratory designs incorporating content analysis (n=38). Social network analysis and modelling were also used to understand the dynamics of spread; only seven experimental studies were undertaken.

Co-citation analysis revealed four distinct interdisciplinary clusters including infectious disease/vaccine and public health (largest cluster), social psychology and communications, general science and medicine, and medical internet and biomedical science. While there were fewer misleading posts than accurate ones, the former were more influential because they were frequently shared across smaller networks. Conspiracy theories and heightened emotions played a significant role in the propagation of misinformation across groups, with peers or close social connections playing an important role in supporting or hindering the spread of misinformation. In addition, where individuals lacked analytical thinking skills, they were more likely to ‘believe’ and spread misinformation.
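The co-citation analysis described above can be illustrated with a minimal sketch: for each citing paper's reference list, count how often pairs of references appear together. The paper IDs below are invented for illustration; the review itself worked from Scopus citation data and applied clustering to find the four fields, which this sketch omits.

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(papers):
    """Count how often each pair of references is cited together.

    `papers` maps each citing paper to the set of references it cites.
    Two references are co-cited once per paper that cites them both.
    """
    counts = Counter()
    for refs in papers.values():
        # Sort so each pair has one canonical (a, b) ordering.
        for pair in combinations(sorted(set(refs)), 2):
            counts[pair] += 1
    return counts

# Toy data: three citing papers with hypothetical reference lists.
papers = {
    "paper_1": {"vaccines_2015", "networks_2010", "psych_2012"},
    "paper_2": {"vaccines_2015", "networks_2010"},
    "paper_3": {"vaccines_2015", "psych_2012"},
}
counts = co_citation_counts(papers)
# ("networks_2010", "vaccines_2015") and ("psych_2012", "vaccines_2015")
# are each co-cited twice; ("networks_2010", "psych_2012") only once.
```

Clusters such as the four reported above would then be obtained by treating frequently co-cited references as strongly linked nodes in a network and running a community-detection step on top of these counts.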

This systematic review 1 revealed that a considerable amount of misinformation has been disseminated on social media in a relatively short time, creating significant potential for adverse health outcomes, especially in relation to vaccine-preventable diseases. 2

Health literacy is the ability of an individual to effectively evaluate and apply health-related information, while eHealth literacy extends this to include the use of online media to access information and health services. 3 Individuals who are exposed to this false information and have limited health literacy and poor analytical skills may be unable to effectively evaluate the accuracy of online information. 2 4 Coupled with a belief that some health behaviours are not supported by their social group, 5 these individuals may not make desirable decisions resulting in poorer health outcomes. 3

This is a significant concern for nurses and midwives because it directly affects their core roles as healthcare providers and patient educators, especially in relation to primary healthcare. Central to this role is helping individuals make more informed health-related decisions by evaluating and improving their health literacy. The effectiveness of any patient education programme is likely to be limited if it does not incorporate a health literacy evaluation 3 or address social group norms. This study highlights the necessity of a multilevel approach that includes facilitating (through reminders and removing barriers) and incentivising desired health behaviours, as well as countering misinformation via local social networks. 5


Twitter @kaye_rolls

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Commissioned; internally peer reviewed.


  • Perspective
  • Published: 05 June 2024

Misunderstanding the harms of online misinformation

  • Ceren Budak   ORCID: orcid.org/0000-0002-7767-3217 1 ,
  • Brendan Nyhan   ORCID: orcid.org/0000-0001-7497-1799 2 ,
  • David M. Rothschild   ORCID: orcid.org/0000-0002-7792-1989 3 ,
  • Emily Thorson   ORCID: orcid.org/0000-0002-6514-801X 4 &
  • Duncan J. Watts   ORCID: orcid.org/0000-0001-5005-4961 5  

Nature, volume 630, pages 45–53 (2024)


  • Communication

The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.
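The claim that consumption is concentrated "in the tails of the distribution" can be made concrete with a simple measure: the share of all exposure accounted for by the heaviest-consuming users. The sketch below is a hedged illustration with invented per-user counts, not the authors' method.

```python
def top_share(exposures, fraction=0.01):
    """Fraction of total exposure accounted for by the top `fraction` of users.

    `exposures` is a list of per-user exposure counts (e.g. untrustworthy
    articles viewed). Returns 0.0 when there is no exposure at all.
    """
    ordered = sorted(exposures, reverse=True)
    k = max(1, int(len(ordered) * fraction))  # always include at least one user
    total = sum(ordered)
    return sum(ordered[:k]) / total if total else 0.0

# Invented counts for ten users: one heavy consumer, a few light ones.
exposures = [90, 5, 5, 0, 0, 0, 0, 0, 0, 0]
share = top_share(exposures, fraction=0.1)  # top 10% = the single heaviest user
# share == 0.9: one user accounts for 90% of all exposure
```

A highly skewed result like this is what the authors mean by exposure being "concentrated among a narrow fringe": average exposure is low even though a small group consumes a great deal.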



Myers, S. L. How social media amplifies misinformation more than information. The New York Times , https://www.nytimes.com/2022/10/13/technology/misinformation-integrity-institute-report.html (13 October 2022).

Haidt, J. Why the past 10 years of American life have been uniquely stupid. The Atlantic , https://www.theatlantic.com/magazine/archive/2022/05/social-media-democracy-trust-babel/629369/ (11 April 2022).

Haidt, J. Yes, social media really is undermining democracy. The Atlantic , https://www.theatlantic.com/ideas/archive/2022/07/social-media-harm-facebook-meta-response/670975/ (28 July 2022).

Tufekci, Z. YouTube, the great radicalizer. The New York Times , https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html (10 March 2018).

Romer, P. A tax that could fix big tech. The New York Times , https://www.nytimes.com/2019/05/06/opinion/tax-facebook-google.html (6 May 2019).

Schnell, M. Clyburn blames polarization on the advent of social media. The Hill , https://thehill.com/homenews/sunday-talk-shows/580440-clyburn-says-polarization-is-at-its-worst-because-the-advent-of/ (7 November 2021).

Robert F. Kennedy Human Rights/AP-NORC Poll (AP/NORC, 2023).

Goeas, E. & Nienaber, B. Battleground Poll 65: Civility in Politics: Frustration Driven by Perception (Tarrance Group, 2019).

Murray, M. Poll: Nearly two-thirds of Americans say social media platforms are tearing us apart. NBC News , https://www.nbcnews.com/politics/meet-the-press/poll-nearly-two-thirds-americans-say-social-media-platforms-are-n1266773 (2021).

Auxier, B. 64% of Americans say social media have a mostly negative effect on the way things are going in the U.S. today. Pew Research Center (2020).

Koomey, J. G. et al. Sorry, wrong number: the use and misuse of numerical facts in analysis and media reporting of energy issues. Annu. Rev. Energy Env. 27 , 119–158 (2002).

Article   Google Scholar  

Gonon, F., Bezard, E. & Boraud, T. Misrepresentation of neuroscience data might give rise to misleading conclusions in the media: the case of attention deficit hyperactivity disorder. PLoS ONE 6 , e14618 (2011).

Article   ADS   CAS   PubMed   PubMed Central   Google Scholar  

Copenhaver, A., Mitrofan, O. & Ferguson, C. J. For video games, bad news is good news: news reporting of violent video game studies. Cyberpsychol. Behav. Soc. Netw. 20 , 735–739 (2017).

Article   PubMed   Google Scholar  

Bratton, L. et al. The association between exaggeration in health-related science news and academic press releases: a replication study. Wellcome Open Res. 4 , 148 (2019).

Article   PubMed   PubMed Central   Google Scholar  

Allcott, H., Braghieri, L., Eichmeyer, S. & Gentzkow, M. The welfare effects of social media. Am. Econ. Rev. 110 , 629–676 (2020).

Braghieri, L., Levy, R. & Makarin, A. Social media and mental health. Am. Econ. Rev. 112 , 3660–3693 (2022).

Guess, A. M., Barberá, P., Munzert, S. & Yang, J. The consequences of online partisan media. Proc. Natl Acad. Sci. USA 118 , e2013464118 (2021).

Article   CAS   PubMed   PubMed Central   Google Scholar  

Sabatini, F. & Sarracino, F. Online social networks and trust. Soc. Indic. Res. 142 , 229–260 (2019).

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R. & Hertwig, R. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nat. Hum. Behav. 4 , 1102–1109 (2020). This paper provides a review of possible harms from social media .

Lapowsky, I. The mainstream media melted down as fake news festered. Wired , https://www.wired.com/2016/12/2016-mainstream-media-melted-fake-news-festered/ (26 December 2016).

Lalani, F. & Li, C. Why So Much Harmful Content Has Proliferated Online—and What We Can Do about It Technical Report (World Economic Forum, 2020).

Stewart, E. America’s growing fake news problem, in one chart. Vox , https://www.vox.com/policy-and-politics/2020/12/22/22195488/fake-news-social-media-2020 (22 December 2020).

Sanchez, G. R., Middlemass, K. & Rodriguez, A. Misinformation Is Eroding the Public’s Confidence in Democracy (Brookings Institution, 2022).

Bond, S. False Information Is Everywhere. ‘Pre-bunking’ Tries to Head It off Early. NPR , https://www.npr.org/2022/10/28/1132021770/false-information-is-everywhere-pre-bunking-tries-to-head-it-off-ear (National Public Radio, 2022).

Tufekci, Z. Algorithmic harms beyond Facebook and google: emergent challenges of computational agency. Colo. Tech. Law J. 13 , 203 (2015).

Google Scholar  

Cohen, J. N. Exploring echo-systems: how algorithms shape immersive media environments. J. Media Lit. Educ. 10 , 139–151 (2018).

Shin, J. & Valente, T. Algorithms and health misinformation: a case study of vaccine books on Amazon. J. Health Commun. 25 , 394–401 (2020).

Ceylan, G., Anderson, I. A. & Wood, W. Sharing of misinformation is habitual, not just lazy or biased. Proc. Natl Acad. Sci. USA 120 , e2216614120 (2023).

Pauwels, L., Brion, F. & De Ruyver, B. Explaining and Understanding the Role of Exposure to New Social Media on Violent Extremism. an Integrative Quantitative and Qualitative Approach (Belgian Science Policy, 2014).

McHugh, B. C., Wisniewski, P., Rosson, M. B. & Carroll, J. M. When social media traumatizes teens: the roles of online risk exposure, coping, and post-traumatic stress. Internet Res. 28 , 1169–1188 (2018).

Soral, W., Liu, J. & Bilewicz, M. Media of contempt: social media consumption predicts normative acceptance of anti-Muslim hate speech and Islamo-prejudice. Int. J. Conf. Violence 14 , 1–13 (2020).

Many believe misinformation is increasing extreme political views and behaviors. AP-NORC https://apnorc.org/projects/many-believe-misinformation-is-increasing-extreme-political-views-an (2022).

Fandos, N., Kang, C. & Isaac, M. Tech executives are contrite about election meddling, but make few promises on Capitol Hill. The New York Times , https://www.nytimes.com/2017/10/31/us/politics/facebook-twitter-google-hearings-congress.html (31 October 2017).

Eady, G., Paskhalis, T., Zilinsky, J., Bonneau, R., Nagler, J. & Tucker, J. A. Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior. Nat. Commun. 14 , 62 (2023). This paper shows that exposure to Russian misinformation on social media in 2016 was a small portion of people’s news diets and not associated with shifting attitudes.

Badawy, A., Addawood, A., Lerman, K. & Ferrara, E. Characterizing the 2016 Russian IRA influence campaign. Soc. Netw. Anal. Min. 9 , 31 (2019). This paper shows that exposure to and amplification of Russian misinformation on social media in 2016 was concentrated among Republicans (who would have been predisposed to support Donald Trump regardless) .

Hosseinmardi, H., Ghasemian, A., Clauset, A., Mobius, M., Rothschild, D. M. & Watts, D. J. Examining the consumption of radical content on YouTube. Proc. Natl Acad. Sci. USA 118 , e2101967118 (2021). This paper shows that extreme content is consumed on YouTube by a small portion of the population who tend to consume similar content elsewhere online and that consumption is largely driven by demand, not algorithms .

Chen, A. Y., Nyhan, B., Reifler, J., Robertson, R. E. & Wilson, C. Subscriptions and external links help drive resentful users to alternative and extremist YouTube channels. Sci. Adv. 9 , eadd8080 (2023). This paper shows that people who consume extremist content on YouTube have highly resentful attitudes and typically find the content through subscriptions and external links, not algorithmic recommendations to non-subscribers .

Munger, K. & Phillips, J. Right-wing YouTube: a supply and demand perspective. Int. J. Press Polit. 27 , 186–219 (2022).

Lasser, J., Aroyehun, S. T., Simchon, A., Carrella, F., Garcia, D. & Lewandowsky, S. Social media sharing of low-quality news sources by political elites. PNAS Nexus 1 , pgac186 (2022).

Muddiman, A., Budak, C., Murray, C., Kim, Y. & Stroud, N. J. Indexing theory during an emerging health crisis: how U.S. TV news indexed elite perspectives and amplified COVID-19 misinformation. Ann. Inte. Commun. Assoc. 46 , 174–204 (2022). This paper shows how mainstream media also spreads misinformation through amplification of misleading statements from elites .

Pereira, F. B. et al. Detecting misinformation: identifying false news spread by political leaders in the Global South. Preprint at OSF , https://doi.org/10.31235/osf.io/hu4qr (2022).

Horwitz, J. & Seetharaman, D. Facebook executives shut down efforts to make the site less divisive. Wall Street Journal , https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499 (26 May 2020).

Hosseinmardi, H., Ghasemian, A., Rivera-Lanas, M., Horta Ribeiro, M., West, R. & Watts, D. J. Causally estimating the effect of YouTube’s recommender system using counterfactual bots. Proc. Natl Acad. Sci. USA 121 , e2313377121 (2024).

Article   CAS   PubMed   Google Scholar  

Nyhan, B. et al. Like-minded sources on facebook are prevalent but not polarizing. Nature 620 , 137–144 (2023).

Guess, A. M. et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381 , 398–404 (2023). This paper shows that algorithms supply less untrustworthy content than reverse chronological feeds .

Asimovic, N., Nagler, J., Bonneau, R. & Tucker, J. A. Testing the effects of Facebook usage in an ethnically polarized setting. Proc. Natl Acad. Sci. USA 118 , e2022819118 (2021).

Allen, J., Mobius, M., Rothschild, D. M. & Watts, D. J. Research note: Examining potential bias in large-scale censored data. Harv. Kennedy Sch. Misinformation Rev. 2 , https://doi.org/10.37016/mr-2020-74 (2021). This paper shows that engagement metrics such as clicks and shares that are regularly used in popular and academic research do not take into account the fact that fake news is clicked and shared at a higher rate relative to exposure and viewing than non-fake news .

Scheuerman, M. K., Jiang, J. A., Fiesler, C. & Brubaker, J. R. A framework of severity for harmful content online. Proc. ACM Hum. Comput. Interact. 5 , 1–33 (2021).

Vosoughi, S., Roy, D. & Aral, S. The spread of true and false news online. Science 359 , 1146–1151 (2018).

Roy, D. Happy to see the extensive coverage of our science paper on spread of true and false news online, but over-interpretations of the scope of our study prompted me to diagram actual scope (caution, not to scale!). Twitter , https://twitter.com/dkroy/status/974251282071474177 (15 March 2018).

Greenemeier, L. You can’t handle the truth—at least on Twitter. Scientific American , https://www.scientificamerican.com/article/you-cant-handle-the-truth-at-least-on-twitter/ (8 March 2018).

Frankel, S. Deceptively edited video of Biden proliferates on social media. The New York Times , https://www.nytimes.com/2020/11/02/technology/biden-video-edited.html (2 November 2020).

Pu, J. et al. Deepfake videos in the wild: analysis and detection. In Proc. Web Conference 2021 981–992 (International World Wide Web Conference Committee, 2021).

Widely Viewed Content Report: What People See on Facebook: Q1 2023 Report (Facebook, 2023).

Mayer, J. How Russia helped swing the election for Trump. The New Yorker , https://www.newyorker.com/magazine/2018/10/01/how-russia-helped-to-swing-the-election-for-trump (24 September 2018).

Jamieson, K. H. Cyberwar: How Russian Hackers and Trolls Helped Elect A President: What We Don’t, Can’t, and Do Know (Oxford Univ. Press, 2020).

Solon, O. & Siddiqui, S. Russia-backed Facebook posts ‘reached 126m Americans’ during US election. The Guardian , https://www.theguardian.com/technology/2017/oct/30/facebook-russia-fake-accounts-126-million (30 October 2017).

Watts, D. J. & Rothschild, D. M. Don’t blame the election on fake news. Blame it on the media. Columbia J. Rev. 5 , https://www.cjr.org/analysis/fake-news-media-election-trump.php (2017). This paper explores how seemingly large exposure levels to problematic content actually represent a small proportion of total news exposure .

Jie, Y. Frequency or total number? A comparison of different presentation formats on risk perception during COVID-19. Judgm. Decis. Mak. 17 , 215–236 (2022).

Reyna, V. F. & Brainerd, C. J. Numeracy, ratio bias, and denominator neglect in judgments of risk and probability. Learn. Individ. Differ. 18 , 89–107 (2008). This paper details research into how salient numbers can lead to confusion in judgements of risk and probability, such as denominator neglect in which people fixate on a large numerator and do not consider the appropriate denominator .

Jones, J. Americans: much misinformation, bias, inaccuracy in news. Gallup , https://news.gallup.com/opinion/gallup/235796/americans-misinformation-bias-inaccuracy-news.aspx (2018).

Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B. & Lazer, D. Fake news on Twitter during the 2016 US presidential election. Science 363 , 374–378 (2019).

Guess, A. M., Nyhan, B. & Reifler, J. Exposure to untrustworthy websites in the 2016 US election. Nat. Hum. Behav. 4 , 472–480 (2020). This paper shows untrustworthy news exposure was relatively rare in US citizens’ web browsing in 2016 .

Altay, S., Nielsen, R. K. & Fletcher, R. Quantifying the “infodemic”: people turned to trustworthy news outlets during the 2020 coronavirus pandemic. J. Quant. Descr. Digit. Media 2 , 1–30 (2022).

Allen, J., Howland, B., Mobius, M., Rothschild, D. & Watts, D. J. Evaluating the fake news problem at the scale of the information ecosystem. Sci. Adv. 6 , eaay3539 (2020). This paper shows that exposure to fake news is a vanishingly small part of people’s overall news diets when you take television into account .

Guess, A. M., Nyhan, B., O’Keeffe, Z. & Reifler, J. The sources and correlates of exposure to vaccine-related (mis)information online. Vaccine 38 , 7799–7805 (2020). This paper shows how a small portion of the population accounts for the vast majority of exposure to vaccine-sceptical content.

Chong, D. & Druckman, J. N. Framing public opinion in competitive democracies. Am. Polit. Sci. Rev. 101 , 637–655 (2007).

Arendt, F. Toward a dose-response account of media priming. Commun. Res. 42 , 1089–1115 (2015). This paper shows that people may need repeated exposure to information for it to affect their attitudes .

Arceneaux, K., Johnson, M. & Murphy, C. Polarized political communication, oppositional media hostility, and selective exposure. J. Polit. 74 , 174–186 (2012).

Feldman, L. & Hart, P. Broadening exposure to climate change news? How framing and political orientation interact to influence selective exposure. J. Commun. 68 , 503–524 (2018).

Druckman, J. N. Political preference formation: competition, deliberation, and the (ir)relevance of framing effects. Am. Polit. Sci. Rev. 98 , 671–686 (2004).

Bakshy, E., Messing, S. & Adamic, L. A. Exposure to ideologically diverse news and opinion on Facebook. Science 348 , 1130–1132 (2015).

Bozarth, L., Saraf, A. & Budak, C. Higher ground? How groundtruth labeling impacts our understanding of fake news about the 2016 U.S. presidential nominees. In Proc. International AAAI Conference on Web and Social Media Vol. 14, 48–59 (Association for the Advancement of Artificial Intelligence, 2020).

Gerber, A. S., Gimpel, J. G., Green, D. P. & Shaw, D. R. How large and long-lasting are the persuasive effects of televised campaign ads? Results from a randomized field experiment. Am. Polit. Sci. Rev. 105 , 135–150 (2011). This paper shows that the effect of news decays rapidly; news needs repeated exposure for long-term impact .

Hill, S. J., Lo, J., Vavreck, L. & Zaller, J. How quickly we forget: the duration of persuasion effects from mass communication. Polit. Commun. 30 , 521–547 (2013). This paper shows that the effect of persuasive advertising decays rapidly, necessitating repeated exposure for lasting effect .

Larsen, M. V. & Olsen, A. L. Reducing bias in citizens’ perception of crime rates: evidence from a field experiment on burglary prevalence. J. Polit. 82 , 747–752 (2020).

Roose, K. What if Facebook is the real ‘silent majority’? The New York Times , https://www.nytimes.com/2020/08/28/us/elections/what-if-facebook-is-the-real-silent-majority.html (27 August 2020).

Breland, A. A new report shows how Trump keeps buying Facebook ads. Mother Jones , https://www.motherjones.com/politics/2021/07/real-facebook-oversight-board/ (28 July 2021).

Marchal, N., Kollanyi, B., Neudert, L.-M. & Howard, P. N. Junk News during the EU Parliamentary Elections: Lessons from A Seven-language Study of Twitter and Facebook (Univ. Oxford, 2019).

Ellison, N. B., Trieu, P., Schoenebeck, S., Brewer, R. & Israni, A. Why we don’t click: interrogating the relationship between viewing and clicking in social media contexts by exploring the “non-click”. J. Comput. Mediat. Commun. 25 , 402–426 (2020).

Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D. & Rand, D. G. Shifting attention to accuracy can reduce misinformation online. Nature 592 , 590–595 (2021).

Ghezae, I. et al. Partisans neither expect nor receive reputational rewards for sharing falsehoods over truth online. Open Science Framework https://osf.io/5jwgd/ (2023).

Guess, A. M. et al. Reshares on social media amplify political news but do not detectably affect beliefs or opinions. Science 381 , 404–408 (2023).

Godel, W. et al. Moderating with the mob: evaluating the efficacy of real-time crowdsourced fact-checking. J. Online Trust Saf. 1 , https://doi.org/10.54501/jots.v1i1.15 (2021).

Rogers, K. Facebook’s algorithm is broken. We collected some spicy suggestions on how to fix it. FiveThirtyEight , https://fivethirtyeight.com/features/facebooks-algorithm-is-broken-we-collected-some-spicy-suggestions-on-how-to-fix-it/ (16 November 2021).

Roose, K. The making of a YouTube radical. The New York Times , https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html (8 June 2019).

Eslami, M. et al. First I “like” it, then I hide it: folk theories of social feeds. In Proc. 2016 CHI Conference on Human Factors in Computing Systems 2371–2382 (Association for Computing Machinery, 2016).

Silva, D. E., Chen, C. & Zhu, Y. Facets of algorithmic literacy: information, experience, and individual factors predict attitudes toward algorithmic systems. New Media Soc. https://doi.org/10.1177/14614448221098042 (2022).

Eckles, D. Algorithmic Transparency and Assessing Effects of Algorithmic Ranking. Testimony before the Senate Subcommittee on Communications, Media, and Broadband , https://www.commerce.senate.gov/services/files/62102355-DC26-4909-BF90-8FB068145F18 (U.S. Senate Committee on Commerce, Science, and Transportation, 2021).

Kantrowitz, A. Facebook removed the news feed algorithm in an experiment. Then it gave up. OneZero , https://onezero.medium.com/facebook-removed-the-news-feed-algorithm-in-an-experiment-then-it-gave-up-25c8cb0a35a3 (25 October 2021).

Ribeiro, M. H., Hosseinmardi, H., West, R. & Watts, D. J. Deplatforming did not decrease Parler users’ activity on fringe social media. PNAS Nexus 2 , pgad035 (2023). This paper shows that shutting down Parler just displaced user activity to other fringe social media websites.

Alfano, M., Fard, A. E., Carter, J. A., Clutton, P. & Klein, C. Technologically scaffolded atypical cognition: the case of YouTube’s recommender system. Synthese 199 , 835–858 (2021).

Huszár, F. et al. Algorithmic amplification of politics on Twitter. Proc. Natl Acad. Sci. USA 119 , e2025334119 (2022).

Levy, R. Social media, news consumption, and polarization: evidence from a field experiment. Am. Econ. Rev. 111 , 831–870 (2021).

Cho, J., Ahmed, S., Hilbert, M., Liu, B. & Luu, J. Do search algorithms endanger democracy? An experimental investigation of algorithm effects on political polarization. J. Broadcast. Electron. Media 64 , 150–172 (2020).

Lewandowsky, S., Robertson, R. E. & DiResta, R. Challenges in understanding human-algorithm entanglement during online information consumption. Perspect. Psychol. Sci. https://doi.org/10.1177/17456916231180809 (2023).

Narayanan, A. Understanding Social Media Recommendation Algorithms (Knight First Amendment Institute at Columbia University, 2023).

Finkel, E. J. et al. Political sectarianism in America. Science 370 , 533–536 (2020).

Auxier, B. & Anderson, M. Social Media Use in 2021 (Pew Research Center, 2021).

Frimer, J. A. et al. Incivility is rising among American politicians on Twitter. Soc. Psychol. Personal. Sci. 14 , 259–269 (2023).

Broderick, R. & Darmanin, J. The “yellow vest” riots in France are what happens when Facebook gets involved with local news. Buzzfeed News , https://www.buzzfeednews.com/article/ryanhatesthis/france-paris-yellow-jackets-facebook (2018).

Salzberg, S. De-platform the disinformation dozen. Forbes , https://www.forbes.com/sites/stevensalzberg/2021/07/19/de-platform-the-disinformation-dozen/ (2021).

Karell, D., Linke, A., Holland, E. & Hendrickson, E. “Born for a storm”: hard-right social media and civil unrest. Am. Soc. Rev. 88 , 322–349 (2023).

Smith, N. & Graham, T. Mapping the anti-vaccination movement on Facebook. Inf. Commun. Soc. 22 , 1310–1327 (2019).

Brady, W. J., McLoughlin, K., Doan, T. N. & Crockett, M. J. How social learning amplifies moral outrage expression in online social networks. Sci. Adv. 7 , eabe5641 (2021).

Suhay, E., Bello-Pardo, E. & Maurer, B. The polarizing effects of online partisan criticism: evidence from two experiments. Int. J. Press Polit. 23 , 95–115 (2018).

Arugute, N., Calvo, E. & Ventura, T. Network activated frames: content sharing and perceived polarization in social media. J. Commun. 73 , 14–24 (2023).

Nordbrandt, M. Affective polarization in the digital age: testing the direction of the relationship between social media and users’ feelings for out-group parties. New Media Soc. 25 , 3392–3411 (2023). This paper shows that affective polarization predicts media use, not the other way around .

AFP. Street protests, a French tradition par excellence. The Local https://www.thelocal.fr/20181205/revolutionary-tradition-the-story-behind-frances-street-protests (2018).

Spier, R. E. Perception of risk of vaccine adverse events: a historical perspective. Vaccine 20 , S78–S84 (2001). This article documents the history of untrustworthy information about vaccines, which long predates social media .

Bryant, L. V. The YouTube algorithm and the alt-right filter bubble. Open Inf. Sci. 4 , 85–90 (2020).

Sismeiro, C. & Mahmood, A. Competitive vs. complementary effects in online social networks and news consumption: a natural experiment. Manage. Sci. 64 , 5014–5037 (2018).

Fergusson, L. & Molina, C. Facebook Causes Protests Documento CEDE No. 41 , https://doi.org/10.2139/ssrn.3553514 (2019).

Lu, Y., Wu, J., Tan, Y. & Chen, J. Microblogging replies and opinion polarization: a natural experiment. MIS Q. 46 , 1901–1936 (2022).

Porter, E. & Wood, T. J. The global effectiveness of fact-checking: evidence from simultaneous experiments in Argentina, Nigeria, South Africa, and the United Kingdom. Proc. Natl Acad. Sci. USA 118 , e2104235118 (2021).

Arechar, A. A. et al. Understanding and combatting misinformation across 16 countries on six continents. Nat. Hum. Behav. 7 , 1502–1513 (2023).

Blair, R. A. et al. Interventions to Counter Misinformation: Lessons from the Global North and Applications to the Global South (USAID Development Experience Clearinghouse, 2023).

Haque, M. M. et al. Combating misinformation in Bangladesh: roles and responsibilities as perceived by journalists, fact-checkers, and users. Proc. ACM Hum. Comput. Interact. 4 , 1–32 (2020).

Humprecht, E., Esser, F. & Van Aelst, P. Resilience to online disinformation: a framework for cross-national comparative research. Int. J. Press Polit. 25 , 493–516 (2020).

Gillum, J. & Elliott, J. Sheryl Sandberg and top Facebook execs silenced an enemy of Turkey to prevent a hit to the company’s business. ProPublica , https://www.propublica.org/article/sheryl-sandberg-and-top-facebook-execs-silenced-an-enemy-of-turkey-to-prevent-a-hit-to-their-business (24 February 2021).

Nord, M. et al. Democracy Report 2024: Democracy Winning and Losing at the Ballot V-Dem Report (Univ. Gothenburg V-Dem Institute, 2024).

Alba, D. How Duterte used Facebook to fuel the Philippine drug war. Buzzfeed , https://www.buzzfeednews.com/article/daveyalba/facebook-philippines-dutertes-drug-war (4 September 2018).

Zakrzewski, C., De Vynck, G., Masih, N. & Mahtani, S. How Facebook neglected the rest of the world, fueling hate speech and violence in India. Washington Post , https://www.washingtonpost.com/technology/2021/10/24/india-facebook-misinformation-hate-speech/ (24 October 2021).

Simonite, T. Facebook is everywhere; its moderation is nowhere close. Wired , https://www.wired.com/story/facebooks-global-reach-exceeds-linguistic-grasp/ (21 October 2021).

Cruz, J. C. B. & Cheng, C. Establishing baselines for text classification in low-resource languages. Preprint at https://arxiv.org/abs/2005.02068 (2020). This paper shows one of the challenges that makes content moderation costlier in less resourced countries .

Müller, K. & Schwarz, C. Fanning the flames of hate: social media and hate crime. J. Eur. Econ. Assoc. 19 , 2131–2167 (2021).

Bursztyn, L., Egorov, G., Enikolopov, R. & Petrova, M. Social Media and Xenophobia: Evidence from Russia (National Bureau of Economic Research, 2019).

Lewandowsky, S., Jetter, M. & Ecker, U. K. H. Using the President’s tweets to understand political diversion in the age of social media. Nat. Commun. 11 , 5764 (2020).

Bursztyn, L., Rao, A., Roth, C. P. & Yanagizawa-Drott, D. H. Misinformation During a Pandemic (National Bureau of Economic Research, 2020).

Motta, M. & Stecula, D. Quantifying the effect of Wakefield et al. (1998) on skepticism about MMR vaccine safety in the US. PLoS ONE 16 , e0256395 (2021).

Sanderson, Z., Brown, M. A., Bonneau, R., Nagler, J. & Tucker, J. A. Twitter flagged Donald Trump’s tweets with election misinformation: they continued to spread both on and off the platform. Harv. Kennedy Sch. Misinformation Rev. 2 , https://doi.org/10.37016/mr-2020-77 (2021).

Anhalt-Depies, C., Stenglein, J. L., Zuckerberg, B., Townsend, P. A. & Rissman, A. R. Tradeoffs and tools for data quality, privacy, transparency, and trust in citizen science. Biol. Conserv. 238 , 108195 (2019).

Gerber, N., Gerber, P. & Volkamer, M. Explaining the privacy paradox: a systematic review of literature investigating privacy attitude and behavior. Comput. Secur. 77 , 226–261 (2018). This paper explores the trade-offs between privacy and research .

Isaak, J. & Hanna, M. J. User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer 51 , 56–59 (2018).

Vogus, C. Independent Researcher Access to Social Media Data: Comparing Legislative Proposals (Center for Democracy and Technology, 2022).

Xie, Y. “Undemocracy”: inequalities in science. Science 344 , 809–810 (2014).

Nielsen, M. W. & Andersen, J. P. Global citation inequality is on the rise. Proc. Natl Acad. Sci. USA 118 , e2012208118 (2021).

King, D. A. The scientific impact of nations. Nature 430 , 311–316 (2004).

Zaugg, I. A., Hossain, A. & Molloy, B. Digitally-disadvantaged languages. Internet Policy Rev. 11 , 1–11 (2022).

Zaugg, I. A. in Digital Inequalities in the Global South (eds Ragnedda, M. & Gladkova, A.) 247–267 (Springer, 2020).

Sablosky, J. Dangerous organizations: Facebook’s content moderation decisions and ethnic visibility in Myanmar. Media Cult. Soc. 43 , 1017–1042 (2021). This paper highlights the challenges of content moderation in the Global South .

Warofka, A. An independent assessment of the human rights impact of Facebook in Myanmar. Facebook Newsroom , https://about.fb.com/news/2018/11/myanmar-hria/ (2018).

Fick, M. & Dave, P. Facebook’s flood of languages leave it struggling to monitor content. Reuters , https://www.reuters.com/article/idUSKCN1RZ0DL/ (23 April 2019).

Newman, N. Executive Summary and Key Findings of the 2020 Report (Reuters Institute for the Study of Journalism, 2020).

Hilbert, M. The bad news is that the digital access divide is here to stay: domestically installed bandwidths among 172 countries for 1986–2014. Telecommun. Policy 40 , 567–581 (2016).

Traynor, I. Internet governance too US-centric, says European commission. The Guardian , https://www.theguardian.com/technology/2014/feb/12/internet-governance-us-european-commission (12 February 2014).

Pennycook, G., Cannon, T. D. & Rand, D. G. Prior exposure increases perceived accuracy of fake news. J. Exp. Psychol. Gen. 147 , 1865–1880 (2018).

Guess, A. M. et al. “Fake news” may have limited effects beyond increasing beliefs in false claims. Harv. Kennedy Sch. Misinformation Rev. 1 , https://doi.org/10.37016/mr-2020-004 (2020).

Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K. & Larson, H. J. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat. Hum. Behav. 5 , 337–348 (2021).

Lorenz-Spreen, P., Oswald, L., Lewandowsky, S. & Hertwig, R. Digital media and democracy: a systematic review of causal and correlational evidence worldwide. Nat. Hum. Behav. 7 , 74–101 (2023). This paper provides a review of evidence on social media effects .

Donato, K. M., Singh, L., Arab, A., Jacobs, E. & Post, D. Misinformation about COVID-19 and Venezuelan migration: trends in Twitter conversation during a pandemic. Harvard Data Sci. Rev. 4 , https://doi.org/10.1162/99608f92.a4d9a7c7 (2022).

Wieczner, J. Big lies vs. big lawsuits: why Dominion Voting is suing Fox News and a host of Trump allies. Fortune , https://fortune.com/longform/dominion-voting-lawsuits-fox-news-trump-allies-2020-election-libel-conspiracy-theories/ (2 April 2021).

Calma, J. Twitter just closed the book on academic research. The Verge https://www.theverge.com/2023/5/31/23739084/twitter-elon-musk-api-policy-chilling-academic-research (2023).

Edelson, L., Graef, I. & Lancieri, F. Access to Data and Algorithms: for an Effective DMA and DSA Implementation (Centre on Regulation in Europe, 2023).

Author information

Authors and affiliations.

University of Michigan School of Information, Ann Arbor, MI, USA

Ceren Budak

Department of Government, Dartmouth College, Hanover, NH, USA

Brendan Nyhan

Microsoft Research, New York, NY, USA

David M. Rothschild

Maxwell School of Citizenship and Public Affairs, Syracuse University, Syracuse, NY, USA

Emily Thorson

Department of Computer and Information Science, Annenberg School of Communication, and Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA, USA

Duncan J. Watts

Contributions

C.B., B.N., D.M.R., E.T. and D.J.W. wrote and revised the paper. D.M.R. collected the data and prepared Fig. 1 .

Corresponding author

Correspondence to David M. Rothschild .

Ethics declarations

Competing interests.

The authors declare no competing interests, but provide the following information in the interests of transparency and full disclosure. C.B. and D.J.W. previously worked for Microsoft Research and D.M.R. currently works for Microsoft Research. B.N. has received grant funding from Meta. B.N. and E.T. are participants in the US 2020 Facebook and Instagram Election Study as independent academic researchers. D.J.W. has received funding from Google Research. D.M.R. and D.J.W. both previously worked at Yahoo!.

Peer review

Peer review information.

Nature thanks Stephan Lewandowsky, David Rand, Emma Spiro and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article.

Budak, C., Nyhan, B., Rothschild, D.M. et al. Misunderstanding the harms of online misinformation. Nature 630 , 45–53 (2024). https://doi.org/10.1038/s41586-024-07417-w

Received : 13 October 2021

Accepted : 11 April 2024

Published : 05 June 2024

Issue Date : 06 June 2024

DOI : https://doi.org/10.1038/s41586-024-07417-w

Promoting a healthy lifestyle: exploring the role of social media and fitness applications in the context of social media addiction risk

Junfeng Liu, Promoting a healthy lifestyle: exploring the role of social media and fitness applications in the context of social media addiction risk, Health Education Research , Volume 39, Issue 3, June 2024, Pages 272–283, https://doi.org/10.1093/her/cyad047

The popularity of social networks makes them a legitimate channel for promoting a healthy lifestyle, which benefits not only individuals but also governments. This research paper aimed to examine the Keep fitness app integrated into WeChat, Weibo and QQ with respect to long-term improvements in health-related behaviors (physical activity, nutrition, health responsibility, spiritual growth, interpersonal relationships and stress management) and to assess the associated risk of increased social media addiction. Students from Lishui University in China ( N  = 300) participated in this study and were divided into control and experimental groups. The Healthy Lifestyle Behavior Scale and Social Media Disorder Scale were used as psychometric instruments. The Keep app was found to improve respondents’ scores on the parameters of physical activity, nutrition and health responsibility ( P  = 0.00). However, the level of dependence on social media did not change in either the control or the experimental group during the year of research ( P  ≥ 0.05). It is concluded that fitness apps can be an effective tool to promote healthy lifestyles among young people in China and other countries. The feasibility of government investment in fitness apps to promote healthy lifestyles is substantiated.

  • physical activity
  • addictive behavior
  • interpersonal relations
  • science of nutrition
  • stress management
  • social networks
  • social media
  • mobile applications
  • healthy lifestyle

Published on 25.1.2022 in Vol 24, No 1 (2022): January

Medical and Health-Related Misinformation on Social Media: Bibliometric Study of the Scientific Literature

Authors of this article:

Original Paper

  • Andy Wai Kan Yeung 1,2, PhD;
  • Anela Tosevska 2,3, PhD;
  • Elisabeth Klager 2, MSc;
  • Fabian Eibensteiner 2,4, MD;
  • Christos Tsagkaris 5, MD;
  • Emil D Parvanov 2,6, PhD;
  • Faisal A Nawaz 7, MBBSc;
  • Sabine Völkl-Kernstock 2, PhD;
  • Eva Schaden 2,8, MD;
  • Maria Kletecka-Pulker 2, PhD;
  • Harald Willschke 2,8, MD;
  • Atanas G Atanasov 2,9, PhD

1 Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong, China

2 Ludwig Boltzmann Institute for Digital Health and Patient Safety, Medical University of Vienna, Vienna, Austria

3 Department of Molecular, Cell and Developmental Biology, University of California Los Angeles, Los Angeles, CA, United States

4 Division of Pediatric Nephrology and Gastroenterology, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria

5 Faculty of Medicine, University of Crete, Heraklion, Greece

6 Department of Translational Stem Cell Biology, Medical University of Varna, Varna, Bulgaria

7 College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates

8 Department of Anaesthesia, Intensive Care Medicine and Pain Medicine, Medical University of Vienna, Vienna, Austria

9 Institute of Genetics and Animal Biotechnology of the Polish Academy of Sciences, Jastrzebiec, Poland

Corresponding Author:

Atanas G Atanasov, PhD

Ludwig Boltzmann Institute for Digital Health and Patient Safety

Medical University of Vienna

Spitalgasse 23

Vienna, 1090

Phone: 43 664 1929 852

Email: [email protected]

Background: Social media has been extensively used for the communication of health-related information and, consequently, for the potential spread of medical misinformation. Conventional systematic reviews have been published on this topic to identify original articles and to summarize their methodological approaches and themes. A bibliometric study could complement their findings, for instance, by evaluating the geographical distribution of the publications and determining whether they were well cited and disseminated in high-impact journals.

Objective: The aim of this study was to perform a bibliometric analysis of the current literature to discover the prevalent trends and topics related to medical misinformation on social media.

Methods: The Web of Science Core Collection electronic database was accessed to identify relevant papers with the following search string: ALL=(misinformati* OR “wrong informati*” OR disinformati* OR “misleading informati*” OR “fake news*”) AND ALL=(medic* OR illness* OR disease* OR health* OR pharma* OR drug* OR therap*) AND ALL=(“social media*” OR Facebook* OR Twitter* OR Instagram* OR YouTube* OR Weibo* OR Whatsapp* OR Reddit* OR TikTok* OR WeChat*). Full records were exported to a bibliometric software, VOSviewer, to link bibliographic information with citation data. Term and keyword maps were created to illustrate recurring terms and keywords.

Results: Based on an analysis of 529 papers on medical and health-related misinformation on social media, we found that the most popularly investigated social media platforms were Twitter (n=90), YouTube (n=67), and Facebook (n=57). Articles targeting these 3 platforms had higher citations per paper (>13.7) than articles covering other social media platforms (Instagram, Weibo, WhatsApp, Reddit, and WeChat; citations per paper <8.7). Moreover, social media platform–specific papers accounted for 44.1% (233/529) of all identified publications. Investigations on these platforms had different foci. Twitter-based research explored cyberchondria and hypochondriasis, YouTube-based research explored tobacco smoking, and Facebook-based research studied vaccine hesitancy related to autism. COVID-19 was a common topic investigated across all platforms. Overall, the United States contributed to half of all identified papers, and 80% of the top 10 most productive institutions were based in this country. The identified papers were mostly published in journals of the categories public environmental and occupational health, communication, health care sciences services, medical informatics, and medicine general internal, with the top journal being the Journal of Medical Internet Research.

Conclusions: There is a significant platform-specific topic preference in social media investigations of medical misinformation. Given the large population of internet users in China, it may reasonably be expected that Weibo, WeChat, and TikTok (and its Chinese version Douyin) will be more investigated in future studies. Currently, these platforms present research gaps; their usage and information dissemination warrant further evaluation. Future studies should also include social platforms targeting non-English users to provide a wider global perspective.

Introduction

Public health information has been traditionally distributed to the public with the use of printed media, television, or radio. With the rise of participatory web and social media [ 1 ] and particularly in the face of recent pandemics, such as the H1N1 influenza pandemic in 2009 and the COVID-19 pandemic [ 2 ], the internet plays a major role in information sharing. The general public no longer acts as a passive consumer but plays a critical role in the generation, filtering, and amplification of public health information [ 1 ]. Health care–related scientific discoveries are now often condensed into news pieces written in layman’s terms and disseminated to broad and nonexpert audiences via social media, which contributes to not only better visibility of important information, but also better communication between health care professionals and the community [ 3 ]. Another major benefit of social media for health care is the potential for patient empowerment by providing a platform where patients can get information about their medical condition, communicate with health care professionals, share their experiences, and support other individuals affected by the same condition [ 4 ].

While it provides numerous empowerment opportunities, the social media–based distribution of health-related information also carries great potential for miscommunication and misinformation [ 5 ]. While social media has increased and improved the dissemination of scientific results to the community, it has also increased the sensationalist language used to describe scientific findings [ 6 , 7 ]. Often, media articles may report research findings with misinterpretation and overstatement, which can lead to confusion, misinformation, and mistrust in scientific reporting [ 6 ]. Moreover, social media empowers pseudoexperts and nonexpert influencers to share opinions and false information in the area of health care [ 8 ]. Very often, important societal figures, such as celebrities, politicians, and activists, without any expert knowledge of a certain topic but with large influence, can take part in spreading health-related misinformation [ 8 ]. The need for social media platforms to moderate the information shared and increase expert consultation is increasingly evident and could be one way to reduce the spread of misinformation [ 9 ].

One of the most polarizing topics in recent years has been vaccination, following a scientific article from 1998 by Wakefield et al, which proposed a causative link between the measles, mumps, and rubella (MMR) vaccine and autism [ 3 , 10 ]. The study was later found to be flawed and fraudulent and was retracted [ 11 - 14 ]. Even though its findings have since been disproved, as numerous subsequent studies found no link between vaccines and autism, the study caused great damage to vaccine programs worldwide, with a considerable increase in the number of people rejecting vaccination in the past decades [ 9 ]. Another prominent illustrative example is the reappearance of measles in the United States [ 15 ]. In the United States, there was an immense surge in antivaccine Tweets around 2015 to 2016, closely following the 2014 to 2015 measles outbreak and the release of Wakefield’s antivaccine movie Vaxxed in 2016 [ 16 ]. This could be linked to the finding that antivaccine posts on Facebook were shared and liked more often than provaccine posts [ 17 ]. Similarly, individuals exposed to negative opinions on vaccination are more likely to further share those opinions than individuals exposed to positive or neutral opinions [ 18 ]. This is potentiated by the so-called “echo chamber effect,” whereby many social media users are exposed to curated content that aligns with their existing beliefs and reinforces the misinformation they receive [ 19 , 20 ].

As medical misinformation is an increasingly relevant topic of study, the amount of available literature is growing. Against this background, the aim of this study was to perform a bibliometric analysis of the current literature to discover the prevalent trends and topics related to medical misinformation on social media. Conventional systematic reviews have been published on this topic to identify original articles and summarize their methodological approaches and themes [ 7 , 21 ]. A bibliometric study could complement their findings, for instance, by identifying the most productive authors and institutions, evaluating the geographical distribution of the publications, revealing recurring journals disseminating such research findings, unveiling the most common keywords or concepts reported, and evaluating whether the publications were well cited and disseminated in journals with high impact factors. These data can serve as starting points to guide fellow researchers to pinpoint papers relevant to their studies, contact potential collaborators to conduct joint research, and find suitable journals for their work. At the same time, these data can help researchers identify gaps in the literature and underrepresented contributors to the field so that these gaps can be addressed. Since the most common social media platforms originated in the United States, it was hypothesized that the United States would have the highest contribution to this area of academic research. This is an important research question, as publication bias toward the United States might overshadow or fail to capture the wider spectrum of global developments and experiences regarding medical misinformation on social media.

Data Source and Search Strategy

A bibliometric analysis is a study that applies mathematical and statistical methods to books and other media of communication, such as academic publications [ 22 ]. Similar to a previous bibliometric study [ 23 ], this work was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 24 ]. The Web of Science (WoS) Core Collection database was accessed on January 13, 2021, via the following search string: ALL=(misinformati* OR “wrong informati*” OR disinformati* OR “misleading informati*” OR “fake news*”) AND ALL=(medic* OR illness* OR disease* OR health* OR pharma* OR drug* OR therap*) AND ALL=(“social media*” OR Facebook* OR Twitter* OR Instagram* OR YouTube* OR Weibo* OR Whatsapp* OR Reddit* OR TikTok* OR WeChat*). The PubMed database was similarly searched for papers mentioning these terms in their titles and abstracts. The search terms for misinformation and its common synonyms were adopted from 2 recent systematic reviews [ 7 , 25 ]. No additional filters were applied to restrict the search results, and the indicated search yielded 529 papers in WoS and 285 papers in PubMed. After merging the lists from both databases and removing duplicates, 529 papers remained. Since this was a total-scale analysis of the concerned literature [ 26 ], all resultant papers were included without exclusion ( Multimedia Appendix 1 ).
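The merging and deduplication step can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; the record fields (`doi`, `title`) and the matching rule are assumptions:

```python
from typing import Dict, Iterable, List

def merge_and_deduplicate(wos: Iterable[Dict], pubmed: Iterable[Dict]) -> List[Dict]:
    """Merge records from two databases, keeping one copy per paper.

    Matches primarily on DOI and falls back to a normalized title;
    real database exports would need field mapping and fuzzier matching.
    """
    seen, merged = set(), []
    for record in list(wos) + list(pubmed):
        key = record.get("doi") or record["title"].casefold().strip()
        if key not in seen:
            seen.add(key)
            merged.append(record)
    return merged

# Toy records: one paper appears in both databases under the same DOI,
# another appears twice with slightly different title casing.
wos = [{"doi": "10.1/a", "title": "Paper A"}, {"doi": None, "title": "Paper B"}]
pubmed = [{"doi": "10.1/a", "title": "Paper A"}, {"doi": None, "title": "paper b "}]
print(len(merge_and_deduplicate(wos, pubmed)))  # 2 unique papers
```

In this study the PubMed results were evidently a subset of the WoS results, since 529 papers remained after deduplication.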

The “Analyze” function of WoS was used to provide initial descriptive statistics regarding the bibliographic data. The numbers of social media platform–specific papers were counted. The approach applied to Facebook is presented here as an example of the evaluation strategy used. In particular, we additionally searched with the following search string: ALL=Facebook* NOT (Twitter* OR Instagram* OR YouTube* OR Weibo* OR Whatsapp* OR Reddit* OR TikTok* OR WeChat*). When the original search string and this new search string were combined with the Boolean operator “AND,” the resulting papers mentioned Facebook but none of the other referenced social media platforms.
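The exclusion logic of this combined query (papers mentioning one platform and none of the others) can be mirrored with simple set operations. The paper IDs and platform tags below are hypothetical:

```python
# Hypothetical papers, each tagged with the platforms it mentions.
papers = {
    1: {"Facebook"},
    2: {"Facebook", "Twitter"},
    3: {"YouTube"},
    4: {"Facebook", "Instagram"},
}

PLATFORMS = {"Facebook", "Twitter", "Instagram", "YouTube", "Weibo",
             "WhatsApp", "Reddit", "TikTok", "WeChat"}

def platform_specific_count(platform: str) -> int:
    """Count papers mentioning `platform` but no other listed platform,
    mirroring a 'Facebook* NOT (Twitter* OR ...)' style Boolean query."""
    others = PLATFORMS - {platform}
    return sum(1 for tags in papers.values()
               if platform in tags and not (tags & others))

print(platform_specific_count("Facebook"))  # 1 (papers 2 and 4 also mention others)
```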

Outcome Measures

We evaluated the publication and citation counts of contributors in terms of author, institution, country, and journal. We also computed the publication and citation counts of terms and keywords, and identified the top 10 most cited papers. The semantic content of the identified publications was analyzed in the following ways. Citations per paper (CPPs) were computed for terms occurring in the titles, abstracts, and keywords of the identified papers, and n-gram analysis was conducted to identify the most recurring metatext. These analyses aimed to answer the queries listed at the end of the Introduction. Further details are described below.
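Citations per paper (CPP), the normalized indicator used throughout the Results, is simply the citation total divided by the paper count; a minimal sketch with toy numbers:

```python
def citations_per_paper(citation_counts: list) -> float:
    """CPP = total citations / number of papers (0.0 for an empty set)."""
    if not citation_counts:
        return 0.0
    return sum(citation_counts) / len(citation_counts)

# Toy example: three papers cited 10, 20, and 0 times.
print(citations_per_paper([10, 20, 0]))  # 10.0
```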

Data Extraction and Main Analysis

The 529 identified papers were exported in full record with cited references to VOSviewer [ 27 , 28 ] for subsequent bibliometric analyses and visualizations. To visualize the results, a term map was created via VOSviewer to display publication and citation data for terms that appeared in the titles and abstracts of the analyzed papers. To avoid a heavily crowded figure, we visualized only terms that appeared in over 1% of the papers (ie, at least six papers) [ 26 ]. A keyword map was similarly produced with the same frequency threshold, displaying author keywords and keywords added by WoS (KeyWords Plus) altogether. VOSviewer performs text mining by part-of-speech tagging with the aid of Apache OpenNLP and a linguistic filter, and converts plural noun phrases into singular form [ 29 ]. Meanwhile, it constructs a map in the following 3 steps based on a co-occurrence matrix: (1) calculation of a similarity index based on association strength (also known as the proximity index or probabilistic affinity index), (2) application of the VOS mapping technique to the matrix, and (3) solution transformation to produce consistent results [ 27 ]. Besides the visualizations, the resultant data from the maps were checked, and the recurring items are presented in tabular format.
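Step (1), the association-strength similarity, can be illustrated on toy data. The sketch below uses the form s_ij = c_ij / (w_i * w_j), which is proportional to the association-strength measure; it is not VOSviewer's exact implementation:

```python
from collections import Counter
from itertools import combinations

# Toy documents: each is the set of noun-phrase terms it contains.
docs = [{"vaccine", "autism"}, {"vaccine", "twitter"}, {"vaccine", "autism"}]

occurrences = Counter(term for doc in docs for term in doc)
cooccurrences = Counter(frozenset(pair) for doc in docs
                        for pair in combinations(sorted(doc), 2))

def association_strength(t1: str, t2: str) -> float:
    """s_ij = c_ij / (w_i * w_j): the co-occurrence count of two terms
    normalized by the product of their individual occurrence counts."""
    return cooccurrences[frozenset((t1, t2))] / (occurrences[t1] * occurrences[t2])

print(round(association_strength("vaccine", "autism"), 3))  # 2 / (3 * 2) = 0.333
```

Unlike raw co-occurrence counts, this normalization prevents very frequent terms from dominating the map simply because they co-occur with everything.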

In addition, keyword maps were produced for subsets of papers that were specific to Twitter, YouTube, and Facebook. For these maps, keywords with at least two appearances were included.

Exploratory Analysis

Finally, an exploratory n-gram analysis was conducted with the online NGram Analyzer [ 30 ], which lists recurring n-gram metatexts. The abstracts of the publications were pasted into the program, and the recurring 5-grams (contiguous sequences of 5 words) were extracted. After manual checking, meaningful 5-grams with at least four appearances are reported in the Results.
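The 5-gram extraction can be sketched in a few lines. This is an illustration rather than the NGram Analyzer's implementation, and the tokenization (lowercased whitespace split) is an assumption:

```python
from collections import Counter

def recurring_ngrams(texts, n=5, min_count=4):
    """Return (n-gram, count) pairs appearing at least `min_count` times
    across the given texts, most frequent first."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return [(" ".join(gram), c) for gram, c in counts.most_common() if c >= min_count]

# Toy corpus: one recurring sentence plus one too short to yield a 5-gram.
abstracts = ["the spread of fake news is rapid"] * 4 + ["unrelated short text"]
print(recurring_ngrams(abstracts))
```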

Our search strategy identified a total of 529 scientific articles addressing medical misinformation on social media. The analysis of these papers revealed that the earliest papers on this subject could be traced back to 2010 and 2011, and the total publication and citation counts increased very rapidly, especially during the last 2 years ( Figure 1 ). Original articles accounted for the majority of the identified publications (n=393, 74.3%), followed by editorial materials (n=50, 9.5%). The article-to-review ratio was 12.7:1 (n=393 vs 31). Proceedings accounted for another 7.2% (n=38). Over 97% of the indexed papers were written in English. The most cited paper among the 529 was also the oldest paper; it involved content analysis of over 5000 relevant Tweets during the 2009 H1N1 outbreak [ 1 ]. Within a decade, it has already accumulated 589 citations.


The most productive author publishing in this subject area was Emily K Vraga from George Mason University (Virginia, USA). She started to publish on this topic in 2015 and accumulated a total of 13 papers, mostly with Leticia Bode and Melissa Tully. Leticia Bode and Joseph A Hill were joint second among the most productive researchers, with 9 papers each, followed by 27 authors with 7 papers each. The top 10 most productive institutions, countries, journals, and WoS categories in which the analyzed works were published are listed in Table 1 . The United States contributed to half (265/529, 50.1%) of the identified papers and was the home country of 80% of the top 10 most productive institutions. The identified papers were mostly published in journals belonging to the following categories: public environmental and occupational health, communication, health care sciences services, medical informatics, and medicine general internal.

Social media platform–specific papers accounted for 44.1% (n=233) of all 529 identified papers ( Table 2 ). The most popularly investigated social media were Twitter, YouTube, and Facebook. They also had higher CPPs than other social media.

Table 1. Top 10 most productive institutions, countries, journals, and Web of Science categories.

Variable | Publication count (N=529), n (%) | Citations per paper

Institution
Harvard University | 25 (4.7) | 13.2
University of Texas System | 20 (3.8) | 3.4
University of North Carolina | 14 (2.6) | 13.8
University of Pennsylvania | 14 (2.6) | 11.9
University of London | 13 (2.5) | 30.5
Johns Hopkins University | 12 (2.3) | 6.0
University of California System | 11 (2.1) | 1.7
University of Minnesota System | 11 (2.1) | 2.8
Pennsylvania Commonwealth System of Higher Education | 10 (1.9) | 3.6
University System of Sydney | 10 (1.9) | 32.1

Country
United States | 265 (50.1) | 12.2
United Kingdom | 53 (9.3) | 20.0
Italy | 35 (6.6) | 9.2
Canada | 33 (6.2) | 34.0
Spain | 30 (5.7) | 7.2
Australia | 27 (5.1) | 19.0
China | 27 (5.1) | 13.7
Turkey | 17 (3.2) | 5.1
Germany | 15 (2.8) | 27.9
India | 14 (2.6) | 2.8
Switzerland | 14 (2.6) | 14.9

Journal (impact factor)
Journal of Medical Internet Research (5.034) | 32 (6.0) | 14.1
American Journal of Public Health (6.464) | 14 (2.6) | 3.1
Health Communication (1.965) | 13 (2.5) | 9.2
Vaccine (3.143) | 13 (2.5) | 28.1
International Journal of Environmental Research and Public Health (2.468) | 11 (2.1) | 8.6
PLOS One (2.740) | 11 (2.1) | 60.1
Annals of Behavioral Medicine (4.475) a | 8 (1.5) | 0
Professional de la Informacion (N/A b) | 8 (1.5) | 8.6
Cureus (N/A) | 6 (1.1) | 26.7
Journal of Health Communication (1.596) | 6 (1.1) | 2.3

Web of Science category
Public environmental and occupational health | 95 (18.0) | 12.6
Communication | 71 (13.4) | 7.3
Health care sciences services | 50 (9.5) | 17.5
Medical informatics | 48 (9.1) | 17.4
Medicine general internal | 38 (7.2) | 17.2
Computer science information systems | 33 (6.2) | 4.1
Information science library science | 32 (6.0) | 5.5
Health policy services | 22 (4.2) | 13.6
Computer science theory methods | 21 (4.0) | 5.4
Immunology | 21 (4.0) | 23.6

a All 8 publications in Annals of Behavioral Medicine were meeting abstracts and received no citations.

b N/A: not applicable.

Table 2. Social media platform–specific papers.

Social media | Publication count, n | Citations per paper
Twitter | 90 | 17.0
YouTube | 67 | 13.7
Facebook | 57 | 15.3
WhatsApp | 6 | 4.0
Instagram | 6 | 8.7
Weibo | 4 | 7.5
Reddit | 2 | 2.5
WeChat | 1 | 3.0
TikTok | 0 | N/A a

a N/A: not applicable.

Figure 2 shows the terms extracted from the titles and abstracts of all 529 identified papers. COVID-19 (“covid” in the lower half, n=109, CPP=7.1) and vaccine (upper half, n=62, CPP=15.7) were the 2 major health issues identified. Mentioned COVID-19 derivatives included SARS-CoV-2 (“sars cov,” n=9, CPP=11.0), coronavirus (n=22, CPP=15.3), coronavirus disease (n=25, CPP=12.6), and coronavirus pandemic (n=6, CPP=2.3). Mentioned vaccine derivatives included vaccination (n=53, CPP=21.6), vaccination rate (n=7, CPP=5.7), vaccine hesitancy (n=14, CPP=13.8), vaccine misinformation (n=13, CPP=6.9), vaccine preventable disease (n=6, CPP=29.0), and vaccine safety (n=8, CPP=7.8). The top 20 terms with the highest CPPs are listed in Table 3 . Notable terms hinting at important issues discussed in the analyzed literature set were public perception, public concern, health authority, peer (related to peer-to-peer support), and policy maker ( Table 3 ).


Table 3. Top 20 terms with the highest citations per paper.

Term a | Publication count (N=529), n (%) | Citations per paper
Real time | 5 (0.9) | 160.2
Public perception | 7 (1.3) | 86.4
Credible source | 7 (1.3) | 84.9
Public concern | 11 (2.1) | 75.5
Health authority | 14 (2.6) | 56.5
Story | 24 (4.5) | 54.5
Peer | 11 (2.1) | 50.4
Adoption | 16 (3.0) | 49.3
Relevant video | 7 (1.3) | 48.1
Term | 34 (6.4) | 43.3
Sentiment | 19 (3.6) | 41.2
Illness | 6 (1.1) | 41.0
Zika virus | 6 (1.1) | 40.3
Emergency | 21 (4.0) | 38.7
Policy maker | 9 (1.7) | 38.6
Viewer | 7 (1.3) | 37.1
Misperception | 8 (1.5) | 36.5
Information source | 12 (2.3) | 36.0
Feeling | 6 (1.1) | 35.0
Potential risk | 7 (1.3) | 35.0

a Only terms that appeared in at least 1% of papers were considered.

A keyword map is shown in Figure 3 . It showed that several diseases were recurring themes of investigation, such as measles (n=9, CPP=7.7), Ebola (n=22, CPP=11.4), COVID-19 (n=87, CPP=6.4), and cardiovascular diseases (n=9, CPP=1.7). Table 4 presents the top 20 keywords with the highest CPPs and reveals that risk and safety were among them.


Table 4. Top 20 keywords with the highest citations per paper.

Keyword a | Publication count (N=529), n (%) | Citations per paper
Risk | 17 (3.2) | 51.6
Social network | 7 (1.3) | 50.7
Parents | 7 (1.3) | 41.1
Hesitancy | 8 (1.5) | 39.5
Coverage | 13 (2.5) | 37.9
Immunization | 13 (2.5) | 34.5
People | 7 (1.3) | 31.0
Web 2.0 | 14 (2.6) | 30.4
Knowledge | 13 (2.5) | 27.5
Medical information | 6 (1.1) | 27.5
Technology | 8 (1.5) | 27.3
Public-health | 8 (1.5) | 27.1
Attitudes | 7 (1.3) | 24.9
Vaccines | 19 (3.6) | 24.7
Videos | 11 (2.1) | 23.1
Safety | 7 (1.3) | 22.7
Care | 9 (1.7) | 20.9
Risk communication | 10 (1.9) | 20.0
China | 6 (1.1) | 19.5
Internet | 86 (16.3) | 18.6

a Only keywords that appeared in at least 1% of papers were considered.

Keyword maps generated for the publication set that investigated Twitter, YouTube, and Facebook are shown in Figure 4 A-C. The keyword maps reveal that Twitter research focused on anxiety related to online searching for disease and medical information (cyberchondria and hypochondriasis, cyan), vaccination and Zika virus (red), COVID-19 (blue), Ebola (yellow), cancer (green), and data analysis involving predictive modeling (purple). YouTube research focused on smoking and tobacco (purple), alternative medicine for various diseases, such as rheumatoid arthritis and prostate cancer (green), breast cancer (yellow), COVID-19 (red), and Ebola (blue). Finally, Facebook research focused on online health communities (yellow), vaccine hesitancy related to autism (blue), credibility of health information related to immunization and nutrition (red), cancer (purple), and COVID-19 (green).

The top 10 most cited papers are listed in Table 5 . Peer-to-peer support and the spread of misinformation were recurring themes, and Twitter, YouTube, and Facebook data were all investigated. The themes of these top 10 papers were consistent with the highly cited terms listed in Table 3 , covering topics such as peer-to-peer support, online videos, and public perception.

The exploratory n-gram analysis resulted in several meaningful 5-gram metatexts with at least four appearances as follows: 6 appearances, “as a source of information” and “the spread of fake news;” 5 appearances, “rumors stigma and conspiracy theories,” “the quality of health information,” and “content reliability and quality of;” 4 appearances, “health anxiety and health literacy,” “the relationship between message factors,” “intentions to trust and share,” “#PedsICU and coronavirus disease 2019,” “in low- and middle-income countries,” “actions for a framework for,” “interacted with perceived message importance,” and “verify and share the message.”


Table 5. Top 10 most cited papers.

Authors, year | Citations, n
Chew et al, 2010 [ ] | 613
Yaqub et al, 2014 [ ] | 256
Naslund et al, 2016 [ ] | 212
Madathil et al, 2015 [ ] | 195
Kamel Boulos et al, 2011 [ ] | 186
Betsch et al, 2012 [ ] | 168
Syed-Abdul et al, 2013 [ ] | 147
Depoux et al, 2020 [ ] | 136
Singh et al, 2012 [ ] | 123
Bode et al, 2015 [ ] | 121

General Discussion

Using bibliometric analysis, this study identified and quantitatively analyzed 529 papers on medical and health-related misinformation on social media, revealing the most popularly investigated social media platforms, prevailing research themes, most utilized journals, and most productive countries, institutions, and authors.

Findings Concerning the Western World and Its Prevalent Social Media Platforms

This bibliometric analysis of 529 scientific articles concerning medical and health-related misinformation on social media revealed that the most heavily investigated platforms were Twitter, YouTube, and Facebook. This could be related to the finding that most of the top 10 productive countries were from North America and Europe, where these social media platforms are dominant. The results also confirmed the hypothesis that the United States had the largest contribution to social media research. The total publication and citation counts increased very rapidly, especially during the last 2 years, consistent with the trends identified by previous systematic reviews on this topic [ 7 , 21 ]. On the other hand, this study found that original articles accounted for 74.3% of the analyzed literature set. This implied that one-fourth of the literature was not covered by the abovementioned systematic reviews, which might partly explain some differences in the results. For instance, this study found that Twitter was the most recurring social medium in the literature instead of YouTube, as reported by Wang et al [ 7 ]. The strength of the review by Suarez-Lledo and Alvarez-Galvez [ 21 ] was that it analyzed the prevalence of health misinformation posts (0%-98%) reported in the original articles. Meanwhile, Wang et al [ 7 ] categorized them into predefined theoretical frameworks, with the most prevalent ones being public health, health policy, and epidemiology (n=14); health informatics (n=8); communications studies (n=5); vaccines (n=4); cyberpsychology (n=3); and system sciences (n=3). Here, it was found that publications in immunology were on average more frequently cited than those in communication and computer science, whereas health care sciences and medical informatics papers were in between. This implies that disciplines with more publications do not necessarily attract more citations. This finding has 2 implications. First, quantity does not necessarily mean quality.
Second, field differences in citation rates observed in general [ 40 ] remained present even when the literature set was confined to a single focus on misinformation. Similar to the current findings, Wang et al [ 7 ] also found that the most popular topics were vaccination, Ebola, and Zika virus, with other less popular focus topics being nutrition, cancer, and smoking. In contrast, Wang et al [ 7 ] identified fluoridation of water as one of the recurring topics, whereas in this study, COVID-19 emerged as a strong research focus. Moreover, the keyword analysis performed in this work revealed that research on different social media platforms, such as Twitter, YouTube, and Facebook, focused on different topics. While, at present, we do not have an explanation for this interesting observation, we believe that the reasons different topics are studied on different social media platforms could be a relevant direction for future research. Such studies may elucidate whether this is due to different prevalences of specific content across the platforms or due to preferential interest from research teams focused on specific social media platforms or topics.

Findings Concerning China and Its Prevalent Social Media Platforms

Although some social media platforms are not available in China, China still made it into the top 10 list of the most productive countries ( Table 1 ). With a large population of internet users in China, it could reasonably be expected that Weibo and WeChat, which are popular social media platforms in China, will become more investigated in future studies. One potential barrier for non-Chinese researchers would be content translation, as the majority of the content is written in Chinese. In addition, the fast-growing short video platform TikTok (and its Chinese version Douyin) might also exert significant influence on the health information seeking behavior of internet users in the future. However, TikTok videos might be hard to archive, and video analysis tools might not be as well developed as text mining tools, which might hinder analysis by public health researchers. The same applies to the visual content posted on Instagram. The current findings suggest that sufficient research on misinformation disseminated through these platforms is missing from the literature and should be addressed in future research. Readers should be aware that the publication bias toward Europe and North America, especially the United States, means that the current body of knowledge might not reflect the wider spectrum of misinformation on global health issues, especially in other parts of the world with large online communities, such as Asia and South America.

The most productive author was found to be Emily K Vraga, who is based in the United States. Her studies focused on how to correct health misinformation (in other words, overturn subjects’ misperceptions) dispersed on social media, particularly Facebook and Twitter [ 39 , 41 - 43 ]. Though China was among the top 10 most productive countries, we found that only 2 of the top 50 most productive authors were based in China. They were King-Wa Fu (n=4) and Chung-Hong Chan (n=3) from the University of Hong Kong, and they focused solely on the Ebola virus [ 44 - 47 ]. This implies that, to grasp the research foci of China, readers need to refer to diverse works from multiple authors rather than the work of a few prominent authors. With the continued growth of netizens in China, we anticipate that more productive authors will be based in China in the future.

Elaboration on the Recurring Themes of the Literature

A very important role of social media is to provide peer-to-peer support, as investigated by some publications identified in this study (see Tables 3 and 5 ), for example, by forming online health communities and support groups, and by ensuring stakeholder access to the latest and most relevant scientific information and health interventions [ 32 ]. For instance, users could post supportive comments and advice on YouTube videos uploaded by individuals with mental health issues [ 48 ]. On the other hand, misinformation spread via social media (especially Twitter, see Figure 4 A) might lead to cyberchondria (cyberspace-related hypochondria; the unfounded escalation of concern about common symptoms based on information found on the internet), with a study revealing that unverified information might be more easily shared by internet users who trust online information and perceive information overload, and that women were more likely to experience cyberchondria [ 49 ]. Cyberchondria could be an important health concern, as a meta-analysis established a correlation between health anxiety and both online information seeking and cyberchondria [ 50 ], and another work revealed that it has 5 facets, including reassurance seeking and mistrust of medical professionals [ 51 ]. Being flooded by online misinformation would not alleviate the situation but may worsen it. A recent study found that government and professional videos containing solely factual information accounted for only 11% of all YouTube videos on COVID-19 and 10% of views, whereas 28% of the analyzed videos contained misinformation and had up to 62 million views [ 52 ]. In this context, the adequacy of funding and resources allocated by governmental bodies to online health literacy campaigns needs to be questioned.

Meanwhile, from the perspective of policy makers, the large amount of information on social media can be monitored and used to achieve efficient outcomes. As seen from the results, “health policy services” was among the most recurring journal categories in the analyzed literature set ( Table 1 ), and “policy maker” was one of the recurring terms with the highest CPPs ( Table 3 ). For instance, by analyzing tweets related to the COVID-19 pandemic, researchers could identify the top 4 concerns of Twitter users, namely the origin, sources, impact on various levels, and ways to reduce the infection risk [ 53 ]. When using similar approaches, it is crucial to keep these concerns anonymized at the individual level and to ensure that social or ethnic groups expressing specific concerns do not become targets of discrimination. Authorities could then focus on these concerns as they derive measures and disseminate information to the public to contain the pandemic and reduce fears within the community. In this regard, authorities could collaborate with scientific associations and provide incentives to civil society to address ignorance or misinformation on the detected concerns. Future research could compare relevant social media content following interventions to define the optimal strategies for tackling misinformation on social media.

As mentioned in the Introduction, the fraudulent study linking the MMR vaccine to autism still has a lingering influence on social media: despite its retraction for fraud, it is still posted on Facebook by antivaccine advocates [ 54 ]. Moreover, it was found that the content posted by Facebook users regarding vaccination has become increasingly polarized [ 20 ]. One study suggested that the use of angry language could promote the viral spread of messages, including the misinformation that vaccines cause autism [ 55 ], although that study did not investigate content exclusive to vaccines and only binarized words into positive and negative emotions. Summarizing the results from the n-gram analysis, netizens need to be aware of fake news, rumors, stigma, and conspiracy theories circulating on the internet. Content reliability and quality should be assessed, and information should be verified before sharing. One way to group authoritative or accurate health care information shared by experts on social media (eg, Twitter) is the use of a hashtag, so that others can search for it easily. One example, identified by the n-gram analysis, was #PedsICU, which promoted pediatric critical care content. Through such sensible collaboration, there may be a chance to mitigate misinformation.
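The kind of n-gram and hashtag frequency analysis referred to above can be sketched in a few lines of standard-library Python. This is a minimal illustration only; the tweets below are invented examples, not data from any study.

```python
from collections import Counter
from itertools import islice

# Hypothetical example tweets; a real analysis would use a harvested corpus.
tweets = [
    "great resources shared via #PedsICU this week",
    "new sepsis guideline discussed on #PedsICU",
    "#PedsICU journal club on ventilation strategies",
]

def ngrams(text, n=2):
    """Yield word n-grams (default bigrams) from a text."""
    words = text.lower().split()
    return zip(*(islice(words, i, None) for i in range(n)))

# Count bigrams across the corpus.
counts = Counter(gram for t in tweets for gram in ngrams(t))

# Count hashtag occurrences, the basis for spotting tags such as #PedsICU.
hashtag_freq = Counter(
    w for t in tweets for w in t.lower().split() if w.startswith("#")
)
```

Ranking `hashtag_freq.most_common()` then surfaces the tags that recur across posts, analogous to how recurring terms were surfaced in this study.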

Limitations

Any publications in journals not indexed by WoS were missed in the current analysis. For example, a relevant paper investigating misinformation about COVID-19 on TikTok is not indexed by WoS [ 56 ]. Moreover, WoS mainly indexes papers written in English. There may be papers investigating Weibo and WeChat that are written in Chinese or published in Chinese journals not yet indexed by WoS. Preprints, which could be an important source of preliminary information, are also not indexed in WoS, although the reliability of such information is debatable given the lack of peer review. Furthermore, a bibliometric study cannot assess the scientific quality of the content, such as the risk of bias, effect sizes, or statistical significance of the results, or whether the conclusions are justified by the respective data reported. The accuracy of data tagging by the literature database could also pose a limitation. For instance, KeyWords Plus are keywords tagged to a paper by a WoS algorithm based on the terms from the titles of the cited references [ 57 ]; they are more broadly descriptive and therefore applicable to analyzing the structure of scientific fields [ 58 ]. However, it is unclear how accurate they are compared with other tags such as the National Center for Biotechnology Information’s Medical Subject Headings (“MeSH terms”). Future studies should also incorporate “conspiracy theory” and related terms into their search protocols for more comprehensive results.

Future Research

For potential future research, artificial intelligence (AI) applications for social media content analysis would be an especially promising avenue. With increasing content and misinformation circulating on social media, it becomes practically impossible to identify and classify misinformation manually. AI or machine learning might be employed for such content analysis, which has the potential to achieve high accuracy [ 59 ]. Yet, AI could also be exploited to generate and disseminate misinformation to targeted audiences [ 60 ]. AI research in health care was most frequently published in computer science and engineering journals, as reported by Guo et al [ 61 ]; indeed, among their top 10 most productive journals, only PLOS One also appeared on the list in our study ( Table 1 ). Along this line, with the further development of AI applications for social media content analysis, it might also be of interest to promote the dissemination of such research in mainstream public health journals, in order to reach a broader relevant audience.
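To make the machine learning idea concrete, a minimal sketch of a naive Bayes text classifier is shown below. The training examples and labels are entirely hypothetical; production systems would be trained on large, expert-annotated datasets and use far richer features.

```python
import math
from collections import Counter, defaultdict

# Toy labeled corpus (invented examples for illustration only).
train = [
    ("vaccines cause autism says secret report", "misinfo"),
    ("miracle cure doctors dont want you to know", "misinfo"),
    ("drinking bleach kills the virus", "misinfo"),
    ("clinical trial shows vaccine is safe and effective", "reliable"),
    ("who publishes updated guidance on mask use", "reliable"),
    ("peer reviewed study reports vaccine efficacy data", "reliable"),
]

def fit(data):
    """Count word and label frequencies from labeled texts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in data:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(text, word_counts, label_counts, vocab):
    """Return the most probable label under a multinomial naive Bayes model."""
    total = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        logp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            # Laplace (add-one) smoothing for unseen words.
            logp += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = logp
    return max(scores, key=scores.get)

model = fit(train)
```

A call such as `predict("secret miracle cure for the virus", *model)` flags the text as misinformation-like, illustrating the classification step; real-world accuracy depends entirely on training data quality.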

Conclusions

Based on an analysis of 529 papers on medical and health-related misinformation on social media, we found that the United States contributed half of the papers, with 80% of the top 10 most productive institutions based in this country. The papers were mostly published in journals belonging to the categories public, environmental and occupational health; communication; health care sciences and services; medical informatics; and medicine, general and internal. However, they were generally less cited than papers published in immunology, suggesting that more publications did not guarantee more citations. Social media platform–specific papers accounted for 44% of all papers. The most frequently investigated social media platforms were Twitter, YouTube, and Facebook, which also had higher CPPs than other social media. Investigations on these platforms had different foci: Twitter-based research investigated cyberchondria and hypochondriasis, YouTube-based research investigated tobacco smoking, and Facebook-based research investigated vaccine hesitancy related to autism. COVID-19 was a common topic investigated across all platforms. An important implication of these findings is that knowledge on specific themes related to medical misinformation often relies on the study of a single social media platform or a limited number of platforms; broader cross-platform studies could therefore be a promising direction for future research. Future studies should also include social platforms aimed at non-English users to provide a wider perspective on global health misinformation.

Conflicts of Interest

None declared.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) flow diagram of the literature search.

  • Chew C, Eysenbach G. Pandemics in the age of Twitter: content analysis of Tweets during the 2009 H1N1 outbreak. PLoS One 2010 Nov 29;5(11):e14118.
  • Kouzy R, Abi Jaoude J, Kraitem A, El Alam MB, Karam B, Adib E, et al. Coronavirus Goes Viral: Quantifying the COVID-19 Misinformation Epidemic on Twitter. Cureus 2020 Mar 13;12(3):e7255.
  • Jang SM, Mckeever BW, Mckeever R, Kim JK. From Social Media to Mainstream News: The Information Flow of the Vaccine-Autism Controversy in the US, Canada, and the UK. Health Commun 2019 Jan 13;34(1):110-117.
  • Househ M, Borycki E, Kushniruk A. Empowering patients through social media: the benefits and challenges. Health Informatics J 2014 Mar 18;20(1):50-58.
  • Hagg E, Dahinten VS, Currie LM. The emerging use of social media for health-related purposes in low and middle-income countries: A scoping review. Int J Med Inform 2018 Jul;115:92-105.
  • Haber N, Smith ER, Moscoe E, Andrews K, Audy R, Bell W, CLAIMS research team. Causal language and strength of inference in academic and media articles shared in social media (CLAIMS): A systematic review. PLoS One 2018 May 30;13(5):e0196346.
  • Wang Y, McKee M, Torbica A, Stuckler D. Systematic Literature Review on the Spread of Health-related Misinformation on Social Media. Soc Sci Med 2019 Nov;240:112552.
  • Racovita M. Lost in translation. Scientists need to adapt to a postmodern world; constructivism can offer a way. EMBO Rep 2013 Aug 16;14(8):675-678.
  • Hill J, Agewall S, Baranchuk A, Booz GW, Borer JS, Camici PG, et al. Medical Misinformation: Vet the Message! J Am Heart Assoc 2019 Feb 05;8(3):e011838.
  • Wakefield AJ, Murch SH, Anthony A, Linnell J, Casson DM, Malik M, et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 1998 Feb 28;351(9103):637-641.
  • [No authors listed]. A timeline of the Wakefield retraction. Nat Med 2010 Mar;16(3):248.
  • Dyer C. Lancet retracts Wakefield's MMR paper. BMJ 2010 Feb 02;340:c696.
  • Godlee F, Smith J, Marcovitch H. Wakefield's article linking MMR vaccine and autism was fraudulent. BMJ 2011 Jan 05;342:c7452.
  • Omer SB. The discredited doctor hailed by the anti-vaccine movement. Nature 2020 Oct 27;586(7831):668-669.
  • Hotez PJ, Nuzhath T, Colwell B. Combating vaccine hesitancy and other 21st century social determinants in the global fight against measles. Curr Opin Virol 2020 Apr;41:1-7.
  • Gunaratne K, Coomes EA, Haghbayan H. Temporal trends in anti-vaccine discourse on Twitter. Vaccine 2019 Aug 14;37(35):4867-4871.
  • Gandhi CK, Patel J, Zhan X. Trend of influenza vaccine Facebook posts in last 4 years: a content analysis. Am J Infect Control 2020 Apr;48(4):361-367.
  • Dunn AG, Leask J, Zhou X, Mandl KD, Coiera E. Associations Between Exposure to and Expression of Negative Opinions About Human Papillomavirus Vaccines on Social Media: An Observational Study. J Med Internet Res 2015 Jun 10;17(6):e144.
  • Chou WS, Oh A, Klein WMP. Addressing Health-Related Misinformation on Social Media. JAMA 2018 Dec 18;320(23):2417-2418.
  • Schmidt AL, Zollo F, Scala A, Betsch C, Quattrociocchi W. Polarization of the vaccination debate on Facebook. Vaccine 2018 Jun 14;36(25):3606-3612.
  • Suarez-Lledo V, Alvarez-Galvez J. Prevalence of Health Misinformation on Social Media: Systematic Review. J Med Internet Res 2021 Jan 20;23(1):e17187.
  • Pritchard A. Statistical bibliography or bibliometrics. Journal of Documentation 1969;25(4):348-349.
  • Huang Z, Chen H, Liu Z. The 100 top-cited systematic reviews/meta-analyses in central venous catheter research: A PRISMA-compliant systematic literature review and bibliometric analysis. Intensive Crit Care Nurs 2020 Apr;57:102803.
  • Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097.
  • Celliers M, Hattingh M. A Systematic Review on Fake News Themes Reported in Literature. In: Hattingh M, Matthee M, Smuts H, Pappas I, Dwivedi YK, Mäntymäki M, editors. Responsible Design, Implementation and Use of Information and Communication Technology. I3E 2020. Lecture Notes in Computer Science, vol 12067. Cham: Springer; 2020:223-234.
  • Atanasov AG, Yeung AWK, Klager E, Eibensteiner F, Schaden E, Kletecka-Pulker M, et al. First, Do No Harm (Gone Wrong): Total-Scale Analysis of Medical Errors Scientific Literature. Front Public Health 2020 Oct 16;8:558913.
  • van Eck NJ, Waltman L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 2010 Aug 31;84(2):523-538.
  • Waltman L, van Eck NJ, Noyons EC. A unified approach to mapping and clustering of bibliometric networks. Journal of Informetrics 2010 Oct;4(4):629-635.
  • Van Eck N, Waltman L. Text mining and visualization using VOSviewer. ISSI Newsletter 2011;7:50-54.
  • Online NGram Analyzer. Guide to Data Mining. URL: http://guidetodatamining.com/ngramAnalyzer/ [accessed 2022-01-11].
  • Yaqub O, Castle-Clarke S, Sevdalis N, Chataway J. Attitudes to vaccination: a critical review. Soc Sci Med 2014 Jul;112:1-11.
  • Naslund JA, Aschbrenner KA, Marsch LA, Bartels SJ. The future of mental health care: peer-to-peer support and social media. Epidemiol Psychiatr Sci 2016 Jan 08;25(2):113-122.
  • Madathil KC, Rivera-Rodriguez AJ, Greenstein JS, Gramopadhye AK. Healthcare information on YouTube: A systematic review. Health Informatics J 2015 Sep;21(3):173-194.
  • Kamel Boulos MN, Resch B, Crowley DN, Breslin JG, Sohn G, Burtner R, et al. Crowdsourcing, citizen sensing and sensor web technologies for public and environmental health surveillance and crisis management: trends, OGC standards and application examples. Int J Health Geogr 2011 Dec 21;10:67.
  • Betsch C, Brewer NT, Brocard P, Davies P, Gaissmaier W, Haase N, et al. Opportunities and challenges of Web 2.0 for vaccination decisions. Vaccine 2012 May 28;30(25):3727-3733.
  • Syed-Abdul S, Fernandez-Luque L, Jian WS, Li YC, Crain S, Hsu MH, et al. Misleading health-related information promoted through video-based social media: anorexia on YouTube. J Med Internet Res 2013 Feb 13;15(2):e30.
  • Depoux A, Martin S, Karafillakis E, Preet R, Wilder-Smith A, Larson H. The pandemic of social media panic travels faster than the COVID-19 outbreak. J Travel Med 2020 May 18;27(3):taaa031.
  • Singh AG, Singh S, Singh PP. YouTube for information on rheumatoid arthritis--a wakeup call? J Rheumatol 2012 May;39(5):899-903.
  • Bode L, Vraga EK. In Related News, That Was Wrong: The Correction of Misinformation Through Related Stories Functionality in Social Media. J Commun 2015 Jun 23;65(4):619-638.
  • Bornmann L, Haunschild R, Mutz R. Should citations be field-normalized in evaluative bibliometrics? An empirical analysis based on propensity score matching. Journal of Informetrics 2020 Nov;14(4):101098.
  • Bode L, Vraga EK. See Something, Say Something: Correction of Global Health Misinformation on Social Media. Health Commun 2018 Sep 16;33(9):1131-1140.
  • Vraga EK, Bode L. Using Expert Sources to Correct Health Misinformation in Social Media. Science Communication 2017 Sep 14;39(5):621-645.
  • Vraga EK, Bode L. I do not believe you: how providing a source corrects health misperceptions across social media platforms. Information, Communication & Society 2017 Apr 19;21(10):1337-1353.
  • Fung IC, Duke CH, Finch KC, Snook KR, Tseng P, Hernandez AC, et al. Ebola virus disease and social media: A systematic review. Am J Infect Control 2016 Dec 01;44(12):1660-1671.
  • Fung I, Fu K, Chan C, Chan B, Cheung C, Abraham T, et al. Social Media's Initial Reaction to Information and Misinformation on Ebola, August 2014: Facts and Rumors. Public Health Rep 2016;131(3):461-473.
  • Fung IC, Fu K, Chan C, Chan BSB, Cheung C, Abraham T, et al. Social Media's Initial Reaction to Information and Misinformation on Ebola, August 2014: Facts and Rumors. Public Health Rep 2016 May 01;131(3):461-473.
  • Liang H, Fung IC, Tse ZTH, Yin J, Chan C, Pechta LE, et al. How did Ebola information spread on twitter: broadcasting or viral spreading? BMC Public Health 2019 Apr 25;19(1):438.
  • Naslund JA, Grande SW, Aschbrenner KA, Elwyn G. Naturally occurring peer support through social media: the experiences of individuals with severe mental illness using YouTube. PLoS One 2014 Oct 15;9(10):e110171.
  • Laato S, Islam AKMN, Islam MN, Whelan E. What drives unverified information sharing and cyberchondria during the COVID-19 pandemic? European Journal of Information Systems 2020 Jun 07;29(3):288-305.
  • McMullan RD, Berle D, Arnáez S, Starcevic V. The relationships between health anxiety, online health information seeking, and cyberchondria: Systematic review and meta-analysis. J Affect Disord 2019 Feb 15;245:270-278.
  • McElroy E, Shevlin M. The development and initial validation of the cyberchondria severity scale (CSS). J Anxiety Disord 2014 Mar;28(2):259-265.
  • Li HO, Bailey A, Huynh D, Chan J. YouTube as a source of information on COVID-19: a pandemic of misinformation? BMJ Glob Health 2020 May 14;5(5):e002604.
  • Abd-Alrazaq A, Alhuwail D, Househ M, Hamdi M, Shah Z. Top Concerns of Tweeters During the COVID-19 Pandemic: Infoveillance Study. J Med Internet Res 2020 Apr 21;22(4):e19016.
  • Shelby A, Ernst K. Story and science: how providers and parents can utilize storytelling to combat anti-vaccine misinformation. Hum Vaccin Immunother 2013 Aug 27;9(8):1795-1801.
  • Bail CA. Emotional Feedback and the Viral Spread of Social Media Messages About Autism Spectrum Disorders. Am J Public Health 2016 Jul;106(7):1173-1180.
  • Basch C, Hillyer G, Jaime C. COVID-19 on TikTok: harnessing an emerging social media platform to convey important public health messages. Int J Adolesc Med Health 2020 Aug 10:2020.
  • Garfield E, Sher IH. KeyWords Plus™—algorithmic derivative indexing. J Am Soc Inf Sci 1993 Jun;44(5):298-299.
  • Zhang J, Yu Q, Zheng F, Long C, Lu Z, Duan Z. Comparing keywords plus of WOS and author keywords: A case study of patient adherence research. J Assn Inf Sci Tec 2015 Jan 08;67(4):967-972.
  • Choudrie J, Banerjee S, Kotecha K, Walambe R, Karende H, Ameta J. Machine learning techniques and older adults processing of online information and misinformation: A covid 19 study. Comput Human Behav 2021 Jun;119:106716.
  • Wilner AS. Cybersecurity and its discontents: Artificial intelligence, the Internet of Things, and digital misinformation. International Journal 2018 Jul 26;73(2):308-316.
  • Guo Y, Hao Z, Zhao S, Gong J, Yang F. Artificial Intelligence in Health Care: Bibliometric Analysis. J Med Internet Res 2020 Jul 29;22(7):e18228.

Abbreviations

AI: artificial intelligence
CPP: citations per paper
MMR: measles, mumps, and rubella
WoS: Web of Science

Edited by A Mavragani; submitted 23.02.21; peer-reviewed by K Reuter, C Weiger, A Roundtree, JP Allem; comments to author 16.03.21; revised version received 30.03.21; accepted 20.12.21; published 25.01.22

©Andy Wai Kan Yeung, Anela Tosevska, Elisabeth Klager, Fabian Eibensteiner, Christos Tsagkaris, Emil D Parvanov, Faisal A Nawaz, Sabine Völkl-Kernstock, Eva Schaden, Maria Kletecka-Pulker, Harald Willschke, Atanas G Atanasov. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.01.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

The disaster of misinformation: a review of research in social media

  • Published: 15 February 2022
  • Volume 13 , pages 271–285, ( 2022 )


  • Sadiq Muhammed T   ORCID: orcid.org/0000-0002-4614-2333 1 &
  • Saji K. Mathew   ORCID: orcid.org/0000-0002-8551-8209 1  


The spread of misinformation on social media has become a severe threat to public interests. For example, several incidents of public health concern arose out of social media misinformation during the COVID-19 pandemic. Against the backdrop of the emerging IS research focus on social media and the impact of misinformation during recent events such as COVID-19, the Australian bushfires, and the US elections, we identified disaster, health, and politics as specific domains for a research review on social media misinformation. Following a systematic review process, we chose 28 articles relevant to the three themes for synthesis. We discuss the characteristics of misinformation in the three domains, the methodologies that have been used by researchers, and the theories used to study misinformation. We adapt an Antecedents-Misinformation-Outcomes (AMIO) framework for integrating key concepts from prior studies. Based on the AMIO framework, we further discuss the inter-relationships of concepts and the strategies to control the spread of misinformation on social media. Ours is one of the early reviews focusing on social media misinformation research, particularly on three socially sensitive domains: disaster, health, and politics. This review contributes to the emerging body of knowledge in Data Science and social media and informs strategies to combat social media misinformation.


1 Introduction

1.1 Information disorder in social media

Rumors, misinformation, disinformation, and mal-information are common challenges confronting media of all types. The situation is, however, worse in the case of digital media, especially on social media platforms. Ease of access and use, the speed of information diffusion, and the difficulty of correcting false information make control of undesirable information a daunting task [ 1 ]. Alongside these challenges, social media has also been highly influential in spreading timely and useful information. For example, the recent #BlackLivesMatter movement was enabled by social media, which united like-minded people in solidarity across the world when George Floyd was killed by police brutality, as were the 2011 Arab Spring in the Middle East and the 2017 #MeToo movement against sexual harassment and abuse [ 2 , 3 ]. Although scholars have addressed information disorder in social media, syntheses of the insights from these studies are rare.

Information that is fake or misleading and spreads unintentionally is known as misinformation [ 4 ]. Prior research on misinformation in social media has highlighted various characteristics of misinformation and interventions thereof in different contexts. The issue of misinformation has become dominant with the rise of social media, attracting scholarly attention particularly after the 2016 US Presidential election, when misinformation apparently influenced the election results [ 5 ]. The word 'misinformation' was listed as one of the global risks by the World Economic Forum [ 6 ]. A similar term that is often confused with misinformation is 'disinformation': information that is fake or misleading and, unlike misinformation, spreads intentionally. Disinformation campaigns are often seen in a political context, where state actors create them for political gains. In India, during the initial stage of COVID-19, there was reportedly a surge in fake news linking the virus outbreak to a particular religious group. This disinformation gained media attention as it was widely shared on social media platforms. As a result of the targeting, it eventually translated into physical violence and discriminatory treatment against members of the community in some Indian states [ 7 ]. 'Rumors' and 'fake news' are related terms: 'rumors' are unverified information or statements circulated with uncertainty, and 'fake news' is misinformation distributed in an official news format. Source ambiguity, personal involvement, confirmation bias, and social ties are some of the rumor-causing factors. Yet another related term, mal-information, is accurate information that is used in different contexts to spread hatred or abuse of a person or a particular group. Our review focuses on misinformation that is spread through social media platforms. The words 'rumor' and 'misinformation' are used interchangeably in this paper. Further, we identify factors that cause misinformation based on a systematic review of prior studies.

Ours is one of the early attempts to review social media research on misinformation. This review focuses on three sensitive domains of disaster, health, and politics, setting three objectives: (a) to analyze previous studies to understand the impact of misinformation on the three domains, (b) to identify theoretical perspectives used to examine the spread of misinformation on social media, and (c) to develop a framework to study key concepts and their inter-relationships emerging from prior studies. We identified these specific areas because the impact of misinformation there, in terms of both speed of spread and scale of influence, is high and detrimental to the public and governments. To the best of our knowledge, reviews of the literature on social media misinformation themes are relatively scarce. This review contributes to an emerging body of knowledge in Data Science and informs the efforts to combat social media misinformation. Data Science is an interdisciplinary area which incorporates different areas such as statistics, management, and sociology to study data and create knowledge from it [ 8 ]. This review will also inform future studies that aim to evaluate and compare patterns of misinformation on sensitive themes of social relevance, such as disaster, health, and politics.

The paper is structured as follows. The first section introduces misinformation in the social media context. In Sect. 2, we provide a brief overview of prior research on misinformation and social media. Section 3 describes the research methodology, which includes details of the literature search and selection process. Section 4 discusses the analysis of the spread of misinformation on social media based on three themes (disaster, health, and politics) and the review findings, including the current state of research, theoretical foundations, determinants of misinformation on social media platforms, and strategies to control the spread of misinformation. Section 5 concludes with the implications and limitations of the paper.

2 Social media and spread of misinformation

Misinformation arises in uncertain contexts when people are confronted with a scarcity of information they need. During unforeseen circumstances, the affected individual or community experiences nervousness or anxiety. Anxiety is one of the primary reasons behind the spread of misinformation. To overcome this tension, people tend to gather information from sources such as mainstream media and official government social media handles to verify the information they have received. When they fail to receive information from official sources, they collect related information from their peer circles or other informal sources, which would help them to control social tension [ 9 ]. Furthermore, in an emergency context, misinformation helps community members to reach a common understanding of the uncertain situation.

2.1 The echo chamber of social media

Social media has increasingly grown in power and influence and has acted as a medium to accelerate sociopolitical movements. Network effects enhance participation in social media platforms, which in turn spreads information (good or bad) at a faster pace compared with traditional media. Furthermore, owing to a massive surge in online content consumption, primarily through social media, both business organizations and political parties have begun to share content that is ambiguous or fake to influence online users and their decisions for financial and political gains [ 9 , 10 ]. On the other hand, people often approach social media with a hedonic mindset, which reduces their tendency to verify the information they receive [ 9 ]. Repeated exposure to content that coincides with pre-existing beliefs increases the believability and shareability of that content. This process, known as the echo-chamber effect [ 11 ], is fueled by confirmation bias: the tendency of a person to support information that reinforces pre-existing beliefs and to neglect opposing perspectives and viewpoints.

Platforms’ structure and algorithms also have an essential role in spreading misinformation. Tiwana et al. [ 12 ] have defined platform architecture as ‘a conceptual blueprint that describes how the ecosystem is partitioned into a relatively stable platform and a complementary set of modules that are encouraged to vary, and the design rules binding on both’. Business models of these platforms are based upon maximizing user engagement. For example, in the case of Facebook or Twitter, user feed is based on their existing belief or preferences. User feeds provide users with similar content that matches their existing beliefs, thus contributing to the echo chamber effect.

Platform architecture makes the transmission and retransmission of misinformation easier [ 12 , 13 ]. For instance, WhatsApp has a one-touch forward option that enables users to forward messages simultaneously to multiple users. Earlier, a WhatsApp user could forward a message to 250 groups or users at a time; as a measure for controlling the spread of misinformation, this was limited to five recipients in 2019. WhatsApp claimed that, globally, this restriction reduced message forwarding by 25% [ 14 ]. Apart from platform politics, users also have an essential role in creating or distributing misinformation. In a disaster context, people tend to share misinformation based on their subjective feelings [ 15 ].

Misinformation has the power to influence the decisions of its audience. It can change a citizen's approach toward a topic or a subject. The anti-vaccine movement on Twitter during the 2015 measles (a highly communicable disease) outbreak in Disneyland, California, serves as a good example. The movement created conspiracy theories and mistrust of the State, which increased the vaccine refusal rate [ 16 ]. Misinformation could even influence the election of governments by manipulating citizens’ political attitudes, as seen in the 2016 USA and 2017 French elections [ 17 ]. Of late, people rely heavily on Twitter and Facebook to collect the latest happenings from mainstream media [ 18 ].

Combating misinformation on social media has been a challenging task for governments in several countries. When social media influences elections [ 17 ] and health campaigns (such as vaccination), governments and international agencies demand that social media owners take necessary actions to combat misinformation [ 13 , 15 ]. Platforms began to regulate bots that were used to spread misinformation. Facebook announced the filtering of its algorithms to combat misinformation, down-ranking posts flagged by its fact-checkers, which reduces the popularity of the post or page [ 17 ]. However, misinformation has become a complicated issue due to the growth of new users and the emergence of new social media platforms. Jang et al. [ 19 ] have suggested two approaches other than governmental regulation to control misinformation: literary and corrective. The literary approach proposes educating users to increase their cognitive ability to differentiate misinformation from information. The corrective approach provides more fact-checking facilities for users, whereby warnings are provided against potentially fabricated content based on crowdsourcing. Both approaches have limitations: the literary approach has attracted criticism as it transfers responsibility for the spread of misinformation to citizens, and the corrective approach will have only a limited impact as the volume of fabricated content escalates [ 19 , 20 , 21 ].

An overview of the literature on misinformation reveals that most investigations focus on methods to combat it. Social media platforms are still discovering new tools and techniques to mitigate misinformation on their platforms, which calls for research to understand their strategies.

3 Review method

This research followed a systematic literature review process. The study employed a structured approach based on Webster's guidelines [ 22 ] to identify relevant literature on the spread of misinformation. These guidelines helped maintain a quality standard while selecting the literature for review. The initial stage of the study involved exploring research papers from relevant databases to understand the volume and availability of research articles. We extended the literature search to interdisciplinary databases as well, gathering articles from Web of Science, the ACM digital library, the AIS electronic library, EBSCOhost Business Source Premier, ScienceDirect, Scopus, and SpringerLink. In addition, a manual search was performed in the Information Systems (IS) scholars' basket of journals [ 23 ] to ensure we did not miss any articles from these journals. We also gave preference to articles with a Data Science or Information Systems background. The systematic review process began with a keyword search using predefined keywords (Fig.  2 ). We identified related synonyms such as 'misinformation', 'rumors', 'spread', and 'social media', along with their combinations, for the search process. The keyword search covered the title, abstract, and author keywords. The literature search was conducted in April 2020; we revisited the literature in December 2021 to include the latest publications from 2020 and 2021.
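The keyword screening described above can be sketched as a simple filter over database records. This is a minimal illustration only: the record structure and field names are our assumptions, not the actual export format of any of the databases searched.

```python
# Synonym groups from the review; a record is kept when it contains at
# least one misinformation-related term and one platform-related term
# in its title, abstract, or author keywords (a rough stand-in for the
# boolean queries run against each database).
MISINFO_TERMS = ["misinformation", "rumors", "fake news", "spread"]
PLATFORM_TERMS = ["social media", "twitter", "facebook", "whatsapp"]

def matches(record: dict) -> bool:
    """Return True if the record satisfies the keyword criteria."""
    text = " ".join([
        record.get("title", ""),
        record.get("abstract", ""),
        " ".join(record.get("keywords", [])),
    ]).lower()
    return (any(t in text for t in MISINFO_TERMS)
            and any(t in text for t in PLATFORM_TERMS))

record = {
    "title": "Rumors on Twitter during disasters",
    "abstract": "We study the spread of misinformation on social media.",
    "keywords": ["misinformation", "social media"],
}
print(matches(record))  # True: has both a misinformation and a platform term
```

In practice each database exposes its own field syntax (e.g., title/abstract/keyword scoping), so the actual queries would differ per database; the logic of combining synonym groups is what this sketch captures.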

Scholarly discussion about 'misinformation and social media' began to appear in research after 2008. The topic gained more attention in 2010, when Twitter bots were used for spreading fake news about the replacement of a USA Senator [ 24 ]. Hate campaigns and fake-follower activities were growing during the same period. As evident from Fig.  1 , which shows the number of articles published between 2005 and 2021 on misinformation in three databases (Scopus, Springer, and EBSCO), academic engagement with misinformation gained further impetus after the 2016 US Presidential election, when social media platforms had apparently influenced the election [ 20 ].

figure 1

Articles published on misinformation during 2005–2021 (databases: Scopus, Springer, and EBSCO)

As Data Science is an interdisciplinary field, the focus of our literature review goes beyond disciplinary boundaries. In particular, we focused on the three domains of disaster, health, and politics. This thematic focus has two underlying reasons: (a) the impact of misinformation through social media is sporadic and has the most damaging effects in these three domains, and (b) our selection criteria in the systematic review finally resulted in research papers related to these three domains. This review excluded platforms designed for professional and business users, such as LinkedIn and Behance. The rationale for the choice of these themes is discussed in the next section.

3.1 Inclusion–exclusion criteria

Figure  2 depicts the systematic review process followed in this study. In our preliminary search, 2148 records were retrieved from the databases; all of these articles were gathered in a spreadsheet, which was manually cross-checked against the journals linked to the articles. The inclusion criteria were: studies published during 2005–2021, studies published in English, articles from peer-reviewed journals, journal rating, and relevance to misinformation. We excluded reviews, theses, dissertations, and editorials, as well as articles on misinformation not related to social media. To get the best from these articles, we selected articles from top journals: those rated above three according to the ABS rating, and A*, A, and B according to the ABDC rating. This process, while ensuring the quality of the papers, also narrowed the scope of the study to 643 articles of acceptable quality. We did not perform backward or forward citation tracking on references. During this process, duplicate records were also identified and removed. Further screening of the articles based on title, abstract, and full text (wherever necessary) brought the number down to 207 articles.

figure 2

Systematic literature review process

Further screening based on the three themes reduced the focus to 89 articles. We conducted a full-text analysis of these 89 articles and further excluded those that had not considered misinformation as a central theme, finally arriving at 28 articles for detailed review (Table 1 ).
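The screening funnel reported above can be tallied end to end; the stage labels below paraphrase the text, and the retention percentages are derived from the reported counts, not stated in the original.

```python
# Article counts at each stage of the systematic review, as reported.
funnel = [
    ("keyword search across databases", 2148),
    ("journal quality filter (ABS above 3; ABDC A*/A/B)", 643),
    ("title/abstract/full-text screening, duplicates removed", 207),
    ("restriction to disaster, health, and politics themes", 89),
    ("misinformation as a central theme", 28),
]

# Print how many articles survived each screening step.
for (stage, count), (_, nxt) in zip(funnel, funnel[1:]):
    kept = nxt / count * 100
    print(f"{stage}: {count} -> {nxt} ({kept:.1f}% retained)")
```

The overall yield, 28 of 2148 records (about 1.3%), is in line with systematic reviews that apply strict journal-rating and thematic filters.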

The selected studies used a variety of research methods to examine misinformation on social media. Experimentation and text mining of tweets emerged as the most frequent: 11 studies used experimental methods and eight used Twitter data analyses. Apart from these, three used survey methods, two each used mixed methods and case studies, and one each used opportunistic sampling and an exploratory approach. The selected literature includes nine articles on disaster, eight on healthcare, and eleven on politics. We preferred papers based on three major social media platforms: Twitter, Facebook, and WhatsApp. These are the three platforms with the highest transmission rates and most active users [ 25 ], and the most likely platforms for misinformation propagation.
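As a quick consistency check, the method counts and theme counts reported above each sum to the 28 reviewed articles:

```python
# Research methods across the 28 selected studies, as reported.
methods = {
    "experiment": 11,
    "Twitter data analysis": 8,
    "survey": 3,
    "mixed methods": 2,
    "case study": 2,
    "opportunistic sampling": 1,
    "exploratory study": 1,
}

# Thematic split of the same 28 studies.
themes = {"disaster": 9, "health": 8, "politics": 11}

print(sum(methods.values()), sum(themes.values()))  # 28 28
```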

3.2 Coding procedure

Initially, both authors manually coded the articles individually by reading the full text of each article, and then identified the three themes: disaster, health, and politics. We used an inductive coding approach to derive codes from the data. The intercoder reliability rate between the authors was 82.1%. Disagreements over which theme a few papers fell under were discussed and resolved. Later, we used NVivo, a qualitative data analysis software, to encode and categorize the themes from the articles. The codes that emerged from the articles were categorized into sub-themes and later attached to the main themes of disaster, health, and politics. NVivo produced a ranked list of codes based on frequency of occurrence (“ Appendix ”). An additional intercoder reliability check was completed by an external research scholar with a different area of expertise. The coder agreed on 26 of the 28 articles (92.8%), indicating a high level of intercoder reliability [ 49 ]. The independent researcher’s disagreement about the codes for two articles was discussed with the authors and a consensus was reached.
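The agreement figures above are consistent with simple percentage agreement. Note two assumptions in this sketch: the 23-of-28 split behind the authors' 82.1% is our back-calculation, not stated in the text, and the reported 92.8% appears to truncate (rather than round) 26/28 = 92.857%.

```python
def percent_agreement(agreed: int, total: int) -> float:
    """Raw intercoder agreement: share of items coded identically."""
    return agreed / total * 100

# External coder: 26 of 28 articles coded identically.
# 92.857...%, reported in the text as 92.8% (truncated).
print(round(percent_agreement(26, 28), 1))  # 92.9

# The authors' reported 82.1% matches 23 of 28 articles (our assumption).
print(round(percent_agreement(23, 28), 1))  # 82.1
```

Percentage agreement does not correct for chance; a chance-corrected statistic such as Cohen's kappa would give lower values for the same data, so the two measures should not be compared directly.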

We review the articles separately under the categories of disaster, health, and politics below; first, however, we present emergent issues that cut across these themes.

4.1 Social media misinformation research

Disaster, health, and politics emerged as the three domains (“ Appendix ”) where misinformation can cause severe harm, often leading to casualties or even irreversible effects. Mitigating these effects can also demand substantial financial and human resources, considering the scale of impact and the risk of spreading harmful information to the public. All of these areas are sensitive in nature. Further, disaster, health, and politics have gained the attention of researchers and governments because the challenges of misinformation confronting these domains are rampant. Beyond their sensitivity, misinformation in these areas has a higher potential to exacerbate existing crises in society. During the 2020 Munich Security Conference, WHO’s Director-General noted: “We are not just fighting an epidemic; we are fighting an infodemic”, referring to COVID-19 misinformation spreading faster than the virus itself [ 50 ].

More than 6000 people were hospitalized due to COVID-19 related misinformation in the first three months of 2020 [ 51 ]. As COVID-19 vaccination began, one popular myth held that Bill Gates wanted to use vaccines to embed microchips in people to track them, which created vaccine hesitancy among citizens [ 52 ]. These reports show the severity of the spread of misinformation and how it can aggravate a public health crisis.

4.2 Misinformation during disaster

In the context of emergency situations (unforeseen circumstances), the credibility of social media information has often been questioned [ 11 ]. When a crisis occurs, affected communities often experience a lack of the localized information they need to make emergency decisions. This accelerates the spread of misinformation, as people tend to fill the information gap with misinformation or 'improvised news' [ 9 , 24 , 25 ]. The broadcasting power of social media and the re-sharing of misinformation can weaken and slow down rescue operations [ 24 , 25 ]. Because local people have more access to the disaster area, they become the immediate reporters of a crisis through social media; mainstream media enters the picture only later. However, recent incidents reveal that voluntary reporting of this kind has begun to affect rescue operations negatively, as it often acts as a collective rumor mill [ 9 ] that propagates misinformation. During the 2018 floods in the South Indian state of Kerala, a fake video about a Mullaperiyar Dam leakage created unnecessary panic among citizens, negatively impacting rescue operations [ 53 ]. Information from mainstream media is relatively more reliable because traditional gatekeepers, such as peer reviewers and editors, cross-check the information source before publication. Chua et al. [ 28 ] found that a large share of corrective tweets were retweeted from mainstream news media; mainstream media is thus considered a preferred rumor correction channel, attempting to correct misinformation with the right information.

4.2.1 Characterizing disaster misinformation

Oh et al. [ 9 ] studied citizen-driven information processing in three social crises using rumor theory. The main characteristic of a crisis is the complexity of information processing and sharing [ 9 , 24 ]. A task is considered complex when characterized by an increase in information load, information diversity, or the rate of information change [ 54 ]. Information overload and information dearth are the two grave concerns that interrupt communication between the affected community and a rescue team. Information overload, where too many enquiries and too much fake news distract a response team, slows its recognition of valid information [ 9 , 27 ]. According to Balan and Mathew [ 55 ], information overload occurs when the volume of information, together with factors such as the complexity of words and multiple languages, exceeds what a human being can process. Information dearth, in our context, is the lack of the localized information that the affected community needs to make emergency decisions. When official government communication channels or mainstream media cannot fulfill citizens' needs, they resort to information from their social media peers [ 9 , 27 , 29 ].

In a social crisis context, Tamotsu Shibutani [ 56 ] defines rumoring as the collective sharing and exchange of information, which helps community members reach a common understanding of the crisis situation [ 30 ]. This mechanism operates on social media, where both information dearth and information overload arise. Anxiety, information ambiguity (source ambiguity and content ambiguity), personal involvement, and social ties are the rumor-causing variables in a crisis context [ 9 , 27 ]. In general, anxiety is a negative feeling caused by distress or a stressful situation, which can produce adverse outcomes [ 57 ]. In the context of a crisis or emergency, a community may experience anxiety in the absence of reliable information, or conversely when confronted with an overload of information, making it difficult to take appropriate decisions. Under such circumstances, people may tend to rely on rumors as a primary source of information. The influence of anxiety is higher during a community crisis than during a business crisis [ 9 ]. However, anxiety, as an attribute, varies with the nature of the platform: for example, Oh et al. [ 9 ] found that the Twitter community does not succumb to social pressure in the way the WhatsApp community does [ 30 ]. Simon et al. [ 30 ] developed a model of rumor retransmission on social media and identified information ambiguity, anxiety, and personal involvement as motives for rumormongering. Attractiveness is another rumor-causing variable; it arises when aesthetically appealing visual aids or designs capture a receiver’s attention. Here believability matters more than the content’s reliability or the truth of the information received.

The second stage of the spread of misinformation is misinformation retransmission. Apart from the rumor-causing variables reported in Oh et al. [ 9 ], Liu et al. [ 13 ] found sender credibility and attractiveness to be significant variables in misinformation retransmission. Personal involvement and content ambiguity can also affect misinformation transmission [ 13 ]. Abdullah et al. [ 25 ] explored retweeters' motives for spreading disaster information on Twitter. Content relevance, early information [ 27 , 31 ], trustworthiness of the content, emotional influence [ 30 ], retweet count, pro-social behavior (altruistic behavior among citizens during the crisis), and the need to inform one's circle are the factors that drive users' retweets [ 25 ]. Lee et al. [ 26 ] examined the impact of Twitter features on message diffusion based on the 2013 Boston Marathon tragedy. The study reported that during crisis events (especially disasters), a tweet with a shorter reaction time (the time between the crisis and the initial tweet) had a higher impact than other tweets. This shows that, to an extent, misinformation can be controlled if officials communicate at the early stage of a crisis [ 27 ]. Liu et al. [ 13 ] showed that tweets with hashtags influence the spread of misinformation. By contrast, Lee et al. [ 26 ] found that tweets without hashtags had more influence, a difference attributable to context: hashtags have a positive impact in marketing or advertising, while in disaster or emergency situations their usage has a negative impact, and messages without hashtags diffuse more widely than messages with them [ 26 ].

Oh et al. [ 15 ] explored the behavioral aspects of social media participants that lead to retransmission and the spread of misinformation. They found that when people believe a threatening piece of misinformation they have received, they are more likely to spread it and to take safety measures (sometimes even extreme actions). Repetition of the same misinformation from different sources also makes it more believable [ 28 ]. However, when people realized the received information was false, they were less likely to share it with others [ 13 , 26 ]. The characteristics of the platform used to deliver the misinformation also matter; for instance, the number of likes and shares a post receives increases its believability [ 47 ].

In summary, we found that platform architecture also plays an essential role in the spread and believability of misinformation. While conducting this systematic literature review, we observed that most studies on disaster and misinformation are based on the Twitter platform: six of the nine papers we reviewed in the disaster area used Twitter. When a message was delivered in video format, it had a higher impact than audio or text messages. If the message had a religious or cultural narrative, it led to behavioral action (a danger control response) [ 15 ]. Users were more likely to spread misinformation through WhatsApp than Twitter, and it was difficult to find the source of information shared on WhatsApp [ 30 ].

4.3 Misinformation related to healthcare

From our review, we found two systematic literature reviews that discuss health-related misinformation on social media. Yang et al. [ 58 ] explore the characteristics, impact, and influences of health misinformation on social media, while Wang et al. [ 59 ] address health misinformation related to vaccines and infectious diseases. This literature shows that health-related misinformation, especially about the MMR vaccine and autism, is spreading widely on social media, and governments have been unable to control it.

The spread of health misinformation is an emerging issue facing public health authorities. Health misinformation can delay proper treatment for patients, adding further casualties in the public health domain [ 28 , 59 , 60 ]. People often tend to believe health-related information shared by their peers, and some share their treatment experiences or traditional remedies online. Such information may belong to a different context and may not even be accurate [ 33 , 34 ]. Compared to health-related websites, the language used to describe health information on social media is simpler and may omit essential details [ 35 , 37 ]. Some studies reported that conspiracy theories and pseudoscience have escalated casualties [ 33 ]. Pseudoscience refers to false claims presented as if the shared misinformation had scientific evidence; the anti-vaccination movement on Twitter is one example [ 61 ]. In such cases the user may have shared the information due to a lack of scientific knowledge [ 35 ].

4.3.1 Characterizing healthcare misinformation

The attributes that characterize healthcare misinformation are distinctly different from those of other domains. Chua and Banerjee [ 37 ] identified two characteristics of health misinformation: dread and wish. A dread rumor creates panic and unpleasant consequences. For example, in the wake of COVID-19, misinformation was widely shared on social media claiming that children 'died on the spot' after a mass COVID-19 vaccination program in Senegal, West Africa [ 61 ]. This message created panic among citizens, as it was shared more than 7000 times on Facebook [ 61 ]. A wish rumor, by contrast, gives hope to the receiver (e.g., a rumor about free medicine distribution) [ 62 ]. Dread rumors look more trustworthy and are more likely to go viral; a dread rumor was the cause of violence against a minority group in India during COVID-19 [ 7 ]. Chua and Banerjee [ 32 ] added pictorial and textual representations as characteristics of health misinformation: a rumor that contains only text is a textual rumor, while a pictorial rumor contains both text and images. However, Chua and Banerjee [ 32 ] found that users prefer textual rumors over pictorial ones. Unlike rumors circulated during a natural disaster, health misinformation is long-lasting and can spread across boundaries. Personal involvement (the importance of the information for both sender and receiver), rumor type, and the presence of counter-rumors are some of the variables that can escalate users' trusting and sharing behavior around rumors [ 37 ]. Madraki et al.'s [ 46 ] study of COVID-19 misinformation and disinformation reported that COVID-19 misinformation on social media differs significantly across languages, countries, and their cultures and beliefs. The acceptance of social media platforms, as well as governmental censorship, also plays an important role here.

Widespread misinformation can also change collective opinion [ 29 ]. Online users' epistemic beliefs can control their sharing decisions: Chua and Banerjee [ 32 ] argued that epistemologically naïve users (users who think knowledge can be acquired easily) are the type of users who accelerate the spread of misinformation on platforms, although those who read or share the misinformation do not necessarily act on it [ 37 ]. Gu and Hong [ 34 ] examined health misinformation in the mobile social media context. Mobile internet users differ from large-screen users: mobile phone users may have a stronger emotional attachment to the device, which also motivates them to believe misinformation they receive, so corrective efforts aimed at large-screen users may not work for mobile or small-screen users. Chua and Banerjee [ 32 ] suggested that platforms' simplified sharing options also motivate users to share received misinformation before validating it. Shahi et al. [ 47 ] found that misinformation is propagated or shared even by verified Twitter handles, which become part of misinformation transmission either by creating it or by endorsing it through likes or shares.

The focus of existing studies is heavily based on data from social networking sites such as Facebook and Twitter, although other platforms also escalate the spread of misinformation. This phenomenon was evident in the wake of COVID-19, when an intense wave of misinformation was reported on WhatsApp, TikTok, and Instagram.

4.4 Social media misinformation and politics

There have been several studies on the influence of misinformation on politics across the world [ 43 , 44 ]. Political misinformation has predominantly been used to influence voters. The 2016 USA Presidential election, the 2017 French election, and the 2019 Indian elections have been reported as examples where misinformation influenced the election process [ 15 , 17 , 45 ]. During the 2016 USA election, the partisan effect was a key challenge, with false information presented as if it came from an authorized source [ 39 ]. Based on a user's prior behavior on the platform, algorithms can manipulate the user's feed [ 40 ]. In a political context, fake news can cause great harm because it can influence voters and the public. Although fake news has a short life, its consequences may not be short-lived: verification of fake news takes time, and by the time verification results are shared, the fake news may have achieved its goal [ 43 , 48 , 63 ].

4.4.1 Characterizing misinformation in politics

Confirmation bias has a dominant role in social media misinformation related to politics. Readers are more likely to read and engage with information that confirms their preexisting beliefs and political affiliations, and to reject information that challenges them [ 46 , 48 ]. For example, in the 2016 USA election, pro-Trump fake news was accepted by Republicans [ 19 ]; misinformation spreads quickly among people with similar ideologies [ 19 ]. The nature of the interface can also escalate the spread of misinformation. Kim and Dennis [ 36 ] investigated the influence of platforms' information presentation format and reported that social media platforms indirectly push users to accept certain information by presenting it in a way that gives little importance to its source. This presentation is manipulative, since people tend to believe information from a reputed source and are more likely to reject information from a less-known source [ 42 ].

Pennycook et al. [ 39 ] and Garrett and Poulsen [ 40 ] argued that warning tags (or flags) on headlines can reduce the spread of misinformation. However, it is not practical to assign warning tags to all misinformation, as it is generated faster than valid information, and the fact-checking process on social media takes time. People therefore tend to believe that headlines without warning tags are true, so warning tags may not serve their purpose [ 39 ]. Worse, tagging can increase readers' reliance on the tags and lead to misperception: readers assume that all information has been verified and judge untagged false information as more accurate. This phenomenon is known as the implied truth effect [ 39 ]. In this case, a source reputation rating will influence the credibility of the information: the reader gives less importance to a source with a low rating [ 17 , 50 ].

5 Theoretical perspectives of social media misinformation

We identified six theories used in the articles we reviewed in relation to social media misinformation. Rumor theory was used most frequently, serving as the theoretical foundation in several articles [ 9 , 11 , 13 , 37 , 43 ]. Oh et al. [ 9 ] studied citizen-driven information processing on Twitter in three social crises using rumor theory, identifying key variables (source ambiguity, personal involvement, and anxiety) that spread misinformation. The authors further examined the acceptance of hate rumors and the aftermath of a community crisis based on the Bangalore mass exodus of 2012. Liu et al. [ 13 ] used rumor theory to examine the reasons behind the retransmission of messages during disasters. Hazel Kwon and Raghav Rao [ 43 ] investigated how internet surveillance by the government impacts citizens' involvement with cyber-rumors during a homeland security threat. Diffusion theory has also been used in IS research to discern the adoption of technological innovations; researchers have used it to study retweeting behavior among Twitter users (tweet diffusion) during extreme events [ 26 ], investigating information diffusion based on the four major elements of diffusion: innovation, time, communication channels, and social systems. Kim et al. [ 36 ] examined the effect of rating news sources on users' belief in social media articles based on three different rating mechanisms: expert rating, user article rating, and user source rating. Reputation theory was used to show how users discern cognitive biases in expert ratings.

Murungi et al. [ 38 ] used rhetorical theory to argue that fact-checkers have limited effectiveness against fake news that spreads on social media platforms. The study proposed a different approach, focusing on the underlying belief structures that accept misinformation; the theory was used to identify fake news and socially constructed beliefs in the context of Alabama's senatorial election in 2017. Using the third-person effect as theoretical ground, the characteristics of rumor corrections on Twitter have also been examined in the context of the death hoax of Singapore's first prime minister, Lee Kuan Yew [ 28 ]; this paper explored the motives behind collective rumoring and identified the key characteristics of collective rumor correction. Using situational crisis communication theory (SCCT), Paek and Hove [ 44 ] examined how the government could effectively respond to risk-related rumors during national-level crises in the context of a food safety rumor. Refuting the rumor, denying it, and attacking the source of the rumor are the three rumor response strategies suggested by the authors to counter rumor-mongering (Table 2 ).

5.1 Determinants of misinformation in social media platforms

Figure  3 depicts the concepts that emerged from our review using an Antecedents-Misinformation-Outcomes (AMIO) framework, an approach we adapt from Smith et al. [ 66 ]. Originally developed to study information privacy, the Antecedents-Privacy-Concerns-Outcomes (APCO) framework provided a nomological canvas for presenting determinants, mediators, and outcome variables pertaining to information privacy. Following this canvas, we discuss the antecedents, mediators, and outcomes of misinformation as they emerged from prior studies (Fig.  3 ).

figure 3

Determinants of misinformation

Anxiety, source ambiguity, trustworthiness, content ambiguity, personal involvement, social ties, confirmation bias, attractiveness, illiteracy, ease of sharing options and device attachment emerged as the variables determining misinformation in social media.

Anxiety is the emotional state of the person who sends or receives the information; an anxious person is more likely to share or spread misinformation [ 9 ]. Source ambiguity concerns the origin of the message: when a person is convinced of the source of the information, its perceived trustworthiness increases and the person shares it. Content ambiguity addresses the clarity of the information's content [ 9 , 13 ]. Personal involvement denotes how important the information is for both sender and receiver [ 9 ]. Social ties mean that information shared by a family member or social peer influences the person to share it onward [ 9 , 13 ]. From prior literature, it is understood that confirmation bias is one of the root causes of political misinformation. Research on device attachment reveals that users tend to believe and share information received on their personal device [ 34 ]. After receiving misinformation from various sources, users accept it based on their existing beliefs and on social, cognitive, and political factors. Oh et al. [ 15 ] observed that during crises, people have a default tendency to believe unverified information, especially when it helps them make sense of the situation. Misinformation has significant effects on individuals and society: loss of lives [ 9 , 15 , 28 , 30 ], economic loss [ 9 , 44 ], loss of health [ 32 , 35 ], and loss of reputation [ 38 , 43 ] are the major outcomes that emerged from our review.
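The AMIO framing above can be summarized as a simple mapping over the variables named in this section. The grouping of the mediating factors is our reading of the text, not an exhaustive reproduction of Fig. 3.

```python
# Variables from the review, grouped by the AMIO framework's three layers.
amio = {
    "antecedents": [
        "anxiety", "source ambiguity", "trustworthiness",
        "content ambiguity", "personal involvement", "social ties",
        "confirmation bias", "attractiveness", "illiteracy",
        "ease of sharing options", "device attachment",
    ],
    "mediators": [
        "existing beliefs", "social factors",
        "cognitive factors", "political factors",
    ],
    "outcomes": [
        "loss of lives", "economic loss",
        "loss of health", "loss of reputation",
    ],
}

for category, variables in amio.items():
    print(f"{category}: {len(variables)} variables")
```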

5.2 Strategies for controlling the spread of misinformation

Discourse on social media misinformation mitigation has prioritized strategies such as early communication from officials and the use of scientific evidence [ 9 , 35 ]. When people realize that a received message is false, they are less likely to share it with others [ 15 ]. Another strategy is rumor refutation: reducing citizens' intention to spread misinformation by providing real information, which reduces their uncertainty and serves to control misinformation [ 44 ]. Rumor correction models for social media platforms also employ algorithms and crowdsourcing [ 28 ]. The majority of the papers we reviewed suggested fact-checking by experts, source rating of received information, attaching warning tags to headlines or entire news items [ 36 ], and flagging of content by platform owners [ 40 ] as strategies to control the spread of misinformation. Studies on controlling misinformation in the public health context showed that governments can also seek the help of public health professionals to mitigate misinformation [ 31 ].

However, the aforementioned strategies have been criticized for several limitations. Most papers identified confirmation bias as having a significant impact on misinformation mitigation strategies, especially in the political context, where people tend to believe information that matches their prior beliefs. Garrett and Poulsen [ 40 ] argued that during an emergency, misinformation recipients may not be able to judge whether the misinformation is true or false; thus, providing an alternative explanation or the real information to users has more effect than providing a fact-checking report. Studies by Garrett and Poulsen [ 40 ] and Pennycook et al. [ 39 ] reveal a drawback of attaching warning tags to news headlines: once flagging or tagging is introduced, information without tags will be considered true or reliable, creating an implied truth effect. Further, it is not always practical to evaluate all social media posts. Similarly, Kim and Dennis [ 36 ] studied fake news flagging and found that fake news flags did not influence users' beliefs; however, the flags created cognitive dissonance, and users went in search of the truthfulness of the headline. In 2017, Facebook discontinued its fake news flagging service owing to these limitations [ 45 ].

6 Key research gaps and future directions

Although misinformation is a multi-sectoral issue, our systematic review found that interdisciplinary research on social media misinformation is relatively scarce. Confirmation bias is one of the most significant behavioral problems motivating the spread of misinformation, yet the lack of research on it reveals scope for future interdisciplinary work across the fields of data science, information systems and psychology in domains such as politics and health care. In the disaster context, there is scope to study the behavior of first responders and emergency managers to understand their information-exchange patterns with the public. Similarly, future researchers could analyze communication patterns between citizens and frontline workers in the public health context, which may be useful for designing counter-misinformation campaigns and awareness interventions. Since information disorder is a multi-sectoral issue, researchers also need to understand misinformation patterns across multiple government departments to enable coordinated counter-misinformation interventions.

There is a further dearth of studies on institutional responses to misinformation. To fill this gap, future studies could analyze governmental and organizational interventions at the level of policies, regulatory mechanisms and communication strategies. For example, India has no specific law against misinformation, but provisions in the Information Technology Act (IT Act) and the Disaster Management Act can be used to control misinformation and disinformation. An example of an awareness intervention is 'Satyameva Jayate', an initiative launched in Kannur district of Kerala, India, which sensitizes schoolchildren to spot misinformation [ 67 ]. As noted earlier, within research on misinformation in the political context there is a lack of work on strategies adopted by the state to counter misinformation; building on cases like 'Satyameva Jayate' would further contribute to knowledge in this area.

Technology-based strategies adopted by social media platforms to control the spread of misinformation emphasize corrective algorithms, keywords and hashtags [ 32 , 37 , 43 ]. However, these corrective measures have their own limitations. Misinformation-correcting algorithms are ineffective unless applied immediately after the misinformation is created. Researchers use related hashtags and keywords to retrieve content shared on social media platforms, but it may not be possible to cover all the keywords or hashtags employed by users, and algorithms may fail to decipher content shared in regional languages. Another limitation is that platform algorithms recommend and display content based on users' activities and interests, which limits users' access to information from multiple perspectives and thus reinforces their existing beliefs [ 29 ]. A reparative measure is to display corrective information as 'related stories' alongside misinformation. However, Facebook's related stories algorithm activates only when an individual clicks on an outside link, which limits the number of people who see the corrective information. Future research could investigate the impact of related stories as a corrective measure by analyzing the relation between misinformation and the frequency of related stories posted vis-à-vis real information.
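The keyword/hashtag retrieval step described above can be sketched as a simple filter; the posts and tracked-term list here are hypothetical. The coverage limitation is visible immediately: a post using an untracked term or a regional language slips through:

```python
# Hypothetical posts; the last uses a regional-language phrasing with no tracked term.
posts = [
    "Miracle cure for the virus! #fakecure",
    "Vaccine update from the health ministry #vaccine",
    "Wundermittel gegen das Virus",  # no tracked keyword or hashtag -> missed
]

tracked_terms = {"#fakecure", "#vaccine", "cure"}

def retrieve(posts: list, terms: set) -> list:
    """Keep posts containing at least one tracked keyword or hashtag (case-insensitive)."""
    return [p for p in posts if any(t.lower() in p.lower() for t in terms)]

matched = retrieve(posts, tracked_terms)
print(len(matched))  # only 2 of the 3 posts are retrieved
```

Extending coverage means continuously curating the term list per language and platform, which is exactly the practical burden the reviewed studies point to.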

Our review also found a scarcity of research on the spread of misinformation on certain social media platforms, with studies skewed toward a few others. Of the studies reviewed, 15 articles concentrated on misinformation spread on Twitter and Facebook. Although recent news reports make it evident that misinformation and disinformation spread widely through popular messaging platforms such as WhatsApp, Telegram, WeChat and Line, research using data from these platforms is scarce. In the Indian context especially, the magnitude of problems arising from misinformation on WhatsApp is overwhelming [ 68 ]. To address this lacuna, we suggest that future researchers investigate the patterns of misinformation spreading on platforms like WhatsApp. Moreover, message diffusion patterns are unique to each social media platform, so it is useful to study misinformation diffusion on different platforms. Future studies could also address the differential roles, patterns and intensity of the spread of misinformation across messaging and photo/video-sharing social networking services.

As is evident from our review, most research on misinformation is based on the Euro-American context, and the dominant models proposed for controlling misinformation may have limited applicability to other regions. Moreover, the popularity and usage patterns of social media platforms vary across the globe owing to cultural differences and political regimes, so social media researchers need to take cognizance of the empirical experiences of 'left-over' regions.

7 Conclusion

To understand the spread of misinformation on social media platforms, we conducted a systematic literature review in three domains where misinformation is rampant: disaster, health and politics. We reviewed 28 articles relevant to these themes. This is one of the earliest reviews focusing on social media misinformation research across these three sensitive domains. We have discussed how misinformation spreads in the three sectors, the methodologies and theoretical perspectives researchers have used, the Antecedents-Misinformation-Outcomes (AMIO) framework for understanding key concepts and their inter-relationships, and strategies to control the spread of misinformation.

Our review also identified major gaps in IS research on misinformation in social media, including the need for methodological innovations beyond the widely used experimental methods. This study has limitations that we acknowledge. We may not have identified all relevant papers on the spread of misinformation on social media, since some authors may have used different keywords and our inclusion and exclusion criteria were strict. There may also be relevant publications in languages other than English that this review does not cover. Our focus on three domains further restricted the number of papers we reviewed.

Thai, M.T., Wu, W., Xiong, H.: Big Data in Complex and Social Networks, 1st edn. CRC Press, Boca Raton (2017)


Peters, B.: How Social Media is Changing Law and Policy. Fair Observer (2020)

Granillo, G.: The Role of Social Media in Social Movements. Portland Monthly (2020)

Wu, L., Morstatter, F., Carley, K.M., Liu, H.: Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor. 21 (1), 80–90 (2019)


Sam, L.: Mark Zuckerberg: I regret ridiculing fears over Facebook's effect on election. The Guardian (2017)

WEF: Global Risks 2013—World Economic Forum (2013)

Scroll: Communalisation of Tablighi Jamaat Event. Scroll.in (2020)

Cao, L.: Data science: a comprehensive overview. ACM Comput. Surv. 50 (43), 1–42 (2017)

Oh, O., Agrawal, M., Rao, H.R.: Community intelligence and social media services: a rumor theoretic analysis of tweets during social crises. MIS Q. Manag. Inf. Syst. 37 (2), 407–426 (2013)

Mukherjee, A., Liu, B., Glance, N.: Spotting fake reviewer groups in consumer reviews. In: WWW’12—Proceedings of the 21st Annual Conference on World Wide Web, pp. 191–200 (2012)

Cerf, V.G.: Information and misinformation on the internet. Commun. ACM 60 (1), 9–9 (2016)

Tiwana, A., Konsynski, B., Bush, A.A.: Platform evolution: coevolution of platform architecture, governance, and environmental dynamics. Inf. Syst. Res. 21 (4), 675–687 (2010)

Liu, F., Burton-Jones, A., Xu, D.: Rumors on social media in disasters: extending transmission to retransmission. In: PACIS 2014 Proceedings (2014)

Hern, A.: WhatsApp to impose new limit on forwarding to fight fake news. The Guardian (2020)

Oh, O., Gupta, P., Agrawal, M., Raghav Rao, H.: ICT mediated rumor beliefs and resulting user actions during a community crisis. Gov. Inf. Q. 35 (2), 243–258 (2018)

Xiaoyi, A.T.C.Y.: Examining online vaccination discussion and communities in Twitter. In: SMSociety ’18: Proceedings of the 9th International Conference on Social Media and Society (2018)

Lazer, D.M.J., et al.: The science of fake news. Sciencemag.org (2018)

Peter, S.: More Americans are getting their news from social media. forbes.com (2019)

Jang, S.M., et al.: A computational approach for examining the roots and spreading patterns of fake news: evolution tree analysis. Comput. Hum. Behav. 84 , 103–113 (2018)

Mele, N., et al.: Combating Fake News: An Agenda for Research and Action (2017)

Bernhard, U., Dohle, M.: Corrective or confirmative actions? Political online participation as a consequence of presumed media influences in election campaigns. J. Inf. Technol. Polit. 12 (3), 285–302 (2015)

Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature review. MIS Q. 26 (2) (2002)

aisnet.org: Senior Scholars’ Basket of Journals|AIS. aisnet.org . [Online]. Available: https://aisnet.org/page/SeniorScholarBasket . Accessed: 16 Sept 2021

Torres, R.R., Gerhart, N., Negahban, A.: Epistemology in the era of fake news: an exploration of information verification behaviors among social networking site users. ACM 49 , 78–97 (2018)

Abdullah, N.A., Nishioka, D., Tanaka, Y., Murayama, Y.: Why I retweet? Exploring user’s perspective on decision-making of information spreading during disasters. In: Proceedings of the 50th Hawaii International Conference on System Sciences (2017)

Lee, J., Agrawal, M., Rao, H.R.: Message diffusion through social network service: the case of rumor and non-rumor related tweets during Boston bombing 2013. Inf. Syst. Front. 17 (5), 997–1005 (2015)

Mondal, T., Pramanik, P., Bhattacharya, I., Boral, N., Ghosh, S.: Analysis and early detection of rumors in a post disaster scenario. Inf. Syst. Front. 20 (5), 961–979 (2018)

Chua, A.Y.K., Cheah, S.-M., Goh, D.H., Lim, E.-P.: Collective rumor correction on the death hoax. In: PACIS 2016 Proceedings (2016)

Bode, L., Vraga, E.K.: In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J. Commun. 65 (4), 619–638 (2015)

Simon, T., Goldberg, A., Leykin, D., Adini, B.: Kidnapping WhatsApp—rumors during the search and rescue operation of three kidnapped youth. Comput. Hum. Behav. 64 , 183–190 (2016)

Ghenai, A., Mejova, Y.: Fake cures: user-centric modeling of health misinformation in social media. In: Proceedings of ACM Human–Computer Interaction, vol. 2, no. CSCW, pp. 1–20 (2018)

Chua, A.Y.K., Banerjee, S.: To share or not to share: the role of epistemic belief in online health rumors. Int. J. Med. Inf. 108 , 36–41 (2017)

Kou, Y., Gui, X., Chen, Y., Pine, K.H.: Conspiracy talk on social media: collective sensemaking during a public health crisis. In: Proceedings of ACM Human–Computer Interaction, vol. 1, no. CSCW, pp. 1–21 (2017)

Gu, R., Hong, Y.K.: Addressing health misinformation dissemination on mobile social media. In: ICIS 2019 Proceedings (2019)

Bode, L., Vraga, E.K.: See something, say something: correction of global health misinformation on social media. Health Commun. 33 (9), 1131–1140 (2018)

Kim, A., Moravec, P.L., Dennis, A.R.: Combating fake news on social media with source ratings: the effects of user and expert reputation ratings. J. Manag. Inf. Syst. 36 (3), 931–968 (2019)

Chua, A.Y.K., Banerjee, S.: Intentions to trust and share online health rumors: an experiment with medical professionals. Comput. Hum. Behav. 87 , 1–9 (2018)

Murungi, D., Purao, S., Yates, D.: Beyond facts: a new spin on fake news in the age of social media. In: AMCIS 2018 Proceedings (2018)

Pennycook, G., Bear, A., Collins, E.T., Rand, D.G.: The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag. Sci. (2020). https://doi.org/10.1287/mnsc.2019.3478

Garrett, R., Poulsen, S.: Flagging Facebook falsehoods: self-identified humor warnings outperform fact-checker and peer warnings. J. Comput. Commun. (2019). https://doi.org/10.1093/jcmc/zmz012

Shin, J., Thorson, K.: Partisan selective sharing: the biased diffusion of fact-checking messages on social media. J. Commun. 67 (2), 233–255 (2017)

Kim, A., Dennis, A.R.: Says who? The effects of presentation format and source rating on fake news in social media. MIS Q. (2019). https://doi.org/10.25300/MISQ/2019/15188

Hazel Kwon, K., Raghav Rao, H.: Cyber-rumor sharing under a homeland security threat in the context of government Internet surveillance: the case of South–North Korea conflict. Gov. Inf. Q. 34 (2), 307–316 (2017)

Paek, H.J., Hove, T.: Effective strategies for responding to rumors about risks: the case of radiation-contaminated food in South Korea. Public Relat. Rev. 45 (3), 101762 (2019)

Moravec, P.L., Minas, R.K., Dennis, A.R.: Fake news on social media: people believe what they want to believe when it makes no sense at all. MIS Q. (2019). https://doi.org/10.25300/MISQ/2019/15505

Madraki et al.: Characterizing and comparing COVID-19 misinformation across languages, countries and platforms. In: WWW ’21 Companion Proceedings of Web Conference (2021)

Shahi, G.K., Dirkson, A., Majchrzak, T.A.: An exploratory study of COVID-19 misinformation on Twitter. Public Heal. Emerg. COVID-19 Initiat. 22 , 100104 (2021)

Otala, M., et al.: Political polarization and platform migration: a study of Parler and Twitter usage by United States of America Congress Members. In: WWW ’21 Companion Proceedings of Web Conference (2021)

Paul, L.J.: Encyclopedia of Survey Research Methods. Sage Research Methods, Thousand Oaks (2008)

WHO Munich Security Conference: WHO.int. [Online]. Available: https://www.who.int/director-general/speeches/detail/munich-security-conference . Accessed 24 Sept 2021

Coleman, A.: Hundreds dead’ because of Covid-19 misinformation—BBC News. BBC News (2020)

Benenson, E.: Vaccine myths Facts vs fiction|VCU Health. vcuhealth.org , 2021. [Online]. Available: https://www.vcuhealth.org/news/covid-19/vaccine-myths-facts-vs-fiction . Accessed 24 Sept 2021

Pierpoint, G.: Kerala floods: fake news ‘creating unnecessary panic’—BBC News. BBC (2018)

Campbell, D.J.: Task complexity: a review and analysis. Acad. Manag. Rev. 13 (1), 40 (1988)

Balan, M.U., Mathew, S.K.: Personalize, summarize or let them read? A study on online word of mouth strategies and consumer decision process. Inf. Syst. Front. 23 , 1–21 (2020)

Shibutani, T.: Improvised News: A Sociological Study of Rumor. The Bobbs-Merrill Company Inc, Indianapolis (1966)

Pezzo, M.V., Beckstead, J.W.: A multilevel analysis of rumor transmission: effects of anxiety and belief in two field experiments. Basic Appl. Soc. Psychol. (2006). https://doi.org/10.1207/s15324834basp2801_8

Li, Y.-J., Cheung, C.M.K. Shen, X.-L., Lee, M.K.O.: Health misinformation on social media: a literature review. In: Association for Information Systems (2019)

Wang, Y., McKee, M., Torbica, A., Stuckler, D.: Systematic literature review on the spread of health-related misinformation on social media. Soc. Sci. Med. 240 , 112552 (2019)

Pappa, D., Stergioulas, L.K.: Harnessing social media data for pharmacovigilance: a review of current state of the art, challenges and future directions. Int. J. Data Sci. Anal. 8 (2), 113–135 (2019)

BBC: Fighting Covid-19 fake news in Africa. BBC News (2020)

Chua, A.Y.K., Aricat, R., Goh, D.: Message content in the life of rumors: comparing three rumor types. In: 2017 12th International Conference on Digital Information Management, ICDIM 2017, vol. 2018, pp. 263–268

Lee, A.R., Son, S.-M., Kim, K.K.: Information and communication technology overload and social networking service fatigue: a stress perspective. Comput. Hum. Behav. 55 , 51–61 (2016)

Foss, K., Foss, S., Griffin, C.: Feminist rhetorical theories (1999). https://doi.org/10.1080/07491409.2000.10162571

Coombs, W., Holladay, S.J.: Reasoned action in crisis communication: an attribution theory-based approach to crisis management. Responding to Cris. A Rhetor. approach to Cris. Commun. (2004)

Smith, H.J., Dinev, T., Xu, H.: Information privacy research: an interdisciplinary review. MIS Q. 35 , 989–1015 (2011)

Ammu, C.: Kerala: Kannur district teaches school kids to spot fake news—the week. theweek.in (2018)

Ponniah, K.: WhatsApp: the ‘black hole’ of fake news in India’s election. BBC News (2019)


This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Department of Management Studies (DoMS), Indian Institute of Technology Madras, Chennai, Tamil Nadu, 600036, India

Sadiq Muhammed T & Saji K. Mathew


Contributions

TMS: Conceptualization, Methodology, Investigation, Writing—Original Draft, SKM: Writing—Review & Editing, Supervision.

Corresponding author

Correspondence to Sadiq Muhammed T .

Ethics declarations

Conflict of interest.

On behalf of both authors, the corresponding author states that there is no conflict of interest in this research paper.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Code | Sub-themes | Frequency | Theme
--- | --- | --- | ---
Situations | Social crisis situations; uncertain situations; real community crisis situations; post-disaster situations; crisis situations; ambiguous situations; unpredictable crisis situations; uncertain crisis situations; emergency situations; disaster situations | 43 | Disaster
Crisis | Emergency crisis communication; unexpected crisis events; crisis scenario; crisis management | 36 | Disaster
Health | Addressing health misinformation dissemination; global health misinformation; online health misinformation; health communication; public health; health pandemic | 77 | Health
Conspiracy | Health-related conspiracy theories | 33 | Health
Rumor | Anti-government rumors | 44 | Politics
Headlines | Political headlines | 30 | Politics
Situations | Political situations; national threat situations; homeland threat situations; military conflict situations | 25 | Politics


About this article

Muhammed T, S., Mathew, S.K. The disaster of misinformation: a review of research in social media. Int J Data Sci Anal 13 , 271–285 (2022). https://doi.org/10.1007/s41060-022-00311-6


Received : 31 May 2021

Accepted : 06 January 2022

Published : 15 February 2022

Issue Date : May 2022

DOI : https://doi.org/10.1007/s41060-022-00311-6


  • Misinformation
  • Information disorder
  • Social media
  • Systematic literature review

May 30, 2024


Misleading COVID-19 headlines from mainstream sources did more harm on Facebook than fake news, study finds

by MIT Sloan School of Management

Reexamining misinformation: How unflagged, factual content drives vaccine hesitancy

Since the rollout of the COVID-19 vaccine in 2021, fake news on social media has been widely blamed for low vaccine uptake in the United States—but research by MIT Sloan School of Management Ph.D. candidate Jennifer Allen and Professor David Rand finds that the blame lies elsewhere.

In a new paper published in Science and co-authored by Duncan J. Watts of the University of Pennsylvania, the researchers introduce a new methodology for measuring social media content's causal impact at scale. They show that misleading content from mainstream news sources—rather than outright misinformation or "fake news"—was the primary driver of vaccine hesitancy on Facebook.

A new approach to estimating impact

"Misinformation has been correlated with many societal challenges, but there's not a lot of research showing that exposure to misinformation actually causes harm," explained Allen.

During the COVID-19 pandemic, for example, the spread of misinformation related to the virus and vaccine received significant public attention. However, existing research has, for the most part, only established correlations between vaccine refusal and factors such as sharing misinformation online—and largely overlooked the role of "vaccine-skeptical" content, which was potentially misleading but not flagged as misinformation by Facebook fact-checkers.

To address that gap, the researchers first asked a key question: What would be necessary for misinformation or any other type of content to have far-reaching impacts?

"To change behavior at scale, content has to not only be persuasive enough to convince people not to get the vaccine, but also widely seen," Allen said. "Potential harm results from the combination of persuasion and exposure."

To quantify content's persuasive ability, the researchers conducted randomized experiments in which they showed thousands of survey participants the headlines from 130 vaccine-related stories—including both mainstream content and known misinformation—and tested how those headlines impacted their intentions to get vaccinated against COVID-19.

Researchers also asked a separate group of respondents to rate the headlines across various attributes, including plausibility and political leaning. One factor reliably predicted impacts on vaccination intentions: the extent to which a headline suggested that the vaccine was harmful to a person's health.

Using the "wisdom of crowds" and natural language processing AI tools, Allen and her co-authors extrapolated those survey results to predict the persuasive power of all 13,206 vaccine-related URLs that were widely viewed on Facebook in the first three months of the vaccine rollout.

By combining these predictions with data from Facebook showing the number of users who viewed each URL, the researchers could predict each headline's overall impact—the number of people it might have persuaded not to get the vaccine. The results were surprising.

The underestimated power of exposure

Contrary to popular perceptions, the researchers estimated that vaccine-skeptical content reduced vaccination intentions 46 times more than misinformation flagged by fact-checkers.

The reason? Even though flagged misinformation was more harmful when seen, it had relatively low reach. In total, the vaccine-related headlines in the Facebook data set received 2.7 billion views—but content flagged as misinformation received just 0.3% of those views, and content from domains rated as low-credibility received 5.1%.

"Even though the outright false content reduced vaccination intentions the most when viewed, comparatively few people saw it," explained Rand. "Essentially, that means there's this class of gray-area content that is less harmful per exposure but is seen far more often —and thus more impactful overall—that has been largely overlooked by both academics and social media companies."

Notably, several of the most impactful URLs within the data set were articles from mainstream sources that cast doubt on the vaccine's safety. For instance, the most-viewed was an article—from a well-regarded mainstream news source—suggesting that a medical doctor died two weeks after receiving the COVID-19 vaccine. This single headline received 54.9 million views—more than six times the combined views of all flagged misinformation.
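A quick arithmetic check shows the reported figures are internally consistent: 0.3% of the 2.7 billion total views is about 8.1 million views of flagged misinformation, so the 54.9-million-view headline was indeed seen more than six times as often:

```python
total_views = 2.7e9
flagged_share = 0.003        # 0.3% of views went to flagged misinformation
low_cred_share = 0.051       # 5.1% went to low-credibility domains

flagged_views = total_views * flagged_share      # about 8.1 million views
low_cred_views = total_views * low_cred_share    # about 137.7 million views
top_headline_views = 54.9e6

print(flagged_views, top_headline_views / flagged_views)  # ratio is roughly 6.8x
```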

While the body of this article did acknowledge the uncertainty of the doctor's cause of death, its "clickbait" headline was highly suggestive and implied that the vaccine was likely responsible. That's significant since the vast majority of viewers on social media likely never click out to read past the headline.

How journalists and social media platforms can help

According to Rand, one implication of this work is that media outlets need to take more care with their headlines, even if that means they aren't as attention-grabbing.

"When you are writing a headline, you should not just be asking yourself if it's false or not," he said. "You should be asking yourself if the headline is likely to cause inaccurate perceptions."

For platforms, added Allen, the research also points to the need for more nuanced moderation—across all subjects, not just public health.

"Content moderation focuses on identifying the most egregiously false information—but that may not be an effective way of identifying the most overall harmful content," she says. "Platforms should also prioritize reviewing content from the people or organizations with the largest numbers of followers while balancing freedom of expression. We need to invest in more research and creative solutions in this space—for example, crowdsourced moderation tools like X's Community Notes."

"Content moderation decisions can be really difficult because of the inherent tension between wanting to mitigate harm and allowing people to express themselves," Rand said. "Our paper introduces a framework to help balance that trade-off by allowing tech companies to actually quantify potential harm."

And the trade-offs could be large. An exploratory analysis by the authors found that if Facebook users hadn't been exposed to this vaccine-skeptical content, as many as 3 million more Americans could have been vaccinated.

"We can't just ignore this gray area-content," Allen concluded. "Lives could have been saved."

Journal information: Science

Provided by MIT Sloan School of Management


  • Bull World Health Organ
  • v.100(9); 2022 Sep 1

Logo of bullwho


Infodemics and health misinformation: a systematic review of reviews

Israel Júnior Borges do Nascimento

a School of Medicine and University Hospital, Federal University of Minas Gerais, Belo Horizonte, Brazil.

Ana Beatriz Pizarro

b Clinical Research Center, Fundación Valle del Lili, Cali, Colombia.

Jussara M Almeida

c Department of Computer Science, Institute of Exact Science, Federal University of Minas Gerais, Brazil.

Natasha Azzopardi-Muscat

d Division of Country Health Policies and Systems, World Health Organization Regional Office for Europe, UN City, Marmorvej 51, 2100 Copenhagen, Denmark.

Marcos André Gonçalves

Maria Björklund

e Faculty of Medicine, Lund University, Lund, Sweden.

David Novillo-Ortiz

Abstract

To compare and summarize the literature regarding infodemics and health misinformation, and to identify challenges and opportunities for addressing the issues of infodemics.

Methods

We searched MEDLINE®, Embase®, Cochrane Library of Systematic Reviews, Scopus and Epistemonikos on 6 May 2022 for systematic reviews analysing infodemics, misinformation, disinformation and fake news related to health. We grouped studies based on similarity and retrieved evidence on challenges and opportunities. We used the AMSTAR 2 approach to assess the reviews’ methodological quality. To evaluate the quality of the evidence, we used the Grading of Recommendations Assessment, Development and Evaluation guidelines.

Results

Our search identified 31 systematic reviews, of which 17 were published. The proportion of health-related misinformation on social media ranged from 0.2% to 28.8%. Twitter, Facebook, YouTube and Instagram play a critical role in the rapid, far-reaching dissemination of such information. The most negative consequences of health misinformation are an increase in misleading or incorrect interpretations of available evidence, impacts on mental health, misallocation of health resources and an increase in vaccination hesitancy. The growth of unreliable health information delays care provision and increases the occurrence of hateful and divisive rhetoric. Social media could also be a useful tool to combat misinformation during crises. Included reviews highlight the poor quality of studies published during health crises.

Conclusion

Available evidence suggests that infodemics during health emergencies have an adverse effect on society. Multisectoral actions to counteract infodemics and health misinformation are needed, including developing legal policies, creating and promoting awareness campaigns, improving health-related content in mass media and increasing people’s digital and health literacy.


Introduction

During crises, such as infectious disease outbreaks and disasters, the overproduction of data from multiple sources, the quality of the information and the speed at which new information is disseminated create social and health-related impacts. 1 – 3 This phenomenon, called an infodemic, involves a torrent of online information containing either false and misleading information or accurate content. 4

To tackle the production of misinformation (that is, false or inaccurate information spread regardless of any intent to deceive) and disinformation (that is, deliberately misleading or biased information; manipulated narratives or facts; and propaganda) during recent pandemics and health emergencies, research on infodemics has increased. This research focuses on understanding the general effect of infodemics on society, their dissemination patterns and the delineation of appropriate countermeasure policies. 5 – 7 Several studies analyse the effects of infodemics and misinformation and how societal behaviours are affected by such information. 8 – 10 In particular, evaluating infodemic-related concepts using comprehensive and evidence-based criteria, such as the impact on people’s lives and communities and the frequency and most common sources of widespread unreliable data, has gained attention. 11 Therefore, assessing how infodemics and health misinformation affect public health, and identifying the availability and quality of evidence-based infodemic characteristics, is timely and pertinent to inform appropriate management of their potential harms and to support the development of monitoring guidelines.

We conducted a systematic review of reviews to collate, compare and summarize the evidence from the recent infodemics. To improve and guide the infodemic management, we designed our study to identify current opportunities, knowledge gaps and challenges in addressing the negative effects of the dissemination of health misinformation on public health.

We registered our systematic review in PROSPERO (CRD42021276755). The review adheres to the Preferred Reporting Items for Systematic reviews and Meta-Analyses 2020 and the Quality of Reporting of Meta-analyses statement. 12 , 13

We explored the following research questions: (i) To what extent are evidence-based studies addressing peculiarities and singularities associated with infodemics? (ii) What types of information on the topic of infodemics are published in systematic reviews? (iii) What main challenges, opportunities and recommendations for addressing infodemics did systematic review authors highlight? and (iv) What is the methodological and reporting quality of published systematic reviews addressing research questions related to infodemics?

Inclusion criteria

We used a published definition of systematic reviews 14 and included a systematic review or mini-review if: (i) the search was conducted in at least two databases; (ii) the study had at least two authors; and (iii) the study comprehensively presented a methods section or a description of inclusion and exclusion criteria. We only included systematic reviews that directly analysed the available evidence related to infodemics, misinformation, disinformation, health communication, information overload and fake news (defined as: purposefully crafted, sensational, emotionally charged, misleading or totally fabricated information that mimics the form of mainstream news). We excluded preprints, unpublished data and narrative or literature reviews.

Search methods

With an information specialist, we designed the search strategy using medical subject headings and specific keywords ( Box 1 ). We had no restriction on publication date or languages. We searched five databases (MEDLINE®, Embase®, Cochrane Library of Systematic Reviews, Scopus and Epistemonikos), explored the reference lists of the included studies and searched for potential review protocols registered on PROSPERO. We first conducted the search on 4 November 2021 and we re-ran the search on 6 May 2022.

Box 1

Search strategy for the systematic review on infodemics and health misinformation.

#1 Communication OR consumer health information OR information dissemination OR health literacy

#2 (infodemic* OR misinformation OR disinformation OR information dissemination OR information sharing* OR information overload) OR (fake new* OR influencer* OR conspirac* OR hate* OR infoxication) OR ((viral AND (news OR social media OR media)) OR (consumer health information OR health literacy OR health information literacy))

#3 - #1 OR #2

#4 Systematic review as topic OR PT Systematic review OR AB “systematic review” OR TI “systematic review”

#5 - #3 AND #4
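As a non-authoritative illustration of how nested boolean groups like those in Box 1 combine, the strategy can be sketched programmatically. The helper `any_of`, the selection of terms and the title/abstract field tags below are our assumptions for illustration, not the authors' code or the exact database syntax:

```python
# Illustrative sketch only: assembling nested boolean groups, in the
# style of Box 1, into a single query string of the kind one might
# submit to a database interface that accepts raw boolean syntax.

def any_of(*terms: str) -> str:
    """Join terms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(terms) + ")"

topic = any_of("infodemic*", "misinformation", "disinformation",
               "information dissemination", "information sharing*",
               "information overload", "fake new*", "conspirac*")
scope = any_of("communication", "consumer health information",
               "health literacy")
design = any_of('"systematic review"[ti]', '"systematic review"[ab]')

# Mirrors lines #3 and #5 of Box 1: (topic OR scope) AND design.
query = any_of(topic, scope) + " AND " + design
print(query)
```

Because every group is wrapped by the same helper, the parentheses stay balanced however deeply the groups nest.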

After removing duplicates, two authors independently screened the titles, abstracts and full texts of articles and included eligible articles for evaluation. A third, independent author resolved any disagreements. We performed the screening process in Covidence (Covidence, Melbourne, Australia).

Data collection and analysis

Two independent researchers extracted the general characteristics of each study and classified them into six major categories: (i) reviews evaluating negative effects of misinformation; (ii) reviews assessing the sources of health misinformation and the most used platforms; (iii) reviews evaluating the proportion of health-related misinformation on social media; (iv) reviews evaluating the beneficial features of social media use; (v) reviews associated with corrective measures against health misinformation; and (vi) reviews evaluating characteristics associated with studies’ quality. We clustered systematic reviews based on similar properties associated with the stated objective and the reported outcomes. Although infodemics were primarily defined as the overabundance of information, usually with a negative connotation, we decided to report data from systematic reviews which also described the potential beneficial effects of the massive circulation of information and knowledge during health emergencies. We summarized challenges and opportunities associated with infodemics and misinformation. A third author verified the retrieved data and another author resolved any inter-reviewer disagreement.

Assessment of methodological quality

Two authors independently appraised the quality of included systematic reviews using the AMSTAR 2 tool, which contains 16 domains. 15 We rated each categorical domain using the online platform and obtained an overall score of critical and non-critical domains. We resolved inter-rater discrepancies through discussion. We calculated inter-rater reliability with Cohen’s κ and classified reliability as adequate if κ > 0.85.
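Cohen’s κ compares the raters’ observed agreement with the agreement expected by chance from each rater’s marginal frequencies. A minimal sketch with hypothetical include/exclude decisions (not the study’s data):

```python
# Minimal sketch of Cohen's kappa for two raters' include/exclude
# decisions, as used to check inter-rater reliability (threshold
# kappa > 0.85). The two rating lists below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "include", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 3))  # prints 0.667
```

With one disagreement in six items and these marginals, κ ≈ 0.67, below the 0.85 threshold the authors set; the reported κ of 0.9867 reflects far closer agreement.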

Data synthesis

We synthesized the characteristics of included reviews, reporting their primary outcomes categorized by the similarity of the review question or results. Additionally, we created summary tables showing current evidence and knowledge gaps. We rated the certainty of the evidence through an adapted version of the Grading of Recommendations Assessment, Development and Evaluation approach for the defined primary outcomes. 16 , 17

We identified 9008 records and, after removing 443 duplicates, screened 8565 studies, of which 111 were eligible for full-text assessment. Of these, we excluded 80 studies (available in the data repository). 18 We included 31 systematic reviews: 17 studies published between 2018 and 2022, 19 – 35 three awaiting classification (we were unable to retrieve their full texts during our review) 36 – 38 and 11 ongoing reviews ( Fig. 1 ). 39 – 49 Inter-rater reliability was high ( κ  = 0.9867).
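The reported screening flow can be checked arithmetically; a brief consistency sketch using only the counts stated above:

```python
# Consistency check of the PRISMA-style screening flow reported in
# the text (our sketch; all counts are the authors' reported numbers).
identified = 9008
duplicates = 443
screened = identified - duplicates            # title/abstract screening
full_text = 111                               # eligible for full-text assessment
excluded_full_text = 80
included = full_text - excluded_full_text     # included systematic reviews
published, awaiting, ongoing = 17, 3, 11      # breakdown of the included set

assert screened == 8565
assert included == 31
assert included == published + awaiting + ongoing
print(screened, included)
```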

Fig. 1. Selection of systematic reviews on infodemics and health misinformation

Note: We denoted studies for which we were unable to retrieve the full text, even after an exhaustive search, as awaiting classification. These studies were counted among the included studies based on the inclusion criteria; however, we cannot guarantee that these records are definitely eligible for inclusion.

Out of 17 published systematic reviews, 14 were published after the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak. 19 – 35 The published reviews included 1034 primary studies covering 12 infectious diseases and three major topics (vaccination hesitancy, disaster communication and disease outbreaks) related to infodemics, misinformation, disinformation, fake news or any other variation of these terms ( Table 1 ). The included reviews covered 19 official scientific databases.

Review, year | No. of databases (names) | No. of studies (study types) | Study objective
Abbott et al., 2022 | 8 (PubMed®, Epistemonikos, Cochrane Library of Systematic Reviews, Cochrane COVID-19 Study Register, Embase®, CINAHL, Web of Science and WHO databases) | 280 (systematic reviews, overviews and meta-analyses) | To map the nature, scope and quality of evidence syntheses on COVID-19 and to explore the relationship between review quality and the extent of researcher, policy and media interest
Alvarez-Galvez et al., 2021 | 7 (Scopus, MEDLINE®, Embase®, CINAHL, Sociological Abstracts, Cochrane Library of Systematic Reviews and grey literature) | 42 (quantitative and qualitative studies and mixed-methods studies) | To identify the factors that make possible the spread of medical and health misinformation during outbreaks and to reveal the needs and future directions for the development of new protocols that might contribute to the assessment and control of information quality in future infodemics
Aruhomukama & Bulafu, 2021 | 2 (PubMed® and CINAHL) | 10 (quantitative and qualitative studies) | To interrogate and integrate the knowledge levels and media sources of information reported by studies on knowledge, attitudes, perceptions and practices towards COVID-19 conducted in low- and middle-income countries in Africa
Bhatt et al., 2021 | 4 (MEDLINE®, Embase®, Cochrane Databases and Google) | 5 (quantitative and qualitative studies) | To assess the current use of social media in clinical practice guideline dissemination across different medical specialties
Eckert et al., 2018 | 8 (PubMed®, Web of Science, CINAHL, CINAHL Complete, Communication and Mass Media Complete, PsychInfo®, WHO databases and Google Scholar), along with social media companies' reports | 79 (quantitative and qualitative studies and case studies) | To conduct a systematic review of the extant literature on social media use during all phases of a disaster cycle
Gabarron et al., 2021 | 5 (PubMed®, Scopus, Embase®, PsychInfo® and Google Scholar) | 22 (mixed-methods studies) | To review misinformation related to COVID-19 on social media during the first phase of the pandemic and to discuss ways to counter misinformation
Gunasekeran et al., 2022 | 3 (PubMed®, including MEDLINE®, and Institute of Electrical and Electronics Engineers Xplore) | 35 (quantitative and qualitative studies) | To highlight a brief history of social media in health care and report its potential negative and positive public health impacts
Lieneck et al., 2022 | 2 (EBSCOhost and PubMed®) | 25 (quantitative and qualitative studies) | To identify common facilitators and barriers in the literature which influence the promotion of vaccination against COVID-19
Muhammed & Mathew, 2022 | 7 (Web of Science, ACM Digital Library, AIS Electronic Library, EBSCOhost, ScienceDirect, Scopus and SpringerLink) | 28 (quantitative and qualitative studies) | To identify relevant literature on the spread of misinformation
Patel et al., 2020 | 6 (all databases of Web of Science, PubMed®, ProQuest, Google News, Google and Google Scholar) | 35 | To canvas the ways disinformation about COVID-19 is being spread in Ukraine, so as to form a foundation for assessing how to mitigate the problem
Pian et al., 2021 | 12 (PubMed®, CINAHL Complete, PsychInfo®, Psych Articles, ScienceDirect, Wiley Online Library, Web of Science, EBSCO, Communication & Mass Media Complete, Library, Information Science & Technology Abstracts and Psychology & Behavioral Sciences Collection) | 251 (quantitative and qualitative studies) | To synthesize the existing literature on the causes and impacts of the COVID-19 infodemic
Rocha et al., 2021 | 3 (MEDLINE®, Virtual Health Library and SciELO) | 14 (quantitative and qualitative studies) | To evaluate the impact of social media on the dissemination of infodemic knowing and its impacts on health
Suarez-Lledo & Alvarez-Galvez, 2021 | 2 (MEDLINE® and PREMEDLINE) | 69 (policy briefs and technical reports) | To identify the main health misinformation topics and their prevalence on different social media platforms, focusing on methodological quality and the diverse solutions that are being implemented to address this public health concern
Tang et al., 2018 | 5 (PubMed®, PsychInfo®, CINAHL Plus, ProQuest® and Communication Source) | 30 (quantitative and qualitative studies) | To better understand the status of existing research on emerging infectious disease communication on social media
Truong et al., 2022 | 4 (PsychInfo®, MEDLINE®, Global Health and Embase®) | 28 (quantitative and qualitative studies) | To examine the factors that promote vaccine hesitancy or acceptance during pandemics, major epidemics and global outbreaks
Walter et al., 2021 | 7 (Communication Source, Education Resources Information Center, Journal Storage, MEDLINE®, ProQuest, PubMed® and Web of Science) | 24 (quantitative and qualitative studies) | To evaluate the relative impact of social media interventions designed to correct health-related misinformation
Wang et al., 2019 | 5 (PubMed®, Cochrane Library of Systematic Reviews, Web of Science, Scopus and Google Scholar) | 57 (mixed-methods studies) | To uncover the current evidence and better understand the mechanisms of misinformation spread
Adu et al., 2021 | NA | NA | To estimate COVID-19 vaccine uptake and hesitancy rates before and after the first COVID-19 vaccine was approved by the FDA
Dong et al., 2022 | NA | NA | To review and synthesize the findings from qualitative studies conducted in different countries on the emergence, spread and consequences of false and misleading information about the pandemic
Fazeli et al., 2021 | NA | NA | Awaiting classification (limited access to the full-text file)
Gentile et al., 2021 | NA | NA | Awaiting classification (limited access to the full-text file)
Goldsmith et al., 2022 | NA | NA | To determine the extent and nature of social media use in migrant and ethnic minority communities for COVID-19 information and the implications for preventative health measures, including vaccination intent and uptake
Hilberts et al., 2021 | NA | NA | To establish the risk that health misinformation in social media poses to public health
Karimi-Shahanjarin et al., 2021 | NA | NA | To identify what initiatives and policies have been suggested and implemented to respond to and alleviate the harm caused by misinformation and disinformation concerning COVID-19
McGowan & Ekeigwe, 2021 | NA | NA | To assess whether exposure to misinformation or disinformation influences health information-seeking behaviours
Pauletto et al., 2021 | NA | NA | To evaluate the pros and cons of using social media during the COVID-19 pandemic
Pimenta et al., 2020 | NA | NA | To gather evidence on the impact of information about COVID-19 on the mental health of the population
Prabhu & Nayak, 2021 | NA | NA | To appraise the effects of the COVID-19 media-based infodemic on the mental health of the general public
Trushna et al., 2021 | NA | NA | To undertake a mixed-methods systematic review exploring COVID-19 stigmatization, in terms of differences in experience and/or perception of different population sub-groups exposed to COVID-19, its mediators including media communications, coping strategies adopted to deal with such stigmata and the consequences in terms of health effects and health-seeking behaviour of affected individuals
Vass et al., 2022 | NA | NA | Awaiting classification (limited access to the full-text file)
Zhai et al., 2021 | NA | NA | To provide an overview of the current state of research concerning individual-level psychological and behavioural responses to COVID-19-related information from different sources, as well as presenting the challenges and future research directions

COVID-19: coronavirus disease 2019; FDA: Food and Drug Administration; NA: not applicable; WHO: World Health Organization.

a There was an inconsistency between the used databases provided in the study’s abstract and those presented in the methods section. We considered the databases shown in the methods section.

The main outcomes, categorized in six themes, are summarized in Box 2 and by study in Table 2 . Below we describe the outcomes, by theme, in more detail.

Box 2

Summary of studies’ outcomes

Effects of infodemics, misinformation, disinformation and fake news (10 studies)

  • Reduce patients’ willingness to vaccinate
  • Obstruct measures to contain disease outbreaks
  • Instigate the physical interruption of access to health care
  • Amplify and promote discord to enhance political crisis
  • Increase social fear, panic, stress and mental disorders
  • Enhance misallocation of resources
  • Weaken and slow countermeasure interventions
  • Exacerbate poor quality content creation

Source of health misinformation propagation (six studies)

  • Social media platforms are identified as a potential source promoting anecdotal evidence, rumours, fake news and general misinformation
  • Twitter, Facebook, Instagram and blogs play an important role in spreading rumours and speculating on health-related content during pandemics
  • Digital influencers or well-positioned individuals act as distractors or judges in social networks
  • Closed communication within online communities can be used to propagate and reverberate unreliable health information
  • Misinformation can be derived from poor quality scientific knowledge

Proportion of health misinformation on social media (four studies)

  • Health misinformation in posts on social media is common (1–51% of posts associated with vaccines, 0.2–28.8% of posts associated with COVID-19 and 4–60% of posts about pandemics)
  • Approximately 20–30% of the YouTube videos about emerging infectious diseases contain inaccurate or misleading information

Adequate use of social media (eight studies)

  • Social media platforms and traditional media might be useful during crisis communication and during emerging infectious disease pandemics, regardless of the geographical settings
  • When social media are used properly, infosurveillance can be highly effective in tracking disease outbreaks
  • Social media can improve knowledge acquisition, awareness, compliance and positive behaviour towards adherence to clinical infection protocols and behaviours

Corrective interventions (four studies)

  • Misinformation delivered by health professionals is harder to correct than misinformation delivered by health agencies
  • Misinformation corrected by experts is more effective than when corrected by non-experts
  • The effectiveness of correcting misinformation using text or images is similar
  • Use of refutational messages, directing the user to evidenced-based information platforms, creation of legislative councils to battle fake news and increase health literacy are shown to be effective countermeasures

Overall quality of publications during infodemics (three studies)

  • Most studies published during an infodemic are of low methodological quality
  • There is a substantial overlap of published studies addressing the same research questions during an infodemic

COVID-19: coronavirus disease 2019.

Note: Grading of evidence is presented in Table 4.

Review (disease and/or condition) | Summary of findings
Abbott et al. (SARS-CoV-2) • Overlap of published studies related to SARS-CoV-2 between 10 and 15 June 2020 (for example, 16 reviews addressed cerebrovascular-related comorbidities and COVID-19, as well as 13 reviews evaluating the broad topic related to chloroquine and hydroxychloroquine).
• Despite the rapid pace to gather evidence during the pandemic, published studies were lacking in providing crucial methodological and reporting components (for instance, less than half of included studies critically appraised primary studies, only a fifth of included reviews had an information specialist involved in the study, and only a third registered a protocol).
• Lack of transparent searching strategies and a lack of assessment and consideration of potential limitations and biases within the included primary studies limits the validity of any review and the generalizability of its findings.
• The lack of prior registration of a review protocol was directly associated with poor quality of evidence.
• Even though some reviews had been considered of low methodological quality, social media and academic circles highlighted these studies.
Alvarez-Galvez et al. (SARS, H1N1 and H7N9 influenza viruses, Ebola virus, Zika virus, Dengue virus, generic diseases, poliomyelitis) • The authors identified five determinants of infodemics: (i) information sources; (ii) online communities' structure and consensus; (iii) communication channels; (iv) message content; and (v) health emergency context.
• Health misinformation can propagate through influencers, opinion leaders or well-positioned individuals who may act as distractors or judges in specific social networks, and certain philosophies and ideologies have a greater impact on individuals with low health literacy.
• Misinformation is frequently derived from poor quality scientific knowledge.
• Traditional media can contribute to the wrong interpretation of existing scientific evidence.
• Opinion polarization and echo chamber effects can increase misinformation owing to homophily between social media users. For instance, on Facebook and Twitter, people tend to spread both reliable and unreliable information to their networks.
• Misleading health contents propagate and reverberate among closed online communities which ultimately reject expert recommendations and research evidence.
• Although social media platforms offer specialists opportunities to convey accurate information, they also offer non-specialists opportunities to counter it by spreading misinformation and exacerbating outrage.
• Mass media can propagate poor-quality information during public health emergencies: it seems to be an ideal channel to spread anecdotal evidence, rumours, fake news and general misinformation on treatments and existing knowledge about health topics.
• Included studies demonstrated that the number of high-quality platforms with health-related content is limited and these have several issues (e.g. language restriction and failure to publicize).
• Alarmist, misleading, shorter messages and anecdotal evidence seem to have a stronger impact on the spread of misinformation.
Aruhomukama & Bulafu (SARS-CoV-2) • Forty per cent of included studies showed that nearly all of the respondents had heard about COVID-19, while only one included study stated that participants had inadequate knowledge of COVID-19.
• Participants reported that social media and local television and radio stations were their major source of information with regards to COVID-19.
• In two studies, participants confirmed that their family members and places of worship (churches and mosques) were the main information resource.
• Authors also suggest the SARS-CoV-2 pandemic has not dramatically affected Africa due to high levels of knowledge, positive attitudes and perceptions and good practices for infection control.
• Authors also suggest the need for health agencies to track misinformation related to COVID-19 in real time, and to involve individuals, communities and societies at large to demystify misinformation.
Bhatt et al. (neurological, gastrointestinal, cardiovascular and urological diseases) • Based on included studies, there was a significant improvement in knowledge, awareness, compliance, and positive behaviour towards clinical practice guidelines with the use of social media dissemination compared to standard methods.
• Included studies found that social media has a crucial role in rapid and global information exchange among medical providers, organizations and stakeholders in the medical field, and its power can be harnessed in the dissemination of evidence-based clinical practice guidelines that guide physicians in practice.
• Methods for data dissemination varied from systematic tweets on clinical practice guidelines at regular intervals using a social media model, audio podcasts and videos on YouTube. Studies also found that the mixture of written text and visual images on social media with links to medical guidelines, multimedia marketing, and production company-led paid social media advertising campaigns also has great effect in improving knowledge.
• The review did not find any standardized method of analysing the impact of social media on clinical practice guidelines dissemination as the methods of dissemination were highly variable.
Eckert et al. (disaster communication) • Each social media platform used for information streaming is beneficial during crisis communication for government agencies, implementing partners, first responders, and the public to create two-way conversations to exchange information, create situational awareness and facilitate delivery of care.
• Social media mostly focused on spreading verified information and eliminating rumours via crowd-sourced peer rumour control, sometimes combined with quick and effective myth-busting messages by government officials.
• Social media must be combined with other channels, especially with messages on traditional news media as they still have high credibility and were most often referenced on Twitter and social media.
• Social media should be used by agencies, first responders and the public to monitor public reactions during a crisis, to address the public, create situational awareness, for citizen's peer-to-peer communication and aid, and to solicit responses from the ground (specifically of those individuals who are directly affected by a disaster).
• Social media can also be effective during the preparation phase as it can train potentially vulnerable populations who would need to be evacuated.
• Social media should be used to send and receive early warning messages during all phases of the disaster, to share information on the situation on the ground during onset and containment phases, and to inform friends, families and communities about aid, food, and evacuees during the containment phase. Twitter was suggested as a tool to map in real time the spread of floods and assess damage during a disaster.
Gabarron et al. (SARS-CoV-2) • Six of 22 studies that reported the proportion of misinformation related to SARS-CoV-2 showed that misinformation was presented on 0.2% (413/212 846) to 28.8% (194/673) of posts.
• Eleven studies did not categorize the specific type of COVID-19-related misinformation, nine described specific misinformation myths and two categorized the misinformation as sarcasm or humour related to the disease.
• Four studies examined the effect of misinformation (all reported that it led to fear and panic). One of the four reported that misallocation of resources and stress experienced by medical workers were also possible consequences of misinformation.
• One study reported that approximately 46.8% (525/1122) of survey respondents were tired of COVID-19 being the main theme across all media.
• Four studies mentioned increasing the health literacy of social media users.
• These studies highlighted the need to educate social media users on how to determine what information is reliable and to encourage them to assume personal responsibility for not circulating false information.
Gunasekeran et al. (SARS-CoV-2 and COVID-19) • The exponential potential of social media for information dissemination has been strategically used for positive impact in the past. They can be applied to reinvigorate public health promotion efforts and raise awareness about diseases.
• The epidemiological value of social media applications includes surveillance of information, disease syndromes and events (outbreak tracing, needs or shortages during disasters).
• To draw attention to accurate information, social media seems to present a potential tool for governments to (i) rapidly assess public reaction to an outbreak, (ii) identify critical time points and topics that need to be addressed, and (iii) rapidly disseminate vital public health communication during outbreaks.
• The review suggested that infoveillance (i.e. information surveillance) is the detection of events using web-based data, which can be faster than traditional surveillance methods. Earlier studies have successfully illustrated the use of microblogs and users’ geographical locations to track infectious disease outbreaks in many countries.
• Although social media has the potential for positive public health utility, it can also amplify poor quality content. Public fear and anxiety are known to be heightened by sensational reporting in the media during outbreaks, a phenomenon heightened by the ease of sharing on social media.
• Despite the negative impact of social media in propagating infodemics, it also provides a reservoir of user-generated content as individuals share a range of topics from emotions to symptoms.
• Social media has also been applied as a tool for grassroots health promotion initiatives.
Lieneck et al. (SARS-CoV-2 and COVID-19) • One of the largest barriers to vaccine promotion through social media during the COVID-19 pandemic has been misinformation spread on social media.
• Many sites such as Twitter and Facebook do not directly monitor these falsehoods, which can be detrimental to acceptance of the COVID-19 vaccine and to efforts to stop the virus.
• As vaccine hesitancy grows, social media can either be a tool to encourage greater protection via the COVID-19 vaccine or continue to fill knowledge gaps with misinformation preventing vaccination.
• During the COVID-19 pandemic specifically, studies show that social media is contributing to the spread of misinformation about the vaccine, and that individuals who were hesitant about the vaccine were more likely to only use social media as their source of news.
• Owing to a lack of regulation, vaccine scepticism can spread widely via social media channels, which can particularly affect the COVID-19 vaccine acceptance rate.
• As social media continues to rise in popularity, it has the potential to be an effective source of public health information that is accessible and up to date.
• Social media platforms are increasing their efforts to reduce the amount of misinformation by limiting the untrue information and directing people to evidence-based websites. One potential strategy for controlling the spread of misinformation suggests the use of elaborated refutational messages, which can reduce misperceptions because they help people understand the flaws of misinformation.
Muhammed & Mathew (COVID-19, Australian Bushfire and the USA elections) • When a crisis occurs, affected communities often experience a lack of localized information needed for them to make emergency decisions.
• Information overload and information dearth are the two concerns that interrupt the communication between the affected community and a rescue team.
• Dread rumours appear more trustworthy and are more likely to go viral; a dread rumour was the cause of violence against a minority group during COVID-19.
• Political misinformation has been used predominantly to influence voters. Misinformation spreads quickly among people who share similar ideologies.
• Confirmation bias has a dominant role in social media misinformation related to politics. Readers are more likely to read and engage with the information that confirms their pre-existing beliefs and political affiliations and reject information that challenges it.
• Health misinformation could delay proper treatment, which could further deteriorate patients’ health status and affect relevant outcomes, including mortality rate.
• In the context of emergency situations (unforeseen circumstances), the credibility of social media information has often been questioned mostly by users, lawmakers, health professionals and the media.
• The broadcasting power of social media and re-sharing of misinformation could weaken and slow down rescue operations.
• Discourse on social media misinformation mitigation has resulted in prioritization of strategies such as early communication from the officials and use of scientific evidence.
• Rumour correction models for social media platforms employ algorithms, mathematical simulations and crowdsourcing.
• Studies on controlling misinformation in the public health context showed that the government could also seek the help of public health professionals to mitigate misinformation.
Patel et al. (SARS-CoV-2) • The disinformation related to crisis communication about COVID-19 was focused on eroding trust in the government’s response and the accuracy of the official health messaging or misleading the public about accessing and receiving resources or support.
• Decreased trust in governments and public health systems leads to disregard for the official health advice and impacts the population’s medical decision-making, often with serious detrimental effects.
• The combination of actions to decrease trust in governments and health-related organizations are compounded in disadvantaged or vulnerable populations, such as those living in poverty, regions of conflict or in areas with poor infrastructure. The communication crisis faced during the COVID-19 pandemic can be attributed to a legacy of government mistreatment and a general lack of access to reliable information, which strengthens the impact of disinformation campaigns.
• Disinformation campaigns in Ukraine were maliciously designed and executed to amplify and promote discord for political impact, particularly in the context of the ongoing war.
• Disinformation instigated the physical interruption of access to health care.
Pian et al. (COVID-19) • Social media use and low level of health and/or eHealth literacy were identified as the major causes of the infodemic.
• There is a pattern of spiral-like interactions between rumour-spreading and psychological issues. Integrating psychological variables with models of rumour-sharing behaviour might be beneficial.
• Multidisciplinary empirical studies should be conducted to validate the effectiveness of countermeasures applied to multiple groups (such as low level of health/eHealth literacy, social media/mass media platforms, governments, and organizations). Even if the countermeasures seem logical, how effective they are when applied in different contexts (e.g. different geographical regions, user profile, social media platform, etc.) need to be investigated.
• One of the major causes of the infodemic is social media use, although social media can play a positive or negative role.
• The rapid publication of editorials, commentaries, viewpoints and perspectives is also mentioned by the authors of the review as a major cause of the infodemic, owing to their low level of certainty and evidence.
• Negative impacts were identified and related to the infodemic, including public psychological issues, breakdown of trust, inappropriate protective measures, panic purchase and the global economy.
• The authors proposed various countermeasures against the COVID-19 infodemic, grouped into the following categories: countermeasures targeting low health and/or eHealth literacy; social media/mass media platforms; governments and organizations; and risk communication and health information needs and seeking.
Rocha et al. (COVID-19) • Infodemic can cause psychological disorders and panic, fear, depression and fatigue.
• Many occurrences were false news masquerading as reliable disease prevention and control strategies, which created an overload of misinformation.
• Different age groups interact differently with the fake news propagated by social media. A specific focus should be given to people older than 65 years as they usually have limited skills managing social media systems.
• Social media has contributed to the spread of false news and conspiracy theories during the COVID-19 pandemic.
• Infodemic is part of people’s lives around the world, causing distrust in governments, researchers and health professionals, which can directly impact people’s lives and health.
• During the COVID-19 pandemic, the disposition to spread incorrect information or rumours is directly related to the development of anxiety in populations of different ages.
Suarez-Lledo & Alvarez-Galvez (vaccines, smoking, drugs, noncommunicable diseases, COVID-19, diet and eating disorders) • Health topics were ubiquitous on all social media platforms included in the study. However, the health misinformation proportion for each topic varied depending on platform characteristics.
• The proportion of health misinformation posts was dependent on the topic: vaccines (32%; 22/69), drugs or smoking issues (22%; 16/69), noncommunicable diseases (19%; 13/69), pandemics (10%; 7/69), eating disorders (9%; 6/69) and medical treatments (7%; 5/69).
• Twitter was the most used source for work on vaccines (14%; 10/69), drugs or smoking products (14%; 10/69), pandemics (10%; 7/69) and eating disorders (4%; 3/69). For studies on noncommunicable diseases (13%; 9/69) or treatments (7%; 5/69), YouTube was the most used social media platform.
• Health misinformation was most common in studies related to smoking products, such as hookah and water pipes, e-cigarettes and drugs, such as opioids and marijuana.
• Health misinformation about vaccines was also very common. Therefore, the potential effect on population health was ambivalent, that is, both positive and negative effects were found depending on the topic and on the group of health information seekers.
• Authors identified social media platforms as a potential source of illegal promotion of the sale of controlled substances directly to consumers.
• Misleading videos promoted cures for diabetes, negated scientific arguments or provided treatments with no scientific basis.
• Although social media was described as a forum for sharing health-related knowledge, these tools are also recognized by researchers and health professionals as a source of misinformation that needs to be controlled by health experts.
Tang et al. (H1N1 and H7N9 influenza viruses, Ebola virus, West Nile virus, measles, MERS-CoV and enterohaemorrhagic Escherichia coli) • In general, approximately 65% (225/344) of videos contained useful information (either accurate medical information or outbreak updates) across different emerging infectious diseases, while the rest contained inaccurate or misleading information. Whether misleading videos had a significantly higher number of views per day is unclear.
• Independent users were more likely to post misleading videos and news agencies were more likely to post useful videos.
Truong et al. (vaccination, H1N1 and Ebola) • Lack of information and misinformation about vaccination against H1N1 influenced participants’ decision to vaccinate.
• Lacking adequate information surrounding vaccination against H1N1 or encountering contradictory information from different sources can reduce an individual’s willingness to vaccinate. The lack of accurate information associated with vaccines would affect the population’s willingness to vaccinate against other infectious diseases (such as Ebola).
• Although the internet can be a useful resource to spread vital public health information during a pandemic, a lack of clarity and consistency of information may deter people from vaccination.
• People that do not have a comprehensive understanding of how vaccines work are unable to make informed and confident decisions about vaccination. Therefore, communicating information regarding vaccination in a clear and accessible manner to better educate people and overcome barriers to vaccination is essential.
Walter et al. (countermeasures against misinformation) • The meta-analysis showed that the source of misinformation emerged as a significant moderator (P-value: 0.001). Specifically, correcting misinformation is more challenging when it is delivered by our peers (d = 0.24; 95% CI: 0.11–0.36) as opposed to news agencies (d = 0.48; 95% CI: 0.15–0.81).
• The source of the correction played a significant role (P-value: 0.031), resulting in stronger effects when corrective messages were delivered by experts (d = 0.42; 95% CI: 0.28–0.55) compared with non-experts (d = 0.24; 95% CI: 0.13–0.34).
• There was no significant difference (P-value: 0.787) between interventions that employed Facebook rather than Twitter.
• Finally, the results suggest that it is more difficult to correct misinformation in the context of infectious disease (d = 0.28; 95% CI: 0.17–0.39) as opposed to other health-related issues (d = 0.55; 95% CI: 0.31–0.79).
• The effects of myths about genetically modified produce, nutrition and reproductive health were more effectively attenuated by corrective interventions than misinformation about Zika virus, measles, HIV and other communicable diseases.
Wang et al. (vaccination, Ebola virus and Zika virus, along with other conditions and topics, including nutrition, cancer and smoking) • Misinformation is abundant on the internet and is often more popular than accurate information.
• Most commonly health-related topics associated with misinformation are communicable diseases (30 studies), including vaccination in general (eight studies) and specifically against human papillomavirus (three studies), measles, mumps and rubella (two studies) and influenza (one study), as well as infections with Zika virus (nine studies), Ebola virus (four studies), MERS-CoV (one study) and West Nile virus (one study).
• Misconceptions about measles, mumps and rubella vaccine and autism, in particular, remain prevalent on social media.
• Other topics share scientific uncertainty, with the authorities unable to provide confident explanations or advice, as with newly emerging virus infections such as Ebola and Zika viruses.

CI: confidence interval; COVID-19: coronavirus disease 2019; H1N1: influenza A virus subtype H1N1; H7N9: Asian lineage avian influenza A H7N9; HIV: human immunodeficiency virus; MERS-CoV: Middle East respiratory syndrome coronavirus; SARS: severe acute respiratory syndrome; SARS-CoV-2: severe acute respiratory syndrome coronavirus 2.

a Numbers are reported as given in the original publication even though the percentage is inconsistent with the numerator and denominator.

Negative effects of misinformation

Ten systematic reviews presented evidence of the negative effects associated with the dissemination of misinformation during an infodemic.20,21,24,26,27,31–35 Several of the consequences were linked to altering people’s attitude towards the situation: (i) distorting the interpretation of scientific evidence; (ii) opinion polarization and echo chamber effects (that is, the formation of groups of like-minded users framing and reinforcing a shared narrative); (iii) offering non-specialists’ opinions to counter accurate information; (iv) promoting fear and panic; (v) increasing mental and physical fatigue of the population; and (vi) decreasing the credibility of information circulating on different platforms during unforeseen circumstances. Infodemics could also decrease trust in governments and public health systems, as well as in the government’s response and the accuracy of official health messaging. Other societal consequences could include amplifying and promoting discord to create a hostile political environment, increasing violence against ethnic and minority groups and harming the global economy. Within the health system, infodemics could lead to (i) misallocation of resources and increased stress among medical providers; (ii) decreased access to health care; (iii) increased vaccine hesitancy and conspiracy beliefs; (iv) increased illegal promotion of the sale of controlled substances; and (v) delayed delivery of high-quality care and proper treatment to patients, which could further burden public health-care systems.

Sources of health misinformation

Six reviews reported potential links between misinformation and its sources.21,24,28,32,34,35 All reviews emphasized that mass media can propagate poor-quality health information during public health emergencies, particularly through social media. Authors of the systematic reviews highlighted that health misinformation can be propagated quickly through media posts and videos, usually circulated among closed online groups, significantly influencing individuals with low health literacy and elderly patients.21,34,35 Similarly, two reviews found that social media networks were often identified as a source of illegal or inappropriate promotion of health misinformation, including the sale of controlled substances.24,32 One review tracked the main sources of health-related misinformation spreading on social media during infectious disease outbreaks worldwide, noting that the primary sources of misinformation are anti-immunization groups, online communication groups (such as WhatsApp groups and Facebook communities) and the pharmaceutical and marketing industries, which could favour conspiracy theories.28

Proportion of health-related misinformation

Four reviews evaluated the proportion of health misinformation on different social media platforms.20,24,25,28 In a meta-analysis, the proportion ranged from 0.2% (413/212 846) to 28.8% (194/673) of posts.20 Similarly, a review identified that the proportion of the literature containing health misinformation depends on the topic, articulated across six categories (vaccines had the highest proportion, 32%; 22/69, whereas medical treatments had the lowest, 7%; 5/69).24 One review identified 47 mechanisms driving misinformation spread.28 The authors also argued that misconceptions about vaccine administration in general and about infectious diseases (45 of 57 studies) and chronic noncommunicable diseases (8 of 57 studies) are highly prevalent on social media; however, the review lacks a comprehensive presentation of epidemiologically relevant data.28 Additionally, authors of one review estimated that around 20% to 30% of YouTube videos about emerging infectious diseases contained inaccurate or misleading information.25

Beneficial features of social media use

Although infodemics are often associated with negative impacts, eight reviews reported positive outcomes related to infodemics on social media during a pandemic.21,23,24,29–33 Social media can be used for crisis communication and management during emerging infectious disease pandemics, regardless of the geographical location of the recipient of information. Furthermore, reviews found that dissemination of information on several social media platforms significantly improved knowledge, awareness and compliance with health recommendations among users.21,30,31 Notably, some authors also stressed that social media created positive health-related behaviour among general users compared with classic dissemination models.21,30,31 In particular, these platforms can be used for education, suggesting that social media could outrank traditional communication channels.21,31 Also, content created by professionals and published on social networks, especially YouTube, might serve the community with online health-related content for self-care or health-care training.24 Three reviews evaluated the effectiveness of social media platforms as vehicles for information dissemination during health-related disasters, including pandemics, as well as tools to promote vaccination awareness. These reviews demonstrated the effectiveness of social media platforms as an information surveillance approach, as these platforms could provide complementary knowledge by assessing online trends in disease occurrence and by collecting and processing data obtained from digital communication platforms.23,31,32 Twitter and Facebook emerged as beneficial tools for crisis communication for government agencies, implementing partners, first responders and the public to exchange information, create situational awareness, dispel rumours and facilitate care provision.23 Furthermore, these authors also argued that social media is a viable platform for spreading verified information and eliminating unfiltered and unreliable information through crowd-sourced, peer-based rumour control (that is, technologies that network users can collaboratively implement for more effective rumour control).23,31,32

Interestingly, one study suggested that the use of social media to mitigate health-related misinformation might result not only from the prioritization of strategies by governmental and health authorities, but also from the economic sector, including the information technology market, the media and knowledge-based services. In addition, citizens countering misinformation by spreading real information would ultimately serve as a natural controlling system.33

One systematic review evaluated the knowledge levels and media sources of information about coronavirus disease 2019 (COVID-19) in African countries 29 and found that 40% (4/10) of studies reported that the participants used social media as their source to acquire information about COVID-19. Likewise, traditional communication channels (such as television and radio stations), family members and places of worship were also used to receive information about the disease.

Corrective measures against health misinformation

Four reviews evaluated the impact and effectiveness of social media interventions created to correct health-related misinformation.22,32–34 In general, eliminating health-related misinformation delivered by family or colleagues is more challenging than eliminating misinformation from organizations. Furthermore, evidence shows that a greater corrective effect occurs when content experts correct misconceptions than when non-experts do.22 In addition, authors of three reviews suggested redirecting users to evidence-based or well-founded websites, along with computer-based algorithms for rumour correction, as countermeasures to limit the circulation of unreliable information.32–34 Early communication from health authorities and international health organizations also plays an important role in mitigating misinformation.22,32–34

Characteristics associated with studies’ quality

Three reviews reported results on the methodological quality of included studies.19,21,34 Generally, studies related to SARS-CoV-2 and infodemics showed critical quality flaws. For example, only 49% (138/280) of eligible studies critically appraised the quality of original records,19 and only 33.0% (29/88) of studies reported registering a scientific protocol before the study began.19 Several systematic reviews did not consider the limitations of each included study’s design in their final analyses and conclusions.20,22–25,28,29,31 One study concluded that the spread of misinformation frequently derives from poor-quality investigations.21 Lastly, a large number of editorials, commentaries, viewpoints and perspectives have been published since the onset of the COVID-19 pandemic; these types of articles are fast-tracked publications not based on new experimental or analytical data.34

Methodological quality

When appraised using the AMSTAR 2 critical domains, 16 of the 17 reviews (94.1%) were rated as having critically low quality across most major domains. 19 , 21 – 35 Only one review showed a moderate risk of bias for most domains ( Table 3 ). 20 Meta-analysis was conducted in only two reviews, both of which used appropriate statistical methods and considered the potential impact of risk of bias in each of the primary studies. 19 , 22 The overall quality of the evidence is shown in Table 4 . All themes had low-quality evidence, except the proportion of health-related misinformation, which had very low-quality evidence.

Table 3. Methodological requirements met, by AMSTAR 2 domain (domains 1–16), and overall quality. Y: yes; PM: partly met; N: no.

Abbott et al.: Y, PM, Y, PM, Y, N, N, PM, N, N, NA, NA, N, Y, NA, Y; overall quality: Critically low
Alvarez-Galvez et al.: Y, N, Y, N, Y, N, N, PM, Y, N, NA, NA, N, N, NA, Y; overall quality: Critically low
Aruhomukama & Bulafu: Y, N, N, N, Y, N, N, PM, Y, N, NA, NA, N, N, NA, Y; overall quality: Critically low
Bhatt et al.: Y, PM, Y, PM, Y, Y, N, PM, PM, N, NA, NA, N, N, NA, Y; overall quality: Critically low
Eckert et al.: Y, N, Y, Y, N, N, N, PM, Y, N, NA, NA, Y, N, NA, N; overall quality: Critically low
Gabarron et al.: Y, PM, Y, PM, Y, Y, Y, Y, Y, N, NA, NA, Y, N, NA, Y; overall quality: Low
Gunasekeran et al.: Y, N, N, N, N, N, N, N, N, N, NA, NA, N, N, NA, Y; overall quality: Critically low
Lieneck et al.: Y, N, Y, N, Y, N, N, PM, Y, N, NA, NA, N, N, NA, Y; overall quality: Critically low
Muhammed & Mathew: Y, N, Y, PM, Y, Y, N, PM, PM, N, NA, NA, N, Y, NA, Y; overall quality: Critically low
Patel et al.: Y, PM, Y, PM, N, N, N, PM, N, N, NA, NA, N, N, NA, N; overall quality: Critically low
Pian et al.: Y, N, Y, PM, Y, Y, N, PM, N, Y, NA, NA, Y, N, NA, Y; overall quality: Critically low
Rocha et al.: Y, N, Y, PM, N, N, N, PM, N, N, NA, NA, N, N, NA, N; overall quality: Critically low
Suarez-Lledo & Alvarez-Galvez: Y, PM, Y, PM, Y, Y, N, PM, Y, N, NA, NA, N, N, NA, Y; overall quality: Critically low
Tang et al.: Y, PM, Y, N, N, N, N, PM, N, N, NA, NA, N, N, NA, N; overall quality: Critically low
Truong et al.: Y, N, N, N, Y, Y, N, N, N, N, NA, NA, N, N, NA, Y; overall quality: Critically low
Walter et al.: Y, PM, Y, PM, Y, Y, N, PM, Y, N, Y, Y, N, Y, Y, N; overall quality: Critically low
Wang et al.: Y, PM, Y, N, N, N, N, PM, N, N, NA, NA, N, N, NA, N; overall quality: Critically low

NA: not applicable.

Note: We judged studies using the AMSTAR 2 tool. 15 For domains rated NA, the review lacked a meta-analysis.

a Domains assessed:
Domain 1: did the research questions and inclusion criteria for the review include the components of PICO (population, intervention, comparator and outcomes)?
Domain 2: did the report of the review contain an explicit statement that the review methods were established before the conduct of the review and did the report justify any significant deviations from the protocol?
Domain 3: did the review authors explain their selection of the study designs for inclusion in the review?
Domain 4: did the review authors use a comprehensive literature search strategy?
Domain 5: did the review authors perform study selection in duplicate?
Domain 6: did the review authors perform data extraction in duplicate?
Domain 7: did the review authors provide a list of excluded studies and justify the exclusions?
Domain 8: did the review authors describe the included studies in adequate detail?
Domain 9: did the review authors use a satisfactory technique for assessing the risk of bias in individual studies that were included in the review?
Domain 10: did the review authors report on the sources of funding for the studies included in the review?
Domain 11: if meta-analysis was performed, did the review authors use appropriate methods for statistical combination of results?
Domain 12: if meta-analysis was performed, did the review authors assess the potential impact of risk of bias in individual studies on the results of the meta-analysis or other evidence synthesis?
Domain 13: did the review authors account for risk of bias in individual studies when interpreting/discussing the results of the review?
Domain 14: did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review?
Domain 15: if they performed quantitative synthesis, did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review?
Domain 16: did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review?
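The mapping from per-domain ratings to an overall confidence rating can be sketched in code. The following is a simplified illustration of the AMSTAR 2 rating scheme (Shea et al., 2017), under the assumptions that the critical domains are 2, 4, 7, 9, 11, 13 and 15 and that only a "No" on a critical domain counts as a critical flaw; review authors may exercise additional judgement, so individual published ratings can differ from this sketch.

```python
# Simplified sketch of the AMSTAR 2 overall-confidence rating scheme.
# Assumptions (may differ from a given review team's judgement):
# critical domains are 2, 4, 7, 9, 11, 13 and 15, and only a "No" on a
# critical domain counts as a critical flaw.
CRITICAL_DOMAINS = {2, 4, 7, 9, 11, 13, 15}

def overall_confidence(ratings):
    """ratings maps domain number (1-16) to 'Yes', 'Partly met', 'No' or 'NA'."""
    critical_flaws = sum(
        1 for d, r in ratings.items() if d in CRITICAL_DOMAINS and r == "No"
    )
    weaknesses = sum(
        1 for d, r in ratings.items()
        if d not in CRITICAL_DOMAINS and r in ("No", "Partly met")
    )
    if critical_flaws > 1:
        return "Critically low"   # more than one critical flaw
    if critical_flaws == 1:
        return "Low"              # one critical flaw
    return "Moderate" if weaknesses > 1 else "High"

# Example: the domain ratings reported for Abbott et al. in the table above.
abbott = dict(zip(range(1, 17), [
    "Yes", "Partly met", "Yes", "Partly met", "Yes", "No", "No", "Partly met",
    "No", "No", "NA", "NA", "No", "Yes", "NA", "Yes"]))
print(overall_confidence(abbott))  # Critically low: domains 7, 9 and 13 are "No"
```

Under these assumptions the Abbott et al. row reproduces the published "Critically low" rating (three critical domains rated "No").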

Table 4. Certainty of the evidence (GRADE), by theme (number of systematic reviews in parentheses). Columns: methodological limitations; inconsistency; indirectness; imprecision; publication bias; overall quality.

Negative effects of misinformation (10): Critical; Not serious; NA; NA; Not serious; Low
Source of health misinformation (6): Critical; Not serious; NA; NA; Not serious; Low
Proportion of health-related misinformation (4): Critical; Serious; NA; NA; Not serious; Very low
Beneficial features of social media use (8): Critical; Not serious; NA; NA; Not serious; Low
Corrective interventions against health misinformation (4): Critical; Not serious; Not serious; NA; Not serious; Low
Characteristics associated with studies’ quality (3): Critical; Not serious; NA; NA; Not serious; Low

GRADE: Grading of Recommendations Assessment, Development and Evaluation; NA: not applicable.

a Methodological limitations were essentially associated with the overall AMSTAR 2 rating.

b Inconsistency was judged by evaluating the consistency of the direction and primarily the difference in the magnitude of effects across studies (since statistical measures of heterogeneity are not available). As we did not find differing results for each outcome across included studies, we considered “not serious” risk for inconsistency, except for the “Proportion of health misinformation on social media,” as previously mentioned.

c We did not downgrade the indirectness and imprecision domains for most outcomes because they were not referring to any applicable intervention on human beings or health condition. Thus, we marked it as “Not applicable.” For the only outcome associated with an intervention (Corrective interventions for health misinformation), we considered it to be at “not serious” risk of indirectness because there was an adequate association between the evidence presented and review question.

d We downgraded the publication bias domain if the body of literature appeared to selectively report a certain topic or trend for a specific outcome.

e Downgraded due to methodological limitations of the included systematic reviews (most included reviews had an overall critically low methodological quality).

f Downgraded due to methodological limitations of the studies (most included reviews had an overall critically low methodological quality).

g Downgraded due to methodological limitations of the studies (most included reviews had an overall critically low methodological quality), and inconsistency (studies had widely differing estimates of the proportion, indicating inconsistency in reporting).

h Social media also serves as a place where health-care professionals fight against false beliefs and misinformation on emerging infectious diseases.

Note: Low certainty by the GRADE Working Group grades of evidence: the summary rating of the included studies provides some indication of the likely effect. The likelihood that the effect will be substantially different is high. Very low certainty: the summary rating of the included studies does not provide a reliable indication of the likely effect. The likelihood that the effect will be substantially different is very high.

Opportunities and challenges

We evaluated reported data on current opportunities and challenges associated with infodemics and misinformation that may impact society worldwide. A summary of the main opportunities for future research and challenges is presented in Box 3 .

Box 3

Summary of reported research opportunities and challenges for future research.

  • Future investigations should examine different aspects of the impact and reliability of SARS-CoV-2-related or other health emergency information.
  • There is a need to balance the gold standard systematic reviews with faster pragmatic studies.
  • Studies need to evaluate effective methods to precisely combat the determinants of health misinformation during pandemics and subsequent infodemics across different social media platforms.
  • Novel investigations could focus on creating a basis to conduct future studies (especially randomized trials) comparing the use of social media interventions with traditional methods in the dissemination of clinical practice guidelines.
  • Future studies should assess the potential of social media use on the recovery and preparation phases of emergency events.
  • Researchers could analyse communication patterns between citizens and frontline workers in the public health context, which may be useful to design counter-misinformation campaigns and awareness interventions.
  • A multidisciplinary specialist team could concentrate on the analysis of governmental and organizational interventions to control misinformation at the level of policies, regulatory mechanisms and communication strategies.
  • Studies should address the impact of fake news on social media and its influence on mental health and overall health.
  • Future studies should examine how social media users process the emerging infectious diseases-related information they receive.
  • Focus should be given to how users evaluate the validity and accuracy of such information and how they decide whether they will share the information with their social media contacts.
  • Further interdisciplinary research is warranted to identify effective and tailored interventions to counter the spread of health-related misinformation online.
  • Overlap of studies covering the same topic.
  • Overall low quality of studies and the excessive media attention given to these studies.
  • Creation and use of reliable health-related information and scientific evidence considering real-time updates.
  • Misdirection of the population and medical providers toward ineffective pharmacological and non-pharmacological interventions.
  • New trends in personal content creation are constantly emerging, such as TikTok, which represent new challenges for regulation.
  • Further understanding the economic impact of misinformation, the difference in distribution of health misinformation in low- and high-income countries and the real impact of antivaccine activism groups.
  • Decisive and proactive action is required from government authorities and social media developers to avoid undermining the positive achievements that social media has already made.
  • The difficulty of characterizing and evaluating the quality of the information on social media.

SARS-CoV-2: severe acute respiratory syndrome coronavirus 2.

In our study, most systematic reviews evaluating the social, economic and health-related repercussions of misinformation on social media noted a negative effect: an increase in erroneous interpretation of scientific knowledge, opinion polarization, escalating fear and panic, or decreased access to health care. Furthermore, studies reported that social media has increasingly propagated poor-quality, health-related information during pandemics, humanitarian crises and health emergencies. Such spreading of unreliable evidence on health topics amplifies vaccine hesitancy and promotes unproven treatments. Moreover, reviews documented the low quality of studies conducted during infodemics, mostly related to overlap between studies and minimal adherence to methodological criteria.

The increased spread of health-related misinformation in social and traditional media and within specific communities during a health emergency is accelerated by easy access to online content, especially on smartphones. 49 This increased access and the rapid spreading of health-related misinformation through social media networks could have a negative effect on mental health. 50 – 53

Although the number of studies evaluating variables associated with infodemics has risen, some variables still require further scientific exploration. For instance, some studies described the need for better methods of detecting health-related misinformation and disinformation, as the propagation methods are constantly evolving. 54 – 56 Individuals have used different initiatives to reveal untrustworthy content, including website review, lateral reading (that is, verifying content while reading) and emotion check analysis. 54 – 57 However, no consensus exists on which method is most effective in battling unreliable content. Moreover, the techniques used to build social media content that conveys misinformation vary across platforms and over time, even for the same platform. 58 – 60 This variation implies the need for various multilingual detection and eradication techniques, which should be frequently updated to keep up with misinformation patterns. Evidence-based studies could evaluate the effectiveness of different misinformation detection models by comparing performance metrics and prediction values. 61 – 65 Further priorities include recognizing methods to decrease the high-speed dissemination of misinformation and understanding the role social media plays in individuals’ real lives after they obtain content, information or knowledge from these platforms.
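As an illustration of the kind of comparison such evaluation studies could perform, the sketch below computes standard performance metrics (precision, recall, F1) for two hypothetical misinformation detection models scored against the same labelled posts. The labels and predictions are invented for illustration only and do not come from any study cited here.

```python
# Hedged illustration: comparing two hypothetical misinformation detectors
# on the same labelled posts (1 = misinformation, 0 = reliable content).
def metrics(y_true, y_pred):
    """Return (precision, recall, F1) for binary labels/predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

labels  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]  # invented ground-truth annotations
model_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]  # invented predictions, model A
model_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 1]  # invented predictions, model B

for name, preds in [("A", model_a), ("B", model_b)]:
    p, r, f = metrics(labels, preds)
    print(f"model {name}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

A study comparing real detectors would report such metrics on held-out data, ideally per platform and per language, given the cross-platform variation noted above.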

Only one review recommended legal measures against the publication of false or manipulated health information. 20 Indeed, the discussion of this topic in the literature is controversial and is limited by the diversity of national legislative processes. 66 For several jurists, criminalizing the intentional sharing of health misinformation acknowledges the wrongful violation of the right to life and liberty. 67 – 69 Furthermore, proper attention must be paid to predatory journals that publish articles without minimum quality checks. 70 , 71 For anti-criminalization supporters, creating policies to control health misinformation and disinformation goes against freedom of speech and the free flow of information. 72 , 73 Countermeasures not involving legal action include awareness campaigns for patients and health-care professionals, the creation and dissemination of easy-to-navigate platforms with evidence-based data, the improvement of health-related content in mass media through high-quality scientific evidence, an increase in high-quality online health information and improved media literacy. Promoting and disseminating trustworthy health information is crucial for governments, health authorities, researchers and clinicians to outweigh false or misleading health information disseminated on social media. Another option is to use social media channels to counter false or misleading information, which may require further studies to evaluate the best format for this outreach and which channels work best for different populations in different geographical and cultural settings.

This review has some limitations. First, we did not search grey literature, because studies have suggested that searching efficiency and the replicability of searches depend on geographical location and users’ content profiles. 74 Second, we assessed only systematic reviews and may have overlooked helpful non-systematic reviews. Nevertheless, by incorporating reviews published in scientific journals indexed in relevant databases, we obtained a comprehensive snapshot of the literature and summarized reported gaps and implications for future research. Third, overviews of systematic reviews depend by nature on other researchers’ inclusion criteria and methods of synthesizing data or outcomes. Thus, our conclusions may have been affected by the same biases that potentially affect any systematic review author. We took steps to minimize this bias by creating a research protocol, having two authors assess records and evaluating the quality of the evidence. Fourth, the quality of most included reviews was rated critically low due to non-adherence to important methodological features, a known issue of systematic reviews. 14 , 75 We therefore advocate that researchers comply with guidelines for reporting and conducting systematic reviews, which increases the completeness of reporting and assists with the transparency and reproducibility of a study. Likewise, journal editors and reviewers should put endorsed reporting guidelines into practice; although these guidelines are commonly displayed on journals’ websites, they are not systematically employed during the evaluation process. However, we considered the low quality of the included reports when interpreting and discussing the results.

Based on the available evidence, people are feeling mental, social, political and/or economic distress due to misleading and false health-related content on social media during pandemics, health emergencies and humanitarian crises. Although the literature exponentially increases during health emergencies, the quality of publications remains critically low. Future studies need improved study design and reporting. Local, national and international efforts should seek effective counteractive measures against the production of misinformative materials on social media. Future research should investigate the effectiveness and safety of computer-driven corrective and interventional measures against health misinformation, disinformation and fake news and tailor ways to share health-related content on social media platforms without distorted messaging.

Acknowledgements

We thank Anneliese Arno (University College London, England), Leandro Alves Siqueira (former vice-president, Project Management Institute, United States of America) and Tina Poklepović Peričić (Medicinski Fakultet Split and Cochrane Croatia, Croatia). Israel Júnior Borges do Nascimento is also affiliated with the School of Medicine at the Medical College of Wisconsin (Milwaukee, USA). Ana Beatriz Pizarro is also affiliated with the Department of Health Systems, Pan American Health Organization (Washington, DC, USA).

Competing interests:

None declared.

  • Corpus ID: 270211062

How disinformation and fake news impact public policies?: A review of international literature

  • Ergon Cugler de Moraes Silva , Jose Carlos Vaz
  • Published 3 June 2024
  • Political Science, Sociology


Open Access

Peer-reviewed

Research Article

Functional connectivity changes in the brain of adolescents with internet addiction: A systematic literature review of imaging studies

Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

Affiliation Child and Adolescent Mental Health, Department of Brain Sciences, Great Ormond Street Institute of Child Health, University College London, London, United Kingdom

Roles Conceptualization, Supervision, Validation, Writing – review & editing

* E-mail: [email protected]

Affiliation Behavioural Brain Sciences Unit, Population Policy Practice Programme, Great Ormond Street Institute of Child Health, University College London, London, United Kingdom


  • Max L. Y. Chang, 
  • Irene O. Lee

PLOS

  • Published: June 4, 2024
  • https://doi.org/10.1371/journal.pmen.0000022


Internet usage has seen a stark global rise over the last few decades, particularly among adolescents and young people, who are increasingly diagnosed with internet addiction (IA). IA impacts several neural networks that influence an adolescent’s behaviour and development. This article reviews resting-state and task-based functional magnetic resonance imaging (fMRI) studies to examine the consequences of IA for functional connectivity (FC) in the adolescent brain and its subsequent effects on behaviour and development. A systematic search of two databases, PubMed and PsycINFO, was conducted to select eligible articles according to the inclusion and exclusion criteria, which were especially stringent regarding the adolescent age range (10–19) and a formal diagnosis of IA. The bias and quality of individual studies were evaluated. The fMRI results from 12 articles demonstrated that the effects of IA were seen throughout multiple neural networks: a mix of increases and decreases in FC in the default mode network; an overall decrease in FC in the executive control network; and no clear increase or decrease in FC within the salience network and reward pathway. These FC changes were associated with addictive behaviour and tendencies in adolescents, through mechanisms relating to cognitive control, reward valuation, motor coordination and the developing adolescent brain. Our results present the FC alterations in numerous brain regions of adolescents with IA that lead to behavioural and developmental changes. Research on this topic with adolescent samples was sparse and was primarily produced in Asian countries. Future studies comparing these results with Western adolescent samples would provide more insight into therapeutic intervention.

Citation: Chang MLY, Lee IO (2024) Functional connectivity changes in the brain of adolescents with internet addiction: A systematic literature review of imaging studies. PLOS Ment Health 1(1): e0000022. https://doi.org/10.1371/journal.pmen.0000022

Editor: Kizito Omona, Uganda Martyrs University, UGANDA

Received: December 29, 2023; Accepted: March 18, 2024; Published: June 4, 2024

Copyright: © 2024 Chang, Lee. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting information files.

Funding: The authors received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The behavioural addiction brought on by excessive internet use has become a rising source of concern [ 1 ] over the last decade. According to clinical studies, individuals with Internet Addiction (IA) or Internet Gaming Disorder (IGD) may experience a range of biopsychosocial effects, and IA is classified as an impulse-control disorder owing to its resemblance to pathological gambling and substance addiction [ 2 , 3 ]. Researchers have defined IA as a person’s inability to resist the urge to use the internet, which has negative effects on their psychological well-being as well as their social, academic, and professional lives [ 4 ]. The symptoms can have serious physical and interpersonal repercussions and are linked to mood modification, salience, tolerance, impulsivity, and conflict [ 5 ]. In severe circumstances, people may experience severe bodily pain or health issues such as carpal tunnel syndrome, dry eyes, irregular eating and disrupted sleep [ 6 ]. Additionally, IA is significantly linked to comorbidities with other psychiatric disorders [ 7 ].

Stevens et al (2021) reviewed 53 studies covering 17 countries and reported a global IA prevalence of 3.05% [ 8 ]. Asian countries had a higher prevalence (5.1%) than European countries (2.7%) [ 8 ]. Strikingly, adolescents and young adults had a global IGD prevalence rate of 9.9%, which matches previous literature reporting historically higher prevalence among adolescent populations compared with adults [ 8 , 9 ]. Over 80% of the adolescent population in the UK, the USA, and Asia have direct access to the internet [ 10 ]. Children and adolescents frequently spend more time on media (possibly 7 hours and 22 minutes per day) than at school or sleeping [ 11 ]. Developing nations have also shown a sharp rise in teenage internet usage despite having lower internet penetration rates [ 10 ]. This surge has raised concerns about the possible harms that excessive internet use could do to adolescents and their development, especially given the significant impact of the COVID-19 pandemic [ 12 ]. The growing prevalence and neurocognitive consequences of IA among adolescents make this population a vital area of study [ 13 ].

Adolescence is a crucial developmental stage during which people go through significant changes in their biology, cognition, and personalities [ 14 ]. Adolescents’ emotional-behavioural functioning is hyperactivated, which creates risk of psychopathological vulnerability [ 15 ]. In accordance with clinical study results [ 16 ], this emotional hyperactivity is supported by a high level of neuronal plasticity. This plasticity enables teenagers to adapt to the numerous physical and emotional changes that occur during puberty as well as develop communication techniques and gain independence [ 16 ]. However, the strong neuronal plasticity is also associated with risk-taking and sensation seeking [ 17 ] which may lead to IA.

Although the precise neuronal mechanisms underlying IA are still largely unclear, functional magnetic resonance imaging (fMRI) has been used by scientists as an important framework to examine the neuropathological changes occurring in IA, particularly in the form of functional connectivity (FC) [ 18 ]. fMRI research has shown that IA alters both the functional and structural makeup of the brain [ 3 ].
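In such studies, FC between two brain regions is typically quantified as the Pearson correlation between their BOLD signal time series. A minimal sketch on synthetic time series (invented values, not data from any study reviewed here):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic BOLD-like time series for two hypothetical regions of interest.
# roi_b roughly tracks roi_a, so their "functional connectivity" is high.
roi_a = [0.2, 0.5, 0.9, 0.4, -0.1, -0.6, -0.3, 0.1, 0.7, 0.3]
roi_b = [0.1, 0.6, 0.8, 0.3, -0.2, -0.5, -0.4, 0.2, 0.6, 0.2]

fc = pearson(roi_a, roi_b)
print(f"FC (Pearson r) between ROI A and ROI B: {fc:.2f}")
```

Real analyses average the BOLD signal over voxels within each region and compute such correlations across many region pairs, often after preprocessing steps (motion correction, filtering) that are beyond this sketch.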

We hypothesise that IA has widespread neurological alteration effects rather than being limited to a few specific brain regions. We further hypothesise that, as a result of these alterations of FC between brain regions or within certain neural networks, adolescents with IA experience behavioural changes. An investigation of these domains could be useful for creating better procedures and standards, as well as for minimising the negative effects of excessive internet use. This literature review aims to summarise and analyse the evidence from imaging studies that have investigated the effects of IA on FC in adolescents. This will be addressed through two research questions:

  • How does internet addiction affect the functional connectivity in the adolescent brain?
  • How is adolescent behaviour and development impacted by functional connectivity changes due to internet addiction?

The review protocol was conducted in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (see S1 Checklist ).

Search strategy and selection process

A systematic search was conducted up until April 2023 in two databases, PubMed and PsycINFO, using a range of terms relevant to the title and research questions (see the full list of search terms in S1 Appendix ). All the searched articles can be accessed in the S1 Data . Eligible articles were selected according to the inclusion and exclusion criteria. The inclusion criteria for the present review were: (i) participants with a clinical diagnosis of IA; (ii) participants between the ages of 10 and 19; (iii) imaging research investigations; (iv) works published between January 2013 and April 2023; (v) written in English; (vi) peer-reviewed papers and (vii) full text available. The numbers of articles excluded for not meeting the inclusion criteria are shown in Fig 1 . Each study’s title and abstract were screened for eligibility.

Fig 1. Study selection flow diagram. https://doi.org/10.1371/journal.pmen.0000022.g001

Quality appraisal

Full texts of all potentially relevant studies were then retrieved and further appraised for eligibility. Furthermore, articles were critically appraised using the GRADE (Grading of Recommendations, Assessment, Development, and Evaluations) framework to evaluate each study for both quality and bias. A quality level was then assigned to each article: low, moderate, or high.

Data collection process

Data that satisfied the inclusion requirements were entered into an Excel spreadsheet for data extraction and further selection. Each article’s author, publication year, country, age range, participant sample size, sex, area of interest, measures, outcome and quality rating were included in the data extraction spreadsheet. Studies looking at FC in general were grouped together, while studies looking at FC in a specific area were further divided into sub-groups.

Data synthesis and analysis

Articles were classified according to the brain location as well as the network or pathway they examined, to create a coherent narrative across the selected studies. Conclusions concerning research trends relevant to particular groupings were drawn from these groupings and subgroupings, and these assertions were recorded in the data extraction spreadsheet to keep the information prominent.

The search of the selected databases identified 238 articles in total (see Fig 1 ). Fifteen duplicate articles were eliminated, and another 6 items were removed for various other reasons. Title and abstract screening eliminated 184 articles because they were not in English (n = 7), did not include imaging components (n = 47), had adult participants (n = 53), did not have a clinical diagnosis of IA (n = 19), did not address FC in the brain (n = 20), or were published outside the desired timeframe (n = 38). A further 21 papers were eliminated for failing to meet the inclusion requirements after the remaining 33 articles underwent full-text eligibility screening. A total of 12 papers were deemed eligible for this review.
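The selection flow reported above can be checked arithmetically; a minimal sketch using the counts as stated:

```python
# Sanity-checking the reported PRISMA-style selection counts.
identified = 238
duplicates = 15
other_removals = 6
screened = identified - duplicates - other_removals  # records screened

# Title/abstract exclusions, by the reasons reported:
excluded_screening = {
    "not in English": 7,
    "no imaging components": 47,
    "adult participants": 53,
    "no clinical IA diagnosis": 19,
    "did not address FC": 20,
    "outside timeframe": 38,
}
assert sum(excluded_screening.values()) == 184  # matches the reported total

full_text = screened - sum(excluded_screening.values())  # full-text articles
included = full_text - 21                                # studies included
print(screened, full_text, included)  # 217 33 12
```

The reported numbers are internally consistent: 238 − 15 − 6 = 217 screened, 217 − 184 = 33 full-text articles, and 33 − 21 = 12 included studies.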

Characteristics of the included studies, as depicted in the data extraction sheet in Table 1 , include the author(s), publication year, sample size, study location, age range, gender, area of interest, outcome, measures used and quality appraisal. Most of the studies in this review utilised resting-state functional magnetic resonance imaging techniques (n = 7), several demonstrated task-based fMRI procedures (n = 3), and the remaining studies utilised whole-brain imaging measures (n = 2). The studies were all conducted in Asian countries, specifically China (8), Korea (3), and Indonesia (1). Sample sizes ranged from 12 to 31 participants, with most of the imaging studies having comparable sample sizes. The majority of studies included a mix of male and female participants (n = 8), while several had a male-only participant pool (n = 3). All except one of the mixed-gender studies had a majority-male participant pool. One study did not disclose data on the gender demographics of its participants. Study years ranged from 2013 to 2022, with 2 studies in 2013, 3 in 2014, 3 in 2015, 1 in 2017, 1 in 2020, 1 in 2021, and 1 in 2022.


https://doi.org/10.1371/journal.pmen.0000022.t001

(1) How does internet addiction affect the functional connectivity in the adolescent brain?

The included studies were organised according to the brain region or network they examined. The specific networks affected by IA were the default mode network, the executive control network, the salience network, and the reward pathway. These networks are vital components of adolescent behaviour and development [ 31 ]. The studies in each section were then grouped into subsections according to the specific brain regions within each network.

Default mode network (DMN)/reward network.

Of the 12 studies, 3 specifically examined the default mode network (DMN), and 3 observed whole-brain FC that partially included components of the DMN. The effect of IA on the various centres of the DMN was not uniform. The findings illustrate a complex mix of increases and decreases in FC depending on the specific region of the DMN (see Table 2 and Fig 2 ). The posterior cingulate cortex (PCC), a DMN region involved in attentional processes [ 32 ], was the area in which altered FC was most frequently reported in adolescents with IA, but Lee et al. (2020) additionally found FC alterations in other brain regions, such as the anterior insular cortex, a node that controls the integration of motivational and cognitive processes [ 20 ].


https://doi.org/10.1371/journal.pmen.0000022.g002


Overall changes in functional connectivity across brain networks, including the default mode network (DMN), executive control network (ECN), salience network (SN), and reward network. IA = internet addiction; FC = functional connectivity.

https://doi.org/10.1371/journal.pmen.0000022.t002

Ding et al. (2013) revealed altered FC in the cerebellum, the middle temporal gyrus, and the medial prefrontal cortex (mPFC) [ 22 ]. They found decreased FC in the bilateral inferior parietal lobule, left superior parietal lobule, and right inferior temporal gyrus, and increased FC in the bilateral posterior lobe of the cerebellum and the middle temporal gyrus [ 22 ]. The right middle temporal gyrus showed a cluster of 111 voxels (t = 3.52, p<0.05) and the right inferior parietal lobule a cluster of 324 voxels (t = -4.07, p<0.05), against an extent threshold of 54 voxels (clusters above this threshold are deemed significant) [ 22 ]. Additionally, there was a negative correlation (95 cluster voxels, p<0.05) between the FC of the left superior parietal lobule with the PCC and Chen Internet Addiction Scale (CIAS) scores, which are used to determine the severity of IA [ 22 ]. In regions of the reward system, by contrast, connectivity with the PCC was positively correlated with CIAS scores [ 22 ]; the most significant region was the right praecuneus, with 219 cluster voxels (p<0.05) [ 22 ]. Wang et al. (2017) likewise found that adolescents with IA had 33% less FC in the left inferior parietal lobule and 20% less FC in the dorsal mPFC [ 24 ]. The generally decreased FC in these DMN areas in adolescents with drug addiction and IA points to a potential connection between the effects of substance use and excessive internet use [ 35 ].
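The cluster-extent thresholding described above can be sketched as a simple filter. The 111- and 324-voxel clusters and the 54-voxel threshold come from the text (Ding et al., 2013); the small third cluster is hypothetical, added only to show a cluster being rejected.

```python
# Sketch of cluster-extent thresholding: only clusters whose size exceeds the
# extent threshold are deemed significant.
EXTENT_THRESHOLD = 54  # minimum cluster size (voxels) reported in the text

clusters = {
    "right middle temporal gyrus": 111,     # reported cluster, t = 3.52
    "right inferior parietal lobule": 324,  # reported cluster, t = -4.07
    "hypothetical small cluster": 30,       # illustrative only, not from the study
}

significant = {region: size for region, size in clusters.items()
               if size > EXTENT_THRESHOLD}
print(sorted(significant))
```

Both reported clusters comfortably exceed the threshold, while the illustrative 30-voxel cluster would be discarded.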

The putamen was one of the main regions of reduced FC in adolescents with IA [ 19 ]. The putamen and the insula-operculum demonstrated significant group differences in functional connectivity, with a cluster size of 251 against an extent threshold of 250 (Z = 3.40, p<0.05) [ 19 ]. This finding is important because decreased striatal dopaminergic function has been intimately connected to the molecular mechanisms underlying addiction disorders [ 19 ].

Executive Control Network (ECN).

Of the 12 studies, 5 specifically examined parts of the executive control network (ECN) and 3 observed whole-brain FC. The effects of IA on the ECN’s constituent parts were consistent across all the studies examined for this analysis (see Table 2 and Fig 3 ). The results showed a notable decline in FC across all the ECN’s major centres. Li et al. (2014) used fMRI and a behavioural task to study response inhibition in adolescents with IA [ 25 ] and found decreased activation in the striatum and frontal gyrus, particularly a reduction in FC in the inferior frontal gyrus, in the IA group compared to controls [ 25 ]. The inferior frontal gyrus showed reduced FC relative to controls, with a cluster size of 71 (t = 4.18, p<0.05) [ 25 ]. In addition, the frontal-basal ganglia pathways in the adolescents with IA showed little effective connectivity between areas during response inhibition [ 25 ].


https://doi.org/10.1371/journal.pmen.0000022.g003

Lin et al. (2015) found that adolescents with IA demonstrated disrupted corticostriatal FC compared to controls [ 33 ]. The corticostriatal circuitry showed decreased connectivity with the caudate and bilateral anterior cingulate cortex (ACC), as well as the striatum and frontal gyrus [ 33 ]. The inferior ventral striatum showed significantly reduced FC with the subcallosal ACC and caudate head, with a cluster size of 101 (t = -4.64, p<0.05) [ 33 ]. Decreased FC in the caudate implies dysfunction of the corticostriatal-limbic circuitry involved in cognitive and emotional control [ 36 ]. The decreased FC in both the striatum and frontal gyrus relates to inhibitory control, a deficit commonly seen with disruptions of the ECN [ 33 ].

The dorsolateral prefrontal cortex (DLPFC), ACC, and right supplementary motor area (SMA) of the prefrontal cortex were all found to have significantly decreased grey matter volume [ 29 ]. In addition, the DLPFC, insula, and temporal cortices, as well as major subcortical regions such as the striatum and thalamus, showed decreased FC [ 29 ]. According to Tremblay (2009), the striatum plays a significant role in reward processing, decision-making, and motivation [ 37 ]. Using a Stroop colour-word task, Chen et al. (2020) reported that the IA group demonstrated increased impulsivity as well as decreased response inhibition [ 26 ]. Furthermore, Chen et al. (2020) observed a negative effective connectivity value between the left DLPFC and dorsal striatum, specifically demonstrating that dorsal striatum activity suppressed the left DLPFC [ 27 ].

Salience network (SN).

Of the 12 included studies, 3 specifically examined the salience network (SN) and 3 observed whole-brain FC. Relative to the DMN and ECN, the findings on the SN were somewhat sparser. Nevertheless, adolescents with IA demonstrated a moderate decrease in FC, as well as in other measures such as fibre connectivity and cognitive control, when compared to healthy controls (see Table 2 and Fig 4 ).


https://doi.org/10.1371/journal.pmen.0000022.g004

Xing et al. (2014) used both the dorsal anterior cingulate cortex (dACC) and the insula to test FC changes in the SN of adolescents with IA, and found decreased structural connectivity in the SN as well as decreased fractional anisotropy (FA) that correlated with behavioural performance on the Stroop colour-word task [ 21 ]. They examined the dACC and insula to determine whether disrupted connectivity within the SN might be linked to disrupted regulation by the SN, which would explain the impaired cognitive control seen in adolescents with IA. However, the researchers did not find significant FC differences in the SN when compared with controls [ 21 ]. These results provided evidence of structural changes in the interconnectivity of the SN in adolescents with IA.

Wang et al. (2017) investigated network interactions between the DMN, ECN, SN, and reward pathway in subjects with IA [ 24 ] (see Fig 5 ), and found a 40% reduction in FC between the DMN and specific regions of the SN, such as the insula, in comparison to controls (p = 0.008) [ 24 ]. The anterior insula and dACC are two areas affected by this altered FC [ 24 ]. This finding supports the idea that IA shares neurobiological abnormalities with other addictive disorders, in line with a study that discovered disruptive changes in the interaction between the SN and DMN in cocaine addiction [ 38 ]. The insula has also been linked to symptom intensity and implicated in the development of IA [ 39 ].


“+” indicates an increase in behaviour; “-” indicates a decrease in behaviour; solid arrows indicate a direct network interaction; and dotted arrows indicate a reduction in network interaction. This diagram depicts network interactions juxtaposed with engagement in internet-related behaviours. Through these neural interactions, the diagram illustrates how the networks inhibit or amplify internet usage and vice versa. Furthermore, it demonstrates how the SN mediates both the DMN and ECN.

https://doi.org/10.1371/journal.pmen.0000022.g005

(2) How is adolescent behaviour and development impacted by functional connectivity changes due to internet addiction?

The finding that individuals with IA demonstrate an overall decrease in FC in the DMN is supported by numerous studies [ 24 ]. Populations with drug addiction exhibited a similar decline in FC in the DMN [ 40 ]. DMN anomalies in FC have therefore been hypothesised to cause the disruption of attentional orientation and self-referential processing seen in both substance and behavioural addiction [ 41 ].

In adolescents with IA, the decline in FC in the parietal lobule affects visuospatial task-related behaviour [ 22 ], short-term memory [ 42 ], and the ability to control attention or restrain motor responses during response inhibition tests [ 42 ]. Cue-induced gaming cravings are influenced by the DMN [ 43 ]. The praecuneus, a visual processing area, links gaming cues to internal information [ 22 ]. A meta-analysis found that posterior cingulate cortex activity in individuals with IA during cue-reactivity tasks correlated with their gaming time [ 44 ], suggesting that excessive gaming may impair DMN function and that individuals with IA exert more cognitive effort to control it. The behavioural consequences of FC changes in the DMN thus illustrate its underlying role in regulating impulsivity, self-monitoring, and cognitive control.

Furthermore, Ding et al. (2013) reported activation of components of the reward pathway, including the nucleus accumbens, praecuneus, SMA, caudate, and thalamus, in connection with the DMN [ 22 ]. Increased FC in the limbic and reward networks has been confirmed as a major biomarker for IA [ 45 , 46 ]. The increased reinforcement in these networks strengthens reward stimuli and makes it more difficult for other networks, namely the ECN, to down-regulate the heightened attention [ 29 ] (see Fig 5 ).

Executive control network (ECN).

The numerous IA-affected components of the ECN play a role in a variety of behaviours connected to both response inhibition and emotional regulation [ 47 ]. For instance, brain regions such as the striatum, which are linked to impulsivity and the reward system, are heavily involved in playing online games [ 47 ]. Online game play activates the striatum, which suppresses the left DLPFC in the ECN [ 48 ]. As a result, people with IA may find it difficult to control their urge to play online games [ 48 ]. This mechanism thus produces impulsive and protracted gaming behaviour and a lack of inhibitory control, leading to continued, excessive internet use despite a variety of negative effects, personal distress, and signs of psychological dependence [ 33 ] (see Fig 5 ).

Wang et al. (2017) report that disruptions in cognitive control networks within the ECN are frequently linked to characteristics of substance addiction [ 24 ]. Previous studies of samples addicted to heroin and cocaine discovered abnormal FC in the ECN and the PFC [ 49 ]. Electronic gaming is known to promote striatal dopamine release, similar to drugs of addiction [ 50 ]. Drgonova and Walther (2016) hypothesised that dopamine could stimulate the reward system of the striatum, leading to a loss of impulse control and a failure of prefrontal executive inhibitory control [ 51 ]. Ultimately, IA’s resemblance to substance use disorders may point to vital biomarkers or underlying mechanisms that explain how cognitive control and impulsive behaviour are related.

A task-related fMRI study found that the decrease in FC between the left DLPFC and dorsal striatum was congruent with increased impulsivity in adolescents with IA [ 26 ]. The lack of response inhibition from the ECN results in a loss of control over internet usage and a reduced capacity for goal-directed behaviour [ 33 ]. Previous studies have linked the alteration of the ECN in IA with higher cue reactivity and an impaired ability to self-regulate internet-specific stimuli [ 52 ].

Salience network (SN)/ other networks.

Xing et al. (2014) investigated the significance of the SN for cognitive control in adolescents with IA [ 21 ]. The SN, composed of the ACC and insula, has been demonstrated to control dynamic changes in other networks to modify cognitive performance [ 21 ]. The ACC is engaged in conflict monitoring and cognitive control, according to previous neuroimaging research [ 53 ]. The insula is a region that integrates interoceptive states into conscious feelings [ 54 ]. Although Xing et al. (2014) did not observe any appreciable change in FC in the IA participants, they did find declines in the SN’s structural connectivity and fractional anisotropy [ 21 ]. Given the small sample size, the FC measures may not have been sensitive enough to detect significant functional changes [ 21 ]. However, these findings correlated with task-performance behaviours associated with impaired cognitive control in adolescents with IA [ 21 ]. This relationship can enhance our comprehension of the SN’s broader function in IA.

Research supports the idea that different psychological issues arise from the functional reorganisation of expansive brain networks, such that the strong association between the SN and DMN may provide system-level neurological underpinnings for the uncontrollable character of internet-use behaviours [ 24 ]. In the study by Wang et al. (2017), the decreased interconnectivity between the SN and DMN, comprising regions such as the DLPFC and the insula, suggests that adolescents with IA may struggle to effectively inhibit DMN activity during internally focused processing, leading to poorly managed desires or preoccupations with internet use [ 24 ] (see Fig 5 ). This may in turn cause a failure to inhibit DMN activity as well as a restriction of ECN functionality [ 55 ]. As a result, adolescents experience increased salience of and sensitivity to internet-addiction cues, making these triggers difficult to avoid [ 56 ].

The primary aim of this review was to summarise how internet addiction impacts the functional connectivity of the adolescent brain. The influence of IA on the adolescent brain was compartmentalised into three sections: alterations of FC in various brain regions, specific FC relationships, and behavioural/developmental changes. Overall, the specific effects of IA on the adolescent brain were not completely clear, given the variety of FC changes. However, there were well-supported overarching behavioural, network, and developmental trends that provided insight into adolescent development.

The first hypothesis was that the effects of IA would be widespread and regionally similar to those of substance-use and gambling addiction. The review of the selected articles supported this hypothesis. The regions of the brain affected by IA are widespread and span multiple networks, mainly the DMN, ECN, SN, and reward pathway. In the DMN, there was a complex mix of increases and decreases in FC within the network. In the ECN, the alterations of FC were more uniformly decreased, while the findings for the SN and reward pathway were less clear. Overall, the FC changes in adolescents with IA are largely network-specific and lay a solid foundation for understanding the subsequent behavioural changes that arise from the disorder.

The second hypothesis emphasised the importance of between-network and within-network interactions in the continuation of IA and the development of its behavioural symptoms. The findings involving the DMN, SN, ECN, and reward system support this hypothesis (see Fig 5 ). Studies confirm the influence of all these neural networks on reward valuation, impulsivity, salience to stimuli, cue reactivity, and other changes that alter behaviour towards internet use. Many of these changes are connected to the inherent nature of the adolescent brain.

Multiple explanations underlie the vulnerability of the adolescent brain to IA-related urges, several of which concern its inherent nature and underlying mechanisms. Children’s emotional, social, and cognitive capacities grow exponentially during childhood and adolescence [ 57 ]. Early adolescents go through a process called “social reorientation” characterised by heightened sensitivity to social cues and peer connections [ 58 ]. Adolescents’ improvements in social skills coincide with changes in their brains’ anatomical and functional organisation [ 59 ]. Functional hubs exhibit growing connectivity strength [ 60 ], suggesting increased functional integration during development. During this time, the brain’s functional networks shift from an anatomically dominated structure to a more distributed architecture [ 60 ].

The adolescent brain is highly responsive to synaptic reorganisation and experiential cues [ 61 ]. As a result, one of the distinguishing traits of adolescent brain maturation is variation in neural network trajectories [ 62 ]. Features such as the functional gaps between networks and the inadequate segregation of networks illustrate important vulnerabilities of the adolescent brain that may explain the neurobiological changes brought on by external stimuli [ 62 ].

The implications of these findings for adolescent behaviour are significant. Although the exact changes and mechanisms are not fully clear, the observed changes in functional connectivity have the capacity to influence several aspects of adolescent development. For example, functional connectivity has been used to investigate attachment styles in adolescents [ 63 ]: adolescent attachment styles were negatively associated with caudate-prefrontal connectivity but positively associated with putamen-visual area connectivity [ 63 ]. Both areas were also influenced by the onset of internet addiction, suggesting a possible connection between the two. Another study associated neighbourhood/socioeconomic disadvantage with functional connectivity alterations in the DMN and dorsal attention network [ 64 ], and found multivariate brain-behaviour relationships between the altered functional connectivity and mental health and cognition [ 64 ]. This supports the notion that the functional connectivity alterations observed in IA are associated with specific adolescent behaviours, and that functional connectivity can serve as a platform for comparing various neurological conditions.

Limitations/strengths

Several limitations related to the conduct of the review as well as to the data extracted from the articles. Firstly, the study followed a systematic literature review design when analysing the fMRI studies. The data drawn from these imaging studies were primarily qualitative and thus subject to bias, in contrast to quantitative statistical analysis; components such as sample sizes, effect sizes, and demographics were not weighted or controlled. The second limitation, raised by a similar review, is the lack of a universal consensus on terminology for IA [ 47 ]. Authors writing on this topic use an array of terms, including online gaming addiction, internet addiction, internet gaming disorder, and problematic internet use, often interchangeably, which makes it difficult to capture the subtle similarities and differences between them.

Reviewing the explicit limitations in each of the included studies, two major limitations recurred across many of the articles. One related to the cross-sectional nature of the included studies: by design, such studies cannot provide clear evidence that IA plays a causal role in the development of the adolescent brain. While several biopsychosocial factors mediate these interactions, task-based measures that combine executive function testing with imaging reinforce the connection assumed by the papers studying IA. The other limitation was the small sample size of the included studies, which averaged around 20 participants. Small sample sizes limit the generalisability of the results as well as the power of statistical analyses. Both of these study-specific limitations illustrate the need for future studies to clarify the causal relationship between alterations of FC and the development of IA.

Another important limitation was that the small number of imaging studies investigating IA in adolescents formed a uniformly Far Eastern collection. This was because the studies included in this review were the only fMRI studies found that adhered to the strict adolescent age restriction: the WHO adolescent age range (10–19 years old) [ 65 ] was strictly followed. Notably, many studies found in the initial search used an older adolescent demographic slightly above the WHO age range, with a mean age outside the limit. As a result, the conclusions of this review are constrained by the 12 studies that met the inclusion and exclusion criteria.

Regarding the global nature of the research, although the studies were published in established Western journals, they all originated from Asian countries, namely China and Korea. This calls into question whether the results and measures from these studies generalise to a Western population. As stated previously, Asian countries have a higher prevalence of IA, which may explain why the majority of studies come from there [ 8 ]. However, an additional search including other age groups found that the large majority of all FC studies on IA were conducted in Asian countries. Interestingly, Western papers studying fMRI FC focused primarily on gambling and substance-use addiction disorders, while Western papers on IA concentrated less on fMRI FC and more on other components of IA, such as sleep, game genre, and other non-imaging factors. This demonstrates an overall lack of Western fMRI studies on IA. Notably, both Western and Eastern fMRI studies on IA showed an overall lack of samples of children and adolescents.

Despite these limitations, this review provides a clear reflection of the state of the data. Its strengths include the strict inclusion/exclusion criteria, which filtered the studies so that only those with a purely adolescent sample were included. As a result, the information presented is specific to the review’s aims. Given the sparse nature of adolescent-specific fMRI studies on FC changes in IA, this review provides a much-needed representation of adolescent-specific results. Furthermore, it offers a thorough functional explanation of the DMN, ECN, SN, and reward pathway, making it accessible to readers new to the topic.

Future directions and implications

The search process revealed that most imaging studies focused on older adolescence and adulthood. Moreover, finding a review that covered a strictly adolescent population, focused on FC changes, and specifically depicted IA proved difficult. Many related reviews, such as Tereshchenko and Kasparov (2019), examined risk factors related to the biopsychosocial model but did not address specific structural or functional changes in the brain [ 66 ]. Weinstein (2017) found similar structural and functional results, as well as a role for IA in altering response inhibition and reward valuation in adolescents [ 47 ]. Overall, the accumulated findings paint only an emerging pattern, one that aligns with similar substance-use and gambling disorders. Future studies require more specificity in depicting the interactions between neural networks, as well as more literature on adolescent and comorbid populations. One future field of interest is the incorporation of more task-based fMRI data. Advances in resting-state fMRI methods have yet to be reflected or confirmed in task-based fMRI methods [ 62 ]. Because network connectivity is shaped by different tasks, it is critical to confirm that the findings of resting-state fMRI studies also apply to task-based ones [ 62 ]. Work in this area will confirm whether intrinsic connectivity networks observed at rest function similarly during goal-directed behaviour [ 62 ]. An elevated focus on adolescent populations, together with task-based fMRI methodology, will help uncover to what extent the maturation of adolescent network connectivity facilitates behavioural and cognitive development [ 62 ].

A treatment implication is the potential use of bupropion for IA. Bupropion has previously been used to treat patients with gambling disorder and has been effective in decreasing overall gambling behaviour as well as money spent while gambling [ 67 ]. Bae et al. (2018) found a decrease in clinical symptoms of IA following a 12-week course of bupropion [ 31 ]. The study found that bupropion altered the FC of both the DMN and ECN, which in turn decreased impulsivity and attentional deficits in individuals with IA [ 31 ]. Interventions like bupropion illustrate the importance of understanding the fundamental mechanisms underlying disorders like IA.

The goal of this review was to summarise the current literature on functional connectivity changes in adolescents with internet addiction. The findings answered the primary research questions concerning FC alterations within several networks of the adolescent brain and how these influence behaviour and development. Overall, the research demonstrated wide-ranging effects on the DMN, SN, ECN, and reward centres. Additionally, the findings highlighted important details such as the maturation of the adolescent brain, the high prevalence of Asian-originated studies, and the importance of task-based studies in this field. The process of producing this review allowed for a thorough understanding of the interactions between IA and the adolescent brain.

Given the influx of technology and media into the lives and education of children and adolescents, increased attention to internet-related behavioural changes is imperative for future child and adolescent mental health. Events such as the COVID-19 pandemic exposed the consequences of extended internet usage for the development and lifestyle of young people in particular. While parents and older generations should be wary of these changes, they should also develop a basic understanding of the issue rather than dismissing it as all bad or all good. Future research on IA should aim to better understand the causal relationship between IA and the psychological symptoms that coincide with it. The current literature on functional connectivity changes in adolescents is limited, and future studies should test larger sample sizes, comorbid populations, and populations outside Far East Asia.

This review aimed to demonstrate how IA alters the connections between the primary behavioural networks in the adolescent brain. The present answers paint an unfinished picture, one that does not depict internet usage as overwhelmingly positive or negative. Rather, the research points towards emerging patterns that can inform individuals about the consequences of certain variables or risk factors. A clearer depiction of the mechanisms of IA would allow physicians to screen for and treat its onset more effectively. Clinically, this could take the form of more streamlined and accurate sessions of CBT or family therapy targeting key symptoms of IA; alternatively, clinicians could prescribe treatments such as bupropion to target FC in certain regions of the brain. Furthermore, parental education on IA is another possible avenue of prevention from a public health standpoint. Parents who are aware of the early signs and onset of IA can more effectively manage screen time and impulsivity and minimise the risk factors surrounding IA.

Additionally, as mentioned previously, increased attention to internet-related fMRI research is needed in the West. Despite cultural differences, Western countries may resemble Eastern countries with a high prevalence of IA, such as China and Korea, regarding the implications of the internet and IA. The increasing influence of the internet worldwide may contribute to an overall rise in the global prevalence of IA. The highly saturated Eastern literature in this field should therefore be replicated with Western samples to determine whether the same FC alterations occur. Growing interest in internet-related research and education in the West will hopefully foster knowledge of healthier internet habits and coping strategies among parents of children and adolescents. Furthermore, IA research has the potential to become a crucial proxy for studying adolescent brain maturation and development.

Supporting information

S1 Checklist. PRISMA checklist.

https://doi.org/10.1371/journal.pmen.0000022.s001

S1 Appendix. Search strategies with all the terms.

https://doi.org/10.1371/journal.pmen.0000022.s002

S1 Data. Article screening records with details of categorized content.

https://doi.org/10.1371/journal.pmen.0000022.s003

Acknowledgments

The authors thank https://www.stockio.com/free-clipart/brain-01 (with attribution to Stockio.com) and https://www.rawpixel.com/image/6442258/png-sticker-vintage for the free images used to create Figs 2–4.

  • 2. American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-5. 5th ed. Washington, DC: American Psychiatric Publishing; 2013.
  • 10. Internet World Stats. World Internet Users Statistics and World Population Stats. 2013. http://www.internetworldstats.com/stats.htm
  • 11. Rideout V, Robb MB. The common sense census: media use by tweens and teens. San Francisco, CA: Common Sense Media; 2019.
  • 37. Tremblay L. The ventral striatum. In: Handbook of reward and decision making. Academic Press; 2009.
  • 57. Bhana A. Middle childhood and pre-adolescence. In: Promoting mental health in scarce-resource contexts: emerging evidence and practice. Cape Town: HSRC Press; 2010. p. 124–42.
  • 65. World Health Organization. Adolescent health. 2023. https://www.who.int/health-topics/adolescent-health#tab=tab_1
