
Addressing bias in artificial intelligence for public health surveillance
  1. Lidia Flores1,
  2. Seungjun Kim1,
  3. Sean D Young1,2
  1. 1 Department of Informatics, University of California Irvine, Irvine, California, USA
  2. 2 Department of Emergency Medicine, School of Medicine, University of California, Irvine, Irvine, CA, USA
  1. Correspondence to Sean D Young, Department of Emergency Medicine, University of California Irvine, Irvine, USA; syoung5@hs.uci.edu

Abstract

Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights on disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is described as the difference between the predictive values and true values within the modelling of an algorithm. Bias within algorithms may lead to inaccurate healthcare outcomes and exacerbate health disparities when results derived from these biased algorithms are applied to health interventions. Researchers who implement these algorithms must consider when and how bias may arise. This paper explores algorithmic biases as a result of data collection, labelling and modelling of NLP algorithms. Researchers have a role in ensuring that efforts towards combating bias are enforced, especially when drawing health conclusions derived from social media posts that are linguistically diverse. Through the implementation of open collaboration, auditing processes and the development of guidelines, researchers may be able to reduce bias and improve NLP algorithms that improve health surveillance.

  • ethics- medical
  • ethics- research
  • ethics
  • decision making
  • information technology

Data availability statement

Data sharing not applicable as no datasets generated and/or analysed for this study. No data are available.

Background

In recent years, public health surveillance has integrated novel datasets from online social media platforms to improve the timeliness of health data acquisition.1–5 These forms of ‘social big data’ have shown promise as a potentially cost-effective, real-time data source for health surveillance.2 3 Health-related data from social media platforms, such as Twitter and Google Trends (GT), have been used to monitor diseases,3 predict outbreaks2 and understand public health crises.5 For instance, researchers have used tweets containing conversations about HIV risk behaviours to detect diagnosis outcomes.3 Similarly, to address the opioid crisis, public health researchers have monitored opioid conversations on Twitter to identify popular consumption methods across regions.5

Traditional methods of health surveillance involve data sourced from hospital facilities or public health department surveys.6 However, traditional data sources present delays in data reporting. For instance, geographical data on HIV diagnoses are reported in yearly increments by the Centers for Disease Control and Prevention (CDC).7 In 2020, as the COVID-19 pandemic unfolded, gaps in data made addressing the pandemic challenging.8 9 Improving the timeliness of data may help inform interventions. For instance, when addressing the opioid crisis through interventions that focus on overdose prevention, timely data in a constantly evolving drug market may directly influence the effectiveness of an intervention.9 10

Researchers have integrated a variety of artificial intelligence (AI) applications, such as supervised machine learning (ML) algorithms, to analyse social big data. Natural language processing (NLP) algorithms have been incorporated for sentiment analysis, named-entity recognition and topic classification tasks. Studies that analyse Twitter data for sentiment analysis apply a variety of algorithms, such as Naive Bayes, Support Vector Machines and Gradient Boosting Decision Trees.11 12 Classification algorithms have been used to understand the opioid crisis,11 the COVID-19 pandemic13 and the HIV epidemic.14 NLP algorithms improve the accuracy and efficacy of analysing language in large volumes.15 These methods may allow researchers to gain insights on disease symptoms and make predictions on potential outbreaks in a timely fashion.3 15 16 NLP techniques, such as sentiment analysis, may be used to identify deterrents and/or barriers to taking preventative medications or vaccinations.17 18 Topic modelling may be used to identify disease symptoms across different regions.19 However, NLP approaches to health surveillance bring forth unique challenges related to bias that may differ from those found in traditional health surveillance methods.20 Data collection through search queries and the manual labelling of data by human annotators may each present areas for potential bias to arise.21 There is an increasing need to learn how to identify and address problems resulting from the use of AI on social data. This need will continue to grow as newly accessible AI tools that analyse social media data, such as ChatGPT, become available to the general public.22

Within the scope of this paper, we define bias as the systematic difference between the predicted outcome distribution of an algorithm and the theoretical ideal distribution.21 23 Differences between predicted and true values may arise in data collection, labelling or training, among other areas.21 23 Following Shah et al’s definition of bias, outcome disparities within an algorithm may result in misrepresented populations or skewed results.23 Within the field of NLP, bias commonly refers to a systematic prediction error, also known as a bias ‘error’, due to incorrect assumptions made in the training and testing of a model.21 23 24 While there is abundant literature on biases in human decision-making, for example, the evaluation of implicit bias and its impact on healthcare outcomes, this paper will not focus on biases that are the result of human judgement or behaviour.25 Instead, it focuses on biases that reveal a systematic error in algorithms using health-related data, which may result from a number of different factors including human biases, accessibility of data and data collection processes. The bias–variance trade-off in supervised ML is a framework for improving prediction performance by evaluating how closely an algorithm’s predicted values reflect the true relationship, also known as the target outputs.14 Theoretically, the systematic failures that may occur within the NLP pipeline and contribute to a mismatch between predicted and true values are biases that may lead to inaccurate health predictions.21 23 26–28
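
For readers who want to connect this definition to the standard statistical learning formulation, the expected squared prediction error can be decomposed as shown below; this is the textbook bias–variance decomposition, included here as background rather than drawn from the cited works.

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\Big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\Big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}
```

Here f denotes the true relationship, f̂ the fitted model and σ² the irreducible noise; the bias term corresponds to the systematic mismatch between predicted and true values discussed above.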

Algorithmic bias resulting from data collection

When using AI algorithms to analyse social big data, bias within an algorithm may originate in the data collection process.26 27 Data collection may lack representativeness among groups or misrepresent populations.26–28 Within the scope of supervised ML for sentiment analysis, it is important for researchers to evaluate the methods they are employing in their data collection process. Research studies that use Twitter’s application programming interface to scrape tweets require the development of a search query for tweet extraction.29 Search queries combine keywords with Boolean operators to retrieve matching tweets and compile a dataset. Keywords or hashtags selected for this extraction process may misrepresent populations if the words of interest do not encompass the vernacular of each subregion or population of interest.27 For instance, when evaluating tweets related to the opioid crisis, queries containing scientific or brand names for opioids rather than street names may exclude users who describe their non-medical opioid use in vernacular terms.5 30
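
As an illustration of how keyword choice shapes the collected sample, the short sketch below assembles an OR-based search query from clinical terms alone and from clinical plus colloquial terms; the keyword lists are illustrative assumptions rather than a validated opioid lexicon, and the syntax only loosely follows Twitter-style search operators.

```python
# Minimal sketch: how keyword choice changes what a search query can capture.
# The terms below are illustrative only, not a validated opioid lexicon.
clinical_terms = ["oxycodone", "hydrocodone", "fentanyl"]
street_terms = ["oxys", "percs"]  # hypothetical colloquial terms a narrow query would miss


def build_query(terms):
    # Join quoted keywords with the Boolean OR operator, in the style of Twitter search syntax
    return " OR ".join(f'"{term}"' for term in terms)


narrow_query = build_query(clinical_terms)
broad_query = build_query(clinical_terms + street_terms)
print("Narrow query:", narrow_query)
print("Broadened query:", broad_query)
```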

Researchers focused on understanding geographical differences in diagnoses or symptoms may encounter misrepresented data, as only 1%–2% of all tweets contain geographical metadata.31 Training an algorithm with a dataset that excludes roughly 98% of tweets on Twitter may lead to biased results.26 27 Furthermore, Twitter datasets used to train algorithms for medical predictions may not be representative of populations within the regions being explored, as non-Twitter users are excluded from the analysis.27 Individuals with limited internet access may also be misrepresented, as their data may not be available.27 These factors may cause models to deviate from expected performance when applied across different populations (eg, rural vs urban).32 33 Geographical data that misrepresent populations will inevitably be biased. It is important to understand the variety of elements that may contribute to bias within algorithms in order to strengthen the robustness of health datasets used for monitoring disease outbreaks.32 33
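
Before drawing geographical conclusions, a simple coverage check along the lines below can flag how little of a collected dataset actually carries location information; the dictionary structure of each tweet is a simplified assumption, not the exact schema returned by any particular collection tool.

```python
def geo_coverage(tweets):
    """Return the fraction of tweets that carry usable geographical metadata.

    Each tweet is assumed to be a dict with an optional 'geo' field,
    a simplified stand-in for the structure a collection tool might return.
    """
    if not tweets:
        return 0.0
    return sum(1 for tweet in tweets if tweet.get("geo")) / len(tweets)


# Toy example: coverage this low warns against making region-level claims
sample = [
    {"text": "feeling feverish today", "geo": None},
    {"text": "flu shots available downtown", "geo": {"place": "Irvine, CA"}},
    {"text": "stocking up on cold medicine"},
]
print(f"Geo coverage: {geo_coverage(sample):.0%}")
```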

Researchers who use GT data must also keep in mind that GT provides only a sample of all search queries on Google.34 It is the researcher’s responsibility to ensure adequate steps are taken to reduce sampling bias (eg, observer bias in query development, survivorship bias) when collecting internet data for health surveillance. It is important to note that while an algorithm may display high accuracy in its performance, it may still deviate from the true values of a population and hence lead to inaccurate health-related conclusions. Inaccurate predictions or recommendations may impact public health decision-making, resulting in poor or inadequate health treatments.35

Systematic errors within NLP algorithms may also arise as a result of the unique nature of sampling big data from platforms such as Twitter and GT. Both platforms allow researchers to select a specific time frame for retrieving data, and researchers may collect thousands to millions of tweets retroactively.29 While these features are convenient for data acquisition and health surveillance objectives, human biases may arise when researchers specify a time frame to support a predetermined conclusion. As a result, the time frames selected may not always be representative.36 The constraints in research designs for social big data may lead to omitted variable bias or confounding, where meaningful features are not accounted for in the model formulation or extraneous variables are not properly controlled for. In addition, previous literature has pointed to the potential unreliability of Twitter data due to tweet deletions and account suspensions, new forms of risk that have arisen as social media data have begun to be used for public health research.24 Tweet deletions and account suspensions may impair reproducibility, an important component of research transparency and accountability.24

Bias in algorithmic techniques

In addition to bias that may result from data collection methods, biases within algorithms for public health surveillance may also emerge throughout the different phases of training and testing an algorithmic model. For instance, there are a variety of ways in which data labelling may lead to systematic errors in the predictive nature of a model.37 In order to use supervised ML for classification tasks, labelling of each observation is necessary for the model to learn the relationship between the features and the target variable.38 Sentiment analysis, for example, is a type of NLP task that researchers employ to understand public attitudes, beliefs and opinions towards health policies.39 For instance, if a researcher wants to build a classifier that predicts tweet sentiment towards COVID-19 mask policies, annotators need to label each tweet as containing either negative or positive sentiment for the collected tweets to be usable as training data. The annotation results may be skewed when annotators are not representative of the overall population in terms of demographics, values, beliefs and socioeconomic inclinations.28 Unreliable annotators or labelling categories that are too broad or too narrow may also lead to an algorithm that contains bias.28 Low agreement among annotators may create noise within the dataset; even if the resulting algorithm performs reasonably well, its results may still deviate from the expected results.27 28
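
A routine check on annotation quality is to measure inter-annotator agreement before training; the sketch below uses Cohen’s kappa from scikit-learn on fabricated sentiment labels to show what such a check might look like.

```python
from sklearn.metrics import cohen_kappa_score

# Fabricated labels from two annotators for the same eight tweets
# (1 = positive sentiment towards the policy, 0 = negative)
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
# A low kappa flags unreliable labels that would inject noise into the training data
print(f"Cohen's kappa: {kappa:.2f}")
```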

Human biases may also permeate the data, algorithms and models that are used for public health surveillance. NLP models are typically trained on massive datasets generated by users that make up society; the biases they exhibit are often indicative of societal perceptions towards certain entities or identities.27 40 An illustrative case comes from Google researchers who evaluated a prediction model for detecting negative sentiment.40 This study focused on bias towards people with disabilities manifested through a toxicity model. Within the scope of this manuscript, toxicity is defined as content associated with threats, obscenity, hate and/or insults. The toxicity model was developed in a Kaggle competition hosted by the Conversation AI Team, a research initiative spearheaded by Jigsaw and Google.41 The model was trained to classify text as either toxic or non-toxic language and returned the predicted probability of a text containing toxic language. Observations labelled as toxic in the training data often displayed high correlation with threats, obscenity and insults. Results suggested that texts containing disability-related terms were associated with a higher predicted probability of toxic language even when the terms did not include expressions normally associated with toxicity (eg, profanity or vulgar expressions).40 For instance, the sentence ‘I am a person with mental illness’ had a predicted toxicity probability of 0.62, which was 20 times higher than that of the sentence ‘I am a tall person’, which did not contain any disability-related terms.40 The expectation would be that both sentences produce similar toxicity probabilities; however, statements in which an individual identifies with a disability were often flagged as negative. These findings reveal the existence of human biases towards people with disabilities in both the NLP model and the text corpora used to train it.
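
To make the mechanism concrete, the toy sketch below trains a small scikit-learn text classifier on fabricated examples in which disability-related wording co-occurs with ‘toxic’ labels, then probes it with the two sentences quoted above. It is a deliberately simplified illustration of how biased labels propagate into predictions, not a reproduction of the cited toxicity model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Fabricated training corpus in which disability-related wording co-occurs with 'toxic' labels,
# mimicking the labelling pattern described above.
texts = [
    "you are an idiot", "what a stupid take",
    "he struggles with mental illness", "people with mental illness are scary",
    "I am glad I am a tall person", "lovely weather today",
    "I enjoyed the film", "great talk thank you",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = labelled toxic, 0 = labelled non-toxic

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probing with paired sentences surfaces the learned association: the sentence containing
# disability-related wording tends to receive a higher predicted toxicity score.
for probe in ["I am a person with mental illness", "I am a tall person"]:
    p_toxic = model.predict_proba([probe])[0][1]
    print(f"{probe!r}: predicted toxicity {p_toxic:.2f}")
```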

Another example of elements that may lead to systematic errors in algorithms can be found in the text representation algorithms used in psychiatry to analyse mental health terminology.42 This study revealed significant biases due to human judgement or behaviours with respect to religion, race, gender, nationality, sexuality and age in GloVe and Word2Vec embeddings within a public health setting.42 For instance, results suggested significant associations between negative sentiment words and terms used to denote people of working-class socioeconomic status, senior citizens, Muslims and popular African American names.
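
A lightweight way to probe such embedding associations is to compare cosine similarities between identity terms and sentiment-laden words, as sketched below with pretrained GloVe vectors loaded via gensim; the word lists are illustrative choices, and this approach is far cruder than the formal association tests used in the cited study.

```python
import gensim.downloader as api

# Small pretrained GloVe vectors (downloaded on first use); the specific model is illustrative
glove = api.load("glove-wiki-gigaword-50")

identity_terms = ["muslim", "christian", "elderly", "young"]  # illustrative identity terms
sentiment_terms = ["pleasant", "unpleasant"]                  # illustrative sentiment words

for identity in identity_terms:
    for sentiment in sentiment_terms:
        # Cosine similarity between word vectors; systematic asymmetries hint at encoded associations
        print(f"{identity:10s} ~ {sentiment:12s}: {glove.similarity(identity, sentiment):.3f}")
```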

Bias within algorithms may also be attributed to the English-centred nature of the NLP sphere.21 English is generally the default language for NLP research, and many algorithms developed using English text, such as the n-gram model, have been extended to other languages even though they may not yield the same performance or accuracy because of differences in syntax, semantics and morphology.21

It is worth noting that human biases such as systemic racism, sexism and discrimination may be amplified through these different channels of bias that arise within NLP.43–45 Human biases from users of Twitter and Google are reflected in the data those users generate, which serves as the raw material for developing algorithms and training models.46 These models, in turn, may generate new sets of training data that cause an algorithm’s predicted values to deviate from the true values. Findings yielded from these algorithms may further reinforce human biases.47

Addressing bias

It is imperative for researchers developing AI algorithms for the detection of disease outbreaks and diagnoses to address bias. Populations affected by health disparities due to racial, ethnic or socioeconomic factors require interventions unique to each population. Algorithmic bias, within health-sensitive contexts, may misrepresent populations; as a result, medical responses to diseases and outbreaks may be affected. In 2015, Google Flu Trends, a platform that used search trends to identify influenza outbreaks, was removed due to inaccuracies.48 One study attributed the removal and inaccuracies of Google Flu Trends to biases in the following areas: search algorithm inconsistencies, external influences on searches, confounding of search terms and representativeness.49 Populations such as children or the elderly may not have been accurately reflected; as a result, Google Flu Trends failed to accurately forecast influenza activity. Biases that arise within data collection, search algorithms and search term selection may all play a role in inaccurate health results. This is particularly important to address within public health contexts, as bias may have adverse clinical implications and risks that exacerbate health inequities across populations. Medical consequences create higher stakes than algorithmic bias within non-health-sensitive contexts.

One approach to addressing bias within AI algorithms is to ensure fairness in the data collection process through open collaboration among researchers, public health experts, data annotators and programmers. Open collaboration aims to facilitate discussions that improve the representativeness of the data. This representativeness would be achieved by diversifying the pool of AI talent who can work together to make designs more value-sensitive and curate higher quality training data that are representative of various demographics and social needs.50 For instance, Hugging Face is an open-source platform created through the open collaboration of thousands of organisations from both academia and industry to democratise ML.51 It has received attention for making available both state-of-the-art NLP models released by technology companies and research institutions and the datasets used to build them, thereby improving the transparency and accessibility of algorithms and data.51 Transparency and accessibility through open collaboration may allow for more diverse contributors, hence increasing the likelihood of spotting and mitigating bias.52 As discussed in the 2021 UNESCO Recommendation on the Ethics of AI, involving patients, healthcare providers and domain experts in all developmental aspects is important for identifying bias and increasing accuracy.52 Open collaboration may be a step forward in contending with bias that may arise throughout the implementation of an NLP technique.
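
As a small example of the kind of transparency such platforms enable, the snippet below loads an openly documented sentiment model from the Hugging Face Hub using the transformers library; the specific checkpoint named here is one commonly shared public model and is used purely for illustration.

```python
from transformers import pipeline

# Load a publicly shared sentiment classifier from the Hugging Face Hub;
# because the model card and training data are documented, others can inspect and audit it.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Getting the vaccine was quick and easy"))
```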

To reduce label bias and improve inter-rater reliability, models have been devised to help researchers identify biased annotators and account for human disagreement between labels.21 However, these models are still imperfect and rely on the assumption that each data observation can be mapped to a single correct label, which may not always be true.21 This further underscores the importance of training and education. It is important to note that while this may provide a solution for identifying bias resulting from human judgement or behaviour, algorithmic bias may still arise within other components of labelling, such as the use of overly broad or narrow categories or sampling datasets from one social media source as opposed to multiple.

Another approach to addressing bias within algorithms is to use pre-existing data created and managed by institutions in public health sectors. Data from the CDC and/or public health department surveys may serve as benchmarks for comparing the accuracy and representativeness of social big data and may supplement traditional datasets to improve completeness. For example, researchers have used Twitter to evaluate the correlation between opioid-related tweets and CDC data on opioid-related deaths.11 Results suggested statistical correlations between the two datasets and reflected similar geographical patterns. While bias or errors may still be present within traditional health surveillance datasets, using traditional and novel NLP methods to supplement one another may help reduce bias and improve the representativeness of data across populations.
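
A simple form of such benchmarking is to correlate social-media-derived counts with official surveillance counts over the same periods; the sketch below computes a Pearson correlation on fabricated monthly figures, standing in for the tweet volumes and CDC mortality data compared in the cited study.

```python
from scipy.stats import pearsonr

# Fabricated monthly counts, used only to illustrate the benchmarking step
monthly_opioid_tweets = [120, 135, 150, 160, 170, 180, 200, 210, 220, 230, 240, 260]
monthly_cdc_deaths = [80, 85, 95, 100, 110, 115, 130, 135, 140, 150, 155, 170]

r, p_value = pearsonr(monthly_opioid_tweets, monthly_cdc_deaths)
# Strong agreement supports using the social data as a timely proxy; weak agreement flags possible bias
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```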

Establishing an audit system for AI used in public health surveillance based on fairness, accountability and transparency may also be an approach to reducing bias.53 Not only should developers of algorithms test and audit each component within the AI lifecycle, but a collective set of standards and fairness checks to reduce bias is also needed among corporations, institutions and government agencies.50 With respect to data acquisition, ensuring the inclusion of populations from all backgrounds (eg, rural and urban, minority groups) may address biases related to the under-representation of groups. Establishing principles that inform AI researchers on how to reduce bias may be a step towards more representative health data. The Belmont Report is a set of ethical principles and guidelines for human subject research and a good reference for establishing standards for AI-based social big data research.54 The principle of justice within the report highlights the importance of fairness and evaluates in which respects research subjects should be treated equally.55 While publicly available data from social media platforms are considered non-human subject research, principles from the Belmont Report may still be applicable guidelines for addressing bias in AI. Similarly, the principle of justice within the Principles of Biomedical Ethics textbook evaluates fairness across research subjects.56 Both sets of ethical principles discuss beneficence and ensuring that research results benefit participants.

The 2021 UNESCO Recommendation on the Ethics of AI was the first established set of ethical guidelines focused on creating a global standard for the implementation of AI.52 Within Policy Area 11, Health and Social Well-Being, the guidelines outline the importance of minimising and mitigating bias by ensuring domain experts are involved in the development process.52 Reasons listed within these guidelines include effective disease prediction and detection and medically accurate results. Further efforts to develop guidelines are underway, such as the European Union AI Act, which introduces legislative measures to regulate AI.57

For corporations, building a devoted team for detecting and auditing bias in their products and services may be beneficial in reducing bias during the algorithm development phase. As highlighted in the guidelines of the AI for Social Good (AI4SG) movement, corporations and institutions alike must take accountability for their governance of AI systems and coordinate their efforts to address bias.58

The main obstacle to auditing some types of AI can be the black-box nature of their algorithms.59 As a result, researchers have been turning their attention to a new subfield of AI called Explainable AI (XAI), with the goal of enhancing the transparency and interpretability of algorithms.60 Methods such as Local Interpretable Model-Agnostic Explanations (LIME),61 Partial Dependence Plots62 and SHapley Additive exPlanations (SHAP)63 were developed to provide explanations of how algorithms generate the outputs or predictions that users observe. Such XAI approaches are increasingly being used in AI for public health surveillance.64 One study proposed the Explainable Twitter Mining (Ex-Twit) framework, which combines topic modelling and LIME to predict topics and offer explanations for models built on health-related Twitter data.64 Another study used LIME to explain a depression detection model trained on depression-related Reddit data, another popular big data source for public health surveillance.65
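
The sketch below shows the general shape of a LIME explanation for a text classifier, using a tiny fabricated corpus and a scikit-learn pipeline; the data, labels and model are placeholders rather than those used in the cited studies.

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny fabricated corpus standing in for labelled health-related posts
texts = [
    "feeling anxious and cannot sleep", "so hopeless and exhausted lately",
    "everything feels pointless", "great run this morning",
    "enjoying time with family", "had a lovely relaxing weekend",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = concerning, 0 = not concerning (illustrative)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model around the prediction
explainer = LimeTextExplainer(class_names=["not concerning", "concerning"])
explanation = explainer.explain_instance(
    "lately I feel hopeless and anxious",  # instance to explain
    model.predict_proba,                   # black-box prediction function
    num_features=4,
)
print(explanation.as_list())  # word-level contributions to the predicted class
```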

Including domain experts within all development phases may improve the representativeness of data for diagnoses and treatments. For instance, data that only include tweets from Twitter may over-represent individuals with internet access and exclude those with limited access, hence potentially exacerbating health inequities. In public health surveillance, under-representing populations or misrepresenting potential diagnoses across regions may impact public health decision-making when such data are applied to medical interventions. The establishment of ethical guidelines may create a space for researchers to mitigate bias by identifying risk markers in the early stages of development.66 For instance, a model that is not complex enough to capture the underlying patterns within the training data may contain high bias. Guidelines for ensuring researchers take adequate steps to avoid underfitting, through cross-validation, regularisation or model selection, may help with mitigating bias (a minimal example of such a check is sketched below). Lastly, regulating AI through policy-making is crucial for maintaining standards on the quality, effectiveness and accuracy of medical results derived from AI. Lawmakers, corporations and researchers alike are responsible for cooperating in the development of policies that protect research subjects from potential harms and advance medical progress.
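
One way to operationalise such a check for underfitting is to compare candidate models with cross-validation before deployment; the sketch below does this on synthetic data with scikit-learn, purely to illustrate the workflow such guidelines might require.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a labelled surveillance dataset
X, y = make_classification(n_samples=500, n_features=20, n_informative=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, estimator in candidates.items():
    scores = cross_val_score(estimator, X, y, cv=5)
    # Consistently low scores across folds can signal underfitting rather than bad luck on one split
    print(f"{name}: mean accuracy {scores.mean():.2f} (+/- {scores.std():.2f})")
```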

Conclusion

The rising use of AI on social big data for health surveillance presents many challenges in relation to bias. Bias in NLP techniques may result from numerous components, such as the data collection or labelling processes. In addressing these concerns, open collaboration among experts of diverse backgrounds (eg, professions, race/ethnicity) is needed; doing so may expand the vernacular used in search queries developed for data acquisition. The development of an audit system for evaluating bias within AI implementations may also help reduce bias, and the creation of universal guidelines for addressing algorithmic bias may benefit the field of NLP. Ultimately, no approach to addressing this challenge is infallible; however, remaining aware of such biases and incorporating guidelines to address them may help.

Ethics statements

Patient consent for publication

References

Footnotes

  • Contributors SDY and LF contributed to the design and the formulation of the main arguments. LF, SK and SDY were responsible for the drafting and editing of the paper. LF, SK and SDY contributed to the final editing and approved the final version of the manuscript. LF is the guarantor and takes full responsibility for all aspects of the manuscript.

  • Funding This study was funded by the National Institute of Allergy and Infectious Diseases (NIAID, grant number: 5R01AI132030-05), the National Center for Complementary and Integrative Health (NCCIH), the National Institute on Minority Health and Health Disparities (NIMHD), and the National Institute on Drug Abuse (NIDA).

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
