
The risks of algorithmic (il)literacy on healthcare platforms

Going for a run… with big data. lzf/Shutterstock

The use of wearable technologies, mobile health applications and online health platforms is on the rise, allowing us to track and share our health data and to ask health-related questions in online discussion forums. In theory, this wealth of data allows us to manage our health more effectively and arrive better equipped when we visit the doctor. Such tools can also act as a new source of knowledge legitimacy, integrating “layman” input and enabling patient access to and control of information. While this growing patient access challenges the secrecy of the healthcare system and its proprietary power over patient data, these tools are also demonstrating the potential to have healing powers for patients, despite privacy and security concerns.

According to the Pew Research Center, more and more people are seeking health-related information online. In the United States, 93 million people do so every day, and among them, 55% seek information related to their medical condition prior to visiting the doctor. Digital platforms such as PatientsLikeMe, MedHelp and MyHealthTeams offer the potential to change the power dynamics that have long characterised the healthcare sector, bringing the focus back to the patient. Indeed, traditional medical research, with its scientific rigour, is shifting from a model controlled by researchers to one that crowdsources patient needs.

One such online health platform, PatientsLikeMe (PLM), brings together a community of diverse stakeholders – patients, doctors, caregivers, researchers, pharmaceutical companies, and the government – to collaborate on big data generation and medical research. The platform currently engages more than 600,000 members worldwide, and patients on PLM have generated 43 million data points to date. Patients with life-changing diseases or conditions, including multiple sclerosis, epilepsy and ALS, openly share data such as the medications they use and their side effects, lifestyle modifications, and diagnostic and prognostic disease information.

The company then pools and aggregates this patient-generated data for research, and analyses and visualises it with algorithmic tools. The data are sold to institutions and partner companies for medical research. As new treatments are developed, patients have the opportunity to modify their behaviours and better manage their health. Patients also share their data with their physicians, creating new forms of interaction between patients and doctors in clinical settings and increasing patient access to clinical trials. On its website, PLM asserts that it seeks to use artificial intelligence and other tools to “democratise learning” about health and medicine.
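To make the pooling and aggregation step concrete, here is a minimal sketch in Python with pandas. Everything in it is invented for illustration – the column names, the values and the summary statistics are hypothetical, not PLM’s actual (proprietary) pipeline or schema.

```python
import pandas as pd

# Hypothetical patient-generated reports: each row is one patient's
# self-reported experience with a medication (all names and values invented).
reports = pd.DataFrame({
    "patient_id": [101, 102, 103, 104, 105, 106],
    "condition":  ["MS", "MS", "epilepsy", "MS", "epilepsy", "ALS"],
    "medication": ["drug_a", "drug_a", "drug_b", "drug_a", "drug_b", "drug_c"],
    "side_effect_severity": [2, 3, 1, 4, 2, 3],  # 0 (none) to 5 (severe)
    "symptom_improvement":  [1, 0, 1, 0, 1, 0],  # 1 = patient reports improvement
})

# Pool and aggregate: for each condition/medication pair, count reporting
# patients, average the side-effect severity, and compute the share
# reporting improvement.
summary = (
    reports
    .groupby(["condition", "medication"])
    .agg(
        n_patients=("patient_id", "nunique"),
        mean_severity=("side_effect_severity", "mean"),
        improvement_rate=("symptom_improvement", "mean"),
    )
    .reset_index()
)

print(summary)
```

Aggregates like these are the kind of output a platform can sell to research partners and feed back to members; the individual rows underneath them are where the privacy and ownership questions discussed below arise.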

Indeed, AI health-care applications are being used for diagnostic and treatment purposes, designing new drugs and treatments, as well as supporting patients in their health decision making.

The risks of data illiteracy

Despite the promise of these platforms, increased stakeholder inclusiveness is essential for greater transparency in how our health data is shared with and used by others. As empowering and utopian as these technologies and platforms seem, one cannot help but think about the expert-versus-layman divide. In other words, how equipped are patients to provide input on their health data, and to interpret the data provided to them? In a 2008 article in the Journal of Macromarketing, I drew attention to the issue of new media literacy and its empowering potential – and also to the need for expertise to be able to use and benefit from these technologies.

In the case of today’s online health-tracking platforms, patients who are not sufficiently literate to use the tools and describe their symptoms will not be able to reap the benefits. In addition, data literacy requires not only cognitive and technical skills but also the ability to act on the data to manage our health. Hence a social challenge of big data algorithms emerges, as human judgment is required to make sense of the data and act on it (Gillespie, 2017; Kitchin, 2014). Such actions may also impact others’ health – for example, a person could track the mood swings of a friend, partner or family member, reminding her or him to, say, take medication. Although these self-tracking tools have significant potential, their use shifts responsibility to individuals, not only for their own health but also for that of others.

In addition, when corporations design tracking tools, they may not be able to capture all aspects of the patient experience, nor – most importantly – understand the language patients use to describe their experiences (Tempini, 2015). Consequently, patient and corporate illiteracy become the main contributors to a digital divide that hinders the capability to report, analyse and make sense of data, and to manage our own and others’ health accordingly.

Knowing too much

But what happens when patients are too literate as they track and report their data? Indeed, there is a risk of data manipulation: patients may curate their reports to present the “right” profile and make specific demands that cater to their own interests, which may then yield results that lead to unsuccessful treatments. This is alarming, as platforms such as PLM engage in patient-generated medical research in partnership with pharmaceutical and research institutions.

Being too literate (and manipulating data) or not literate enough (and failing to provide the necessary data) may obstruct the medical knowledge generation process. Patients with scientific skills could manipulate their data to sidestep privacy and security challenges, such as concerns over job security, insurance or criminal records. Furthermore, such risk carries over to patient-physician knowledge exchanges in clinical settings, as patients share their self-tracked data with their physicians. Although online health platforms build predictive data models that allow tracking of each reported change in drug use and symptoms, the accuracy of such models remains a concern.
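A toy simulation can show why manipulated self-reports worry anyone building predictive models on platform data. The sketch below, assuming scikit-learn and entirely fabricated data, fits a simple logistic regression on honest reports and then on the same data after a handful of strategically flipped records; the fitted coefficient shifts, illustrating how a few “too literate” reporters can bias what a model learns. None of this reflects any real platform’s modelling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated self-reports: dose taken (feature) vs. reported improvement
# (label). In the honest data, higher doses are modestly associated with
# improvement.
dose = rng.uniform(0, 10, size=200)
improved = (dose + rng.normal(0, 3, size=200) > 5).astype(int)

honest = LogisticRegression().fit(dose.reshape(-1, 1), improved)

# A small group of savvy users misreports to push for a treatment they
# want: they claim improvement at low doses regardless of experience.
tampered = improved.copy()
tampered[dose < 2] = 1  # strategic misreporting by a handful of patients

manipulated = LogisticRegression().fit(dose.reshape(-1, 1), tampered)

print("honest coefficient:     ", honest.coef_[0][0])
print("manipulated coefficient:", manipulated.coef_[0][0])
```

Even this crude example points in one direction: models trained on self-reported data inherit whatever incentives shaped those reports.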

As we are mesmerised by talk of openness, transparency, personalisation and empowerment, we often overlook the detrimental effects of such discourses on control and information asymmetry. As new data intermediaries, online health platforms control the flow and manipulation of data (Gillespie, 2017; Zuboff, 2019), serving as gatekeepers of big data generation and distribution. Zuboff (2015, 2019) forcefully describes the dangers of big data in the age of surveillance capitalism, arguing that it constitutes the “big other”: indecipherable mechanisms of extraction, commodification and control that exile people from their own behaviours and create seemingly non-democratic new markets.

Critical questions remain concerning the (mis)use of self-tracking tools as well as ethical and privacy issues. These include how the patient-generated data are being stored and used by third parties, who owns and controls the data, and to what extent patients should have a voice in the use, reuse and sale of their data.
