Do traditional bioethical solutions suffice in times of digital health?

Karin Jongsma is an assistant professor of medical ethics at the Julius Center of Utrecht University Medical Center, the Netherlands, and a postdoctoral fellow at the Department of Medical Ethics and History of Medicine in Göttingen, Germany. She was also our Caroline Miles Visiting Scholar in November 2017.

She works with Prof. Dr. Annelien Bredenoord (Utrecht) and Prof. Dr. Silke Schicktanz (Göttingen). Her research focuses on who should have a say in decision-making and representative practices, and she is particularly interested in digital health.

Apps and big data are increasingly used to track, analyse and predict health and health behaviour via smartphones, wearables and online behaviour. Health care has a history of failed IT investments, and health research has a reputation of being expensive to innovate in, yet commercial tech companies such as Google, Facebook and Apple have succeeded in creating momentum towards a digital change. These companies have developed and implemented technologies that offer innovative ways of collecting, storing and analysing complex and rich health-related data. This data-driven research and care may be referred to as digital health. The rising attention to big data and digital health has come with high expectations and is supposedly paradigm-changing. It is hoped that the possibilities of doing research on, and monitoring, patients and not-yet-patients will create new ways of predicting, treating and preventing illnesses (e.g. Topol 2015), but digital health will simultaneously create new risks and harms and will shift the dynamics of health research and health care.

One of the major questions for bioethics is how to deal with these new challenges. Many of the proposed solutions build on bioethical tools that we know from traditional health care: ‘improved’ forms of informed consent to safeguard patients’ autonomous choices, anonymised digital data to protect privacy, and certification to guarantee the quality of apps. It is no wonder that these tools are suggested as solutions, as digital health differs most obviously from traditional health care in its data-drivenness and data-sharing, which we protect in traditional health care with privacy regulation and anonymised data. Some critical voices have argued that de-identifying big data may not be desirable (Polonetsky et al. 2016), or even possible (Ohm 2010). With regard to informed consent, too, we may wonder whether it is the best way to protect autonomy in times of digital health (Obar & Oeldorf-Hirsch 2016).

Let me first make clear that I do not aim to downplay the relevance of the underlying values of privacy, autonomy and safety. It is, however, exactly the observation that traditional methods may not be desirable, possible or wise to apply in the digital context that should urge bioethicists to look beyond traditional solutions. In the following, I will point out a few concerns that are not tackled by privacy regulation, anonymised data or informed consent:

1) Driving forces: One of the major differences between traditional health research and digital health is the driving force behind this paradigmatic change. The initiating and driving role of tech companies in digital health research gives them access to, and influence over, which data is collected, how it is analysed and how it is applied. The process of deciding which data is collected and which information is sought is highly opaque and exclusive: these decisions are made primarily by engineers and other employees of these tech companies, and, not unlikely, in accordance with the companies’ commercial interests. Traditional ways of influencing the research agenda, for example via representatives of patient organisations, may not work for such commercial companies. This leaves patients with little influence over which research is conducted and how their data is handled backstage of these devices, websites and apps.

2) Different data: Remote data collection via apps and tracking devices makes it possible to study research subjects in real life, outside of experimental settings. This opens up ways to collect more data, more continuously, and of different types, without disturbing participants in their real-life setting, and it allows data collection over longer periods of time. Such continuous monitoring may result in so-called ‘chilling effects’ (Penney 2016): users change their behaviour because they feel they are being observed. Furthermore, their data-‘samples’ can be reused virtually infinitely, which means that it becomes more difficult for patients and research subjects to keep track of whether their ‘samples’ are being used and who has access to them. These concerns are not merely a matter of anonymised data and individual informed consent; they show that users require real control over their devices and data.

3) Social justice & benefit: Digital health comes with the promise of making health care more accessible to more people, as well as cheaper and better. But the involvement of commercial companies, with their strict focus on techno-fixes, suggests that not everyone will be served. The technologically illiterate and technology sceptics, as well as, probably, the poor and the very ill, may not benefit much from commercially driven digital technology. Who will be served remains an open empirical question, but governments and NGOs may have to keep an eye on those who may be further marginalised, vulnerable or in need of different care than what is offered as digital health.

These dynamics and challenges seem to call for new tools to safeguard the underlying values of health care. To better understand and govern the rapidly changing field of digital health, bioethicists will have to widen their toolbox. Some of these tools may be found by adapting existing solutions, but others will have to be developed. As digital health broadens the how and where of health care, as well as the actors involved in it, it makes sense to include in this search other disciplines (such as STS, anthropology, political science and health technology assessment) that observe the emerging changes from different angles, from different places and with different tools that may help to accommodate the challenges of digital health.


Obar JA & Oeldorf-Hirsch A. (2016). The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services. TPRC 44: The 44th Research Conference on Communication, Information and Internet Policy.

Ohm P. (2010). Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review. 57:1701.

Penney JW. (2016). Chilling Effects: Online Surveillance and Wikipedia Use. Berkeley Technology Law Journal. 31:117.

Polonetsky J, Tene O & Finch K. (2016). Shades of Gray: Seeing the Full Spectrum of Practical Data De-identification. Santa Clara Law Review. 56:593.

Sharon T. (2016). The Googlization of Health Research: From Disruptive Innovation to Disruptive Ethics. Personalized Medicine. 13(6):563–74.

Topol E. (2015). The Patient Will See You Now: The Future of Medicine is in Your Hands. New York: Basic Books.
