WQ2. What are the most pressing challenges and significant opportunities in the use of artificial intelligence to provide physical and emotional care to people in need?[1]

AI devices are now moving into some of our most intimate settings, augmenting, and in some cases replacing, human-given care. Smart home devices can give Alzheimer's patients medication reminders, pet avatars and humanoid robots can offer companionship, and chatbots can help veterans living with PTSD manage their mental health.

These intimate forms of AI caregiving challenge how we think of core human values, such as privacy, compassion, trust, and the very idea of care itself. In doing so, they raise questions about what conceptions of care and well-being should be encoded within these technologies and whether technology can be purpose-built with certain capabilities of care, such as compassion, responsiveness, and trustworthiness. We can even ask whether there are forms of care that intimate AI is better positioned to give than a human caregiver.

From smart home sensors to care robots, new markets in intimate AI also urge us to examine complex challenges surrounding the automation of care work, such as the continual data collection required for attentive care and the labor concerns of replacing human caregivers with autonomous systems. We must ask whether these devices perpetuate racial or gender stereotypes, as when social companion robots for the elderly speak only in a female voice, and whether AI technologies in health and welfare serve mainly to relieve us of our responsibilities towards people in need. The demand for physical separation and sterility during the COVID-19 pandemic has only raised further questions about the role of technological mediation, whether by a robot or a digital assistant whose help was called for in a patient's dying moments.[2]


[1] This topic was the subject of a two-day workshop entitled "Coding Caring," convened at Stanford University in May 2019. The workshop was organized with support from AI100, the Presence Artificial Intelligence in Medicine: Inclusion & Equity (AiMIE) 2018 Seed Grants, and the McCoy Family Center for Ethics in Society at Stanford University. The discussion involved practitioners from the health and AI industries along with designers and researchers bringing a diversity of analytical frameworks, including feminist ethics of care, bioethics, political theory, postcolonial theory, and labor theory. The workshop was co-organized by Thomas Arnold, Morgan Currie, Andrew Elder, Jessica Feldman, Johannes Himmelreich, and Fay Niker. More information on the event can be found at https://ai100.stanford.edu/sites/g/files/sbiybj9861/f/coding_caring_workshop_report_1000w_0.pdf.

[2] Ajmal Zemmar, Andres M. Lozano, and Bradley J. Nelson, “The rise of robots in surgical environments during COVID-19,” Nat Mach Intell 2, 566–572 (2020); “‘Alexa, Help’: Patient Begged Echo for Help Before Dying of COVID-19.”

Autonomous Systems Are Enhancing Human-to-Human Care

AI offers extraordinary tools to support caregiving and increase the autonomy and well-being of those in need. AI-analyzed X-rays now bring a higher degree of confidence to medical diagnoses. AI is starting to help clinicians diagnose and treat wounds faster and with more accuracy, while phone apps with AI capabilities allow patients to monitor chronic wounds from home, an especially useful function for patients in rural settings. Researchers are developing AI-powered wheelchairs to help children navigate with more independence and avoid obstacles, while trials have found that robot interventions improve language and social functioning in children with autism, who may feel comfortable around the robots’ consistent, repetitive behavior.[3]

Support for aging in place and institutional care will benefit from these technological interventions, which offer physical assistance and companionship as well as health and safety monitoring. Some patients may even express a preference for robotic care in contexts where privacy is an acute concern, as with intimate bodily functions or other activities where a non-judgmental helper may preserve dignity. These technologies can greatly improve patients’ lives, but they can also reshape traditional caring relationships. By mediating or replacing human-to-human care, their use raises questions about how care receivers form relationships with AI providers. More broadly, the introduction of AI also requires us to ask when AI care can augment human caring in ways that meaningfully address the inadequacies of current care systems. Might there also be situations in which AI offers short-term solutions but erodes important care infrastructures, whether familial or institutional, in the long term?


[3] Despoina Damianidou, Ami Eidels, and Michael Arthur-Kelly, “The Use of Robots in Social Communications and Interactions for Individuals with ASD: a Systematic Review,” Adv Neurodev Disord 4, 357–388 (2020).

Autonomous Systems Should Not Replace Human Care Relationships

While some occupational knowledge can be standardized and codified, most care relations require improvisation and an understanding of specific contexts that will be difficult, if not impossible, to generalize in code. Radiologists will remain in charge of cancer diagnoses, and in elder care, particularly for dementia patients, companion robots will not replace the human decision-makers who increase a patient’s comfort through intimate knowledge of their conditions and needs. The use of AI technologies in caregiving should aim to supplement or augment existing caring relationships, not replace them, and should be integrated in ways that respect and sustain those relationships.

Take the example of care robots or personal assistants replacing human caregiving. A major concern is that these technologies offer an illusory form of care and reciprocity. According to ethicist Nel Noddings, care is not simply the fulfilling of an instrumental need or outcome; caring is a relational act between caregiver and care receiver that requires time and commitment, presence, and attention, and should foster the care receiver’s independence and self-determination.[4] Good care demands respect and dignity, qualities that we simply do not know how to code into procedural algorithms.

While AI can now be trained to be more responsive and dynamic than in the past, care robots still have limited ability to interpret complex situations. They have less capacity for open-ended improvisation and no agency beyond their designed technical attributes. These systems are not moral agents, and they do not make sacrifices through their care work. Although a person might feel cared for by a robotic caregiver, the emotions associated with the relationship would not meet many criteria of human flourishing. There is also concern that the effects of artificial sentiment could be deceptive or manipulative. Human caregivers may at times be disinclined to deliver good care, given the great demands the caring process places on them, and AI could at times offer a more dignified caring process that well-informed patients prefer; even so, in many situations these technologies will not replace the standard of genuine human-to-human care.


[4] Nel Noddings, “The Language of Care Ethics,” Knowledge Quest 40, no. 5 (2012): 52–56, https://eric.ed.gov/?id=EJ989072

Autonomous Care Technologies Produce New Challenges

AI is likely to change norms around care in ways that could introduce new harms. If technology leads us to believe that challenges in caregiving can be solved through technical rather than social or economic solutions, for instance, we could increasingly absolve care practitioners, family members, and state service providers from their responsibilities towards care receivers. Replacing human judgment with AI systems may also lead to the loss of occupational knowledge in the caregiving field.


Another important ethical concern, particularly around smart homes and robot companions, is their invasive surveillance of patients in the intimate sphere of the home. Intuition Robotics’ social companion robot ElliQ,[5] for instance, allows relatives to monitor a senior family member living alone. Using an app, relatives can check in on their loved one, access a networked camera, and view data from sensors that track activity. Although the technology is intended for safety and companionship, it can place control of data in the hands of family members rather than the elderly person, raising questions of privacy and consent. Similarly, Amazon Echo’s Care Hub and Google’s Nest Hub let family members monitor an elderly relative’s activity feed.

In-home sensors and robots are on the rise, offering new ways to provide support and care, but also raising concerns about the negative impacts of pervasive surveillance. The ElliQ robot is shown here. Image from https://info.elliq.com/care-program


[5] https://elliq.com. This dilemma around the control of ElliQ’s user data was raised in the Coding Caring workshop held to support this report.

Caring AI Should Be Led by Social Values, Not the Market

The number of people around the world aged 80 or over will grow from 143 million in 2019 to 426 million in 2050, according to the UN.[6] By that date, one in four people in Europe and North America is projected to be aged 65 or over. Meanwhile, researchers predict a global shortage of caregivers. In Japan, the shortfall is predicted to reach 370,000 care workers by 2025,[7] while the EU’s anticipated shortfall is 4 million care workers by 2030.[8] This deficiency is in part due to the occupation’s relatively low social status, as caregivers typically receive low pay and are devalued compared to other healthcare professionals.[9] In this landscape, robots become a tempting option for addressing the widening gulf between care needs and services.

However, AI caring technologies should not be market-led technical fixes for these problems, which are largely economic, political, and cultural, and innovation and convenience through automation should not come at the expense of authentic care. Instead, regulators, developers, and funders should put resources into supporting better human care. To this end, there is an urgent need to slow down and subject care technologies to regulatory bodies and pre-market approval processes that can intervene in the technical designs and policies around AI care. Too often, oversight has been neglected during implementation; such was the case when medical facilities adopted electronic medical record systems with little input from doctors and nurses,[10] and it is likely to be the case with AI-based caregiving. While AI applications may seem inevitable, oversight can put ethical practices in place prior to real-world use.

Further, while many societies prize technological innovation, caregiving is too often stigmatized and left to the private sphere of women. Today, caregiving is also a racialized and class-based practice that remains invisible and underfunded. AI should address, rather than reinforce, these inequities. An ethics-of-care approach, in particular, directs us to consider how AI technologies can be part of economic structures that honor and support care work, rather than creating new forms of invisible labor through their maintenance.[11]

Finally, should we place certain demands on the role of the designer in the caring ecosystem? Is the engineer of a caring technology taking part in care work? Is the engineer properly placed to understand the context of use, and does the engineering process incorporate diverse voices? Does the design process involve the input of caregivers and care receivers? The development of any caregiving technology should incorporate the perspectives of care receivers and caregivers, while advocating for designs that are sensitive to cross-cultural differences and potential biases.


[6]  https://population.un.org/wpp/

[7]  https://www.theguardian.com/world/2018/feb/06/japan-robots-will-care-for-80-of-elderly-by-2020

[8] Jean-Pierre Michel and Fiona Ecarnot, “The shortage of skilled workers in Europe: its impact on geriatric medicine,” Eur Geriatr Med 11, 345–347 (2020).

[9] https://www.japantimes.co.jp/news/2020/06/20/national/media-national/essential-care-workers-coronavirus/; https://www.theguardian.com/society/2015/feb/09/care-workers-underpaid-resolution-foundation-minimum-wage

[10]  This point came up in the Coding Caring workshop held to support this report.

[11]  Jennifer Rhee, The Robotic Imaginary: The Human and the Price of Dehumanized Labor, University of Minnesota Press, 2018

Cite This Report

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report. Accessed: September 16, 2021.

Report Authors

AI100 Standing Committee and Study Panel 

Copyright

© 2021 by Stanford University. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report is made available under a Creative Commons Attribution-NoDerivatives 4.0 License (International): https://creativecommons.org/licenses/by-nd/4.0/.