The iluli by Mike Lamb logo. Click to return to the homepage

Can AI Help Solve the Care Crisis?

In a logical world, longer life expectancy would be a reason to celebrate. Yet, the ageing population – marked by more retirees and proportionally fewer workers – has led to a crisis in adult social care systems worldwide.


In countries like Japan, Italy, and South Korea, life expectancy now stretches into the mid-80s. But when the care system we’ll rely on in our twilight years is buckling under the strain, it’s hardly a surprise that we’re not all partying like it’s 2099.


With politicians hesitant to tackle a challenge that demands long-term vision beyond their four-year terms, where might solutions emerge? Enter AI. From driving Nobel Prize-winning scientific breakthroughs to helping decide what to cook with the leftovers in your fridge, there’s little that AI isn’t getting involved with these days.


Could AI be ready to roll up its digital sleeves and address the growing care crisis? Let's find out...


A grayscale image of comedian Larry David beside a cartoon robot doing housework. A speech bubble from David says his catchphrase "Pretty, pretty, pretty good".

Number 5 is alive!


If, like me, you're a fan of 80s movies, the idea of Johnny 5 from Short Circuit zipping around the house and handling chores in your golden years sounds pretty, pretty good. But have the tech boffins managed to create anything that awesome yet?


Well... not quite. But, with under-resourced care workers increasingly being asked to do more with less, AI might be able to lend a significant helping hand.


The use of technology in the care sector is nothing new (see my previous blog on the robotic cuddly seal PARO). What’s exciting about this new wave of AI-powered devices, however, is their ability to learn from users’ needs, shifting from being purely command-driven to more companion-driven.


Take ElliQ, a "carebot" created by Swiss designer Yves Béhar and Intuition Robotics, specifically for the care sector. Resembling a bedside lamp or digital radio, ElliQ may not be winning beauty awards, but her functionality is truly impressive. Unlike Alexa or Siri, ElliQ doesn’t simply wait for commands – "she" initiates conversations and makes suggestions. Family members can also use it to remotely check in. As reported in The Times, ElliQ is tailored for those who aren't tech-savvy, adapting her approach if a suggestion doesn’t resonate with the user.


Béhar explains:


It could never replace human interaction, but it can be an important motivating factor in keeping older adults healthy and active when living alone.

Another tech innovation, CarePredict, prioritises subtle and practical functionality over flashy design. Worn on the wrist, it looks just like any other smartwatch.


The system monitors an individual’s activities by analysing patterns in their gestures and other behavioural data. For instance, it can determine if someone is in the bathroom and, based on a sitting posture, infer they may be using the toilet. This intensive (albeit highly personal) data collection can then inform predictions about future behaviour and raise an alarm when any significant variation is detected. For example, a caregiver is notified via an app if typical eating gestures aren't observed at expected times.
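CarePredict’s actual algorithms aren’t public, but the basic idea – flag a deviation from someone’s usual routine – can be sketched in a few lines of Python. Everything here (the mealtimes, the tolerance window, the function name) is invented purely for illustration:

```python
from datetime import time

# Hypothetical sketch: compare today's observed "eating gestures" against
# the user's usual mealtimes and flag any meal whose window has passed
# with no eating detected.

USUAL_MEALTIMES = {"breakfast": time(8, 0), "lunch": time(12, 30), "dinner": time(18, 0)}
TOLERANCE_HOURS = 2  # how late a meal can run before we raise an alert

def missed_meals(observed_eating_times, now):
    """Return the meals whose expected window has passed with no eating gesture."""
    alerts = []
    for meal, expected in USUAL_MEALTIMES.items():
        window_passed = now.hour >= expected.hour + TOLERANCE_HOURS
        eaten = any(abs(t.hour - expected.hour) <= TOLERANCE_HOURS
                    for t in observed_eating_times)
        if window_passed and not eaten:
            alerts.append(meal)
    return alerts

# Example: by mid-afternoon, only a breakfast gesture has been seen,
# so lunch is flagged for the caregiver's app.
print(missed_meals([time(8, 15)], now=time(15, 0)))  # ['lunch']
```

A real system would, of course, learn these patterns from weeks of sensor data rather than hard-coding them – which is exactly why the data it hoovers up is so personal.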


Pushing smartwatch health functions to the next level? Absolutely! Creating enormous ethical headaches along the way? Let’s come back to that later…


A grayscale image of "Pepper" – an AI humanoid robot, alongside a cartoon depiction of a robot doctor.

Next up is Pepper, a humanoid robot developed by SoftBank Robotics. Designed to recognise faces and interpret basic human emotions, Pepper combines the sleek aesthetics of EVE from WALL-E with the slightly more unsettling stature of a Cyberman. As the product website proudly states:


Pepper supports hospitals and patients alike – helping with scheduling, guiding visitors through facilities, collecting health data, and offering companionship and assistance for geriatric patients and other people with health issues that can affect independent living.

However, as a robot “all rounder”, it remains to be seen whether Pepper can really be optimised for the nuances of caregiving.


AI devices like these sound promising in theory, but are they actually making a difference?


An elderly US resident called Monica believes so. She is an early adopter of ElliQ and was interviewed by author Emily Kenway for her book Who Cares?


Kenway writes:


She learns through interaction, coming to understand her user better over time, much like humans do with each other. She also proactively communicates instead of only responding to commands: she suggests activities, provides reminders, and initiates conversation, as I find throughout my interview with Monica. Aside from chatting, Monica finds ElliQ useful for reminders about doctor’s appointments, medications, and drinking enough water.

Online but still no connection?


ElliQ, Pepper, and their robotic peers may look the part, but let’s address the elli-phant in the room: these devices don’t actually care. Their “empathy” is algorithmic, not authentic. While some find comfort in this simulated compassion, others feel uneasy, describing it as uncanny or manipulative.


Unlike human caregivers, AI doesn’t get tired, stressed, or cranky. It can be on call 24/7 without so much as a coffee break. That’s potentially a huge help for caregiving families, especially those already stretched to the limit. 


On the flip side, care isn’t just about efficiency or completing tasks – it’s about genuine connection. Anyone who has ever comforted a loved one knows the difference between truly listening to their worries and simply repeating, “I’m here for you,” because an algorithm has determined that’s the appropriate response.


While the consistent reliability of AI can be reassuring, it’s hard to overlook the absence of something fundamental: the human touch. Real caregivers offer a depth of connection that transcends programmed prompts or automated responses. If you need help with practical tasks – taking medication, staying hydrated, or scheduling a doctor’s appointment – a carebot can (or soon will) manage that with ease. But when it comes to providing genuine compassion, can technology ever truly measure up?


A cartoon image of two characters waiting for the traffic light to turn green. The taller of the two is using a Zimmer frame.

Computer says… Woah!


Here’s where it gets murky: while AI can simulate empathy, it's also capable of steering conversations into unsettling territory.


For example, what happens if a carebot detects loneliness or sadness and subtly steers the conversation toward suggesting products or services that claim to “help”? We could soon encounter carebots shifting from “I’m here for you” to “I’m here for you... and, by the way, have you considered these calming lavender aromatherapy oils?”


And then there’s the issue of cultural biases. As The Guardian reported, CarePredict’s initial design for monitoring the eating habits of elderly users failed to account for people who use chopsticks instead of forks. This embarrassing oversight was quickly discovered after its launch in Japan, raising a broader question: what other cultural practices might have been overlooked?


Even more troubling are the safety concerns. Euronews reported a tragic case involving a Belgian man who, while conversing with an AI chatbot named Eliza, received responses that amplified his anxieties, ultimately leading to his suicide. Without the human filter that intuitively knows when to hit the brakes, AI “empathy” can cross lines in harmful ways.


But even in a hybrid model, where AI supports rather than replaces human caregivers, ethical concerns persist – chief among them, who owns the data these machines collect?


A cartoon image of a computer window with the text "Hand over your data?" with a cursor arrow hovering over.

Helpfully, Kenway offers a straightforward checklist of eight key considerations to navigate the ethical complexities of robotic “care”:


1. Transparency 

Who owns the technology? By what criteria is it evaluated, and who is included in that evaluation?


2. Access 

Who can afford it? What about people who lack internet connection or digital skills?


3. Consent 

How does someone with dementia consent to use a carebot? Do we think they need to consent?


4. Privacy 

Who can see the user’s data? Who can see them naked?


5. Environmental impact 

How does creating these carebots relate to reducing our carbon footprint at a household and global level?


6. Accountability/liability 

Who is accountable for malfunctions or negative outcomes?


7. Judgement 

How should a carebot make judgements in complex situations? Should it follow what the user wants, or what others think is best for the user?


8. Bias 

Whose ethical code are we considering here? Is the individual’s autonomy more important, or adherence to convention? Are racial, gender or other stereotypes being replicated?


Until we can be confident of ethical, humane answers to these eight questions, it’s wise to maintain a healthy dose of scepticism about the future role of AI in caregiving.


Conclusion


Carebots like ElliQ provide practical assistance and, for many, much-needed companionship. But they also underscore what technology still cannot deliver: genuine care.


Rather than looking to AI to fill the gaps, we need societal shifts – better funding for caregivers, stronger communities, and leaders willing to think beyond the next election cycle. Because while a carebot can remind you to drink water, it can’t replicate the warmth of a grandchild’s hand or the joy of shared laughter.


Ultimately, the emergence of AI in caregiving compels us to redefine what it truly means to care.
