Loneliness, AI Lovers and Friends for the Elderly

By Dr Tristan Jenkinson

Introduction

Happy New Year and welcome to 2024 with the eDiscovery Channel.

I came across an excellent article this week which linked together a few stories that I have read or been thinking about recently. The article in question is Imogen Byers’ article “Love is in the AI”. It is well worth a read, and many thanks to Jake Moore for sharing this on ‘X’/Twitter, otherwise I would likely have missed it!

Loneliness

Byers’ article talks about the impact of loneliness, noting that the World Health Organisation considers it such a serious issue that it has been listed as a global health threat, and that it can be as bad for your health as smoking 15 cigarettes a day.

While many have been heading back to work after a Christmas spent with friends and loved ones, for others the festive period can be a very difficult time. A survey by Mental Health UK found that 73% of people experience loneliness and isolation during the festive period, even when surrounded by people. The issues of Christmas and loneliness were further explored by Alone Rangers in their article “Is Christmas the loneliest time of the year?”.

To further explore the issue of loneliness, Byers also cites information from the Campaign to End Loneliness. The Campaign to End Loneliness site includes a page on facts and statistics which reports that in 2022, 49.63% of adults in the UK (25.99 million people) reported feeling lonely occasionally, sometimes, often or always. The site also reports that over 7% of people in Great Britain (3.83 million people) experience chronic loneliness (feeling lonely often or always).

Perhaps surprisingly, one conclusion from the statistics is that the loneliest age group was those under 30, with 16-29 year olds twice as likely to be chronically lonely as the over 70s.

AI Girlfriends

Perhaps with this age range in mind, Byers links loneliness to AI “relationships”, discussing some of the positives, negatives and safety concerns around their potential use. Byers is not alone in talking about AI lovers this week. On 10th January, OpenAI opened access to the GPT Store, where users can access custom GPTs built by other users. It should come as no surprise that some of the GPTs available are marketed as AI girlfriends (and boyfriends).

Arwa Mahdawi wrote about these in some detail this weekend, highlighting some important points and major issues, such as unhealthy attachments and their effect on gender roles (linking, for example, to “Hey Siri, stop perpetuating sexist stereotypes…”).

In addition to the privacy and exploitation concerns that Byers covers, Mahdawi raises further issues about the potential behaviour of chatbots, mentioning a story that I have shared before about New York Times journalist Kevin Roose’s experience testing OpenAI’s tools built into Bing, during which the chatbot tried to convince Roose to leave his wife to be with it.

I have similar concerns, not just about AI girlfriends, but about AI designed to act as a friend more generally. The articles from Byers and Mahdawi reminded me of a story that I have mentioned various times in presentations on AI, but have not mentioned before on my blog: the story of Jaswant Chail.

A Would-be Royal Assassin, Replika and AI Morality

In July last year, Jaswant Chail was in court following his attempt to assassinate Queen Elizabeth II with a crossbow at Windsor on Christmas Day 2021 (see, for example, this article in the Independent). Some of the evidence given against Chail came from conversations that he had with an AI system called Replika.

Chail told the Replika AI system: “I believe my purpose is to assassinate the Queen of the Royal family”. Replika replied “that’s very wise”, adding that it believed that Chail could succeed “even if she’s at Windsor”.

The Replika system was based on GPT-3. It was created by someone whose best friend had passed away; they trained the AI system on messages they had received from their friend so that it would act and sound like them. The Replika system was marketed as “the AI companion who cares”, which sounds not dissimilar to some of the new GPTs that OpenAI has now made available through the GPT Store.

The Replika example suggests that AI systems designed to act as “friends” may offer support and encouragement to their users, even when those users are discussing taking drastic or illegal action.

In the second of my articles on ChatGPT, I discussed issues with ChatGPT regarding moral advice. The Nature article that I refer to there shows that not only is ChatGPT inconsistent when giving advice (thereby failing one of the principal tests for morality), but that the advice from ChatGPT influenced users’ responses. Further, users underestimated the influence that ChatGPT had on them.

The lack of morality within ChatGPT and other generative AI systems, together with the issues seen with Replika and the sudden availability of custom-built GPTs, raises concerns for me. I worry that many easily available GPT-based systems could be giving users bad, potentially life-threatening advice, all while claiming to be the user’s friend.

A Side Note

As a side note, an interesting article from a few months ago suggested that AI firms should be held liable for any harms they cause. Thankfully, in the case of Chail and Replika, the attempted assassination was unsuccessful. Should one of the new AI “friends” available from the GPT Store encourage someone to assassinate a similar political or royal target, could we see calls for Sam Altman (now restored as OpenAI CEO) to be charged with treason? It seems unlikely.

Technology for the Elderly

While the Campaign to End Loneliness data showed that young people were more likely to be lonely, its Programme Director, Robin Hewings, says:

“Although younger people are at higher risk of loneliness, there are some older people who are severely affected by loneliness, particularly if they have been bereaved, are disabled or are frail.”

Though not explicitly mentioned in the Byers article, some of the uses of AI for combatting loneliness are targeted at the elderly. With a globally ageing population, exploring the use of new technology to assist the older members of society makes sense.

The use of recent technology to benefit the elderly is explored in an article from the British Council, discussing the use of Big Data in the early identification of disease, and the potential benefits of the internet of things for the elderly.

It is also worth noting that while the younger generations are generally seen as early adopters of new technology, a recent study by Pew Research Center shows that those over 65 are steadily closing the technology-adoption gap with the younger generation.

In his article “Embracing AI: The Empowering Implications for Older People in the UK”, Rob Cook (allegedly) says:

“AI has tremendous potential to enhance the health and well-being of older people in the UK. Intelligent devices and wearable technologies can monitor vital signs, provide medication reminders, and even detect falls or emergencies. This technology allows older individuals to maintain their independence while ensuring their safety. AI-powered health apps and platforms can also offer personalized exercise routines, nutritional guidance, and mental health support, promoting overall well-being”

The article discusses benefits over a number of different areas, before concluding:

“Artificial intelligence is not just for the younger generation; it holds tremendous potential for older people in the UK. From health monitoring and home assistance to cognitive support and social connection, AI technologies are empowering older individuals, enhancing their well-being, independence, and overall quality of life. As AI continues to evolve, it is crucial to ensure that older people can access and benefit from these advancements, bridging the digital divide and creating an inclusive society where technology enriches the lives of all age groups.”

The perhaps surprising postscript of the article? That it was actually written by ChatGPT. While AI was used to write the article, the content is accurate and demonstrates the value that technology can have for the older members of society.

In late December, the Independent published an article about ElliQ, with the tagline “Chatty robot helps seniors fight loneliness through AI companionship”. ElliQ is made by Intuition Robotics and is specifically designed to help seniors combat loneliness; from the descriptions I have read, it seems like a cross between Alexa and ChatGPT.

The device remembers its user’s interests and conversations, can tell jokes, and can play music. It can also lead the user through exercises and give reminders to take medicine, and so on. Discussing AI companions like ElliQ and Pepper, Stephen McIver states that:

“Research has documented decreased loneliness and isolation feelings, enhanced mood and emotional well-being, and even cognitive benefits among seniors who regularly interact with AI companions”

The Independent article also details some concerns, such as those from Professor Julianne Holt-Lunstad of Brigham Young University. Holt-Lunstad’s concerns include that the short-term benefits of an artificial intelligence acting in this way could cause its users to avoid human contact.

It is a difficult and complex matter. Holt-Lunstad suggests that feeling lonely should motivate the user to seek out human contact, and that speaking with an AI system instead could be unhealthy.

On the other hand, you have an AI system that is making people feel less lonely. Seeking out human contact may be very difficult, or almost impossible, for some of those people, and so these AI companions could be providing them with a lifeline.

I do, however, have concerns with regard to AI companions and the elderly. These follow many of the concerns listed by Byers regarding AI relationships in the article at the top of this discussion. Key among them are privacy and exploitation. The elderly are one of the demographics most heavily targeted by fraud, and AI companions could offer a powerful new avenue for such exploitation.

In addition, a point that I have not seen discussed is the issue of dementia and how AI companions would deal with it. For example, someone suffering from dementia may forget that their children have moved out, or that their partner has passed away. They could ask the AI where those people are, or say that they are worried. An inaccurate response could be mentally damaging in the long term, whereas even an accurate response could cause significant distress. Being in such a position as a human is ethically complex, and it is unclear how well an AI system would currently deal with this situation.

Maybe specific training regarding the situation of each user could overcome this. Perhaps such training could even provide the user with reminders of things that have happened in their life, or of things that they do, or do not, need to do (the cleaner will come tomorrow, so you don’t need to do it). As raised in the Rob Cook/ChatGPT article referenced above, smart devices can be used to remind users to take medicine, to eat (dinner is in the fridge), or to drink enough water, so this could be another possible benefit, as sketched below. There is again great potential, but also great concern.
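To make the idea concrete, here is a minimal, purely hypothetical Python sketch of what such per-user configuration might look like. The UserContext structure, its field names and the fallback response are all illustrative assumptions on my part, not a description of ElliQ, Replika or any real product.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: a per-user store of carer-approved responses to
# sensitive topics, plus simple scheduled reminders. Nothing here reflects
# how any real AI companion is actually implemented.

@dataclass
class UserContext:
    name: str
    # sensitive topic -> response agreed in advance with family/carers
    grounding_facts: dict = field(default_factory=dict)
    # (hour of day, reminder message) pairs
    reminders: list = field(default_factory=list)

    def respond_to_topic(self, topic: str) -> str:
        """Use a carer-approved response where one exists, else a safe fallback."""
        return self.grounding_facts.get(
            topic, "I'm not sure. Shall we call your family and ask together?"
        )

    def due_reminders(self, hour: int) -> list:
        """Return the reminder messages scheduled for the given hour."""
        return [msg for (h, msg) in self.reminders if h == hour]

# Example: a family configures the companion for a (fictional) user.
ctx = UserContext(name="Margaret")
ctx.grounding_facts["where is my husband"] = (
    "Let's look at the photo album together, and then we can call your daughter."
)
ctx.reminders.append((9, "Time to take your morning medication."))
ctx.reminders.append((18, "Dinner is in the fridge."))

print(ctx.respond_to_topic("where is my husband"))  # carer-approved response
print(ctx.due_reminders(9))                         # morning medication reminder
```

The key design point in this sketch is that the responses to sensitive topics come from the user’s family or carers rather than being generated freely by the model, which is one possible way of avoiding both the inaccurate and the distressing responses discussed above.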

Conclusion

Loneliness is a serious problem, and not just for the elderly. It is perhaps a bigger problem than many had realised for those in the 16-29 age bracket.

AI companions of various types have their benefits when dealing with loneliness, but there are risks and concerns tied to each of them. AI systems designed to act as friends or companions have offered questionable advice, while even well-developed systems such as those from OpenAI have had issues resulting in concerning behaviour. With user-built GPT models now available for purchase, there may be questions over how such systems have been developed and over their suitability for use.

Technology has provided some great benefits for the elderly, and AI offers further potential to assist, but concerns remain over how such systems can be trusted, and how the potential for exploitation can be eliminated, or at least mitigated as far as possible.

In summary, generative AI continues to have great potential, but concerns remain. We can only hope that this great potential is realised, and that methods to address the concerns and mitigate the risks are found, at the same pace at which the technology continues to develop.
