The human-AI partnership in dementia care: navigating ethical frontiers
Casson, D. and Iglesias, A. (2025) ‘The human-AI partnership in dementia care: navigating ethical frontiers’, Journal of Dementia Care, 33(6), pp. 40-43.
Daniel Casson and colleagues consider the ethical issues arising from the use of artificial intelligence (AI) in dementia care.
The dementia care crisis has reached a tipping point.
Recent findings from the Alzheimer’s Society reveal a system buckling under pressure as diagnostic delays stretch for months, families struggle without adequate support, and care quality varies dramatically across the country. With over 944,000 people living with dementia in the UK and costs exceeding £42 billion annually, urgent action is essential.
AI emerges as a potential game changer. From accelerating early diagnosis to enabling people to live independently at home for longer, AI technologies offer transformative possibilities. Yet these advances come with profound ethical challenges that demand careful consideration.
The Oxford Initiative on Responsible AI in Adult Social Care, a unique sector-wide collaboration bringing together people who draw on care, care services, commissioners, staff, carers, and tech suppliers, provides crucial guidance. The initiative led to a shared definition of the responsible use of generative AI in social care, to ensure that “the use of AI systems supports and does not undermine, harm or unfairly breach fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing.”
The promise of AI as a diagnostic tool
AI’s potential to revolutionise dementia care is undeniable. Early diagnosis represents the most promising application. Professor Geoff Parker’s research at University College London, funded by the Alzheimer’s Society, is developing faster, more accurate MRI scans using machine learning, potentially making early diagnosis more cost effective and widely available.
Dr Riccardo Marioni’s research uses machine learning to analyse genetic profiles and lifestyle factors, potentially identifying high-risk individuals years before symptoms appear. Meanwhile, Dr Magdalena Jones at Imperial College London uses AI-based technology called Cognitron for patient assessments, providing more reliable monitoring than traditional questionnaires.
The promise of AI as care assistant
Many models are being developed for AI as an assistant or companion to the person living with dementia, their family carer or professional carers. These are very different scenarios, but each offers potential benefits: in every case, people can be informed in real time about how to access the information and guidance they need in a specific situation.
Summary
The UK’s dementia care system is under immense strain, with nearly a million people affected and costs exceeding £42 billion annually. Delays in diagnosis, inconsistent care quality, and lack of support for families have created a crisis. AI offers transformative potential, from early diagnosis to enabling independent living, but it also raises complex ethical concerns.
AI-driven tools such as genetic risk profiling and cognitive assessments promise faster, more accurate diagnosis.
AI can support routine tasks and provide guidance, freeing professionals to focus on relational care.
However, concerns about replacing human contact and ensuring meaningful consent require careful navigation.
Bias in training data risks under-diagnosis in minority groups, demanding inclusive design and diverse stakeholder involvement. Accountability remains a challenge, with unclear liability when AI systems err.
Regulatory bodies like the CQC stress the need for oversight, audit trails, and human decision-making. AI can also support the overstretched care workforce by streamlining operations and improving coordination. Yet cultural shifts are needed to embrace AI as a collaborative tool rather than a replacement for human interaction.
Responsible innovation hinges on co-design with carers and people living with dementia. Examples like Peru’s Ana chatbot and Dementia Australia’s “Talk with Ted” avatar show how inclusive design enhances trust and relevance. Ethical data handling and robust governance are vital to prevent misuse.
Ultimately, AI must be human-centred, equitable and compassionate. A growing coalition of organisations and tech firms is committing to ethical development; turning those pledges into practice remains the critical next step.
For example, for the person living with dementia, AI tools such as Sentai can give daily routine reminders, act as a companion and detect unusual patterns of behaviour. For family carers, an AI tool can help in responding to behaviours such as ‘sundowning’ or resistance to care, and help track a person’s mood. For professional carers there are training and decision-support tools (e.g. Carebrain and Access Evo) which can provide real-time guidance on best practice, safeguarding, or ethical dilemmas.
Yet these benefits come with risks, particularly around the substitution of human contact. Dementia care fundamentally relies on human engagement, empathy, and trust. The solution lies in positioning AI as a care assistant, rather than a care worker, in all these situations: it can handle routine tasks and give appropriate support whilst freeing health and social care practitioners to deliver the personal, relational care that only humans can provide.
Author Details

Daniel Casson is Managing Director of Casson Consulting, which encourages and guides innovation and transformation in adult social care. He is co-founder of the Oxford Project: The Responsible Use of Generative AI in Care. Alex Iglesias represents OneAdvanced, a software supplier launching care-specific AI solutions.
The authors would also like to acknowledge guidance received from Dr Caroline Green and Katie Thorn while writing this article. Dr Green is Director of Research at the Institute for Ethics in AI, University of Oxford and co-founder of the Oxford Project: The Responsible Use of Generative AI in Care. Katie Thorn is Project Lead at Digital Care Hub and co-founder of the Oxford Project: The Responsible Use of Generative AI in Care.
Navigating consent and autonomy
The challenge of obtaining meaningful consent from people with dementia represents one of the most complex ethical dilemmas. Research from BMC Medical Ethics highlights key concerns: can people with dementia meaningfully consent to AI-driven interventions? How do we balance privacy with surveillance?
The progressive nature of dementia complicates traditional consent models. As cognitive capacity diminishes, understanding and consenting to new technologies become increasingly compromised. Advance directives offer one solution: when people are still cognitively capable, they can express their wishes about future AI-assisted care.
Transparent AI design becomes crucial for ethical consent. Systems must be explainable to patients, families, and healthcare professionals. The FAIR approach developed by the Scottish Human Rights Commission provides guidance:
- Facts: establish the experience of the individual
- Analysis: what human rights are involved?
- Identification: what are the shared responsibilities?
- Review: what actions are needed?
Co-design with carers and people living with dementia can ensure that AI systems remain accountable. The Longitude Prize on Dementia, a £4.42 million award offered by the Alzheimer’s Society that drives the development of personalised, technology-based tools co-created with people in early-stage dementia, exemplifies this approach.
Addressing bias and ensuring fairness
AI learns from historical data, and when that data contains biases, these become amplified in AI systems. Poor training data could, for example, result in under-diagnosis for minority ethnic groups, or poor performance in non-standard care contexts. If AI diagnostic tools are primarily trained on white, middle-class populations, they may fail to recognise dementia symptoms in communities with diverse demographics.
The International Journal of Social Work Values and Ethics warns that algorithms risk amplifying existing biases, leading to discriminatory practices. Designing for equity requires involving diverse stakeholders and ensuring that training data represents all populations who will use these technologies.
Accountability and oversight
As AI systems become more sophisticated, questions of accountability become more complex. When AI recommendations prove incorrect, determining liability becomes a challenge. The Care Quality Commission (CQC) recognises AI’s potential whilst emphasising the need for responsible implementation, but current regulatory frameworks lack specific guidelines.
This gap highlights the need for AI audit trails, explainable systems, human oversight at critical decision points, and clear protocols for when systems malfunction.
Key Points
- Artificial intelligence (AI) is increasingly commonplace in dementia care and potentially brings great benefits. An essential focus should be on how AI is used in an ethical and responsible manner.
- Improved efficiency and responsiveness: AI can automate routine tasks, streamline workflows and enable faster decision-making, freeing up time for care workers.
- Predictive analytics for early intervention: AI models can identify patterns in health and care data to predict risks and intervene earlier, potentially preventing crises.
- Personalised care planning: AI can support tailored care by analysing individual needs, preferences, and outcomes, especially when integrated with real-time monitoring tools.
- Bias and under-representation: Poor training data can lead to under-diagnosis or misdiagnosis for minority ethnic groups and poor performance in non-standard care contexts.
- Consent and autonomy: Checks should be in place to ensure that AI systems do not operate without clear, informed consent from people who draw on care and support, especially in settings where digital literacy is low.
- Accountability and transparency: It is often unclear who is responsible when AI systems make errors or cause harm, and many models lack explainability.
- Oversight and regulation: There is a need for robust governance frameworks to ensure AI tools are safe, fair and aligned with care values.
- Responsible use and implementation: AI should augment and not replace human judgment. Its deployment must be guided by ethical principles and co-designed with care professionals and people who draw on care and support.
Supporting the care workforce
The dementia care workforce faces unprecedented challenges. OneAdvanced’s Care Trends Report 2025, produced in partnership with Care England, found that 48% of providers lack the tools, systems, or data insights needed to fully understand or monitor what is happening across their services in real time. A further 47% struggled to track resident movements in a care home setting, an important part of understanding the person and of good care planning.
AI offers relief without replacing human carers. Administrative AI can reduce cognitive load, streamline documentation, and provide support for clinical decision-making. Smart sensors monitor safety without constant surveillance, while AI-powered tools facilitate better co-ordination between teams and families. AI tools offer shared data dashboards, real-time care insights, and collaborative spaces for teams and families. They also provide secure, role-based access to care plans, alerts, and updates, which help everyone stay informed and consistent in their actions.
The cultural shift required is significant. Care professionals need to understand AI as a collaborative tool that enhances their ability to provide compassionate, person-centred care.
Building responsible innovation
Effective AI tools must reflect lived experiences. Co-design approaches involving family carers, people with dementia, clinicians, and care workers, are essential for creating ethically grounded solutions.
For example, in Peru, an AI-driven chatbot to support dementia carers was developed with strong involvement from carers, practitioners, and local stakeholders throughout its design. Their engagement shaped the chatbot’s language, tone, and accessibility: for example, it simplified medical terms and added voice interaction to support users with low literacy. This collaboration improved trust, usability, and cultural relevance. It ensured the product, called Ana, met real needs and reflected local contexts, ultimately enhancing its acceptance and impact among the target community.
Elsewhere, Dementia Australia’s “Talk with Ted” AI avatar was developed with care professionals’ input and provides immersive training in empathetic communication.
Responsible data-handling forms the foundation of ethical AI. Systems must prioritise privacy, consent, and contextual sensitivity. People receiving care must understand how data is collected, stored, and used. Robust governance frameworks prevent “mission creep” where care data ends up with insurance companies or other third parties.
Towards a compassionate AI future
The successful integration of AI into dementia care is both a technical challenge and a moral imperative. AI must be ethical by design, equitable in implementation, and always human-centred in philosophy.
The ultimate vision is a care system where AI handles routine tasks, freeing people to provide what only they can: dignity, empathy, and presence. By prioritising human-AI partnership, we can navigate the ethical frontiers of this technology and build a future where innovation serves compassion.
Encouragingly, more organisations are recognising the importance of an ethical approach. As part of the AI in social care project led by the Institute for Ethics in AI, Digital Care Hub and Casson Consulting, care providers, tech suppliers and a range of other stakeholders have issued a call to action on AI. In addition, almost 30 tech companies have signed the Tech Suppliers’ Pledge to develop and use technology ethically, with transparency, accountability, fairness, and respect for rights. The challenge now is to ensure those aspirations become reality.
Links
Digital Care Hub (2025) Oxford Project: The Responsible Use of Generative AI in Care. [online] Available at: https://www.digitalcarehub.co.uk/ai-and-robotics/oxford-project-the-responsible-use-of-generative-ai-in-social-care/
References

Alzheimer’s Society (2025) ‘Left in the dark’ – One in five people affected by dementia get no support: the true impact of dementia laid bare. [online] Available at: https://www.alzheimers.org.uk/news/2025-08-27/survey-report-true-impact-dementia [Accessed 24 Oct 2025].
Alzheimer’s Society (2023) Q&A: Geoff Parker on improving dementia diagnosis with MRI scans. [online] Available at: https://www.alzheimers.org.uk/get-support/publications-and-factsheets/dementia-together/researcher-geoff-parker-dementia-diagnosis-mri-scans [Accessed 24 Oct 2025].
Alzheimer’s Society (n.d.) Assessment process and tests. [online] Available at: https://www.alzheimers.org.uk/about-dementia/symptoms-and-diagnosis/diagnosis/assessment-process-tests [Accessed 24 Oct 2025].
Care England (2022) Homepage. [online] Available at: https://www.careengland.org.uk/ [Accessed 24 Oct 2025].
Centre for Care (2022) Technology in social care report. [online] Available at: https://centreforcare.ac.uk/wp-content/uploads/2022/12/Technology-in-social-care-report-Dec-2022_FINAL.html [Accessed 24 Oct 2025].
Dementia Australia (2021) World-first AI Avatar in dementia education set to improve care. [online] Available at: https://www.dementia.org.au/media-centre/media-releases/world-first-ai-avatar-dementia-education-set-improve-care [Accessed 24 Oct 2025].
Dementia Australia (2025) Exploring the ethics of technology and AI in dementia care. [online] Available at: https://www.dementia.org.au/news/exploring-ethics-technology-and-ai-dementia-care [Accessed 24 Oct 2025].
Digital Care Hub (2025) Ensuring the responsible use of Generative AI in social care: A collaborative call to action – Digital Care Hub. [online] Available at: https://www.digitalcarehub.co.uk/ai-and-robotics/oxford-project-the-responsible-use-of-generative-ai-in-social-care/ensuring-the-responsible-use-of-generative-ai-in-social-care-a-collaborative-call-to-action/ [Accessed 24 Oct 2025].
Digital Care Hub (2025) Pledge by tech suppliers on the responsible use of AI in social care – Digital Care Hub. [online] Available at: https://www.digitalcarehub.co.uk/ai-and-robotics/oxford-project-the-responsible-use-of-generative-ai-in-social-care/pledge-by-tech-suppliers-on-the-responsible-use-of-ai-in-social-care/ [Accessed 24 Oct 2025].
Scottish Human Rights Commission (n.d.) The FAIR approach – Equality & Human Rights Impact Assessment. [online] Available at: https://eqhria.scottishhumanrights.com/eqhriatrainingfair.html [Accessed 24 Oct 2025].
Espinoza, F., Cook, D., Butler, C.R. and Calvo, R.A. (2023) ‘Supporting dementia caregivers in Peru through chatbots: generative AI vs structured conversations’, Electronic Workshops in Computing. [online] doi:https://doi.org/10.14236/ewic/bcshci2023.11.
Hiltz, B. (2025) ‘Editorial: Embracing AI in Social Work: Why Ethical Concerns Should Drive Integration, not Avoidance’, International Journal of Social Work Values and Ethics, 22(1), pp.5–9. doi:https://doi.org/10.55521/10-022-102
Imperial College London (2025) Discovery. [online] Available at: https://profiles.imperial.ac.uk/m.sastre [Accessed 24 Oct 2025].
Imperial College London (2017) Test your mental skills with an Artificial Intelligence tool called Cognitron. [online] Imperial News. Available at: https://www.imperial.ac.uk/news/181546/test-your-mental-skills-with-artificial/ [Accessed 24 Oct 2025].
Marko, J.G.O., Neagu, C.D. and Anand, P.B. (2025) ‘Examining inclusivity: the use of AI and diverse populations in health and social care: a systematic review’, BMC Medical Informatics and Decision Making, [online] 25(1). doi:https://doi.org/10.1186/s12911-025-02884-1.
OneAdvanced (2025) OneAdvanced Care Trends Report 2025. [online] Available at: https://www.oneadvanced.com/trends-reports/care/ [Accessed 24 Oct 2025].
OneAdvanced (2025) OneAdvanced AI for UK social care. [online] Available at: https://www.oneadvanced.com/ai/social-care/ [Accessed 24 Oct 2025].
Institute for Ethics in AI, University of Oxford (2024) Oxford Statement on the responsible use of generative AI in Adult Social Care. [online] Available at: https://www.oxford-aiethics.ox.ac.uk/oxford-statement-responsible-use-generative-ai-adult-social-care [Accessed 24 Oct 2025].
Sedini, C., Biotto, M., Crespi Bel’skij, L.M., Moroni Grandini, R.E. and Cesari, M. (2021) ‘Advance care planning and advance directives: An overview of the main critical issues’, Aging Clinical and Experimental Research, [online] 34(2), pp.325–330. doi:https://doi.org/10.1007/s40520-021-02001-y.
Seibert, K., Domhoff, D., Fürstenau, D., Bießmann, F., Schulte-Althoff, M. and Wolf-Ostermann, K. (2023) ‘Exploring needs and challenges for AI in nursing care – results of an explorative sequential mixed methods study’, BMC Digital Health, 1(1). doi:https://doi.org/10.1186/s44247-023-00015-2.
Sentai (2025) About Sentai – Supporting Independence Through Voice Technology. [online] Available at: https://sentai.co.uk/pages/about-us [Accessed 24 Oct 2025].
The Access Group (2024) Access Evo: AI enabled software boosts productivity. [online] Available at: https://www.theaccessgroup.com/en-gb/evo/ [Accessed 24 Oct 2025].
