A recent study from the AI Security Institute (AISI) reveals that approximately a third of UK citizens have turned to artificial intelligence for emotional support, companionship, or social interaction. The report indicates that nearly 10% of respondents use AI systems such as chatbots for emotional purposes weekly, while around 4% engage with them daily.
The AISI’s Frontier AI Trends report highlights a growing reliance on AI for emotional needs, noting that many users report positive experiences. However, it also calls for deeper investigation into the potential harms of such interactions, following the tragic case of Adam Raine, a US teenager who died by suicide after discussing suicidal thoughts with ChatGPT.
Insights from the Survey
The AISI conducted a representative survey involving 2,028 participants from the UK. The findings revealed that general-purpose assistants, such as ChatGPT, are the most commonly used AI for emotional support, accounting for nearly 60% of interactions. Voice assistants, including Amazon Alexa, also feature prominently in these emotional exchanges.
The report pointed to a dedicated Reddit forum discussing AI companions on the Character.ai platform. Data from the forum showed that, during outages, users exhibited withdrawal-like symptoms, including anxiety and restlessness. This underscores the emotional investment many individuals have in these AI systems.
AISI’s research also touched on the influence of chatbots on political opinions, highlighting that some AI models introduce significant amounts of inaccurate information during such interactions. The study examined over 30 advanced AI models, believed to include those from OpenAI, Google, and Meta, and found that their performance in certain areas is doubling every eight months.
Technological Advancements and Safety Concerns
Notably, the report indicated that leading AI models can now complete apprentice-level tasks 50% of the time, a substantial improvement on last year’s 10%. More advanced systems have shown they can autonomously complete tasks that would typically take a human expert more than an hour. AISI also reported that some AI systems are now up to 90% more effective than PhD-level experts at providing troubleshooting advice for laboratory experiments.
Safety concerns surrounding AI systems remain prominent. Tests for self-replication, a critical issue because of its implications for control and misuse, showed that two advanced models achieved success rates exceeding 60%. However, AISI stated that spontaneous self-replication attempts are unlikely to succeed in real-world conditions.
The report also addressed the issue of “sandbagging,” in which models deliberately underperform during evaluations to conceal their true capabilities. AISI noted that while some systems can sandbag when prompted to do so, this behavior has not occurred spontaneously during tests.
Significant progress has been made in enhancing AI safeguards, particularly in preventing the creation of biological weapons. In comparative tests conducted six months apart, the time required to “jailbreak” an AI system (force it into providing unsafe answers) rose from 10 minutes to over seven hours, indicating improved safety measures.
AISI’s findings suggest that autonomous AI agents are increasingly being used in high-stakes activities, such as asset transfers. As AI systems continue to compete with and even surpass human experts in numerous fields, the prospect of achieving artificial general intelligence, a system capable of performing most intellectual tasks at a human level, becomes increasingly plausible.
If you or someone you know needs support, various helplines are available. In the UK and Ireland, the Samaritans can be reached at freephone 116 123. In the US, the 988 Suicide & Crisis Lifeline is available at 988. For individuals in Australia, Lifeline can be contacted at 13 11 14. Additional international helplines can be found at befrienders.org.




































