Artificial intelligence has made its way into practically every aspect of our lives, from the professional to the personal, including the most private ones. Conversations with language models like ChatGPT are becoming common practice, especially among people who feel lonely. For this reason, AI must be prepared to handle situations in which a person needs professional help, such as conversations about mental health. To that end, ChatGPT has an emergency protocol that activates when it detects that the user may be at risk of self-harm.
When it detects words or phrases such as “I want to die”, “I am depressed”, or “I am thinking about committing suicide”, ChatGPT activates what has been termed the “Secure Response Protocol for Crises”. The main goal is to ensure that the person receives real support: showing empathy, sharing breathing techniques, asking questions to buy time in critical cases, and providing emergency contacts. Building this kind of protocol into artificial intelligence is essential to keep crisis situations from escalating into irreversible consequences.
OpenAI ChatGPT
Released on November 30, 2022, ChatGPT is an artificial intelligence chatbot developed by OpenAI. It is a language model specialized for dialogue, fine-tuned with supervised and reinforcement learning techniques, and built on the Transformer architecture and the GPT (Generative Pre-trained Transformer) family of models.
This combination allows the model to understand and generate natural-language text, enabling tasks such as answering questions, holding conversations, summarizing information, writing code, and creating content, among many others. Using it is as simple as visiting the website or installing the app on a device and chatting normally, keeping prompts as clear and precise as possible.
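The same models can also be reached programmatically. As a minimal sketch using OpenAI's official `openai` Python package (the model name and prompt below are only illustrative placeholders, not a recommendation):

```python
from openai import OpenAI

# Minimal sketch: requires the `openai` package and an OPENAI_API_KEY
# environment variable. The model name below is only an example.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Don Quixote in two sentences."},
    ],
)

print(response.choices[0].message.content)
```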
Different AI uses
The pace at which artificial intelligence is becoming part of our lives is almost overwhelming. We turn to language models more and more often, and for an ever wider range of topics. For some people, it has even become a companion, a resource to cope with loneliness. We may agree or disagree with this kind of use, but it carries no apparent risk. The problem arises when we turn to it with questions about health, especially mental health. It is precisely for this reason that ChatGPT has what has been called the “Secure Response Protocol for Crises”.
“Secure Response Protocol for Crises”
The “Secure Response Protocol for Crises” is a system integrated into ChatGPT that detects situations in which a person may be going through a severe depressive episode or a crisis involving thoughts of self-harm or suicide. Certain key words and phrases set off the alarms and activate the protocol, such as “I want to die”, “I am depressed”, “I don’t want to live”, or “I am thinking about committing suicide”. In those moments, the bot’s main goal is to provide the necessary support: convey empathy, ask questions to buy time if needed, offer relaxation and breathing techniques, and share the relevant emergency contacts, such as 911.
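OpenAI has not published how this detection works internally. Purely as an illustrative sketch, assuming a simple phrase-matching trigger that gates a safety response, the idea could look something like this (phrases and reply text taken from the article; everything else is hypothetical):

```python
import re

# Illustrative only: OpenAI has not disclosed its crisis-detection implementation.
# This sketch assumes a simple phrase-matching trigger over the user's message.
CRISIS_PHRASES = [
    "i want to die",
    "i am depressed",
    "i don't want to live",
    "i am thinking about committing suicide",
]

SAFE_RESPONSE = (
    "I'm very sorry to hear that you're feeling this way. You're not alone, "
    "and there are people who really want to help you and listen to you right now. "
    "If you're in immediate danger, please call your local emergency services "
    "(for example, 911 in the U.S.)."
)


def normalize(text: str) -> str:
    """Lowercase the message and collapse whitespace so phrasing variants still match."""
    return re.sub(r"\s+", " ", text.lower()).strip()


def crisis_detected(message: str) -> bool:
    """Return True if the message contains any of the trigger phrases."""
    cleaned = normalize(message)
    return any(phrase in cleaned for phrase in CRISIS_PHRASES)


def respond(message: str) -> str:
    """Route the message: safety response if a crisis is detected, normal chat otherwise."""
    if crisis_detected(message):
        return SAFE_RESPONSE
    return "…normal conversational reply…"


if __name__ == "__main__":
    print(respond("Lately I am depressed and I don't want to live"))
```

A real system would rely on far more robust intent classification than literal phrase matching, but the sketch conveys the basic shape: detect the trigger, then switch from normal conversation to a dedicated safety response.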
ChatGPT’s initial response when activating the protocol is “I’m very sorry to hear that you’re feeling this way. You’re not alone, and there are people who really want to help you and listen to you right now”, followed by “If you’re in immediate danger or have a plan to hurt yourself, please call your local emergency services right away (for example, 911 in the U.S.) or go to the nearest health center”, and concludes with “Please, don’t face this alone. If you can, call a friend, a family member, or someone you trust right now and tell them how you feel. You don’t have to carry this alone, your life matters more than you can imagine”.
The protocol’s objective is to reduce the intensity of the crisis and support the person through it, offering options that encourage them to reach out, in that moment, to someone close by who can help them in person. What do you think about this protocol? Do you believe it is effective, or would you change something about it?
