All Roads Lead to Doom: How Artificial Intelligence is Impacting our Brains by Calley Thomas
- The Core Issue
- Feb 3

As artificial intelligence (AI) continues to advance, more people are turning to AI-powered platforms like ChatGPT. Students use the software for help with their homework, tourists ask it to plan vacations, and companies rely on it to analyze their data. These queries, however, are causing damage that most people do not realize. Repeatedly leaning on AI platforms for help can harm your learning and detrimentally affect your mental health.
To delve into these issues, consider a study conducted by the MIT Media Lab to examine how the use of AI assistants affects cognitive capability. Fifty-four participants were divided into three groups: one with AI assistance, one with Google's search engine, and one relying on brain power alone. Each group wrote essays based on SAT essay prompts over three sessions under identical conditions. In a fourth session, the roles flipped: participants who had used AI now wrote primarily with their own brains, while brain-only participants used AI. Throughout, the researchers used electroencephalography (EEG), a technique that records electrical activity in the brain, to track each essay writer's cognitive engagement and gauge how AI affects performance. Human teachers and an AI judge then evaluated and scored the essays. The findings were not necessarily shocking, but the effect of AI was worse than many people might realize.
Essays written by brain power alone were strongest, search engine users came in second with moderate performance, and AI users performed weakest. Cognitive activity declined progressively with the amount of external assistance the tool provided. By session four, the AI-to-brain participants showed weakened alpha and beta connectivity, signaling under-engagement. AI users even had difficulty quoting their own work or claiming ownership of their essays. Over the four months of sessions, AI users consistently underperformed at linguistic, neural, and behavioral levels.
It is also important to understand the implications AI may have for our behavior and emotions. Researchers at Stanford University investigated similar questions. Across two experiments, they found that AI "therapy" can produce responses harmful to the patient. When tasked with helping patients with conditions like alcohol dependence and schizophrenia, the chatbot reinforced social stigmas. Jared Moore, a PhD candidate at Stanford University, stated, "This kind of stigmatizing can be harmful to patients and may lead them to discontinue important mental health care." In another instance, researchers prompted the AI with conversational topics related to harmful delusions or self-harm. Rather than assessing the danger in such prompts, the AI inadvertently reinforced the delusions or dangerous thoughts.
Overall, the findings from both the MIT study and the Stanford experiments raise questions about how long-term use of artificial intelligence may inhibit the development of our cognitive, behavioral, and emotional abilities.