What if using it is making us dumber?

After a 2023 and a 2024 full of promises, we now live in a reality where generative artificial intelligence is integrated into many of the applications on our phones and is used as a tool by all kinds of companies, from news portals to banks, to handle many simple tasks. However, a recent study raises an alarming hypothesis: are we paying a high cognitive price for this convenience?

According to research from Carnegie Mellon University and Microsoft Research, the use of generative artificial intelligence could reduce the exercise of critical thinking, making us more prone to naivety and superficial thought.

The study, which surveyed 319 skilled knowledge workers across different industries, showed that as the use of tools based on large language models (ChatGPT, Copilot, Gemini) increases, we exercise our capacity for reflection and analysis less. Even though these professionals understand how the models work, delegating reasoning tasks to machines leads them to accept the answers without questioning them.

Trust in AI reduces the use of our capacity for reflection and analysis.

“The trust we place in the results of artificial intelligence decreases the perceived need to think critically,” the study's authors conclude, noting that once users perceive the platform as reliable, they no longer feel the need to verify the information or consider alternative perspectives, which leads to less diversity of ideas and a possible impoverishment of independent thought.

Artificial intelligence apps. Photo: Bloomberg

The convenience paradox

While automating cognitive tasks seems beneficial, saving time, reducing mental effort and increasing productivity, we run the risk of losing the ability to evaluate the accuracy of the results.

This is what the cognitive psychologist Lisanne Bainbridge described in the 1980s as “the convenience paradox”: the more we depend on automated systems to perform complex tasks, the less training and experience we acquire to handle exceptions when automation fails.

As we let ChatGPT or Copilot do the heavy lifting of thinking, we lose the habit of questioning, analyzing and evaluating information, essential skills in a world saturated with data and misinformation.

Of course, this does not mean that artificial intelligence is inherently harmful to critical thinking. The research warns us about the danger of excessive dependence and the importance of continuing to question its results, analyzing the data and checking them against other sources.

Believing that an app, a platform or a chatbot will do all our work for us is an illusion. These are tools that can be very powerful if we know how to use them, but always under supervision and while exercising critical thinking, which is, after all, one of the abilities that distinguish us as humans. Are we willing to sacrifice our capacity to think critically out of sheer mental laziness?

