Does Artificial Intelligence work better when you treat it well or badly?

Although the initial furor over ChatGPT, Gemini and other large language models is slowly waning, it seems certain that the occasions on which we have to interact with Artificial Intelligence platforms will only multiply in the future.

This will generate new protocols and habits, reviving an old dispute: does it make sense to be nice to these devices?

According to a new study, being polite is the best strategy if we want good results.

After analyzing the results obtained with more and less cordial commands (or prompts) across different programs in English, Japanese and Chinese, a team of scientists led by Ziqi Yin and Hao Wang concluded that being rude or using too many imperatives not only fails to produce good results but can significantly degrade a model's performance.

“It has been proven that impolite prompts can lead to a deterioration in model performance, including results and answers containing errors, stronger biases and omission of information,” they concluded after working with half a dozen chatbots on a hundred tasks with 150 prompts each.
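To give a sense of the kind of protocol the study describes, here is a minimal sketch of how one might compare the same task phrased at different politeness levels. It assumes access to the OpenAI Python client; the model name, task text and politeness wordings are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: send the same task at three politeness levels and
# compare the replies. Loosely inspired by the study's protocol; the
# model, task and phrasings below are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = "Summarize the following paragraph in one sentence: ..."

# The same task wrapped at three politeness levels.
POLITENESS_VARIANTS = {
    "polite": f"Could you please help me with this? {TASK} Thank you!",
    "neutral": TASK,
    "rude": f"Do this now and don't mess it up. {TASK}",
}

def run_variant(prompt: str) -> str:
    """Send one prompt variant and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for label, prompt in POLITENESS_VARIANTS.items():
    print(f"--- {label} ---\n{run_variant(prompt)}\n")
```

In the study itself, the researchers then scored responses like these for errors, bias and omitted information, repeating the comparison in English, Japanese and Chinese.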

The strongest hypothesis to account for this result is that large language models reflect certain traits of human communication, including the power of courtesy.

Thus, being polite to chatbots tends to generate better responses, just as courtesy does in conversations between human beings.

Excessive praise is not a good idea either, as overly flattering prompts also resulted in poor performance. And since these are systems that feed on user feedback, friendliness is also a good strategy for securing good responses in the future and keeping these systems sustainable.

Despite what it may seem at first glance, the debate on how we should interact with these platforms is far from superficial, and it echoes some of the concerns that arise around virtual assistants such as Siri, Alexa or Google Assistant, where opinions remain divided.

For specialists like Sherry Turkle, a psychologist at MIT, these devices are “simulated objects of empathy” that should not be treated with the same empathy as a human: “It is deceptive to use this type of language and deference with devices. When we treat machines as if they were people, we allow ourselves to be treated as objects to be deceived.”

For Turkle, the problem with being cordial to and seeking empathy from these systems is that it shifts from AI to “Artificial Intimacy,” a kind of pantomime of true bonds.

But not everyone shares this diagnosis, and there are those who defend the need not to be rude.

In the case of voice assistants, moreover, the discussion intersects with sexism, since the most popular products of the moment are feminized by default, with feminine names, personalities and tones of voice, with the consequence that many users treat them violently, with sexist comments and degrading orders.
