While the promises of what Artificial Intelligence (AI) can do continue to grow, in a spiral that specialists have seen before in history, and one that does not usually end well, the exaggerations and scams grow along with them. Is it possible to make companies deliver what they promise?
Already in the ’80s, computer science professor Drew McDermott explained that AI has repeatedly gone through periods of optimistic predictions and massive investment, which he called “AI springs”, and periods of disappointment, lost confidence and reduced funding, or “AI winters”.
These cycles go back almost 70 years, and there is no doubt that today we are in full spring, with companies generating large profits and attracting investors. Sometimes, however, promises are made that cannot be kept.
The deceptions may soon come to an end: last week the US government launched Operation AI Comply, an effort to assure citizens that new AI platforms can actually do what they advertise.
In the first stage, five companies were sued: two chose to close their businesses and three decided to face trial. One of them was DoNotPay, with a “robot lawyer” that offered legal advice at a much lower cost than a human professional.
However, numerous complaints to the Federal Trade Commission warned about its limitations in handling even simple legal cases.
The other company that decided to end its activities was Rytr, a large language model service that generated fake reviews and testimonials about businesses in order to position them better among the favorites of Google Maps users.
Although these measures are undoubtedly welcome steps toward greater transparency and honesty in the technology sector, the road ahead is still long, not only because of pressure from the companies themselves, which wield considerable lobbying power, but also because of the speed of innovation, which makes it difficult for regulators to keep up and establish effective guidelines.
An additional difficulty comes from the opacity of these platforms, which are often virtual black boxes. The impossibility of knowing how artificial intelligence systems operate is an obstacle to correctly identifying potential biases or ensuring compliance with regulations.
As users, we must demand that transparency include clear explanations of a system’s purpose, its use of data, and the logic behind its decision-making.
These are not easy challenges. We need a flexible, adaptable regulatory framework for Artificial Intelligence, one that can address constantly evolving ethical issues while encouraging innovation and ensuring public trust.
This will not be achieved without dialogue and constant collaboration between policy makers, sector experts, ethicists and us, the users.