One of the major limitations of artificial intelligence, and one of the arguments most commonly used to reassure society that AI will not end our jobs, is that this technology requires human commands to do the things it is capable of doing.
For example, AI cannot replicate the Mona Lisa unless a person behind it has written a prompt (a command) telling it what to do. In other words, until now we have always assumed that AI operates under our orders.
However, the rapid development of this technology has surprised even the most experienced professionals in the sector. We have already seen the so-called godfather of AI reiterate his concern about the path the technology is taking, while OpenAI, the leading company in the field, goes through a major internal crisis over the security and control measures for that technology.
The problem is that AI is becoming increasingly intelligent and capable. While this is good news for technological development in general, we also need to be cautious, because we are slowly approaching a point where machines will be highly intelligent, and who knows whether they might even attain consciousness.
The latest example is the announcement Apple has made as a preview of the many things it will present in June at the 2024 Worldwide Developers Conference (WWDC): a new feature for Siri, the company's virtual assistant, arriving with iOS 18 and called Proactive AI.
What is Apple’s Proactive AI and what is it for?
As Apple explains, in iOS 18 Siri will receive an update to improve its capabilities based on Proactive AI. The assistant will adopt a more conversational tone and incorporate functions that allow it to act more autonomously and rely less on user commands.
Among the most notable new capabilities, Siri will be able to automatically summarize notifications and articles, transcribe voice notes, and improve existing features such as automatically filling in the calendar or suggesting specific applications. In this way, the AI will be able to anticipate needs and suggest actions to make users’ lives easier.
Why should we be concerned?
While it is true that, at the user level, these new capabilities sound appealing, will make our lives easier, and seem harmless, it is important to look further. The announcement of Proactive AI means the technology is gaining ever more capacity for reasoning and self-direction, and as long as that is limited to simple, well-intentioned actions, it is all good news.
However, what would happen if one day Proactive AI became so intelligent that it suddenly decided it no longer wanted to serve users and acted as it pleased? Or if a hacker intervened and turned that proactivity to malicious ends, for example by confusing the AI so that it shared all the passwords the user has saved on their device?
There are many possible scenarios. Although the arrival of Proactive AI may at first seem like good news because it will make our lives easier, there are many security issues that, in the absence of controls and rules, should alert us to this overwhelming pace of AI development.