Reuters reports a potential new twist in the case surrounding OpenAI. The latest development came yesterday: Sam Altman will return as chief executive, and no one outside the companies knows what his position at Microsoft will be, if anything.
From language to math – a generalized AI
“Potential” because we do not yet know whether the new letter revealed by Reuters had anything to do with the chaotic week, but it is spicy nonetheless. According to Reuters (which cites two sources confirming that the letter is one of the reasons), senior developers at the company wrote a letter addressed to the board. The letter describes an advance in AI so massive that "it is a danger to humanity."
"The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the man who fronts generative AI, the two sources said," writes Reuters.
That's why Musk wants to take a step back
Reuters has not been able to obtain the letter, and OpenAI declined to comment on the matter to the news agency, but an internal message to employees confirms an internal project referred to as Q* and a letter that was reportedly sent to the board before the weekend's chaos. The company's developers are said to fear that OpenAI's board does not understand the consequences of commercializing the powerful new AI, or LLM if you will.
This is something Elon Musk, who was one of the first backers of OpenAI, has previously warned against:
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
May be close to AGI
It is OpenAI's race toward Artificial General Intelligence, AGI for short, that worries those familiar with the project. When one or more companies reach that milestone, the fear is that computers may take over most of the work tasks people currently perform, with dramatic economic consequences as a result. To reach such an AI level, today's large language models must also learn math. That is what the Q* project has reportedly achieved, albeit only at the primary-school level.
But mastering the ability to do math, where there is only one right answer, implies that the AI has greater reasoning abilities resembling human intelligence. This could, for example, be applied to new scientific research, AI researchers believe. Unlike a calculator, which can solve only a limited number of operations, an AGI can generalize, learn, and understand.