Days before Sam Altman’s removal, researchers warned board of ‘AI threat to humanity’
Sam Altman has now been reinstated as the CEO of OpenAI. File photo


An undisclosed letter from some researchers in the company sent to the board warned of a powerful artificial intelligence discovery that could potentially harm humanity


Why did the OpenAI board sack Sam Altman, the CEO of the company, a few days ago?

The answer seems to be primarily because of an undisclosed letter from some of the researchers in the company to its board that warned of a powerful artificial intelligence discovery that could potentially harm humanity, reported Reuters.

Certain sources in the company told Reuters that the letter was one of the reasons, apart from several other grievances, that led to Altman’s ouster from the company.

The letter referred to a project called Q* (pronounced as Q-Star) which some OpenAI employees believed was a breakthrough in the company’s search for superintelligence, also called artificial general intelligence (AGI) in the industry. AGI is defined as AI systems that are more intelligent than humans.

The new model, developed under the Q* project, was reportedly able to solve certain mathematical problems, which made the researchers optimistic about its future potential and success.

Machines more powerful than humans

Why is that important? Because researchers consider mathematics a frontier of generative artificial intelligence development. Present AI systems are good at writing and language translation, tasks where multiple answers to a question can be acceptable. In math, by contrast, there is only one correct answer. A system that could master math would therefore possess higher reasoning capabilities, resembling human intelligence.

This kind of AGI system would be more powerful than a calculator, which can only solve a limited number of problems. It would be able to generalise, learn, and comprehend.

The letter that was sent to the OpenAI board did not specify the exact safety concerns posed by the project. However, there have long been discussions about the danger superintelligent machines pose to human beings, for example, if the machines were to decide that destroying humanity is in their interest.

Altman seemed to be alluding to this discovery last week. While addressing world leaders at the Asia-Pacific Economic Cooperation summit in San Francisco, he said, “Four times now in the history of OpenAI, the most recent time was in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime.”

The OpenAI board fired Altman one day later. He has now been reinstated as the company's CEO.
