- FLI issues a call to pause AI development, citing risks to society.
- Apple co-founder Steve Wozniak joins Elon Musk in backing the AI pause.
More than a thousand tech leaders and researchers signed an open letter calling for a six-month halt to certain Artificial Intelligence (AI) developments, citing 'profound risks to society and humanity'.
📢 We're calling on AI labs to temporarily pause training powerful models!
Join FLI's call alongside Yoshua Bengio, @stevewoz, @harari_yuval, @elonmusk, @GaryMarcus & over a 1000 others who've signed: https://t.co/3rJBjDXapc
A short 🧵on why we're calling for this – (1/8)
— Future of Life Institute (@FLIxrisk) March 29, 2023
Signatories include Elon Musk, CEO of SpaceX and Tesla and owner of Twitter; Steve Wozniak, co-founder of Apple; Yoshua Bengio, founder and scientific director of Mila; and many others, all calling for a temporary pause on the development of human-competitive AI systems.
Will AI Summer be Rewarding?
The Future of Life Institute (FLI), a non-profit organization whose advisors include Elon Musk, called for the pause so that shared safety protocols for advanced AI systems can be developed and independently audited.
GPT-4, the model behind the latest version of ChatGPT, was developed by OpenAI and released on March 14th. The updated model is considerably more capable than its predecessor, scoring around the 90th percentile on a simulated bar exam and performing strongly on other academic and professional tests. FLI's letter warns of an 'out-of-control race' among AI labs to build ever more powerful systems that not even their creators can understand, predict, or reliably control.
To that end, the FLI letter states,
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
OpenAI's recent statement on artificial general intelligence says,
“At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”
FLI echoes this OpenAI statement, adding that the pause should be public and verifiable, include all key actors, and be enacted quickly; otherwise, governments should step in and institute a moratorium.
The open letter also calls for stronger AI governance, including oversight and tracking of highly capable AI systems, provenance and watermarking tools, and liability for AI-caused harm. The hope, the letter concludes, is for an 'AI summer' in which society reaps the rewards of powerful AI systems and avoids catastrophic effects, rather than rushing unprepared into a fall.