Thread by Alex Banks
- Mar 30, 2023
- #ArtificialIntelligence #ChatGPT
Elon Musk, Steve Wozniak, and 1,121 tech experts just signed an open letter calling for a pause on training AI models more powerful than GPT-4.
They warn this could "represent a profound change in the history of life on Earth."
Here's what I found out:
1/ Calling for a pause
Top AI researchers and tech leaders have asked AI labs to pause the training of AI systems more powerful than GPT-4 for at least 6 months.
This pause would give time to address safety and ethical concerns.
If a voluntary pause cannot be enacted quickly, they say governments should step in and impose a moratorium.
2/ AI systems becoming human-competitive
The letter raises serious concerns about AI systems:
• Taking over jobs
• Spreading propaganda
• Outsmarting and replacing humans
It's important that such decisions are not left to unelected tech leaders.
3/ Developing shared safety protocols for advanced AI systems
The letter proposes:
• AI labs & experts use the pause to develop safety protocols
• This should be overseen by independent outside experts
• The pause does not apply to AI development in general
4/ Refining existing AI systems
We need to step back from the "dangerous race to ever-larger unpredictable black-box models with emergent capabilities."
AI research and development should be refocused on making today's state-of-the-art systems more accurate, safe, transparent, and trustworthy.
5/ Creating robust AI governance systems
AI developers need to work with policymakers on:
• Regulatory authorities dedicated to AI
• Oversight and tracking mechanisms
• Watermarking to distinguish real from synthetic
• Institutions to manage economic disruption
6/ A flourishing future with AI
It's important we build a future where AI can benefit everyone.
Pausing the most powerful systems would help ensure advances are made in the best interest of humanity.
We need to give society a chance to adapt—not rush forward unprepared.
The rate of change in AI is increasing exponentially.
One thing is for sure: society has had a hard time adapting.
If we lose control of self-improving systems, the consequences could be dangerous.
Governance systems need to be in place to make AI development responsible.
Follow me @thealexbanks for more on AI.
If you liked this thread, you'll love the newsletter.
Subscribe here:
noise.beehiiv.com/subscribe
Help everyone learn and retweet this thread:
You can find the open letter here:
futureoflife.org/open-letter/pause-giant-ai-experiments/
Are you for/against pausing AI development over the next 6 months?
Let me know in the comments.
I recommend you read this piece by @GaryMarcus on AI risk ≠ AGI risk for more context:
garymarcus.substack.com/p/ai-risk-agi-risk