Effective Accelerationism and the AI Safety Debate w/ Bayeslord, Beff Jezos, and Nathan Labenz
Mar 11, 2023
Anonymous founders of the Effective Accelerationism (e/acc) movement @Bayeslord and Beff Jezos (@BasedBeff) join Erik Torenberg, Dan Romero, and Nathan Labenz to debate views on AI safety. We record our interviews with Riverside. Go to https://bit.ly/Riverside_MoZ + use code ZEN for 20% off.
(3:00) Intro to effective accelerationism
(8:00) Differences between effective accelerationism and effective altruism
(23:00) Effective accelerationism is bottom-up
(42:00) Transhumanism
(46:00) “Equanimity amidst the singularity”
(48:30) Why AI safety is the wrong frame
(56:00) Pushing back against effective accelerationism
(1:06:00) The case for AI safety
(1:24:00) Upgrading civilizational infrastructure
(1:33:00) Effective accelerationism is anti-fragile
(1:39:00) Will we botch AI like we botched nuclear?
(1:46:00) Hidden costs of emphasizing downsides
(2:00:00) Are we in the same position as Neanderthals were before humans?
(2:09:00) “Doomerism has an unpriced opportunity cost of upside”
More show notes and reading material are available on our Substack: https://momentofzen.substack.com/
Thank you to Secureframe for sponsoring (use code "Moment of Zen" for a 20% discount) and to Graham Bessellieu for production.
Music License:
CMI56YIXORBOB8ZW
AUWPOHS6DAPPCYV1