This article's candid engagement with so-called "dumb questions" is a breath of fresh air, as addressing these inquiries can lead to a deeper understanding of the challenges with AI. It's intriguing how, when faced with a terrifying yet plausible scenario, many make the leap from comprehending how something "could" occur to believing it inevitably "will" happen.
Within the EA and rationalist communities, certain narratives of AI-driven devastation have been reiterated so often that they seem to have become the default perspective, with some individuals struggling to envision alternative outcomes. It's possible they possess insights that remain elusive to me and others; for me, though, the inconsistencies in the answers I've heard so far remind me of the confusion I experienced as a child when asking different church elders about the distinction between miracles and magic. While all agreed that miracles weren't magic, no two individuals could provide the same explanation or even a consistent framework for understanding the differences.
Reading some of the other comments, I see that I probably need to pull a bit in the other direction to better convey the seriousness of the x-risks posed by advanced AI. I should clarify that I think AI x-risk is _the_ most serious threat facing human civilization this century (my x-risk estimate is in the 5-10% ballpark) and that the public deserves a plain-language explanation of why. While I think most of my friends overestimate the risks, it seems like there are a lot of points of confusion leading most others to underestimate them. I'll try to write a detailed response when I get time. Thanks again for the great article, Sam.
Also, thank you for saying AI and not AGI!