There was a time when I was slightly embarrassed to bring up concerns about Artificial Intelligence to friends and family who weren’t already AI safety-pilled.
Ohh great, this feels like the best way to make sure that the very worst kind of AI is built.
I don't agree with Yudkowsky et al. on the risk, for a number of reasons I won't go into here, but even granting that risk, it's probably still worse to end up with military AI and no civilian development done in the open by people concerned with safety. And the problem is that the kind of support you cite doesn't seem like the robust conviction that would demand laws limiting even NSA programs. It's more likely the kind of support that imposes some vague limitation and then, so long as no violation is visible, stops caring.