Scott Aaronson wrote: [...] given the way civilization seems to be headed, I’m actually mildly in favor of superintelligences coming into being sooner rather than later. Like, given the choice between a hypothetical paperclip maximizer (https://wiki.lesswrong.com/wiki/Paperclip_maximizer) destroying the galaxy, versus a delusional autocrat burning civilization to the ground while his supporters cheer him on and his opponents fight amongst themselves, I’m just about ready to take my chances with the AI. Sure, superintelligence is scary, but stupidity has already been given its chance and been found wanting. (Source: https://www.scottaaronson.com/blog/?p=3553)
@LogicalDash I'd rather not dwell on the gap between what we could do and what is actually happening, because that gap has never been wider in my lifetime. I truly share the feeling expressed above: no existential risk seems worse than the contempt rich humans have for pretty much everything else these days.
@LogicalDash Indeed, but the prospects of a violent revolution are slim, and the odds that it would usher in a better society are even slimmer. It seems far more probable to me that greed-motivated rich robot builders fuck up and end up sealing our fate.