Stephen Hawking reckons A.I. could cause the end of the human race. His bestie Elon Musk thinks so too, likening the creation of autonomous thinking machines to "summoning the demon". Is A.I. one of mankind's biggest existential threats? Should there be more regulation in place? Is there nothing to fear as long as we have Will Smith?
I suppose, on a theoretical level, it is possible for humans to create a self-sustaining intelligence capable of outcompeting us. Such a thing is very unlikely in the short term, though. Even if we succeed in writing software that results in a self-aware computer, which I suppose is possible within our lifetimes, the computer would still lack the tremendous power every human with reproductive organs possesses: the ability to create new copies of itself (rough copies, in our case). A self-aware Von Neumann machine is a tricky monster to make.
Back in November 2014, Musk sent an email to edge.org founder John Brockman in which he feared A.I. could become dangerous in about five to ten years. The email was never meant to be published, but it found its way onto Reddit. It has since been deleted, but not before someone screen-grabbed it (below). Sources: http://www.cnbc.com/id/102192439#. https://www.reddit.com/r/Futurology...musks_deleted_edge_comment_from_yesterday_on/
I'd take that with a grain of salt. Ray Kurzweil said the singularity was supposed to happen by now, and it hasn't happened yet. Functional human-level AI is a long way off, and then there are the more basic questions of what intelligence is, why we are intelligent, how we are intelligent, and whether intelligence is something that can truly be replicated.
Erm. What? Are you suggesting that an AI would find it difficult to self-replicate? Human reproduction is incredibly slow and inefficient. Our generations are on the order of decades and we have to teach everything to every child from first principles. A piece of software can be replicated in seconds, minutes, or hours, depending on its size. And the copy is "born" with the same capabilities as its "parent", assuming it retains access to the "parent's" source of information (roughly, "memories"). The "children" themselves can start self-replicating immediately, in parallel, leading to exponential growth.

Hardware limitations are another story, and perhaps that's what you're alluding to. It's an open question whether a hypothetical general AI will be able to run on a general-purpose processor like most of what we use today or whether it will require specialized hardware. Either way, large parts of our manufacturing base are or will be fully automated within the next few decades, so the perverse machines-making-machines scenario is well within the realm of possibility. I would also point out that there is an incredible amount of computing power already available on the internet, insecure and ripe for the hacking. An AI could theoretically be hosted as a set of distributed processes, spread across computers around the world.

To clarify, I'm not arguing that we'll get a Terminator 3 "Skynet takes over the internet" scenario. But human reproduction is not really an advantage. I would also take issue with the assumption that an AI must have a human-like consciousness or personality in order to be dangerous. You can easily be harmed by someone or something that is not as intelligent as you, or that has intelligence that manifests itself in a different way.
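Just to make the exponential-growth point concrete, here's a toy back-of-envelope sketch in Python. The one-hour copy time and the 25-year human generation are made-up illustrative numbers, not claims about any real or hypothetical system.

# Toy comparison of replication timescales (illustrative numbers only).
COPY_TIME_HOURS = 1          # assumed time for one instance to duplicate itself
HUMAN_GENERATION_YEARS = 25  # rough length of a human generation

def ai_population(hours_elapsed, initial=1):
    """Population if every existing instance copies itself each COPY_TIME_HOURS."""
    doublings = hours_elapsed // COPY_TIME_HOURS
    return initial * 2 ** doublings

if __name__ == "__main__":
    # After one day, a single seed instance has doubled 24 times...
    print(ai_population(24))                        # 16777216 instances
    # ...while a human lineage is still ~219,000 hours away from one new generation.
    print(HUMAN_GENERATION_YEARS * 365 * 24)        # 219000 hours

Obviously the real constraint is hardware and bandwidth, not the copy operation itself, which is the caveat above about specialized hardware.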
You ever notice how literally no one cited in these discussions is actually involved in AI research? Though I guess a bunch of people talking about how you can minimize computation time for learning CSP solvers isn't quite as sexy as "robots are going to kill us all" doomsaying.
AI is not real and never will be. A curse upon this transhumanist garbage, dreamed up by people so sad and discontented with this life that they must spin out another.
The fact you've said it won't happen now guarantees that it will happen. I'd have no issues with robot/AI brothers and sisters in Christ.
A letter wherein the possibility of a malevolent "super AI" is mentioned as an open question in an area requiring further research disproves my point... how, exactly?
I'm as worried about AI conquering humanity as I am about the Book of Revelation coming true. It's irresponsible for people who should know better to focus on this crap. We have real, pressing issues to worry about, like the next ice age in tens of thousands of years.
To be fair, I don't think the possibility of AI ethics and self-replication is outside the scope of theoretical investigation, but a lot of the pop-sci discourse on the subject is polluted with input from the usual suspects (Hawking, Kurzweil, Tegmark) and cults of personality (Yudkowsky), as well as an oddly optimistic tendency to assume there are absolutely no unknown hangups that could derail AI research down the line, nor significantly more pressing extraneous concerns (melting ice caps, dwindling resources, giant meteors), which is weird because they usually love masturbatory doomsday scenarios. Like, right now the best AIs we have can play 25-year-old video games. There's a bit of a variable-input gulf that has to be addressed before apocalypse scenarios become pertinent. But in any case I'm way more worried about, y'know, dying in a car crash.
Joke's on you, I'm throwing any potential offspring to the wolves. Presumably they will go on to found a sort of Rome 2, which won't be as critically well regarded but will do better at the box office.
So you freely admit that you are not the right person to offer critical thought on the future of humanity.
If I had kids, I'd be a hell of a lot more worried about climate change and depletion of resources and other anthropogenic environmental catastrophes affecting them. But I'm not Sarah Connor.
I freely admit I'd rather joke about throwing children into the wilderness than worry about an AI apocalypse, at least.