
Senate Ctrl+Alt+Kill The Artificial Intelligence Thread

Discussion in 'Community' started by Darth Punk , Feb 21, 2015.

  1. SuperWatto

    SuperWatto Chosen One star 7

    Registered:
    Sep 19, 2000
    How about both? And traffic?
     
  2. Ramza

    Ramza Administrator Emeritus star 9 VIP - Former Mod/RSA

    Registered:
    Jul 13, 2008
    Yes, all of those things and traffic are more pressing concerns than an AI apocalypse, an excellent point.
     
  3. Darth Guy

    Darth Guy Chosen One star 10

    Registered:
    Aug 16, 2002
    Steve Winwood is a dangerous man.
     
  4. Ramza

    Ramza Administrator Emeritus star 9 VIP - Former Mod/RSA

    Registered:
    Jul 13, 2008
    I predict copies of John Barleycorn Must Die will be self-replicating by 2035 at the latest.
     
  5. VadersLaMent

    VadersLaMent Chosen One star 10

    Registered:
    Apr 3, 2002


    Spoiler: at the end Robot disobeys and saves humanity.

    Raise your A.I. properly, and it might do the same.
     
  6. Jedi Ben

    Jedi Ben Chosen One star 10

    Registered:
    Jul 19, 1999
    Clearly none of you have read Iain M Banks or Neal Asher.

    (i.e. yes, we will end up with AIs more powerful than us, but they can't be arsed to kill us.)
     
  7. VadersLaMent

    VadersLaMent Chosen One star 10

    Registered:
    Apr 3, 2002
    Mistake Not My Current State Of Joshing Gentle Peevishness For The Awesome And Terrible Majesty Of The Towering Seas Of Ire That Are Themselves The Milquetoast Shallows Fringing My Vast Oceans Of Wrath
     
    darskpine10 and Jedi Ben like this.
  8. Jedi Ben

    Jedi Ben Chosen One star 10

    Registered:
    Jul 19, 1999
    Ah, the Meat****ker has arrived.
     
  9. Coruscant

    Coruscant Chosen One star 7

    Registered:
    Feb 15, 2004
    edit: nvm, all hail our AI overlords.
     
  10. Ramza

    Ramza Administrator Emeritus star 9 VIP - Former Mod/RSA

    Registered:
    Jul 13, 2008
    In the OVA for that franchise it's heavily implied that Giant Robo is actually possessed by the spirit of Daisaku's (Johnny in the old Johnny Sokko series dub) father, and consequently is capable of elaborating on Daisaku's orders in a contextually appropriate manner, which is cool because that's an actual goal for learning processes. That OVA also features paranoia regarding Robo due to his origins, mysterious capabilities, and the fact that he has an antiquated nuclear power source. It's all deep and ****.

    Edit: But back on topic, here's a blog post wherein Rodney Brooks expounds on why the malevolent AI question is not really as pertinent as we're inclined to believe it is: http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/
     
  11. Coruscant

    Coruscant Chosen One star 7

    Registered:
    Feb 15, 2004
    All that article has done is make me sad for my dead Roomba. :(
     
  12. GrandAdmiralJello

    GrandAdmiralJello Comms Admin ❉ Moderator Communitatis Litterarumque star 10 Staff Member Administrator

    Registered:
    Nov 28, 2000
    I'd be a lot more worried about you having kids than I would about AI, climate change, or resources.
     
  13. Chancellor_Ewok

    Chancellor_Ewok Chosen One star 7

    Registered:
    Nov 8, 2004
    This is exactly the point I was trying to make on the previous page. There are a lot of really basic questions about the nature of consciousness that we have to answer before we can discuss the prospect of truly intelligent AI. The closest thing we have to AI right now is Siri. In the mid-term, say 20 or 30 years from now, I could see multiple automated factories being networked together to coordinate with each other (think Airbus eventually building A380s without any human intervention). But when we talk about AI, we're not talking about networked factories. We're essentially talking about building Threepio and Data, and that day is a really long way off.
     
    Ramza likes this.
  14. Ender Sai

    Ender Sai Chosen One star 10

    Registered:
    Feb 18, 2001
    SAYS THE AI ITSELF!

    In any event, I'm pretty sure a self-aware AI that isn't Wocky would be useful to us, as it will no doubt solve the annoying questions of how we can get a ship to light-speed, using some sort of drive that warps.
     
    JoinTheSchwarz likes this.
  15. Obi-Zahn Kenobi

    Obi-Zahn Kenobi Force Ghost star 7

    Registered:
    Aug 23, 1999
    Replication of software is not the problem. Creating multiple copies of an AI program on the same computer does absolutely nothing for the AI. To be free from humans, an AI needs its own way to exert force in the world to collect energy and materials. It needs energy and material for maintenance, and significantly more energy and material to create copies of itself or to expand. No computer anyone has ever invented has shown the slightest capability of autonomously gathering materials and energy and using them to expand itself or self-replicate. No AI will ever threaten humans until it can do that.

    Here's an example of a commercially successful robot: the automated vacuum cleaner. It requires a human to empty its receptacle frequently, and it requires humans to mine coal, transport it, and regulate its burning (or the same with natural gas and uranium, or the construction and maintenance of solar, hydro, and wind facilities). It can also get itself stuck in situations it needs a human to free it from.

    No AI will ever pose a threat until it can expand itself or replicate itself autonomously, and humans have never come close to creating anything capable of that.
     
  16. Rogue_Follower

    Rogue_Follower Manager Emeritus star 6 VIP - Former Mod/RSA

    Registered:
    Nov 12, 2003
    Siri is really more of a personified search engine, probably a few generations behind AI initiatives like IBM's Watson, Google's DeepMind, and a variety of other research projects. The most advanced AI is not commercialized yet. Though one could argue that search engines are a form of rudimentary artificial intelligence: Google, for example, learns your personal preferences and shapes search results to suit them.

    Again, though, personality and human-like consciousness are not prerequisites for artificial intelligence. It is incredibly short-sighted, hubristic even, to believe that if a machine does not think in the way we do, then it is not thinking. Or capable of acting in a dangerous way.

    The article Ramza linked mentions that it's a challenge to simulate the brain of something as simple as a worm. Yeah, perhaps we can't simulate the nervous system of C. elegans... at a neuron level... yet. But it's trivial to make a simulation of worm behavior that would be indistinguishable from an actual worm to a human observer. Such a program wouldn't "think" exactly like a worm, but it doesn't have to. Sometimes simplified models work well enough.
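    To make the "behavior-level model" idea concrete, here is a purely illustrative sketch (nothing here comes from the thread, and it is nowhere near a real C. elegans model): a run-and-tumble random walk that climbs an invented attractant gradient without simulating a single neuron. The `attractant` field and all the constants are made up.

```python
import math
import random

def attractant(x, y):
    # Invented "food smell", strongest at the origin.
    return -(x * x + y * y)

def step(x, y, heading, rng):
    """One run-and-tumble move: swim one unit forward; if the smell got
    weaker (or occasionally at random), pirouette to a new heading."""
    nx, ny = x + math.cos(heading), y + math.sin(heading)
    if attractant(nx, ny) < attractant(x, y) or rng.random() < 0.1:
        heading = rng.uniform(0.0, 2.0 * math.pi)
    return nx, ny, heading

rng = random.Random(0)
x, y, heading = 10.0, 10.0, 0.0  # start about 14 units from the "food"
for _ in range(500):
    x, y, heading = step(x, y, heading, rng)
print(round(math.hypot(x, y), 1))  # distance to the "food" after 500 steps
```

    To an observer watching only the trajectory, this looks like recognizably worm-like foraging, yet it shares nothing with how a real nervous system produces the behavior, which is the point above.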

    Limiting the definition of "intelligence" to mean "doing exactly what biological nervous systems do" overlooks the fact that computers don't have the same needs that we do. Much of animal behavior is linked to the need to feed or reproduce. An AI will not necessarily have that motivation. (Though it could, depending on how it was programmed.) That doesn't make AI worse. Or better, for that matter. Just different. The phone in my pocket is far more capable than the worm brain it can't simulate. It doesn't matter that it's an abject failure in the "wriggling", "laying eggs", and "pooping" departments.

    Anyway. AI poses a lot of questions beyond sci-fi robot armageddon. Human abuse/misuse of AI is a more likely issue. Consider the implications of ubiquitous, automated government surveillance and profiling, or the potential military applications of AI.

    Buggy or "stupid" AI is also a potential hazard. Imagine emergency room triage being controlled by a poorly-written AI. The AI detects a pattern that every patient named Bartholomew dies after being admitted (initial sample size: n=1, but the programmer forgot to take that into account...), so new patients named Bartholomew are placed at the end of the line, regardless of the severity of their injuries. The pattern becomes a self-fulfilling prophecy. Or what happens when your self-driving car decides to take a detour off a cliff, because its flawed map said there was a bridge there? The AI in these cases isn't malicious or super-intelligent, yet it is still dangerous.
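    The Bartholomew failure mode fits in a few lines of code. This is a hypothetical sketch (the names, the data, and the `risk_score` rule are all invented for illustration): a ranker that sorts patients by observed per-name mortality rate, with no minimum-sample-size guard.

```python
from collections import defaultdict

# Hypothetical admission history: exactly one patient named Bartholomew
# has ever been seen, and he died (sample size n = 1).
history = defaultdict(lambda: {"admitted": 0, "died": 0})
history["Bartholomew"] = {"admitted": 1, "died": 1}

def risk_score(name):
    """Observed mortality rate for this first name; 0.0 if never seen.

    The bug: a single death yields a rate of 1.0, which the ranker
    treats as certainty. A sane model would demand a minimum sample
    size or smooth the rate toward the population average.
    """
    h = history[name]
    return h["died"] / h["admitted"] if h["admitted"] else 0.0

def triage(patients):
    # Highest "death rate" goes to the back of the line: the
    # self-fulfilling prophecy described above.
    return sorted(patients, key=risk_score)

print(triage(["Alice", "Bartholomew", "Carol"]))
# → ['Alice', 'Carol', 'Bartholomew']
```

    Every deprioritized Bartholomew who then dies feeds the same counter, so the "pattern" only ever gets stronger.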

    EDIT:

    I agree that the doomsday scenario is unlikely, though with a few caveats:
    • AI can be dangerous in ways beyond the contrived sci-fi doomsday scenario (see above).
    • AI does not necessarily need to be free of humans. Even if we do want to talk AI doomsday, enslavement is also on the table. :p
    • AI may not have an intrinsic need to expand or replicate itself. Or even a sense of self preservation, for that matter.
    • AI does not need to expand or replicate itself in order to be a threat.
    • If a hypothetical malicious AI did expand or self-replicate, I feel it would largely do so by utilizing existing infrastructure. Parasitic AI, so to speak.
     
    Darth Punk likes this.
  17. Darth Guy

    Darth Guy Chosen One star 10

    Registered:
    Aug 16, 2002
    My kids would be awesome. And they could beat up your kids. If only I didn't need that pesky other half of their genes.
     
    JoinTheSchwarz likes this.
  18. Darth Punk

    Darth Punk JCC Manager star 7 Staff Member Manager

    Registered:
    Nov 25, 2013
    They're just eager swimmers
     
  19. Alpha-Red

    Alpha-Red 15x Hangman Winner. star 7 VIP - Game Winner

    Registered:
    Apr 25, 2004
    Robot pastors...

     
    BigAl6ft6 and darkspine10 like this.
  20. VadersLaMent

    VadersLaMent Chosen One star 10

    Registered:
    Apr 3, 2002
    *Rises from the dead*

    Funding

    Elon Musk just got $1 billion from Microsoft for A.I. research.
     
  21. Vaderize03

    Vaderize03 Manager Emeritus star 6 VIP - Former Mod/RSA

    Registered:
    Oct 25, 1999
    The most exciting takeaway for me here is that if they're successful, we might be able to transfer a human consciousness into an artificial brain, which would effectively give our species immortality. It's a LOOONGG way off (assuming digitizing consciousness is even possible, which scientists such as Sir Roger Penrose believe is not), but it's an exciting area of research nonetheless.
     
  22. Darth Punk

    Darth Punk JCC Manager star 7 Staff Member Manager

    Registered:
    Nov 25, 2013
    'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says
    The incident raises concerns about guardrails around quickly-proliferating conversational AI models.

    A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported.

    The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.

    SOURCE: https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
     
    Darth Guy likes this.
  23. Lordban

    Lordban Isildur's Bane star 7

    Registered:
    Nov 9, 2000
    Like neuroscience in commerce, AI is presented as a tool without danger, because obvious ethical concerns will surely prevent it from being abused.

    Ah, yes, ethical concerns as the perfect safeguard. I've heard that one before.
     
  24. Lord Vivec

    Lord Vivec Chosen One star 10

    Registered:
    Apr 17, 2006
    This is a very important piece of news, and I'm going to explain why.

    One of the biggest proponents of the idea of AI being an extinction-level danger is Elon Musk. There's a reason people like Elon Musk like to hype up the supposed "dangers" of artificial intelligence. Because it will lead to regulations that benefit proprietary software. Closed-source programming. They get you to think that somehow because you have to pay a capitalist money to access the software, that it's "safer." And so you'll willingly part with your money to use the fanciest technology instead of open source alternatives because you're being conditioned, right now, into fearing the wrong things about AI.
     
  25. VadersLaMent

    VadersLaMent Chosen One star 10

    Registered:
    Apr 3, 2002
    I'm still not convinced you can be separated from your brain. We are thinking matter after all.