
Lit Humans v. Droids: What's the difference?

Discussion in 'Literature' started by Outsourced, Nov 22, 2019.

  1. Darth Invictus

    Darth Invictus Jedi Grand Master star 5

    Registered:
    Aug 8, 2016
    Souls in the context of SW do unambiguously exist though.

    Force Ghosts and so on prove that.

    That has to factor in how we define sapience as far as SW is concerned.
     
  2. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
    Here are my responses to this essay:


    <<<<Films and TV shows like Blade Runner, Humans, and Westworld, where highly advanced robots have no rights, trouble our conscience. They show us that our behaviors are not just harmful to robots—they also demean and diminish us as a species. We like to think we’re better than the characters on the screen, and that when the time comes, we’ll do the right thing, and treat our intelligent machines with a little more dignity and respect.>>>>

    The writer has opened up his essay assuming everyone is bothered by the treatment of robots. I am far more troubled by the treatment of human beings and animals in comparable sci-fi films.

    <<<<<With each advance in robotics and AI, we’re inching closer to the day when sophisticated machines will match human capacities in every way that’s meaningful—intelligence, awareness, and emotions. Once that happens, we’ll have to decide whether these entities are persons, and if—and when—they should be granted human-equivalent rights, freedoms, and protections. >>>>

    The author writes, "Once that happens..." That assumes a point of no return. Human beings and animals are infinitely more complex than the author's standard for matching intelligence, awareness, and emotions. Once again, ontogeny (life-span development) is one giant missing piece, an elephant in the room.

    Our intelligence, awareness, and emotions are fluid and develop over our lifespan. These qualities of humans and animals are dynamic and changing.

    Robotic intelligence offers advanced computations within a static system. Robots lack individuality.

    <<<We talked to ethicists, sociologists, legal experts, neuroscientists, and AI theorists with different views about this complex and challenging idea. It appears that when the time comes, we’re unlikely to come to full agreement. Here are some of these arguments.>>>

    The author writes "we’re unlikely to come to full agreement." This notion of "full agreement" is misleading and creates a false urgency for the reader to continue.

    This author would better serve reality by stating that no agreement can be reached.

    <<<<Why give AI rights in the first place?
    We already attribute moral accountability to robots and project awareness onto them when they look super-realistic. The more intelligent and life-like our machines appear to be, the more we want to believe they’re just like us—even if they’re not. Not yet. >>>>

    The form-factor of a computer is deceiving. The author opens this section by framing moral accountability in terms of outward appearances. Yoda: "Judge me by my size, do you?"

    <<<But once our machines acquire a base set of human-like capacities, it will be incumbent upon us to look upon them as social equals, and not just pieces of property. The challenge will be in deciding which cognitive thresholds, or traits, qualify an entity for moral consideration, and by consequence, social rights. Philosophers and ethicists have had literally thousands of years to ponder this very question. >>>

    At least the author acknowledges that philosophers and ethicists have had thousands of years to ponder the question. However, the author writes, "it will be incumbent upon us to look upon them as social equals, and not just pieces of property."

    This author has a political agenda to group robots with other groups that have been discriminated against. When the droids get told they can't go certain places, we are not supposed to feel pity or a sense of social injustice.

    If something was manufactured from parts, has no individual identity, and was never born, how can it not be property? Robots will require ownership papers and a Bill of Sale (receipt) just like any other computer.

    Can there be any point to emancipating a robot from ownership to a human being? What would that emancipation serve? Who benefits from the emancipation?

    <<<<“The three most important thresholds in ethics are the capacity to experience pain, self-awareness, and the capacity to be a responsible moral actor,” sociologist and futurist James Hughes, the Executive Director of the Institute for Ethics and Emerging Technologies, told Gizmodo.

    Hughes believes that self-awareness comes with some minimal citizenship rights, such as the right to not be owned, and to have its interests to life, liberty, and growth respected. With both self-awareness and moral capacity (i.e. knowing right from wrong, at least according to the moral standards of the day) should come full adult human citizenship rights, argues Hughes, such as the rights to make contracts, own property, vote, and so on.

    “Our Enlightenment values oblige us to look to these truly important rights-bearing characteristics, regardless of species, and set aside pre-Enlightenment restrictions on rights-bearing to only humans or Europeans or men,” he said. Obviously, our civilization hasn’t attained the lofty pro-social goals, and the expansion of rights continues to be a work in progress."


    The author quotes James Hughes (first time hearing of him) to say Hughes believes self-awareness comes with minimal citizenship rights, such as the right not to be owned, and interests in life, liberty, and *growth (*normally life, liberty, and property are combined for citizenship rights).

    This line of argument is full of fallacy. Robots are computers. Computers break down and misbehave. The owner chooses to fix them or scrap them. The author falls into fallacy by comparing rights for robots to pre- and post-Enlightenment rights for human beings.

    <<<Who gets to be a “person”?
    Not all persons are humans. Linda MacDonald-Glenn, a bioethicist at California State University Monterey Bay and a faculty member at the Alden March Bioethics Institute at Albany Medical Center, says the law already considers non-humans as rights bearing individuals. This is a significant development because we’re already establishing precedents that could pave a path towards granting human-equivalent rights to AI in the future.

    “For example, in the United States corporations are recognized as legal persons,” she told Gizmodo. “Also, other countries are recognizing the interconnected nature of existence on this Earth: New Zealand recently recognized animals as sentient beings, calling for the development and issuance of codes of welfare and ethical conduct, and the High Court of India recently declared the Ganges and Yamuna rivers as legal entities that possessed the rights and duties of individuals.” >>>


    The author opens this section by writing that not all persons are humans. The author is playing a game with semantics. We already know businesses, animals, and rivers are not human beings. They can be treated as legal entities under the legal system.

    <<<<Efforts also exist both in the United States and elsewhere to grant personhood rights to certain nonhuman animals, such as great apes, elephants, whales, and dolphins, to protect them against such things as undue confinement, experimentation, and abuse. Unlike efforts to legally recognize corporations and rivers as persons, this isn’t some kind of legal hack. The proponents of these proposals are making the case for bona fide personhood, that is, personhood based on the presence of certain cognitive abilities, such as self-awareness. >>>>

    The author falls into fallacy again. The same game of semantics continues into this paragraph. A chimp is a chimp. A chimp is not a human being. Nature intended chimps to stay as chimps.

    <<<MacDonald-Glenn says it’s important to reject the old school sentiment that places an emphasis on human-like rationality, whereby animals, and by logical extension robots and AI, are simply seen as “soulless machines.” She argues that emotions are not a luxury, but an essential component of rational thinking and normal social behavior. It’s these characteristics, and not merely the ability to crunch numbers, that matters when deciding who or what is deserving of moral consideration. >>>


    Emotions are linked to fundamental survival of the organism. Here is where that argument breaks down. Robots do not need to survive. Since they are computers, they will have a shut down button or kill switch. No survival mechanism is needed with robots. Their emotional responses instead serve a different function of showing human-like or animal-like communication.

    And guess what? Animals can instantly spot a fake member of their breed. They approach a robotic version of their breed, smell it, and instantly know it is something other than one of their kind. The same will be true with human-like robots (androids). Androids will have markers that let us know fairly quickly they are not a genuine biological entity.

    <<<<Indeed, the body of scientific evidence showcasing the emotional capacities of animals is steadily increasing. Work with dolphins and whales suggest they’re capable of experiencing grief, while the presence of spindle neurons (which facilitates communication in the brain and enables complex social behaviors) implies they’re capable of empathy. Scientists have likewise documented a wide range of emotional capacities in great apes and elephants. Eventually, conscious AI may be imbued with similar emotional capacities, which would elevate their moral status by a significant margin.

    “Limiting moral status to only those who can think rationally may work well for AI, but it runs contrary to moral intuition,” MacDonald-Glenn said. “Our society protects those without rational thought, such as a newborn infant, the comatose, the severely physically or mentally disabled, and has enacted animal anti-cruelty laws.” On the issue of granting moral status, MacDonald-Glenn defers to English philosopher Jeremy Bentham, who famously said: “The question is not, Can they reason? nor Can they talk? but, Can they suffer?” >>>


    We already know that animals have emotions that mirror our own emotions. On the question of "Can they suffer?", YES, this question is close to what is important. However, the programmer and builder determine whether robots suffer internally or not and what value the suffering serves. Can robots communicate suffering without actually suffering internally? I would answer yes. We program them and build them. We shut them down when they deviate from their programming.

    <<<Can consciousness emerge in a machine?
    But not everyone agrees that human rights should be extended to non-humans—even if they exhibit capacities like emotions and self-reflexive behaviors. Some thinkers argue that only humans should be allowed to participate in the social contract, and that the world can be properly arranged into Homo sapiens and everything else—whether that “everything else” is your gaming console, refrigerator, pet dog, or companion robot.

    American lawyer and author Wesley J. Smith, a Senior Fellow at the Discovery Institute’s Center of Human Exceptionalism, says we haven’t yet attained universal human rights, and that it’s grossly premature to start worrying about future robot rights.

    “No machine should ever be considered a rights bearer,” Smith told Gizmodo. “Even the most sophisticated machine is just a machine. It is not a living being. It is not an organism. It would be only the sum of its programming, whether done by a human, another computer, or if it becomes self-programming.”

    >>>>

    My line of argument echoes that of the person quoted here, Wesley J. Smith:

    “Even the most sophisticated machine is just a machine. It is not a living being. It is not an organism. It would be only the sum of its programming, whether done by a human, another computer, or if it becomes self-programming.”

    I agree.


    <<< "Emory Center for Ethics, says machines will likely never deserve human-level rights, or any rights, for that matter. The reason, she says, is that some neuroscientists, like Antonio Damasio, theorize that being sentient has everything to do with whether one’s nervous system is determined by the presence of voltage-gated ion channels, which Marino describes as the movement of positively charged ions across the cell membrane within a nervous system.

    “This kind of neural transmission is found in the simplest of organisms, protista and bacteria, and this is the same mechanism that evolved into neurons, and then nervous systems, and then brains,” Marino told Gizmodo. “In contrast, robots and all of AI are currently made by the flow of negative ions. So the entire mechanism is different.”

    According to this logic, Marino says that even a jellyfish has more sentience than any complex robot could ever have." >>>


    Antonio Damasio is the only person I have heard of so far. He is a professor of neuroscience.

    Damasio knows that neuronal communication occurs in the context of ontogeny (life-span development). Brain strata are dynamic and interconnected with the rest of the body (afferent and efferent).

    Damasio is quoted as saying the presence of "voltage-gated ion channels" found in biological organisms contrasts with the "flow of negative ions" found in Artificial Intelligence. This is too reductionistic. The sum is greater than its parts.

    Let's say a majority consensus could be reached that robots can show awareness. The awareness itself (broken down into mental states: waking, alert, sleep states, hunger, sex) would not be sufficient to necessitate granting robots the rights of personhood.


    <<< "Another scientist who believes consciousness is somehow inherently non-computational is Stuart Hameroff, a professor of anesthesiology and psychology at the University of Arizona. He has argued that consciousness is a fundamental and irreducible feature of the cosmos (an idea known as panpsychism). According to this line of thinking, the only brains that are capable of true subjectivity and introspection are those comprised of biological matter.

    Hameroff’s idea sounds interesting, but it also lies outside the realm of mainstream scientific opinion. It is true that we don’t know how sentience and consciousness arises in the brain, but the simple fact is, it does arise in the brain, and by virtue of this fact, it’s an aspect of cognition that must adhere to the laws of physics. It’s wholly possible, as noted by Marino, that consciousness can’t be replicated in a stream of 1's and 0's, but that doesn’t mean we won’t eventually move beyond the current computational paradigm, known as the Von Neumann architecture, or create a hybrid AI system in which artificial consciousness is produced in conjunction with biological components. " >>>


    These statements are important.


    <<<<What if we don’t?
    Once our machines reach a certain threshold of sophistication, we will no longer be able to exclude them from our society, institutions, and laws. We will have no good reason to deny them human rights; to do otherwise would be tantamount to discrimination and slavery. Creating an arbitrary divide between biological beings and machines would be an expression of both human exceptionalism and substrate chauvinism—ideological positions which state that biological humans are special and that only biological minds matter.

    “In considering whether or not we want to expand moral and legal personhood, an important question is ‘what kind of persons do we want to be?’” asked MacDonald-Glenn. “Do we emphasize the Golden Rule or do we emphasize ‘he who has the gold rules’?”

    What’s more, granting AIs rights would set an important precedent. If we respect AIs as societal equals, it would go a long way in ensuring social cohesion and in upholding a sense of justice. Failure here could result in social turmoil, and even an AI backlash against humans. Given the potential for machine intelligence to surpass human abilities, that’s a prescription for disaster.

    Importantly, respecting robot rights could also serve to protect other types of emerging persons, such as cyborgs, transgenic humans with foreign DNA, and humans who have had their brains copied, digitized, and uploaded to supercomputers.

    It’ll be a while before we develop a machine deserving of human rights, but given what’s at stake—both for artificially intelligent robots and humans—it’s not too early to start planning ahead. >>>



    Having read through most of this article, I do not believe the author found sufficient support for his closing assertions. The author is using a spin by saying that denying robots human rights would be tantamount to discrimination and slavery.

    This concept of "Emerging Persons" is another spin. We are not there yet...and...those may never gain "Person Status" nor any status outside of staying as lab experiments.

    Can the person who uploaded their digitized brain to a computer maintain control over their "endowment"? I would argue they cannot maintain control over their uploaded brain after their death.
     
  3. Iron_lord

    Iron_lord Chosen One star 10

    Registered:
    Sep 2, 2012
    Why? The existence of Vulcan Katras didn't factor into how ST characters defined sapience.

    Another interesting set of reasons for eventually granting robots rights:


    https://blog.goodaudience.com/5-reasons-why-robots-should-have-rights-4e62e8698571
     
    Last edited: Dec 16, 2019
    vncredleader likes this.
  4. vncredleader

    vncredleader Force Ghost star 5

    Registered:
    Mar 28, 2016
    Ok......so? Who's to say a machine, even just a machine, cannot also be a person? Don't complain about semantics when the counter-argument is itself all semantics, itself just stating a presupposed notion.
     
    Iron_lord likes this.
  5. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
    A human being or animal can be enhanced by a machine and still be a member of their species.

    The ancient Egyptians are laughing at us here in the future with their frescos of animal hybrids.
     
  6. vncredleader

    vncredleader Force Ghost star 5

    Registered:
    Mar 28, 2016
    And? That does nothing to preclude robots from also being people. If the robot can self-program, then it is more than its original state. Heck, the same applies to people: if a person is still a member of their species despite being helped along or informed by other people, then the same applies to a robot adapting to other robots or to its programming changing and evolving. They have the capacity to evolve, and just like humans, that capacity can only be realized if they interact with other beings. Hegel's master/slave dynamic comes into play: we are only able to become ourselves through interactions with other people; we need them in order to find a view of ourselves.

    Saying that a human or animal can be enhanced but is still a member of their species means nothing, cause the argument has nothing to do with whether or not something is a member of its species. A robot still being a robot does not preclude robots from being people. Those are not exclusive. You are just working backwards from a presupposed notion of reality, and then just stating your presupposition as an argument in and of itself.
     
    Iron_lord likes this.
  7. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
    My view is that human beings and robots are 100% mutually exclusive from each other.

    Robots lack ontogeny (life span). Robots are never born and don't have parents. They are a tool and nothing more than a piece of equipment for a biological being. When corporations start mass producing commercialized robots, they will be sold like computers.

    ...better hold on to the receipt...to prove ownership...in case your robot gets into trouble....and a manufacturing defect is discovered...

    But you are insisting a robot has rights and personhood over and above manufacturing defects...

    We cannot afford to treat a single robot as a legal entity for the reasons stated. The biological owner or corporate entity remains the legal entity responsible for a robot.


    You can call a dog a cat all you want. A dog is still a dog. That is semantics. Calling the thing different names does not change the nature of the thing.

    A robot is still a computer. A human being is still a mammal. A lizard is still a reptile.

    I can't follow your logic on that one. So here is my logic again.

    We can't change the nature of what something is by calling it different names.

    Name calling does not advance progress. Name calling instead slows progress.

    No person and no object exist in isolation.

    And yet, the sum will always be greater than its parts.


    Robots have no will. They are projections of our will. We, as human beings, have a will. Our legal system recognizes our will. Our legal system cannot afford to recognize the will of a robot.

    However...I can admit that some legal system somewhere will erroneously grant legal entity status to a robot and claim that they are being progressive.....


     
    Last edited: Dec 17, 2019
  8. vncredleader

    vncredleader Force Ghost star 5

    Registered:
    Mar 28, 2016
    Our children are projections of our will; society is a projection of our will. Robots do not somehow have a lesser ability to be persons simply cause they come from humans. Also, giving names and questioning names doesn't slow things; it is a principle in much of philosophy. Wittgenstein's language games, for instance, in which words are not separate from reality but rather part of a game with no meaning outside of the rules. Philosophy Tube covers that in the linked time in this vid, as well as the important wrap-up for that point at 21:53

    This means that you can change the nature of a thing by calling it a different name, and it can also be utterly unchanging at the same time. That is perspective. But more importantly, I have no clue what you are trying to say there. No one is saying don't call robots robots, or that doing so would change what they are. What they are, no matter what you call them, is just as validly a person or just as invalidly a person either way. Again, you're saying nothing, simply reiterating that opinions are indeed a thing that exists... and then saying yours is somehow a fixed fact rather than your presupposition.

    Droids in Star Wars (remember Star Wars, that thing this thread is about?) have wills. We have seen them break away from programming or set orders, maybe not always fully, but they have wills they can act on beyond and in tandem with their programming and orders. You cannot deny that; if you wanna have this convo, you have to accept something like The Phantom Limb, which shows droids having wills of their own, even when not directly able to override an order. They work around it to help another simply cause it is what they WANT.

    That goes back to Bicentennial Man: he wishes to be free, grasps the concept, and does things for no other reason than his own enjoyment. We cannot quantify whether that is any more or less valid than the joy you might get out of doing the same action (in this case wood carving), but that does not make the sense of pleasure invalid. If it can feel pleasure and have preferences, then those matter and exist. That is separate from their orders; if they want something outside of their orders, then the programming has led them to will to power. Or at least to having the capacity.

    You will say that is programming and thus does not count, but does the fact that I need to go to the bathroom, or more directly, that I need to urinate regardless of the location, make me not a person? I did not will myself to urinate; I doubt my "soul" (whatever the heck that even means) dictated that, no, it was my programming. Of course, yeah, it is different, and literally brainless organisms have involuntary reflexes, but the point is that programming, minute and gross, is not something that should decide the validity of an action, or a being's capacity for will.

    Cause I can will to power myself not to pee, until my body forces me to. A droid can will itself not to shut off, but it will run out of power either way.

    Droids have will to power, to the point of circumventing their own orders at times to cope with the requirements of their moral orders. People have morals imposed on them; these do not exist naturally, they exist through empathy, social interaction, and of course laws created by and enshrined via social contract. If a robot can even just WISH to act beyond these rules, then that is their equivalent to a human having the freedom to want to break social mores.

    Of course it is different; no one is saying they are no longer a machine, but being a person does not HAVE to be nailed down to being a human trait. It can be a machine, and be a person. You can call that semantics; I call it literally one of the most significant parts of any philosophy. The word can change; you cannot just say that changing it does nothing cause the nature of it is the same. Cause... no, the nature of personhood is up to us to decide.

    Personhood is not some separate entity from our reality, it is part of a language game. It is not a cosmic concept that we discovered or learned of, it simply is. Otherwise how do you know anything? If we are gonna say the nature of things is immovable, then what does "I think" mean? Heck even Descartes has been critiqued with the idea that "I" cannot be determined, it is a presupposition, thus he should only be able to truly know that "something is thinking"

    These language games do matter, but again, just within the context of Star Wars, droids can circumvent; that being the byproduct of programming an organic being made does not make it less real once it can will things and adapt. You are still just presupposing that it is programming, and that as something different it cannot count as personhood cause it comes from people. Again, the nature of "person" is not a universal concept that we can quantify; but also, if I need to go to the bathroom, that is my bladder telling my brain that it is ready to urinate. This is the byproduct of eons of organisms being programmed, to the point that even without a parent teaching them, even without any interaction at all, their body has that involuntary movement. Beyond that, the bladder is acting totally on its own to inform the brain. Body parts can actually react before the brain even can respond to tell them to act. Each part is programmed.

    Oh and a robot can have a life span, actually Bicentennial Man uses that as the final winning argument. Like not even kidding, the robot proves his personhood by having someone operate on him to make him die at 200 years. Hence the title
     
    Last edited: Dec 17, 2019
    Daneira and Iron_lord like this.
  9. Iron_lord

    Iron_lord Chosen One star 10

    Registered:
    Sep 2, 2012
    A human "conceived in a lab" (a test tube baby) has no less rights than a human created the old-fashioned way.

    Similar principles should apply to lab-created beings.

    It is just as much "cruelty to animals" and equally immoral, to abuse a geep (goat-sheep hybrid) as it is to abuse a sheep or a goat.

    "It's a lab experiment" does not make the creator immune to prosecution if they mistreat it.


    The way I see it, in order to be ready for encounters with nonhuman intelligences (either ones we create, or naturally evolved ones), humanity needs to move away from a system of human rights, and towards a system of sapient rights, persons' rights, etc.

    That way, if we ever create sapient animals, or sapient computers, the laws are in place to protect them from exploitation, and ensure they are treated properly.
     
    vncredleader likes this.
  10. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017

    This calls for revisiting Yoda's "luminous beings" quote from The Empire Strikes Back. Luminous beings we are. George Lucas is a teaching spirit.




    Yoda Quote:

    Size matters not. Look at me. Judge me by my size, do you? Hmm? Hmm. And well you should not. For my ally is the Force, and a powerful ally it is. Life creates it, makes it grow. Its energy surrounds us and binds us. Luminous beings are we, not this crude matter. You must feel the Force around you; here, between you, me, the tree, the rock, everywhere, yes. Even between the land and the ship.
     
    Last edited: Dec 17, 2019
  11. Iron_lord

    Iron_lord Chosen One star 10

    Registered:
    Sep 2, 2012
    And the Vulcans are "luminous beings" thanks to having Katras. This doesn't stop Data or the Voyager EMH from winning respect as individuals and persons.
     
    Outsourced and vncredleader like this.
  12. Darth Invictus

    Darth Invictus Jedi Grand Master star 5

    Registered:
    Aug 8, 2016
    I would say the most advanced droids in SW pass the Turing Test and are thus sapient, but that definitely does not apply to all droids in the GFFA.
     
    vncredleader likes this.
  13. vncredleader

    vncredleader Force Ghost star 5

    Registered:
    Mar 28, 2016
    Plus Lucas treats droids as being sentient. That and what Yoda says does not preclude them from being people. "We" refers to Yoda and Luke; they are luminous beings, but that does not in any way, shape, or form make that which is not "we" not a person. Being luminous is not a qualifier for being a person, and Yoda does not say that. Crude matter is not precluded from being sentient just cause it does not have a soul. Personhood is not measurable by having a soul, cause what that even is is so dependent on interpretation.
     
    Iron_lord likes this.
  14. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
    https://en.wikipedia.org/wiki/Turing_test

    Consciousness vs. the simulation of consciousness
    The Turing test is concerned strictly with how the subject acts – the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of the mind. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.

    John Searle has argued that external behaviour cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking."[35] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)

    Turing anticipated this line of criticism in his original paper,[67] writing:

    I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.[68]
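    A side note on the ELIZA point in that excerpt: it is easy to demonstrate. Below is a minimal, hypothetical sketch (the rules are invented for illustration, not the historical ELIZA script) of a responder that produces conversational behaviour by mechanical pattern-matching alone, with no understanding anywhere in the loop:

```python
import re

# A tiny, hypothetical rule table in the spirit of ELIZA: each entry maps a
# regex over the user's input to a canned response template. Nothing here
# "understands" anything; the output is pure pattern substitution.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bdroid(s)?\b", re.IGNORECASE), "Tell me more about droids."),
]
DEFAULT = "Please go on."

def respond(text: str) -> str:
    """Return a canned reply chosen by mechanical rule-matching alone."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Fill the template with whatever the regex happened to capture.
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am troubled by robot rights"))
# -> Why do you say you are troubled by robot rights?
print(respond("What about droids?"))
# -> Tell me more about droids.
```

    Every reply is a canned template filled in by regex capture; the point, per Searle, is that fluent-looking output by itself says nothing about whether there is a mind behind it.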
     
  15. ColeFardreamer

    ColeFardreamer Force Ghost star 5

    Registered:
    Nov 24, 2013
Droid Rights Activists, I bring news from Episode IX. I saw it twice today, and it adds to this topic in interesting ways ;) Brace for impact — not gonna spoil it. Soon you'll know!
     
    Tython Awakening likes this.
  16. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
    I would like to think that JJ Abrams has something to say on this topic.
     
  17. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
    Revisiting this topic post-Episode 9

    Re: JJ Abrams
    Yes, he did have something to say about this topic in the Ep. 9 scenes with C-3PO. The Mandalorian also addresses this topic.

    "There is nothing to be sad about. I have never been alive."

    https://starwars.fandom.com/wiki/IG-11


     
    Last edited: Feb 16, 2020
  18. CernStormrunner

    CernStormrunner Jedi Grand Master star 4

    Registered:
    Jul 6, 2000
    Droids are made of metal
    Humans are made of meat
     
    Alpha-Red and Iron_lord like this.
  19. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
    That's the Kentucky Fried Chicken Human analogy. Or choose a fast food/eatery analogy for your respective country.
     
  20. CernStormrunner

    CernStormrunner Jedi Grand Master star 4

    Registered:
    Jul 6, 2000
    Parts is Parts
     
  21. vncredleader

    vncredleader Force Ghost star 5

    Registered:
    Mar 28, 2016
I feel IG-11 proves droids are people. There is actual compassion there, a willingness to trust Mando and Cara, etc. It is spelled out by Nick Nolte's character: just like Baby Yoda, IG-11 is a reflection of who raised him. It was not even programming by coding, but rather learning by experience.
     
    Iron_lord likes this.
  22. Tython Awakening

    Tython Awakening Force Ghost star 4

    Registered:
    Oct 12, 2017
I can accept that IG-11 is a surrogate protector for a member of Baby Yoda's species. Essentially, Baby Yoda should be growing up under the protection and care of members of his own species; there is true comfort and safety in being surrounded by your own tribe. IG-11 is merely a temporary protector, a stand-in for a tribe of Baby Yoda's species or sub-species. Everything he offers Baby Yoda in the way of care and protection is a synthetic substitute designed to get Baby Yoda where he needs to go.

IG-11 is programmed to perform his functions for Baby Yoda, and getting attached would interfere with that programming. That's why IG-11 offers the reassurance, "There is nothing to be sad about. I have never been alive." We should not assume IG-11 is adequate to replace members of Baby Yoda's species as parents; he is not a permanent substitute for them.

IG-11 also said, "I still have the security protocols from my manufacturer." Throughout season 1, IG-11 suggests he is programmed to self-detonate to protect Baby Yoda when the odds of survival are too grim, and he reiterates that he cannot be captured.
     
    Last edited: Feb 19, 2020
  23. ColeFardreamer

    ColeFardreamer Force Ghost star 5

    Registered:
    Nov 24, 2013
So once we are done with this topic, what will be up next? Humans vs. Aliens? How human are aliens? How alien are humans?

Or Humans vs. Humans? Analysing intra-species diversity and self-destructive tendencies in some, but not all, species?

Or Individual Humans vs. Groups of Humans: a deep dive into the egoistic and altruistic natures of ourselves, using the SW gffa as example and setting for analysis.

Or... humans vs. god(s)? Humans made in god's image, trying to please her and become like her, yet never acknowledged by her for being just fleshy copies of something divine and luminous?

Or... Droids vs. Humans... crafted in their Makers' image, attempting to please them and become like them, yet failing to receive proper recognition for being just parts crafted from nature put together... wait... deja vu!
     
    Iron_lord likes this.