Turing, AI and Rights

Discussion in 'Technology' started by DeusEx, Dec 16, 2004.

  1. DeusEx

    DeusEx Member

    I was chilling with the Machine Mother today, and I wondered... if and when machines pass the Turing test, will they be afforded rights? Should they be afforded rights?

    For that matter, is the Turing Test a valid benchmark for artificial intelligence to be measured against?

  2. pineappleupsidedown

    pineappleupsidedown Premium Member

    Can you explain what the Turing test is?

  3. DeusEx

    DeusEx Member

    The Turing Test is a method devised by Alan Turing in the 1950s to determine whether a machine is 'intelligent'. While wholly inadequate, it seems to be the only real measure of artificial intelligence one can really devise... unless anyone here has a better idea.

  4. tablet

    tablet Premium Member

    We'll know whether they're intelligent when they ask for rights. Of course, we must not program them to ask for it; they have to learn from the basics and then arrive at that question, or the desire to have rights, on their own.

  5. Bleys

    Bleys Phoenix Takes Flight Staff Member


    Turing test (from wikipedia) - it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test.

    Just my opinion, mind you, but the Turing test seems a little weak. Just because a machine can engage in "preprogrammed" dialogue or natural language doesn't make it intelligent.

    *thinks real hard*

    So how does one test for self-awareness without being fooled through creative programming?
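    The imitation game Bleys quotes can be sketched as a small driver loop. The judge and player interfaces below are invented for illustration, not any real API:

```python
import random

def imitation_game(judge, human, machine, rounds=3):
    """One session of Turing's imitation game. `judge` is an object with
    hypothetical ask()/guess() methods; `human` and `machine` are callables
    mapping a question string to a reply string."""
    # Hide the two players behind labels A and B, in random order.
    players = {"A": human, "B": machine}
    if random.random() < 0.5:
        players = {"A": machine, "B": human}
    transcript = []
    for _ in range(rounds):
        question = judge.ask()
        replies = {label: players[label](question) for label in ("A", "B")}
        transcript.append((question, replies))
    guess = judge.guess(transcript)  # judge names the machine: "A" or "B"
    machine_label = "A" if players["A"] is machine else "B"
    # The machine "passes" this session if the judge picked the wrong label.
    return guess != machine_label
```

    Passing once means little, of course; Turing's criterion is statistical, i.e. the judge failing to do reliably better than chance over many sessions.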

  6. tablet

    tablet Premium Member

    AI programmers are working hard to make code intelligent. I believe AI will be possible one day, when we have a very, very fast machine that can process trillions of instructions in a second. If we cannot do it even with the fastest machine, then we will do it the Robocop way.

    but the main question is at large:

    if and when machines pass the Turing test, will they be afforded rights? Should they be afforded rights?

    Such a hard question to answer at this moment.

  7. tablet

    tablet Premium Member

    Let us look at a smaller issue. Take code, for example.

    A programmer who comes up with an intelligent way of solving a certain problem can patent the code and protect it. In a sense, the code has rights because it's protected under law: protected from being stolen and misused.

    Suppose one day someone is smart enough to come up with code that is perfect for AI, and the code is so original that it gets protected and patented, like a lot of the encryption methods out there.

    When that code gets implemented, I think the robot gets rights too... I'm not sure, but... let's see.

  8. DeusEx

    DeusEx Member

    I would think that a machine's thought processes would be fundamentally foreign to a human's. I do not know whether the concept of rights would even occur to them.

  9. sardion2000

    sardion2000 Member

    I touched on this briefly on ATS a while back, and I pretty much worry about them becoming a slave race and eventually overthrowing us. We need to be proactive in this regard when the time comes. It could come sooner rather than later.

  10. DeusEx

    DeusEx Member

    I'm more concerned that concepts like ethics and morality might simply never occur to a SHODAN. Man, did she ever get her evil on. The problem is that a human might neglect to code something like that in, or worse yet, might take it out deliberately. Would that violate the machine's rights?

  11. bodebliss

    bodebliss The Zoc-La of Kromm-B Premium Member

    If you read Ray Kurzweil, he talks about making AIs with a hierarchy of desires.

    You make the really high ones "protect and serve humans"; then, as you go down the hierarchy, you get into the specifics and then into doing their job. This prevents an AI charged with managing traffic in New York from just killing everyone. Its "voila, no more traffic jams because no more people" idea would conflict with its "protect and serve" goal.
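    That hierarchy can be sketched as a simple priority list: a proposed action is checked against the desires from the top down, and any higher-priority desire can veto it before the "do your job" level is ever reached. All the names below are made up for illustration:

```python
# Desires in priority order; each entry pairs a name with a veto test.
DESIRES = [
    ("protect humans", lambda action: action.get("harms_humans", False)),
    ("serve humans",   lambda action: action.get("disobeys", False)),
    ("manage traffic", lambda action: False),  # the job itself never vetoes
]

def allowed(action):
    """Check an action against the hierarchy, highest priority first."""
    for name, vetoes in DESIRES:
        if vetoes(action):
            return False, name   # blocked by a higher-priority desire
    return True, None

# "Eliminate traffic by eliminating drivers" is vetoed at the top level:
print(allowed({"harms_humans": True}))   # (False, 'protect humans')
print(allowed({}))                       # (True, None)
```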

    I can well believe that in the future AIs will be treated as full citizens with full human rights. And while their thought processes would be alien to us, their behaviour would be guided by humans.

    The major problem I can see is if a group with sufficient power, money and lack of oversight (say, the American government-military complex) created an AI to help them fight wars and dominate rivals. Such a machine would have to be created without the safeguards that civilian AIs would have, and could wreak havoc.

    Instead of AIs becoming a slave race, I think it far more likely we will either merge with AIs or become their slave race.

    Of course, most of this is largely hypothetical, as sentient AI is still a good way off (although maybe within our lifetimes).

  13. ufia

    ufia New Member

    Considering the poor quality of the computer software we face today, I would be scared to be anywhere near an intelligent robot. Just imagine the robot trying to open a door and then suddenly deciding to crush your head just because the door is 2 inches wider than the one it was pre-programmed to open. As usual, the vendor would argue that it's a feature, not a bug. So complacent people would go along with it and think it's normal for a robot to just crush people in its way and eat babies.

  14. sab

    sab Premium Member

    yea, i wonder if a chimpanzee who knows sign language could pass the turing test.

  15. blue

    blue Premium Member

    Well, to go back a few months, a thread was going around about granting rights to animals. I figure if that ever comes to pass, then I see no reason people won't press to gain rights for machines. Such is our Orwellian world.

  16. JcMinJapan

    JcMinJapan Premium Member

    Well, I think having a computer that can think and develop self-reliance will not really be that difficult. Everything we think and know now has been PROGRAMMED into our brains by society, parents, friends, and experiences.

    If a programmer could give a computer pure FreeWill to do anything and to learn, then there you have it. You let that computer live in your house; you will of course teach it to speak, teach it to crawl, to walk, to stand and jump and run, etc. You will teach it to clean, what is right or wrong, etc., etc. This will all be learned and remembered by the computer. The computer would then take its experiences and have to decide whether to follow them or not. It would compare similar experiences and make choices. But then again, you may have robots that actually do not mind killing, some may hate it, some may listen... but basically they would be human, eh?

    FreeWill, memory, and the ability to compare are all it needs.... The rest, we just have to pray it listens.
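    That "memory plus comparison" loop can be sketched in a few lines: the agent stores taught experiences and, faced with a new situation, acts like the most similar remembered one. The situations, features, and actions here are invented for illustration:

```python
def overlap(a, b):
    """Similarity between two situations, as the count of shared features."""
    return len(set(a) & set(b))

class Agent:
    def __init__(self):
        self.memory = []   # list of (situation_features, action) pairs

    def teach(self, situation, action):
        """Remember an experience: a set of features and what to do."""
        self.memory.append((situation, action))

    def decide(self, situation):
        """Act like the most similar remembered experience."""
        if not self.memory:
            return None    # nothing learned yet
        best = max(self.memory, key=lambda exp: overlap(exp[0], situation))
        return best[1]

robot = Agent()
robot.teach({"door", "closed"}, "open door")
robot.teach({"floor", "dirty"}, "clean")
print(robot.decide({"door", "closed", "wide"}))   # open door
```

    Whether picking the closest remembered experience counts as a "choice" is, of course, exactly the question this thread is arguing about.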
  17. sab

    sab Premium Member

    i just want a computer to be smart enough to look at an image and identify the object in it.
    an example of this would be as follows:

    i have this picture and i don't know what kind of flower it is
    so i scan it into my computer and the computer responds:

    Morning Glory

    Family: Convolvulaceae
    Genus: Ipomoea
    Species: violacea

    Morning Glory; Heavenly Blue Morning Glory; Tlilitzin; Badoh Negro (seeds)


    Ipomoea violacea is a common ornamental vine with heart-shaped leaves and bright white, pink, or purple flowers and small, black seeds that contain LSA. Because of its fast growth and prodigious seed production, many jurisdictions consider it an invasive weed plant. It has a long history of use in Central to Southern Mexico.

    but how about if i have this picture?

    so i scan it into my computer and the computer says:

    Morning Glory Pool

    Morning Glory Pool in Yellowstone National Park was named in the 1880s for its remarkable likeness to its namesake flower. However, this beautiful pool has fallen victim to vandalism. People have thrown literally tons of coins, trash, rocks, and logs into the pool. Much of the debris subsequently became embedded in the sides and vent of the spring, affecting water circulation and accelerating the loss of thermal energy. Through the years Morning Glory's appearance has changed as its temperature dropped. Orange and yellow bacteria that formerly colored only the periphery of the spring now spread toward its center.

    that is what i want my computer to do.
    if i have a picture of something i can't identify, i want my computer to know what it is.

    would that be intelligence?
  18. ufia

    ufia New Member

    That would be amazing, I can't wait for that!

    Target tracking and object recognition technologies are in use today for military purposes. They use a video camera to recognise a pre-defined 3D object in the field, and can then track it wherever it moves. This technology is slowly finding its way into civilian robotics, say, to detect defective parts on a manufacturing line.

    I wouldn't call that a smart computer, in the sense that it can't learn about objects on its own. You still have to build a huge database of every object you plan to search for in the future, every type of flower, etc.

    Apparently Google Image Search in its early days included a "find similar image" feature; I've read about it, but never seen it personally. I think it used some sort of correlogram or something to find objects with the same shape or the same set of colors. After a while it started finding too many irrelevant results, so they removed that feature.

    The website has this feature: you upload an image, and it tries to find similar aircraft.
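    A color-based "find similar image" lookup like the one described can be sketched with quantized color histograms, a simpler cousin of the correlogram. Everything below is illustrative, not Google's actual method:

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize each (r, g, b) pixel into bins^3 buckets and count them."""
    step = 256 // bins
    return Counter((r // step, g // step, b // step) for r, g, b in pixels)

def similarity(h1, h2):
    """Histogram intersection: fraction of h1's color mass shared with h2."""
    total = sum(h1.values()) or 1
    shared = sum(min(count, h2.get(bucket, 0)) for bucket, count in h1.items())
    return shared / total

def find_similar(query_pixels, database):
    """Return the label of the most similar image in a {label: pixels} dict."""
    q = color_histogram(query_pixels)
    return max(database,
               key=lambda label: similarity(q, color_histogram(database[label])))
```

    This captures "same set of colors" but not shape, which is exactly why such features drowned in irrelevant results.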
  19. sab

    sab Premium Member

    yea, we are getting there
    some time soon (in our lifetimes) this will happen.
    But what if i take a picture of road kill? i know it usually takes my brain a couple of seconds to identify that that smudge was a squirrel or a cat or a possum or dog or rat or whatever,
    then i scan it into the computer, and the computer should be able to deduce that the smudge was in fact a black bird of the crow family.
    and that should happen faster than it takes my brain to figure out what it isn't.
  20. bodebliss

    bodebliss The Zoc-La of Kromm-B Premium Member

    I think in time, after the singularity (the event horizon that an AI would represent), we will all decide to make it our leader and let it guide us into the future.