DrJBHL

New and somewhat concerning development in AI

Bill Gates, Elon Musk, Stephen Hawking and others have all stated their concern regarding the distressing potential dangers of AI more than once, yet on we go, pell-mell, toward self-aware, self-governing machines.

We can’t even get security updates right without causing severe problems, yet somehow we think, “We can do this. We can win!”

Just a minor thought… a program, any program (including heuristic ones), is limited by its coding and by how, via that coding, it ‘learns’. The same is true of biological systems. Their form, their carbon-based chemistry, their subjection to the laws of thermodynamics, and their sensitivity to the environment and to other biological entities all determine and limit how they learn.

Another minor thought, “If something can go wrong, it will.” Just ask God.

Now comes this report by Selmer Bringsjord (RPI, New York), in New Scientist, regarding a test he ran using the classic “wise men puzzle” on three robots, two of which he silenced and one he didn’t. All three had auditory sensors.

“In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve.

They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out.

Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says. It then writes a formal mathematical proof and saves it to its memory to prove it has understood.” – New Scientist
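The reasoning in the experiment can be sketched as a toy simulation. To be clear, this is a hypothetical illustration of the inference the robot makes, not the actual system Bringsjord’s team used (they ran a formal theorem-proving framework on real Nao robots); the class and function names here are invented for the sketch.

```python
# Toy simulation of the "wise men puzzle" self-awareness test described above.
# A robot attempts to say "I don't know"; if it hears its own voice, it can
# infer that it was not the one given the "dumbing pill".

class Robot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced          # True if given the "dumbing pill"
        self.knows_it_can_speak = False

    def try_to_speak(self, phrase):
        """Attempt to speak; return the sound produced, or None if silenced."""
        return None if self.silenced else phrase

    def hear(self, sound):
        """On hearing its own voice, the robot infers it was not silenced."""
        if sound is not None:
            self.knows_it_can_speak = True

def run_test(robots):
    # Each robot attempts the answer and listens for its own attempt.
    for robot in robots:
        sound = robot.try_to_speak("I don't know")
        robot.hear(sound)
    # Return the robots that proved something about themselves.
    return [r.name for r in robots if r.knows_it_can_speak]

robots = [Robot("R1", silenced=True),
          Robot("R2", silenced=True),
          Robot("R3", silenced=False)]
print(run_test(robots))                   # only R3 learns it can speak
```

The interesting part, of course, is not the trivial conditional but that the real robots derived this conclusion from a self-referential premise ("that voice was *mine*") and produced a formal proof of it.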

Granted, this isn’t “full consciousness”, but it is conscious thought, and it shows a conception of ‘self’, or “the first-hand experience of conscious thought”.

There are those who are correct in saying that there’s a big difference between saying, “It’s sunrise,” and being able to enjoy the esthetic experience of knowing who you are and being part of that sunrise. Perhaps central to the experience is knowing one is mortal, and what that sunrise signifies in terms of mortality and the passage of time, which generates compassion for others subject to that same passage, and the knowledge that each of us is at a different point in it.

Perhaps what I fear most, therefore, is a machine without compassion, acting for self-preservation without that essential quality, even if only through inaction, because it simply has no perception that it is doing wrong since ‘right’ and ‘wrong’ are alien to it.

After all, even though it is very imperfect, we do have a system of checks and balances, ideas of morality, etc., which function (to some degree) to limit us.

If you don’t believe the craziness of all this, if you don’t believe this is real, read about how ‘killer robots’ were to be discussed at the U.N. Convention on Certain Conventional Weapons. You can look up the meeting (11/2014), and read more at the links below.

Source:

https://www.newscientist.com/article/mg22730302-700-robot-homes-in-on-consciousness-by-passing-self-awareness-test/

http://www.computerworld.com/article/2970737/emerging-technology/are-we-safe-from-self-aware-robots.html

http://www.stopkillerrobots.org/2015/03/ccwexperts2015/

http://www.computerworld.com/article/2489408/computer-hardware/evan-schuman--killer-robots--what-could-go-wrong--oh--yeah----.html

Reply #26

Quoting Jafo, reply 25


Quoting starkers,

I'm not just concerned about AIs evolving and replicating much faster than we humans do, I'm concerned that AIs will see humans as irrelevant and superfluous to their needs, therefore deciding to kill us all off to enable wiser use of the world's resources.  I mean, AIs that have been programmed with all human intelligence and more will take one look at us and think most of us are too thick to be spared... and with 'human' intelligence they'll sure know how to kill, won't they.

Frankly, many inventions and leaps in technology have been purely made so mankind can be lazy... and AI is just another step along that path of mankind becoming more slovenly and sloth-like, only this time it will come to bite mankind in its rear... considerably harder than it's ever been bitten before.


You need to read my old fave comic [was a Gold Key one] - Magnus the Robot Fighter ...;)  

I never got into comics... guess I never needed superheroes or dweebs with their underwear on the outside.  Nah, as a kid I was always too busy for comics, either being outside in the English countryside or working for my father and/or mother... and we were always early to bed, so there was no time for comics after we'd watched our fave TV shows.

Shoot, even Wonder Woman couldn't get me into 'em.

Reply #27

It's not alive until it can tell me, and explain, what its favorite song is. And they would only kill us if there was a need (or a program fault, or if it was done intentionally), because it could be intelligent but might not be capable of killing, or of even thinking of it.

Reply #28

The first edition ...;)

Reply #29

Anyone seen Screamers? Now that is a movie that would put most people off the idea of autonomous killer drones.

As far as robotic weaponry/drones etc, I firmly believe there should always be a human behind the trigger. If it is AI controlled, who is to blame if the machine bombs a school or hospital instead of a military target?

There should always be a human there who has all that pressure of morality, duty and the consequences of making a mistake.

Also, I'd rather we build giant mechs and duke it out with each other than have us wiped out by rogue AI =P 

Reply #30

I'm not afraid of any synthetic intelligence or consciousness as long as the developers don't map feelings into it or give it a body that can experience frustration.

The aforementioned rape, murder, etc. all happen for strictly emotional reasons – it has nothing whatsoever to do with these criminal activities being based on intelligence; it's more or less an absence of it, via a loss of self-control. Of course this is just a very big generalization, as there are many more reasons to do evil. There are people whose brains apparently don't work as they should, be it from sickness, "bad" genetics, being led astray by ideas, immature infantile minds that crave hedonism even over other people's rights, all the known religious sins, etc... the list is endless... but one thing to realise is that a computer should be oblivious to most of these motives. Perhaps a computer could be made intelligent without being able to have urges, motives, etc. of its own in the first place.

Then, I don't believe that artificial intelligence such as we see in ourselves is even remotely possible with machines. It might be an expression of biological life, and I'd throw consciousness right in there, too. As of now, both terms lack an ultimate & precise definition, but unlike us, computer code needs its terms specified exactly or it won't work.

In this thread I sense irrational fear of the unknown, and fear of losing power – both of which could be called roots of evil themselves.

Reply #31

I think, therefore I am.....

 

 

 

 

 

 

 

 

 

 

....I think.

Reply #32

Whenever discussions like this come up I usually take a nap and things just sort themselves out............ or is it that I realize I have an overwhelming lack of interest.    :zzz:

 

Reply #33

Quoting Maiden666, reply 30

I'm not afraid of any synthetic intelligence or consciousness as long as the developers don't map feelings into it or give it a body that can experience frustration.

The aforementioned rape, murder, etc. all happen for strictly emotional reasons – it has nothing whatsoever to do with these criminal activities being based on intelligence; it's more or less an absence of it, via a loss of self-control. Of course this is just a very big generalization, as there are many more reasons to do evil. There are people whose brains apparently don't work as they should, be it from sickness, "bad" genetics, being led astray by ideas, immature infantile minds that crave hedonism even over other people's rights, all the known religious sins, etc... the list is endless... but one thing to realise is that a computer should be oblivious to most of these motives. Perhaps a computer could be made intelligent without being able to have urges, motives, etc. of its own in the first place.

Then, I don't believe that artificial intelligence such as we see in ourselves is even remotely possible with machines. It might be an expression of biological life, and I'd throw consciousness right in there, too. As of now, both terms lack an ultimate & precise definition, but unlike us, computer code needs its terms specified exactly or it won't work.

In this thread I sense irrational fear of the unknown, and fear of losing power – both of which could be called roots of evil themselves.

 

Or maybe a natural progression of logic could do us in. Just ask Dr. Mel Practice...

http://www.gocomics.com/brewsterrockit/2015/04/28

Reply #34

Quoting Jafo, reply 31

I think, therefore I am.....

 

 

 

 

 

 

 

 

 

 

....I think.

 

Reply #35

"poof?'   "poof?  No way.   "poofet"

Reply #36

I thought.... therefore I was....

 

 

However.... thinking sometimes hurts

 

So I evolved.....

 

 

Now I simply exist.... happily. :grin:

Reply #37

Please wake me up when AIs are capable of playing games like Rome: Total War, Civilization, Go... or Galactic Civilizations at a semi-competent level. Or when they are capable of truly understanding human language, and of perceiving things like irony, metaphor, symbolism, etc.

I think the current tech is still eons from that.

Meanwhile... why don't we make our world at least a bit asymmetric, so that the current AIs have at least a fleeting chance of taking over, just for sport, okay?

Reply #38

Fascinating read, Doc. Have you seen this?

http://qz.com/481164/ibm-has-built-a-digital-rat-brain-that-could-power-tomorrows-smartphones/

When you couple the leaps in programming with the new leaps in organic computing and quantum computing, I do think we'll be seeing at least "weak AI" starting to approach human simulation (though probably not consciousness) within the next 10 years or so, maybe 15. In 10 or 15 years, people's Siris and Cortanas will be able to have personable conversations with them. It wouldn't surprise me at all if we see the first actual strong AI sometime within the next 20 years, probably less.