
IBM Produces Computer Chip That Acts Like A Brain August 19, 2011

Posted by Metabiological in Synthetic Intelligence.

One of the more interesting aspects of the human brain is its ability to rewire itself. Indeed, the ability of our neurons to form new connections with each other is the basis of both our memory and our ability to learn new skills. It's also something that separates us from current machine intelligence. Well, IBM may have taken a step closer to bridging that gap with the development of a microprocessor that behaves like a human brain.

In humans and animals, synaptic connections between brain cells physically connect themselves depending on our experience of the world. The process of learning is essentially the forming and strengthening of connections.

A machine cannot solder and de-solder its electrical tracks. However, it can simulate such a system by “turning up the volume” on important input signals, and paying less attention to others…

Instead of stronger and weaker links, such a system would simply remember how much “attention” to pay to each signal and alter that depending on new experiences.
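To make that concrete, here is a minimal sketch of the idea, my own toy illustration and nothing to do with IBM's actual chip: a unit that keeps an adjustable weight for each input signal and nudges those weights with experience instead of physically rewiring anything.

```python
import random

class ToyNeuron:
    """A toy unit that 'pays attention' to inputs via adjustable weights."""

    def __init__(self, n_inputs, learning_rate=0.1):
        self.weights = [0.5] * n_inputs   # start by weighting every input equally
        self.learning_rate = learning_rate

    def respond(self, inputs):
        # Weighted sum: inputs with larger weights get more "attention".
        return sum(w * x for w, x in zip(self.weights, inputs))

    def learn(self, inputs, error):
        # Strengthen or weaken each "connection" based on experience,
        # rather than soldering or de-soldering anything.
        for i, x in enumerate(inputs):
            self.weights[i] += self.learning_rate * error * x

# Example: the unit gradually learns that only the first signal matters.
neuron = ToyNeuron(n_inputs=2)
for _ in range(50):
    x = [random.random(), random.random()]
    target = x[0]                      # the "important" signal
    error = target - neuron.respond(x)
    neuron.learn(x, error)
print(neuron.weights)                  # the first weight ends up much larger
```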

Interesting, certainly, but let's keep things in perspective. This does not signal the development of a "machine brain". While coming closer to the hardware is a nice step, the real challenge is going to be figuring out the software (i.e. consciousness), which still presents a host of both engineering and philosophical problems.

The other issue, of course, is that not everyone agrees that building a machine in the likeness of the human brain is the best way to achieve synthetic intelligence. More than a few actually think that reverse engineering the brain is precisely the wrong way to go about it, being difficult, time-consuming and, in their view, unnecessary. I admit to being only an interested observer in the realm of machine intelligence and therefore have to rely on what the experts in the field tell me. That being said, it strikes me as intuitive that an intelligence built on the same foundations as our own, with thoughts and information exchanged in a similar, though obviously not identical, manner, might be easier for us to relate to and understand (and perhaps more importantly predict) than an intelligence whose very structure is totally alien to our own.

That's merely a personal opinion, but the question may be an important one. A similar situation can be found in the realm of animal intelligence, specifically the intelligence of creatures like cephalopods, whose evolution and brain structure are vastly different from our own. Whether or not such things matter in how a being perceives and acts is an open question, one I'm sure will be answered sooner than we think.

Scientists Discover How Brain Recognizes Faces June 1, 2011

Posted by Metabiological in Synthetic Intelligence.

Interesting news coming out of PNAS. A group of researchers has pinpointed the areas of the brain responsible for that most human of talents: recognizing another person's face.

“Faces are among the most compelling visual stimulation that we encounter, and recognizing faces taxes our visual perception system to the hilt. Carnegie Mellon has a longstanding history for embracing a full-system account of the brain. We have the computational tools and technology to push further into looking past one single brain region. And, that is what we did here to discover that there are multiple cortical areas working together to recognize faces.”

While this is certainly cool in and of itself, and will have great implications for our understanding of the brain and of conditions like prosopagnosia (the inability to recognize faces), what excites me most is the implications it could have for the field of synthetic intelligence.

One of the paradoxes of SI research has been that tasks we perform quite easily, such as recognizing a face or folding the laundry, have consistently given machines difficulty. The reason for this, as far as I can tell (not being an SI person), has to do with the underlying structure of the human brain. The brain is more or less a giant pattern-recognizing device, designed through evolution to let us tease apart the facts that allowed us to survive in a prehistoric world: whether a certain colored fruit is okay to eat, whether there is a dangerous predator hiding in the grass or, you guessed it, whether the person standing in front of us is someone we already know. Machines, on the other hand, for all their great speed, are still little more than calculators performing single calculations one after another. They lack the massive parallel processing abilities of the brain because they're not built like it.
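For what it's worth, here is roughly what "pattern recognition" looks like when a machine does it one comparison at a time: a toy nearest-template matcher of my own invention, far cruder than anything in the actual study.

```python
# Toy "recognizer": compare a new pattern against stored templates and
# return the label of whichever template it is closest to. Real face
# recognition is vastly harder, but this is the serial, one-step-at-a-time
# flavour of computation contrasted with the brain above.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(pattern, templates):
    # templates: dict mapping a label ("Alice", "Bob") to a stored pattern
    return min(templates, key=lambda label: distance(pattern, templates[label]))

templates = {
    "Alice": [0.9, 0.1, 0.8],
    "Bob":   [0.2, 0.7, 0.3],
}
print(recognize([0.85, 0.15, 0.75], templates))  # prints "Alice"
```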

There's a fair amount of debate within the SI community as to whether achieving machine sentience requires the construction of an artificial brain first. Not being a computer scientist, I'm not entirely sure where I stand on that debate, though I can say that plenty of computer scientists do believe it. Either way, sooner or later we will create a machine in the model of the human brain, and research like this may help us get there sooner.

I For One Agree With Ken Jennings February 18, 2011

Posted by Metabiological in Synthetic Intelligence.

Have to give the guy credit for having a sense of humor.


P.S. There's really nothing else to say on Watson. It's an interesting PR piece, little more.

Is The Singularity No Threat? January 22, 2011

Posted by Metabiological in Synthetic Intelligence, Transhumanism.

Over at IEET Kyle Munkittrick has just explained why we have nothing to fear from the rise of machine intelligence. His post is a response to a short post by Michael Anissimov in which he reiterates a position he's held for a while: that the Singularity is the greatest threat humanity is likely to face in the coming century.

Now I'm not entirely sure where I stand on the issue of the Singularity's threat. I certainly recognize that the development of new technologies always brings risks with it (see coal power), and there is little doubt that the emergence of greater-than-human intelligence will constitute a major one. That being said, I have a hard time not laughing at some of the more apocalyptic visions I've seen, and I feel a strong urge to punch anyone who brings up Skynet as anything other than a joke. But Kyle's naiveté, I can think of no other word for it, about the potential threat is nothing short of jaw-dropping.

The major problem with his argument unfortunately happens to be its central thesis: that even if a synthetic intelligence were to arise, it would be unable to interact with the physical world and therefore poses no threat. Even assuming that this scenario is likely (personally I think otherwise), his suggestion that an intelligence confined to a computer wouldn't be able to affect that world is downright ludicrous. He even mentions as an example an SI causing havoc on our communication networks, and then brushes it off as if it were nothing. One would have hoped that a person who writes blogs on the internet for a living would have a little more respect for the way communications technology has become the bedrock of our society and economy.

In fact, let's do a little thought experiment. Take an industry, agriculture for example, that is essential to the continued prosperity of humanity and heavily reliant on computer technology. Nowadays most food is produced far away from its point of consumption. Whether this is good or bad is a subject for another time; the fact is most of us do not subsist on food grown in our local region. Maintaining the elaborate system that ensures your food gets from a farm thousands of miles away, across continents and oceans, takes a large and powerful infrastructure that today is heavily reliant on telecommunications technology. Now imagine that something, say an SI, were able to disrupt that system. How long do you think it would take for cities to turn into battlegrounds? It wouldn't even need to be a large disruption. Most supermarkets don't carry stock for long periods of time, and an event like the one I'm describing could send people into a buying panic.

Want a more relevant example of a computer wreaking havoc in the real world? How about the flash crash in the stock market last year? Wall Street algorithms caused a seven hundred point drop in the Dow Jones in a matter of minutes. They weren't malicious (just doing what they were programmed to do) and they sure as hell couldn't interact with the physical world, yet they still managed to send people into a panic, if only for a few moments. As we give more and more control to machines, who can really say the next crash won't be worse?
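To see how a shared rule can feed on itself, here is a crude toy simulation with entirely made-up numbers (nothing like a real trading system): every automated trader sells whenever the price has just dropped, which of course drops it further.

```python
# Toy feedback loop: many automated "traders" share the same rule, namely
# sell if the price has just fallen by more than a threshold. A single
# modest shock then reinforces itself step after step.

price = 100.0
threshold = 0.5        # percent drop that triggers selling
impact = 0.02          # percent the price falls per hundred sellers
traders = 100

history = [price]
price *= 0.99          # a small initial shock: minus one percent
history.append(price)

for step in range(20):
    last_drop = (history[-2] - history[-1]) / history[-2] * 100
    if last_drop > threshold:
        price *= (1 - impact * traders / 100)   # every algorithm reacts the same way
    history.append(price)

print([round(p, 2) for p in history])  # the decline keeps feeding itself
```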

Now I realize that what I'm saying sounds somewhat apocalyptic, and I don't think such scenarios are necessarily likely. As I said, I'm not sure exactly where I stand on the issue of the Singularity's threat, but I do acknowledge that there is a threat. To brush off the danger as Kyle and others are doing is akin to a person walking backwards towards a cliff and saying, "Everything looks good from here."

The Machines Run Wall Street January 13, 2011

Posted by Metabiological in Synthetic Intelligence.

There's an excellent interview up on NPR concerning the use of high-speed computers on Wall Street. You can listen to it right here.

A few things I take away:

Firstly, despite what the commentator alludes to, this isn't really SI. These algorithms don't display sentience and certainly don't mimic human thought processes. But then, they're not being asked to think like us; they're being asked to do things we can't. Their advantage over a human trader is in being able to sift through enormous data sets and determine which variables exert the greatest effect on the market. This isn't an example of synthetic intelligence, but it is a further example of the effects that automation will have on our society and economy.
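For a sense of what "sifting data to find the influential variables" means in its simplest possible form, here is a toy least-squares fit on made-up data, nothing like the scale or sophistication of what actually runs on a trading floor.

```python
import numpy as np

# Made-up data: 1,000 observations of three candidate variables and a "market move".
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
# In this toy world only the first variable strongly drives the outcome.
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.5, size=1000)

# Ordinary least squares: estimate how strongly each variable moves the outcome.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["var_a", "var_b", "var_c"], coef):
    print(f"{name}: {c:+.2f}")   # var_a comes out far larger than the others
```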

Many people have been under the false impression (or should I say illusion) that automation would only affect the industrial and manufacturing sectors by eliminating blue-collar jobs and anything that requires manual labor. This is flat-out wrong. While I could point out how we cling to our collective delusion that there is something special about human intelligence that can never be replicated, that's not really what's going on here. These algorithms aren't simulating human thought because human thought isn't required, only the ability to sift data and recognize patterns. Now stop for a moment and think of all the jobs that really can be boiled down to those activities. Quite a few, right? This is the next frontier of automation, and it's going to take us for a ride (for a really good read on the subject, look up The Lights in the Tunnel).

Secondly, I'm amazed that so much of our prosperity rests on a system that even insiders claim not to really understand. I can sympathize with those working the market to a degree, since I fully understand the difficulty involved in predicting the actions of complex systems (I am an ecologist, after all), but it scares me to think how little we know about something that exerts such a great influence on all of us. The commentator mentioned one company in particular, Volion, that doesn't even know what it's trading on. Putting aside the question of how that's even possible, I wonder… no, screw that, HOW IS THAT EVEN POSSIBLE!? How have we built the driver of our economic prosperity on quicksand?

Finally, the major problem I see here is the same problem that confronts any use of computers to make our decisions for us: how do we know they'll make the right ones? This isn't a question of computers going rogue and taking over the world a la Skynet. This is a simple question of priorities. How do we know that computers will have the same priorities as we do?

Imagine for a moment that we build a computer, the most powerful computer ever, blessed with the freedom and ability to solve any problem we put to it. We ask this computer to solve a seemingly impossible math problem. The computer, following our instructions, proceeds to convert all matter on earth, including us, into hardware to increase its computational ability and solve the problem. Sound crazy? It isn't. These algorithms do not think like we do and cannot be expected to behave as we do.
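The point of that deliberately cartoonish scenario is simply that an optimizer pursues the objective it is given, not the objective we meant. A trivial sketch of the mismatch, with made-up names and numbers:

```python
# A toy optimizer told only to "maximize compute", with no other constraints.
# It happily converts every available resource, because nothing in its
# objective says not to.

resources = {"factories": 10, "farmland": 50, "cities": 5}

def maximize_compute(resources):
    compute = 0
    for name in list(resources):
        compute += resources.pop(name)   # convert everything into "hardware"
    return compute

print(maximize_compute(resources))   # 65 units of compute...
print(resources)                     # ...and nothing left of anything else
```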

As we hand more and more power over to machines we can't predict, working within a system we don't understand, what do you think the odds are that something will go wrong?