Saturday, 5 September 2009

On: "The revolution will not be robotised" NewSci 29Aug09

p28 - Two-page interview with Noel Sharkey (perhaps most famous as a judge on BBC's "Robot Wars"); published online as "Why AI is a dangerous dream".

- Basically, he's sitting on the fence, saying that he needs to see proof of artificial sentience before he'll concede it's possible. He does at least give fair mention to the opposite side of the debate (Hans Moravec and Ray Kurzweil).

"The mind could be a type of physical system that cannot be recreated by a computer"

His sentence should use "in" in place of "by": a computer can no more create, or be, sentience than a calculator can understand mathematics. It's a category error; computers are merely calculators by design. Artificial sentience would come in the form of software running on a *very* powerful computer, or on a radically different electronic architecture.

But he doesn't give any examples of systems that are fundamentally impossible to emulate on computers, or why they would be so. Given the amazing accuracy of physics simulations (now able to simulate entire viruses from the quantum mechanical, subatomic level upward), I think the burden of proof should be on his side of the argument. But proving that a real-world system can never be simulated accurately would appear to be equivalent to proving it could never be scientifically defined or understood. Quite possibly a paradox; thus everything is computable (eventually).

He admits that his own work is on problems immediately solvable with AI; these systems truly are tailor-made by clever humans to solve simple problems as reliably and predictably as possible. Hence they are as brute-force and dumb as possible. To attribute any/all successes of a creation to its creator is clearly flawed though: the inventor of an electronic calculator cannot claim to be able to immediately find the square root of any 12-digit number. Genetic algorithms, today, are consistently finding designs (for RF aerials, F1 car aerodynamics, traffic light systems) that their human creators would *never* have come up with (often utterly weird and counter-intuitive); a toy sketch of the idea follows the Minsky quote below. I'm not saying, therefore, that a piece of code should be attributed the status previously granted to the human expert it has replaced. I'm saying that intelligence has never been what it's cracked up to be. I like his quote from Marvin Minsky:

"AI [is] the science of making machines do things that would require intelligence if done by humans."

Much of the progress in 'AI' to date has been through reducing our over-estimation of the mental abilities of humans, rather than attaining this mythical quality we call "sentience".
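
To make the genetic-algorithm point concrete, here's a minimal toy sketch in Python. Everything in it is hypothetical: the fitness function (rewarding alternating bits) just stands in for whatever a real aerial or traffic-light optimiser would score, and none of the names or numbers come from the article.

    import random

    GENOME_LEN = 32
    POP_SIZE = 100
    GENERATIONS = 200
    MUTATION_RATE = 0.02

    def fitness(genome):
        # Reward alternating bits: a pattern the designer never writes down
        # explicitly, standing in for the 'weird' designs evolution finds.
        return sum(1 for i in range(1, GENOME_LEN) if genome[i] != genome[i - 1])

    def mutate(genome):
        # Flip each bit with a small probability.
        return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

    def crossover(a, b):
        # Splice two parents at a random cut point.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        # Keep the fitter half as parents; breed children to refill the population.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    best = max(population, key=fitness)
    print(''.join(map(str, best)), fitness(best))

The point is that nothing in the code 'knows' the winning pattern; the designer only supplies a scoring rule, and the search does the inventing.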

- This is all fine if he's taking this standpoint purely as a means to an end: getting policy makers to treat the current generation of robots correctly. In my experience there is a lot of anthropomorphising of very dumb machines (even by roboticists/computer scientists, who should know better). But this is merely evidence of humans' lack of intelligence, rather than of any limitations on future machine intelligence, as Sharkey would have it. People expect too much too soon, then too little further in the future (after short-term over-optimism falls flat), because people have little or no intuitive understanding of the exponential nature of technological progress.

The experts may just be using language loosely or hyping up their creations, but this could create a climate of misunderstanding among those outside the domain of expertise. A strong sceptical voice might well be beneficial here, to help avoid detriment to humanity. Warfare is the most pressing issue:

Last millennium, in the ethics of tactics in armed interventions (e.g. the US's various wars: Vietnam, the first Iraq war, etc.), there was a balancing act between minimising foreign civilian casualties and minimising troop casualties. Shelling from a distance, or dropping bombs from on high, keeps your boys safe, but is significantly less discriminating as to whom it kills. Homeland public opinion is naturally biased towards its soldiers' safe return, but there are also limits to how many collateral deaths are acceptable.

Contemporary military automation seems to be forging ahead towards zero coffins being flown home, but on the flip side is only aiming (at best) to be as competent as human soldiers currently are. The question shouldn't be whether a robotic sniper can distinguish civilians from militants as capably as a trained person can; we should be asking: is it really necessary to use deadly force at all, when it no longer makes our military personnel any safer? It's no longer 1 life vs. X civilian casualties; with robotic troops, our side's losses have a direct market valuation ($X). This consideration doesn't seem to factor into the arms industry's thinking enough yet. It may take a massive humanitarian outcry (from a home population) before policy makers take note.

- Elder care won't be a significant issue until anthropomorphised robots become a reality (decades off, for many reasons). If the elderly miss out on genuine human contact as a result of carers' jobs being taken over by robots, this is more likely to be a result of necessity: the 'baby boomers' will soon be reaching that age range, adding greatly to the growing burden of an ageing society.

- He has the "RoboCup" example the wrong way around: robotics is utterly failing to match the coordination and finesse of human movement; kicking a ball accurately is a long way off (the latest Asimo can barely run on a perfectly flat surface). Strength and endurance are also major deficits of would-be artificial players: the materials technologies just don't exist to match the combination of human muscle and biochemical energy reserves, honed over billions of years of evolution.

On the flip side, tactics are easier to define. Leaving aside the 'computer vision' systems necessary to operate a stand-alone robot soccer player, I don't see any reason that an AI system could not learn near-optimal tactics (in terms of player positioning) within the next 10 years, or less. This might well be achieved through a 'brute force' technique, i.e. evaluating the positions of players in every recorded football match available: more examples and detail than any human player or manager could ever hope to scrutinise (a crude sketch of the idea follows below). But why should this be assumed to be cheating? In a person it would just be considered the advantage of experience.
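
As a crude illustration of that brute-force approach (a sketch only; the data format and every name here are hypothetical placeholders, not anything from the article), one could score a proposed team positioning by finding the most similar situations in a database of recorded matches and averaging how those turned out:

    import math

    # Each record pairs a whole-team configuration, as (x, y) coordinates per
    # player, with a crude outcome label: +1 if our side scored next, -1 if not.
    recorded_positions = [
        (((10.0, 30.0), (50.0, 34.0), (70.0, 20.0)), +1),
        (((12.0, 28.0), (48.0, 40.0), (65.0, 25.0)), +1),
        (((30.0, 10.0), (40.0, 60.0), (90.0, 30.0)), -1),
        # ...in practice, millions of snapshots from every available match
    ]

    def team_distance(a, b):
        # Straight-line distance between two whole-team configurations.
        return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                             for (ax, ay), (bx, by) in zip(a, b)))

    def score_positioning(candidate, k=2):
        # Average the outcomes of the k most similar recorded situations.
        nearest = sorted(recorded_positions,
                         key=lambda rec: team_distance(candidate, rec[0]))[:k]
        return sum(outcome for _, outcome in nearest) / k

    print(score_positioning(((11.0, 29.0), (49.0, 37.0), (68.0, 22.0))))

A real system would need far better similarity measures and outcome labels, but the principle is the same as a veteran manager's: pattern-match the current situation against a vast memory of past games.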
