Sunday 6 September 2009

Last Month in New Scientist (The Best of August 2009)

For all the articles in the magazine that I found interesting but didn't think worthy of an entire blog entry. I was aiming to mention about 4 articles per issue, with a paragraph each. Naturally, I've elaborated significantly in places (the Noel Sharkey interview became a full-blown piece on its own), and there were 5 magazine issues this month (as opposed to the more usual 4). I'd say it is, at the least, worth a scan through the article titles listed below, in case anything jumps out at you too.

I've not provided links to individual articles online, just the issue's index page, seeing as most people won't have full access to many of the articles anyway.


** 29 Aug:

+ "It's a small world - in a healthy brain" p14 -
The neuronal structures (in human cortices) display behaviour consistent with their connectivity being finely configured to a state of 'self-organised criticality'. Basically, that means cascades of activity are as unpredictable as earthquakes, and the neurons form a structure more like human society (where one is only ever 6 degrees of separation from anyone else on the planet), rather than merely relying on neighbouring cells to pass signals on to their neighbours, and so on. This comes as no surprise to anyone who's up to speed with the revolution in network theory from the beginning of this decade. And it doesn't elucidate much further how we think, let alone what makes us conscious/sentient.
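As a rough illustration of why those long-range connections matter (my own toy sketch, not from the article, with made-up parameters): adding even a handful of random shortcuts to a locally connected ring of 'neurons' collapses the average number of hops between any two of them, which is the small-world effect in a nutshell.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """A ring where each node links to its k nearest neighbours."""
    g = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            g[i].add((i + j) % n)
            g[(i + j) % n].add(i)
    return g

def add_shortcuts(g, p):
    """With probability p per node, add one random long-range edge."""
    n = len(g)
    for i in range(n):
        if random.random() < p:
            j = random.randrange(n)
            if j != i:
                g[i].add(j)
                g[j].add(i)
    return g

def avg_path_length(g):
    """Mean shortest-path length over all node pairs, via BFS."""
    total, pairs = 0, 0
    for src in g:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(1)
lattice = ring_lattice(500, 4)
print("neighbours only:", round(avg_path_length(lattice), 1))
print("plus shortcuts: ", round(avg_path_length(add_shortcuts(lattice, 0.1)), 1))
```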

The cited study looked at brains with Alzheimer's (whose long-distance connections were seen to be too random) and frontotemporal dementia (which had too few long-distance connections).

So I now wonder if something similar might be the cause of dyslexia (rather than macro-scale variations, like a particular brain region being smaller). For me, dyslexia has meant a very slow reading speed in particular (even more prominently than blindness to certain letter transpositions). What if my current condition also represents a structural deviation in my brain: a decline in cognitive ability, particularly in higher-order (conscious) thought, which presumably requires the most widespread cortical co-ordination, making it the most susceptible to such faults?

Of course, even in an optimally ordered brain, each cell's activation threshold has to be carefully tuned so that the network operates in a region of chaos, rather than epileptic fit or coma (too low a threshold and too high a threshold, respectively). Neuromodulation (by serotonin, melatonin, dopamine, adrenaline, etc.) and variations in oxygen and glucose concentration can also be responsible for moving the brain away from its optimal operating range.
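To make that tuning concrete, here's a toy branching-process model (my own sketch, not the study's; the numbers are arbitrary): each active neuron triggers, on average, sigma others. Below sigma = 1, activity fizzles out (coma-like); above it, activity runs away (seizure-like); right at 1, cascade sizes become as unpredictable as earthquakes, i.e. the self-organised criticality regime.

```python
import random

def cascade_size(sigma, cap=100_000):
    """Size of one activity cascade: each active unit has 2 potential
    successors, each firing with probability sigma/2 (mean offspring = sigma)."""
    active, size = 1, 1
    while active and size < cap:
        triggered = sum(random.random() < sigma / 2 for _ in range(2 * active))
        size += triggered
        active = triggered
    return size

random.seed(0)
for sigma, label in [(0.5, "subcritical: activity dies out (coma-like)"),
                     (1.0, "critical: self-organised criticality"),
                     (1.5, "supercritical: runaway activity (fit-like)")]:
    sizes = [cascade_size(sigma) for _ in range(1000)]
    print(f"sigma={sigma}: mean cascade size {sum(sizes) / len(sizes):.0f} ({label})")
```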

+ "Air travellers get a robot chauffeur" p19 -
I (and some of my friends) have thought for a while that, from a technical standpoint, we could have entirely 'self-driven' cars on the roads by now. The (good) reason why we actually have none is that

Saturday 5 September 2009

On: "The revolution will not be robotised" NewSci 29Aug09

p28 - Two-page interview with Noel Sharkey (perhaps most famous for being a judge on BBC's "Robot Wars"). Or "Why AI is a dangerous dream" (online).

- Basically, he's sitting on the fence, saying that he needs to see proof of artificial sentience before he'll concede it's possible. He does at least give fair mention to the opposite side of the debate (Hans Moravec and Ray Kurzweil).

"The mind could be a type of physical system that cannot be recreated by a computer"

His sentence should use "in" in place of "by": a computer can no more create, or be, a sentience than a calculator can understand mathematics. It's a category error; computers are merely calculators by design. Artificial sentience would come in the form of software running on a *very* powerful computer, or on a radically different electronic architecture.

But he doesn't give any examples of systems that are fundamentally impossible to emulate on computers, or why they would be so. Given the amazing accuracy of physics simulations (now able to simulate entire viruses from the quantum mechanical, subatomic level upward), I think the burden of proof should be on him. But proving that a real-world system can never be simulated accurately would appear to be equivalent to proving it could never be scientifically defined or understood. Quite possibly a paradox; thus everything is computable (eventually).

He admits that his own work is on problems immediately solvable with AI; these systems truly are tailor-made by clever humans to solve simple problems as reliably and predictably as possible. Hence they are brute-force, and as dumb as possible. To attribute any/all successes of a creation to its creator is clearly flawed, though: the inventor of an electronic calculator cannot claim to be able to immediately find the square root of any 12-digit number. Genetic algorithms, today, are consistently finding designs (for RF aerials, F1 car aerodynamics, traffic light systems) that their human creators would *never* have come up with (often utterly weird and counter-intuitive); a toy sketch of the technique follows below. I'm not saying, therefore, that a piece of code should be attributed the status previously granted to the human expert it has replaced. I'm saying that intelligence has never been what it's cracked up to be. I like his quote from Marvin Minsky:

"AI [is] the science of making machines do things that would require intelligence if done by humans."

Much of the progress in 'AI' to date has been through reducing our over-estimation of the mental abilities of humans, rather than attaining this mythical quality we call "sentience".
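For anyone who hasn't seen one, here's the toy genetic algorithm sketch promised above (mine, not from the article): a 'count the 1 bits' fitness function stands in for aerial gain or lap time; real applications differ mainly in the genome encoding and the fitness function.

```python
import random

GENOME_LEN = 40

def fitness(genome):
    """Toy objective: number of 1 bits (stand-in for e.g. aerial gain)."""
    return sum(genome)

def mutate(genome, rate=0.02):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

random.seed(2)
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    parents = population[:10]  # truncation selection: keep the fittest 10
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(40)]
best = max(population, key=fitness)
print(f"best fitness {fitness(best)}/{GENOME_LEN} after {generation} generations")
```

The evolved solution here is trivial, of course; the point is that nothing in the loop encodes *how* to solve the problem, only how to score attempts, which is why GAs can wander into designs their authors never would.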

- This is all fine if he's taking this standpoint purely as a means to an end: getting policy makers to treat the current generation of robots correctly. In my experience there is a lot of anthropomorphising of very dumb machines (even by roboticists/computer scientists, who should know better). This is merely evidence of humans' lack of intelligence, rather than of any limitation on future machine intelligence, as Sharkey would have it. People expect too much too soon, then too little further in the future (after short-term over-optimism falls flat), because people have little or no intuitive understanding of the exponential nature of technological progress.

The experts may just be using language loosely or hyping up their creations, but this could lead to a climate of misunderstanding in those outside the domain of expertise. A strong sceptical voice might well be beneficial here, to help avoid detriment to humanity. Warfare is the most pressing issue:

Throughout last millennium's armed interventions (e.g. the US's various wars: Vietnam, Iraq 1, etc.), the ethics of tactics has been a balancing act between minimising foreign civilian casualties and minimising troop casualties. Shelling from a distance, or dropping bombs from on high, keeps your boys safe, but is significantly less discriminating about whom it kills. Homeland public opinion is naturally biased towards its soldiers' safe return, but there are also limits to how many collateral deaths are acceptable.

Contemporary military automation seems to be forging ahead towards zero coffins being flown home, but on the flip side it is only aiming (at best) to be as competent as human soldiers currently are. It shouldn't be a question of whether a robotic sniper can distinguish civilians from militants as capably as a trained person can; we should be asking: is it really necessary to use deadly force at all, when doing so no longer keeps any of our military personnel safer? It's no longer 1 life vs. X civilian casualties; with robotic troops, human casualties now have a direct market valuation ($X). This kind of thinking doesn't seem to factor into the arms industry's calculations enough yet. It may take a massive humanitarian outcry (from a home population) before policy makers take note.

- Elder care won't be a significant issue until anthropomorphised robots become a reality (decades off, for many reasons). If the elderly miss out on genuine human contact as a result of carers' jobs being taken over by robots, it is more likely to be out of necessity: the 'baby boomers' will soon be reaching that age range, adding greatly to the growing burden of an ageing society.

- He has the “RoboCup” example the wrong way around: robotics is utterly failing to match the coordination and finesse of human movement; kicking a ball accurately is a long way off (the latest Asimo can barely run on a perfectly flat surface). Strength and endurance are also major deficits of would-be artificial players: the materials technologies just don't exist to match the combination of human muscles and biochemical energy reserves, honed over billions of years of evolution.

On the flip side, tactics are easier to define. Leaving aside the 'computer vision' systems necessary to operate a stand-alone robot soccer player, I don't see any reason that an AI system could not attain optimal tactics (player-positioning-wise) within the next 10 years, or less. This might well be achieved through a 'brute force' technique, i.e. evaluating the positions of players in every recorded football match available: more examples and detail than any human player or manager could ever hope to scrutinise. But why should this be assumed to be cheating? In a person it would just be considered the advantage of experience.
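A minimal sketch of what I mean (the data format and numbers are entirely hypothetical): score a proposed formation by finding the most similar formations in an archive of recorded match snapshots and seeing how those situations turned out.

```python
import math

# Hypothetical format: (positions of 10 outfield players, outcome score),
# where outcome is e.g. +1 if the team scored soon afterwards, -1 if conceded.
recorded = [
    ([(10, 30), (25, 40), (40, 35), (55, 20), (70, 45),
      (30, 10), (45, 60), (60, 30), (75, 15), (85, 40)], +1),
    ([(12, 28), (20, 50), (35, 30), (50, 25), (65, 40),
      (28, 15), (42, 55), (58, 35), (72, 20), (80, 45)], -1),
    # ... in reality: millions of snapshots mined from match footage
]

def formation_distance(a, b):
    """Sum of player-by-player distances between two formations."""
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))

def evaluate(formation, archive, k=1):
    """Average outcome of the k most similar recorded formations."""
    nearest = sorted(archive,
                     key=lambda rec: formation_distance(formation, rec[0]))[:k]
    return sum(outcome for _, outcome in nearest) / k

proposal = [(11, 29), (24, 42), (38, 33), (54, 22), (68, 44),
            (29, 12), (44, 58), (59, 31), (74, 17), (84, 41)]
print("predicted outcome:", evaluate(proposal, recorded))
```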