Wednesday, 23 February 2011

Rough Rebuttal to a Kurzweil Critic

This is very much an 'off the top of my head' response to a blog post by my friend, found here: http://www.simonpstevens.com/News/FlawWithTheFuture

A) Nitpicking - I guess you are probably just simplifying matters for the sake of a concise introductory paragraph, but:

Vinge popularised the term 'singularity' in this context in 1983, but Stanislaw Ulam had talked privately with von Neumann about a singularity well before that (1958), and Turing spoke of machine thinking taking over (1951). [1]

More significantly, Vinge's expectations for the singularity are distinct from Kurzweil's in that he expects the more sci-fi friendly course of events, with an AI spontaneously bootstrapping itself towards superintelligence. He also estimates an early to mid 2030s timescale.

Kurzweil's vision, on the other hand, needs no Skynet (Terminator) type event. He sees little or no distinction between us and our machines. Our computers already form a kind of inseparable hybrid intelligence with our biological selves, and the level of integration will only increase in future (no clear machine/human divide). Kurzweil's brand of future fantastic is about as boringly down to earth as his speaking style.

Also, I believe Kurzweil expects human-level AI well before 2045 (the date you stated); he in fact estimates that by that date one could acquire (for $1000) computing power equivalent to all the brains of all the humans currently alive on Earth.


B) Given that the graph of his that you included goes back 10^11 years (most of the way to the beginning of the universe), you can't really get away with saying we could be at the *beginning* of the growth curve! ;op  Also, remember that curves are mathematical structures that approximate the real world, not the other way around. Reality is not pootling along trajectories inscribed by God or gods. The 'laws' of the universe emerge from its increasing complexity; they do not exist outside/before its existence.

What I mean to say is that there is no reason to assume complexity is likely to have some physical limit to its growth (or rate thereof), just because it approximates a curve in a similar way to many domain-specific phenomena in the physical sciences, etc. In each of those cases growth hits some out-of-context problem that breaks the simple approximation to a mathematical equation. The complexity of the universe itself is unbounded, for by definition there is nothing 'outside' it to limit it.
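To make the point concrete, here is a minimal Python sketch (all parameter values made up for illustration) of why the 'it must really be an S-curve' objection is easy to assert but hard to support: an exponential and a logistic (sigmoid) curve are practically indistinguishable early on, and the sigmoid only differs because of a ceiling you have to assume up front.

import math

# Toy illustration (made-up parameters): an exponential curve and a
# logistic ('S') curve are nearly indistinguishable early on; the
# difference only appears once growth nears the assumed ceiling.

def exponential(t, x0=1.0, r=0.5):
    # unbounded exponential growth
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, limit=1000.0):
    # same early behaviour, but capped at an assumed limit
    return limit / (1.0 + ((limit - x0) / x0) * math.exp(-r * t))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exp={exponential(t):10.1f}  logistic={logistic(t):8.1f}")

Unless you can say where that ceiling comes from, drawing the sigmoid instead of the exponential is itself an assumption, not an observation.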


C) Besides, Frank J. Tipler showed that eternal progress in finite real time would be possible in the case of an Omega Point singularity at the end of a closed universe (big crunch). On his account, life is bound to permeate the universe and cause this to happen. He also got obsessed with equating it to Christian heaven, but that's beside the point IMHO. [2]

I came to Kurzweil's singularity (in 2005) after being gripped first by Tipler's (back in 2002). I was initially uninterested in this 'new' kind of singularity; after all, how could it be more significant than the ultimate fate of the universe?! Time has reversed this: I no longer worry about whether experimentation will verify or disprove Tipler's predictions about the mass of the Higgs boson necessary for his Omega Point, or about the crippling problem of the speed-of-light limit for that matter (as I had done in the four years pre-Tipler).
The main feature of the technological singularity is that we are bound to find our current understanding of reality hopelessly myopic. Either laws of physics we now consider immutable have convenient get-out clauses in the fine print, or they only limit things which turn out to have no relevance to continued progress.


D) Initially I was quietly sceptical about how far Kurzweil's equations could be considered scientific theories. However, I grow increasingly convinced that their continued success at predicting (technological) progress is every bit as valid as the success of Newton's laws in predicting the motion of apples. In both cases they let one know almost exactly what's going to happen, provided you stay away from singularities...
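To give a flavour of what treating the trend as a 'law' means in practice, here is a hedged sketch: fit a straight line to the logarithm of a price-performance series and extrapolate it forward. The data points below are invented for the example, not taken from Kurzweil's charts.

import math

# Invented, purely illustrative data: 'operations per dollar' by year.
years = [1990, 1995, 2000, 2005, 2010]
ops_per_dollar = [1e2, 1e3, 1e4, 1e5, 1e6]

# Least-squares fit of a straight line to log10(ops/dollar),
# i.e. assume exponential growth and estimate its rate.
logs = [math.log10(v) for v in ops_per_dollar]
n = len(years)
mx, my = sum(years) / n, sum(logs) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, logs))
         / sum((x - mx) ** 2 for x in years))
intercept = my - slope * mx

def predict(year):
    # naive extrapolation of the assumed exponential trend
    return 10 ** (intercept + slope * year)

print(f"extrapolated ops/dollar in 2020: {predict(2020):.0f}")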

There is an important distinction to be made between Kurzweil's specific predictions and his overarching theories. Each prediction is an instance of him using his own intuition to extrapolate general trends to specific domains. Each one is prone to error and some *will* be wrong. He acknowledges this. These predictions are not like the hypotheses of a theoretical physicist, for a very fundamental reason: the complexity being created by humanity (including technology) is an emergent phenomenon. It is *actually* impossible to predict the *exact* details of what will happen until it has already happened; the process is deterministic yet, like the weather, chaotically unpredictable.
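As a toy illustration of 'deterministic yet unpredictable' (my own analogy, not Kurzweil's): iterate the logistic map in its chaotic regime and watch two starting points that differ by one part in a billion drift completely apart within a few dozen steps.

# The logistic map x -> r*x*(1-x) with r = 4.0 is fully deterministic,
# yet a difference of one part in a billion in the starting value
# grows until the two trajectories have nothing to do with each other.
r = 4.0
a, b = 0.400000000, 0.400000001

for step in range(1, 41):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.2e}")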

The 'predictions' are more like exacting illustrations of the way technological change is *probably* going to affect our lives. They should always be taken together and in this context. A few iffy illustrations do not disprove his central theory. His only hard predictions are the core, numerical rate(s) of change. Even these are not considered immutable though, because the main characteristic of the singularity is that you *cannot* predict what is happening on the other side of it.


E) Humans are inherently illogical, usually deciding based on personal emotional response, then telling themselves (and others) a cleverly constructed, post hoc story about why this point of view is valid. Or, if there are no excuses to cherry-pick, anger or upset tend to fill the chasm of inconsistency.

As with the knee-jerk response to Aubrey de Grey's call to end ageing, people generally respond negatively when claims might unseat fairly basic assumptions in an individual's mind. Revising these would take much mental effort, time and emotional readjustment (pain). Instead the brain does what it does best and resorts to a shortcut, throwing out the first reason/excuse it can find to disregard Kurzweil entirely and save all that trouble.

Experts in the domain of Kurzweil's predictions can be *even more* prone to reject his ideas out of hand, because even more of their personal axioms would need uncomfortable scrutinising. Also (warning - dodgy metaphor coming!) they are so intent on the bugs in the cracks of the bark that they can't see the forest fire approaching (i.e. forest for the trees).

Also, the cult of individualism does not help here: when each invention and discovery seems attributable to one special person, smooth trends in overall technological progress are highly unintuitive.

But progress is smooth, just as the number of road fatalities in a country is exceedingly close from one year to the next (when no legislation or such makes global changes), despite almost every individual accident being a unique and freakish culmination of circumstances, as Hans Rosling shows in one of his wonderful videos about statistics.

I find Susan Blackmore's memetic view of society to be a helpful mental tool here. It supplants the magical/mysterious creativity of individuals by looking at things from the point of view of the ideas themselves. Humans are only vehicles (or conduits) for these 'selfish replicators' (memes), which succeed and spread (or fail and dead-end), mutate (like Chinese whispers), then meet and combine to form ever more complex concepts and creations, just like one of Stephen Wolfram's cellular automata or Conway's Game of Life: complexity from simplicity.
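If you want to watch 'complexity from simplicity' happen, here is a minimal Python sketch of one of Wolfram's elementary cellular automata (Rule 30; the width and number of steps are arbitrary choices for the example).

# Rule 30: each cell's next state depends only on itself and its two
# neighbours, yet from a single live cell an intricate, seemingly
# random pattern unfolds.
RULE = 30
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1  # one live cell in the middle

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in row))
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4
                     + row[i] * 2
                     + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]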

From this thought framework there is an obvious link all the way back to the origin of life with its genes, which have produced increasingly complex phenotypes. There is even the parallel of accelerating progress: life reshaped the earth, forming new environments (ever more complex) upon which new layers of creatures base their existence. The rate of increase of complexity accelerated, not purely in terms of physically more complex bodies, but in terms of the amount of information encoded in each cell's genome, the number of different cell types per organism and, more crucially, the possible number of behaviours this gave each cell and its organism. (I bet that even 'junk DNA' is essential here: a repository of complexity, ready to splice in for some impossibly unlikely new behaviours in single generations.)

Looking at an earth 100 million years ago, teeming with increasingly diverse life, it would have been easy (for some ethereal being) to say: sure, the complexity of the genomes in those creature/vehicle things the genes use has been increasing along an exponential curve for ages, but it can't possibly increase exponentially *forever*. It's gonna be sigmoidal. The earth is only so big, the genomes each take up ever more of its material resources (start of negative feedback), and the whole oversized petri dish will be torched by the death of its power source in a few billion years anyway...

How could they have predicted that those gangly limbed tree dwellers would evolve to create a new environment of replicators in their heads?! Elusive entities that behave like genes but with a totally different physical basis (memes). Who'd have thought these 'memes' would be so successful as to take control over most genes (and the materials they control) and then go beyond the available, using previously inaccessible materials (mining, etc.) to further aid their propagation? Then they did the impossible and climbed right out of the petri dish! Even weirder, they've started making themselves increasingly *small* and efficient vehicles in which to reside.

But it's ok, because these will have ultimate limits too! Right?...

Links
[1] http://en.wikipedia.org/wiki/Technological_singularity
[2] http://en.wikipedia.org/wiki/Frank_J._Tipler#The_Omega_Point
