Saturday 3 October 2015

The Flaw in Thinking Artificial Intelligence Can Solve Our Problems

I recently knocked out a review of Frank Tipler's 'The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead' (1994) on GoodReads. One passing claim struck me as particularly interesting in the light of my blog postings casting doubt on the usefulness of speculative science - not that it is not worthwhile, but that it seems to be fuelling a cultural hysteria about scientific possibility that distracts us from what is achievable. I have a similar critique of the social sciences, and I covered my concerns about excessive claims in that area in another GoodReads review - of Lawrence Freedman's 'Strategy: A History' (2013).

Tipler's passage gave me yet another useful bullet for my gun of scepticism - scepticism not only about what we can know of the world but about what any machine created by us may know of it - although Tipler's own task is to postulate (amongst other things) omniscient total information at the Omega Point of history.

On page 297 of my edition, but also elsewhere, Tipler explores the amount of information required to be, do or understand certain things in the world. He points out that anything more complex than 10 to the power of 15 bits of information cannot be understood by any human being whatsoever - and this, he says, is the level of complexity of the human brain itself. Human society, he adds, amounts to 10 to the power of 15 bits times the number of humans in the world.
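A rough, illustrative sum (mine, not Tipler's working - it simply takes his 10^15 bits per brain and an assumed 2015 world population of roughly 7.3 billion) shows how far beyond any individual's capacity the social total sits:

```python
# Back-of-envelope scale check (illustrative only; population figure assumed)
bits_per_brain = 10**15          # Tipler's complexity bound for one human brain
world_population = 7.3e9         # approximate world population in 2015

society_bits = bits_per_brain * world_population
print(f"Human society: roughly {society_bits:.1e} bits")        # ~7.3e24 bits
print(f"That is ~{world_population:.1e} times one brain's bound")
```

On these numbers, society as a whole carries billions of times more information than the most complex thing any single human can, in principle, understand.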

We have to invent higher-level theories to attempt to explain such complexity, but these theories over-simplify and so may (I think, will) give incorrect answers. The problems of human society, in particular, are far too complex to be understood even with such theories to hand - theories which, in my view, are not scientifically valid but merely probabilistic guidelines.

Often human instinct, honed by millions of years of evolutionary development that screens out far more information than it uses, is going to be more effective in dealing with the world than theory, no matter how apparently well grounded in research (assuming the human being is 'intelligent', that is, evolved to maximise that evolutionary advantage). Tipler's omniscient Omega Point is, of course, classed as something completely different, but no one in their right mind would consider any probable AGI coming close to that level of omniscience within the foreseeable future. Tipler does not make this mistake.

Therefore, in my view, an AGI is quite likely to be more wrong than a human (precisely because its reasoning is highly rational) in those many situations where the evolution of the human brain has made it a very fine tool for dealing with environmental complexity. Since human society is far more complex than the natural environment or environments governed by classical physics (it is interesting that humans still have 'accidents' at this lower level of information, especially when distracted by human considerations), the human being is going to be at an advantage in competition with any creation that is still fundamentally embedded in a particular location and lacks the environmentally attuned systems of the human.

This is not to say that AGIs might not one day be more advanced in all respects than humans, but talk of the singularity has evaded and avoided this truth - that the brilliant AGI which will emerge in the wet dreams of scientists may be a reflection of their own rational personality type but is no more fitted to survival and development than a scientist dumped, with no funds and no friends, into a refugee camp short of food and water.

In other words, the survival of a species or creature is highly conditional on its environment. The social environment in which humans are embedded may be tough, but it also ensures that the human species will remain the dominant species for quite some time after the alleged singularity. Pure intellect may not only fail to comprehend the world sufficiently to be functional (once it moves out of the realm of the physical and into the social) but, because it theorises on the basis of logic and pure reason, is likely by its very nature to come up with incorrect theories.

Worse, those human policy-makers who put their trust in such AGIs in the way that they currently put their trust in social scientists may be guilty of compounding the sorts of policy mistakes that have driven us to the brink of international crisis, social collapse and economic failure in the last two or three decades. Take this as a warning!