A.I., My Eye!

For some decades, there has been increasing speculation, propaganda, indeed fear, spread in our beloved media emporia concerning the exponentially rapid development of A.I., or Artificial Intelligence.

Artificial Intelligence, for the uninitiated, is the idea that computers, machines and robots can be programmed, or pre-programmed, to think for themselves.

You might assume that computers, in particular, do that already.  However, they don’t.  They work according to algorithms, programming and a form of intelligence that comes specifically and entirely from human programming.  In other words, no ‘thought processes’ are involved.

Giving machines full A.I. is not so simple as it might appear.  There are so many technological, social, psychological, and indeed moral implications, that one cannot simply tap-tap some code and let the computer get on with it.

The late philosopher Hubert Dreyfus (1929-2017) was something of a critic of A.I., or at least of the power that the human race has to create it.  Dreyfus rejected the notion, held by the scientific community, that a form of study called ‘Scientific Psychology’ could and should be created, which would detail and explain the workings of the human mind in much the same way that physics is the scientific explanation of goings-on in the exterior world – or indeed the entire Universe.  According to Wikipedia:

Dreyfus’s arguments against this position are taken from the phenomenological and hermeneutical tradition (especially the work of Martin Heidegger). Heidegger argued that, contrary to the cognitivist views (on which AI has been based), our being is in fact highly context-bound, which is why the two context-free assumptions are false. Dreyfus doesn’t deny that we can choose to see human (or any) activity as being ‘law-governed’, in the same way that we can choose to see reality as consisting of indivisible atomic facts … if we wish. But it is a huge leap from that to state that because we want to or can see things in this way that it is therefore an objective fact that they are the case. In fact, Dreyfus argues that they are not (necessarily) the case, and that, therefore, any research program that assumes they are will quickly run into profound theoretical and practical problems. Therefore, the current efforts of workers in the field are doomed to failure.

Dreyfus argues that to get a device or devices with human-like intelligence would require them to have a human-like being-in-the-world and to have bodies more or less like ours, and social acculturation (i.e. a society) more or less like ours. (This view is shared by psychologists in the embodied psychology (Lakoff and Johnson 1999) and distributed cognition traditions. His opinions are similar to those of robotics researchers such as Rodney Brooks as well as researchers in the field of artificial life.) [All links are Wikipedia’s.]

Dreyfus doesn’t deny that programming A.I. could take place; he merely argues that there are significant contextual and interpretative obstacles to our doing so successfully.

Dreyfus argues against A.I. using two important philosophical traditions: Phenomenology, which is the study of human experience and our understanding of it; and Hermeneutics, the methodology by which we humans interpret things, especially what we read.  The key applications of hermeneutics are in the study of texts such as biblical texts, philosophical ones and also other forms of ‘wisdom’ literature.

However, Dreyfus’ record in the field of A.I. study is not good; in two of his publications, Alchemy and A.I. (1965) and What Computers Can’t Do (1972), he poured cold water on the optimism within the scientific community about A.I.: firstly, that a computer would soon beat a human at chess; secondly, that one would find and interpret a new mathematical theorem on its own; and thirdly, that psychological theory (perhaps the most important area where A.I. is concerned) could be turned into computer programs.

Dreyfus failed to accept one of the key principles – if not the key principle – of the development of A.I.: that the human mind, and/or the human brain, could be compared to a computer program of sorts.

Unsurprisingly, then, when Alchemy and A.I. was published in 1965, it was met with ridicule, cynicism and – let’s be honest here – outright hostility from the scientific community.

And, of course, let’s not forget that most of Dreyfus’ key theories were published in the 1960s, when, although computer programming of course existed and was being developed all the time, it was still fairly primitive – both by today’s standards and by those of 2017, the year of Dreyfus’ death.  When he went to his grave, Dreyfus could still count on the support of some in the scientific community, but that number was dwindling.

In short, Dreyfus misjudged A.I. research.  He could not have foreseen that an entirely new basis would be found upon which to build the research, as began to occur in the 1980s.  In other words, research into the subject could take place with little or no reference to Dreyfus and his work.

This ‘sub-symbolic’ approach included the development of Computational Intelligence paradigms, such as neural nets – the mimicry of the brain’s neural patterns and networks.  Using this theory, scientists could look at developing programming that simulated, for itself, the human capacity for unconscious reasoning.  In this way, a device driven by this kind of programming could, potentially, develop a ‘personality’ or an ‘attitude’ that was entirely its own, and not the product of any form of human intervention.
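For the curious, here is a deliberately tiny sketch of the neural-net idea in Python (using NumPy).  It is purely illustrative – nothing here comes from Dreyfus or the researchers mentioned – but it shows the sub-symbolic principle in miniature: a two-layer network learns the XOR function from examples alone, with no rule ever programmed in.

```python
# A toy neural network -- the 'sub-symbolic' approach in miniature.
# It is never told the XOR rule; it learns it from four examples.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR.  No single straight line separates the outputs,
# so the network needs a hidden layer to capture the pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights (plus biases), initialised at random.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

losses = []
for epoch in range(5000):
    # Forward pass: the inputs flow through the 'neurons'.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: nudge every weight to reduce the error slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(f"loss before training: {losses[0]:.3f}")
print(f"loss after training:  {losses[-1]:.3f}")
```

The point is that nobody writes ‘if the inputs differ, output 1’ anywhere in the code; the behaviour emerges from the training data – which is exactly the sense in which such a system’s ‘knowledge’ is its own rather than its programmer’s.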

This approach also looked at another fundamental for A.I., that of ‘commonsense knowledge.’  This means, for example, that you know that fire is hot, so you don’t put your hand in it.  Experience has taught you not to buy any Travis albums; you know they are going to be shit.

Robotics engineers such as Moravec and Brooks realised in their research that such unconscious reasoning and common sense were obviously going to be the most difficult to get to grips with.  Duh!

Reverse Engineering is an important concept in A.I. research.  The study of existing machines and software is more or less the basis upon which any study, particularly in computing, begins; if you don’t know what’s out there and how it works, how can you develop it?

Fast-forward to today, and an exponentially-rising dystopian view of A.I. in a post-Dreyfusian world, largely propagated by the likes of Elon Musk.

Musk is an extremely important name in the field of contemporary scientific research, largely because he is fantastically rich and can pay for all the scientific endeavour he wants.  I also think it important to state that, when Musk says something, you listen.  He might be as mad as a box of David Ickes, I don’t know; but, even if he is, he knows whereof he speaks, based on the work he has already paid for and the work that his company, SpaceX, continues to do.  Musk fears that A.I. could, again potentially at least, discover ways in which to develop its intelligence far, far beyond our own understanding; in one video that I watched recently, Musk gravely intoned, ‘Mark my words…’ as he warned humanity of the impending crisis spawned from our desire to develop machines more intelligent than ourselves.

I believe entirely in the concept and possibility of A.I.; it has the same amount of cynicism thrown at it as Wikipedia, from which I have quoted above.  I believe in A.I., and I believe in Wikipedia, indeed the entire concept of wikis, firmly and resolutely.  I would be happy if the entire internet were based on wiki technology.  Wikipedia has an excellent article on artificial intelligence.  I figure that if an article such as this has almost four hundred references, a dozen textbook recommendations, and hundreds of resources, further-reading suggestions and external links, then its authors know what they are talking about.

Start your reading, if you wish, with this Wikipedia article:

https://en.wikipedia.org/wiki/Artificial_intelligence

…and take it from there.  A.I. is both fascinating and frightening.  What if a ‘species’ of A.I. declared war on humans, with no qualms, compunction or moral compass whatsoever?  We would all be ‘doomed,’ to quote Private Frazer out of Dad’s Army.  It’s the old God/Frankenstein complex, isn’t it?  We, human beings, have created a species that we deliberately programmed to be more intelligent than ourselves, and it turns upon us, marching towards us with that evil red glow in its eye like in The Terminator (1984), and destroying us without any qualms at all.  And that film, by the way, is now 35 years old as of this year.  They can’t say they didn’t warn us. x

2 thoughts on “A.I., My Eye!”

  1. This was doing so well up until it suggested starting one’s reading with a Wikipedia article. I would suggest a more robust and factual way to learn more would be to use Google Scholar and read a paper on the subject


    • Fair enough, Gareth, but I love Wikipedia. I think it a stunning project, and one that I turn to daily. I use Google Scholar too, but many do not seem to be open-minded where Wikipedia is concerned. It’s always, oh, anyone can edit it, over and over and over again… The article on A.I. has, as I stated, almost four hundred references, textbook suggestions, and other research tools including those self-same papers to which you refer, so I have no qualms about recommending it to others. Google Scholar is for intellectuals and academics, but I don’t address my blog solely to those, I address it to anyone who wishes to read it. x

