Monday 18 June 2012

Ireland - Euro 2012 - Friendly against Bosnia and Herzegovina

In an hour's time, Ireland will play Italy - our last act in the Euro 2012 competition.

So far, the event has not been a happy one for the Irish. Certainly not in keeping with our excitement at being back in a major competition after an absence of 10 years.

The only thing we have taken any pleasure or pride in so far is the quality of the singing by the Irish supporters at the last game (against Spain).

And some Irish people were even critical of that - saying that we shouldn't be singing after being beaten 4-0.

So in this post I will look back on happier times, before the competition started!

My son Andrew and I were lucky enough to be given tickets to see Ireland play Bosnia Herzegovina in the Aviva stadium on 26th May. It was Andrew's first time seeing the Irish national soccer team in action. It was a very enjoyable evening: we played well (especially in the second half) and we won 1-0. The goal was set up by Aiden McGeady (in my opinion, Ireland's most exciting player at the moment) and scored by Shane Long.

We even had great seats. Here is a shot of our view during the opening proceedings:

Monty Hall Problem Revisited

I mentioned in a previous post that I don't fully "get" the solution to the Monty Hall problem.

A friend of mine, John, didn't accept that I couldn't understand this and sent me an email about it.

On thinking about it again I think I do get it now.

My problem was this: intuitively, once Monty shows you one of the wrong doors, you imagine that you are faced with two doors (a "stick or switch" choice), so your chances are 50:50. So why should you switch?!

But intuition is doing you a disservice here.

Your chances of picking the right door from the original 3 were 1 in 3. You will only have the right door one time in 3 on average. Your chances if you stick are not 50:50 - they are still 1 in 3!

The chances that one of the other doors is the right one are therefore 2 in 3. And since Monty has shown you a wrong door, your chances if you switch to the remaining door are 2 in 3.

So the mistake I was making was the intuition that Monty showing you a wrong door changes the odds of being right (whether you stick or switch) to 50:50.

If Monty gave you the opportunity to change your selection without opening another door, there would be no reason to do so: the odds would still be 1 in 3.

But since he knows the right door, and is therefore able to open a wrong door, he has changed the dynamic. He has compressed the 2 in 3 odds into a single door!

So your chances of moving OFF the right door (which you had all along) are 1 in 3.

But your chances of moving TO the right door are 2 in 3.

So you should move!
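
If you still don't believe it, a quick simulation settles the argument. Here is a little Python sketch of my own (nothing to do with John's email) that plays the game a hundred thousand times with each strategy:

  import random

  # Play the Monty Hall game many times, sticking or switching, and count wins.
  def play(switch, trials=100000):
      wins = 0
      for _ in range(trials):
          doors = [0, 1, 2]
          prize = random.choice(doors)
          pick = random.choice(doors)
          # Monty opens a door that hides nothing and is not the one you picked.
          monty = random.choice([d for d in doors if d != prize and d != pick])
          if switch:
              pick = next(d for d in doors if d != pick and d != monty)
          wins += (pick == prize)
      return wins / trials

  print("Stick: ", play(switch=False))   # comes out around 0.33
  print("Switch:", play(switch=True))    # comes out around 0.67

Sticking wins about a third of the time and switching about two thirds, exactly as the argument above says.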

Thanks John: I think I get it now!

Thursday 7 June 2012

Adventure Game - Practical Computing August 1980

I mentioned previously that I used to love reading computer magazines back in the 70s and 80s.

My all-time favourite article appeared in Practical Computing in August 1980. It was written by Ken Reed and was entitled:
Adventure II - an epic game for non-disc systems

There were a number of things I liked about the article:
  • It explained how text adventure games worked (objects, locations, verbs, timers, etc.)
  • It was full of intriguing example commands like "RUB LAMP" and "UNLOCK GRATE"
  • It presented the program code in a "pseudo-code" based on Assembler
  • It explained how the required data would be stored, separate from the code
I liked the magazine cover too, with its winged horse, witch, castle, wolf and intrepid adventurer. You can see it below.

One of the things I disliked about the article was that it did not include the data that would be required for a proper game. Instead, it provided an "engine" that could power a number of games, given the right data.
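
To give a flavour of what "an engine powered by data" means, here is a toy sketch of my own in Python (not Ken Reed's pseudo-code): the locations, objects and vocabulary live in data tables, and the code merely interprets commands against them.

  # A minimal data-driven adventure engine: the game content lives in data
  # tables; the code only interprets verbs and moves the player around.
  locations = {
      "cave":   {"description": "You are in a gloomy cave.", "exits": {"north": "forest"}},
      "forest": {"description": "You are in a dark forest.", "exits": {"south": "cave"}},
  }
  objects = {"lamp": "cave", "grate": "forest"}   # object -> where it currently is
  location = "cave"

  def do(command):
      global location
      words = command.lower().split()
      verb, noun = words[0], (words[1] if len(words) > 1 else "")
      if verb == "go" and noun in locations[location]["exits"]:
          location = locations[location]["exits"][noun]
          print(locations[location]["description"])
      elif verb == "take" and objects.get(noun) == location:
          objects[noun] = "carried"
          print("Taken.")
      elif verb == "rub" and noun == "lamp" and objects.get("lamp") == "carried":
          print("Something magical happens...")
      else:
          print("I don't understand.")

  do("take lamp")
  do("go north")
  do("rub lamp")

Swap in different tables and the same code powers a completely different game, which is exactly the point of the article.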

You might expect that subsequent articles would produce the data that would be needed for real games. But I never saw any. Of course the flaw in this thinking is that many of the surprises of the game would be ruined if you were required to enter strings such as "Someone has leapt out of the shadows and bitten my neck!!!!"

You will find links to the text of the article here. Enjoy!


Tuesday 5 June 2012

Artificial Intelligence - The last 30 years

I can't help noticing that my recent posts on AI have focussed on programs that were created a long time ago:
  • Eliza - 1966
  • Animal - 1973
  • Noughts and Crosses - 1979

This makes me wonder what has been happening in the last 30 years!

Perhaps I am being unfair, but I don't think that the progress has been good enough.

During that time, there have been huge improvements in processing power, storage, and the availability of data. But I don't think there have been commensurate improvements in AI.

The main AI languages are Lisp and Prolog. These languages (although they have been enhanced over the years) were created in 1958 and 1972 respectively. And although they are still used in research and academia, they never made the crossover into business.

There have been successes, of course, and some of these have passed into the mainstream. One example is OCR which would have fallen under the AI umbrella during its development.

There are also systems that would seem to owe something to AI.

The Ask Jeeves website, for example, seems to be able to "understand" questions that are put to it. Here is an example:


It seems likely that Google Maps uses some kind of AI, particularly in relation to directions.

As I look at my Smartphone, I see some apps that may owe something to AI. Examples are:
  • Shazam - which can recognise songs
  • Google Goggles - which can recognise books, logos, and works of art
  • Voice Command - which can execute simple spoken instructions

Arguably the most obvious milestones in AI over the last 30 years have been:
  • IBM's Deep Blue beating Garry Kasparov at chess in 1997
  • IBM's Watson beating the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, at the quiz show in 2011. A documentary on the latter is available below.

Monday 4 June 2012

Artificial Intelligence - Tic-Tac-Toe

I have discussed two AI programs in recent posts. Eliza seems to "respond" intelligently to conversation. Animals can "guess" an animal by asking a series of questions and can "learn" about new animals.

Aside: It is difficult to discuss AI without personifying either the computer program or the computer itself. I have done this, and will continue to do so, in this posting!

The program I am discussing in this posting can also "learn". But it does not need the user to "teach" it. It learns by doing.

The program appeared in the September 1979 issue of "Practical Computing". The issue has a particular focus on "The Intelligent Computer" as you can see from the cover below.

The article in question was "Noughts and Crosses" (Tic-Tac-Toe) and was written by Trevor L. Lusty.

It featured a program written in BASIC which could "learn" how to play the simple game.

The clever thing about the game was that the code told it how to recognise a win, lose, or draw but not how to attack or defend. It learned this by playing.

Here is an example of a game which I could win in 5 turns (the human player was always given X, although each player took turns to start):

As you can see, the program had no idea how to defend against my attack. In fact, it just played in the first available square each time. It would immediately spot that it had lost, however, and would display the following message:
"I concede --- You win --- I'll try harder next time"

What it would do behind the scenes was to recognise that the move it made in turn 4 was a mistake and to record the fact that it should not make that mistake again.

So starting a new game and repeating the same moves would result in it making the following move in turn 4:
Obviously this move is no more effective than its previous attempt, so turn 5 sees me winning again.

But again the program records its mistake.

Repeating the game over and over sees the program modifying its strategy until it finally blocks the attack and the game moves on:


In this way the program would get better and better, and harder and harder to beat!

But the really clever part of the program was how it would learn not to make moves that would lead it into a no-win situation. My next move shows such a situation (with my next winning moves highlighted):
The program would try blocking those two moves (and then try all other blank spaces) before "realising" that this was a no-win situation. It would then deprecate its previous move (the block it made in turn 4). At this point it would recognise the situation illustrated in turn 3 as a no-win situation. This means that the next time this arose, it would flag the move it took in turn 2 as being a mistake and it would start to try alternative responses to my move in turn 1.
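
In modern terms, I picture the bookkeeping something like this: a rough Python sketch of my own (not the BASIC from the article), in which the program blacklists moves that led to a loss and, once every reply from a position is blacklisted, pushes the blame one move further back.

  bad_moves = {}   # board position (as a tuple) -> squares never to play there again

  def choose_move(board):
      banned = bad_moves.get(tuple(board), set())
      for square in range(9):              # "just plays in the first available square"
          if board[square] == " " and square not in banned:
              return square
      return None                          # every reply from here is known to lose

  def learn_from_loss(my_moves):
      # my_moves: the (position, square) pairs the program played, in order
      for position, square in reversed(my_moves):
          bad_moves.setdefault(tuple(position), set()).add(square)
          if choose_move(position) is not None:
              break   # an untried reply remains here, so stop backing up
          # otherwise this position is a confirmed no-win: keep looping so the
          # move that led INTO it gets blacklisted as well

Repeated losses gradually blacklist every reply from the turn 4 position, at which point the blame moves back to the turn 2 move, which is exactly the behaviour described above.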

Genius!

Aside: when I first read the program back in 1979 (age 17) I could not understand the piece I have just tried to explain. So the happy message I take from this is that it shows that we DO become smarter with age and experience!

The program was interesting to play as you could see how it adapted to your strategy (how it became "smarter").

The author himself observed in the article that you could modify the program so that it would play itself a number of times before engaging with the user. In this way it could do the "learning" on its own and be ready to give the player a good game from the outset.

Artificial Intelligence - Eliza

I mentioned the "Animals" program in my last two postings. It is an example of an Artificial Intelligence (AI) program in that it "learns". It only knows about two animals when it starts, but you can teach it about more until it becomes quite impressive (at least if you feed it good information).

I want to talk about two other AI programs.

In this posting I will deal with Eliza, which was written in 1966 by Joseph Weizenbaum at MIT.

This program responds to sentences entered by the user by identifying keywords and manipulating the input string. It pretends to be a Rogerian psychotherapist. Apparently it has fooled some people into thinking that they were exchanging messages with a real person.

You can try out a version of the program (and see some Java source code) here.

Here is a dialogue I had with this program (my sentences are preceded by ">>"):
>> Hello.
How do you do.  Please state your problem.
>> I am feeling a little down
Is it because you are feeling a little down  that you came to me ?
>> yes I am hoping that you can help me
You seem to be quite positive.
>> are you a psychologist?
Why are you interested in whether I am a psychologist  or not ?
>> I am looking for someone who can help me
How long have you been looking for someone who can help you  ?
>> Do you answer all questions with questions?
We were discussing you -- not me.
>> I am not sure that you are real
Do you believe it is normal to be not sure that I are real  ?
>> Are you a machine or a person?
Do computers worry you ?
>> No I use computers in my work
Why do you mention computers ?
>> you mentioned them first
Oh, I mentioned them first  ?
>> I am not convinced that you are real
Do you enjoy being not convinced that I are real  ?

While most of the responses are pretty convincing, a few are not (such as the last one).
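
Out of curiosity, here is a toy sketch of my own in Python (not Weizenbaum's program, and not the Java version linked above) of the keyword-and-reflection trick; it even reproduces the clumsy "I are real" wording:

  import re

  # Reflect pronouns so that "my", "I am", etc. come back as "your", "you are".
  # "are" is deliberately left alone, which is where "I are real" comes from.
  reflections = {"i": "you", "am": "are", "my": "your", "you": "I", "your": "my"}

  rules = [
      (r"i am (.*)",    "Is it because you are {0} that you came to me?"),
      (r"i feel (.*)",  "Do you often feel {0}?"),
      (r"are you (.*)", "Why are you interested in whether I am {0} or not?"),
      (r".*",           "Please tell me more."),
  ]

  def reflect(fragment):
      return " ".join(reflections.get(word, word) for word in fragment.split())

  def respond(sentence):
      text = sentence.lower().strip(" .!?")
      for pattern, template in rules:
          match = re.match(pattern, text)
          if match:
              return template.format(*[reflect(g) for g in match.groups()])

  print(respond("I am feeling a little down"))
  # Is it because you are feeling a little down that you came to me?
  print(respond("I am not convinced that you are real"))
  # Is it because you are not convinced that I are real that you came to me?

A handful of patterns like these go a surprisingly long way, but the cracks show as soon as the reflection rules meet a sentence they were not written for.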

The thing about Eliza (aside from how clever it seems to be) is that, by almost fooling people, it is almost passing the Turing Test.

In 1950 Alan Turing (an English mathematician, logician, cryptanalyst, and computer scientist) proposed that a machine could be said to exhibit intelligent behaviour if a judge, interacting with the machine and a person at the same time via written messages, was unable to determine which was which.

While Eliza is interesting in that it simulates human behaviour, it is less interesting than Animals in that it is unable to learn. Its conversation skills do not improve with time and practice.

In my next posting I will talk about another "learning" program.

Friday 1 June 2012

Akinator - nothing new under the sun

A few months ago my daughter Ellen showed me a game called Akinator on her iPad where you think of a celebrity and the game tries to guess who it is by asking a series of questions. It is amazingly accurate. You can try it here.

I don't know exactly how this program works but I will bet that if you were to examine the code you would find some similarities to the program "Animal" which I mentioned in my last post. That program appeared in a book which was published in 1973!

In Animal, the program tries to guess an animal based on a series of yes/no questions. If it fails, it asks you for two pieces of information:
  • What animal you were thinking of
  • A new question which would differentiate between the animal it guessed and the animal you were thinking of

In this way it "expands its knowledge" (i.e. it constructs a binary tree containing a series of questions with an animal name as the final node at the end of each path).
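
For the record, that whole trick fits in a few lines. Here is a sketch of my own in Python (not the original BASIC listing): internal nodes hold yes/no questions, leaves hold animal names, and a wrong guess grows the tree by one question.

  tree = {"question": "Does it swim?",
          "yes": {"animal": "fish"},
          "no":  {"animal": "bird"}}

  def play(node):
      if "animal" in node:                               # reached a leaf: make a guess
          if input(f"Is it a {node['animal']}? ") == "yes":
              print("I guessed it!")
          else:                                          # wrong: learn a new question
              new_animal = input("What animal were you thinking of? ")
              question = input(f"Give me a yes/no question that distinguishes a "
                               f"{new_animal} from a {node['animal']}: ")
              answer = input(f"For a {new_animal}, would the answer be yes or no? ")
              old_leaf = {"animal": node.pop("animal")}  # the leaf becomes a question node
              node["question"] = question
              node["yes"] = {"animal": new_animal} if answer == "yes" else old_leaf
              node["no"]  = old_leaf if answer == "yes" else {"animal": new_animal}
      else:                                              # internal node: ask and descend
          play(node["yes"] if input(node["question"] + " ") == "yes" else node["no"])

  while input("Think of an animal. Ready? (yes/no) ") == "yes":
      play(tree)

Akinator presumably layers a lot of statistics on top of this, but I suspect the skeleton looks much the same.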

Now a lot of time has passed since 1973 and the Akinator program appears to be more sophisticated in a number of ways:
  • You can respond to each question with one of five options: yes, probably partially, I don't know, probably not really, and no
  • If you make a mistake, it may still get the right answer (it can sometimes cope with contradictions in your answers)
  • If it gets the answer wrong, you can get it to try again and it will try a different path
At the end of the day, I'd say that it is not all that different from the program that was published back in '73.

But because its database is so extensive, it is still very impressive.