Notes from the IET / BCS
2010 Turing Lecture
London, 25 February 2010

Blog Home Page

I found myself at Savoy Place, standing in front of a large plasma screen with a cup of coffee in one hand and a list of attendees in the other. On the screen Professor Chris Bishop was waiting patiently and silently as a heavy-looking wrecking ball arced toward his head. He was grinning and leaning against a solid headrest. Audience members looked scared. The ball stopped at his nose and began its return journey.

I have not yet looked at my complimentary DVD of the Royal Institution Christmas Lectures 2008, from which this was a clip. I am saving that guilty pleasure for later today to share with my kids, but I guessed he was demonstrating the principles of thermodynamics, the conservation of energy and entropy, in rather dramatic fashion.

Lots of men in suits and a sprinkling of women (still not nearly enough in IT or engineering) were standing about in front of the screen in post-registration, pre-talk chat.

The IET building hosting the Turing Lecture for 2010 is an impressive place. Staircases swoop and dive and the whole place reeks of efficiency.

The room in which the lecture was to be held is an imposing place. It is full of the most outstanding wood panelling. Inspiring paintings of Faraday and his intellectual peers gaze down. I can imagine that even an experienced speaker like Prof. Bishop must have felt a frisson of excitement as he contemplated his words.

Prof. Bishop is not what many people would expect when they think of a research scientist. There is no white coat or wild-eyed stare. He flies planes, skydives, ponders the nature of numbers, explores magnetically confined plasmas and does the odd experiment with wrecking balls. His talk was not what I had expected either. Having seen silent clips from his Royal Institution Lectures while waiting outside, I was half expecting eggs frying on transistors and showers of foil fragments.

Instead it was a well considered talk to a sober audience of serious professionals. Mine was not the only beard present.

There were no verbal fireworks, no great showmanship and no easily digestible sound bites. As I eased my mind into what he was saying I realised that this was a talk for adults, not an entertainment.

Before I discuss the content, I must say something about the structure of the talk. It was close to being a masterpiece of simplicity and perfect execution.

Firstly he laid out the issue he was addressing. He told us that there were three things that he would discuss. He laid down some concepts and principles he would be using. Each of these three things had three aspects.

Then he used some illustrative data sets drawing on chess ranking, online games and the search for a cure for asthma.

Here was a brilliant and recursive structure. The title of the lecture was “Embracing Uncertainty: The New Machine Intelligence”. He was telling us how to tame uncertainty by accepting it. He was telling us about a revolutionary approach to dealing with enormous data sets. He was demonstrating how to use probability. In a gathering of engineers and IT professionals it is a fair assumption that many play or have played chess, that many play or have played computer games, or are actively involved in the technology that makes them possible. Given the number of people in the UK who are affected by asthma, or who know somebody who is, many in the audience were undoubtedly interested in hearing about this work on the disease.

Look at it this way: in an audience of four hundred plus, the probability that every single person had strong feelings about at least one of his examples must have been ridiculously close to 100%.

The talk rested on some very heavy-duty mathematics, yet he showed us only two equations, which left me, at least, hungry for more.

He talked about Bayesian Probability. That is, you start with prior beliefs or knowledge, update them with new data, and arrive at posterior beliefs which you can use as a basis for decisions.
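The prior-to-posterior cycle can be sketched in a few lines of code. This is entirely my own toy illustration (a coin of unknown bias), not an example from the lecture:

```python
# A minimal sketch of a Bayesian update: start with a prior over hypotheses,
# multiply by the likelihood of each new observation, and normalise.
from fractions import Fraction

# Prior: three hypotheses for P(heads), each initially equally likely.
hypotheses = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
prior = {h: Fraction(1, 3) for h in hypotheses}

def update(beliefs, heads):
    """Multiply each prior belief by the likelihood of the data, then normalise."""
    likelihood = {h: h if heads else 1 - h for h in beliefs}
    unnormalised = {h: beliefs[h] * likelihood[h] for h in beliefs}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

posterior = prior
for observation in [True, True, False, True]:  # three heads, one tail
    posterior = update(posterior, observation)

# The posterior now favours the biased-towards-heads hypothesis.
print({float(h): float(p) for h, p in posterior.items()})
```

The posterior after each observation becomes the prior for the next, which is exactly the "basis for decisions" idea: beliefs are never final, only updated.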

He talked about Probabilistic Graphical Models, a way to take what you know and express it as probabilistic relationships between random variables.
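A toy example of what such a model looks like in code, again my own illustration rather than anything from the lecture: a three-node chain Cloudy → Rain → WetGrass, where the graph structure tells you the joint distribution factorises as P(c) · P(r|c) · P(w|r).

```python
# A minimal probabilistic graphical model: the graph's edges dictate
# how the joint distribution factorises into small local tables.
from itertools import product

P_c = {True: 0.5, False: 0.5}                    # P(Cloudy)
P_r_given_c = {True: {True: 0.8, False: 0.2},    # P(Rain | Cloudy)
               False: {True: 0.1, False: 0.9}}
P_w_given_r = {True: {True: 0.9, False: 0.1},    # P(WetGrass | Rain)
               False: {True: 0.2, False: 0.8}}

def joint(c, r, w):
    """The factorised joint distribution encoded by the graph."""
    return P_c[c] * P_r_given_c[c][r] * P_w_given_r[r][w]

# Query: P(Rain = True | WetGrass = True), by summing out Cloudy.
num = sum(joint(c, True, True) for c in (True, False))
den = sum(joint(c, r, True) for c, r in product((True, False), repeat=2))
print(round(num / den, 3))
```

The point of the factorisation is that you never need to store the full joint table, only the small conditional tables the graph prescribes.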

He talked about Efficient Inference - for which he gave his second mathematical equation:

∑x ∑y xy = x₁y₁ + x₁y₂ + x₂y₁ + x₂y₂ = (x₁ + x₂)(y₁ + y₂)

You will notice that by factorising you go from seven operations to three: four products and three sums in the first form, against two sums and one product in the second. When dealing with the huge data sets we are talking about here, this is a significant processing optimisation.
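To make the counting concrete, here is the same identity written out in a few lines of Python (my sketch, not code from the lecture):

```python
# The distributive-law saving: expanding the product costs four
# multiplications and three additions; factorising costs two additions
# and one multiplication, for the same result.
x = [2.0, 3.0]
y = [5.0, 7.0]

# Expanded form: four products, three sums -> seven operations.
expanded = x[0]*y[0] + x[0]*y[1] + x[1]*y[0] + x[1]*y[1]

# Factorised form: two sums, one product -> three operations.
factorised = (x[0] + x[1]) * (y[0] + y[1])

print(expanded, factorised)
```

For lists of length n the expanded form needs n² products, while the factorised form needs 2(n−1) sums and a single product, which is the kind of saving that matters on enormous data sets.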

What struck me was that all this provides a mechanism for variables to send messages to each other, and in turn for programs to learn. This is the key point: the lecture was really about machine learning. With round-the-clock worldwide online gaming we have access to planetary-scale statistical data; with the mapping of the human genome we have huge data sets where critical information is hidden in the sheer volume. We can no longer hand-craft programs to deal with this; we need programs that can learn as they navigate the data.
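One way to picture "variables sending messages" is marginalisation on a chain: rather than summing over the whole joint distribution at once, each node sums out its neighbour locally and passes the result along. This is my own minimal sketch of that idea, not code shown in the lecture:

```python
# Message passing on a chain A -> B -> C: compute the marginal P(C)
# by forwarding one local summation at a time.
P_a = [0.6, 0.4]                        # P(A)
P_b_given_a = [[0.7, 0.3], [0.2, 0.8]]  # P(B | A), rows indexed by A
P_c_given_b = [[0.9, 0.1], [0.5, 0.5]]  # P(C | B), rows indexed by B

# Message from A to B: sum out A locally.
msg_ab = [sum(P_a[a] * P_b_given_a[a][b] for a in range(2)) for b in range(2)]

# Message from B to C: sum out B locally, using the incoming message.
P_c = [sum(msg_ab[b] * P_c_given_b[b][c] for b in range(2)) for c in range(2)]

print([round(p, 3) for p in P_c])  # a valid distribution over C
```

Each message is cheap and local, which is why the approach scales to graphs far too large for brute-force summation.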

In the past we thought that by defining the rules you could come up with expert systems. I remember when I was studying computer science, oh so many years ago, that expert systems were the exciting future of information technology. Indeed there have been some successful applications. However, the problem, as Prof. Bishop pointed out, is the difficulty of including complex domain knowledge. That is, reality is messy, noisy and very, very big.

His answer, and the bleeding edge of artificial intelligence, is to integrate domain knowledge with statistical learning: to use probability to quantify uncertainty. Sounds simple? Well, like most complex problems, the answer looks almost too simple. It takes real genius to make something like this fit into a 90-minute lecture and make the likes of me think they understand. That is what makes me suspect that Prof. Bishop and his colleagues are on to something revolutionary. In my own book “The Trousers of Reality” I maintain that there is simplicity at the heart of all complexity. This is what the lecture's namesake, Turing, also discovered, as did Mandelbrot, Gödel and others.

At the end of the lecture we were asked to remember these three points:

  1. We are at the beginning of a data driven revolution
  2. We have a new paradigm for machine intelligence using old ideas in a new way
    • Bayesian formulation
    • Probabilistic graphical models
    • Fast inference using local message patterns
  3. There is a freely available tool for non-profit use at

There were interesting questions after the talk. I took notes on two of Prof. Bishop's surprisingly modest and humble answers:

In response to a question about what we can learn from the brain, he gave an answer suggesting that it will be a very long time before we have a computer that comes even close to the ability of the human brain. He added that in studying machine intelligence we are starting to learn about the brain, and in studying the brain we are discovering things that engineers have known for a long time.

In response to a question about which prior information to build in, he gave a great answer about the danger of certainty: building into your system things you believe to be true, but which are in fact false, allows you to be ill-advisedly confident.

This brings me on to my final lap in this unexpectedly long blog.

I have not mentioned the trip we took to Monte Carlo (data is randomly sampled and the clustering tells you about probability) and how we no longer have the time or the processing power to approach probability that way, because of the enormous data sets we now have to deal with.
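The Monte Carlo idea in miniature, as my own illustration: sample randomly, and let the proportion of samples landing in a region estimate its probability. Here the quantity estimated, P(X + Y > 1) for two uniform random variables, is my choice of toy problem.

```python
# Monte Carlo estimation: draw random samples and count how often
# the event of interest occurs. The fraction approximates the probability.
import random

random.seed(42)  # fixed seed so the sketch is reproducible
n = 100_000
hits = sum(1 for _ in range(n) if random.random() + random.random() > 1.0)
estimate = hits / n
print(estimate)  # the exact answer is 0.5, by symmetry
```

The catch the post alludes to is cost: the error shrinks only as 1/√n, so sharpening an estimate tenfold needs a hundred times the samples, which is painful at modern data scales.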

I have not mentioned the points that were made about search engines. Probability is revolutionising how ads are chosen: not on the basis of the most expensive, but on the basis of profitability, given the probability that they will be clicked and generate revenue. This is an insight into how something that would seem to be in the realm of rarefied mathematics touches you every time you google or bing.
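The selection rule amounts to ranking by expected revenue, bid times estimated click probability, rather than by bid alone. A sketch with made-up numbers (the names, bids and probabilities are all my invention):

```python
# Rank ads by expected revenue (bid * estimated click probability),
# not by raw bid: a cheap, frequently clicked ad can beat an expensive one.
ads = [
    {"name": "A", "bid": 5.00, "p_click": 0.01},  # expensive, rarely clicked
    {"name": "B", "bid": 0.50, "p_click": 0.20},  # cheap, often clicked
    {"name": "C", "bid": 2.00, "p_click": 0.04},
]

best = max(ads, key=lambda ad: ad["bid"] * ad["p_click"])
print(best["name"])  # "B": 0.50 * 0.20 = 0.10 beats 0.05 and 0.08
```

The hard part in practice is of course estimating p_click, which is exactly where the machine learning comes in.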

Microsoft comes in for a lot of criticism by the general public. It is perceived by many in the IT world as the evil empire. Yet here is a man, a brilliant man who is the chief research scientist for the evil empire, given time and money to do whatever he wants. What does he choose?
A way to corner the market in some greedy manipulating way?
A vanity project full of pure academic kudos?


He chooses to carry out a project collaborating with dedicated medical professionals and genome scientists. He chooses to use his genius, time, budget and experience to explore a debilitating illness that is affecting an increasing and alarming number of people. This is work that is worth doing. This is a face of technology, science and enlightenment that we can be proud of.

The talk is webcast by the IET on their web TV site.
