Thursday, 1 December 2016

In response to the question "Are there differences in the brains of men and women?"

Imagine that you have two types of dice, yellow ones and pink ones. Both types have six consecutive numbers on their six faces as usual. However, in this case we don't know what the lowest and highest numbers are, or the colour, because we are blindfolded. Now we throw 1000 of each type and get someone, who isn't blindfolded, to add up the total number of spots for both colours.

If the total for yellow dice is around 3,500 we would have some justification for thinking that these dice are the normal 1 to 6 version; 3.5 is the average score for standard dice. If the total for the pink type, after a thousand throws, is 4,500 then it would be fair to conclude that pink dice were numbered 2 to 7 and that the two types of dice are indeed different.

So far so good. But if you have one throw only, without knowing which colour you have thrown, then the chance that you will find out the colour, on that single trial, is only 1 in 6. Five times out of six a single trial is not enough to tell which type you have thrown. And the more faces your dice have, the less likely it is that one throw will tell you which type it is.

It gets worse. A small number of yellow dice are known to have a 7 rather than a 6. And a small number of pink dice have a 1 rather than a 2. So now it is never possible to know for certain, from a single blindfold throw, which colour you have thrown just by being told the number.
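
If you want to see the arithmetic play out, here is a minimal simulation sketch in Python. The fraction of atypical dice is an invented figure (5%), purely for illustration; nothing above depends on the exact value.

    import random

    ATYPICAL = 0.05   # invented fraction of atypical dice, purely for illustration

    def throw(colour):
        """One blindfold throw of a single die of the given colour."""
        if colour == "yellow":
            # most yellow dice are numbered 1-6; a few carry a 7 in place of the 6
            faces = [1, 2, 3, 4, 5, 7] if random.random() < ATYPICAL else [1, 2, 3, 4, 5, 6]
        else:
            # most pink dice are numbered 2-7; a few carry a 1 in place of the 2
            faces = [1, 3, 4, 5, 6, 7] if random.random() < ATYPICAL else [2, 3, 4, 5, 6, 7]
        return random.choice(faces)

    # Population view: a thousand throws of each colour gives clearly different totals.
    print(sum(throw("yellow") for _ in range(1000)))   # roughly 3,500
    print(sum(throw("pink") for _ in range(1000)))     # roughly 4,500

    # Individual view: any number from 2 to 6 is compatible with both colours, and
    # with atypical dice in play even a 1 or a 7 is no longer decisive.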

So looking at a single brain is like looking at a single throw of a dice type with many hundreds of faces, and where it is certain that many of the faces are not typical.

The sentence 'pink dice have more spots than yellow dice' remains true - but only of the population. What you can discover by looking at a single dice is almost nothing.

Tuesday, 10 March 2015

Expertise is a prison

Most people would be happy with the idea that if you want an opinion it is best to talk to an expert. In most areas this idea is, of course, relatively modern because the areas of specialisation are themselves modern developments. But before the nineteenth century the idea of intellectual expertise hardly existed - it is only a slight exaggeration to say that people who fancied themselves as clever had opinions and wrote treatises on more or less any subject.

Where expertise existed in the past (for example expert shoemakers, expert bakers and so on) it was recognised most commonly among those with skills that were practical. It is tempting to see the origin of this view in the distinction between 'knowledge' and 'craft' that is found in early Greek philosophy, which was hugely influential in western thought. But Aristotle (for example) made it clear that people who treat diseases need to have both knowledge and craft to be successful in medicine. The theory and the practice, if you want to put it another way, are both necessary and complementary.

In Tom Stoppard's play 'Dirty Linen' a member of parliament describes his colleagues as people with interests "either so generalized as to mimic wholesale ignorance or so particular as to be lunatic obsessions". The two groups are not so different. The problem with expertise is precisely that the lunatic obsessions are a form of gross ignorance. We have seen this at work, sometimes with tragic consequences, when experts with an obsession are called upon to give evidence in difficult criminal trials.

As areas of expertise get ever narrower, the probability that the expert has no views other than their own current hypotheses, and the probability that you will not find anyone brave enough to gainsay them, both get dangerously high. Is this a problem? After all, aren't they likely to be right?

Aristotle has something very important to say about this. In Aristotle's view knowledge that is not purely practical can be claimed only of truths that are eternal, and we know that eternal truths are very rare. In the particular case of scientific investigation, every hypothesis must be assumed to be provisional until it is backed by generations of evidence. The opinions of experts are not, and in most cases cannot be, truths.

We expect too much of our experts; we expect them to be right. In recent times geologists have even been found guilty of manslaughter for failing to predict earthquakes. This is bonkers. Expertise is a prison if either you are dismissed as ignorant for stepping outside of it, or if you are tightly constrained by expectation within it. Neither is healthy for society, science, or government.

Friday, 12 December 2014

Public Engagement and Impact

What do we mean?

The term 'Public Engagement' (PE) means different things in different contexts. For the current purpose I would love to define what I mean by PE so that you, the reader, can compare my definition with what you think it means. But that is for another post. For the time being I just want to comment on a roadmap that I have outlined elsewhere which might help institutions get a bit better at doing it.

The roadmap comes in four parts - each with a paragraph or two of explanation. I have based the language I use in the discussion paragraphs on the sorts of things that might be discussed at an institute of further or higher education, because that is where I work. The same arguments are applicable everywhere.

A Roadmap

The credibility and reputation of an institution, and its standing in the community, indeed its long-term sustainability, depend on how positively it interacts with its constituency.
  • This creates an immediate problem, particularly for HE where it is often unclear who it is they are trying to interact positively with. Government? Research Councils? The EU? The vast sea of potential undergraduates in UK schools? The lucrative pool of overseas students who want to study at UK universities? It seems like there is not one single constituency, but many. It may be a little easier in other spheres.
Attempts to manage interactions with all constituencies by seeing them as a set of separate problems lead to an over-bureaucratic, unwieldy, factionalised mess.
  • Seriously, does anyone who works in the 'impact' team even know the names of the people who organise school visits? Does the department that deals with summer schools share data with the overseas recruitment office? Does the lone academic in Maths with money to employ a post-doc for outreach have any interest in sharing a platform with a student of dance? Does the 'widening participation' team even know the name of the individual who organises Café Scientifique - without official support - as a hobby? I could go on. Each team has its own budget, goals, contacts and targets, and inevitably this means that it has its own agenda.
Attempts by those responsible for the more powerful constituencies to encourage factional development lead to a cycle of deepening factionalisation.
  • I will borrow a term from geopolitics and call this Balkanisation; with apologies to my many friends in the Balkans. From an online dictionary: Balkanize - divide (a region or body) into smaller mutually hostile states or groups: e.g. 'ambitious neighbours would snatch pieces of territory, Balkanizing the country'.

    The worst aspect of this in the current HE landscape is the 'impact' component of the Research Excellence Framework (REF) which we will come back to in a later post I hope. But there are others and I will not begin to list them here.
Attempts by management (which we hope is not factionalised, but often is) to address this often come in the form of a plan.
  • Let's face it, if there ever was a plan that involved anything more than ticking boxes then we have long since forgotten how to write it. A plan is what you get if you buy an Airfix kit (showing my age). A plan is a sheet of step-by-step instructions with clear goals (tick boxes) - and although these do not involve deadlines you could add those yourself!

    A strategy, in contrast, is what you start with at the beginning of a chess game. It is smart, adaptive, and does not involve a fixed order of goals. What institutions need is a PE strategy and people need to be encouraged to think, and to be rewarded for thinking, strategically.

Public engagement is simply the antithesis of the planned Balkanisation of interactions with outside agencies and groups.

There - I hope the denouement surprised you! It was my intention to make you think that this post was about what we could, or should, be doing; or maybe about how much we could, or should, be spending. I am happy to rant about these things - indeed I have done so elsewhere. But this particular rant is about something different. Public Engagement is the priority you give to the strategic integration of everything reputationally enhancing.

Wednesday, 6 August 2014

Algorithm, who could ask for anything more?

First, I have to thank Stanley Kelly-Bootle for the line that I borrowed for the title of this post, modelled of course on Ira Gershwin. I thought it was a pretty good joke when I first laughed at it in 1982, but maybe it had been around for much longer than that. It is a bit scary that in the intervening 30-odd years it has gone from being a joke to being an article of faith. Everyone believes that everything can, and should, be reduced to an algorithm. This is the same as saying that everything can be automated, and that what humans usually refer to as "skill and experience" can be taken out.

The word Algorithm is a westernised form of al-Khwārizmī, the name of a Persian mathematician who flourished in the late 8th to 9th centuries. His name is, it seems, derived from the oasis region where he was born, sometimes called Chorasmia, in what is now Uzbekistan. Al-Khwārizmī was a seriously smart bloke, who among many other things was the first to write about solving mathematical problems by using numerals written in columns, for tens and units, and using a step-by-step approach. His techniques came to be known as Algorism. In this way he sort-of invented maths, in its modern form anyway, and 'Algebra' - a term he coined and which we owe entirely to his lasting influence.

The name in the form 'algorithm' has been borrowed by computer scientists to mean something more specific, and a bit difficult to explain. We can get close to understanding the idea if we just say that an algorithm is anything that can be realised as a computer program*. Take for example the programmes** found on automatic washing machines. We need to tell the machine how to wash clothes, so we decompose the process in our minds into a sequence of steps, each characterised by drum speeds, durations, temperatures and so on. These steps can be stored and executed by the machine (the program), but the idea that is expressed by the description of the steps is the "algorithm".
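
To make the distinction concrete, here is a minimal sketch in Python. The cycle, temperatures and drum speeds are invented for illustration; the list of steps stands for the algorithm, and the loop that executes them stands for the program.

    # The 'algorithm': a wash cycle described as an ordered sequence of steps,
    # each characterised by a temperature, a drum speed and a duration.
    # (All of the values are invented.)
    COTTON_CYCLE = [
        ("fill and heat", 40, 0, 10),   # name, temperature (C), drum rpm, minutes
        ("wash", 40, 50, 30),
        ("rinse", 20, 50, 15),
        ("spin", None, 1200, 5),
    ]

    def run_cycle(cycle):
        """The 'program': the machine stepping through the stored description."""
        for name, temp_c, rpm, minutes in cycle:
            heat = f" at {temp_c} C" if temp_c is not None else ""
            print(f"{name}: {minutes} min, drum at {rpm} rpm{heat}")

    run_cycle(COTTON_CYCLE)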

Phew! We can build machines that adequately wash clothes. This is because the process of washing clothes can be adequately expressed as an algorithm. For my own part I am mighty glad this is true because I have, in the past, lived without automatic washing machines so I know what a blessing they are. But to stick to the point, the key word in this paragraph is 'adequately'.

Let's choose a more controversial example, like driving a car. This example has been much in the news recently because cars without human drivers are on the verge of being feasible and so we need an algorithm to describe the process. Once we have the algorithm it can be embedded in a machine (the car can be 'programmed') and we are away. Who is going to dream up this algorithm? Can this be done 'adequately'? Can we even define what adequately means in this case?

My current purpose is to comment only on the first of these questions: "Who is going to dream up this algorithm?" I have been writing computer programmes since 1975 and helping other people to learn how to do the same for most of that time. Only once, working with my colleagues Raymond Flood, John Axford and Robert Lockhart in Oxford in the 1990s, did the opportunity come up to develop a course that did not concentrate on programming, but instead emphasised the importance of algorithmic thought. My own view is that we were highly successful!

I believe we can, and must, teach algorithmic thought if we are going to increasingly rely on algorithms. Or maybe what I mean is that we can, and must, develop people's ability to think algorithmically by good coaching. In any case this is far more important than a thousand exercises involving designing an 'app' for a smart-phone that concentrates on its usability and marketability; this will not produce a generation of engineers that will make you feel comfortable in your driver-less car, reading the newspaper, on a busy road.

  * I am bound to get comments from people who object to this as a definition, which is fine, because I am not pretending it is one.

  ** Lapsed into UK spelling there - it seems right.

This essay also occurs in substantially the same form on LinkedIn.

Sunday, 3 August 2014

The art of distraction

Much of the effort that goes into talking about science to young people, or engaging a wider audience with scientific issues in general for that matter, seems to be directed towards distraction. I don't wish to be too critical of this approach because we have all used it, and it does have some value. The aim of this short essay is simply to argue that distraction should be only a small part of what we do - not the main focus.

First I need to be clear about what I mean by 'distraction'. In an ideal world you would want everybody to be interested in what you were saying. As an opening gambit you might do something, or say something, that is spectacular, or loud, or controversial, or impressive, just to 'hook' everyone's attention. Maybe you don't open with the hook; if you are clever you might just hint at the nature of the hook and build up to it, placing it at a critical point in the presentation.

The hook, or the promise of a hook, is designed to keep the audience interested for long enough to get your message over, and to get you the required amount of applause at the end. Hooks have their uses, as I have said, but their over-use, and the competitive drive to develop 'better' ones, is ultimately pathological. In most cases their function is simply to distract the audience from the fact that they are not interested in what you are saying. They have seen so many hooks they are just waiting for the next one. They have become interested only in the art of distraction.

This wouldn't be such a bad situation if it were not for two things. First, most of science comes entirely without hooks. And second, many of the audience don't need or want them. Many of the audience are interested in you and what you have to say, and find the constant recourse to the shocking or spectacular unsatisfying.

Saying 'much of science comes entirely without hooks' doesn't mean that much of science isn't interesting, it simply means that it cannot be understood or appreciated in terms of scene-bites. (A 'scene-bite' is the presentational equivalent to the interview sound-bite - I just made that up!) Are you frustrated by the effect that the obsession with the sound-bite has had on the way politicians handle interviews? Scene-bite fixation is in danger of doing the same damage to the art of science communication. Everything around the distraction is forgotten and, by the insistence that the distraction is necessary, everything around it is also devalued.

It is increasingly the case that young people (particularly, but not exclusively) think that science is just a sequence of spectacular or cool events because that is all they ever see presented as science - or at least it is all they remember. As a result you don't get invited to speak unless you are bringing something spectacular or cool. 'Interesting' doesn't get a look in, and as a result the kids who are interested are badly served.

Science distraction is the opposite of science engagement. It keeps a few uninterested bottoms on a few seats for a short time, but creates a shadow that obscures the really interesting stuff and hides you from those members of the audience you stood the best chance of reaching.

Friday, 18 July 2014

A Model System


Getting the right choice of words when describing anything is, truly, a tricky business. My style of research is often called "computational modelling" which roughly means "writing a computer program that mimics the function of a real system".

This idea seems straightforward enough. A flight simulator, for example, is a computational model. It gives pilots the impression that they are flying an aircraft, when what they are really doing is providing inputs to sets of equations that are a model of an aircraft. Notice that the model aircraft is also referred to as a flight simulator, so the words model and simulator are close relatives.
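
As a toy illustration of 'inputs to sets of equations', here is a sketch in Python of a single made-up equation of vertical motion being stepped forward in time under a pilot input. A real simulator uses far richer aerodynamic models; every constant here is invented.

    DT = 0.1                   # seconds per simulation step
    LIFT_PER_THROTTLE = 12.0   # made-up lift response, m/s^2 per unit of throttle
    GRAVITY = 9.81             # m/s^2

    def simulate(throttle, steps, altitude=1000.0, climb_rate=0.0):
        """Step a toy vertical-motion model forward under a constant pilot input."""
        for _ in range(steps):
            vertical_accel = LIFT_PER_THROTTLE * throttle - GRAVITY
            climb_rate += vertical_accel * DT
            altitude += climb_rate * DT
        return altitude

    # 50 steps of 0.1 s each: five simulated seconds of gentle climb
    print(f"altitude after 5 s: {simulate(throttle=0.9, steps=50):.1f} m")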

So, I work to write programs that model the brain. Does this mean I am building a brain simulator? Or even an artificial brain?

The key difference is the level of understanding. Flight simulators are almost indistinguishable from the real thing because aircraft are relatively simple systems that humans designed and built to start with; we know how they work. In contrast we have absolutely no well-formed idea of how large areas of the brain do what they do, not on the level that is necessary to produce a successful simulation of even the simplest sort, for even the simplest animals.

The work that I, and many others, do is designed precisely to help to develop, and test, new and imperfect ideas about how the brain might work, and any measure of success is very welcome! This sort of modelling - to explore hypotheses - is quite different to building a flight simulator which relies on well established and reliable science.

So my type of model is based on ideas, and if the models get even close to working, well, the ideas might have some merit. Models that are not based on clear underlying ideas are models of nothing. A red light should start to flash whenever the answer to "What idea are you testing?" gets dangerously close to "We just want to take the few things we know, multiply them half a billion times, wire it all up, and see what happens".
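
For what it is worth, here is a caricature in Python of what 'a model based on an idea' means. The idea under test is deliberately trivial, and invented purely for this illustration: a unit that leakily accumulates its input and fires when a threshold is crossed, which predicts that a stronger input means a shorter time to the first spike. Real models, and real ideas, are far richer; the point is only that there is an explicit idea whose prediction can be checked.

    import random

    def time_to_first_spike(drive, leak=0.1, threshold=1.0, max_steps=1000):
        """The 'idea': a leaky accumulator that fires when it crosses a threshold."""
        v = 0.0
        for t in range(max_steps):
            v += drive + random.gauss(0, 0.02) - leak * v   # leaky integration plus a little noise
            if v >= threshold:
                return t
        return None   # never fired within the simulated window

    # The idea's prediction: the stronger the drive, the earlier the first spike.
    for drive in (0.15, 0.2, 0.3):
        print(drive, time_to_first_spike(drive))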

Model planes, and model brains - two almost unrelated concepts. We seem to be getting them confused.

Monday, 1 July 2013

The brain, like Gaul, is divided into parts

For a few centuries people have cut brains up and named the various lumps, bumps, holes, and sheets that they found. This could be likened to early astronomy. Let us call this desire to name bits of the brain cerebonomy, just to have the joy of coining a new word! Cerebonomy is simply naming the brain parts, like the naming of stars, without any reference to what a star is or why it shines. More recently, with the aid of microscopes, researchers have been able to discern a multitude of layers, regions and divisions within each lump or bump that had previously been given a name. Each layer and region is then also given a name.  This makes the simple profusion of names one of the chief obstacles to reading brain-related literature, and there is no way I can think of to simplify things.

Here, in a simple glossary, are some bits you should know.
  • The brain stem is the bit of your brain that is obviously an extension of your spinal cord. It is not divided into 'left' and 'right' sections like the rest of the brain. On a side view of a human brain you can see just a little bit of the brain stem sticking out the bottom like the stem of a cauliflower, but it carries on up inside almost to the centre of the brain, and indeed incorporates some (or all) of what is usually called the mid-brain or mesencephalon.
  • The cerebral cortex (confusingly, often just called cortex) is the folded 'cauliflower' bit that is the most visible feature of a human brain. It is a sheet that isn't very thick (cortex is from the Latin for the 'bark' of a tree); in humans it is around 4mm thick. It is divided into the left and right hemispheres, although these are joined in various places by large, fast bundles of connections. It is the 'greyest' part of the brain, indicating that its connections are dense and mostly very short-range.
  • The next bit of interest, the diencephalon, refers to most of the bits that you cannot see from the outside (because they are covered by the cortex) and which are joined to the top of the brain stem and the mid-brain. If you turn the brain to look directly up the brain stem, you can catch a glimpse of part of the diencephalon.
  • An obvious feature of the human brain is the cerebellum.  It is almost like a separate little brain that sticks prominently out of the back of a human brain and is joined to the brain stem. Its surface is folded - more narrowly folded than the cerebral cortex - and consists of a single sheet of tissue in a folded arrangement like an accordion.
These are the major divisions of the brain - not really like the divisions of Gaul, more like the continents of the globe. Like the continents there aren't too many of them!

Part of: Martin’s Vastly Oversimplified and Woefully Incomplete Guide to Everything in the Brain as featured on the Brainsex website.