Wednesday, 8 November 2017

Is Education Failing the Test?

In the last year I have worked with eight secondary schools but one visit stands out in my mind. I was asked to give a talk to year 12/13 students (I still think of them as 'sixth formers') about the brain, about science in general, and about my experience of doing research. As usual I offered to do more than just one classroom visit, to make best use of the time and effort of getting to the school. Sometimes it isn't possible for the school to arrange two classes in one afternoon but I always make the offer. I was told that all of the more able students doing science had been gathered into a single group and the only other group available in the right time-slot was a group of less able year 11s. It was implicit in the wording of the reply that this group were a handful, and that I should not expect an easy ride. I said 'Fine, book me in'.


On my arrival at the school I went directly to the 'difficult' class and was further warned on the way, by the member of staff accompanying me, that I was in for a rough ride. She was right: the students were unruly and rude.


I didn't get any further than telling them I was working on a project called 'Brainsex' before they bombarded me with questions about sex and hermaphroditism (they didn't call it that). I had recently read a paper about gynandromorphism in birds originating in the brain, which fascinated me, and it turned out it was interesting to them too. But they soon moved on to 'babies born with tails' and 'people with webbed feet'. We discussed atavism, and one or two looked the word up while others were still googling 'inter-sex finches' and the teacher was furiously taking notes. This was fine for ten minutes before the discussion morphed into people being descended from aliens. I explained the concept of panspermia (got a laugh) and talked about water and organic matter in comets, before this became a cue to move on to meteorite strikes and global extinction. I suggested that they might like to research the Tunguska event, and the eruption of Thera, after explaining that the dinosaurs were not wiped out by an explosion but by the resulting change in global conditions. And in any case the dinosaurs were not wiped out at all, because birds are theropods (a suborder of dinosaurs) and birds are still doing very well, thank you. By this time some of the class had already found out about Tunguska on Wikipedia and started explaining this to their teacher; others thought that the 'birds are dinosaurs' comment was a joke and were trying to disprove it. The teacher was still working hard making notes and finding photographs from the Rosetta space mission, while some of the class were looking up 'Nuclear Winter'. I had to leave at this point.


Slightly exhausted, I went to the more able group, where I gave my talk in silence. At the end I was asked only one question that I can remember - 'How much do you get paid?'


On the way out of the school I was caught up by the teacher of the first group, carrying a card (complete with sketches of babies with tails, and aliens) that the group had spontaneously decided to make to thank me for coming. I mentioned the very different experience I had had in the second classroom. The explanation was simple enough: the staff had explained, and the students were bright enough to realise, that nothing I said was going to be in their exams.


It is not the fault of students, or of teachers, or of schools, that we are losing the distinction between education itself and the boxes we tick along the way to track it. It is not only a question of 'teaching to the exam', although this is common enough (I have done it myself with A-Level computing science, very successfully). It is the pervasive notion that if all the boxes are ticked then everything must be all right, and the false corollary that anything that doesn't tick a box doesn't count - one of the most distressing forms of this being the widespread disappearance of music and languages from primary schools.

Teachers and schools under constant pressure to do more will find ways of ticking the right boxes in the hope that it will get someone off their backs, or get them a pay rise, or satisfy Ofsted, or whatever - it doesn't matter why they are doing it. Every ounce of effort they put into working out how to tick the box is effort that is not being put into educating students. Teaching to the exam is merely the tip of the iceberg, the most obvious symptom of the disease.


It is a fundamental principle of experimental science that if you cannot measure what is important then you attach importance to something that you can measure. This is good practice for a science experiment but it is a terrifying and dangerous idea to let loose out of the lab! It is true that our schools and colleges are a sort of ongoing experiment, but to manage this by ignoring the point of the process (i.e. education) in favour of the operational variable (counting tick boxes) is a dereliction of our responsibilities. We can sit and watch the number of boxes ticked climb year-on-year (which we have been doing) and pat ourselves on the back that we are getting better at something - better at ticking boxes.


The situation reminds me of the painting of a pipe by Magritte which he labelled 'This is not a pipe'. Of course it isn't a pipe, it is a painting of a pipe. In the same way, results, or numbers, or grades, are not an education. It is not clear (at least not to me) if education leads society or the other way round. At the moment we live in an age where ticking boxes is a preoccupation. This is a failure of our society and education simply reflects this.

Thursday, 1 December 2016

In response to the question "Are there differences in the brains of men and women?"

Imagine that you have two types of dice, yellow ones and pink ones. Both types have six consecutive numbers on their six faces as usual. However, in this case we don't know what the lowest and highest numbers are, or the colour, because we are blindfolded. Now we throw 1000 of each type and get someone, who isn't blindfolded, to add up the total number of spots for each colour separately.

If the total for yellow dice is around 3,500 we would have some justification for thinking that these dice are the normal 1 to 6 version; 3.5 is the average score for standard dice. If the total for the pink type, after a thousand throws, is 4,500 then it would be fair to conclude that pink dice were numbered 2 to 7 and that the two types of dice are indeed different.

So far so good. But if you have one throw only, without knowing which colour you have thrown, then the chance that you will find out the colour, on that single trial, is only 1 in 6: only a 1 (which must be yellow) or a 7 (which must be pink) gives the game away. Five times out of six a single trial is not enough to tell which type you have thrown. And the more faces your dice have, the less likely it is that one throw will tell you which type it is.

It gets worse. A small number of yellow dice are known to have a 7 rather than a 6, and a small number of pink dice have a 1 rather than a 2. So now it is never possible to know the colour for certain from a single blindfolded trial, just by being told the number you have thrown.

So looking at a single brain is like looking at a single throw of a dice type with many hundreds of faces, and where it is certain that many of the faces are not typical.

The sentence 'pink dice have more spots than yellow dice' remains true - but only of the population. What you can discover by looking at a single dice is almost nothing.
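
If you would rather check the arithmetic than take my word for it, the whole argument fits in a few lines of Python; the face values and counts below simply follow the description above, and everything else is a rough sketch, nothing more.

    import random

    def total(faces, throws):
        """Total number of spots from the given number of throws."""
        return sum(random.choice(faces) for _ in range(throws))

    yellow = [1, 2, 3, 4, 5, 6]   # the 'normal' dice
    pink = [2, 3, 4, 5, 6, 7]     # numbered one higher

    # Population level: 1000 throws of each colour separate cleanly.
    print(total(yellow, 1000))    # close to 3,500
    print(total(pink, 1000))      # close to 4,500

    # Individual level: pick a colour at random, throw once, and ask whether
    # the number alone settles the colour. Only a 1 (yellow) or a 7 (pink) does.
    trials = 100_000
    decisive = sum(random.choice(yellow + pink) in (1, 7) for _ in range(trials))
    print(decisive / trials)      # close to 1 in 6

Run it a few times: the two totals separate cleanly every time, while the proportion of single throws that give the colour away hovers around one in six.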

Tuesday, 10 March 2015

Expertise is a prison

Most people would be happy with the idea that if you want an opinion it is best to talk to an expert. In most areas this idea is, of course, relatively modern because the areas of specialisation are themselves modern developments. But before the nineteenth century the idea of intellectual expertise hardly existed - it is only a slight exaggeration to say that people who fancied themselves as clever had opinions and wrote treatises on more or less any subject.

Where expertise existed in the past (for example expert shoemakers, expert bakers and so on) it was recognised most commonly among those with practical skills. It is tempting to see the origin of this view in the distinction between 'knowledge' and 'craft' that is found in early Greek philosophy, which was hugely influential in western thought. But Aristotle (for example) made it clear that people who treat diseases need to have both knowledge and craft to be successful in medicine. The theory and the practice, if you want to put it another way, are both necessary and complementary.

In Tom Stoppard's play 'Dirty Linen' a member of parliament describes his colleagues as people with interests "either so generalized as to mimic wholesale ignorance or so particular as to be lunatic obsessions". The two groups are not so different. The problem with expertise is precisely that the lunatic obsessions are a form of gross ignorance. We have seen this work, with sometimes tragic consequences, where experts with an obsession are called upon to give evidence in difficult criminal trials.

As areas of expertise get ever narrower the probability that the expert has no views other than their own current hypotheses, and that you will not find anyone brave enough to gainsay them, both get dangerously high. Is this a problem? After all, aren't they likely to be right?

Aristotle has something very important to say about this. In Aristotle's view knowledge that is not purely practical can be claimed only of truths that are eternal, and we know that eternal truths are very rare. In the particular case of scientific investigation, every hypothesis must be assumed to be provisional until it is backed by generations of evidence. The opinions of experts are not, and in most cases cannot be, truths.

We expect too much of our experts; we expect them to be right. In recent times geologists have even been found guilty of manslaughter for failing to predict earthquakes. This is bonkers. Expertise is a prison if either you are dismissed as ignorant for stepping outside of it, or if you are tightly constrained by expectation within it. Neither is healthy for society, science, or government.

Friday, 12 December 2014

Public Engagement and Impact

What do we mean?

The term 'Public Engagement' (PE) means different things in different contexts. For the current purpose I would love to define what I mean by PE so that you, the reader, can compare my definition with what you think it means. But that is for another post. For the time being I just want to comment on a roadmap that I have outlined elsewhere which might help institutions get a bit better at doing it.

The roadmap comes in four parts - each with a paragraph or two of explanation. I have based the language I use in the discussion paragraphs on the sorts of things that might be discussed at an institute of further or higher education, because that is where I work. The same arguments are applicable everywhere.

A Roadmap

The credibility and reputation of an institution, and its standing in the community, indeed its long term sustainability, depend on how positively it interacts with its constituency.
  • This creates an immediate problem, particularly for HE where it is often unclear who it is they are trying to interact positively with. Government? Research Councils? The EU? The vast sea of potential undergraduates in UK schools? The lucrative pool of overseas students who want to study at UK universities? It seems like there is not one single constituency, but many. It may be a little easier in other spheres.
Attempts to manage interactions with all constituencies by seeing them as a set of separate problems lead to an over-bureaucratic, unwieldy, factionalised mess.
  • Seriously, does anyone who works in the 'impact' team even know the names of the people who organise school visits? Does the department that deals with summer schools share data with the overseas recruitment office? Does the lone academic in Maths with money to employ a post-doc for outreach have any interest in sharing a platform with a student of dance? Does the 'widening participation' team even know the name of the individual who organises Café Scientifique - without official support - as a hobby? I could go on. Each team has its own budget, goals, contacts, targets, and inevitably this means that it has its own agenda.
Attempts by those responsible for the more powerful constituencies to encourage factional development lead to a cycle of deepening factionalisation.
  • I will borrow a term from geopolitics and call this Balkanisation; with apologies to my many friends in the Balkans. From an online dictionary: Balkanize - divide (a region or body) into smaller mutually hostile states or groups: eg 'ambitious neighbours would snatch pieces of territory, Balkanizing the country'.

    The worst aspect of this in the current HE landscape is the 'impact' component of the Research Excellence Framework (REF) which we will come back to in a later post I hope. But there are others and I will not begin to list them here.
Attempts by management (which we hope is not factionalised, but often is) to address this often come in the form of a plan.
  • Let's face it, if there ever was a plan that involved anything more than ticking boxes then we have long since forgotten how to write it. A plan is what you get if you buy an Airfix kit (showing my age). A plan is a sheet of step-by-step instructions with clear goals (tick boxes) - and although these do not involve deadlines you could add those yourself!

    A strategy, in contrast, is what you start with at the beginning of a chess game. It is smart, adaptive, and does not involve a fixed order of goals. What institutions need is a PE strategy and people need to be encouraged to think, and to be rewarded for thinking, strategically.

Public engagement is simply the antithesis of the planned Balkanisation of interactions with outside agencies and groups.

There - I hope the denouement surprised you! It was my intention to make you think that this post was about what we could, or should, be doing; or maybe about how much we could, or should, be spending. I am happy to rant about these things - indeed I have done so elsewhere. But this particular rant is about something different. Public Engagement is the priority you give to the strategic integration of everything reputationally enhancing.


Wednesday, 6 August 2014

Algorithm, who could ask for anything more?

First, I have to thank Stanley Kelly-Bootle for the line that I borrowed for the title of this post, modelled of course on Ira Gershwin. I thought it was a pretty good joke when I first laughed at it in 1982, but maybe it had been around for much longer than that. It is a bit scary that in the intervening 30-odd years it has gone from being a joke to being an article of faith. Everyone believes that everything can, and should, be reduced to an algorithm. This is the same as saying that everything can be automated, and that what humans usually refer to as 'skill and experience' can be taken out.

The word Algorithm is a westernised form of al-Khwārizmī, the name of a Persian mathematician who flourished in the late 8th to 9th centuries. His name is, it seems, derived from the oasis region where he was born, sometimes called Chorasmia, in what is now Uzbekistan. Al-Khwārizmī was a seriously smart bloke, who among many other things was the first to write about solving mathematical problems by using numerals written in columns, for tens and units, and using a step-by-step approach. His techniques came to be known as Algorism. In this way he sort-of invented maths, in its modern form anyway, and 'Algebra' - a term that comes from the title of one of his books and which we owe entirely to his lasting influence.

The name in the form 'algorithm' has been borrowed by computer scientists to mean something more specific, and a bit difficult to explain. We can get close to understanding the idea if we just say that an algorithm is anything that can be realised as a computer program*. Take for example the programmes** found on automatic washing machines. We need to tell the machine how to wash clothes, so we decompose the process in our minds into a sequence of steps, each characterised by drum speeds, durations, temperatures and so on. These steps can be stored and executed by the machine (the program), but the idea that is expressed by the description of the steps is the "algorithm".
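
To make the distinction concrete, here is a sketch of what such a decomposition might look like in Python; the phases, speeds and temperatures are invented purely for illustration. The table of steps expresses the algorithm; the little routine that steps through it plays the part of the machine running its programme.

    from time import sleep

    # The 'algorithm': the wash described as a sequence of steps, each
    # characterised by drum speed, duration and temperature.
    # The phases and numbers are invented purely for illustration.
    COTTON_WASH = [
        # (phase,  drum rpm, minutes, water temperature in Celsius)
        ("fill",        0,       2,    40),
        ("wash",       50,      30,    40),
        ("drain",       0,       2,  None),
        ("rinse",      50,      10,    20),
        ("spin",     1200,       5,  None),
    ]

    def run_programme(steps):
        """The 'program': the machine stepping through the stored description."""
        for phase, rpm, minutes, temperature in steps:
            detail = f"{rpm} rpm for {minutes} min"
            if temperature is not None:
                detail += f" at {temperature} C"
            print(f"{phase}: {detail}")
            sleep(0.1)  # stands in for the real duration of the step

    run_programme(COTTON_WASH)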

Phew! We can build machines that adequately wash clothes. This is because the process of washing clothes can be adequately expressed as an algorithm. For my own part I am mighty glad this is true because I have, in the past, lived without automatic washing machines so I know what a blessing they are. But to stick to the point, the key word in this paragraph is 'adequately'.

Let's choose a more controversial example, like driving a car. This example has been much in the news recently because cars without human drivers are on the verge of being feasible and so we need an algorithm to describe the process. Once we have the algorithm it can be embedded in a machine (the car can be 'programmed') and we are away. Who is going to dream up this algorithm? Can this be done 'adequately'? Can we even define what adequately means in this case?

My current purpose is to comment only on the first of these questions: "Who is going to dream up this algorithm?" I have been writing computer programs since 1975 and helping other people to learn how to do the same for most of that time. Only once, working with my colleagues Raymond Flood, John Axford and Robert Lockhart in Oxford in the 1990s, did the opportunity come up to develop a course that did not concentrate on programming, but instead emphasised the importance of algorithmic thought. My own view is that we were highly successful!

I believe we can, and must, teach algorithmic thought if we are going to rely increasingly on algorithms. Or maybe what I mean is that we can, and must, develop people's ability to think algorithmically by good coaching. In any case this is far more important than a thousand exercises involving designing an 'app' for a smart-phone that concentrate on its usability and marketability; those exercises will not produce a generation of engineers that will make you feel comfortable in your driver-less car, reading the newspaper, on a busy road.

  * I am bound to get comments from people who object to this as a definition, which is fine, because I am not pretending it is one.

  ** Lapsed into UK spelling there - it seems right.

This essay also occurs in substantially the same form on LinkedIn.

Sunday, 3 August 2014

The art of distraction

Much of the effort that goes into talking about science to young people, or engaging a wider audience with scientific issues in general for that matter, seems to be directed towards distraction. I don't wish to be too critical of this approach because we have all used it, and it does have some value. The aim of this short essay is simply to argue that distraction should be only a small part of what we do - not the main focus.

First I need to be clear about what I mean by 'distraction'. In an ideal world you would want everybody to be interested in what you were saying. As an opening gambit you might do something, or say something, that is spectacular, or loud, or controversial, or impressive, just to 'hook' everyone's attention. Maybe you don't open with the hook; if you are clever you might just hint at its nature and build up to it, placing it at a critical point in the presentation.

The hook, or the promise of a hook, is designed to keep the audience interested for long enough to get your message over, and to get you the required amount of applause at the end. Hooks have their uses, as I have said, but their over-use, and the competitive drive to develop 'better' ones, is ultimately pathological. In most cases their function is simply to distract the audience from the fact that they are not interested in what you are saying. They have seen so many hooks they are just waiting for the next one. They have become interested only in the art of distraction.

This wouldn't be such a bad situation if it were not for two things. First, most of science comes entirely without hooks. And second, many of the audience don't need or want them. Many of the audience are interested in you and what you have to say, and find the constant recourse to the shocking or spectacular unsatisfying.

Saying 'much of science comes entirely without hooks' doesn't mean that much of science isn't interesting, it simply means that it cannot be understood or appreciated in terms of scene-bites. (A 'scene-bite' is the presentational equivalent to the interview sound-bite - I just made that up!) Are you frustrated by the effect that the obsession with the sound-bite has had on the way politicians handle interviews? Scene-bite fixation is in danger of doing the same damage to the art of science communication. Everything around the distraction is forgotten and, by the insistence that the distraction is necessary, everything around it is also devalued.

It is increasingly the case that young people (particularly, but not exclusively) think that science is just a sequence of spectacular or cool events because that is all they ever see presented as science - or at least it is all they remember. As a result you don't get invited to speak unless you are bringing something spectacular or cool. 'Interesting' doesn't get a look in, and as a result the kids who are interested are badly served.

Science distraction is the opposite of science engagement. It keeps a few uninterested bottoms on a few seats for a short time, but creates a shadow that obscures the really interesting stuff and hides you from those members of the audience you stood the best chance of reaching.

Friday, 18 July 2014

A Model System


Getting the right choice of words when describing anything is, truly, a tricky business. My style of research is often called "computational modelling" which roughly means "writing a computer program that mimics the function of a real system".

This idea seems straightforward enough. A flight simulator, for example, is a computational model. It gives pilots the impression that they are flying an aircraft, when what they are really doing is providing inputs to sets of equations that are a model of an aircraft. Notice that the model aircraft is also referred to as a flight simulator, so the words model and simulator are close relatives.

So, I work to write programs that model the brain. Does this mean I am building a brain simulator? Or even an artificial brain?

The key difference is the level of understanding. Flight simulators are almost indistinguishable from the real thing because aircraft are relatively simple systems that humans designed and built in the first place: we know how they work. In contrast we have absolutely no well formed idea of how large areas of the brain do what they do, not at the level that is necessary to produce a successful simulation of even the simplest sort, for even the simplest animals.

The work that I, and many others, do is designed precisely to help to develop, and test, new and imperfect ideas about how the brain might work, and any measure of success is very welcome! This sort of modelling - to explore hypotheses - is quite different to building a flight simulator which relies on well established and reliable science.

So my models are based on ideas, and if the models get even close to working, well, the ideas might have some merit. Models that are not based on clear underlying ideas are models of nothing. A red light should start to flash whenever the answer to "What idea are you testing?" gets dangerously close to "We just want to take the few things we know, multiply them half a billion times, wire it all up, and see what happens".
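
To give a flavour of what 'based on ideas' means in practice, here is a deliberately crude sketch; the idea under test is the textbook Hebbian rule that a connection strengthens when the cells on either side of it are active together (my choice of example, purely for illustration), and the model exists only to see what that idea implies.

    import random

    # The 'idea' under test: a connection strengthens when the cells on both
    # sides of it are active together (a textbook Hebbian rule, chosen here
    # purely as an illustration). Everything else is deliberately crude.
    LEARNING_RATE = 0.1

    def final_weight(correlation, steps=1000):
        """Grow a single connection weight; the postsynaptic cell follows the
        presynaptic one with the given probability, otherwise fires at random."""
        weight = 0.0
        for _ in range(steps):
            pre = random.random() < 0.5
            post = pre if random.random() < correlation else random.random() < 0.5
            weight += LEARNING_RATE * pre * post
        return weight

    print(final_weight(correlation=0.9))  # strongly correlated pair: weight grows fast
    print(final_weight(correlation=0.0))  # unrelated pair: weight grows about half as fast

The numbers are not the point; the point is that the hypothesis is written down explicitly, in one line, where it can be inspected, criticised and changed.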

Model planes, and model brains - two almost unrelated concepts. We seem to be getting them confused.

Monday, 1 July 2013

The brain, like Gaul, is divided into parts

For a few centuries people have cut brains up and named the various lumps, bumps, holes, and sheets that they found. This could be likened to early astronomy. Let us call this desire to name bits of the brain cerebonomy, just to have the joy of coining a new word! Cerebonomy is simply naming the brain parts, like the naming of stars, without any reference to what a star is or why it shines. More recently, with the aid of microscopes, researchers have been able to discern a multitude of layers, regions and divisions within each lump or bump that had previously been given a name. Each layer and region is then also given a name.  This makes the simple profusion of names one of the chief obstacles to reading brain-related literature, and there is no way I can think of to simplify things.

Here are some bits you should know in a simple glossary.
  • The brain stem is the bit of your brain that is obviously an extension of your spinal cord. It is not divided into 'left' and 'right' sections like the rest of the brain. On a side view of a human brain you can see just a little bit of the brain stem sticking out the bottom like the stem of a cauliflower, but it carries on up inside almost to the centre of the brain, and indeed incorporates some (or all) of what is usually called the mid-brain or mesencephalon.
  • The cerebral cortex (confusingly often just called cortex) is the folded 'cauliflower' bit that is the most visible feature of a human brain. It is a sheet that isn't very thick (cortex is from the Latin for the 'bark' of trees); in humans it is around 4mm thick. It is divided into the left and right hemispheres although these are joined in various places by large, fast bundles of connections. It is the 'greyest' part of the brain, indicating that its connections are dense and mostly very short-range.
  • The next bit of interest, the diencephalon, refers to most of the bits that you cannot see from the outside (because they are covered by the cortex) and which are joined to the top of the brain stem and the mid-brain. If you turn the brain to look directly up the brain stem, you can catch a glimpse of part of the diencephalon.
  • An obvious feature of the human brain is the cerebellum.  It is almost like a separate little brain that sticks prominently out of the back of a human brain and is joined to the brain stem. Its surface is folded - more narrowly folded than the cerebral cortex - and consists of a single sheet of tissue in a folded arrangement like an accordion.
These are the major divisions of the brain - not really like the divisions of Gaul, more like the continents of the globe. Like the continents there aren't too many of them!

Part of: Martin’s Vastly Oversimplified and Woefully Incomplete Guide to Everything in the Brain as featured on the Brainsex website.

Saturday, 29 June 2013

50 Shades of Brain

People often talk about grey matter and white matter without making it clear what they mean; even a fictional character like Hercule Poirot, for example, makes references to his 'little grey cells'. The brain can be usefully divided into areas where the connections are very dense and so short that speed isn't much of an issue (grey matter); and other areas where the connections are less dense, but faster, and often cover longer distances, called the white matter.

A dead brain does look sort of grey and white in different areas. A living brain is mostly browny-pink, because it is well supplied with blood vessels, while the grey matter is slightly darker (browner).

The white matter looks white (or paler) because it is high in fat. The brain uses fat to surround and insulate the connections, to stop spikes from 'leaking away' as they travel to the synapses. So areas of the brain responsible for the long-range connections have very fatty axons, and these regions appear paler. And in contrast, areas containing only very short-range connections, which need very little fat, appear darker or grey.

To complete the picture there is a small amount of 'black matter' which is a distinct area of the brain, deep inside, which appears darker than the grey matter because the cells have pigment in them. This area is always referred to by its Latin designation substantia nigra - in contrast to grey and white matter which are hardly ever called substantia grisea and substantia alba.

Oh yes, and there is a blue bit. The locus coeruleus. I have never seen one but apparently it looks a bit blue.

Part of: Martin’s Vastly Oversimplified and Woefully Incomplete Guide to Everything in the Brain as featured on the Brainsex website.

Saturday, 22 June 2013

The chemical truth!

In other posts we have established that the brain is a network of cells which almost (but not quite) touch each other, that these cells send out pulses of electrochemical activity if there is enough of the right kind of activity around them, and that they have special structures called synapses to manage the communication between them.

Synapses are where the action truly is, and any discussion of how the brain works has to have them at its heart. It is useful to get a primitive idea of what they do by splitting it up into three time scales:

On the very short time scale (thousandths of a second), activity in the synapses is dominated by neurotransmitters. These are chemicals that are produced inside the neurons and are responsible for carrying the wave of activity over the synaptic cleft to the next neuron. These chemicals are not very 'famous' so we don't really need to know what they are called at this stage. They are either excitatory - that is, they increase the likelihood that the post-synaptic neuron will fire - or they are inhibitory. They are released only when the spike reaches the synapse.

On a longer time scale (seconds to hours or even days), the behaviour of the synapses is changed by another, much more famous, group of chemicals called neuromodulators. These are not produced in the synapses they affect, but are usually made in other parts of the brain.

Many neuromodulators are well known because they are linked in the popular imagination to particular behaviours: adrenaline (fight and flight), dopamine (reward and pleasure), histamine (allergic reactions), oxytocin (love and bonding), serotonin (happiness!), melatonin (sleep cycles), and many, many others.

In reality, things are much more complicated than this picture (one modulator - one behaviour) suggests! Things are further complicated by the fact that many neuromodulators are also neurotransmitters (although not necessarily in the brain) and many of them are hormones with wide-ranging effects apart from their effects on synapses. A chemical like oestrogen for example (slightly controversial to include this as a neuromodulator - but justified I think) has hundreds of well-documented effects on pretty much every part of the body.

On the longest time scale (hours, and days, and months), synapses actually appear and disappear, are strengthened and weakened, grow and shrink. (And there may be hundreds of other behaviours yet to be documented.) This is controlled by a vast array of factors including the neuromodulators, genetics and epigenetic control. This is only just beginning to be documented, and is certainly not well understood. The longer ("developmental" some might say) time scale is more or less undiscovered country for neuroscientists at a cellular level, and patterns that emerge on this time scale are still something of a mystery.

Part of: Martin’s Vastly Oversimplified and Woefully Incomplete Guide to Everything in the Brain as featured on the Brainsex website.

Tuesday, 4 June 2013

The Neural Hypothesis

The brain is an organ made up of cells, just as any other tissue or organ in the body is made up of specialized cells. The heart is composed of heart cells that are found nowhere else in the body, the liver of liver cells, and so it is with the brain.

The most famous type of brain cell is the neuron. Actually there are hundreds of different types of neuron but we will ignore that here. Neurons differ from all other cells in the body because they send out projections to meet each other to form a complicated network capable of fast and flexible communication. All cells are capable of communicating with each other at some level, but we are talking about something much faster and more flexible found only in cells of the nervous system.

For a long time people thought that the cells of the brain (and the rest of the nervous system) were all joined up in a huge net; this was called the 'reticular hypothesis'. But towards the end of the 19th century it became clear that they don't actually join up, and that each neuron was separate and a "fully autonomous physiological canton" (Cajal, 1888). Thus was born the neuron hypothesis.

(We will ignore, for the moment, all brain cells that are not neurons - although it turns out that this is probably a big mistake.)

So, the neuron sends out information to other neurons down a thick-ish fibre called an 'axon', and gathers information from its surroundings, through thin thread-like projections called 'dendrites'. Dendrites and axons are usually, but not always, on opposite sides of the cell body which, if you look at pictures, is the 'blob' in the middle.

The most obvious sort of activity that is carried away from the neuron by the axon is the action potential, which is a spike of electrochemical activity. These spikes, or clicks, are the usual form in which information is encoded and moved around the brain. In the places where an outgoing axon gets close to, but never quite touches, a receiving dendrite, there is a special structure that controls the passage of information across the gap, or cleft, called the synapse.

You won't find this in many textbooks, but my money, and all the smart money, is on the synapse (which is not yet that well understood) being the key to much of the really clever stuff that happens in the brain.

From: Martin’s Vastly Oversimplified and Woefully Incomplete Guide to Everything in the Brain as featured on the Brainsex website.

Thursday, 16 May 2013

Detecting activity in the brain

So, as has been mentioned elsewhere, neurons pick up the pulses or spikes of activity that travel away down the axons of their neighbouring neurons. This activity jumps the gap at the synapse on to the dendrites of the receiving neuron. The activity the neuron picks up can be either excitatory (making it more likely to generate its own spike) or inhibitory, but if there is enough excitatory activity the neuron responds by generating a spike of its own that travels away down its own axon.
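
One standard, highly simplified way of turning that paragraph into code is the 'leaky integrate-and-fire' neuron. The sketch below is purely illustrative and every number in it is arbitrary.

    # A minimal 'leaky integrate-and-fire' neuron. Excitatory input pushes an
    # internal level up, inhibitory input pushes it down, the level leaks back
    # towards rest, and the neuron fires a spike of its own whenever a
    # threshold is crossed. All the numbers here are arbitrary.
    THRESHOLD = 1.0
    LEAK = 0.9            # fraction of the level carried over to the next step

    def spike_times(inputs):
        level, spikes = 0.0, []
        for t, drive in enumerate(inputs):
            level = LEAK * level + drive   # excitation is positive, inhibition negative
            if level >= THRESHOLD:
                spikes.append(t)           # the neuron generates its own spike...
                level = 0.0                # ...and resets
        return spikes

    weak = [0.05] * 50                     # a little excitation: never reaches threshold
    strong = [0.3] * 50                    # enough excitation: fires repeatedly
    mixed = [0.3, -0.3] * 25               # excitation cancelled by inhibition: silent

    print(spike_times(weak), spike_times(strong), spike_times(mixed))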

The spikes are like Mexican waves of chemicals moving in and out across the walls of the axon, which is hollow. The atoms that move in the wave all carry an electrical charge, so the propagating wave is, in some ways, like an electrical current in a wire. However this analogy can be pushed too far. The brain is not an electrical device in the usual sense (which would involve electrons moving through a solid or a gas); the brain is a liquid-phase machine and the spikes are movements of charged atoms - or ions, as they are known.

The only way to detect an individual spike in an individual neuron is to stick a glass needle into the brain and get it as close to the neuron as possible. This is fraught with difficulty and interpretation of the results can be very tricky.

However, groups of neurons tend to fire more or less at the same time, and the axons of the group are, more or less, all pointing in the same direction in many cases. The total resulting current is large enough to be detected by placing electrodes on the scalp. This technique is known as EEG (electroencephalography). The EEG is incredibly useful because it records exactly when the activity happens but, frustratingly, it is very difficult to pin down exactly where the signal came from - only a very rough answer is usually possible. And in any case this technique is limited to activity that is near the skull (mostly the outside 4mm or so of the brain known as the cortex).

If something causes the brain to generate more spiking activity than usual, then, between 2 and 6 seconds later, there is a corresponding increase in blood flow in the area of activity. Although this increase in blood flow happens well after the event, doesn't directly measure the activity of the neurons, and contains no fine timing detail, it does have the huge advantage of being easy to locate using a technique known as MRI (magnetic resonance imaging).

From: Martin’s Vastly Oversimplified and Woefully Incomplete Guide to Everything in the Brain as featured on the Brainsex website.

Tuesday, 11 September 2012

Scientists must talk to people other than scientists!

(This piece appeared in substantially the same form in the BSA magazine but the web version has now been taken down so it is archived here.)

I have been sharing my scientific proclivities, in public, since I was 13. My early enthusiasm for science was ignited by space exploration, the rise of micro-electronics, and the promise of unlimited energy from artificial suns. It has continued to be some part of my everyday work almost every day since; in industry and further education; in schools, prisons and drop-in centres for disabled people; at public events and evening classes; in engineering firms, and at the Workers’ Education Association.

I didn’t apply for my first research job until 2001. It was immediately obvious that many academics, who were simply disinclined or already adjusting to the increasing emphasis on undergraduate teaching, saw engagement with the wider public as a mere side-show.

Of course researchers know that the funding councils and big industries that pay for research don't print money. They also know that we live in an open, connected society. So any attempt to ignore how the funding bodies get their money is, as described by Professor Brian Cox at a British Science Association Conference recently, 'myopic'.

You cannot blame academics for being short sighted. We have had decades of short-term contracts, the pell-mell pursuit of scarce posts via a good publication record, and increasing pressure to secure funding piled on top of the demand for excellence in undergraduate teaching. And anyway, when I emailed a colleague recently to ask for someone to represent their research group at a university-sponsored public event, he said one of his postdocs might be willing, but that it was 'outside her job description'.

He is absolutely right – it is; the myopia is, by omission, part of the contract.

I am lucky to be working alongside senior colleagues who can see that there is value in my continuing outreach activities. When the new Cognition Institute at Plymouth University came into existence, I successfully applied for the first research fellowship at the university that incorporated an explicit public engagement remit. For me, at least, it is inside my job description. I regarded this as a small victory despite the short-term, part-time contract.

Seven years ago I sailed too close to the enchanted islands of the public engagement community and, because I had no one to tie me to the mast, I was lured onto the rocks by the sirens (they were called Sharon, Gina, and Timandra). I took part in a Science Communication competition called Famelab, and being a finalist led to my first invitation to speak at the Cheltenham Science Festival.

This opened my eyes to many more disparate routes through which academics can develop ideas and cooperate on projects: national competitions, open-mic events, citizen science, and many others. If we are to reach the widest audience, it is essential that the projects we support are diverse and inventive.

Not all academics will want to get involved in any of these, but I have been surprised by how sceptical, or dismissive, many are of their value. This is particularly true of my regular support for science and maths in primary schools, which I have recently been told is 'pointless'. There is certainly a lot of work yet to be done.

Science needs to foster a joint enterprise with the society that funds it, and which benefits from its work. When I say this out loud I still tend to receive blank looks and awkward silences. This isn’t just about publicity for your research, getting your face in the media, building your CV, or meeting a grant deliverable.

If you believe that democracy is strengthened when the people who vote understand the issues, then it is a matter of citizenship. Only the research community can take responsibility for this, and as a result universities must commit to taking a leading role.