
Challenging the Nature BBC Brain Games Study

The BBC published the results of a 'brain training' study in the journal Nature. Its show "Bang Goes the Theory" funded and conducted a study in which 8,600 subjects played a simple set of lightweight computer games for a minimal amount of time, and then declared to the world that "brain games don't work". With so many willing subjects involved, it is unfortunate that the opportunity to apply depth and rigor to the study, and to do a serious and responsible job, was wasted. The ill-founded experiment ultimately throws the baby out with the bathwater. It makes great propaganda for hyping a popular TV show with sound bites; however, the hype and rhetoric it generated are a disservice to those who could benefit from serious scientific work.

To be specific, the study showed only that the particular brain games developed for the test were no more effective than the tasks given to the control group. The results should not surprise anyone once you consider the tools and methods used. Let's review a few points:

1) The duration of the training program was very short (six tasks for a minimum of 10 minutes a day, three times a week, over a six-week period). In 2008, the BBC contacted HAPPYneuron for early advice on designing the experiment. At the time we advised the BBC that the training duration was too short and that no reportable conclusion could be drawn from an experiment of that length. This opinion was based directly on other studies and on our own extensive experience and analysis. It is well established that the minimum engagement needed to see any statistically significant effect from brain training is 90 days, with three sessions per week of at least 30 minutes each.

2) The study sought to demonstrate 'transferability', that is, a correlation between improvement on the training tasks and improvement on the benchmark tests. The application of meta-cognition is essential for transferability. It is a complex process (indeed the subject of many a Ph.D. thesis) and must be intently and explicitly designed into the training exercises. Two key factors necessary to achieve transferability were absent from the BBC experiment. These are:

a) a didactic approach that teaches the strategies and mechanisms that help develop meta-cognitive abilities for improvement
b) a multiplicity of situations (a large variety of exercises) in which the acquired meta-cognitive abilities can be re-applied across different exercises

The BBC experiment had just six exercises, certainly not enough for subjects to reapply the approaches they may have learned.

3) The choice of control group was interesting and arguably stacked the odds in favor of the conclusion from the outset. To determine whether an activity is effective, the ideal approach is to compare it with what people are likely to do in real life. Will people systematically and regularly perform, on an ongoing basis, explorative internet searches of the caliber given to the control subjects? The subjects were asked to research a set of obscure questions (e.g., in what year did Henry VIII die?) and then to order the answers chronologically. This required systematic Web research and exercised sequencing skills, language skills, reasoning, working memory, and other cognitive functions… arguably a cognitive task at least on par with the training exercises. So we are not surprised to see improvements within the control group as well, or a reduced difference between the trained groups and the control group. The study claims that "in all three groups the standardized effect sizes of the transfer effects were, at best, small, suggesting that any comparison (even with a control group who did nothing) would have yielded a negligible brain training effect in the experimental groups." Yes, but all this suggests is that if you combine a poor brain training program at a light dose with a cognitively stimulated control group, you limit the chance of seeing real improvement! At best, the study might have had the honesty and scientific integrity to concede: "That said, the possibility that an even more extensive training regimen may have eventually produced an effect cannot be excluded." Lastly, the experiment's reasoning benchmark was a grammatical reasoning test, yet none of the training tasks included language material (which was not the case for the control group's task).

To conclude, designing an effective brain training platform is a complex task requiring many essential elements; you cannot simply compile basic games and expect positive results from a minimal training duration. This is the difference between a leisure gaming activity and a properly designed brain training program. It is important to note, as many published studies* have shown, that brain training software is not the only way to receive broad cognitive stimulation and prevent cognitive decline. Cognitive stimulation can also be achieved through a cognitively rich way of life (traveling, gardening, reading, learning, etc.) or even by taking cognitive-enhancing supplements. What brain training software offers is another way to achieve this stimulation: a simple and inexpensive means of compensating for the decrease in cognitive activity that affects most people as they age.

So, we encourage people interested in the benefits of brain training to see through the sound bites and the veneer of this highly publicized conclusion from a poorly designed study, and to continue supporting the serious cognitive research under way at universities and hospital systems around the world.

Bernard Croisile, MD, Ph.D., and the HAPPYneuron scientific team

* Research Studies