11th June 2015
We chose to read the recent paper from Nick Turk-Browne's lab (deBettencourt et al., 2015; http://www.nature.com/neuro/journal/v18/n3/full/nn.3940.html) because we are interested in attention, variability in performance over trials, methods for training attention, neurofeedback, and multivariate pattern analysis.

In this experiment, 16 participants performed a selective sustained attention task in three separate sessions. In all cases, participants viewed a composite image made up of a superimposed face and scene, and had to monitor one stimulus category to detect frequent (90% of trials) targets (e.g. male faces). The three sessions took place on different days: two behavioral sessions (days 1 and 3) served to measure the effect of training, and a session during fMRI scanning (day 2) included both blocks similar to the behavioral sessions ('stable' blocks) and blocks of neurofeedback-based training. There were four stable blocks per run (6-9 runs per subject), which were used to train a logistic regression classifier to discriminate between faces and scenes.

As the main experimental manipulation, during training blocks the pattern classification was performed nearly immediately (with a one-scan delay), and the amount of classifier evidence for the attended category was used to provide feedback during the next scan, in the form of changing the strength of the face versus scene information in the composite stimulus. Interestingly, the authors chose to use the feedback as a means to amplify the consequences of attention waxing and waning during performance: information about the relevant stimulus was further impoverished when poor classifier output signaled lapses, and augmented when good classifier output signaled focused engagement.

This training was shown to have significant consequences. Participants performed better in the post- versus pre-scan behavioral session.
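As a rough sketch of how such a closed loop might work (this is not the authors' actual pipeline), one could train a logistic regression classifier on voxel patterns from the stable blocks and then map its evidence for the attended category onto the face/scene mixture of the next composite image. All names, the simulated data, the linear mapping, and the mixture bounds below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated voxel patterns from 'stable' blocks: label 1 = face, 0 = scene.
# (Purely synthetic data standing in for real fMRI patterns.)
n_trials, n_voxels = 200, 50
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :10] += 1.0  # give 'face' trials a distinguishable signature

# Classifier trained on stable-block data, as in the paper's general approach
clf = LogisticRegression(penalty="l2", C=1.0).fit(X, y)

def face_proportion(pattern, lo=0.17, hi=0.98):
    """Map classifier evidence for the attended (face) category onto the
    proportion of face information in the next composite image.
    Weak evidence -> a harder (less face) stimulus; strong evidence -> easier.
    The linear mapping and the lo/hi bounds are illustrative choices."""
    evidence = clf.predict_proba(pattern.reshape(1, -1))[0, 1]  # P(face)
    return lo + (hi - lo) * evidence

# During a feedback block: classify the most recent scan (one-scan delay)
# and use the result to set the next composite's mixture.
new_pattern = rng.normal(size=n_voxels)
alpha = face_proportion(new_pattern)
```

In this toy version, a lapse of attention (low P(face) evidence) drives `alpha` down, making the relevant face information fainter, which mirrors the amplifying logic the authors describe.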
Classifier performance also improved (first versus last run) in several areas, including ventral temporal cortex and basal ganglia, suggesting that multivariate patterns of brain activity became more separable through training. Finally, a network of frontoparietal areas was suggested to contribute most to the overall classification and to drive the training effects.

We greatly enjoyed discussing this paper. We agree that training basic attention functions can have important and widespread consequences for maintaining and promoting cognitive health in a variety of populations. We found the approach innovative and the methods rigorous. We especially liked the use of Neurosynth (http://neurosynth.org/) to create database-driven regions of interest, and the practice of running the experiments double-blind, with standardized instructions to participants.

In going through the details, we wondered about the challenges of accurate pattern classification based on limited data at the individual-participant level. We would have liked to see time courses from individual participants (and not only the group averages shown). We also wondered whether there would be 'threshold' effects: doing well on the task could get you stuck with easy stimuli in a self-perpetuating loop. And, to be pedantic, we felt that some claims about causality, derived only from correlations, overstepped the evidence.

Overall, this was great fun to discuss, and it even gave us some new ideas for how to approach some of these tricky training studies.
– See more at: http://www.brainandcognition.org/2015/06/16/closed-loop-training-of-attention-with-real-time-brain-imaging/