IEEE Brain Webinar Series


Learn from top subject matter experts in brain research and neurotechnology. The objective of the IEEE Brain Webinar Series is to be a resource for learning about engineering and technology advancements that improve our understanding of the brain, treat diseases, and improve the human condition. We host a technical webinar approximately once every two months.

Upcoming Webinars

Motor Imagery BCI for Cognitive Profiling in Disorders of Consciousness and Prospects for Direct Speech BCI with Imagined-speech

Prof. Damien Coyle, Intelligent Systems Research Centre, Ulster University

Thursday, 18 March 2021

This webinar will cover two current hot topics in EEG-based brain-computer interface research and research ongoing at the Intelligent Systems Research Centre.

Part 1 will focus on the assessment of patients with prolonged disorders of consciousness (PDoC). A motor imagery brain-computer interface (MI-BCI) may facilitate willful modulation of sensorimotor oscillations in patients with PDoC, enabling assessment of awareness and question answering by imagining movement and thus, potentially, movement-independent neuropsychological assessment. We evaluated this potential with a cohort of PDoC patients (n=24). Whilst the results revealed that patients across the PDoC spectrum have the capacity to learn to modulate sensorimotor rhythms and respond to closed questions, differences in patient cognition are more likely to be revealed after extended training with feedback and more intensive question-and-answer sessions.
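The sensorimotor-rhythm modulation described above is typically detected as a drop in mu-band (8-12 Hz) power over the motor cortex contralateral to the imagined movement. As a toy illustration only (not the study's actual pipeline; the sampling rate, C3/C4 channels, synthetic data, and nearest-mean classifier are all assumptions), a minimal numpy sketch of left- vs. right-hand imagery classification from mu-band power:

```python
import numpy as np

FS = 250  # assumed EEG sampling rate (Hz)

def mu_band_power(epoch, fs=FS, band=(8.0, 12.0)):
    """Periodogram power of one channel in the mu band."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / epoch.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def features(epochs):
    """Log mu-band power at the two channels of each (2, n_samples) epoch."""
    return np.log([[mu_band_power(ch) for ch in ep] for ep in epochs])

# Synthetic 1-s epochs: left-hand imagery attenuates mu over contralateral
# C4; right-hand imagery attenuates mu over C3 (gains are illustrative).
rng = np.random.default_rng(0)
t = np.arange(FS) / FS

def epoch(gain_c3, gain_c4):
    mu = np.sin(2 * np.pi * 10 * t)  # 10 Hz sensorimotor rhythm
    noise = 0.3 * rng.standard_normal((2, FS))
    return np.vstack([gain_c3 * mu, gain_c4 * mu]) + noise

left = [epoch(1.0, 0.3) for _ in range(20)]
right = [epoch(0.3, 1.0) for _ in range(20)]
X = features(left + right)
y = np.array([0] * 20 + [1] * 20)

# Nearest-class-mean classifier on the two log-power features
means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - means) ** 2).sum(axis=-1), axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

A real MI-BCI pipeline would add spatial filtering (e.g. common spatial patterns), artifact handling, and held-out evaluation.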

Part 2 will focus on direct speech BCIs. Several recent studies have harnessed overt speech to examine linguistic communication through neural signals. While imagined speech is the holy-grail modality for a BCI based on language, systematic study of imagined speech has been relatively sparse. The phenomenology of imagined speech, its relationship to overt speech, and the effect of different stimuli on efforts to elicit and label its neural correlates so that algorithms can learn to detect and classify it are currently not well understood. Employing the first picture-naming paradigm in speech BCI research, this talk will show that the effects of stimuli/cues on speech decoding from EEG are highly significant, whilst linguistic properties of semantics and syntax are not, and that overt speech is easier to decode from EEG than imagined speech.

View for free until 01 April

Past Webinars

Seeing the Sound: Optical Neural Interfaces for In Vivo Neuromodulation

Dr. Guosong Hong, Assistant Professor, Stanford, Materials Science and Engineering and Neurosciences Institute
Tuesday, 23 February 2021

Optogenetics has transformed experimental neuroscience by manipulating the activity of specific cell types with light, enabling in vivo neuromodulation with millisecond temporal resolution. Visible light with wavelengths between 430 nm and 640 nm is used for optogenetics, limiting penetration depth in vivo and resulting in an invasive fiber-tethered interface that damages the endogenous neural tissue and constrains the animal’s free behavior. In this talk, I will present two recent methods to address this challenge: “sono-optogenetics” and “macromolecular infrared nanotransducers for deep-brain stimulation (MINDS)”. In the first method, we demonstrate that mechanoluminescent nanoparticles can act as circulation-delivered nanotransducers to convert sound into light for noninvasive optogenetic neuromodulation in live mice. In the second method, we demonstrate 1064-nm near-infrared-II light can penetrate the brain to reach 5-mm depths for modulating neural activity in tether-free, freely behaving animals. I will present an outlook on how new optical neural interfaces may advance neuroscience research by reducing the invasiveness and mechanical restraints in live animals and even humans.

Recording coming soon
Brain Machine Interfaces: Concept to Clinic
Dr. Vikash Gilja, Associate Professor, Department of Electrical & Computer Engineering and Neuroscience Graduate Program, University of California, San Diego (UCSD)
Wednesday, 28 October 2020

Over the last two decades, neural prostheses that aim to restore lost motor function have moved quickly from concept to laboratory development and clinical demonstration. In parallel, advances in neural interfacing technologies poised to broaden the clinical application of these prostheses are under active development in both academic and industry settings. In this talk, I will provide a broad overview of the technical history of these neural prostheses, from the enabling neurophysiology insights to work currently being conducted. Additionally, I will describe research within my own lab aimed at augmenting the performance of neural prostheses and expanding their potential application space. This work will highlight key enabling research collaborations in multiple clinical settings and the development of complementary animal models that accelerate development. We will take a few deep dives to describe the application of statistical signal processing, machine learning, and algorithm design to this research domain.

Watch the recorded webinar >>
Optimizing Control and Learning in Neural Interfaces
Dr. Amy Orsborn, Clare Boothe Luce Assistant Professor in Electrical & Computer Engineering and Bioengineering, University of Washington
Tuesday, 30 June 2020

Direct interfaces with the brain provide exciting new ways to restore and repair neurological function. For instance, motor Brain-Machine Interfaces (BMIs) can bypass a paralyzed person's injury by repurposing intact portions of their brain to control movements. Recent work shows that BMIs do not simply "decode" subjects' intentions - they create new systems that subjects learn to control. To improve BMI performance and usability, we must therefore understand how to optimize learning and control in these systems. I will present a survey of recent work and new directions exploring how the design of BMI systems influences BMI performance. I'll touch on the importance of control-loop design, brain-decoder interactions and multi-learner approaches, and network-informed neural signal selection. These examples highlight the role of learning and closed-loop design in BMIs, and demonstrate the promise of engineering approaches based on optimizing learning and control along with information "decoding."
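A common building block behind the "decoding" the abstract mentions is a regularized linear map from binned firing rates to intended kinematics. A minimal numpy sketch on synthetic cosine-tuned units (the population size, tuning model, and ridge penalty are illustrative assumptions, not the speaker's methods):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic session: 500 time bins of 30 cosine-tuned units driven by a
# 2-D cursor velocity (population size and noise level are illustrative).
T, N = 500, 30
vel = rng.standard_normal((T, 2))               # target kinematics
tuning = rng.standard_normal((N, 2))            # preferred-direction weights
rates = vel @ tuning.T + 0.5 * rng.standard_normal((T, N))

# Ridge-regression decoder: map binned firing rates back to velocity.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(N), rates.T @ vel)

pred = rates @ W                                # decoded kinematics
r2 = 1 - ((vel - pred) ** 2).sum() / ((vel - vel.mean(0)) ** 2).sum()
print(f"decoder R^2: {r2:.2f}")
```

Closed-loop BMIs go beyond this open-loop fit: the decoder and the subject's neural strategy adapt to each other, which is exactly the learning dynamic the talk addresses.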

Watch the recorded webinar >>

A Large-scale Standardized Physiological Pipeline Reveals Functional Organization of the Mouse Visual Cortex
Dr. Saskia de Vries, Assistant Investigator, Allen Institute for Brain Science
Tuesday, 31 March 2020

An important open question in visual neuroscience is how visual information is represented in cortex. Foundational results characterized neural coding by assessing responses to artificial stimuli, under the assumption that responses to gratings, for example, capture the key features of neural responses and that deviations, such as extra-classical effects, are relatively minor. The limited predictive power of models built on these responses has renewed these questions. It has been suggested that this characterization of visual responses has been strongly influenced by the biases inherent in recording methods and the limited stimuli used in experiments. In creating the Allen Brain Observatory, we sought to reduce these biases by recording large populations of neurons in the mouse visual cortex using a broad array of stimuli, both artificial and natural. This open dataset is a large-scale, systematic survey of physiological activity in the awake mouse cortex recorded using 2-photon calcium imaging. Neural activity was recorded in cortical neurons of awake mice presented with a variety of visual stimuli, including gratings, noise, natural images, and natural movies. The dataset consists of over 63,000 neurons recorded in over 1,300 imaging sessions, surveying 6 cortical areas, 4 cortical layers, and 14 transgenically defined cell types (Cre lines).

We found that visual responses throughout the mouse cortex are highly variable. Using the joint reliabilities of responses to multiple stimuli, we classify neurons into functional classes and validate this classification with models of visual responses. Only 10% of neurons in the mouse visual cortex show reliable responses to all of the stimuli used and are reasonably well predicted by linear-nonlinear models. The remaining neurons fall into classes characterized by responses to specific subsets of the stimuli, and the neurons in the largest class are not reliably responsive to any of the stimuli. These classes reveal a functional organization within the mouse visual cortex wherein putative dorsal areas show specialization for visual motion signals.
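The reliability-based classification described above can be sketched in a few lines: score each neuron's trial-to-trial reliability per stimulus, then assign it to the set of stimuli it responds to reliably. A toy numpy version (the reliability metric, the 0.3 threshold, and the synthetic data are assumptions for illustration, not the Allen Institute's actual criteria):

```python
import numpy as np

rng = np.random.default_rng(2)

def reliability(trials):
    """Mean pairwise correlation of responses across repeated trials.
    trials: (n_repeats, n_timepoints) array for one stimulus."""
    c = np.corrcoef(trials)
    return c[np.triu_indices_from(c, k=1)].mean()

def responses(signal_gain, n_repeats=10, n_t=100):
    """Repeated responses = scaled shared template + trial noise."""
    template = rng.standard_normal(n_t)
    return signal_gain * template + rng.standard_normal((n_repeats, n_t))

# Three synthetic neurons: reliable for gratings only, for movies only,
# or for nothing (gains are illustrative).
neurons = {
    "gratings-only": {"gratings": responses(3.0), "movies": responses(0.0)},
    "movies-only":   {"gratings": responses(0.0), "movies": responses(3.0)},
    "none":          {"gratings": responses(0.0), "movies": responses(0.0)},
}

THRESH = 0.3
classes = {}
for name, resp in neurons.items():
    classes[name] = sorted(s for s, tr in resp.items()
                           if reliability(tr) > THRESH)
    print(name, "->", classes[name] or "no reliable responses")
```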

Watch the recorded webinar >>
Multimodal Imaging in Understanding Brain Diseases
Dr. Ruiqing Ni, Junior Group Leader, Institute for Biomedical Engineering, ETH Zurich & University of Zurich
Tuesday, 29 October 2019

The advances in neuroimaging over the last decades have bridged the translational gap and enabled our understanding of the brain under physiological and disease conditions. Multiscale and multimodal imaging techniques such as positron emission tomography, magnetic resonance imaging, and optoacoustic and fluorescence imaging have provided molecular, structural, and functional insights at the cellular, circuit, and whole-brain levels. The use of imaging biomarkers has also assisted the early and accurate diagnosis of brain disorders and facilitated personalized medicine. This webinar will focus on the development of novel brain imaging techniques, as well as their application in the field of Alzheimer’s disease. Multimodal high-resolution imaging tools were developed for non-invasive visualization of neuropathology (amyloid-beta and tauopathy), brain connectivity, and atrophy in mouse models of Alzheimer’s disease.

Watch the recorded webinar >>
Modeling the Representation of Object Boundary Contours in Human fMRI Data
Dr. Mark Lescroart, Assistant Professor, Cognitive & Brain Sciences Group, Department of Psychology, University of Nevada, Reno

Tuesday, 13 August 2019

The human visual system consists of a hierarchy of areas, each of which represents different features of the visual world. Recent studies have revealed that most brain areas—and even many individual neurons—represent information about multiple visual features. Thus, a complete model of the brain must specify the relative importance of multiple visual features across the visual hierarchy. This talk will describe our work to estimate the importance of object boundary contours relative to other features.
Boundary contours define the edges of figural objects in scenes, and figure/ground segmentation has long been held to be a critical process in human vision. However, the relative importance of boundary contours compared to both lower- and higher-level features (e.g. motion energy and visual categories) remains unknown. To address this issue, we measured fMRI responses while human subjects viewed two sets of movies that varied in many feature dimensions: rendered movies of artificial scenes and cinematic movies. We modeled responses to both sets of movies independently using the same three models: models of motion energy, object boundary contours, and visual categories. We used the encoding models to predict withheld fMRI data, and used variance partitioning to determine whether the various models explained unique or shared variance in each dataset. We found that the pattern of unique variance explained by the three models was qualitatively consistent across both datasets, with unique variance explained by boundary contours in Lateral Occipital cortex and other areas. However, the three models also shared substantially more variance in the cinematic movies, likely due to correlations between model features. For example, much of the motion energy in the cinematic movies was a result of people moving. The shared variance between all three models in the cinematic movies in particular highlights the need for complex stimulus sets in which features in different models are de-correlated from each other.
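The variance partitioning used above reduces to simple set arithmetic on model R² values: fit each feature space alone and jointly, then the variance unique to A is R²(A∪B) − R²(B) and the shared variance is R²(A) + R²(B) − R²(A∪B). A toy numpy sketch with two partly correlated synthetic feature spaces (the study predicted withheld data; for brevity this uses in-sample ridge R², and all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def ridge_r2(X, y, lam=1.0):
    """In-sample ridge R^2 (the study evaluated on withheld data)."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return 1 - (y - X @ w).var() / y.var()

# Two feature spaces that share two columns, analogous to motion energy
# and visual categories being correlated in cinematic movies.
T = 1000
shared = rng.standard_normal((T, 2))
A = np.hstack([shared, rng.standard_normal((T, 3))])
B = np.hstack([shared, rng.standard_normal((T, 3))])
y = A @ rng.standard_normal(5) + B @ rng.standard_normal(5) \
    + rng.standard_normal(T)

r2_a, r2_b = ridge_r2(A, y), ridge_r2(B, y)
r2_ab = ridge_r2(np.hstack([A, B]), y)

unique_a = r2_ab - r2_b          # variance only A explains
unique_b = r2_ab - r2_a          # variance only B explains
shared_ab = r2_a + r2_b - r2_ab  # variance either model explains
print(f"unique A={unique_a:.2f}  unique B={unique_b:.2f}  "
      f"shared={shared_ab:.2f}")
```

The nonzero shared term here mirrors the talk's point: when model features are correlated in the stimuli, their explained variance overlaps and cannot be attributed to either model alone.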

Watch the recorded webinar >>
Neurophotonic Systems: From Flexible Polymer Implants to in situ Ultrasonically-driven Light Guides
Dr. Maysam Chamanzar, Assistant Professor of Electrical and Computer Engineering, Carnegie Mellon University

Tuesday, 18 June 2019

Understanding the neural basis of brain function and dysfunction may inform the design of effective therapeutic interventions for brain disorders and mental illnesses. Optical techniques have recently been developed for structural and functional imaging as well as targeted stimulation of neural circuits. One of the challenges of the optical modality is delivering light deep into the brain tissue in a non-invasive, or at least minimally invasive, way.

Scattering and absorption prevent deep penetration of light in tissue and limit light-based methods to superficial layers of the tissue. To overcome this challenge, implantable photonic waveguides such as optical fibers or graded-index (GRIN) lenses have been used to deliver light into the tissue or collect photons for imaging. Existing large and rigid optical waveguides cause damage to the brain tissue and vasculature. In this talk, Dr. Maysam Chamanzar will discuss his research on developing next-generation optical neural interfaces. First, Dr. Chamanzar will introduce a novel compact flexible photonic platform based on biocompatible polymers, Parylene C and PDMS, and GaN active light sources for optogenetic stimulation of neural circuits with high spatiotemporal resolution. This photonic platform can be monolithically integrated with implantable neural probes.

Then, Dr. Chamanzar will discuss his recent work on developing a novel complementary approach to guide and steer light in the brain using non-invasive ultrasound. Dr. Chamanzar will show that ultrasound waves can sculpt virtual graded-index (GRIN) waveguides in the tissue to define and steer the trajectory of light without physically implanting optical waveguides in the brain.

These novel neurophotonic techniques enable high-throughput bi-directional interfacing with the brain to understand the neural basis of brain function and design next generation neural prostheses.

Watch the recorded webinar >>

Euisik Yoon, Ph.D.
Professor, Dept. of Electrical Engineering and Computer Science
Professor, Dept. of Biomedical Engineering
Director, NSF International Program for Advancement of Neurotechnology
University of Michigan
Fiberless Optoelectrodes for Selective Optical Neuromodulation at Cellular Resolution
Tuesday, 30 April 2019

This talk will review the evolution of Michigan neural probe technologies toward scaling up the number of recording sites, enhancing recording reliability, and introducing multi-modalities in neural interfaces, including optogenetics. Modular system integration and compact 3D packaging approaches have been explored to realize high-density neural probe arrays for simultaneous recording of more than 1,000 channels. In order to obtain optical stimulation capability, optical waveguides were monolithically integrated on the silicon substrate to bring light to the probe shank tips. Excitation and inhibition of neural activity could be successfully validated by switching the wavelengths delivered to the distal end of the waveguide. To scale the number of stimulation sites, multiple micro-LEDs were directly integrated on the probe shank to achieve high spatiotemporal modulation of neural circuits. Independent control of distinct cells ~50 μm apart, and of differential somato-dendritic compartments of single neurons, was demonstrated in the CA1 pyramidal layer of anesthetized and freely-moving mice.

Watch the recorded webinar >>

Anton Arkhipov, Ph.D.
Associate Investigator
Allen Institute for Brain Science
Data-Driven Modeling of Brain Circuits Based on a Systematic Experimental Platform
Wednesday, 20 February 2019

The Mindscope project at the Allen Institute aims to elucidate mechanisms underlying cortical function in the mouse, focusing on the visual system. This involves concerted efforts of multiple teams characterizing cell types, connectivity, and neuronal activity in behaving animals. An integral part of these efforts is the construction of models of the cortical tissue and cortical computations. To achieve this, multi-modal experimental data are integrated into a highly realistic 230,000-neuron model of the mouse cortical area V1. We perform systematic comparisons of simulated responses to in vivo experiments and investigate the structure-function relationships in the models to make mechanistic predictions for experimental testing. To enable this work, we developed a software suite called the Brain Modeling ToolKit (BMTK) and a modeling file format called SONATA. These tools, the models, and the simulation results are all being made freely available to the community via the Allen Institute Modeling Portal.

Watch the recorded webinar >>