NeuroGen: Activation optimized image synthesis for discovery neuroscience. Academic Article

Overview

abstract

  • Functional MRI (fMRI) is a powerful technique that has allowed us to characterize visual cortex responses to stimuli, yet such experiments are by nature constructed from a priori hypotheses, are limited to the set of images presented to the individual while in the scanner, are subject to noise in the observed brain responses, and may vary widely across individuals. In this work, we propose a novel computational strategy, which we call NeuroGen, to overcome these limitations and develop a powerful tool for human vision neuroscience discovery. NeuroGen combines an fMRI-trained neural encoding model of human vision with a deep generative network to synthesize images predicted to achieve a target pattern of macro-scale brain activation. We demonstrate that the reduction of noise that the encoding model provides, coupled with the generative network's ability to produce images of high fidelity, results in a robust discovery architecture for visual neuroscience. Using only a small number of synthetic images created by NeuroGen, we demonstrate that we can detect and amplify differences in regional and individual human brain response patterns to visual stimuli. We then verify that these discoveries are reflected in the several thousand observed image responses measured with fMRI. We further demonstrate that NeuroGen can create synthetic images predicted to achieve regional response patterns not achievable by the best-matching natural images. The NeuroGen framework extends the utility of brain encoding models and opens up a new avenue for exploring, and possibly precisely controlling, the human visual system.
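The core loop the abstract describes (optimize a generator's latent code so that an encoding model's predicted activation matches a target pattern) can be sketched with toy stand-ins. Everything below is an illustrative assumption, not the paper's actual models: `G` plays the role of the deep generative network and `E` the fMRI-trained encoding model, both reduced to linear maps so the activation-matching objective has a simple closed-form gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's networks):
#   G: latent code -> image    (deep generative network)
#   E: image -> regional fMRI activation (encoding model)
latent_dim, image_dim, n_regions = 8, 64, 4
G = rng.normal(size=(image_dim, latent_dim))
E = rng.normal(size=(n_regions, image_dim)) / np.sqrt(image_dim)

# A hypothetical target pattern of macro-scale regional activation.
target = np.array([1.0, -0.5, 0.0, 2.0])

def loss(z):
    """Squared distance between predicted and target activation."""
    pred = E @ (G @ z)
    return float(np.sum((pred - target) ** 2))

# Gradient descent on the latent code: each step nudges z so the
# encoder's predicted response moves toward the target pattern.
A = E @ G                       # composed map: latent -> activation
z = np.zeros(latent_dim)
for _ in range(2000):
    grad = 2.0 * A.T @ (A @ z - target)   # d(loss)/dz
    z -= 0.005 * grad

synthetic_image = G @ z         # the "NeuroGen-style" synthesized image
print("final activation-matching loss:", round(loss(z), 6))
```

In the actual framework the gradient flows through deep nonlinear networks via backpropagation rather than a closed form, but the objective (minimizing the mismatch between predicted and target regional activation over the generator's latent space) is the same shape as this sketch.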

publication date

  • December 20, 2021

Research

keywords

  • Deep Learning
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging
  • Visual Cortex

Identity

PubMed Central ID

  • PMC8845078

Scopus Document Identifier

  • 85121366645

Digital Object Identifier (DOI)

  • 10.1016/j.neuroimage.2021.118812

PubMed ID

  • 34936922

Additional Document Info

volume

  • 247