US Node special symposium

The brain is an exceedingly complex organ, with distinct organizational principles that govern its operation over a vast range of spatial and temporal scales. Neuroscience is the enterprise of capturing, cataloging, classifying, and understanding this complex interplay of genes, signals, circuits, and behavior across the organism’s lifetime. Neuroinformatics provides the computational infrastructure needed to collect, store, retrieve, and analyze these myriad experimental information streams in order to elucidate and predict the operational principles of the brain. This session will recap the progress of the INCF Programs and their efforts to establish a global neuroscience infrastructure, and will feature several US-based initiatives that are making domain-specific advances toward understanding “how the brain works”.

September 6 (Congress day 3)

13:30-17:30

13:30 Opening Greeting by Maryann Martone & David Kennedy

Part 1 - INCF Program Update

13:35 INCF Global Infrastructure for Neuroinformatics

  Sean Hill, Director of INCF

13:50 INCF Multi-Scale Modeling program

  Erik De Schutter, Okinawa Institute of Science and Technology, Japan

14:05 INCF Digital Brain Atlasing program

  Robert W. Williams, University of Tennessee, USA

14:20  INCF Program on Ontologies of Neural Structures    

  Maryann Martone, University of California, San Diego, USA

14:35 INCF Standards for Datasharing program

  Colin Ingram, Newcastle University, United Kingdom

 

14:50 - 15:15 Coffee Break

Part 2 - USA Node Activities - Building upon the Infrastructure

15:15 Introduction from the Chairs

  Yuan Liu (NIH) Computational Neuroscience, Neuroinformatics and International Funding Opportunities at the NIH

  Kenneth C. Whang (NSF)

15:30 NIF (Neuroscience Information Framework)

  Maryann Martone/Jeff Grethe, University of California, San Diego, USA

15:50 NITRC (Neuroimaging Informatics Tools and Resources Clearinghouse)

  David Kennedy, University of Massachusetts Medical School, USA and

  Michael Milham, NYU, USA

16:10 CRCNS (Collaborative Research in Computational Neuroscience)

  Fritz Sommer, University of California, Berkeley, USA

16:30 HCP (Human Connectome Project)

  Daniel Marcus, Washington University in St Louis, USA

16:50 Discussion

17:15 Concluding Remarks - See you in Munich!

  David Kennedy & Thomas Wachtler

17:30 Adjourn

 

Abstracts:

Part 1:

INCF Global Infrastructure for Neuroinformatics

  Sean Hill, Director of INCF

INCF is leading an effort to develop and coordinate international neuroinformatics infrastructures and facilitate data sharing, publication, analysis, modeling, visualization and simulation. The ultimate goal of this effort is to enable global neuroscience data integration. Such integrative infrastructures will be an essential tool in developing new insights about the structure and function of the brain in health and disease.

INCF Multi-Scale Modeling Program

  Erik De Schutter, Okinawa Institute of Science and Technology, Japan and University of Antwerp, Belgium

The growing number of complex neural models at multiple scales has created a need for standards and guidelines to ease model sharing and facilitate the replication of results across different simulators. To foster community efforts towards such standards, the International Neuroinformatics Coordinating Facility (INCF) has formed its Multiscale Modeling program. A task force of simulator developers is proposing a declarative computer language for description of large-scale neuronal networks.

The Network Interchange for Neuroscience Modeling Language (NineML) has an initial focus on spiking point neurons. NineML aims to provide tool support for the explicit declarative definition of spiking neuronal network models, both conceptually and mathematically, in a simulator-independent manner. NineML is designed to be self-consistent and highly flexible, allowing the addition of new models and mathematical descriptions without modification of the existing structure and organization of the language. To achieve these goals, the language is being iteratively designed using several representative models of varying complexity as test cases.

The design of NineML is divided into two semantic layers: the Abstraction Layer, which consists of core mathematical concepts necessary to express neuronal and synaptic dynamics and network connectivity patterns, and the User Layer, which provides constructs to specify the instantiation of a network model in terms that are familiar to computational neuroscience modelers.

The NineML specification is defined as an implementation-neutral object model representing all the concepts in the User and Abstraction Layers. Libraries for creating, manipulating, querying, and serializing the NineML object model to a standard XML representation will become available for a variety of languages. The first priority is a Python implementation to support the wide range of simulators that provide a Python user interface (NEURON, NEST, Brian, MOOSE, GENESIS-3, PCSIM, PyNN, etc.). These libraries will allow simulator developers to quickly add support for NineML and catalyze the emergence of a broad software ecosystem for model definition interoperability around NineML.
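As a rough illustration of the two-layer, declarative approach described above, the following Python sketch separates a mathematical neuron description (Abstraction Layer) from its instantiation as a parameterized population (User Layer) and serializes the result to XML. The class names, fields, and XML tags are hypothetical stand-ins, not the actual NineML object model or library API.

# Illustrative sketch only: the class names and XML tags below are hypothetical
# and do not reproduce the actual NineML object model or library API; they
# mimic the two-layer design (Abstraction Layer dynamics, User Layer
# instantiation) described above.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Dynamics:                     # Abstraction Layer: mathematics only
    name: str
    equations: dict                 # state variable -> right-hand-side expression
    parameters: list

@dataclass
class Population:                   # User Layer: instantiation of a model
    name: str
    size: int
    cell: Dynamics
    parameter_values: dict

def to_xml(pop):
    """Serialize a population and its dynamics to a simple XML document."""
    root = ET.Element("NetworkModelSketch")
    dyn = ET.SubElement(root, "Dynamics", name=pop.cell.name)
    for var, rhs in pop.cell.equations.items():
        ET.SubElement(dyn, "TimeDerivative", variable=var).text = rhs
    p = ET.SubElement(root, "Population", name=pop.name, size=str(pop.size))
    for key, value in pop.parameter_values.items():
        ET.SubElement(p, "Property", name=key).text = str(value)
    return ET.tostring(root, encoding="unicode")

# A leaky integrate-and-fire point neuron, declared rather than coded for a simulator.
lif = Dynamics(name="LeakyIAF",
               equations={"V": "(v_rest - V)/tau_m + i_syn/c_m"},
               parameters=["tau_m", "v_rest", "c_m"])
excitatory = Population(name="Exc", size=100, cell=lif,
                        parameter_values={"tau_m": 20.0, "v_rest": -65.0, "c_m": 1.0})
print(to_xml(excitatory))

In this style of design, a simulator backend (or a PyNN-style wrapper) would translate such a description into its own native model objects rather than requiring the model to be re-coded for each simulator.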

INCF Digital Brain Atlasing Program

  Robert W. Williams, University of Tennessee, USA
 
The Digital Atlasing Program (http://incf.org/core/programs/atlasing) was formed in the fall of 2007 as an International Neuroinformatics Coordinating Facility (INCF, http://incf.org/) collaborative effort to create an atlas-based framework that makes the rapidly growing collection of multidimensional rodent brain data more widely accessible and usable to the research community (1). This led to the formation of the INCF Digital Atlasing Program Standards Task Force, which produced a full 3D reference space for the adult mouse brain, called Waxholm Space (WHS), based on high-resolution MRI and corroborative histological data sets from young adult C57BL/6J mice (2). Our group has also produced initial specifications and a prototype web service atlas hub that exploits a distributed, interoperable Digital Atlasing Infrastructure (DAI). The vision, challenges, and initial results of this effort are discussed in Hawrylycz et al., 2009 (3) and Hawrylycz et al., 2011 (4).
 
The Standards Task Force has completed its work, and its efforts are now being continued by two other task forces. The main goal of these groups is to make it easier for the research community to register data sets (images and quantitative traits) into this framework and to improve the reliability of and access to these data. The WHS Task Force is focused on testing the integration of new data sets into WHS, using primarily three use cases: the Mouse Brain Library (MBL) images, which include highly diverse strains of mice that differ greatly from C57BL/6J; the ViBrism database, which includes extensive gene expression maps and co-expression networks; and sparsely sampled slices from a transgenic mouse line donated by Dr. Ken Sugino. This group is also devising segmentation, labeling, and registration tools to interoperate between atlases used in the DAI. The DAI Task Force is focused on the technical aspects of interoperability within this framework and on the creation of atlas hubs, which serve atlas-related functionality from various resources over the web. The design allows any WHS/DAI-aware client to access information and data from multiple resources, which in turn gives software developers flexibility in how the framework is exploited. The DAI Task Force has already produced a set of tools with unique functionalities that access WHS/DAI.
 
A current priority is the registration of new datasets (in particular 2D images) into the 3D WHS and the creation of atlas hubs that share data from these resources. Deliverables from both groups include recommendations, standard operating procedures, and easier access to tools, workflows, and services. More in-depth information is shared with the community via publications, program websites, and workshops.
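Registering a data set into a common reference space ultimately comes down to applying a spatial transform to its coordinates. The short Python sketch below maps a point from a hypothetical source atlas into WHS-style coordinates with an affine matrix; the matrix values are invented for illustration and do not correspond to any published registration.

# Illustrative only: the affine below is a made-up example, not an actual
# source-atlas-to-WHS registration produced by the task forces.
import numpy as np

source_to_whs = np.array([
    [0.95, 0.00, 0.00,  1.2],   # scaling/rotation in the upper-left 3x3,
    [0.00, 0.95, 0.00, -0.8],   # translation (in mm) in the last column
    [0.00, 0.00, 1.05,  2.5],
    [0.00, 0.00, 0.00,  1.0],
])

def to_whs(x, y, z):
    """Map an (x, y, z) coordinate from the source atlas into the reference space."""
    return (source_to_whs @ np.array([x, y, z, 1.0]))[:3]

print(to_whs(3.0, -1.5, 4.2))   # the same anatomical point expressed in WHS coordinates

An atlas hub in this design would expose such transforms (and their inverses) as web services, so that any DAI-aware client can move coordinates and labels between spaces.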

References:
(1) Boline et al., 2007, Nature Precedings, doi:10.1038/npre.2007.1046.1
(2) Johnson et al., 2010, NeuroImage, doi:10.1016/j.neuroimage.2010.06.067
(3) Hawrylycz et al., 2009, Nature Precedings, doi:10.1038/npre.2009.4000.1
(4) Hawrylycz et al., 2011, PLoS Comput Biol 7(2): e1001065, doi:10.1371/journal.pcbi.1001065

INCF Program on Ontologies of Neural Structures   

  Maryann Martone, University of California, San Diego, USA

Controlled vocabularies and ontologies are recognized as key elements in sharing and re-using data. For this reason, the INCF Program on Ontologies of Neural Structures (PONS) was created to support neuroscience with ontology-related informatics. Three task forces have been formed within this program: one focused on anatomical structures (Structural Lexicon Task Force; SLTF), one on neurons (Neuron Registry Task Force; NRTF), and one on the representation of ontologies and the technical aspects of building them (Representation and Deployment Task Force; RDTF).

The work of the three task forces is close to completion. A reference panel will soon be convened to evaluate and give feedback on the work that has been done.

Structural Lexicon Task Force:
This group, headed by David Van Essen, was charged with developing a strategy and infrastructure for defining brain structures across scales. Its initial focus was on creating a standard set of criteria for defining brain structures in a structural lexicon, followed by entering definitions for these structures in a forum accessible to the scientific community. For ease of viewing and community comment, the brain region definitions have been exposed in the Neurolex wiki (http://neurolex.org) and, when possible, linked to an atlas or to a figure in the paper where the structure is delineated.

The Scalable Brain Atlas (SBA; http://scalablebrainatlas.org) imports brain atlases and produces a visual representation of the vector drawings tied to the nomenclature hierarchy established by each atlas. It works with Neurolex by creating pages for each of the brain structures represented within the atlas, and generating a thumbnail that links back to the structure within the SBA.

The group is creating a subset of common structures in human, macaque, rat, and mouse (pan-mammalian structures), with the goal of producing a computable list of brain parts that can serve as a simple means for translating across nomenclatures, scales, and species. In collaboration with the INCF Digital Atlasing program, these structures are being used by Seth Ruffins to create delineations for the Waxholm Space mouse brain. Once this is completed, a set of textual definitions describing the criteria by which these delineations were made will be commissioned.

Neuron Registry Task Force:
This group is headed by Giorgio Ascoli, with the goal of creating a knowledge base of neuronal cell types that provides a means for describing existing cell types by their properties (a neuron registry), to be populated with information from the literature. This knowledge base will serve as a resource for comparing potentially new neuronal types with known types and for constructing statistical representations of neuronal cell types based on known instances. The group has established criteria, conventions, and operational principles for a Neuron Registry and for how neurons should be defined, e.g. by a collection of properties. To collect neuron definitions into a database, a prototype Neuron Registry interface has been created and tested (http://incfnrci.appspot.com). It allows entry of new neurons and properties, and editing of existing neurons, properties, parts, relations, and values, using drop-down menus. The Neuron Registry interface continues to be aligned with existing ontology resources and with specifications set forth by the RDTF. The next steps include population of the Neuron Registry by the community through an “Adopt-A-Neuron” campaign launched in the summer of 2011.
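To make the idea of defining a neuron by a collection of properties concrete, the Python sketch below represents candidate neuron types as sets of property-value pairs and compares them. The property names, values, and comparison logic are hypothetical illustrations, not the registry's actual schema or interface.

# Illustrative sketch: neuron types as collections of property-value pairs.
# Property names and values are hypothetical, not the Neuron Registry schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Property:
    name: str       # e.g. "soma location", "neurotransmitter"
    value: str

def shared_properties(a, b):
    """Properties two neuron definitions have in common - a simple basis for
    comparing a candidate new type against a known type."""
    return a & b

ca1_pyramidal = {
    Property("soma location", "hippocampus CA1 stratum pyramidale"),
    Property("neurotransmitter", "glutamate"),
    Property("dendrite morphology", "apical and basal dendritic trees"),
}
candidate = {
    Property("soma location", "hippocampus CA1 stratum pyramidale"),
    Property("neurotransmitter", "glutamate"),
    Property("dendrite morphology", "basal dendritic tree only"),
}
print(shared_properties(ca1_pyramidal, candidate))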

Representation and Deployment Task Force:
This group, headed by Alan Ruttenberg, is responsible for creating infrastructure and policies to aid in the development of INCF ontologies and for ensuring that all products delivered by the program meet the best practices of the broader ontology community. Much of the group’s effort has focused on understanding, aligning, and improving the resources relevant to the program, and on aiding the SLTF and NRTF in creating appropriate representations of, and relations between, anatomical structures and neurons. More specifically, this includes developing a formal ontology on top of Neurolex and ensuring that the Neuron Registry curator interface is interoperable with ontology sources and conforms to other best practices.

This group has facilitated the development of the appropriate terminologies and definitions needed to create new ontologies. This includes creation of a Common Upper Mammalian Brain Ontology (CUMBO), which includes general terms essential for representing brains across mammals. The group has also contributed to and improved the content of several existing ontologies that relate to brain structures and neurons.

INCF Standards for Datasharing Program

  Colin Ingram, Newcastle University, United Kingdom

The INCF Program on Standards for Datasharing was launched in 2010 with two Task Forces, the first with a focus on human neuroimaging data and the second on electrophysiology data. The Neuroimaging Datasharing Task Force, led by David Kennedy and Jean-Baptiste Poline, has identified four specific projects to be carried out in 2011. In brief, the four projects are:
1. One-click share tool
The first project will develop a tool to push neuroimaging data in a standardized format to a database hosted by INCF. As a service to the researcher, data uploaded to this central server would be reviewed with a basic quality control check and a report generated. The raw data and metadata, as well as the quality control results, will be stored in the database. Further, a lightweight client would be available to perform additional data processing, e.g., spatial normalization, re-alignment, etc. The task force is working to have a prototype of this tool available for demonstration at the Neuroinformatics Congress in September 2011.
2. Neuroimaging data shared description and common API
The goal of project 2 is to produce a first draft of a standard description of neuroimaging data and associated metadata to facilitate communication between databases. A number of efforts have already made progress toward that goal, with XCEDE probably being the best known. This standard description would be used to mediate between databases with different data models. Eventually, it could be linked to a set of ontologies to allow for semantic searches and reasoning. A further idea is to base a standard API on this description, which could then be used to interrogate any database (a minimal, hypothetical sketch of such a description and API appears after this project list).
3. Data formats
Data formats can seem like a simple solution to a number of problems in data sharing, but often pose technical, ontological, and social problems of their own.  This project investigates the viability of using a data format to solve certain data sharing problems.  The group identified a small number of well-defined technical problems in data sharing and will propose technical solutions to these problems using the Connectome File Format (CFF).  The goals of the project are to better understand the technical challenges in proposing a common data format, to evaluate CFF as a candidate for a common format, and to begin to understand the human challenges in the adoption of a new data format.
4. XNAT - NIPyPE workflow
This project will take the output of standard analyses and push the results to an XNAT database initially, later using the common API and the XCEDE standard schema from the previous projects to feed processed data and metadata back to the database. Input data would also be extracted using the standard API. Another goal for this project is to standardize the description of a workflow, using the current XML description of LONI pipelines as an example.
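To make projects 2 and 4 more concrete, here is a minimal Python sketch of a database-neutral description of an acquisition together with a helper that pushes a derived result back over a REST-style endpoint. The field names, endpoint layout, and upload helper are assumptions made for illustration; they are not drawn from XCEDE, XNAT, or NiPyPE.

# Illustrative sketch only: the field names and the /sessions/<id>/derived
# endpoint are hypothetical; they do not reproduce the XCEDE schema or the XNAT API.
import json
import urllib.request

acquisition = {
    "subject_id": "sub-0001",
    "project": "example-study",
    "modality": "T1-weighted MRI",
    "acquisition_date": "2011-06-15",
    "files": [{"path": "sub-0001/anat/T1w.nii.gz", "format": "NIfTI-1"}],
}

def push_derived(base_url, session_id, label, metadata, token):
    """POST metadata for a processed result (e.g. a spatially normalized volume)
    back to the database over a hypothetical common API."""
    req = urllib.request.Request(
        f"{base_url}/sessions/{session_id}/derived/{label}",
        data=json.dumps(metadata).encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as response:
        return response.status

# Example call (requires a server implementing the hypothetical API):
# push_derived("https://db.example.org/api", "MR_0001", "normalized_T1",
#              {"pipeline": "example-workflow", "source": acquisition}, token="...")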

Through outreach and interaction with other discussion lists, the Task Force has gathered several additional ideas for services that might ideally be handled by INCF, as a central and neutral body. We are looking into implementing these recommendations:
Repository for protocols, file formats, data, etc.
Survey community about barriers to data sharing (in progress)
Assemble inventory of resources for data-sharing and public databases, with info about each system (in progress)
Work with journals to encourage data sharing


The Electrophysiology Datasharing Task Force, led by Friedrich Sommer and Thomas Wachtler, has identified the diversity of data formats and the lack of format standards as a major obstacle to data sharing in electrophysiology research. Although the data format problem is ultimately not the key problem in data sharing, the situation needs to be improved at this level as well. This Task Force will work with INCF to implement a webpage that provides an overview of which tools exist to read or write different formats, and which are missing.
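Such an overview is essentially a coverage table of formats against available readers and writers. The short Python sketch below shows one hypothetical way to represent that mapping and list the gaps; the format and tool names are invented examples, not the contents of the planned INCF webpage.

# Illustrative sketch: a coverage table of electrophysiology file formats vs.
# tools that can read or write them. Entries are examples only, not the
# contents of the planned INCF overview page.
coverage = {
    # format: {"read": [tools], "write": [tools]}
    "FormatA": {"read": ["toolkit-x"], "write": ["toolkit-x"]},
    "FormatB": {"read": ["toolkit-y", "converter-z"], "write": []},
    "FormatC": {"read": [], "write": []},
}

def missing_support(table):
    """List (format, direction) pairs for which no tool is currently known."""
    return [(fmt, direction)
            for fmt, tools in table.items()
            for direction in ("read", "write")
            if not tools[direction]]

print(missing_support(coverage))   # the gaps the overview page would highlight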

Different aspects of metadata recording, management, and sharing that were addressed during the first meeting of the task force need further consideration. Many members of the task force have experience with different approaches to metadata recording, and these experiences will be considered in detail at the next meeting. It was also agreed that a standardized way of referencing datasets would facilitate data sharing. The ability to cite datasets that comes with such a standard would greatly enhance the motivation to make data available. Such a standard should be achievable on a relatively short time scale, and the group has decided to work on it.

Publishers can have a strong influence on the culture of scientific exchange and in this respect can have an impact on the sharing of data. The group discussed possibilities to promote data sharing through publishers. It was proposed that publishers should be encouraged to raise awareness of data sharing among authors. Publishers could recommend that authors make their data available, and should encourage authors to include a statement about data sharing in their papers. Feedback from publishers will be collected and discussed at further Task Force meetings.

Part 2:

NIF (Neuroscience Information Framework)

  Maryann Martone/Jeff Grethe, University of California, San Diego, USA

Informatics and new web technologies (e.g. ontologies, social networking, and community wikis) are becoming increasingly important to biomedical researchers. Sharing research data and information about resources (i.e. tools, data, materials, and people) across a research community adds tremendous value to the efforts of that community, and the Neuroscience Information Framework (NIF; http://neuinfo.org) is providing practical solutions for tackling such data challenges. An initiative of the NIH Blueprint for Neuroscience Research, the NIF enables discovery of and access to public research data, contained in databases and structured web resources (e.g. queryable web services) sometimes referred to as the deep or hidden web, as well as to research resources more generally, through an open-source dynamic inventory of biomedical resources annotated and integrated with a unified system of biomedical terminology.

The NIF provides simultaneous search across multiple types of information sources to connect biomedical researchers to available resources. These sources include: (1) the NIF Registry, a human-curated registry of neuroscience-relevant resources annotated with the NIF vocabulary; (2) NIF Literature, a full-text indexed corpus derived from open access literature, a full index of PubMed, and specialized bibliographic databases; (3) the NIF Database Federation, a federation of independent databases registered to the NIF, allowing direct search, discovery, and integration of database content. NIF continues to grow significantly in content as more data sources are added, providing access to over 3700 resources through the Registry and more than 50 million database records contained within more than 70 independent databases in the data federation, making NIF the largest source of neuroscience resources on the web.
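As a rough illustration of what a federated search over these three source types involves, the Python sketch below fans one query term out to toy stand-ins for the registry, the literature index, and the database federation, then merges the results. The functions, contents, and identifiers are invented and do not represent NIF's actual services or API.

# Illustrative only: toy stand-ins for the three NIF source types; the data
# and the merge logic are invented, not NIF's actual services or API.
def search_registry(term):
    registry = {"GABAergic neuron": ["Example Morphology Archive", "Example Atlas Portal"]}
    return registry.get(term, [])

def search_literature(term):
    corpus = {"GABAergic neuron": ["PMID:0000001", "PMID:0000002"]}
    return corpus.get(term, [])

def search_data_federation(term):
    federated = {"GABAergic neuron": [{"database": "example-db", "matching_records": 42}]}
    return federated.get(term, [])

def federated_search(term):
    """Fan a single query out to each source type and merge the results."""
    return {"registry": search_registry(term),
            "literature": search_literature(term),
            "data_federation": search_data_federation(term)}

print(federated_search("GABAergic neuron"))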

Search and annotation of resources and resource content is enhanced through the utilization of a comprehensive modular ontology (NIFSTD; http://purl.org/nif/ontology/nif.owl). To enable broad community contribution to NIFSTD, NeuroLex (http://neurolex.org) is available as a wiki that provides an easy entry point for the community. NeuroLex takes advantage of the Semantic Mediawiki open source software to provide an easily accessible interface for viewing and contributing to the lexicon.


Neuroimaging Advances in ADHD: The ADHD-200 Project, Powered by NITRC
  David Kennedy, University of Massachusetts Medical School, USA and
  Michael Milham, NYU Child Study Center, USA

The Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC) is funded by the NIH Blueprint for Neuroscience Research. Since 2006, NITRC has fostered a user-friendly knowledge environment for the neuroimaging community. Continuing to identify existing software tools and resources valuable to this community, NITRC’s goal is to support researchers dedicated to enhancing, adopting, distributing, and contributing to the evolution of neuroimaging analysis tools and resources. Located at www.nitrc.org, NITRC promotes software tools and resources, vocabularies, test data, and now pre-processed, community-generated data sets, extending the impact of previously funded neuroimaging informatics contributions to a broader community. NITRC gives researchers greater and more efficient access to the tools and resources they need by better categorizing and organizing existing tools and resources, facilitating interactions between researchers and developers, and promoting better use through enhanced documentation. NITRC, in concert with NIF and the INCF Software Center, is now established as a key resource for the advancement of functional and structural neuroimaging research.

Of particular neuroscience impact are projects that combine software, data, protocols, and results in order to drive forward large-scale, data-driven understanding of brain function and dysfunction. As an example, the ADHD-200 Sample is a grassroots initiative dedicated to accelerating the scientific community's understanding of the neural basis of ADHD through open data sharing and discovery-based science. Towards this goal, we are pleased to announce the unrestricted public release of 776 resting-state fMRI and anatomical datasets aggregated across 8 independent imaging sites, 491 of which were obtained from typically developing individuals and 285 from children and adolescents with ADHD (ages 7-21 years). Accompanying phenotypic information includes diagnostic status, dimensional ADHD symptom measures, age, sex, intelligence quotient (IQ), and lifetime medication status. Preliminary quality control assessments (usable vs. questionable) based upon visual time-series inspection are included for all resting-state fMRI scans.
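A typical first step with such a release is to split the sample by diagnostic group and quality rating using the accompanying phenotypic table. The Python sketch below assumes a CSV file and column names invented for illustration; the actual ADHD-200 phenotypic files may be organized differently.

# Illustrative sketch: the file name and column names below are assumptions
# for demonstration, not the actual ADHD-200 release layout.
import csv
from collections import Counter

with open("adhd200_phenotypic.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Count typically developing vs. ADHD participants, and keep only scans rated
# usable in the visual time-series quality check.
diagnosis_counts = Counter(row["diagnostic_status"] for row in rows)
usable = [row for row in rows if row["qc_rest"] == "usable"]

print(diagnosis_counts)
print(f"{len(usable)} of {len(rows)} resting-state scans rated usable")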



CRCNS.org: An online repository for neurophysiology data

  Fritz Sommer, University of California, Berkeley, USA

CRCNS.org is a project designed to make experimental data and other resources available to researchers, for example investigators who plan new experiments or theoreticians who use computational approaches to study the brain. The main focus is a repository of electrophysiology data sets. The goal of the repository is to promote the sharing of data sets within the neuroscience community to enable projects and approaches that would otherwise be unfeasible. To achieve this goal, a number of issues need to be resolved concerning how the data are contributed and managed, and how community interactions can be facilitated. The current features of the resource and future plans will be described. This work is funded by NSF award 0855272.

Data sharing and informatics approaches of the Human Connectome Project

  Daniel Marcus, Washington University in St Louis, USA

The Human Connectome Project represents a concerted effort to elucidate the neural pathways that underlie brain function. Using cutting-edge imaging methods, the project aims to map the macro-scale wiring diagram of the brain. The resulting data set, including rich phenotypic and genetic data, will be made freely and openly available to the scientific community. The full data set is expected to exceed one petabyte. Dr. Marcus will discuss the informatics approaches that are being developed to support the project.
