Version 8 (modified by Morris Swertz, 14 years ago) (diff)


Project Overview

The BBMRI/Bioinformatics rainbow project was awarded based on the attached project proposal.


Expected output

This project aims to produce the bioinformatics resources needed by the biobanks participating in BBMRI-NL and by its rainbow and complementation projects, most notably in the context of Genoom van Nederland:

  1. Sequence data management, QC and analysis pipelines to produce and share a Dutch catalog of variants.
  2. GWAS data management, QC and imputation to produce a Dutch GWAS control cohort.
  3. A Dutch (inter)national biobank catalog and data exchange formats.
  4. Scalable and easy-to-maintain software and web access tools underlying 1-3.

All these resources will be made publicly available both as centralized, secured, web-accessible national services, i.e. central hubs assembled in partnership to support the rainbow projects, and as downloadable, customizable ‘tools-in-a-box’ meant for local installation by biobanks and their local projects (local hubs). This project will develop in parallel the scientific, professional and physical infrastructures needed to effectively communicate expertise, procedures and tools between all Dutch biobanks, as well as the provision of bioinformatics experts, building on the infrastructure organized in the Netherlands Bioinformatics Center (NBIC) BioAssist program. This group will work in coordination with the BBMRI-NL ethical-legal working group to develop a code of practice and guidelines for large-scale harmonized data pooling and for the use of data from multiple biobanks.


This project will combine a hub-and-spoke research & development organization that harmonizes data between biobanks with the provision of experts who will apply innovative model-driven software methods to efficiently produce the ready-to-use software infrastructures needed by biologists and researchers. This includes:

Agile hub-and-spoke organization

At the core of BBMRI is the vision to develop all resources in a hub-and-spoke manner, maximizing the use of local expertise and innovation while minimizing duplicated effort and barriers to integration via centralized harmonization and enrichment. The smallest hubs within the Dutch biobank landscape are the individual biobanks, the larger hubs are the participating institutes, and the largest hubs are central deployments of key data and analysis resources (which in turn can connect to pan-European hubs). This project will mirror this organization to bridge between biomedical researchers, bioinformaticians and hardcore software engineers, ensuring the multi-disciplinary interplay needed:

· A central engineering team of hardcore programmers is responsible for the overarching infrastructure and will ensure harmonization of tools, pipelines and databases between working groups. This group will function as one of the eight NBIC task forces and will meet every week to ensure knowledge and method transfer.

· Participating experts will host programmers and scientific staff to pilot the planned tools and pipelines in close support of their BBMRI-NL complementation and rainbow projects. These bioinformaticians will be organized in themed working groups as described in appendix 1. Each working group will have a lead programmer who is part of the central engineering team. All members will meet monthly and will have weekly Skype meetings.

· This project is strongly linked with leading international sister projects to avoid duplicated effort and efficiently achieve these aims by having project members participate in, or stay at, institutes like the European Bioinformatics Institute (1KG, EGA, ArrayExpress), Netherlands Bioinformatics Center (NGS, eScience, CWA), projects like EU-GEN2PHEN, EU-BIOSHARE, OMII-UK, ESFRI/ELIXIR, Parelsnoer, Mondriaan, CTMM, TIFN, NPC, NMC, P3G, Human Variome Project and open source collaborations like ObiBa, MOLGENIS/XGAP, ABEL and Concept Web Alliance.

Model-driven software

Flexible model-driven software development as described in Swertz & Jansen (2007) has proven an efficient method to rapidly produce harmonized software infrastructures for life scientists while sharing the best models, software and tools, notwithstanding large variation in research aims. This project will build on and extend open source implementations of these methods, such as MOLGENIS and Galaxy, focusing on:

· Implementing extensible standard data models and software components developed internationally (we co-piloted data models for microarrays, QTLs, GWAS studies [Swertz 2010], and phenotypes in EU consortia like GEN2PHEN and at EBI, and participated in international GWAS and sequencing initiatives like the 1KG project).

· Making tools and protocols reusable in a user-friendly catalog of bioinformatics tools and workflows that captures all necessary inputs, outputs, optimization properties and user interactions in models, so that existing tools can be incorporated automatically (building on or inspired by Taverna and Galaxy).

· Automatically generating from these data and tool models the scalable back-ends and front-ends needed. This automated procedure ensures harmonized software, building on industry-standard databases for metadata and on innovative approaches like the cloud computing activities at SARA/Amsterdam, CIT/Groningen and BigGRID/Rotterdam to connect to the scalable compute power and storage needed.

· Easing the finding and integration of resources using semantic and ontology technologies such as those developed at EBI and NBIC/Concept Web Alliance, building bridges between data and tools and tapping into existing ontologies for data (e.g. HPO, the Human Phenotype Ontology) and for analysis protocols, to help users and systems developers bring tools together.
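The model-driven generation described above can be illustrated with a small sketch. All names here are hypothetical, and this is not actual MOLGENIS code: the point is only that a declarative entity model can drive generation of back-end artifacts such as SQL schemas.

```python
# Minimal sketch of model-driven generation (hypothetical example, not
# MOLGENIS code): a declarative entity model drives SQL schema generation.

SQL_TYPES = {"string": "VARCHAR(255)", "int": "INTEGER", "date": "DATE"}

def generate_ddl(entity, fields):
    """Generate a CREATE TABLE statement from a declarative model."""
    cols = ", ".join(f"{name} {SQL_TYPES[ftype]}" for name, ftype in fields)
    return f"CREATE TABLE {entity} ({cols});"

# Example model: a biobank sample with basic annotations.
model = ("Sample", [("sample_id", "string"), ("biobank", "string"),
                    ("collection_date", "date"), ("n_variants", "int")])

ddl = generate_ddl(*model)
print(ddl)
# CREATE TABLE Sample (sample_id VARCHAR(255), biobank VARCHAR(255),
#                      collection_date DATE, n_variants INTEGER);
```

In the same spirit, user interfaces, APIs and exchange formats can be generated from the one model, which is what keeps the resulting infrastructures harmonized across hubs.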

Ready-to-use databases and tools ‘in-a-box’ that can federate into national resources

As detailed below in the description of work section, this project aims to develop novel key bioinformatics tools, databases, models and software, or incorporate internationally proven ones, such that they can be re-used from the smallest hubs (to accommodate and improve local research and complementation projects) up to the largest hubs (supporting rainbow projects, starting with Genoom van Nederland). By sharing the same components between all hubs we provide an effective path to

· harmonize and enrich available data management, exchange and analysis protocols

· avoid duplicated efforts between local hubs

· make it more likely that everyone’s needs are supported

· improve quality because more users test the available bioinformatics infrastructure

· preserve flexibility to go beyond standardization and accommodate specific local needs.

8. Duration of project:

3 years

Planning (matching GvNL planning where appropriate)

Short read archive Month 0 – 8

Biobank catalog pilot Month 0 – 6

Sequence analysis Phase 1 (GvNL) Month 4 – 16

Harmonized exchange formats Month 6 – 24

Establish variation QC and analysis pipeline Month 8 – 20

Sequence analysis Phase 2 (GvNL) Month 8 – 20

Variation catalog/Dutch HapMap Month 20

GWAS data release server Month 0 – 12

GWAS QC and imputation protocols Month 6 – 20

Dutch GWAS Control Cohort (DGCC) Month 12 – 24

Imputation of available GWA data (GvNL) Month 20 – 30

Make sequence data available (GvNL) Month 12 – 30

GWAS analysis tools catalog Month 12 – 36

Web access tools Month 22 – 30

Integrated DGCC and Variation catalog web access tools Month 24 – 36

9. Deliverables

D1 Sequencing

· Short Read Archive (GvNL) – a database and user interface to manage and trace next generation sequencing data, associated sample annotations (metadata), and intermediate and end results.

· Variation analysis and QC pipelines (GvNL) – harmonization and enrichment of available processing pipelines for quality control and variation analysis for (exome) re-sequencing projects.

· Variation catalog/Dutch HapMap (GvNL) – release of the enriched variation-analysis results of the GvNL 1000 genomes, produced using the above tools, to serve as an imputation data source.
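The variation QC pipelines in D1 apply filters to raw variant calls before results enter the catalog. A minimal sketch of such a filter follows; the thresholds and record layout are illustrative assumptions, not the actual GvNL criteria, which are far richer.

```python
# Hedged sketch of a minimal variant QC filter (illustrative thresholds
# only; the real GvNL pipelines use far richer criteria).

def pass_qc(variant, min_qual=30.0, min_depth=10):
    """Keep a variant call only if call quality and read depth
    meet the thresholds."""
    return variant["qual"] >= min_qual and variant["depth"] >= min_depth

calls = [
    {"chrom": "1", "pos": 10177, "qual": 50.0, "depth": 35},
    {"chrom": "1", "pos": 10352, "qual": 12.0, "depth": 40},  # low quality
    {"chrom": "2", "pos": 11008, "qual": 44.0, "depth": 5},   # low depth
]

kept = [v for v in calls if pass_qc(v)]
print(len(kept))  # 1 variant survives QC
```

Harmonizing such filters across biobanks is exactly what makes the pooled variation catalog usable as a single imputation reference.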

D2 Genome-wide association analysis

· GWAS data release server – database and user interfaces to manage and query GWAS data, in particular to create GWAS (control cohort imputation) data releases.

· GWAS QC and imputation protocols – harmonization and enrichment of tools and pipelines to verify and clean GWAS data sets and produce data sets ready for analysis by researchers.

· GWAS data analysis – a catalog of established protocols and bioinformatic pipeline implementations thereof for GWAS analysis.

· GWAS control cohort and DGCC (GvNL) – collection of BBMRI-NL GWAS data into the DGCC database and release of imputed datasets using the variation catalog produced by GvNL.
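The GWAS QC protocols in D2 typically filter markers on call rate and minor allele frequency before imputation. The sketch below shows this with assumed thresholds; real protocols also check Hardy-Weinberg equilibrium, sample call rate, relatedness and more.

```python
# Illustrative sketch of per-marker GWAS QC (assumed thresholds; real
# protocols also check HWE, sample call rate, relatedness, etc.).

def marker_qc(genotypes, min_call_rate=0.95, min_maf=0.01):
    """genotypes: allele-dosage values (0/1/2), or None when missing."""
    observed = [g for g in genotypes if g is not None]
    call_rate = len(observed) / len(genotypes)
    if call_rate < min_call_rate or not observed:
        return False
    freq = sum(observed) / (2 * len(observed))  # frequency of counted allele
    maf = min(freq, 1 - freq)                   # minor allele frequency
    return maf >= min_maf

# 100 samples, 2 missing calls, common variant: passes QC.
snp = [0, 1, 2, 1, 0] * 19 + [1, 2, 0, None, None]
print(marker_qc(snp))        # True
print(marker_qc([0] * 100))  # False: monomorphic, MAF below threshold
```

Applying one agreed filter to every contributing cohort is what makes the pooled DGCC releases comparable across biobanks.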

D3 Biobank (meta)data finding and exchange

· Biobank and biobankers catalog – central index of biobanks with aggregate metadata on biobank contents (protocols, features observed, and optionally (aggregate) data) and semantic search functionality enabling researchers to find biobank(er)s and samples.

· Harmonized data exchange formats – harmonization of syntaxes / file formats to transfer sample annotations, phenotypic data and molecular data between biobanks and/or central hubs.

· Pseudonymization system – to ensure the privacy of participants is protected and legal/ethical requirements are met (in collaboration with Parelsnoer).
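One common technique behind such a pseudonymization system is keyed hashing: a deterministic, non-reversible mapping from participant IDs to pseudonyms. The sketch below illustrates that idea only; the actual BBMRI-NL/Parelsnoer design and its key management are not specified here, and the key and IDs are made up.

```python
# Sketch of deterministic pseudonymization via HMAC (a common technique;
# NOT the actual BBMRI-NL/Parelsnoer implementation; key and IDs invented).
import hashlib
import hmac

def pseudonymize(participant_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym: the same ID always maps to the same
    token, but the mapping cannot be reversed without the secret key."""
    digest = hmac.new(secret_key, participant_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

key = b"example-key-held-by-trusted-third-party"
p1 = pseudonymize("NL-BIOBANK-000123", key)
p2 = pseudonymize("NL-BIOBANK-000123", key)
print(p1 == p2)  # True: deterministic, so records can be linked across hubs
```

Determinism matters here: it lets records about the same participant be linked across biobanks without any hub ever seeing the real identifier.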

D4 Core software platform (support of above to prevent reinvented wheels)

· Flexible ‘model-driven’ software platform – which makes it possible to efficiently produce, configure and maintain all the data models, databases, compute services and pipelines needed.

· Large data platform – to harmonize how GWAS and NGS data are handled within data archives (storage), algorithms (runtime) and data exchange (network).

· Flexible compute pipeline platform – to harmonize how large-scale analyses are run, with suitable user interfaces, so that individual pipelines need not deal with the difficulty of running algorithms on clusters, grids or clouds.

· Web access tools – harmonized user interfaces and programmers interfaces to provide a single point of access to all the resources developed in this project.
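The compute pipeline platform in D4 hinges on one idea: pipelines declare steps against a single interface, and interchangeable back-ends decide where those steps run. The interface below is hypothetical, a sketch of the separation rather than the project's actual API.

```python
# Sketch of a backend-agnostic pipeline step (hypothetical interface):
# the pipeline declares what to run; a pluggable backend decides where.

class LocalBackend:
    """Runs steps in-process. A cluster, grid or cloud backend would
    expose the same submit() interface, so pipelines never change."""
    def submit(self, step):
        return step["command"](*step["args"])

def make_step(name, command, *args):
    """Declare a pipeline step as data, independent of where it runs."""
    return {"name": name, "command": command, "args": args}

# A toy two-step pipeline over per-sample read depths.
depths = [30, 28, 35, 31]
backend = LocalBackend()
count = backend.submit(make_step("count_samples", len, depths))
mean = backend.submit(make_step("mean_depth",
                                lambda d: sum(d) / len(d), depths))
print(count, mean)  # 4 31.0
```

Because only the backend knows about clusters, grids or clouds, the same pipeline definition can run unchanged on a laptop in a local hub or on BigGRID in a central one.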

Attachments (1)
