Welcome to the 3rd Get Your Brain Together Hackathon!
Register now!
What?
Purpose
The Get Your Brain Together hackathons bring together neuroimage data
generators, image registration researchers, and neurodata compute
infrastructure providers for a hands-on, collaborative event. This community
collaboration aims to create reproducible, open source resources that enable
discovery of the structure and function of brains.
This hackathon will focus on advancing OME-Zarr spatial transformations.
Motivation
OME-Zarr is a cloud-optimized bioimaging file format with international community support and broad adoption across neuroscience. The current standard supports large-scale bioimages with spatial metadata. The coordinate transformations draft provides first-class support for spatial transformations in OME-Zarr (a sketch of the metadata appears after the list below), which is vitally important for neuroimaging and broader scientific imaging practices to enable:
- Reproducibility and Consistency: Supporting spatial transformations explicitly in a file format ensures that transformations are applied consistently across different platforms and applications. This FAIR capability is a cornerstone of scientific research, and having standardized formats and tools facilitates verification of results by independent researchers.
- Integration with Analysis Workflows: Having spatial transformations as a first-class citizen within file formats allows for seamless integration with various image analysis workflows. Registration transformations can be used in subsequent image analysis steps without requiring additional conversion.
- Efficiency and Accuracy: Storing transformations within the file format avoids the need for re-sampling each time the data is processed. This reduces sampling errors and preserves the accuracy of subsequent analyses. Standardization enables on-demand transformation, critical for the massive volumes collected by modern microscopy techniques.
- Flexibility in Analysis: A file format that natively supports spatial transformations allows researchers to apply, modify, or reverse transformations as needed for different analysis purposes. This flexibility is critical for tasks such as longitudinal studies, multi-modal imaging, and comparative analysis across different subjects or experimental conditions.
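To make this concrete, here is a minimal sketch of coordinate transformation metadata in an OME-Zarr multiscales document. The scale and translation entries follow the released v0.4 specification; richer types such as affine transforms belong to the draft, and their exact field layout may still change, so treat this as illustrative rather than normative.

```python
# Minimal sketch of OME-Zarr multiscales metadata with coordinate
# transformations, per the released v0.4 specification.
import json

multiscales = {
    "multiscales": [{
        "version": "0.4",
        "axes": [
            {"name": "z", "type": "space", "unit": "micrometer"},
            {"name": "y", "type": "space", "unit": "micrometer"},
            {"name": "x", "type": "space", "unit": "micrometer"},
        ],
        "datasets": [{
            "path": "0",
            # v0.4 allows only "scale" and "translation" here; the draft
            # under discussion adds first-class types such as "affine".
            "coordinateTransformations": [
                {"type": "scale", "scale": [2.0, 0.5, 0.5]},
                {"type": "translation", "translation": [10.0, 0.0, 0.0]},
            ],
        }],
    }]
}

print(json.dumps(multiscales, indent=2))
```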
Agenda
The hackathon is structured into three key components:
- The first day features tutorial sessions covering the application needs for coordinate transformations, the mathematical principles involved, and the computational standards and tools currently available in the open-source ecosystem.
- On the second day, small working groups will review and propose enhancements to the current coordinate transformations draft and relevant neuroimaging additions.
- The third day will be dedicated to hands-on activities, during which participants will implement and apply the proposed improvements to the standards.
When, where, how much?
- Dates:
- Friday, July 26th - Sunday, July 28th, 2024
- Attend all or part of one of the days
- Details in the calendar below
- Location: The third hackathon will be a hybrid in-person and online event, held:
If travelling to attend in person, nearby hotels include:
How does it work?
Before the Hackathon
Who can attend?
Get Your Brain Together hackathons are open to all and publicly advertised. Email announcements are sent to the mailing list.
Tutorials
Tutorials Video Recording
Tutorial Transcript
Friday 7/26
- Giotto Spatial Transforms - 9:00 AM ET - Jiaji Chen (George) - Boston University
- Spatial transformations of data will become increasingly important because performing spatial analyses across any two sections of tissue from the same block requires the data to be spatially aligned into a common coordinate space. Minute differences during the sectioning process, from the cutting motion to how long an FFPE section was floated, can leave even neighboring sections distorted when compared side-by-side.
- These differences make it difficult to assemble multislice and/or cross-platform multimodal datasets into a cohesive 3D volume. The solution is to perform registration across either the dataset images or the expression information. Based on the registration results, both the raster images and the vector feature and polygon information can be aligned into a continuous whole.
- Ideally this registration will be a free deformation based on sets of control points or a deformation matrix; however, affine transforms already provide a good approximation. In either case, the transform or deformation applied must work in the same way across both raster and vector information.
- Giotto provides spatial classes and methods for easy manipulation of data with 2D affine transformations. These functionalities are all available from GiottoClass. A Python sketch of applying one affine to both raster and vector data follows.
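As an illustration of the last point (a sketch in Python rather than Giotto's R API), one forward affine can be applied directly to vector coordinates, while raster resampling uses its inverse:

```python
import numpy as np
from scipy.ndimage import affine_transform

# One forward 2D affine: rotate by 15 degrees, then translate.
theta = np.deg2rad(15.0)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([5.0, 3.0])

# Vector data (e.g., cell centroids or polygon vertices): apply directly.
points = np.array([[10.0, 12.0], [40.0, 8.0]])
points_out = points @ A.T + t

# Raster data: scipy maps *output* coordinates back to *input* coordinates,
# so the resampling uses the inverse of the same forward transform.
image = np.random.rand(64, 64)
A_inv = np.linalg.inv(A)
image_out = affine_transform(image, A_inv, offset=-A_inv @ t)
```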
- MONAI Lazy Transforms and Geometric Transforms Proposal Discussion - 9:30 AM ET - Benjamin Murray, King’s College London
- A discussion of how MONAI defers the computation of multiple coordinate transformations and of the proposal for Geometric Transforms; a minimal sketch of the lazy-composition idea follows.
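This is not MONAI's actual API, but a minimal NumPy sketch of the idea behind deferred (lazy) transforms: pending affines are composed as matrices, and the image is resampled only once at the end rather than interpolated after every step:

```python
import numpy as np
from scipy.ndimage import affine_transform

def homogeneous(linear, translation):
    """Pack a 2x2 linear part and a translation into a 3x3 matrix."""
    H = np.eye(3)
    H[:2, :2] = linear
    H[:2, 2] = translation
    return H

scale = homogeneous(np.diag([0.5, 0.5]), [0.0, 0.0])
shift = homogeneous(np.eye(2), [4.0, -2.0])

# "Lazy" evaluation: accumulate pending transforms symbolically...
pending = shift @ scale

# ...then resample once, avoiding the blur of repeated interpolation.
image = np.random.rand(32, 32)
inv = np.linalg.inv(pending)
result = affine_transform(image, inv[:2, :2], offset=inv[:2, 2])
```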
- An implementation decoupling the storage representation from the in-memory representation - 10:00 AM ET - Luca Marconato, EMBL
- Within the SpatialData library (a Python package for representing and processing spatial molecular datasets), we needed a way to represent vector and raster data across coordinate systems and store affine and non-linear coordinate transformations between them. We also needed to store this information in a language-agnostic way, so we decided to rely on the NGFF specification to represent the data.
- Our first implementation was in the form of classes modeled closely after the NGFF design. While optimal for read/write operations, we soon realized that our API requirements needed to differentiate between the on-disk and in-memory representation.
- In the tutorial, we will discuss the lessons learned, the “selling points” and limitations of our APIs, and the “behind the scenes”, showing some technical details of our current implementation. We will also discuss a planned refactoring that will create a bridge between NGFF transformations and the xarray system of representing data coordinates. A hypothetical sketch of the storage/in-memory split follows.
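A hypothetical sketch of the decoupling described above (these are not SpatialData's actual classes, and the on-disk field layout of the draft's affine type may differ): the in-memory form is a dense matrix convenient for computation, while the on-disk form is a language-agnostic, JSON-serializable dict:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Affine:
    """In-memory representation: a homogeneous matrix, convenient for math."""
    matrix: np.ndarray  # shape (ndim + 1, ndim + 1)

    def to_ngff_dict(self) -> dict:
        # On-disk representation: rows of the matrix without the constant
        # homogeneous bottom row, ready to serialize as JSON.
        return {"type": "affine", "affine": self.matrix[:-1, :].tolist()}

    @classmethod
    def from_ngff_dict(cls, d: dict) -> "Affine":
        rows = np.asarray(d["affine"], dtype=float)
        bottom = np.zeros((1, rows.shape[1]))
        bottom[0, -1] = 1.0
        return cls(np.vstack([rows, bottom]))

# Round trip: write to the storage form and read it back.
roundtrip = Affine.from_ngff_dict(Affine(np.eye(4)).to_ngff_dict())
```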
- napari-spatialdata - 10:30 AM ET - Wouter-Michiel Adrien Maria Vierdag, EMBL
- BigWarp and OME-Zarr: a match made in Fiji - 11:00 AM ET - John Bogovic, Janelia Research Campus
- BigWarp is a Fiji / Java tool for manual, deformable 2D and 3D image registration. This tutorial will highlight the ways BigWarp uses the draft transformation specification for import and export. In particular, I will show a use case where, given a moving image, a target image, and a set of transformations, BigWarp automatically determines which transformations to apply, in what direction (forward or inverse), and in what order; a toy sketch of this resolution step follows. We will discuss how the OME-Zarr transformation specification enables this functionality. If there is time and interest, I will show how BigWarp interpolates two transformations using a mask image, and how decomposing transformations into parts results in nicer behavior.
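A hypothetical sketch of that resolution step: given transforms that name their input and output coordinate systems (as in the draft specification), a breadth-first search can find which transforms to apply, in which direction, and in what order:

```python
from collections import deque

# Toy registry: each transform names its input and output coordinate systems.
transforms = [
    {"name": "affine0", "input": "moving", "output": "common"},
    {"name": "warp1", "input": "target", "output": "common"},
]

def find_path(src, dst, transforms):
    """Return a list of (transform name, apply forward?) steps from src to dst."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        space, path = queue.popleft()
        if space == dst:
            return path
        for t in transforms:
            # A transform can be traversed forward (input -> output)
            # or, if invertible, backward (output -> input).
            for here, nxt, fwd in ((t["input"], t["output"], True),
                                   (t["output"], t["input"], False)):
                if here == space and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [(t["name"], fwd)]))
    raise ValueError(f"no path from {src} to {dst}")

print(find_path("moving", "target", transforms))
# [('affine0', True), ('warp1', False)]: affine0 forward, then warp1 inverted.
```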
- Making Meaningful (Mouse) Mappings - 11:30 AM ET - Nick Tustison - University of Virginia
- The Advanced Normalization Tools Ecosystem (ANTsX) is a comprehensive open-source software toolkit for generalized quantitative imaging with applicability to multiple organ systems, modalities, and animal species. In this tutorial, we illustrate the utility of ANTsX for generating precision spatial mappings of the mouse brain. Specifically, we discuss two recently developed ANTsX tools:
- The modeling of a velocity flow-based mapping spanning the spatiotemporal domain of a longitudinal trajectory, which we apply to the Developmental Common Coordinate Framework, a longitudinal atlas demonstrating mouse development.
- An automated structural morphological pipeline for determining volumetric and cortical thickness measurements, analogous to the well-utilized ANTsX pipeline for human neuroanatomical structural morphology, which illustrates a general open-source framework for tailored brain parcellations.
Fully functional examples of the above are provided at a dedicated GitHub repository meant to accompany our recent preprint. The tutorial will be given using the ANTsPy toolkit, with self-contained tutorials available; a minimal ANTsPy sketch follows.
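A minimal ANTsPy sketch of this kind of mapping (file names are placeholder paths, not from the tutorial): ants.registration computes a symmetric diffeomorphic, velocity-field-based transform, and ants.apply_transforms reuses it for other images:

```python
import ants

fixed = ants.image_read("template.nii.gz")   # e.g., an atlas timepoint
moving = ants.image_read("subject.nii.gz")

# "SyN" requests symmetric diffeomorphic (velocity-field-based) registration.
reg = ants.registration(fixed=fixed, moving=moving, type_of_transform="SyN")

# Reuse the stored forward transforms to map a label image into fixed space,
# with nearest-neighbor interpolation to preserve label values.
labels = ants.image_read("subject_labels.nii.gz")
warped_labels = ants.apply_transforms(
    fixed=fixed,
    moving=labels,
    transformlist=reg["fwdtransforms"],
    interpolator="nearestNeighbor",
)
```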
- Your Transform Object needs Metadata - 1:00 PM ET - Thomas Hastings Greer - University of North Carolina-Chapel Hill
- Downstream medical application code needs to handle image spacing, image orientation, and unusual image origins correctly; in particular, landmarks, volume and size measurements, and mesh data are always in physical coordinates.
- If we handle this metadata by hand, scaling, transposing, and shifting deformation fields for each downstream task, we will make mistakes and spend a lot of time. Instead, we need a metadata-aware representation of image registration results at application boundaries. I propose that itk.Transform objects are the current best option; a brief sketch follows.
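A brief itk (Python) sketch of the point above; "image.nii.gz" is a placeholder path. Images carry spacing, origin, and direction, and itk.Transform objects operate in the physical space those define:

```python
import itk

image = itk.imread("image.nii.gz")
print(image.GetSpacing(), image.GetOrigin(), image.GetDirection())

# Voxel indices are mapped to physical coordinates through spacing, origin,
# and direction, so measurements never assume unit, axis-aligned voxels.
physical_point = image.TransformIndexToPhysicalPoint([10, 20, 15])

# itk.Transform objects also act in physical space: a 2.5 mm x translation.
transform = itk.TranslationTransform[itk.D, 3].New()
transform.SetOffset([2.5, 0.0, 0.0])
moved_point = transform.TransformPoint(physical_point)
```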
- Transformations in the Visualization Toolkit (VTK) - 1:45 PM ET - Andras Lasso - Queen's University
Code of Conduct
Participants and contributors are expected to adhere to the ITK Code of Conduct.
Acknowledgements
This hackathon is supported by the National Institute of Mental Health (NIMH) of the National Institutes of Health (NIH) under BRAIN Initiative award number 1RF1MH126732.