Improving Registration via Registration:
Semiautomatic Landmark Localization

This notebook is intentionally missing code; it is an example of a homework assignment using SimpleITK.

Localization of anatomical landmarks or fiducial markers in medical images is a common task, both for initializing intensity-based registration between two images and for registration between the image space and physical space in computer-assisted interventions.

In this notebook our goal is to rigidly register two images using manually localized point pairs. You will then refine the initial result by improving the point localization in the moving image, via registration between each of the landmark regions in the fixed and moving images.

Manual Localization

  • Advantages: identification (coarse localization) of the landmarks or fiducials is extremely robust. Humans readily identify salient features in the presence of noise and under a variety of spatial transformations, including large deformations.
  • Disadvantages: exhibits low accuracy and precision.

Automatic Localization

  • Advantages: highly precise, and with a good coarse localization it is also highly accurate.
  • Disadvantages: prone to failure in the presence of noise and requires knowledge of the possible spatial transformations the landmark may undergo.

Semiautomatic Localization

A combination of manual and automatic components to obtain a robust (human contribution), accurate and precise (machine contribution) localization.
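To make the idea concrete, here is a minimal sketch of such a refinement step: a small region around the manually localized point in the fixed image is registered to the moving image, starting from the manual estimate. The function name refine_landmark, the ROI size, and the optimizer settings are illustrative assumptions, not the assignment's required API.

import SimpleITK as sitk

def refine_landmark(fixed_image, moving_image, fixed_point, moving_point_estimate, roi_radius_mm=10.0):
    # Extract a small fixed-image region centered on the manually localized point
    # (boundary clamping omitted for brevity).
    roi_size = [int(2 * roi_radius_mm / s) + 1 for s in fixed_image.GetSpacing()]
    center_index = fixed_image.TransformPhysicalPointToIndex(fixed_point)
    roi_start = [int(i - (sz - 1) / 2) for i, sz in zip(center_index, roi_size)]
    fixed_roi = sitk.RegionOfInterest(fixed_image, roi_size, roi_start)

    # Initialize with the translation implied by the manual point pair, so the
    # local registration only corrects the residual localization error.
    initial_transform = sitk.TranslationTransform(3, [m - f for m, f in zip(moving_point_estimate, fixed_point)])

    registration_method = sitk.ImageRegistrationMethod()
    registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    registration_method.SetInterpolator(sitk.sitkLinear)
    registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
    registration_method.SetOptimizerScalesFromPhysicalShift()
    registration_method.SetInitialTransform(initial_transform, inPlace=False)
    final_transform = registration_method.Execute(fixed_roi, moving_image)

    # The refined moving-image point is the fixed point mapped by the local transform.
    return final_transform.TransformPoint(fixed_point)

Restricting the metric to a small ROI is what makes this a refinement rather than a global registration: the human supplies a robust initial guess, and the machine improves its accuracy and precision locally.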

In [1]:
# To use interactive plots (mouse clicks, zooming, panning) we use the notebook backend. We also want our
# graphs embedded in the notebook; this combination is obtained with the magic "%matplotlib notebook".
%matplotlib notebook

import numpy as np
import SimpleITK as sitk
import registration_utilities as ru
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
import gui

Load Data

We will be working with the training data from the Retrospective Image Registration Evaluation (RIRE) project. This data consists of a CT and MR of the same patient with a known rigid transformation between the two. We create a dense random point set in the CT image's coordinate system and transform it to the MR coordinate system. This set of points serves as our reference data for registration evaluation.

To ensure that your semiautomatic localization approach can deal with clinical data you should evaluate it using the data as is, and rotated. We test the extreme case where the data is rotated by 180$^\circ$ (boolean variable "rotate") so that in one scan the patient is in the supine position and in the other in the prone position.

We will start by loading our data and looking at the distances between corresponding points prior to registration, which illustrates the spatial variability of the errors.

In [2]:
fixed_image =  sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32)
moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32) 
fixed_fiducial_points, moving_fiducial_points = ru.load_RIRE_ground_truth(fdata("ct_T1.standard"))

# In the original data both images have the same orientation (patient supine). The approach should also work
# when the images have different orientations, in the extreme case a 180 degree rotation between them.

rotate = True

if rotate:
    rotation_center = moving_image.TransformContinuousIndexToPhysicalPoint([(index-1)/2.0 for index in moving_image.GetSize()])    
    transform_moving = sitk.Euler3DTransform(rotation_center, 0, 0, np.pi, (0,0,0))
    
    resample = sitk.ResampleImageFilter()
    resample.SetReferenceImage(moving_image)
    resample.SetInterpolator(sitk.sitkLinear)    
    resample.SetTransform(transform_moving)
    moving_image = resample.Execute(moving_image)
    for i,p in enumerate(moving_fiducial_points):
        moving_fiducial_points[i] = transform_moving.TransformPoint(p)

        
# Compute the rigid transformation defined by the two point sets. Flatten the tuple lists 
# representing the points. The LandmarkBasedTransformInitializer expects the point coordinates 
# in one flat list [x1, y1, z1, x2, y2, z2...].
fixed_fiducial_points_flat = [c for p in fixed_fiducial_points for c in p]        
moving_fiducial_points_flat = [c for p in moving_fiducial_points for c in p]
reference_transform = sitk.LandmarkBasedTransformInitializer(sitk.VersorRigid3DTransform(), 
                                                             fixed_fiducial_points_flat, 
                                                             moving_fiducial_points_flat)

# Generate a reference dataset from the reference transformation 
# (corresponding points in the fixed and moving images).
fixed_points = ru.generate_random_pointset(image=fixed_image, num_points=100)
moving_points = [reference_transform.TransformPoint(p) for p in fixed_points]    

# Compute the TRE prior to registration.
pre_errors_mean, pre_errors_std, _, pre_errors_max, pre_errors = ru.registration_errors(sitk.Euler3DTransform(), fixed_points, moving_points, display_errors=True)
print('Before registration, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(pre_errors_mean, pre_errors_std, pre_errors_max))        
Fetching training_001_ct.mha
Fetching training_001_mr_T1.mha
Fetching ct_T1.standard
Before registration, errors (TRE) in millimeters, mean(std): 259.93(101.43), max: 441.36
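
For context, the target registration error (TRE) reported by ru.registration_errors is the Euclidean distance between each mapped fixed point and its corresponding reference moving point. A minimal sketch of that computation, omitting the visualization and summary statistics the utility also provides:

import numpy as np

def target_registration_errors(tx, fixed_point_list, moving_point_list):
    # TRE per point pair: distance (mm) between the transformed fixed point
    # and its corresponding moving point.
    return [np.linalg.norm(np.array(tx.TransformPoint(p_fixed)) - np.array(p_moving))
            for p_fixed, p_moving in zip(fixed_point_list, moving_point_list)]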

Manual Landmark Localization

We now localize N (N>=3) landmarks in the two images. Note that you can zoom and pan the images; just remember to change the interaction mode from "edit" to "view".

NOTE: In edit mode, the GUI will force you to enter corresponding points by disabling the option for consecutively localizing multiple (>2) points in the same image. In view mode, point localization is disabled, which is useful for zooming/panning (in edit mode, zooming/panning would also localize points due to the mouse button click).

In [3]:
point_acquisition_interface = gui.RegistrationPointDataAquisition(fixed_image, moving_image, fixed_window_level=(215,50))
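
Once you have localized the corresponding point pairs, the next step, left for you to implement, is to estimate the rigid transformation they define and evaluate it with the same TRE machinery used above. A minimal sketch, assuming the GUI exposes the localized points via a get_points() method (verify against your gui module):

fixed_manual_points, moving_manual_points = point_acquisition_interface.get_points()

# LandmarkBasedTransformInitializer expects flat coordinate lists [x1, y1, z1, x2, y2, z2...].
fixed_manual_flat = [c for p in fixed_manual_points for c in p]
moving_manual_flat = [c for p in moving_manual_points for c in p]
manual_transform = sitk.LandmarkBasedTransformInitializer(sitk.VersorRigid3DTransform(),
                                                          fixed_manual_flat,
                                                          moving_manual_flat)

# Evaluate the manual localization against the reference point set.
errors_mean, errors_std, _, errors_max, errors = ru.registration_errors(manual_transform, fixed_points, moving_points, display_errors=True)
print('After manual localization, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(errors_mean, errors_std, errors_max))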