SimpleITK Spatial Transformations
Summary:
Points are represented by vector-like data types: Tuple, Numpy array, List.
Matrices are represented by vector-like data types in row major order.
By default, transformations are initialized as the identity transform.
Angles are specified in radians; distances are specified in arbitrary but consistent units (nm, mm, m, km...).
All global transformations except translation are of the form: $$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$
Nomenclature (when printing your transformation):
- Matrix: the matrix $A$
- Center: the point $\mathbf{c}$
- Translation: the vector $\mathbf{t}$
- Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$
Bounded transformations, BSplineTransform and DisplacementFieldTransform, behave as the identity transform outside the defined bounds.
DisplacementFieldTransform:
- Initializing the DisplacementFieldTransform using an image requires that the image's pixel type be sitk.sitkVectorFloat64.
- Initializing the DisplacementFieldTransform using an image will "clear out" your image (your alias to the image will point to an empty, zero sized, image).
Composite transformations are applied in stack order (first added, last applied).
Transformation Types¶
This notebook introduces the transformation types supported by SimpleITK and illustrates how to "promote" transformations from a lower to higher parameter space (e.g. 3D translation to 3D rigid).
Class Name | Details |
---|---|
TranslationTransform | 2D or 3D, translation |
VersorTransform | 3D, rotation represented by a versor |
VersorRigid3DTransform | 3D, rigid transformation with rotation represented by a versor |
Euler2DTransform | 2D, rigid transformation with rotation represented by an Euler angle |
Euler3DTransform | 3D, rigid transformation with rotation represented by Euler angles |
Similarity2DTransform | 2D, composition of isotropic scaling and rigid transformation with rotation represented by an Euler angle |
Similarity3DTransform | 3D, composition of isotropic scaling and rigid transformation with rotation represented by a versor |
ScaleTransform | 2D or 3D, anisotropic scaling |
ScaleVersor3DTransform | 3D, rigid transformation with anisotropic scale added to the rotation matrix part (not composed as one would expect) |
ScaleSkewVersor3DTransform | 3D, rigid transformation with anisotropic scale and skew matrices added to the rotation matrix part (not composed as one would expect) |
ComposeScaleSkewVersor3DTransform | 3D, a composition of rotation $R$, scaling $S$, and shearing $K$, $A=RSK$ in addition to translation. |
AffineTransform | 2D or 3D, affine transformation |
BSplineTransform | 2D or 3D, deformable transformation represented by a sparse regular grid of control points |
DisplacementFieldTransform | 2D or 3D, deformable transformation represented as a dense regular grid of vectors |
CompositeTransform | 2D or 3D, stack of transformations concatenated via composition, last added, first applied |
Transform | 2D or 3D, parent/super-class for all transforms |
import SimpleITK as sitk
import numpy as np
import os
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
OUTPUT_DIR = "Output"
print(sitk.Version())
SimpleITK Version: 2.4.0 (ITK 5.4) Compiled: Aug 15 2024 01:21:37
Points in SimpleITK¶
Utility functions¶
A number of utility functions for dealing with point data in a uniform manner.
import numpy as np
def point2str(point, precision=1):
"""
Format a point for printing, based on specified precision with trailing zeros. Uniform printing for vector-like data
(tuple, numpy array, list).
Args:
point (vector-like): nD point with floating point coordinates.
precision (int): Number of digits after the decimal point.
Return:
String representation of the given point "xx.xxx yy.yyy zz.zzz...".
"""
return " ".join(f"{c:.{precision}f}" for c in point)
def uniform_random_points(bounds, num_points):
"""
Generate a random (uniform within bounds) nD point cloud. Dimension is based on the number of pairs in the bounds input.
Args:
bounds (list(tuple-like)): list where each tuple defines the coordinate bounds.
num_points (int): number of points to generate.
Returns:
list containing num_points numpy arrays whose coordinates are within the given bounds.
"""
internal_bounds = [sorted(b) for b in bounds]
# Generate rows for each of the coordinates according to the given bounds, stack into an array,
# and split into a list of points.
mat = np.vstack(
[np.random.uniform(b[0], b[1], num_points) for b in internal_bounds]
)
return list(mat[: len(bounds)].T)
def target_registration_errors(tx, point_list, reference_point_list):
"""
Distances between points transformed by the given transformation and their
location in another coordinate system. When the points are only used to evaluate
registration accuracy (not used in the registration) this is the target registration
error (TRE).
"""
return [
np.linalg.norm(np.array(tx.TransformPoint(p)) - np.array(p_ref))
for p, p_ref in zip(point_list, reference_point_list)
]
def print_transformation_differences(tx1, tx2):
"""
Check whether two transformations are "equivalent" in an arbitrary spatial region
either 3D or 2D, [x=(-10,10), y=(-100,100), z=(-1000,1000)]. This is just a sanity check,
as we are just looking at the effect of the transformations on a random set of points in
the region.
"""
if tx1.GetDimension() == 2 and tx2.GetDimension() == 2:
bounds = [(-10, 10), (-100, 100)]
elif tx1.GetDimension() == 3 and tx2.GetDimension() == 3:
bounds = [(-10, 10), (-100, 100), (-1000, 1000)]
else:
raise ValueError(
"Transformation dimensions mismatch, or unsupported transformation dimensionality"
)
num_points = 10
point_list = uniform_random_points(bounds, num_points)
tx1_point_list = [tx1.TransformPoint(p) for p in point_list]
differences = target_registration_errors(tx2, point_list, tx1_point_list)
print(
tx1.GetName()
+ "-"
+ tx2.GetName()
+ f":\tminDifference: {min(differences):.2f} maxDifference: {max(differences):.2f}"
)
In SimpleITK points can be represented by any vector-like data type. In Python these include Tuple, Numpy array, and List. In general Python will treat these data types differently, as illustrated by the print function below.
# SimpleITK points represented by vector-like data structures.
point_tuple = (9.0, 10.531, 11.8341)
point_np_array = np.array([9.0, 10.531, 11.8341])
point_list = [9.0, 10.531, 11.8341]
print(point_tuple)
print(point_np_array)
print(point_list)
# Uniform printing with specified precision.
precision = 2
print(point2str(point_tuple, precision))
print(point2str(point_np_array, precision))
print(point2str(point_list, precision))
(9.0, 10.531, 11.8341)
[ 9. 10.531 11.8341]
[9.0, 10.531, 11.8341]
9.00 10.53 11.83
9.00 10.53 11.83
9.00 10.53 11.83
Global Transformations¶
All global transformations except translation are of the form: $$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$
In ITK speak (when printing your transformation):
- Matrix: the matrix $A$
- Center: the point $\mathbf{c}$
- Translation: the vector $\mathbf{t}$
- Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$
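Since $T(\mathbf{0}) = A(\mathbf{0}-\mathbf{c}) + \mathbf{t} + \mathbf{c} = \mathbf{t} + \mathbf{c} - A\mathbf{c}$, the offset is simply where the transform maps the origin. A minimal sanity-check sketch (the center, angle and translation values below are arbitrary):
# Verify that the offset t + c - A*c is the image of the origin.
tx = sitk.Euler3DTransform()
tx.SetCenter((10.0, 20.0, 30.0))
tx.SetRotation(0.1, 0.2, 0.3)  # Euler angles in radians
tx.SetTranslation((1.0, 2.0, 3.0))
A = np.array(tx.GetMatrix()).reshape(3, 3)
c = np.array(tx.GetCenter())
t = np.array(tx.GetTranslation())
print(np.allclose(tx.TransformPoint((0.0, 0.0, 0.0)), t + c - A.dot(c)))  # True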
TranslationTransform¶
# A 3D translation. Note that you need to specify the dimensionality, as the sitk TranslationTransform
# represents both 2D and 3D translations.
dimension = 3
offset = (1, 2, 3) # offset can be any vector-like data
translation = sitk.TranslationTransform(dimension, offset)
print(translation)
itk::simple::TranslationTransform
 TranslationTransform (0x60000101c400)
   RTTI typeinfo:   itk::TranslationTransform<double, 3u>
   Reference Count: 1
   Modified Time: 2673
   Debug: Off
   Object Name:
   Observers:
     none
   Offset: [1, 2, 3]
# Transform a point and use the inverse transformation to get the original back.
point = [10, 11, 12]
transformed_point = translation.TransformPoint(point)
translation_inverse = translation.GetInverse()
print(
"original point: " + point2str(point) + "\n"
"transformed point: " + point2str(transformed_point) + "\n"
"back to original: "
+ point2str(translation_inverse.TransformPoint(transformed_point))
)
original point: 10.0 11.0 12.0
transformed point: 11.0 13.0 15.0
back to original: 10.0 11.0 12.0
Euler2DTransform¶
point = [10, 11]
rotation2D = sitk.Euler2DTransform()
rotation2D.SetTranslation((7.2, 8.4))
rotation2D.SetAngle(np.pi / 2)
print(
"original point: " + point2str(point) + "\n"
"transformed point: " + point2str(rotation2D.TransformPoint(point))
)
# Change the center of rotation so that it coincides with the point we want to
# transform. Why is this a unique configuration?
rotation2D.SetCenter(point)
print(
"original point: " + point2str(point) + "\n"
"transformed point: " + point2str(rotation2D.TransformPoint(point))
)
original point: 10.0 11.0
transformed point: -3.8 18.4
original point: 10.0 11.0
transformed point: 17.2 19.4
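This also answers the question above: with the center at the point itself, $T(\mathbf{p}) = A(\mathbf{p}-\mathbf{p}) + \mathbf{t} + \mathbf{p} = \mathbf{p} + \mathbf{t}$, so the result is the point plus the translation, regardless of the rotation angle. A short sketch, reusing rotation2D and point from the cell above:
# With the center at the transformed point the angle has no effect.
for angle in (0.0, np.pi / 4, np.pi / 2):
    rotation2D.SetAngle(angle)
    print(point2str(rotation2D.TransformPoint(point)))  # always 17.2 19.4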
VersorTransform¶
# Rotation only, parametrized by Versor (vector part of unit quaternion),
# quaternion defined by rotation of theta around axis n:
# q = [n*sin(theta/2), cos(theta/2)]
# 180 degree rotation around z axis
# Use a versor:
rotation1 = sitk.VersorTransform([0, 0, 1, 0])
# Use axis-angle:
rotation2 = sitk.VersorTransform((0, 0, 1), np.pi)
# Use a matrix:
rotation3 = sitk.VersorTransform()
rotation3.SetMatrix([-1, 0, 0, 0, -1, 0, 0, 0, 1])
point = (10, 100, 1000)
p1 = rotation1.TransformPoint(point)
p2 = rotation2.TransformPoint(point)
p3 = rotation3.TransformPoint(point)
print(
"Points after transformation:\np1="
+ str(p1)
+ "\np2="
+ str(p2)
+ "\np3="
+ str(p3)
)
Points after transformation:
p1=(-10.0, -100.0, 1000.0)
p2=(-10.000000000000012, -100.0, 1000.0)
p3=(-10.0, -100.0, 1000.0)
We applied the "same" transformation to the same point, so why are the results slightly different for the second initialization method?
This is where theory meets practice. Using the axis-angle initialization method involves trigonometric functions which on a fixed precision machine lead to these slight differences. In many cases this is not an issue, but it is something to remember. From here on we will sweep it under the rug (printing with a more reasonable precision).
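A quick way to confirm that the differences above are only floating point noise is to compare the results with a tolerance, reusing p1, p2 and p3 from the cell above:
# All three initialization methods agree to within floating point tolerance.
print(np.allclose(p1, p2) and np.allclose(p1, p3))  # True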
Translation to Rigid [3D]¶
Copy the translational component.
dimension = 3
t = (1, 2, 3)
translation = sitk.TranslationTransform(dimension, t)
# Only need to copy the translational component.
rigid_euler = sitk.Euler3DTransform()
rigid_euler.SetTranslation(translation.GetOffset())
rigid_versor = sitk.VersorRigid3DTransform()
rigid_versor.SetTranslation(translation.GetOffset())
# Sanity check to make sure the transformations are equivalent.
bounds = [(-10, 10), (-100, 100), (-1000, 1000)]
num_points = 10
point_list = uniform_random_points(bounds, num_points)
transformed_point_list = [translation.TransformPoint(p) for p in point_list]
# Draw the original and transformed points, include the label so that we
# can modify the plots without requiring explicit changes to the legend.
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
orig = ax.scatter(
list(np.array(point_list).T)[0],
list(np.array(point_list).T)[1],
list(np.array(point_list).T)[2],
marker="o",
color="blue",
label="Original points",
)
transformed = ax.scatter(
list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
list(np.array(transformed_point_list).T)[2],
marker="^",
color="red",
label="Transformed points",
)
plt.legend(loc=(0.0, 1.0))
euler_errors = target_registration_errors(
rigid_euler, point_list, transformed_point_list
)
versor_errors = target_registration_errors(
rigid_versor, point_list, transformed_point_list
)
print(f"Euler\tminError: {min(euler_errors):.2f} maxError: {max(euler_errors):.2f}")
print(f"Versor\tminError: {min(versor_errors):.2f} maxError: {max(versor_errors):.2f}")
Euler minError: 0.00 maxError: 0.00
Versor minError: 0.00 maxError: 0.00
Rotation to Rigid [3D]¶
Copy the matrix or versor and center of rotation.
rotationCenter = (10, 10, 10)
rotation = sitk.VersorTransform([0, 0, 1, 0], rotationCenter)
rigid_euler = sitk.Euler3DTransform()
rigid_euler.SetMatrix(rotation.GetMatrix())
rigid_euler.SetCenter(rotation.GetCenter())
rigid_versor = sitk.VersorRigid3DTransform()
rigid_versor.SetRotation(rotation.GetVersor())
# rigid_versor.SetCenter(rotation.GetCenter()) #intentional error
# Sanity check to make sure the transformations are equivalent.
bounds = [(-10, 10), (-100, 100), (-1000, 1000)]
num_points = 10
point_list = uniform_random_points(bounds, num_points)
transformed_point_list = [rotation.TransformPoint(p) for p in point_list]
euler_errors = target_registration_errors(
rigid_euler, point_list, transformed_point_list
)
versor_errors = target_registration_errors(
rigid_versor, point_list, transformed_point_list
)
# Draw the points transformed by the original transformation and after transformation
# using the incorrect transformation, to illustrate the effect of the center of rotation.
from mpl_toolkits.mplot3d import Axes3D
incorrect_transformed_point_list = [rigid_versor.TransformPoint(p) for p in point_list]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
orig = ax.scatter(
list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
list(np.array(transformed_point_list).T)[2],
marker="o",
color="blue",
label="Rotation around specific center",
)
transformed = ax.scatter(
list(np.array(incorrect_transformed_point_list).T)[0],
list(np.array(incorrect_transformed_point_list).T)[1],
list(np.array(incorrect_transformed_point_list).T)[2],
marker="^",
color="red",
label="Rotation around origin",
)
plt.legend(loc=(0.0, 1.0))
print(f"Euler\tminError: {min(euler_errors):.2f} maxError: {max(euler_errors):.2f}")
print(f"Versor\tminError: {min(versor_errors):.2f} maxError: {max(versor_errors):.2f}")
Euler minError: 0.00 maxError: 0.00
Versor minError: 28.28 maxError: 28.28
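The constant Versor error is expected: the correct mapping is $T(\mathbf{x}) = A\mathbf{x} + (I-A)\mathbf{c}$ while the transform missing the center computes $T'(\mathbf{x}) = A\mathbf{x}$, so every point is off by the constant vector $(I-A)\mathbf{c}$. A quick check, reusing the rotation transform from the cell above:
# Norm of the constant error vector caused by omitting the center.
A = np.array(rotation.GetMatrix()).reshape(3, 3)
c = np.array(rotation.GetCenter())
print(np.linalg.norm((np.eye(3) - A).dot(c)))  # ~28.28, matching the Versor min/max error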
Similarity [2D]¶
When the center of the similarity transformation is not at the origin the effect of the transformation is not what most of us expect. This is readily visible if we limit the transformation to scaling: $T(\mathbf{x}) = s\mathbf{x}-s\mathbf{c} + \mathbf{c}$. Changing the transformation's center results in scale + translation.
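A small numeric sketch of this formula (the scale, center and point values are arbitrary): with $s=2$ and $\mathbf{c}=(3,3)$ the point $(1,1)$ maps to $2\cdot(1,1) - 2\cdot(3,3) + (3,3) = (-1,-1)$, i.e. a scaling combined with a translation by $(1-s)\mathbf{c}$.
sim = sitk.Similarity2DTransform()
sim.SetScale(2.0)
sim.SetCenter((3.0, 3.0))
# s*x - s*c + c = (-1, -1)
print(sim.TransformPoint((1.0, 1.0)))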
def display_center_effect(x, y, tx, point_list, xlim, ylim):
tx.SetCenter((x, y))
transformed_point_list = [tx.TransformPoint(p) for p in point_list]
plt.scatter(
list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
marker="^",
color="red",
label="transformed points",
)
plt.scatter(
list(np.array(point_list).T)[0],
list(np.array(point_list).T)[1],
marker="o",
color="blue",
label="original points",
)
plt.xlim(xlim)
plt.ylim(ylim)
plt.legend(loc=(0.25, 1.01))
# 2D square centered on (0,0)
points = [
np.array((-1.0, -1.0)),
np.array((-1.0, 1.0)),
np.array((1.0, 1.0)),
np.array((1.0, -1.0)),
]
# Scale by 2
similarity = sitk.Similarity2DTransform()
similarity.SetScale(2)
interact(
display_center_effect,
x=(-10, 10),
y=(-10, 10),
tx=fixed(similarity),
point_list=fixed(points),
xlim=fixed((-10, 10)),
ylim=fixed((-10, 10)),
);
Rigid to Similarity [3D]¶
Copy the translation, center, and matrix or versor.
rotation_center = (100, 100, 100)
theta_x = 0.0
theta_y = 0.0
theta_z = np.pi / 2.0
translation = (1, 2, 3)
rigid_euler = sitk.Euler3DTransform(
rotation_center, theta_x, theta_y, theta_z, translation
)
similarity = sitk.Similarity3DTransform()
similarity.SetMatrix(rigid_euler.GetMatrix())
similarity.SetTranslation(rigid_euler.GetTranslation())
similarity.SetCenter(rigid_euler.GetCenter())
# Apply the transformations to the same set of random points and compare the results
# (see utility functions at top of notebook).
print_transformation_differences(rigid_euler, similarity)
Euler3DTransform-Similarity3DTransform: minDifference: 0.00 maxDifference: 0.00
Compose Scale Skew Versor¶
Composition of rotation $R$ , scaling $S$ , and shearing $K$ in addition to translation: $$ T(x)=RSK(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0 & 0 & 0 \\ 0 & s_1 & 0 \\ 0 & 0 & s_2 \end{array}\right]\;\; \textrm{and } K = \left[\begin{array}{ccc} 1 & k_0 & k_1 \\ 0 & 1 & k_2 \\ 0 & 0 & 1 \end{array}\right]$$
rotation_center = [100, 100, 100]
axis = [0, 0, 1]
angle = np.pi / 2.0
translation = [1, 2, 3]
scale_factors = [3.14, 1.59, 2.65]
skew = [4, 5, 6]
compose_scale_skew_rigid1 = sitk.ComposeScaleSkewVersor3DTransform(
scale_factors, skew, axis, angle, translation, rotation_center
)
# The versor is n*sin(theta/2) for a unit norm axis
versor = [a * np.sin(angle / 2.0) / np.linalg.norm(axis) for a in axis]
compose_scale_skew_rigid2 = sitk.ComposeScaleSkewVersor3DTransform()
# Parameter order is versor, translation, scale, skew
compose_scale_skew_rigid2.SetParameters(versor + translation + scale_factors + skew)
# Compare the two transformations, their parameters and their effect on a set of
# random points (utility function top of notebook)
print(f"Transform1 parameters: {compose_scale_skew_rigid1.GetParameters()}")
print(f"Transform2 parameters: {compose_scale_skew_rigid2.GetParameters()}")
print_transformation_differences(compose_scale_skew_rigid1, compose_scale_skew_rigid2)
Transform1 parameters: (0.0, 0.0, 0.7071067811865475, 1.0, 2.0, 3.0, 3.14, 1.59, 2.65, 4.0, 5.0, 6.0)
Transform2 parameters: (0.0, 0.0, 0.7071067811865475, 1.0, 2.0, 3.0, 3.14, 1.59, 2.65, 4.0, 5.0, 6.0)
ComposeScaleSkewVersor3DTransform-ComposeScaleSkewVersor3DTransform: minDifference: 3277.22 maxDifference: 3277.22
Why don't the two transformations have the same effect on the point set even though their parameters are the same? What parameters did we forget to set for the second transformation?
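One way to answer the question (treat the following sketch as a spoiler): the fixed parameters, i.e. the center of rotation, were never set on the second transformation. Assuming the transform exposes the usual SetCenter method, setting it should make the two transformations agree:
# Set the missing center of rotation (the fixed parameters).
compose_scale_skew_rigid2.SetCenter(rotation_center)
print_transformation_differences(compose_scale_skew_rigid1, compose_scale_skew_rigid2)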
Similarity to Affine [3D]¶
Copy the translation, center and matrix.
rotation_center = (100, 100, 100)
axis = (0, 0, 1)
angle = np.pi / 2.0
translation = (1, 2, 3)
scale_factor = 2.0
similarity = sitk.Similarity3DTransform(
scale_factor, axis, angle, translation, rotation_center
)
affine = sitk.AffineTransform(3)
affine.SetMatrix(similarity.GetMatrix())
affine.SetTranslation(similarity.GetTranslation())
affine.SetCenter(similarity.GetCenter())
# Apply the transformations to the same set of random points and compare the results
# (see utility functions at top of notebook).
print_transformation_differences(similarity, affine)
Similarity3DTransform-AffineTransform: minDifference: 0.00 maxDifference: 0.00
Scale Transform¶
Just as was the case for the similarity transformation above, when the transformation's center is not at the origin, instead of a pure anisotropic scaling we also have translation ($T(\mathbf{x}) = S\mathbf{x}-S\mathbf{c} + \mathbf{c}$, where $S$ is the diagonal matrix whose entries are the scale factors $\mathbf{s}$).
# 2D square centered on (0,0).
points = [
np.array((-1.0, -1.0)),
np.array((-1.0, 1.0)),
np.array((1.0, 1.0)),
np.array((1.0, -1.0)),
]
# Scale by half in x and 2 in y.
scale = sitk.ScaleTransform(2, (0.5, 2))
# Interactively change the location of the center.
interact(
display_center_effect,
x=(-10, 10),
y=(-10, 10),
tx=fixed(scale),
point_list=fixed(points),
xlim=fixed((-10, 10)),
ylim=fixed((-10, 10)),
);
Scale Versor¶
This is not what you would expect from the name (composition of anisotropic scaling and rigid). This is: $$T(x) = (R+S)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S= \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]$$
There is no natural way of "promoting" the similarity transformation to this transformation.
scales = (0.5, 0.7, 0.9)
translation = (1, 2, 3)
axis = (0, 0, 1)
angle = 0.0
scale_versor = sitk.ScaleVersor3DTransform(scales, axis, angle, translation)
print(scale_versor)
itk::simple::ScaleVersor3DTransform
 ScaleVersor3DTransform (0x7f9549af2e80)
   RTTI typeinfo:   itk::ScaleVersor3DTransform<double>
   Reference Count: 1
   Modified Time: 2770
   Debug: Off
   Object Name:
   Observers:
     none
   Matrix:
     0.5 0 0
     0 0.7 0
     0 0 0.9
   Offset: [1, 2, 3]
   Center: [0, 0, 0]
   Translation: [1, 2, 3]
   Inverse:
     2 0 0
     0 1.42857 0
     0 0 1.11111
   Singular: 0
   Versor: [ 0, 0, 0, 1 ]
   Scales: [0.5, 0.7, 0.9]
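A sketch verifying the additive structure for a non-zero angle (the angle value below is arbitrary; scales, axis and translation are reused from the cell above): per the formula, the matrix should equal $R+S$, not the product of a rotation and a scaling matrix.
angle = np.pi / 3
sv = sitk.ScaleVersor3DTransform(scales, axis, angle, translation)
R = np.array(sitk.VersorTransform(axis, angle).GetMatrix()).reshape(3, 3)
S = np.diag(np.array(scales) - 1.0)
M = np.array(sv.GetMatrix()).reshape(3, 3)
# Additive, not multiplicative: M equals R + S, not R.dot(np.diag(scales)).
print(np.allclose(M, R + S))  # True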
Scale Skew Versor¶
Again, not what you expect based on the name, this is not a composition of transformations. This is: $$T(x) = (R+S+K)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]\;\; \textrm{and } K = \left[\begin{array}{ccc} 0 & k_0 & k_1 \\ k_2 & 0 & k_3 \\ k_4 & k_5 & 0 \end{array}\right]$$
In practice this is an over-parametrized version of the affine transform, 15 (scale, skew, versor, translation) vs. 12 parameters (matrix, translation).
scale = (2, 2.1, 3)
skew = np.linspace(
start=0.0, stop=1.0, num=6
) # six equally spaced values in [0,1], an arbitrary choice
translation = (1, 2, 3)
versor = (0, 0, 0, 1.0)
scale_skew_versor = sitk.ScaleSkewVersor3DTransform(scale, skew, versor, translation)
print(scale_skew_versor)
itk::simple::ScaleSkewVersor3DTransform
 ScaleSkewVersor3DTransform (0x7f951977ef70)
   RTTI typeinfo:   itk::ScaleSkewVersor3DTransform<double>
   Reference Count: 1
   Modified Time: 2780
   Debug: Off
   Object Name:
   Observers:
     none
   Matrix:
     2 0 0.2
     0.4 2.1 0.6
     0.8 1 3
   Offset: [1, 2, 3]
   Center: [0, 0, 0]
   Translation: [1, 2, 3]
   Inverse:
     0.511486 0.0179469 -0.0376884
     -0.0646088 0.524049 -0.100503
     -0.11486 -0.179469 0.376884
   Singular: 0
   Versor: [ 0, 0, 0, 1 ]
   Scale: [2, 2.1, 3]
   Skew: [0, 0.2, 0.4, 0.6, 0.8, 1]
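Because this is effectively an over-parametrized affine transform, it can always be converted to an AffineTransform by copying the matrix, translation and center, mirroring the Similarity to Affine recipe above (a sketch reusing scale_skew_versor from the cell above):
affine = sitk.AffineTransform(3)
affine.SetMatrix(scale_skew_versor.GetMatrix())
affine.SetTranslation(scale_skew_versor.GetTranslation())
affine.SetCenter(scale_skew_versor.GetCenter())
print_transformation_differences(scale_skew_versor, affine)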
Modify transform center without changing the transformation effect¶
Given a transformation $T_0$ with center $\mathbf{c_0}$ we want to change the center to $\mathbf{c_1}$ without changing the transformation's effect. That is $\forall\mathbf{x},\;T_0(\mathbf{x})=T_1(\mathbf{x})$.
With some simple arithmetic we see that for $T_1$ we need to set:
- $A = A_0$
- $\mathbf{c}=\mathbf{c_1}$
- $\mathbf{t}=A(\mathbf{c_1}-\mathbf{c_0}) + \mathbf{t_0} + \mathbf{c_0}- \mathbf{c_1}$
old_translation = np.array(rigid_euler.GetTranslation())
old_matrix = np.array(rigid_euler.GetMatrix()).reshape((3, 3))
old_center = np.array(rigid_euler.GetCenter())
rigid_euler2 = sitk.Euler3DTransform()
new_center = np.array([2, 4, 8])
new_translation = (
old_translation + old_center + old_matrix.dot(new_center - old_center) - new_center
)
rigid_euler2.SetMatrix(old_matrix.ravel())
rigid_euler2.SetTranslation(new_translation.tolist())
rigid_euler2.SetCenter(new_center.tolist())
pnt = [16, 32, 64]
print(rigid_euler.TransformPoint(pnt))
print(rigid_euler2.TransformPoint(pnt))
(169.0, 17.999999999999996, 67.0)
(169.0, 18.000000000000004, 67.0)
Bounded Transformations¶
SimpleITK supports two types of bounded non-rigid transformations, BSplineTransform (sparse representation) and DisplacementFieldTransform (dense representation).
Transforming a point that is outside the bounds will return the original point, i.e. the transformation acts as the identity there.
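A minimal sketch of the identity behavior outside the bounds (the field values and domain below are arbitrary; note that the constructor clears the input image, as discussed in the summary):
field = sitk.Image([2, 2], sitk.sitkVectorFloat64)
for i in range(2):
    for j in range(2):
        field[i, j] = (0.5, 0.5)
# Default origin (0,0) and unit spacing, so the transform's domain is [0,1]x[0,1].
tx = sitk.DisplacementFieldTransform(field)
print(tx.TransformPoint((0.5, 0.5)))  # inside the bounds, displaced to (1.0, 1.0)
print(tx.TransformPoint((10.0, 10.0)))  # outside the bounds, returned unchanged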
#
# This function displays the effects of the deformable transformation on a grid of points by scaling the
# initial displacements (either of control points for BSpline or the deformation field itself). It does
# assume that all points are contained in the region [(-2.5,-2.5), (2.5,2.5)].
#
def display_displacement_scaling_effect(
s, original_x_mat, original_y_mat, tx, original_control_point_displacements
):
if tx.GetDimension() != 2:
raise ValueError("display_displacement_scaling_effect only works in 2D")
plt.scatter(
original_x_mat,
original_y_mat,
marker="o",
color="blue",
label="original points",
)
pointsX = []
pointsY = []
tx.SetParameters(s * original_control_point_displacements)
for index, value in np.ndenumerate(original_x_mat):
px, py = tx.TransformPoint((value, original_y_mat[index]))
pointsX.append(px)
pointsY.append(py)
plt.scatter(pointsX, pointsY, marker="^", color="red", label="transformed points")
plt.legend(loc=(0.25, 1.01))
plt.xlim((-2.5, 2.5))
plt.ylim((-2.5, 2.5))
BSpline¶
Using a sparse set of control points to control a free form deformation. Note that the order of parameters to the transformation is $[x_0\ldots x_N,y_0\ldots y_N, z_0\ldots z_N]$ for $N$ control points.
To configure this transformation type we need to specify its bounded domain and the parameters for the control points, i.e. the incremental shifts from the original grid positions. This can be done either explicitly, by specifying the set of parameters defining the domain and the control point displacements one by one, or by using a set of coefficient images that encode all of this information in a more compact manner.
The next two code cells illustrate these two options.
# Create the transformation (when working with images it is easier to use the BSplineTransformInitializer function
# or its object oriented counterpart BSplineTransformInitializerFilter).
dimension = 2
spline_order = 3
direction_matrix_row_major = [1.0, 0.0, 0.0, 1.0] # identity, mesh is axis aligned
origin = [-1.0, -1.0]
domain_physical_dimensions = [2, 2]
mesh_size = [4, 3]
bspline = sitk.BSplineTransform(dimension, spline_order)
bspline.SetTransformDomainOrigin(origin)
bspline.SetTransformDomainDirection(direction_matrix_row_major)
bspline.SetTransformDomainPhysicalDimensions(domain_physical_dimensions)
bspline.SetTransformDomainMeshSize(mesh_size)
# Random displacement of the control points, specifying the x and y
# displacements separately allows us to play with these parameters,
# just multiply one of them with zero to see the effect.
x_displacement = np.random.random(len(bspline.GetParameters()) // 2)
y_displacement = np.random.random(len(bspline.GetParameters()) // 2)
original_control_point_displacements = np.concatenate([x_displacement, y_displacement])
bspline.SetParameters(original_control_point_displacements)
# Apply the BSpline transformation to a grid of points
# starting the point set exactly at the origin of the BSpline mesh is problematic as
# these points are considered outside the transformation's domain.
# Remove the epsilon below and see what happens.
numSamplesX = 10
numSamplesY = 20
coordsX = np.linspace(
origin[0] + np.finfo(float).eps,
origin[0] + domain_physical_dimensions[0],
numSamplesX,
)
coordsY = np.linspace(
origin[1] + np.finfo(float).eps,
origin[1] + domain_physical_dimensions[1],
numSamplesY,
)
XX, YY = np.meshgrid(coordsX, coordsY)
interact(
display_displacement_scaling_effect,
s=(-1.5, 1.5),
original_x_mat=fixed(XX),
original_y_mat=fixed(YY),
tx=fixed(bspline),
original_control_point_displacements=fixed(original_control_point_displacements),
);
We next define the same BSpline transformation using a set of coefficient images. Note that to compare the parameter values for the two transformations we need to scale the values in the new transformation using the scale value used in the GUI above.
control_point_number = [sz + spline_order for sz in mesh_size]
num_parameters_per_axis = np.prod(control_point_number)
coefficient_images = []
for i in range(dimension):
    coefficient_image = sitk.GetImageFromArray(
        original_control_point_displacements[
            i * num_parameters_per_axis : (i + 1) * num_parameters_per_axis
        ].reshape(control_point_number)
    )
    coefficient_image.SetOrigin(origin)
    coefficient_image.SetSpacing(
        [
            sz / (cp - 1)
            for cp, sz in zip(control_point_number, domain_physical_dimensions)
        ]
    )
    coefficient_image.SetDirection(direction_matrix_row_major)
    coefficient_images.append(coefficient_image)
bspline2 = sitk.BSplineTransform(coefficient_images, spline_order)
# Note that the scale value is left intentionally blank: set the scale value based on the slider value in the GUI above.
# You will get an error when executing the cell if a value is not provided.
scale_factor_from_gui =
print(np.array(bspline.GetParameters()) - np.array(bspline2.GetParameters())*scale_factor_from_gui)
  Cell In[20], line 16
    scale_factor_from_gui =
                            ^
SyntaxError: invalid syntax
DisplacementField¶
A dense set of vectors representing the displacement inside the given domain. The most generic representation of a transformation.
# Create the displacement field.
# When working with images the safer thing to do is use the image based constructor,
# sitk.DisplacementFieldTransform(my_image), all the fixed parameters will be set correctly and the displacement
# field is initialized using the vectors stored in the image. SimpleITK requires that the image's pixel type be
# sitk.sitkVectorFloat64.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10, 20]
field_origin = [-1.0, -1.0]
field_spacing = [2.0 / 9.0, 2.0 / 19.0]
field_direction = [1, 0, 0, 1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list
displacement.SetFixedParameters(
field_size + field_origin + field_spacing + field_direction
)
# Set the interpolator, either sitkLinear which is default or nearest neighbor
displacement.SetInterpolator(sitk.sitkNearestNeighbor)
originalDisplacements = np.random.random(len(displacement.GetParameters()))
displacement.SetParameters(originalDisplacements)
coordsX = np.linspace(
field_origin[0],
field_origin[0] + (field_size[0] - 1) * field_spacing[0],
field_size[0],
)
coordsY = np.linspace(
field_origin[1],
field_origin[1] + (field_size[1] - 1) * field_spacing[1],
field_size[1],
)
XX, YY = np.meshgrid(coordsX, coordsY)
interact(
display_displacement_scaling_effect,
s=(-1.5, 1.5),
original_x_mat=fixed(XX),
original_y_mat=fixed(YY),
tx=fixed(displacement),
original_control_point_displacements=fixed(originalDisplacements),
);
Displacement field transform created from an image. Remember that SimpleITK will clear the image you provide, as shown in the cell below.
displacement_image = sitk.Image([64, 64], sitk.sitkVectorFloat64)
# The only point that has any displacement is (0,0)
displacement = (0.5, 0.5)
displacement_image[0, 0] = displacement
print("Original displacement image size: " + point2str(displacement_image.GetSize()))
displacement_field_transform = sitk.DisplacementFieldTransform(displacement_image)
print(
"After using the image to create a transform, displacement image size: "
+ point2str(displacement_image.GetSize())
)
# Check that the displacement field transform does what we expect.
print(
f"Expected result: {str(displacement)}\nActual result:{displacement_field_transform.TransformPoint((0,0))}"
)
Original displacement image size: 64.0 64.0
After using the image to create a transform, displacement image size: 0.0 0.0
Expected result: (0.5, 0.5)
Actual result: (0.5, 0.5)
Inverting bounded transforms¶
In SimpleITK we cannot directly invert a BSpline transform. Luckily there are several ways to invert a displacement field transform, and all transformations can be readily converted to a displacement field. Note though that representing a transformation as a deformation field is an approximation of the original transformation; the consistency of the representation depends on the smoothness of the original transformation and on the sampling rate (spacing) of the deformation field.
The relevant classes are listed below.
Options for inverting displacement field:
- InvertDisplacementFieldImageFilter
- InverseDisplacementFieldImageFilter
- IterativeInverseDisplacementFieldImageFilter
Note: The methods used to invert a displacement field make assumptions with respect to the function's smoothness and continuity, and will fail to yield a valid result if these assumptions are not met. For example, an affine transformation representing a reflection is invertible, but inverting a deformation field representing this transformation will not yield the desired inverse.
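The following sketch demonstrates the reflection caveat numerically (the domain, size and spacing values are arbitrary): convert a 2D reflection to a displacement field, invert the field, and measure the there-and-back error, which, per the note above, will not be near zero.
reflection = sitk.AffineTransform(2)
reflection.SetMatrix([-1.0, 0.0, 0.0, 1.0])  # reflection about the y axis
reflection_field = sitk.TransformToDisplacementField(
    reflection,
    outputPixelType=sitk.sitkVectorFloat64,
    size=[21, 21],
    outputOrigin=[-1.0, -1.0],
    outputSpacing=[0.1, 0.1],
)
reflection_inverse = sitk.DisplacementFieldTransform(
    sitk.InvertDisplacementField(reflection_field)
)
pnt2d = (0.5, 0.2)
there_and_back2d = reflection_inverse.TransformPoint(reflection.TransformPoint(pnt2d))
# A valid inverse would return (0.5, 0.2).
print(np.linalg.norm(np.array(pnt2d) - np.array(there_and_back2d)))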
In the next cell we invert the BSpline transform we worked with above.
# Convert the BSpline transform to a displacement field
physical_size = bspline.GetTransformDomainPhysicalDimensions()
# The deformation field spacing affects the accuracy of the transform approximation,
# so we set it here to 0.1mm in all directions.
output_spacing = [0.1] * bspline.GetDimension()
output_size = [
int(phys_sz / spc + 1) for phys_sz, spc in zip(physical_size, output_spacing)
]
displacement_field_transform = sitk.DisplacementFieldTransform(
sitk.TransformToDisplacementField(
bspline,
outputPixelType=sitk.sitkVectorFloat64,
size=output_size,
outputOrigin=bspline.GetTransformDomainOrigin(),
outputSpacing=output_spacing,
outputDirection=bspline.GetTransformDomainDirection(),
)
)
# Arbitrary point to evaluate the consistency of the two representations.
# Change the value for the "output_spacing" above to evaluate its effect
# on the transformation representation consistency.
pnt = [0.4, -0.2]
original_transformed = np.array(bspline.TransformPoint(pnt))
secondary_transformed = np.array(displacement_field_transform.TransformPoint(pnt))
print(f"Original transformation result: {original_transformed}")
print(f"Deformaiton field transformation result: {secondary_transformed}")
print(
f"Difference between transformed points is: {np.linalg.norm(original_transformed - secondary_transformed)}"
)
# Invert a displacement field transform
displacement_image = displacement_field_transform.GetDisplacementField()
bspline_inverse_displacement = sitk.DisplacementFieldTransform(
sitk.InvertDisplacementField(
displacement_image,
maximumNumberOfIterations=20,
maxErrorToleranceThreshold=0.01,
meanErrorToleranceThreshold=0.0001,
enforceBoundaryCondition=True,
)
)
# Transform the point using the original BSpline transformation and then back
# via the displacement field inverse.
there_and_back = np.array(
bspline_inverse_displacement.TransformPoint(bspline.TransformPoint(pnt))
)
print(f"Original point: {pnt}")
print(f"There and back point: {there_and_back}")
print(
f"Difference between original and there-and-back points: {np.linalg.norm(pnt - there_and_back)}"
)
Original transformation result: [ 0.4 -0.2]
Deformation field transformation result: [ 0.4 -0.2]
Difference between transformed points is: 0.0
Original point: [0.4, -0.2]
There and back point: [ 0.4 -0.2]
Difference between original and there-and-back points: 0.0
CompositeTransform¶
This class represents a composition of transformations, multiple transformations applied one after the other.
The choice of whether to use a composite transformation or compose transformations on your own has subtle differences in the registration framework.
Below we represent the composite transformation $T_{affine}(T_{rigid}(x))$ in two ways: (1) use a composite transformation to contain the two; (2) combine the two into a single affine transformation. We can use both as initial transforms (SetInitialTransform) for the registration framework (ImageRegistrationMethod). The difference is that in the former case the optimized parameters belong to the rigid transformation and in the latter they belong to the combined affine transformation.
# Create a composite transformation: T_affine(T_rigid(x)).
rigid_center = (100, 100, 100)
theta_x = 0.0
theta_y = 0.0
theta_z = np.pi / 2.0
rigid_translation = (1, 2, 3)
rigid_euler = sitk.Euler3DTransform(
rigid_center, theta_x, theta_y, theta_z, rigid_translation
)
affine_center = (20, 20, 20)
affine_translation = (5, 6, 7)
# Matrix is represented as a vector-like data in row major order.
affine_matrix = np.random.random(9)
affine = sitk.AffineTransform(affine_matrix, affine_translation, affine_center)
# Using the composite transformation we just add them in (stack based, first in - last applied).
composite_transform = sitk.CompositeTransform(affine)
composite_transform.AddTransform(rigid_euler)
# Create a single transform manually. This is a recipe for composing any two global transformations
# into an affine transformation, T_0(T_1(x)):
# A = A0*A1
# c = c1
# t = A0*[t1+c1-c0] + t0+c0-c1
A0 = np.asarray(affine.GetMatrix()).reshape(3, 3)
c0 = np.asarray(affine.GetCenter())
t0 = np.asarray(affine.GetTranslation())
A1 = np.asarray(rigid_euler.GetMatrix()).reshape(3, 3)
c1 = np.asarray(rigid_euler.GetCenter())
t1 = np.asarray(rigid_euler.GetTranslation())
combined_mat = np.dot(A0, A1)
combined_center = c1
combined_translation = np.dot(A0, t1 + c1 - c0) + t0 + c0 - c1
combined_affine = sitk.AffineTransform(
combined_mat.flatten(), combined_translation, combined_center
)
# Check if the two transformations are equivalent.
print("Apply the two transformations to the same point cloud:")
print("\t", end="")
print_transformation_differences(composite_transform, combined_affine)
print("Transform parameters:")
print("\tComposite transform: " + point2str(composite_transform.GetParameters(), 2))
print("\tCombined affine: " + point2str(combined_affine.GetParameters(), 2))
print("Fixed parameters:")
print(
"\tComposite transform: " + point2str(composite_transform.GetFixedParameters(), 2)
)
print("\tCombined affine: " + point2str(combined_affine.GetFixedParameters(), 2))
Apply the two transformations to the same point cloud:
    CompositeTransform-AffineTransform: minDifference: 0.00 maxDifference: 0.00
Transform parameters:
    Composite transform: 0.00 0.00 1.57 1.00 2.00 3.00
    Combined affine: 0.58 -0.54 0.96 0.78 -0.07 0.59 0.24 -0.63 0.81 96.50 44.66 65.85
Fixed parameters:
    Composite transform: 100.00 100.00 100.00 0.00
    Combined affine: 100.00 100.00 100.00
When a composite transformation is comprised of global transformations we can combine all of them into a single affine transformation; this is a generalization of the operation shown in the cell above.
def composite2affine(composite_transform, result_center=None):
"""
Combine all of the composite transformation's contents to form an equivalent affine transformation.
Args:
composite_transform (SimpleITK.CompositeTransform): Input composite transform which contains only
global transformations, possibly nested.
result_center (tuple,list): The desired center parameter for the resulting affine transformation.
If None, then set to [0,...]. This can be any arbitrary value, as it is
possible to change the transform center without changing the transformation
effect.
Returns:
SimpleITK.AffineTransform: Affine transformation that has the same effect as the input composite_transform.
"""
# Flatten the copy of the composite transform, so no nested composites.
flattened_composite_transform = sitk.CompositeTransform(composite_transform)
flattened_composite_transform.FlattenTransform()
tx_dim = flattened_composite_transform.GetDimension()
A = np.eye(tx_dim)
c = np.zeros(tx_dim) if result_center is None else result_center
t = np.zeros(tx_dim)
for i in range(flattened_composite_transform.GetNumberOfTransforms() - 1, -1, -1):
curr_tx = flattened_composite_transform.GetNthTransform(i).Downcast()
# The TranslationTransform interface is different from other
# global transformations.
if curr_tx.GetTransformEnum() == sitk.sitkTranslation:
A_curr = np.eye(tx_dim)
t_curr = np.asarray(curr_tx.GetOffset())
c_curr = np.zeros(tx_dim)
else:
A_curr = np.asarray(curr_tx.GetMatrix()).reshape(tx_dim, tx_dim)
c_curr = np.asarray(curr_tx.GetCenter())
# Some global transformations do not have a translation
# (e.g. ScaleTransform, VersorTransform)
get_translation = getattr(curr_tx, "GetTranslation", None)
if get_translation is not None:
t_curr = np.asarray(get_translation())
else:
t_curr = np.zeros(tx_dim)
A = np.dot(A_curr, A)
t = np.dot(A_curr, t + c - c_curr) + t_curr + c_curr - c
return sitk.AffineTransform(A.flatten(), t, c)
# Create a nested composite transformation using the one from the
# previous cell and add a scale and a translation.
composite_transform.AddTransform(composite_transform)
composite_transform.AddTransform(sitk.ScaleTransform(3, [1.2, 1.4, 2.0]))
composite_transform.AddTransform(sitk.TranslationTransform(3, [1, 2, 3]))
# Get the corresponding affine transformation
simplified_composite = composite2affine(
composite_transform, result_center=[100, 200, 300]
)
# Check if the two transformations are equivalent.
print("Apply the two transformations to the same point cloud:")
print("\t", end="")
print_transformation_differences(composite_transform, simplified_composite)
print("Transform parameters:")
print("\tComposite transform: " + point2str(composite_transform.GetParameters(), 2))
print("\tCombined affine: " + point2str(simplified_composite.GetParameters(), 2))
print("Fixed parameters:")
print(
"\tComposite transform: " + point2str(composite_transform.GetFixedParameters(), 2)
)
print("\tCombined affine: " + point2str(simplified_composite.GetFixedParameters(), 2))
# Why doesn't the composite_transform seem to have fixed parameters?
# The last, n'th, transformation in the composite_transform is a TranslationTransform and that has no fixed parameters.
Apply the two transformations to the same point cloud:
    CompositeTransform-AffineTransform: minDifference: 0.00 maxDifference: 0.00
Transform parameters:
    Composite transform: 1.00 2.00 3.00
    Combined affine: 0.19 -1.25 2.06 0.66 -1.11 2.39 -0.19 -0.84 1.04 554.60 526.82 64.57
Fixed parameters:
    Composite transform:
    Combined affine: 100.00 200.00 300.00
Composite transforms enable a combination of a global transformation with multiple local/bounded transformations. This is useful if we want to apply deformations only in regions that deform, while other regions are only affected by the global transformation.
The following code illustrates this, where the whole region is translated and subregions have different deformations.
# Global transformation.
translation = sitk.TranslationTransform(2, (1.0, 0.0))
# Displacement in region 1.
displacement1 = sitk.DisplacementFieldTransform(2)
field_size = [10, 20]
field_origin = [-1.0, -1.0]
field_spacing = [2.0 / 9.0, 2.0 / 19.0]
field_direction = [1, 0, 0, 1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement1.SetFixedParameters(
field_size + field_origin + field_spacing + field_direction
)
displacement1.SetParameters(np.ones(len(displacement1.GetParameters())))
# Displacement in region 2.
displacement2 = sitk.DisplacementFieldTransform(2)
field_size = [10, 20]
field_origin = [1.0, -3]
field_spacing = [2.0 / 9.0, 2.0 / 19.0]
field_direction = [1, 0, 0, 1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement2.SetFixedParameters(
field_size + field_origin + field_spacing + field_direction
)
displacement2.SetParameters(-1.0 * np.ones(len(displacement2.GetParameters())))
# Composite transform which applies the global and local transformations.
composite = sitk.CompositeTransform([translation, displacement1, displacement2])
# Apply the composite transformation to points in ([-1,-3],[3,1]) and
# display the deformation using a quiver plot.
# Generate points.
numSamplesX = 10
numSamplesY = 10
coordsX = np.linspace(-1.0, 3.0, numSamplesX)
coordsY = np.linspace(-3.0, 1.0, numSamplesY)
XX, YY = np.meshgrid(coordsX, coordsY)
# Transform points and compute deformation vectors.
pointsX = np.zeros(XX.shape)
pointsY = np.zeros(XX.shape)
for index, value in np.ndenumerate(XX):
px, py = composite.TransformPoint((value, YY[index]))
pointsX[index] = px - value
pointsY[index] = py - YY[index]
plt.quiver(XX, YY, pointsX, pointsY);
Inverting Composite Transform¶
When a CompositeTransform is:
- only comprised of global transformations, all we need to do is call its GetInverse method.
- comprised of both global and bounded transformations, the GetInverse method will fail because inverting the bounded transformations requires additional information which is not available as part of the transformation.
The next cell shows how to invert a CompositeTransform for the generic case.
def invert_composite_transform(
original_transform, displacement_field_inverter, grid_spacing=None
):
"""
Invert the given CompositeTransform. Note that the original
transform is modified, flattened. We do not create a copy
of the original because of the large memory usage associated
with the bounded transformations. If the caller wants to retain
the original nested structure of the CompositeTransform it is up
to them to create a copy prior to calling this method.
Args:
original_transform: A CompositeTransform containing global transforms,
bounded transforms and nested composite transforms.
displacement_field_inverter: Configured object for inverting a displacement
field. One of InvertDisplacementFieldImageFilter,
InverseDisplacementFieldImageFilter,
IterativeInverseDisplacementFieldImageFilter.
grid_spacing: The grid spacing to use for approximating internal BSplineTransforms.
Finer grids provide better approximation at a cost of a larger
memory footprint.
Return:
CompositeTransform which is the inverse of the given one.
"""
inverted_transform_list = []
original_transform.FlattenTransform()
for i in range(original_transform.GetNumberOfTransforms() - 1, -1, -1):
tx = original_transform.GetNthTransform(i)
ttype = tx.GetTransformEnum()
if ttype is sitk.sitkDisplacementField:
inverted_transform_list.append(
sitk.DisplacementFieldTransform(
displacement_field_inverter.Execute(
sitk.DisplacementFieldTransform(tx).GetDisplacementField()
)
)
)
elif ttype is sitk.sitkBSplineTransform:
# Convert the BSpline transform to a displacement field and then invert that transform
physical_size = tx.GetTransformDomainPhysicalDimensions()
grid_size = [
int(phys_sz / spc + 1)
for phys_sz, spc in zip(physical_size, grid_spacing)
]
displacement_field_image = sitk.TransformToDisplacementField(
tx,
outputPixelType=sitk.sitkVectorFloat64,
size=grid_size,
outputOrigin=tx.GetTransformDomainOrigin(),
outputSpacing=grid_spacing,
outputDirection=tx.GetTransformDomainDirection(),
)
inverted_transform_list.append(
sitk.DisplacementFieldTransform(
displacement_field_inverter.Execute(displacement_field_image)
)
)
else:
inverted_transform_list.append(tx.GetInverse())
return sitk.CompositeTransform(inverted_transform_list)
# inverting a CompositeTransform:
# 1. Select the inversion algorithm and configure it (possibly use default configuration).
# 2. Call the invert_composite_transform function.
df_inverter = sitk.InvertDisplacementFieldImageFilter()
df_inverter.SetMaximumNumberOfIterations(100)
df_inverter.SetEnforceBoundaryCondition(True)
composite_inverse = invert_composite_transform(composite, df_inverter)
# display the inverse composite transform using a quiver plot
pointsX = np.zeros(XX.shape)
pointsY = np.zeros(XX.shape)
for index, value in np.ndenumerate(XX):
px, py = composite_inverse.TransformPoint((value, YY[index]))
pointsX[index] = px - value
pointsY[index] = py - YY[index]
plt.quiver(XX, YY, pointsX, pointsY);
Transform¶
This class represents a generic transform. Underneath the generic facade is one of the actual classes. To access the underlying class object we can call the Downcast method. While this provides us with the actual transform type, we don't know which of the concrete transformation types it is. To find the specific type we can query the transform to obtain its TransformEnum.
anonymous_transform_type = sitk.Transform(sitk.TranslationTransform(2, (1.0, 0.0)))
try:
print(anonymous_transform_type.GetOffset())
except AttributeError:
print("The generic transform does not have this method.")
actual_transform_type = anonymous_transform_type.Downcast()
# Check that the actual transform type is indeed a translation before
# calling a translation specific method.
if actual_transform_type.GetTransformEnum() == sitk.sitkTranslation:
print(actual_transform_type.GetOffset())
The generic transform does not have this method.
(1.0, 0.0)
Writing and Reading¶
SimpleITK.ReadTransform() returns a SimpleITK.Transform. The content of the file can be any of the SimpleITK transformations or a composite (set of transformations).
The transformation file formats supported by SimpleITK include .txt, .tfm, .xfm, .hdf and .mat. The first three are ASCII based formats and are more appropriate for saving global domain transformations, which are also easily understood by a human reader due to their limited number of parameters. The latter two, .hdf and .mat, are binary formats and are more appropriate for saving bounded domain transformations, as those have a large number of parameters which are better saved in a binary file (faster IO) and are not readily understood by a human reader in any case.
Note: Writing of nested composite transforms is not supported, you will need to "flatten" the transform before writing it to file.
# Create a 2D rigid transformation, write it to disk and read it back.
basic_transform = sitk.Euler2DTransform()
basic_transform.SetTranslation((1, 2))
basic_transform.SetAngle(np.pi / 2)
full_file_name = os.path.join(OUTPUT_DIR, "euler2D.tfm")
sitk.WriteTransform(basic_transform, full_file_name)
read_result = sitk.ReadTransform(full_file_name)
print_transformation_differences(basic_transform, read_result)
# Create a composite transform then write and read.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10, 20]
field_origin = [-10.0, -100.0]
field_spacing = [20.0 / (field_size[0] - 1), 200.0 / (field_size[1] - 1)]
field_direction = [1, 0, 0, 1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement.SetFixedParameters(
field_size + field_origin + field_spacing + field_direction
)
displacement.SetParameters(np.random.random(len(displacement.GetParameters())))
composite_transform = sitk.CompositeTransform([basic_transform, displacement])
full_file_name = os.path.join(OUTPUT_DIR, "composite.tfm")
sitk.WriteTransform(composite_transform, full_file_name)
read_result = sitk.ReadTransform(full_file_name)
print_transformation_differences(composite_transform, read_result)
Euler2DTransform-Euler2DTransform: minDifference: 0.00 maxDifference: 0.00
CompositeTransform-CompositeTransform: minDifference: 0.00 maxDifference: 0.00
x_translation = sitk.TranslationTransform(2, [1, 0])
y_translation = sitk.TranslationTransform(2, [0, 1])
# Create composite transform with the x_translation repeated 3 times
composite_transform1 = sitk.CompositeTransform([x_translation] * 3)
# Create a nested composite transform
composite_transform = sitk.CompositeTransform([y_translation, composite_transform1])
full_file_name = os.path.join(OUTPUT_DIR, "composite.tfm")
# We cannot write nested composite transformations; attempting to do so throws an
# exception, so we flatten the transform (unravel the nested part).
try:
print(
f"Nested composite transform contains {composite_transform.GetNumberOfTransforms()} transforms."
)
sitk.WriteTransform(composite_transform, full_file_name)
except RuntimeError:
print("Failed writting nested composite transform.")
composite_transform.FlattenTransform()
print(
f"Nested composite transform after flattening contains {composite_transform.GetNumberOfTransforms()} transforms."
)
sitk.WriteTransform(composite_transform, full_file_name)
Nested composite transform contains 2 transforms.
Failed writing nested composite transform.
Nested composite transform after flattening contains 4 transforms.
In the next cells we create a displacement field whose size, 512x512x100, is nominal for CT/MR images. We then save it using a text format and a binary format, illustrating that IO is orders of magnitude faster when using the binary format.
displacement_field_transform = sitk.DisplacementFieldTransform(
sitk.GetImageFromArray(np.random.random([100, 512, 512, 3]))
)
%%timeit -r1 -n1
sitk.WriteTransform(
displacement_field_transform, os.path.join(OUTPUT_DIR, "deformation.tfm")
)
13.3 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
%%timeit -r1 -n1
sitk.WriteTransform(
displacement_field_transform, os.path.join(OUTPUT_DIR, "deformation.hdf")
)
603 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)