Grangeat-based 2D/3D image registration
Namespaces | Classes | Typedefs | Enumerations | Functions
reg23 Namespace Reference

Namespaces

namespace  autograd
 
namespace  ops
 
namespace  structs
 

Classes

class  CUDATexture2D
 
class  CUDATexture3D
 
struct  GridSample3D
 
struct  Linear
 A functor class that represents a linear transformation: intercept + gradient * x. More...
 
struct  Linear2
 A functor class that represents a linear transformation of two variables: intercept + gradient1 * x + gradient2 * y. More...
 
struct  ProjectDRR
 
struct  ProjectDRRCuboidMask
 
struct  Radon2D
 
struct  Radon3D
 
struct  ResampleSinogram3D
 
struct  Similarity
 
class  SinogramClassic3D
 A 3D texture stored for access by the CPU, structured for storing values over the surface of S^2 on a regular (phi, theta) grid. More...
 
class  SinogramHEALPix
 A 3D texture stored for access by the CPU, structured for storing an even distribution of values over the surface of S^2, according to the HEALPix mapping. More...
 
class  Texture
 A parent texture class containing template data and functionality. More...
 
class  Texture2DCPU
 A 2D texture stored for access by the CPU. More...
 
class  Texture3DCPU
 A 3D texture stored for access by the CPU. More...
 
class  Vec
 A simple vector class derived from std::array<T, N>, providing overrides for all useful operators. More...
 

Typedefs

using CommonData = GridSample3D< Texture3DCPU >::CommonData
 

Enumerations

enum class  TextureAddressMode { TextureAddressMode::ZERO , TextureAddressMode::WRAP }
 

Functions

at::Tensor GridSample3D_CPU (const at::Tensor &input, const at::Tensor &grid, const std::string &addressModeX, const std::string &addressModeY, const std::string &addressModeZ, c10::optional< at::Tensor > out)
 Sample the given 3D input tensor at the positions given in grid according to the given address mode using bilinear interpolation. This implementation is single-threaded.
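The sampling behaviour this brief describes can be sketched in plain C++. This is a simplified single-channel illustration under stated assumptions, not the library's implementation: ZERO drops out-of-range taps (so they contribute nothing), WRAP wraps indices periodically, and the eight neighbouring voxels are blended with trilinear weights (the mode PyTorch labels 'bilinear' even for 3D inputs).

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Resolve a 1D index under one address mode. WRAP wraps periodically;
// ZERO reports failure for out-of-range indices so the tap contributes 0.
bool resolve(long i, long size, bool wrap, long &out) {
    if (wrap) { out = ((i % size) + size) % size; return true; }
    if (i < 0 || i >= size) return false;
    out = i;
    return true;
}

// Trilinear sample of a Z*Y*X volume at a continuous voxel coordinate.
double sample3d(const std::vector<double> &vol, long X, long Y, long Z,
                double x, double y, double z,
                bool wrapX, bool wrapY, bool wrapZ) {
    const long x0 = (long)std::floor(x), y0 = (long)std::floor(y),
               z0 = (long)std::floor(z);
    const double fx = x - x0, fy = y - y0, fz = z - z0;
    double acc = 0.0;
    for (int dz = 0; dz < 2; ++dz)
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx) {
                long xi, yi, zi;
                if (!resolve(x0 + dx, X, wrapX, xi)) continue;  // ZERO mode
                if (!resolve(y0 + dy, Y, wrapY, yi)) continue;
                if (!resolve(z0 + dz, Z, wrapZ, zi)) continue;
                const double w = (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy) *
                                 (dz ? fz : 1 - fz);
                acc += w * vol[(zi * Y + yi) * X + xi];
            }
    return acc;
}
```

For a 2x1x1 volume holding {2, 4}, sampling at x = 0.5 blends the two voxels equally; at x = -0.5 the ZERO mode yields half the first voxel, while WRAP blends in the last voxel instead.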
 
at::Tensor ProjectDRR_CPU (const at::Tensor &volume, const at::Tensor &voxelSpacing, const at::Tensor &homographyMatrixInverse, double sourceDistance, int64_t outputWidth, int64_t outputHeight, const at::Tensor &outputOffset, const at::Tensor &detectorSpacing)
 Generate a DRR from the given volume at the given transformation.
 
at::Tensor ProjectDRR_backward_CPU (const at::Tensor &volume, const at::Tensor &voxelSpacing, const at::Tensor &homographyMatrixInverse, double sourceDistance, int64_t outputWidth, int64_t outputHeight, const at::Tensor &outputOffset, const at::Tensor &detectorSpacing, const at::Tensor &dLossDDRR)
 Evaluate the derivative of some scalar loss that is a function of a DRR projected from the given volume at the given transformation, with respect to the inverse homography matrix.
 
at::Tensor ProjectDRRCuboidMask_CPU (const at::Tensor &volumeSize, const at::Tensor &voxelSpacing, const at::Tensor &homographyMatrixInverse, double sourceDistance, int64_t outputWidth, int64_t outputHeight, const at::Tensor &outputOffset, const at::Tensor &detectorSpacing)
 Calculate the distances of the intersections between the DRR-generation rays and the domain of the volume data.
 
at::Tensor Radon2D_CPU (const at::Tensor &image, const at::Tensor &imageSpacing, const at::Tensor &phiValues, const at::Tensor &rValues, int64_t samplesPerLine)
 Compute an approximation of the Radon transform of the given 2D image.
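The approximation in question is a sampled line integral: for each (phi, r) pair, the image is sampled at samplesPerLine points along the line with unit normal (cos phi, sin phi) at signed distance r from the origin, and the samples are summed. A minimal sketch, with the image abstracted as a continuous function to sidestep interpolation (the parameterisation is an assumption, not the library's exact convention):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Approximate the 2D Radon transform R[f](phi, r): the integral of f along
// the line with unit normal (cos phi, sin phi) at signed distance r from
// the origin, via a midpoint Riemann sum over t in [-halfLength, halfLength].
double radon2d(const std::function<double(double, double)> &f,
               double phi, double r, double halfLength, int samplesPerLine) {
    const double dt = 2.0 * halfLength / samplesPerLine;
    const double c = std::cos(phi), s = std::sin(phi);
    double acc = 0.0;
    for (int i = 0; i < samplesPerLine; ++i) {
        const double t = -halfLength + (i + 0.5) * dt;  // midpoint rule
        // Point on the line: r * normal + t * direction, direction = (-s, c).
        acc += f(r * c - t * s, r * s + t * c);
    }
    return acc * dt;
}
```

As a sanity check, the Radon transform of the unit-disc indicator equals the chord length: 2 at r = 0, and 2 * sqrt(1 - r^2) in general, for any phi.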
 
at::Tensor DRadon2DDR_CPU (const at::Tensor &image, const at::Tensor &imageSpacing, const at::Tensor &phiValues, const at::Tensor &rValues, int64_t samplesPerLine)
 Compute the derivative with respect to plane-origin distance of an approximation of the Radon transform of the given 2D image.
 
at::Tensor Radon3D_CPU (const at::Tensor &volume, const at::Tensor &volumeSpacing, const at::Tensor &phiValues, const at::Tensor &thetaValues, const at::Tensor &rValues, int64_t samplesPerDirection)
 Compute an approximation of the Radon transform of the given 3D volume.
 
at::Tensor DRadon3DDR_CPU (const at::Tensor &volume, const at::Tensor &volumeSpacing, const at::Tensor &phiValues, const at::Tensor &thetaValues, const at::Tensor &rValues, int64_t samplesPerDirection)
 Compute the derivative with respect to plane-origin distance of an approximation of the Radon transform of the given 3D volume.
 
at::Tensor ResampleSinogram3D_CPU (const at::Tensor &sinogram3d, const std::string &sinogramType, double rSpacing, const at::Tensor &projectionMatrix, const at::Tensor &phiValues, const at::Tensor &rValues, c10::optional< at::Tensor > out)
 Resample the given 3D sinogram at locations corresponding to the given 2D sinogram grid (phiValues, rValues), according to the 2D-3D image registration method based on Grangeat's relation.
 
std::tuple< at::Tensor, double, double, double, double, double > NormalisedCrossCorrelation_CPU (const at::Tensor &a, const at::Tensor &b)
 Compute the normalised cross-correlation of the two given tensors. Additionally returns intermediate quantities useful for evaluating the backward pass.
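The quantity being computed is the standard normalised cross-correlation. A plain C++ sketch of the forward value (which five intermediate scalars the library returns for the backward pass is not specified here; the running sums below are a plausible candidate, but that is an assumption):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Normalised cross-correlation of two equally-sized arrays:
// NCC(a, b) = cov(a, b) / (std(a) * std(b)), in [-1, 1].
// Accumulates the five running sums from which mean, variance and
// covariance all follow.
double ncc(const std::vector<double> &a, const std::vector<double> &b) {
    const double n = (double)a.size();
    double sa = 0, sb = 0, saa = 0, sbb = 0, sab = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        sa += a[i];
        sb += b[i];
        saa += a[i] * a[i];
        sbb += b[i] * b[i];
        sab += a[i] * b[i];
    }
    const double cov = sab / n - (sa / n) * (sb / n);
    const double va = saa / n - (sa / n) * (sa / n);
    const double vb = sbb / n - (sb / n) * (sb / n);
    return cov / std::sqrt(va * vb);
}
```

Perfectly linearly correlated inputs give 1, anti-correlated inputs give -1, which makes NCC a natural similarity metric between a DRR and a fixed image.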
 
template<typename T >
__host__ __device__ Vec< T, 3 > UnflipSphericalCoordinate (const Vec< T, 3 > &coordSph)
 'Unflips' the given spherical coordinates so that theta and phi both lie between -pi/2 and pi/2.
 
template<typename T >
__host__ __device__ T Square (const T &x)
 Returns the square of the given value.
 
template<typename T >
__host__ __device__ T Modulo (const T &x, const T &y)
 Modulo operation that respects the sign.
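Assuming "respects the sign" means the Python-style convention (result takes the sign of the divisor, so it lies in [0, y) for positive y), the distinction from std::fmod (whose result takes the sign of the dividend) can be sketched as:

```cpp
#include <cassert>
#include <cmath>

// Sign-respecting modulo: for positive y the result is always in [0, y),
// unlike std::fmod, which returns a negative result for negative x.
double moduloSigned(double x, double y) {
    const double m = std::fmod(x, y);
    return m < 0 ? m + y : m;
}
```

For example, std::fmod(-1, 3) is -1, whereas this version returns 2, which is the behaviour needed when wrapping angles or texture coordinates.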
 
template<typename T >
__host__ __device__ T Sign (const T &x)
 Returns the sign of the given value.
 
__host__ at::Tensor GridSample3D_CUDA (const at::Tensor &input, const at::Tensor &grid, const std::string &addressModeX, const std::string &addressModeY, const std::string &addressModeZ, c10::optional< at::Tensor > out)
 An implementation of reg23::GridSample3D_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor ProjectDRR_CUDA (const at::Tensor &volume, const at::Tensor &voxelSpacing, const at::Tensor &homographyMatrixInverse, double sourceDistance, int64_t outputWidth, int64_t outputHeight, const at::Tensor &outputOffset, const at::Tensor &detectorSpacing)
 An implementation of reg23::ProjectDRR_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor ProjectDRR_backward_CUDA (const at::Tensor &volume, const at::Tensor &voxelSpacing, const at::Tensor &homographyMatrixInverse, double sourceDistance, int64_t outputWidth, int64_t outputHeight, const at::Tensor &outputOffset, const at::Tensor &detectorSpacing, const at::Tensor &dLossDDRR)
 An implementation of reg23::ProjectDRR_backward_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor ProjectDRRsBatched_CUDA (const at::Tensor &volume, const at::Tensor &voxelSpacing, const at::Tensor &invHMatrices, double sourceDistance, int64_t outputWidth, int64_t outputHeight, const at::Tensor &outputOffset, const at::Tensor &detectorSpacing)
 An implementation similar to reg23::ProjectDRR_CUDA that evaluates projections for multiple transformations in parallel.
 
__host__ at::Tensor ProjectDRRCuboidMask_CUDA (const at::Tensor &volumeSize, const at::Tensor &voxelSpacing, const at::Tensor &homographyMatrixInverse, double sourceDistance, int64_t outputWidth, int64_t outputHeight, const at::Tensor &outputOffset, const at::Tensor &detectorSpacing)
 An implementation of reg23::ProjectDRRCuboidMask_CPU that uses CUDA parallelisation.
 
template<typename T , std::size_t faceCount>
__host__ __device__ T RayConvexPolyhedronDistance (const std::array< Vec< T, 3 >, faceCount > &facePoints, const std::array< Vec< T, 3 >, faceCount > &faceOutUnitNormals, const Vec< T, 3 > &rayPoint, const Vec< T, 3 > &rayUnitDirection)
 Calculate the length of the intersection between a ray and a convex polyhedron.
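This is the classic half-space clipping computation: each face defines a half-space via a point and an outward unit normal, and the ray's entry and exit parameters are tightened face by face. A self-contained sketch (the parameter order mirrors the signature above; the clipping logic is the standard algorithm, not necessarily the library's exact code):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <limits>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3 &a, const Vec3 &b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Length of the intersection between the ray o + t * d (d a unit vector)
// and a convex polyhedron given as one point and one outward unit normal
// per face, by clipping the parameter interval [tmin, tmax] against each
// half-space.
template <std::size_t faceCount>
double rayPolyhedronDistance(const std::array<Vec3, faceCount> &facePoints,
                             const std::array<Vec3, faceCount> &faceNormals,
                             const Vec3 &o, const Vec3 &d) {
    double tmin = -std::numeric_limits<double>::infinity();
    double tmax = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < faceCount; ++i) {
        const Vec3 &p = facePoints[i], &n = faceNormals[i];
        const Vec3 po{p[0] - o[0], p[1] - o[1], p[2] - o[2]};
        const double denom = dot(d, n);  // < 0: entering, > 0: leaving
        const double num = dot(po, n);
        if (std::fabs(denom) < 1e-12) {
            if (num < 0) return 0.0;  // parallel and outside this face
        } else if (denom < 0) {
            tmin = std::max(tmin, num / denom);
        } else {
            tmax = std::min(tmax, num / denom);
        }
    }
    return std::max(0.0, tmax - tmin);
}
```

With the six faces of the unit cube, an axis-aligned ray through the middle yields length 1, and a ray passing outside the cube yields 0; the cuboid-mask projection above is this computation with the six faces of the volume's domain.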
 
__host__ at::Tensor Radon2D_CUDA (const at::Tensor &image, const at::Tensor &imageSpacing, const at::Tensor &phiValues, const at::Tensor &rValues, int64_t samplesPerLine)
 An implementation of reg23::Radon2D_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor Radon2D_CUDA_V2 (const at::Tensor &image, const at::Tensor &imageSpacing, const at::Tensor &phiValues, const at::Tensor &rValues, int64_t samplesPerLine)
 An implementation of reg23::Radon2D_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor DRadon2DDR_CUDA (const at::Tensor &image, const at::Tensor &imageSpacing, const at::Tensor &phiValues, const at::Tensor &rValues, int64_t samplesPerLine)
 An implementation of reg23::DRadon2DDR_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor Radon3D_CUDA (const at::Tensor &volume, const at::Tensor &volumeSpacing, const at::Tensor &phiValues, const at::Tensor &thetaValues, const at::Tensor &rValues, int64_t samplesPerDirection)
 An implementation of reg23::Radon3D_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor Radon3D_CUDA_V2 (const at::Tensor &volume, const at::Tensor &volumeSpacing, const at::Tensor &phiValues, const at::Tensor &thetaValues, const at::Tensor &rValues, int64_t samplesPerDirection)
 An implementation of reg23::Radon3D_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor DRadon3DDR_CUDA (const at::Tensor &volume, const at::Tensor &volumeSpacing, const at::Tensor &phiValues, const at::Tensor &thetaValues, const at::Tensor &rValues, int64_t samplesPerDirection)
 An implementation of reg23::DRadon3DDR_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor DRadon3DDR_CUDA_V2 (const at::Tensor &volume, const at::Tensor &volumeSpacing, const at::Tensor &phiValues, const at::Tensor &thetaValues, const at::Tensor &rValues, int64_t samplesPerDirection)
 An implementation of reg23::DRadon3DDR_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor ResampleSinogram3D_CUDA (const at::Tensor &sinogram3d, const std::string &sinogramType, double rSpacing, const at::Tensor &projectionMatrix, const at::Tensor &phiValues, const at::Tensor &rValues, c10::optional< at::Tensor > out)
 An implementation of reg23::ResampleSinogram3D_CPU that uses CUDA parallelisation.
 
__host__ at::Tensor ResampleSinogram3DCUDATexture (int64_t sinogram3dTextureHandle, int64_t sinogramWidth, int64_t sinogramHeight, int64_t sinogramDepth, const std::string &sinogramType, double rSpacing, const at::Tensor &projectionMatrix, const at::Tensor &phiValues, const at::Tensor &rValues, c10::optional< at::Tensor > out)
 An implementation of reg23::ResampleSinogram3D_CUDA that takes a handle to a pre-allocated CUDA texture instead of a PyTorch tensor.
 
__host__ std::tuple< at::Tensor, double, double, double, double, double > NormalisedCrossCorrelation_CUDA (const at::Tensor &a, const at::Tensor &b)
 An implementation of reg23::NormalisedCrossCorrelation_CPU that uses CUDA parallelisation.
 
template<std::size_t DIMENSIONALITY>
Vec< TextureAddressMode, DIMENSIONALITY > StringsToAddressModes (const std::array< std::string_view, DIMENSIONALITY > &strings)
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< T, N > operator+ (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise addition
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator+ (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise addition
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator+ (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise addition
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< T, N > operator- (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise subtraction
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator- (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise subtraction
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator- (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise subtraction
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< T, N > operator* (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise multiplication
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator* (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise multiplication
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator* (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise multiplication
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< T, N > operator/ (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise division
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator/ (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise division
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator/ (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise division
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< T, N > operator% (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise modulo
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator% (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise modulo
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< T, N > operator% (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise modulo
 
template<typename T , std::size_t N>
__host__ __device__ constexpr T VecDot (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec dot product
 
template<typename T , std::size_t N1, std::size_t N2>
__host__ __device__ constexpr Vec< Vec< T, N1 >, N2 > VecOuter (const Vec< T, N1 > &lhs, const Vec< T, N2 > &rhs)
 reg23::Vec outer product; returns a column-major matrix of size N1 x N2.
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< bool, N > operator> (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise greater-than
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator> (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise greater-than
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator> (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise greater-than
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< bool, N > operator>= (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise greater-than-or-equal-to
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator>= (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise greater-than-or-equal-to
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator>= (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise greater-than-or-equal-to
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< bool, N > operator< (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise less-than
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator< (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise less-than
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator< (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise less-than
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< bool, N > operator<= (const Vec< T, N > &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise less-than-or-equal-to
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator<= (const Vec< T, N > &lhs, const scalar_t &rhs)
 reg23::Vec element-wise less-than-or-equal-to
 
template<typename T , std::size_t N, typename scalar_t >
__host__ __device__ constexpr Vec< bool, N > operator<= (const scalar_t &lhs, const Vec< T, N > &rhs)
 reg23::Vec element-wise less-than-or-equal-to
 
template<typename T , std::size_t... Ns>
__host__ __device__ constexpr Vec< T,(Ns+...)> VecCat (const Vec< T, Ns > &... vecs)
 reg23::Vec concatenation of any number of vectors
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< T, N+1 > VecCat (const Vec< T, N > &lhs, const T &rhs)
 reg23::Vec concatenation of a vector and a scalar
 
template<typename T , std::size_t N>
__host__ __device__ constexpr Vec< T, N+1 > VecCat (const T &lhs, const Vec< T, N > &rhs)
 reg23::Vec concatenation of a scalar and a vector
 
template<typename T , std::size_t R, std::size_t C>
__host__ __device__ constexpr Vec< T, R > MatMul (const Vec< Vec< T, R >, C > &lhs, const Vec< T, C > &rhs)
 Matrix-vector multiplication of the Vec struct.
 
 PYBIND11_MODULE (TORCH_EXTENSION_NAME, m)
 
 TORCH_LIBRARY (reg23, m)
 
 TORCH_LIBRARY_IMPL (reg23, CPU, m)
 

Typedef Documentation

◆ CommonData

Function Documentation

◆ DRadon3DDR_CUDA_V2()

__host__ at::Tensor reg23::DRadon3DDR_CUDA_V2 ( const at::Tensor &  volume,
const at::Tensor &  volumeSpacing,
const at::Tensor &  phiValues,
const at::Tensor &  thetaValues,
const at::Tensor &  rValues,
int64_t  samplesPerDirection 
)

An implementation of reg23::DRadon3DDR_CPU that uses CUDA parallelisation.

One kernel launch is made per plane integral, and each plane integral approximation is computed by summing samples across multiple threads in logarithmic time.
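The "log-time" summation is a parallel tree reduction: at each step every surviving element accumulates a partner half a stride away, so n samples are combined in O(log n) parallel steps. A serial sketch of the access pattern (my reading of the brief, not the library's kernel code):

```cpp
#include <cassert>
#include <vector>

// Tree reduction: halve the live range each step, with element i
// accumulating element i + stride. In CUDA each iteration of the inner
// loop runs as one parallel step across threads, giving O(log n) depth.
double treeReduce(std::vector<double> v) {
    std::size_t n = v.size();
    while (n > 1) {
        const std::size_t stride = (n + 1) / 2;
        for (std::size_t i = 0; i + stride < n; ++i) v[i] += v[i + stride];
        n = stride;
    }
    return n == 1 ? v[0] : 0.0;
}
```

The result lands in element 0, exactly where a CUDA block-level reduction would leave it before the final write-out.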

◆ PYBIND11_MODULE()

reg23::PYBIND11_MODULE ( TORCH_EXTENSION_NAME  ,
m   
)

◆ Radon2D_CUDA_V2()

__host__ at::Tensor reg23::Radon2D_CUDA_V2 ( const at::Tensor &  image,
const at::Tensor &  imageSpacing,
const at::Tensor &  phiValues,
const at::Tensor &  rValues,
int64_t  samplesPerLine 
)

An implementation of reg23::Radon2D_CPU that uses CUDA parallelisation.

One kernel launch is made per line integral, and each line integral approximation is computed by summing samples across multiple threads in logarithmic time.

◆ Radon3D_CUDA_V2()

__host__ at::Tensor reg23::Radon3D_CUDA_V2 ( const at::Tensor &  volume,
const at::Tensor &  volumeSpacing,
const at::Tensor &  phiValues,
const at::Tensor &  thetaValues,
const at::Tensor &  rValues,
int64_t  samplesPerDirection 
)

An implementation of reg23::Radon3D_CPU that uses CUDA parallelisation.

One kernel launch is made per plane integral, and each plane integral approximation is computed by summing samples across multiple threads in logarithmic time.

◆ TORCH_LIBRARY()

reg23::TORCH_LIBRARY ( reg23  ,
m   
)

◆ TORCH_LIBRARY_IMPL()

reg23::TORCH_LIBRARY_IMPL ( reg23  ,
CPU  ,
m   
)