Array types

This section of the documentation lists all array classes that are available in the various namespaces.

TODO: insert general information about how types are organized here.

Dr.Jit types derive from drjit.ArrayBase and generally do not implement any methods beyond those of the base class, which makes this section rather repetitious.

Scalar array namespace (drjit.scalar)

The scalar backend directly operates on individual floating-point/integer values without the use of parallelization or vectorization.

For example, a drjit.scalar.Array3f instance represents a simple 3D vector with 3 float-valued entries. In the JIT-compiled backends (CUDA, LLVM), the same Array3f type represents an array of 3D vectors participating in a parallel computation.
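To illustrate the component-wise semantics that such a vector type provides, here is a minimal pure-Python sketch. The Vec3 class below is a hypothetical stand-in for illustration only; the real drjit.scalar.Array3f derives from drjit.ArrayBase and supports far more operations.

```python
# Pure-Python sketch of component-wise 3D vector arithmetic, analogous to
# what drjit.scalar.Array3f provides. Vec3 is a hypothetical illustration,
# not part of Dr.Jit.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, other: "Vec3") -> "Vec3":
        # Component-wise addition
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def __mul__(self, s: float) -> "Vec3":
        # Scaling by a scalar
        return Vec3(self.x * s, self.y * s, self.z * s)

a = Vec3(1.0, 2.0, 3.0)
b = Vec3(4.0, 5.0, 6.0)
c = a + b * 2.0  # component-wise arithmetic, as with Array3f
```

In the JIT-compiled backends, the same expression would instead describe this computation for many vectors at once.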

Scalars

drjit.scalar.Bool: type = bool
drjit.scalar.Float16: type = half
drjit.scalar.Float: type = float
drjit.scalar.Float64: type = float
drjit.scalar.Int: type = int
drjit.scalar.Int64: type = int
drjit.scalar.UInt: type = int
drjit.scalar.UInt64: type = int

1D arrays

class drjit.scalar.Array0b

Bases: ArrayBase[Array0b, _Array0bCp, bool, bool, bool, Array0b, Array0b]

class drjit.scalar.Array1b

Bases: ArrayBase[Array1b, _Array1bCp, bool, bool, bool, Array1b, Array1b]

class drjit.scalar.Array2b

Bases: ArrayBase[Array2b, _Array2bCp, bool, bool, bool, Array2b, Array2b]

class drjit.scalar.Array3b

Bases: ArrayBase[Array3b, _Array3bCp, bool, bool, bool, Array3b, Array3b]

class drjit.scalar.Array4b

Bases: ArrayBase[Array4b, _Array4bCp, bool, bool, bool, Array4b, Array4b]

class drjit.scalar.ArrayXb

Bases: ArrayBase[ArrayXb, _ArrayXbCp, bool, bool, bool, ArrayXb, ArrayXb]

class drjit.scalar.Array0f16

Bases: ArrayBase[Array0f16, _Array0f16Cp, float, float, float, Array0f16, Array0b]

class drjit.scalar.Array1f16

Bases: ArrayBase[Array1f16, _Array1f16Cp, float, float, float, Array1f16, Array1b]

class drjit.scalar.Array2f16

Bases: ArrayBase[Array2f16, _Array2f16Cp, float, float, float, Array2f16, Array2b]

class drjit.scalar.Array3f16

Bases: ArrayBase[Array3f16, _Array3f16Cp, float, float, float, Array3f16, Array3b]

class drjit.scalar.Array4f16

Bases: ArrayBase[Array4f16, _Array4f16Cp, float, float, float, Array4f16, Array4b]

class drjit.scalar.ArrayXf16

Bases: ArrayBase[ArrayXf16, _ArrayXf16Cp, float, float, float, ArrayXf16, ArrayXb]

class drjit.scalar.Array0f

Bases: ArrayBase[Array0f, _Array0fCp, float, float, float, Array0f, Array0b]

class drjit.scalar.Array1f

Bases: ArrayBase[Array1f, _Array1fCp, float, float, float, Array1f, Array1b]

class drjit.scalar.Array2f

Bases: ArrayBase[Array2f, _Array2fCp, float, float, float, Array2f, Array2b]

class drjit.scalar.Array3f

Bases: ArrayBase[Array3f, _Array3fCp, float, float, float, Array3f, Array3b]

class drjit.scalar.Array4f

Bases: ArrayBase[Array4f, _Array4fCp, float, float, float, Array4f, Array4b]

class drjit.scalar.ArrayXf

Bases: ArrayBase[ArrayXf, _ArrayXfCp, float, float, float, ArrayXf, ArrayXb]

class drjit.scalar.Array0u

Bases: ArrayBase[Array0u, _Array0uCp, int, int, int, Array0u, Array0b]

class drjit.scalar.Array1u

Bases: ArrayBase[Array1u, _Array1uCp, int, int, int, Array1u, Array1b]

class drjit.scalar.Array2u

Bases: ArrayBase[Array2u, _Array2uCp, int, int, int, Array2u, Array2b]

class drjit.scalar.Array3u

Bases: ArrayBase[Array3u, _Array3uCp, int, int, int, Array3u, Array3b]

class drjit.scalar.Array4u

Bases: ArrayBase[Array4u, _Array4uCp, int, int, int, Array4u, Array4b]

class drjit.scalar.ArrayXu

Bases: ArrayBase[ArrayXu, _ArrayXuCp, int, int, int, ArrayXu, ArrayXb]

class drjit.scalar.Array0i

Bases: ArrayBase[Array0i, _Array0iCp, int, int, int, Array0i, Array0b]

class drjit.scalar.Array1i

Bases: ArrayBase[Array1i, _Array1iCp, int, int, int, Array1i, Array1b]

class drjit.scalar.Array2i

Bases: ArrayBase[Array2i, _Array2iCp, int, int, int, Array2i, Array2b]

class drjit.scalar.Array3i

Bases: ArrayBase[Array3i, _Array3iCp, int, int, int, Array3i, Array3b]

class drjit.scalar.Array4i

Bases: ArrayBase[Array4i, _Array4iCp, int, int, int, Array4i, Array4b]

class drjit.scalar.ArrayXi

Bases: ArrayBase[ArrayXi, _ArrayXiCp, int, int, int, ArrayXi, ArrayXb]

class drjit.scalar.Array0f64

Bases: ArrayBase[Array0f64, _Array0f64Cp, float, float, float, Array0f64, Array0b]

class drjit.scalar.Array1f64

Bases: ArrayBase[Array1f64, _Array1f64Cp, float, float, float, Array1f64, Array1b]

class drjit.scalar.Array2f64

Bases: ArrayBase[Array2f64, _Array2f64Cp, float, float, float, Array2f64, Array2b]

class drjit.scalar.Array3f64

Bases: ArrayBase[Array3f64, _Array3f64Cp, float, float, float, Array3f64, Array3b]

class drjit.scalar.Array4f64

Bases: ArrayBase[Array4f64, _Array4f64Cp, float, float, float, Array4f64, Array4b]

class drjit.scalar.ArrayXf64

Bases: ArrayBase[ArrayXf64, _ArrayXf64Cp, float, float, float, ArrayXf64, ArrayXb]

class drjit.scalar.Array0u64

Bases: ArrayBase[Array0u64, _Array0u64Cp, int, int, int, Array0u64, Array0b]

class drjit.scalar.Array1u64

Bases: ArrayBase[Array1u64, _Array1u64Cp, int, int, int, Array1u64, Array1b]

class drjit.scalar.Array2u64

Bases: ArrayBase[Array2u64, _Array2u64Cp, int, int, int, Array2u64, Array2b]

class drjit.scalar.Array3u64

Bases: ArrayBase[Array3u64, _Array3u64Cp, int, int, int, Array3u64, Array3b]

class drjit.scalar.Array4u64

Bases: ArrayBase[Array4u64, _Array4u64Cp, int, int, int, Array4u64, Array4b]

class drjit.scalar.ArrayXu64

Bases: ArrayBase[ArrayXu64, _ArrayXu64Cp, int, int, int, ArrayXu64, ArrayXb]

class drjit.scalar.Array0i64

Bases: ArrayBase[Array0i64, _Array0i64Cp, int, int, int, Array0i64, Array0b]

class drjit.scalar.Array1i64

Bases: ArrayBase[Array1i64, _Array1i64Cp, int, int, int, Array1i64, Array1b]

class drjit.scalar.Array2i64

Bases: ArrayBase[Array2i64, _Array2i64Cp, int, int, int, Array2i64, Array2b]

class drjit.scalar.Array3i64

Bases: ArrayBase[Array3i64, _Array3i64Cp, int, int, int, Array3i64, Array3b]

class drjit.scalar.Array4i64

Bases: ArrayBase[Array4i64, _Array4i64Cp, int, int, int, Array4i64, Array4b]

class drjit.scalar.ArrayXi64

Bases: ArrayBase[ArrayXi64, _ArrayXi64Cp, int, int, int, ArrayXi64, ArrayXb]

2D arrays

class drjit.scalar.Array22b

Bases: ArrayBase[Array22b, _Array22bCp, Array2b, _Array2bCp, Array2b, Array22b, Array22b]

class drjit.scalar.Array33b

Bases: ArrayBase[Array33b, _Array33bCp, Array3b, _Array3bCp, Array3b, Array33b, Array33b]

class drjit.scalar.Array44b

Bases: ArrayBase[Array44b, _Array44bCp, Array4b, _Array4bCp, Array4b, Array44b, Array44b]

class drjit.scalar.Array22f16

Bases: ArrayBase[Array22f16, _Array22f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.scalar.Array33f16

Bases: ArrayBase[Array33f16, _Array33f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.scalar.Array44f16

Bases: ArrayBase[Array44f16, _Array44f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.scalar.Array22f

Bases: ArrayBase[Array22f, _Array22fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.scalar.Array33f

Bases: ArrayBase[Array33f, _Array33fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.scalar.Array44f

Bases: ArrayBase[Array44f, _Array44fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.scalar.Array22f64

Bases: ArrayBase[Array22f64, _Array22f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.scalar.Array33f64

Bases: ArrayBase[Array33f64, _Array33f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.scalar.Array44f64

Bases: ArrayBase[Array44f64, _Array44f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Special (complex numbers, etc.)

class drjit.scalar.Complex2f

Bases: ArrayBase[Complex2f, _Complex2fCp, float, float, float, Array2f, Array2b]

class drjit.scalar.Complex2f64

Bases: ArrayBase[Complex2f64, _Complex2f64Cp, float, float, float, Array2f64, Array2b]

class drjit.scalar.Quaternion4f16

Bases: ArrayBase[Quaternion4f16, _Quaternion4f16Cp, float, float, float, Array4f16, Array4b]

class drjit.scalar.Quaternion4f

Bases: ArrayBase[Quaternion4f, _Quaternion4fCp, float, float, float, Array4f, Array4b]

class drjit.scalar.Quaternion4f64

Bases: ArrayBase[Quaternion4f64, _Quaternion4f64Cp, float, float, float, Array4f64, Array4b]

class drjit.scalar.Matrix2f16

Bases: ArrayBase[Matrix2f16, _Matrix2f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.scalar.Matrix3f16

Bases: ArrayBase[Matrix3f16, _Matrix3f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.scalar.Matrix4f16

Bases: ArrayBase[Matrix4f16, _Matrix4f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.scalar.Matrix2f

Bases: ArrayBase[Matrix2f, _Matrix2fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.scalar.Matrix3f

Bases: ArrayBase[Matrix3f, _Matrix3fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.scalar.Matrix4f

Bases: ArrayBase[Matrix4f, _Matrix4fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.scalar.Matrix2f64

Bases: ArrayBase[Matrix2f64, _Matrix2f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.scalar.Matrix3f64

Bases: ArrayBase[Matrix3f64, _Matrix3f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.scalar.Matrix4f64

Bases: ArrayBase[Matrix4f64, _Matrix4f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Tensors

class drjit.scalar.TensorXb

Bases: ArrayBase[TensorXb, _TensorXbCp, TensorXb, _TensorXbCp, TensorXb, ArrayXb, TensorXb]

class drjit.scalar.TensorXf16

Bases: ArrayBase[TensorXf16, _TensorXf16Cp, TensorXf16, _TensorXf16Cp, TensorXf16, ArrayXf16, TensorXb]

class drjit.scalar.TensorXf

Bases: ArrayBase[TensorXf, _TensorXfCp, TensorXf, _TensorXfCp, TensorXf, ArrayXf, TensorXb]

class drjit.scalar.TensorXu

Bases: ArrayBase[TensorXu, _TensorXuCp, TensorXu, _TensorXuCp, TensorXu, ArrayXu, TensorXb]

class drjit.scalar.TensorXi

Bases: ArrayBase[TensorXi, _TensorXiCp, TensorXi, _TensorXiCp, TensorXi, ArrayXi, TensorXb]

class drjit.scalar.TensorXf64

Bases: ArrayBase[TensorXf64, _TensorXf64Cp, TensorXf64, _TensorXf64Cp, TensorXf64, ArrayXf64, TensorXb]

class drjit.scalar.TensorXu64

Bases: ArrayBase[TensorXu64, _TensorXu64Cp, TensorXu64, _TensorXu64Cp, TensorXu64, ArrayXu64, TensorXb]

class drjit.scalar.TensorXi64

Bases: ArrayBase[TensorXi64, _TensorXi64Cp, TensorXi64, _TensorXi64Cp, TensorXi64, ArrayXi64, TensorXb]

Textures

class drjit.scalar.Texture1f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.scalar.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.
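The clamping behavior can be sketched in plain Python. The clamp_lookup helper below is hypothetical (it is not Dr.Jit's implementation) and, for simplicity, takes a texel index rather than a normalized coordinate:

```python
def clamp_lookup(data, index):
    """Nearest-texel lookup with WrapMode.Clamp-style semantics:
    out-of-range indices are pinned to the boundary texel.
    Hypothetical helper for illustration only."""
    i = max(0, min(len(data) - 1, int(index)))
    return data[i]

tex = [10.0, 20.0, 30.0]
clamp_lookup(tex, -5)  # -> 10.0 (left boundary color extended)
clamp_lookup(tex, 99)  # -> 30.0 (right boundary color extended)
```

Other wrap modes would instead repeat or mirror the index before the lookup.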

  2. __init__(self, tensor: drjit.scalar.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.scalar.ArrayXf16, migrate: bool = False) -> None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf16, migrate: bool = False) -> None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) -> drjit.scalar.ArrayXf16

Return the texture data as an array object.

tensor(self) -> drjit.scalar.TensorXf16

Return the texture data as a tensor object.

filter_mode(self) -> drjit.FilterMode

Return the filter mode.

wrap_mode(self) -> drjit.WrapMode

Return the wrap mode.

use_accel(self) -> bool

Return whether the texture uses the GPU for storage and evaluation.

migrated(self) -> bool

Return whether a texture with use_accel() set to True stores its data exclusively as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) -> list[float]

Evaluate the linear interpolant represented by this texture.
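A 1D linear interpolant can be sketched in plain Python as follows. The lerp_eval helper is hypothetical and assumes the common convention of texel centers at (i + 0.5) / n on the unit interval, with clamped boundary handling; it is not Dr.Jit's implementation.

```python
def lerp_eval(data, pos):
    """1D linear texture lookup on the unit interval [0, 1], with texel
    centers at (i + 0.5) / n and clamped boundaries. Hypothetical sketch."""
    n = len(data)
    x = pos * n - 0.5                      # to texel space
    i0 = int(x // 1)                       # floor of the left texel index
    t = x - i0                             # fractional weight
    clamp = lambda i: max(0, min(n - 1, i))
    return (1.0 - t) * data[clamp(i0)] + t * data[clamp(i0 + 1)]

tex = [0.0, 10.0]
lerp_eval(tex, 0.5)  # halfway between the two texel centers -> 5.0
```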

eval_fetch(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) -> list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation, without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array1f, active: bool = Bool(True), force_drjit: bool = False) -> list[float]

Evaluate a clamped cubic B-spline interpolant represented by this texture.

Instead of interpolating the texture via B-spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which is faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (so the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.
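The equivalence between the four-tap cubic B-spline sum and two linear lookups can be verified in 1D with a short pure-Python sketch (hypothetical helper functions, not Dr.Jit's code):

```python
def bspline_weights(t):
    # Cubic B-spline basis weights for a fractional position t in [0, 1)
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t**3 / 6
    return w0, w1, w2, w3

def cubic_direct(f, t):
    # Direct four-tap weighted sum over texels f[0..3]
    return sum(w * v for w, v in zip(bspline_weights(t), f))

def cubic_via_linear(f, t):
    # The same value from two *linear* interpolant evaluations, which is
    # the trick that maps onto hardware texture units in CUDA mode
    w0, w1, w2, w3 = bspline_weights(t)
    g0, g1 = w0 + w1, w2 + w3              # weights of the two linear taps
    a0, a1 = w1 / g0, w3 / g1              # fractional positions of the taps
    lerp = lambda x, y, a: (1 - a) * x + a * y
    return g0 * lerp(f[0], f[1], a0) + g1 * lerp(f[2], f[3], a1)
```

Note that the tap positions a0 and a1 depend nonlinearly on t, which is why this reformulation cannot be differentiated through naively.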

eval_cubic_grad(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-spline.

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-spline.

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) -> list[float]

Helper function to evaluate a clamped cubic B-spline interpolant.

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture2f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.scalar.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.scalar.ArrayXf16, migrate: bool = False) -> None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf16, migrate: bool = False) -> None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) -> drjit.scalar.ArrayXf16

Return the texture data as an array object.

tensor(self) -> drjit.scalar.TensorXf16

Return the texture data as a tensor object.

filter_mode(self) -> drjit.FilterMode

Return the filter mode.

wrap_mode(self) -> drjit.WrapMode

Return the wrap mode.

use_accel(self) -> bool

Return whether the texture uses the GPU for storage and evaluation.

migrated(self) -> bool

Return whether a texture with use_accel() set to True stores its data exclusively as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) -> list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) -> list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation, without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array2f, active: bool = Bool(True), force_drjit: bool = False) -> list[float]

Evaluate a clamped cubic B-spline interpolant represented by this texture.

Instead of interpolating the texture via B-spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which is faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (so the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.

eval_cubic_grad(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-spline.

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-spline.

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) -> list[float]

Helper function to evaluate a clamped cubic B-spline interpolant.

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture3f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.scalar.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.scalar.ArrayXf16, migrate: bool = False) -> None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf16, migrate: bool = False) -> None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) -> drjit.scalar.ArrayXf16

Return the texture data as an array object.

tensor(self) -> drjit.scalar.TensorXf16

Return the texture data as a tensor object.

filter_mode(self) -> drjit.FilterMode

Return the filter mode.

wrap_mode(self) -> drjit.WrapMode

Return the wrap mode.

use_accel(self) -> bool

Return whether the texture uses the GPU for storage and evaluation.

migrated(self) -> bool

Return whether a texture with use_accel() set to True stores its data exclusively as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) -> list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) -> list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation, without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array3f, active: bool = Bool(True), force_drjit: bool = False) -> list[float]

Evaluate a clamped cubic B-spline interpolant represented by this texture.

Instead of interpolating the texture via B-spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which is faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (so the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.

eval_cubic_grad(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-spline.

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-spline.

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) -> list[float]

Helper function to evaluate a clamped cubic B-spline interpolant.

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture1f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.scalar.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.
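The drjit.WrapMode.Clamp behavior described above can be illustrated with a simplified, Dr.Jit-independent sketch. The helper below works directly in 1D texel coordinates (the actual implementation maps the unit-interval query position to texel space first, which this sketch omits); the function names are hypothetical, not part of the Dr.Jit API:

```python
import math

def clamp(i, n):
    # WrapMode.Clamp: out-of-range texel indices stick to the nearest boundary
    return max(0, min(n - 1, i))

def eval_linear_clamp(data, x):
    """Linearly interpolate a 1D texel array at a (possibly out-of-range)
    continuous texel coordinate x, clamping indices at the boundaries."""
    i = math.floor(x)
    t = x - i
    v0 = data[clamp(i, len(data))]
    v1 = data[clamp(i + 1, len(data))]
    return (1 - t) * v0 + t * v1

data = [10.0, 20.0, 30.0]
eval_linear_clamp(data, 0.5)   # midway between the first two texels -> 15.0
eval_linear_clamp(data, -3.0)  # left boundary color extends indefinitely -> 10.0
eval_linear_clamp(data, 7.5)   # clamped on the right as well -> 30.0
```

This is what "indefinitely extends the colors on the boundary along each dimension" means: every query outside the texture reads boundary texels.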

set_value(self, value: drjit.scalar.ArrayXf, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.scalar.ArrayXf

Return the texture data as an array object

tensor(self) drjit.scalar.TensorXf

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array1f, active: bool = Bool(True), force_drjit: bool = False) list[float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are both differentiable, this transformation cannot be used, since it is not linear with respect to the position (thus the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.
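The reduction from a cubic B-spline weighting to linear lookups (the GPU Gems 2 trick referenced above) can be sketched in plain Python for the 1D case. Here linear() is a hypothetical stand-in for a hardware linear texture fetch; the point is the identity that two linear fetches reproduce the four-texel weighted sum exactly:

```python
import math

def bspline_weights(t):
    # Cubic B-spline basis weights for the four texels surrounding a
    # sample with fractional position t in [0, 1)
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t**3 / 6
    return w0, w1, w2, w3

def linear(data, x):
    # Stand-in for a (hardware-accelerated) linear fetch at coordinate x
    i = math.floor(x)
    t = x - i
    return (1 - t) * data[i] + t * data[i + 1]

def cubic_direct(data, i, t):
    # Naive evaluation: weighted sum over the four surrounding texels
    w0, w1, w2, w3 = bspline_weights(t)
    return w0 * data[i-1] + w1 * data[i] + w2 * data[i+1] + w3 * data[i+2]

def cubic_two_lookups(data, i, t):
    # Equivalent result from only two linear fetches: fold each adjacent
    # pair of weights into a single fetch position and a combined weight
    w0, w1, w2, w3 = bspline_weights(t)
    g0, g1 = w0 + w1, w2 + w3
    h0 = i - 1 + w1 / g0
    h1 = i + 1 + w3 / g1
    return g0 * linear(data, h0) + g1 * linear(data, h1)
```

In 2D and 3D the same folding applies per axis, so a tricubic lookup collapses to 8 trilinear fetches instead of 64 texel reads, which is why linear filtering must be enabled for this feature.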

eval_cubic_grad(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.
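The scaling by the spatial extents mentioned above is just the chain rule: the texture is queried with coordinates on the unit interval, so a derivative with respect to the unit coordinate equals the texel-space derivative times the resolution. A minimal 1D sketch (hypothetical helper names, simplified unit-to-texel mapping that ignores the half-texel offset), verified against finite differences:

```python
import math

def bspline_weights(t):
    # Cubic B-spline basis weights for fractional position t in [0, 1)
    return ((1 - t) ** 3 / 6,
            (3 * t**3 - 6 * t**2 + 4) / 6,
            (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6,
            t**3 / 6)

def bspline_dweights(t):
    # Derivatives of the weights with respect to t (they sum to zero)
    return (-((1 - t) ** 2) / 2,
            (3 * t**2 - 4 * t) / 2,
            (-3 * t**2 + 2 * t + 1) / 2,
            t**2 / 2)

def cubic(data, x):
    # Cubic B-spline interpolant at continuous texel coordinate x
    i = math.floor(x)
    t = x - i
    return sum(w * data[i - 1 + k] for k, w in enumerate(bspline_weights(t)))

def cubic_grad_unit(data, u):
    # Gradient with respect to the *unit* coordinate u: the texel-space
    # derivative multiplied by the resolution n (the 'spatial extent')
    n = len(data)
    x = u * n  # simplified unit -> texel mapping
    i = math.floor(x)
    t = x - i
    return n * sum(d * data[i - 1 + k] for k, d in enumerate(bspline_dweights(t)))
```

The same reasoning applied twice explains why the Hessian carries a factor of the resolution per differentiated axis.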

eval_cubic_hessian(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array1f, active: bool = Bool(True)) list[float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is needed, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture2f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.scalar.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.scalar.ArrayXf, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.scalar.ArrayXf

Return the texture data as an array object

tensor(self) drjit.scalar.TensorXf

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array2f, active: bool = Bool(True), force_drjit: bool = False) list[float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are both differentiable, this transformation cannot be used, since it is not linear with respect to the position (thus the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.

eval_cubic_grad(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array2f, active: bool = Bool(True)) list[float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is needed, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture3f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.scalar.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.scalar.ArrayXf, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.scalar.ArrayXf

Return the texture data as an array object

tensor(self) drjit.scalar.TensorXf

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array3f, active: bool = Bool(True), force_drjit: bool = False) list[float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are both differentiable, this transformation cannot be used, since it is not linear with respect to the position (thus the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.

eval_cubic_grad(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array3f, active: bool = Bool(True)) list[float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is needed, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture1f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.scalar.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.scalar.ArrayXf64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.scalar.ArrayXf64

Return the texture data as an array object

tensor(self) drjit.scalar.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array1f64, active: bool = Bool(True)) list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array1f64, active: bool = Bool(True)) list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array1f64, active: bool = Bool(True), force_drjit: bool = False) list[float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are both differentiable, this transformation cannot be used, since it is not linear with respect to the position (thus the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.

eval_cubic_grad(self, pos: drjit.scalar.Array1f64, active: bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array1f64, active: bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array1f64, active: bool = Bool(True)) list[float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is needed, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture2f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.scalar.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.scalar.ArrayXf64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.scalar.ArrayXf64

Return the texture data as an array object

tensor(self) drjit.scalar.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array2f64, active: bool = Bool(True)) list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array2f64, active: bool = Bool(True)) list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array2f64, active: bool = Bool(True), force_drjit: bool = False) list[float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are both differentiable, this transformation cannot be used, since it is not linear with respect to the position (thus the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-spline basis functions.

eval_cubic_grad(self, pos: drjit.scalar.Array2f64, active: bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array2f64, active: bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array2f64, active: bool = Bool(True)) list[float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is needed, eval_cubic() is faster than this simple implementation.

class drjit.scalar.Texture3f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.scalar.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.scalar.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.
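The default drjit.WrapMode.Clamp behavior described above can be sketched in plain Python (this is an illustration of the semantics, not the Dr.Jit implementation): out-of-range texel indices are simply pinned to the nearest boundary texel along each dimension.

```python
def wrap_clamp(i: int, n: int) -> int:
    # WrapMode.Clamp semantics: pin an integer texel index into [0, n - 1].
    return min(max(i, 0), n - 1)

# Out-of-bounds lookups reuse the boundary texels, extending their
# colors indefinitely:
data = [10.0, 20.0, 30.0, 40.0]
samples = [data[wrap_clamp(i, len(data))] for i in (-2, 0, 3, 7)]
```

Here, the lookups at indices -2 and 7 fall outside the texture and resolve to the first and last texel, respectively.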

set_value(self, value: drjit.scalar.ArrayXf64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.scalar.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.scalar.ArrayXf64

Return the texture data as an array object

tensor(self) drjit.scalar.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data will additionally be retained.

property shape

Return the texture shape

eval(self, pos: drjit.scalar.Array3f64, active: bool = Bool(True)) list[float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.scalar.Array3f64, active: bool = Bool(True)) list[list[float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.scalar.Array3f64, active: bool = Bool(True), force_drjit: bool = False) list[float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.
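The reformulation above can be illustrated in one dimension: a four-tap uniform cubic B-Spline evaluation collapses into two linear lookups with adjusted positions and weights, following the scheme from the Sigg chapter cited above. The sketch below is pure Python for illustration (not the Dr.Jit code) and ignores wrapping at the boundaries.

```python
def lerp_lookup(data, p):
    # Linear interpolation at continuous texel position p (no wrapping).
    i = int(p)
    t = p - i
    return data[i] * (1.0 - t) + data[i + 1] * t

def bspline_weights(t):
    # Uniform cubic B-Spline basis weights for fractional offset t in [0, 1).
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t**3 / 6
    return w0, w1, w2, w3

def cubic_direct(data, x):
    # Direct 4-tap cubic B-Spline evaluation at continuous position x.
    i, t = int(x), x - int(x)
    w0, w1, w2, w3 = bspline_weights(t)
    return w0 * data[i - 1] + w1 * data[i] + w2 * data[i + 1] + w3 * data[i + 2]

def cubic_two_lookups(data, x):
    # Same value as cubic_direct(), expressed as a weighted sum of two
    # linear lookups -- the hardware-friendly formulation.
    i, t = int(x), x - int(x)
    w0, w1, w2, w3 = bspline_weights(t)
    g0, g1 = w0 + w1, w2 + w3
    h0 = i - 1 + w1 / g0   # fractional position inside [i - 1, i]
    h1 = i + 1 + w3 / g1   # fractional position inside [i + 1, i + 2]
    return g0 * lerp_lookup(data, h0) + g1 * lerp_lookup(data, h1)
```

Both evaluations agree to machine precision; in CUDA mode, the two linear lookups map directly onto hardware texture fetches.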

eval_cubic_grad(self, pos: drjit.scalar.Array3f64, active: bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.scalar.Array3f64, active: bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.scalar.Array3f64, active: bool = Bool(True)) list[float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

Random number generators

class drjit.scalar.PCG32

Implementation of PCG32, a member of the PCG family of random number generators proposed by Melissa O’Neill.

PCG combines a Linear Congruential Generator (LCG) with a permutation function that yields high-quality pseudorandom variates while at the same time requiring very low computational cost and internal state (only 128 bits in the case of PCG32).

More detail on the PCG family of pseudorandom number generators can be found here.

The PCG32 class is implemented as a PyTree, which means that it is compatible with symbolic function calls, loops, etc.
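For reference, PCG32's state transition is a 64-bit LCG whose old state is permuted (xorshift-high followed by a random rotation) into a 32-bit output. The standalone sketch below mirrors the reference C implementation (pcg_basic.c by Melissa O'Neill) rather than Dr.Jit's vectorized class; it is illustrative only.

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005  # LCG multiplier from the reference code

class PCG32Ref:
    def __init__(self, initstate=0x853c49e6748fea9b, initseq=0xda3e39cb94b95bdb):
        # Reference seeding procedure: fold the stream selector into the
        # increment, then mix in the initial state with two LCG steps.
        self.inc = ((initseq << 1) | 1) & MASK64
        self.state = 0
        self.next_uint32()
        self.state = (self.state + initstate) & MASK64
        self.next_uint32()

    def next_uint32(self):
        old = self.state
        self.state = (old * PCG32_MULT + self.inc) & MASK64
        # Output permutation: xorshift-high, truncate to 32 bits, then
        # rotate by the top 5 bits of the old state.
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

    def next_float32(self):
        # Uniform float in [0, 1) built from the top 24 bits.
        return (self.next_uint32() >> 8) * (1.0 / (1 << 24))
```

Dr.Jit's constructor additionally offsets initstate and initseq per entry when generating many variates in parallel, so that each lane follows a decorrelated stream.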

__init__(self, size: int = 1, initstate: int = UInt64(0x853c49e6748fea9b), initseq: int = UInt64(0xda3e39cb94b95bdb)) None
__init__(self, arg: drjit.scalar.PCG32) None

Overloaded function.

  1. __init__(self, size: int = 1, initstate: int = UInt64(0x853c49e6748fea9b), initseq: int = UInt64(0xda3e39cb94b95bdb)) -> None

Initialize a random number generator that generates size variates in parallel.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their default values are based on the original implementation.

The implementation of this routine internally calls seed(), with one small twist: when multiple random numbers are being generated in parallel, the constructor adds an offset equal to drjit.arange(UInt64, size) to both initstate and initseq to decorrelate the generated sequences.

  2. __init__(self, arg: drjit.scalar.PCG32) -> None

Copy-construct a new PCG32 instance from an existing instance.

seed(self, initstate: int = UInt64(0x853c49e6748fea9b), initseq: int = UInt64(0xda3e39cb94b95bdb)) None

Seed the random number generator with the given initial state and sequence ID.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their values are the defaults from the original implementation.

next_uint32(self) int
next_uint32(self, arg: bool, /) int

Generate a uniformly distributed unsigned 32-bit random number

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_uint64(self) int
next_uint64(self, arg: bool, /) int

Generate a uniformly distributed unsigned 64-bit random number

Internally, the function calls next_uint32() twice.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float32(self) float
next_float32(self, arg: bool, /) float

Generate a uniformly distributed single precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float64(self) float
next_float64(self, arg: bool, /) float

Generate a uniformly distributed double precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_uint32_bounded(self, bound: int, mask: bool = Bool(True)) int

Generate a uniformly distributed 32-bit integer number on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.

next_uint64_bounded(self, bound: int, mask: bool = Bool(True)) int

Generate a uniformly distributed 64-bit integer number on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.
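The iterative scheme behind the two bounded variants is rejection sampling: a small sliver of the 32-bit range is discarded so that the remaining values split evenly into `bound` buckets, which removes modulo bias. A hedged pure-Python sketch (the helper name and the stub generator are illustrative, not part of the API):

```python
def bounded_uint32(rng_next, bound):
    # Reject values below `threshold` so the accepted range has a size
    # that is an exact multiple of `bound` (no modulo bias). Since the
    # rejected region is smaller than `bound`, the loop almost always
    # terminates within 1-2 iterations.
    threshold = (2**32 - bound) % bound
    while True:
        r = rng_next()          # uniform 32-bit variate
        if r >= threshold:
            return r % bound
```

The same construction applies to the 64-bit variant with 2**64 in place of 2**32.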

__add__(self, arg: int, /) drjit.scalar.PCG32

Advance the pseudorandom number generator.

This function implements a multi-step advance function that is equivalent to (but more efficient than) calling the random number generator arg times in sequence.

This is useful to advance a newly constructed PRNG to a certain known state.
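The multi-step advance uses the standard O(log n) LCG jump-ahead (Brown, "Random Number Generation with Arbitrary Strides"): the affine step state -> a*state + c is composed with itself by repeated squaring. A self-contained sketch over the 64-bit LCG underlying PCG32 (illustrative, not the Dr.Jit code):

```python
MASK64 = (1 << 64) - 1
MULT = 6364136223846793005  # PCG32 LCG multiplier

def lcg_step(state, inc):
    # One application of the underlying LCG.
    return (state * MULT + inc) & MASK64

def lcg_advance(state, inc, delta):
    # Compose `delta` applications of state -> MULT*state + inc in
    # O(log delta) steps via repeated squaring of the affine map.
    cur_mult, cur_plus = MULT, inc
    acc_mult, acc_plus = 1, 0
    while delta > 0:
        if delta & 1:
            acc_mult = (acc_mult * cur_mult) & MASK64
            acc_plus = (acc_plus * cur_mult + cur_plus) & MASK64
        cur_plus = ((cur_mult + 1) * cur_plus) & MASK64
        cur_mult = (cur_mult * cur_mult) & MASK64
        delta >>= 1
    return (acc_mult * state + acc_plus) & MASK64
```

Advancing by delta this way yields exactly the same state as delta sequential calls, which is what makes rng + n cheap even for very large n.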

__iadd__(self, arg: int, /) drjit.scalar.PCG32

In-place addition operator based on __add__().

__sub__(self, arg: int, /) drjit.scalar.PCG32
__sub__(self, arg: drjit.scalar.PCG32, /) int

Rewind the pseudorandom number generator.

This function implements the opposite of __add__ to step a PRNG backwards. It can also compute the difference (as counted by the number of internal next_uint32 steps) between two PCG32 instances. This assumes that the two instances were consistently seeded.

__isub__(self, arg: int, /) drjit.scalar.PCG32

In-place subtraction operator based on __sub__().

property inc

Sequence increment of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

property state

Sequence state of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

LLVM array namespace (drjit.llvm)

The LLVM backend is vectorized, hence types listed as scalar actually represent an array of scalars partaking in a parallel computation (analogously, 1D arrays are arrays of 1D arrays, etc.).

Scalar

class drjit.llvm.Bool

Bases: ArrayBase[Bool, _BoolCp, bool, bool, Bool, Bool, Bool]

class drjit.llvm.Float16

Bases: ArrayBase[Float16, _Float16Cp, float, float, Float16, Float16, Bool]

class drjit.llvm.Float

Bases: ArrayBase[Float, _FloatCp, float, float, Float, Float, Bool]

class drjit.llvm.Float64

Bases: ArrayBase[Float64, _Float64Cp, float, float, Float64, Float64, Bool]

class drjit.llvm.UInt

Bases: ArrayBase[UInt, _UIntCp, int, int, UInt, UInt, Bool]

class drjit.llvm.UInt64

Bases: ArrayBase[UInt64, _UInt64Cp, int, int, UInt64, UInt64, Bool]

class drjit.llvm.Int

Bases: ArrayBase[Int, _IntCp, int, int, Int, Int, Bool]

class drjit.llvm.Int64

Bases: ArrayBase[Int64, _Int64Cp, int, int, Int64, Int64, Bool]

1D arrays

class drjit.llvm.Array0b

Bases: ArrayBase[Array0b, _Array0bCp, Bool, _BoolCp, Bool, Array0b, Array0b]

class drjit.llvm.Array1b

Bases: ArrayBase[Array1b, _Array1bCp, Bool, _BoolCp, Bool, Array1b, Array1b]

class drjit.llvm.Array2b

Bases: ArrayBase[Array2b, _Array2bCp, Bool, _BoolCp, Bool, Array2b, Array2b]

class drjit.llvm.Array3b

Bases: ArrayBase[Array3b, _Array3bCp, Bool, _BoolCp, Bool, Array3b, Array3b]

class drjit.llvm.Array4b

Bases: ArrayBase[Array4b, _Array4bCp, Bool, _BoolCp, Bool, Array4b, Array4b]

class drjit.llvm.ArrayXb

Bases: ArrayBase[ArrayXb, _ArrayXbCp, Bool, _BoolCp, Bool, ArrayXb, ArrayXb]

class drjit.llvm.Array0f16

Bases: ArrayBase[Array0f16, _Array0f16Cp, Float16, _Float16Cp, Float16, Array0f16, Array0b]

class drjit.llvm.Array1f16

Bases: ArrayBase[Array1f16, _Array1f16Cp, Float16, _Float16Cp, Float16, Array1f16, Array1b]

class drjit.llvm.Array2f16

Bases: ArrayBase[Array2f16, _Array2f16Cp, Float16, _Float16Cp, Float16, Array2f16, Array2b]

class drjit.llvm.Array3f16

Bases: ArrayBase[Array3f16, _Array3f16Cp, Float16, _Float16Cp, Float16, Array3f16, Array3b]

class drjit.llvm.Array4f16

Bases: ArrayBase[Array4f16, _Array4f16Cp, Float16, _Float16Cp, Float16, Array4f16, Array4b]

class drjit.llvm.ArrayXf16

Bases: ArrayBase[ArrayXf16, _ArrayXf16Cp, Float16, _Float16Cp, Float16, ArrayXf16, ArrayXb]

class drjit.llvm.Array0f

Bases: ArrayBase[Array0f, _Array0fCp, Float, _FloatCp, Float, Array0f, Array0b]

class drjit.llvm.Array1f

Bases: ArrayBase[Array1f, _Array1fCp, Float, _FloatCp, Float, Array1f, Array1b]

class drjit.llvm.Array2f

Bases: ArrayBase[Array2f, _Array2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.llvm.Array3f

Bases: ArrayBase[Array3f, _Array3fCp, Float, _FloatCp, Float, Array3f, Array3b]

class drjit.llvm.Array4f

Bases: ArrayBase[Array4f, _Array4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.llvm.ArrayXf

Bases: ArrayBase[ArrayXf, _ArrayXfCp, Float, _FloatCp, Float, ArrayXf, ArrayXb]

class drjit.llvm.Array0u

Bases: ArrayBase[Array0u, _Array0uCp, UInt, _UIntCp, UInt, Array0u, Array0b]

class drjit.llvm.Array1u

Bases: ArrayBase[Array1u, _Array1uCp, UInt, _UIntCp, UInt, Array1u, Array1b]

class drjit.llvm.Array2u

Bases: ArrayBase[Array2u, _Array2uCp, UInt, _UIntCp, UInt, Array2u, Array2b]

class drjit.llvm.Array3u

Bases: ArrayBase[Array3u, _Array3uCp, UInt, _UIntCp, UInt, Array3u, Array3b]

class drjit.llvm.Array4u

Bases: ArrayBase[Array4u, _Array4uCp, UInt, _UIntCp, UInt, Array4u, Array4b]

class drjit.llvm.ArrayXu

Bases: ArrayBase[ArrayXu, _ArrayXuCp, UInt, _UIntCp, UInt, ArrayXu, ArrayXb]

class drjit.llvm.Array0i

Bases: ArrayBase[Array0i, _Array0iCp, Int, _IntCp, Int, Array0i, Array0b]

class drjit.llvm.Array1i

Bases: ArrayBase[Array1i, _Array1iCp, Int, _IntCp, Int, Array1i, Array1b]

class drjit.llvm.Array2i

Bases: ArrayBase[Array2i, _Array2iCp, Int, _IntCp, Int, Array2i, Array2b]

class drjit.llvm.Array3i

Bases: ArrayBase[Array3i, _Array3iCp, Int, _IntCp, Int, Array3i, Array3b]

class drjit.llvm.Array4i

Bases: ArrayBase[Array4i, _Array4iCp, Int, _IntCp, Int, Array4i, Array4b]

class drjit.llvm.ArrayXi

Bases: ArrayBase[ArrayXi, _ArrayXiCp, Int, _IntCp, Int, ArrayXi, ArrayXb]

class drjit.llvm.Array0f64

Bases: ArrayBase[Array0f64, _Array0f64Cp, Float64, _Float64Cp, Float64, Array0f64, Array0b]

class drjit.llvm.Array1f64

Bases: ArrayBase[Array1f64, _Array1f64Cp, Float64, _Float64Cp, Float64, Array1f64, Array1b]

class drjit.llvm.Array2f64

Bases: ArrayBase[Array2f64, _Array2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.llvm.Array3f64

Bases: ArrayBase[Array3f64, _Array3f64Cp, Float64, _Float64Cp, Float64, Array3f64, Array3b]

class drjit.llvm.Array4f64

Bases: ArrayBase[Array4f64, _Array4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.llvm.ArrayXf64

Bases: ArrayBase[ArrayXf64, _ArrayXf64Cp, Float64, _Float64Cp, Float64, ArrayXf64, ArrayXb]

class drjit.llvm.Array0u64

Bases: ArrayBase[Array0u64, _Array0u64Cp, UInt64, _UInt64Cp, UInt64, Array0u64, Array0b]

class drjit.llvm.Array1u64

Bases: ArrayBase[Array1u64, _Array1u64Cp, UInt64, _UInt64Cp, UInt64, Array1u64, Array1b]

class drjit.llvm.Array2u64

Bases: ArrayBase[Array2u64, _Array2u64Cp, UInt64, _UInt64Cp, UInt64, Array2u64, Array2b]

class drjit.llvm.Array3u64

Bases: ArrayBase[Array3u64, _Array3u64Cp, UInt64, _UInt64Cp, UInt64, Array3u64, Array3b]

class drjit.llvm.Array4u64

Bases: ArrayBase[Array4u64, _Array4u64Cp, UInt64, _UInt64Cp, UInt64, Array4u64, Array4b]

class drjit.llvm.ArrayXu64

Bases: ArrayBase[ArrayXu64, _ArrayXu64Cp, UInt64, _UInt64Cp, UInt64, ArrayXu64, ArrayXb]

class drjit.llvm.Array0i64

Bases: ArrayBase[Array0i64, _Array0i64Cp, Int64, _Int64Cp, Int64, Array0i64, Array0b]

class drjit.llvm.Array1i64

Bases: ArrayBase[Array1i64, _Array1i64Cp, Int64, _Int64Cp, Int64, Array1i64, Array1b]

class drjit.llvm.Array2i64

Bases: ArrayBase[Array2i64, _Array2i64Cp, Int64, _Int64Cp, Int64, Array2i64, Array2b]

class drjit.llvm.Array3i64

Bases: ArrayBase[Array3i64, _Array3i64Cp, Int64, _Int64Cp, Int64, Array3i64, Array3b]

class drjit.llvm.Array4i64

Bases: ArrayBase[Array4i64, _Array4i64Cp, Int64, _Int64Cp, Int64, Array4i64, Array4b]

class drjit.llvm.ArrayXi64

Bases: ArrayBase[ArrayXi64, _ArrayXi64Cp, Int64, _Int64Cp, Int64, ArrayXi64, ArrayXb]

2D arrays

class drjit.llvm.Array22b

Bases: ArrayBase[Array22b, _Array22bCp, Array2b, _Array2bCp, Array2b, Array22b, Array22b]

class drjit.llvm.Array33b

Bases: ArrayBase[Array33b, _Array33bCp, Array3b, _Array3bCp, Array3b, Array33b, Array33b]

class drjit.llvm.Array44b

Bases: ArrayBase[Array44b, _Array44bCp, Array4b, _Array4bCp, Array4b, Array44b, Array44b]

class drjit.llvm.Array22f

Bases: ArrayBase[Array22f, _Array22fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.llvm.Array33f

Bases: ArrayBase[Array33f, _Array33fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.llvm.Array44f

Bases: ArrayBase[Array44f, _Array44fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.llvm.Array22f64

Bases: ArrayBase[Array22f64, _Array22f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.llvm.Array33f64

Bases: ArrayBase[Array33f64, _Array33f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.llvm.Array44f64

Bases: ArrayBase[Array44f64, _Array44f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Special (complex numbers, etc.)

class drjit.llvm.Complex2f

Bases: ArrayBase[Complex2f, _Complex2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.llvm.Complex2f64

Bases: ArrayBase[Complex2f64, _Complex2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.llvm.Quaternion4f16

Bases: ArrayBase[Quaternion4f16, _Quaternion4f16Cp, Float16, _Float16Cp, Float16, Array4f16, Array4b]

class drjit.llvm.Quaternion4f

Bases: ArrayBase[Quaternion4f, _Quaternion4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.llvm.Quaternion4f64

Bases: ArrayBase[Quaternion4f64, _Quaternion4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.llvm.Matrix2f16

Bases: ArrayBase[Matrix2f16, _Matrix2f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.llvm.Matrix3f16

Bases: ArrayBase[Matrix3f16, _Matrix3f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.llvm.Matrix4f16

Bases: ArrayBase[Matrix4f16, _Matrix4f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.llvm.Matrix2f

Bases: ArrayBase[Matrix2f, _Matrix2fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.llvm.Matrix3f

Bases: ArrayBase[Matrix3f, _Matrix3fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.llvm.Matrix4f

Bases: ArrayBase[Matrix4f, _Matrix4fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.llvm.Matrix2f64

Bases: ArrayBase[Matrix2f64, _Matrix2f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.llvm.Matrix3f64

Bases: ArrayBase[Matrix3f64, _Matrix3f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.llvm.Matrix4f64

Bases: ArrayBase[Matrix4f64, _Matrix4f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Tensors

class drjit.llvm.TensorXb

Bases: ArrayBase[TensorXb, _TensorXbCp, TensorXb, _TensorXbCp, TensorXb, Bool, TensorXb]

class drjit.llvm.TensorXf16

Bases: ArrayBase[TensorXf16, _TensorXf16Cp, TensorXf16, _TensorXf16Cp, TensorXf16, Float16, TensorXb]

class drjit.llvm.TensorXf

Bases: ArrayBase[TensorXf, _TensorXfCp, TensorXf, _TensorXfCp, TensorXf, Float, TensorXb]

class drjit.llvm.TensorXu

Bases: ArrayBase[TensorXu, _TensorXuCp, TensorXu, _TensorXuCp, TensorXu, UInt, TensorXb]

class drjit.llvm.TensorXi

Bases: ArrayBase[TensorXi, _TensorXiCp, TensorXi, _TensorXiCp, TensorXi, Int, TensorXb]

class drjit.llvm.TensorXf64

Bases: ArrayBase[TensorXf64, _TensorXf64Cp, TensorXf64, _TensorXf64Cp, TensorXf64, Float64, TensorXb]

class drjit.llvm.TensorXu64

Bases: ArrayBase[TensorXu64, _TensorXu64Cp, TensorXu64, _TensorXu64Cp, TensorXu64, UInt64, TensorXb]

class drjit.llvm.TensorXi64

Bases: ArrayBase[TensorXi64, _TensorXi64Cp, TensorXi64, _TensorXi64Cp, TensorXi64, Int64, TensorXb]

Textures

class drjit.llvm.Texture1f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float16, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf16, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.Float16

Return the texture data as an array object

tensor(self) drjit.llvm.TensorXf16

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data will additionally be retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) list[list[drjit.llvm.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.Texture2f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float16, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf16, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.Float16

Return the texture data as an array object

tensor(self) drjit.llvm.TensorXf16

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data will additionally be retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> list[list[drjit.llvm.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.
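For illustration, the texel footprint of such a lookup can be sketched in plain Python. This is not the Dr.Jit API: the function name and the half-texel center convention are assumptions of this sketch.

```python
import math

# Hypothetical sketch: which texels a 2D linear lookup at normalized
# position (u, v) references, for a texture of resolution (w, h) with
# texel centers assumed at half-integer coordinates.
def bilinear_footprint(u, v, w, h):
    x = u * w - 0.5            # shift so texel centers sit on the integer lattice
    y = v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0    # fractional offsets inside the cell
    texels = [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]
    weights = [(1 - fx) * (1 - fy), fx * (1 - fy), (1 - fx) * fy, fx * fy]
    return texels, weights

texels, weights = bilinear_footprint(0.4, 0.6, 8, 8)
assert texels[0] == (2, 4)                 # lower-left texel of the cell
assert abs(sum(weights) - 1.0) < 1e-12     # weights form a partition of unity
```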

eval_cubic(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) -> list[drjit.llvm.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). The implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions in that case.
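The reformulation can be sketched in one dimension in plain Python. This illustrates the technique from the chapter cited above under assumed conventions; it is not Dr.Jit's internal implementation.

```python
import math

# Uniform cubic B-spline basis weights at fractional position t in [0, 1)
def bspline_weights(t):
    return ((1 - t) ** 3 / 6.0,
            (3 * t**3 - 6 * t**2 + 4) / 6.0,
            (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
            t**3 / 6.0)

# One linear (hardware-style) fetch at continuous coordinate x
def lerp(tex, x):
    j = math.floor(x)
    f = x - j
    return (1 - f) * tex[j] + f * tex[j + 1]

tex = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]  # arbitrary 1D texel data
i, t = 2, 0.375                          # integer cell index and fraction
w0, w1, w2, w3 = bspline_weights(t)

# Direct evaluation: four texel reads weighted by the basis functions
direct = w0 * tex[i - 1] + w1 * tex[i] + w2 * tex[i + 1] + w3 * tex[i + 2]

# Equivalent weighted sum of only two linear fetches
g0, g1 = w0 + w1, w2 + w3
p0 = (i - 1) + w1 / g0                   # position of the first linear fetch
p1 = (i + 1) + w3 / g1                   # position of the second linear fetch
two_fetch = g0 * lerp(tex, p0) + g1 * lerp(tex, p1)

assert abs(direct - two_fetch) < 1e-9
```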

eval_cubic_grad(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.
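The multiplication by the spatial extent described above is the ordinary chain rule: a query on the unit domain is internally evaluated in texel space, so derivatives pick up a factor of the resolution. A minimal numeric sketch (the quadratic g below is a stand-in, not actual texture data):

```python
n = 16                          # hypothetical texel count along one axis
g = lambda x: 0.5 * x * x       # stand-in for the value in *texel space*
f = lambda u: g(u * n)          # the same quantity queried on the unit domain

u, eps = 0.3, 1e-6
numeric = (f(u + eps) - f(u - eps)) / (2 * eps)  # d f / d u, finite differences
analytic = n * (u * n)                           # chain rule: n * g'(u * n)
assert abs(numeric - analytic) < 1e-4
```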

eval_cubic_hessian(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.Texture3f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.llvm.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.
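To make the drjit.WrapMode.Clamp behavior described for the constructor above concrete: out-of-range texel indices are pinned to the boundary, which extends the boundary colors indefinitely. A one-line illustrative sketch, not the Dr.Jit implementation:

```python
def wrap_clamp(i, n):
    # Pin an out-of-range texel index to the valid range [0, n - 1]
    return min(max(i, 0), n - 1)

# Indices left of the texture map to texel 0, indices right of it to texel n - 1
assert [wrap_clamp(i, 4) for i in (-2, -1, 0, 3, 4, 9)] == [0, 0, 0, 3, 3, 3]
```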

set_value(self, value: drjit.llvm.Float16, migrate: bool = False) -> None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf16, migrate: bool = False) -> None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) -> drjit.llvm.Float16

Return the texture data as an array object

tensor(self) -> drjit.llvm.TensorXf16

Return the texture data as a tensor object

filter_mode(self) -> drjit.FilterMode

Return the filter mode

wrap_mode(self) -> drjit.WrapMode

Return the wrap mode

use_accel(self) -> bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) -> bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data will additionally be retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> list[list[drjit.llvm.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) -> list[drjit.llvm.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). The implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions in that case.

eval_cubic_grad(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.Texture1f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.llvm.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float, migrate: bool = False) -> None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf, migrate: bool = False) -> None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) -> drjit.llvm.Float

Return the texture data as an array object

tensor(self) -> drjit.llvm.TensorXf

Return the texture data as a tensor object

filter_mode(self) -> drjit.FilterMode

Return the filter mode

wrap_mode(self) -> drjit.WrapMode

Return the wrap mode

use_accel(self) -> bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) -> bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data will additionally be retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Evaluate the linear interpolant represented by this texture.
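As an illustration of what such an evaluation computes, a 1D linear interpolant with clamped boundaries can be sketched in plain Python. The half-texel center convention is an assumption of this sketch, not a statement about Dr.Jit's exact addressing.

```python
import math

def eval_linear_1d(tex, u):
    # Interpolate a 1D texture at normalized position u, with texel centers
    # at (i + 0.5) / n and boundary texels extended by clamping (sketch only).
    n = len(tex)
    x = u * n - 0.5              # texel centers sit at half-integers
    i0 = math.floor(x)
    f = x - i0                   # fractional position between the two neighbors
    clamp = lambda i: min(max(i, 0), n - 1)
    return (1 - f) * tex[clamp(i0)] + f * tex[clamp(i0 + 1)]

assert eval_linear_1d([0.0, 10.0], 0.5) == 5.0   # midpoint between the two texels
assert eval_linear_1d([0.0, 10.0], 0.0) == 0.0   # clamped to the left texel
```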

eval_fetch(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) -> list[list[drjit.llvm.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) -> list[drjit.llvm.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). The implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions in that case.

eval_cubic_grad(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.llvm.Array1f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.Texture2f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.llvm.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float, migrate: bool = False) -> None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf, migrate: bool = False) -> None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) -> drjit.llvm.Float

Return the texture data as an array object

tensor(self) -> drjit.llvm.TensorXf

Return the texture data as a tensor object

filter_mode(self) -> drjit.FilterMode

Return the filter mode

wrap_mode(self) -> drjit.WrapMode

Return the wrap mode

use_accel(self) -> bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) -> bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data will additionally be retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> list[list[drjit.llvm.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) -> list[drjit.llvm.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). The implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions in that case.
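The nonlinearity responsible for this caveat is visible directly in the uniform cubic B-spline basis weights, which are cubic polynomials of the fractional query position. An illustrative math check, not Dr.Jit code:

```python
def bspline_weights(t):
    # Uniform cubic B-spline basis weights at fractional position t
    return ((1 - t) ** 3 / 6.0,
            (3 * t**3 - 6 * t**2 + 4) / 6.0,
            (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
            t**3 / 6.0)

# The weights always sum to one (partition of unity)...
for t in (0.0, 0.25, 0.5, 0.9):
    assert abs(sum(bspline_weights(t)) - 1.0) < 1e-12

# ...but each weight is nonlinear in t, so the derived linear-fetch positions
# are nonlinear in the query position as well:
w_mid = bspline_weights(0.5)[0]
w_avg = 0.5 * (bspline_weights(0.0)[0] + bspline_weights(1.0)[0])
assert abs(w_mid - w_avg) > 1e-3  # linearity in t would force these to match
```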

eval_cubic_grad(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.llvm.Array2f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.Texture3f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.llvm.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float, migrate: bool = False) -> None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf, migrate: bool = False) -> None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) -> drjit.llvm.Float

Return the texture data as an array object

tensor(self) -> drjit.llvm.TensorXf

Return the texture data as a tensor object

filter_mode(self) -> drjit.FilterMode

Return the filter mode

wrap_mode(self) -> drjit.WrapMode

Return the wrap mode

use_accel(self) -> bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) -> bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data will additionally be retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> list[list[drjit.llvm.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.
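In three dimensions, a linear lookup references the eight texels at the corners of the cell containing the sample, and each corner weight is a product of per-axis weights. An illustrative sketch, not the Dr.Jit API:

```python
from itertools import product

def trilinear_weights(fx, fy, fz):
    # Blend weights of the 8 cell corners for fractional offsets (fx, fy, fz)
    return {(dx, dy, dz):
            (fx if dx else 1 - fx) * (fy if dy else 1 - fy) * (fz if dz else 1 - fz)
            for dx, dy, dz in product((0, 1), repeat=3)}

w = trilinear_weights(0.2, 0.5, 0.75)
assert len(w) == 8
assert abs(sum(w.values()) - 1.0) < 1e-12        # partition of unity
assert abs(w[(0, 0, 0)] - 0.8 * 0.5 * 0.25) < 1e-12
```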

eval_cubic(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) -> list[drjit.llvm.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). The implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions in that case.

eval_cubic_grad(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_hessian(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> tuple

Evaluate the positional gradient and Hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the size of its shape.

eval_cubic_helper(self, pos: drjit.llvm.Array3f, active: drjit.llvm.Bool = Bool(True)) -> list[drjit.llvm.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.Texture1f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None
__init__(self, tensor: drjit.llvm.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage.Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.Float64

Return the texture data as an array object

tensor(self) drjit.llvm.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array1f64, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float64]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array1f64, active: drjit.llvm.Bool = Bool(True)) list[list[drjit.llvm.Float64]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.Array1f64, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.Float64]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When both the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (hence the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.
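The reduction from four cubic B-spline taps to two linear lookups can be illustrated in plain Python. The following is a 1D, single-channel sketch in index space, for illustration only; it is not the Dr.Jit implementation:

```python
import math

def lerp_fetch(data, x):
    # Linear texture fetch in index space: interpolate between floor(x) and floor(x) + 1
    i = int(math.floor(x))
    w = x - i
    return (1 - w) * data[i] + w * data[i + 1]

def bspline_weights(t):
    # Cubic B-spline basis weights for fractional position t in [0, 1)
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t**3 / 6
    return w0, w1, w2, w3

def cubic_4tap(data, x):
    # Direct evaluation: weighted sum of four texels
    i = int(math.floor(x))
    w0, w1, w2, w3 = bspline_weights(x - i)
    return w0 * data[i - 1] + w1 * data[i] + w2 * data[i + 1] + w3 * data[i + 2]

def cubic_2linear(data, x):
    # Equivalent formulation: two linear fetches, weighted and offset so that
    # (hardware) linear filtering performs half of the work
    i = int(math.floor(x))
    w0, w1, w2, w3 = bspline_weights(x - i)
    g0, g1 = w0 + w1, w2 + w3
    h0 = (i - 1) + w1 / g0   # position of the first combined fetch
    h1 = (i + 1) + w3 / g1   # position of the second combined fetch
    return g0 * lerp_fetch(data, h0) + g1 * lerp_fetch(data, h1)
```

Both formulations produce identical results for interior positions; the second needs only two (hardware-filtered) lookups instead of four.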

eval_cubic_grad(self, pos: drjit.llvm.Array1f64, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the actual size of the texture.

eval_cubic_hessian(self, pos: drjit.llvm.Array1f64, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the actual size of the texture.

eval_cubic_helper(self, pos: drjit.llvm.Array1f64, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float64]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.
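For reference, the linear lookup performed by eval() can be mimicked in pure Python for a single-channel 1D texture with clamp wrapping, assuming texel centers at (i + 0.5) / n (the usual GPU convention). This is an illustrative sketch, not the actual implementation:

```python
import math

def tex_eval_1d(data, pos):
    """Linearly interpolate a 1D single-channel texture at pos in [0, 1],
    clamping out-of-range lookups (drjit.WrapMode.Clamp)."""
    n = len(data)
    x = pos * n - 0.5             # shift so texel i is centered at (i + 0.5) / n
    i0 = int(math.floor(x))
    w = x - i0                    # interpolation weight of the right neighbor
    i0c = min(max(i0, 0), n - 1)  # clamp both taps to the valid index range
    i1c = min(max(i0 + 1, 0), n - 1)
    return (1 - w) * data[i0c] + w * data[i1c]
```

Evaluating exactly at a texel center (e.g. pos = 0.25 for a two-texel texture) returns that texel's value unchanged, while positions past the boundary reproduce the boundary texel.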

class drjit.llvm.Texture2f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.Float64

Return the texture data as an array object

tensor(self) drjit.llvm.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array2f64, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float64]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array2f64, active: drjit.llvm.Bool = Bool(True)) list[list[drjit.llvm.Float64]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.Array2f64, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.Float64]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When both the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (hence the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.Array2f64, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the actual size of the texture.

eval_cubic_hessian(self, pos: drjit.llvm.Array2f64, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the actual size of the texture.

eval_cubic_helper(self, pos: drjit.llvm.Array2f64, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float64]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.Texture3f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration for allocation and evaluation. In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.Float64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the argument migrate and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.Float64

Return the texture data as an array object

tensor(self) drjit.llvm.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether a texture with use_accel() set to True exclusively stores its data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.Array3f64, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float64]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.Array3f64, active: drjit.llvm.Bool = Bool(True)) list[list[drjit.llvm.Float64]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.Array3f64, active: drjit.llvm.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.Float64]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When both the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (hence the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.Array3f64, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the actual size of the texture.

eval_cubic_hessian(self, pos: drjit.llvm.Array3f64, active: drjit.llvm.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicitly differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-sized volume to the actual size of the texture.

eval_cubic_helper(self, pos: drjit.llvm.Array3f64, active: drjit.llvm.Bool = Bool(True)) list[drjit.llvm.Float64]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

Random number generators

class drjit.llvm.PCG32

Implementation of PCG32, a member of the PCG family of random number generators proposed by Melissa O’Neill.

PCG combines a Linear Congruential Generator (LCG) with a permutation function that yields high-quality pseudorandom variates while at the same time requiring very low computational cost and little internal state (only 128 bits in the case of PCG32).

More detail on the PCG family of pseudorandom number generators can be found at https://www.pcg-random.org.

The PCG32 class is implemented as a PyTree, which means that it is compatible with symbolic function calls, loops, etc.

__init__(self, size: int = 1, initstate: drjit.llvm.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.llvm.UInt64 = UInt64(0xda3e39cb94b95bdb)) None
__init__(self, arg: drjit.llvm.PCG32) None

Overloaded function.

  1. __init__(self, size: int = 1, initstate: drjit.llvm.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.llvm.UInt64 = UInt64(0xda3e39cb94b95bdb)) -> None

Initialize a random number generator that generates size variates in parallel.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their default values are taken from the original implementation.

This routine internally calls seed(), with one small twist: when multiple random numbers are generated in parallel, the constructor adds an offset equal to drjit.arange(UInt64, size) to both initstate and initseq to de-correlate the generated sequences.

  2. __init__(self, arg: drjit.llvm.PCG32) -> None

Copy-construct a new PCG32 instance from an existing instance.

seed(self, initstate: drjit.llvm.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.llvm.UInt64 = UInt64(0xda3e39cb94b95bdb)) None

Seed the random number generator with the given initial state and sequence ID.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their values are the defaults from the original implementation.

next_uint32(self) drjit.llvm.UInt
next_uint32(self, arg: drjit.llvm.Bool, /) drjit.llvm.UInt

Generate a uniformly distributed unsigned 32-bit random number

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.
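The underlying state transition and output permutation can be sketched in pure Python. This is a scalar reference of the PCG32 (XSH-RR) scheme for illustration; the Dr.Jit version operates on whole arrays, and the float conversion shown is one common choice rather than necessarily the one used internally:

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005

class PCG32Ref:
    """Scalar pure-Python reference of the PCG32 generator."""

    def __init__(self, initstate=0x853c49e6748fea9b, initseq=0xda3e39cb94b95bdb):
        # Standard PCG32 seeding: fold the stream ID into the increment,
        # then mix in the initial state with two generator steps
        self.state = 0
        self.inc = ((initseq << 1) | 1) & MASK64
        self.next_uint32()
        self.state = (self.state + initstate) & MASK64
        self.next_uint32()

    def next_uint32(self):
        oldstate = self.state
        # Linear congruential step of the internal 64-bit state
        self.state = (oldstate * PCG32_MULT + self.inc) & MASK64
        # Output permutation: xorshift-high, then a data-dependent rotation
        xorshifted = (((oldstate >> 18) ^ oldstate) >> 27) & 0xFFFFFFFF
        rot = oldstate >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF

    def next_float32(self):
        # Uniform variate on [0, 1) built from the top 24 bits
        return (self.next_uint32() >> 8) * (1.0 / (1 << 24))
```

Two instances seeded identically produce the same sequence, while a different initseq selects a statistically independent stream.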

next_uint64(self) drjit.llvm.UInt64
next_uint64(self, arg: drjit.llvm.Bool, /) drjit.llvm.UInt64

Generate a uniformly distributed unsigned 64-bit random number

Internally, the function calls next_uint32() twice.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float32(self) drjit.llvm.Float
next_float32(self, arg: drjit.llvm.Bool, /) drjit.llvm.Float

Generate a uniformly distributed single precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float64(self) drjit.llvm.Float64
next_float64(self, arg: drjit.llvm.Bool, /) drjit.llvm.Float64

Generate a uniformly distributed double precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_uint32_bounded(self, bound: int, mask: drjit.llvm.Bool = Bool(True)) drjit.llvm.UInt

Generate a uniformly distributed 32-bit integer number on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.

next_uint64_bounded(self, bound: int, mask: drjit.llvm.Bool = Bool(True)) drjit.llvm.UInt64

Generate a uniformly distributed 64-bit integer number on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.
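The unbiased bounded-sampling scheme mentioned above can be sketched as follows: rejection against the threshold 2^32 mod bound, given any source of uniform 32-bit words. The function name is illustrative, not part of the API:

```python
def next_uint32_bounded_ref(next_uint32, bound):
    """Draw an unbiased integer in [0, bound) from a uniform 32-bit source.

    Draws below `threshold` are rejected so that the accepted range
    (2**32 - threshold values) is an exact multiple of `bound`; the loop
    almost always exits after 1-2 draws since threshold < bound <= 2**31.
    """
    threshold = (1 << 32) % bound
    while True:
        r = next_uint32()
        if r >= threshold:
            return r % bound
```

Without the rejection step, a plain `r % bound` would slightly favor the smaller residues whenever bound does not evenly divide 2^32.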

__add__(self, arg: drjit.llvm.Int64, /) drjit.llvm.PCG32

Advance the pseudorandom number generator.

This function implements a multi-step advance function that is equivalent to (but more efficient than) calling the random number generator arg times in sequence.

This is useful to advance a newly constructed PRNG to a certain known state.
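Such a multi-step advance admits an O(log n) implementation based on repeated squaring of the LCG's affine update (following Brown's "Random Number Generation with Arbitrary Strides"). A pure-Python sketch of the state-advance step, with illustrative names:

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005

def lcg_advance(state, inc, delta):
    """Advance an LCG state by `delta` steps in O(log delta) time.

    One step maps state -> state * PCG32_MULT + inc (mod 2**64); composing
    the affine map with itself by repeated squaring skips `delta` steps
    without generating the intermediate states.
    """
    cur_mult, cur_plus = PCG32_MULT, inc
    acc_mult, acc_plus = 1, 0
    delta &= MASK64
    while delta > 0:
        if delta & 1:
            # Fold the current power-of-two stride into the accumulator
            acc_mult = (acc_mult * cur_mult) & MASK64
            acc_plus = (acc_plus * cur_mult + cur_plus) & MASK64
        # Square the affine map: double the stride
        cur_plus = ((cur_mult + 1) * cur_plus) & MASK64
        cur_mult = (cur_mult * cur_mult) & MASK64
        delta >>= 1
    return (acc_mult * state + acc_plus) & MASK64
```

Advancing by a negative amount works automatically because the stride is taken modulo 2^64, which is how a PRNG can also be stepped backwards.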

__iadd__(self, arg: drjit.llvm.Int64, /) drjit.llvm.PCG32

In-place addition operator based on __add__().

__sub__(self, arg: drjit.llvm.Int64, /) drjit.llvm.PCG32
__sub__(self, arg: drjit.llvm.PCG32, /) drjit.llvm.Int64

Rewind the pseudorandom number generator.

This function implements the opposite of __add__ to step a PRNG backwards. It can also compute the difference (as counted by the number of internal next_uint32 steps) between two PCG32 instances. This assumes that the two instances were consistently seeded.

__isub__(self, arg: drjit.llvm.Int64, /) drjit.llvm.PCG32

In-place subtraction operator based on __sub__().

property inc

Sequence increment of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

property state

Sequence state of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

LLVM array namespace with automatic differentiation (drjit.llvm.ad)

The LLVM AD backend is vectorized, hence types listed as scalar actually represent an array of scalars partaking in a parallel computation (analogously, 1D arrays are arrays of 1D arrays, etc.).

Scalars

class drjit.llvm.ad.Bool

Bases: ArrayBase[Bool, _BoolCp, bool, bool, Bool, Bool, Bool]

class drjit.llvm.ad.Float16

Bases: ArrayBase[Float16, _Float16Cp, float, float, Float16, Float16, Bool]

class drjit.llvm.ad.Float

Bases: ArrayBase[Float, _FloatCp, float, float, Float, Float, Bool]

class drjit.llvm.ad.Float64

Bases: ArrayBase[Float64, _Float64Cp, float, float, Float64, Float64, Bool]

class drjit.llvm.ad.UInt

Bases: ArrayBase[UInt, _UIntCp, int, int, UInt, UInt, Bool]

class drjit.llvm.ad.UInt64

Bases: ArrayBase[UInt64, _UInt64Cp, int, int, UInt64, UInt64, Bool]

class drjit.llvm.ad.Int

Bases: ArrayBase[Int, _IntCp, int, int, Int, Int, Bool]

class drjit.llvm.ad.Int64

Bases: ArrayBase[Int64, _Int64Cp, int, int, Int64, Int64, Bool]

1D arrays

class drjit.llvm.ad.Array0b

Bases: ArrayBase[Array0b, _Array0bCp, Bool, _BoolCp, Bool, Array0b, Array0b]

class drjit.llvm.ad.Array1b

Bases: ArrayBase[Array1b, _Array1bCp, Bool, _BoolCp, Bool, Array1b, Array1b]

class drjit.llvm.ad.Array2b

Bases: ArrayBase[Array2b, _Array2bCp, Bool, _BoolCp, Bool, Array2b, Array2b]

class drjit.llvm.ad.Array3b

Bases: ArrayBase[Array3b, _Array3bCp, Bool, _BoolCp, Bool, Array3b, Array3b]

class drjit.llvm.ad.Array4b

Bases: ArrayBase[Array4b, _Array4bCp, Bool, _BoolCp, Bool, Array4b, Array4b]

class drjit.llvm.ad.ArrayXb

Bases: ArrayBase[ArrayXb, _ArrayXbCp, Bool, _BoolCp, Bool, ArrayXb, ArrayXb]

class drjit.llvm.ad.Array0f16

Bases: ArrayBase[Array0f16, _Array0f16Cp, Float16, _Float16Cp, Float16, Array0f16, Array0b]

class drjit.llvm.ad.Array1f16

Bases: ArrayBase[Array1f16, _Array1f16Cp, Float16, _Float16Cp, Float16, Array1f16, Array1b]

class drjit.llvm.ad.Array2f16

Bases: ArrayBase[Array2f16, _Array2f16Cp, Float16, _Float16Cp, Float16, Array2f16, Array2b]

class drjit.llvm.ad.Array3f16

Bases: ArrayBase[Array3f16, _Array3f16Cp, Float16, _Float16Cp, Float16, Array3f16, Array3b]

class drjit.llvm.ad.Array4f16

Bases: ArrayBase[Array4f16, _Array4f16Cp, Float16, _Float16Cp, Float16, Array4f16, Array4b]

class drjit.llvm.ad.ArrayXf16

Bases: ArrayBase[ArrayXf16, _ArrayXf16Cp, Float16, _Float16Cp, Float16, ArrayXf16, ArrayXb]

class drjit.llvm.ad.Array0f

Bases: ArrayBase[Array0f, _Array0fCp, Float, _FloatCp, Float, Array0f, Array0b]

class drjit.llvm.ad.Array1f

Bases: ArrayBase[Array1f, _Array1fCp, Float, _FloatCp, Float, Array1f, Array1b]

class drjit.llvm.ad.Array2f

Bases: ArrayBase[Array2f, _Array2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.llvm.ad.Array3f

Bases: ArrayBase[Array3f, _Array3fCp, Float, _FloatCp, Float, Array3f, Array3b]

class drjit.llvm.ad.Array4f

Bases: ArrayBase[Array4f, _Array4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.llvm.ad.ArrayXf

Bases: ArrayBase[ArrayXf, _ArrayXfCp, Float, _FloatCp, Float, ArrayXf, ArrayXb]

class drjit.llvm.ad.Array0u

Bases: ArrayBase[Array0u, _Array0uCp, UInt, _UIntCp, UInt, Array0u, Array0b]

class drjit.llvm.ad.Array1u

Bases: ArrayBase[Array1u, _Array1uCp, UInt, _UIntCp, UInt, Array1u, Array1b]

class drjit.llvm.ad.Array2u

Bases: ArrayBase[Array2u, _Array2uCp, UInt, _UIntCp, UInt, Array2u, Array2b]

class drjit.llvm.ad.Array3u

Bases: ArrayBase[Array3u, _Array3uCp, UInt, _UIntCp, UInt, Array3u, Array3b]

class drjit.llvm.ad.Array4u

Bases: ArrayBase[Array4u, _Array4uCp, UInt, _UIntCp, UInt, Array4u, Array4b]

class drjit.llvm.ad.ArrayXu

Bases: ArrayBase[ArrayXu, _ArrayXuCp, UInt, _UIntCp, UInt, ArrayXu, ArrayXb]

class drjit.llvm.ad.Array0i

Bases: ArrayBase[Array0i, _Array0iCp, Int, _IntCp, Int, Array0i, Array0b]

class drjit.llvm.ad.Array1i

Bases: ArrayBase[Array1i, _Array1iCp, Int, _IntCp, Int, Array1i, Array1b]

class drjit.llvm.ad.Array2i

Bases: ArrayBase[Array2i, _Array2iCp, Int, _IntCp, Int, Array2i, Array2b]

class drjit.llvm.ad.Array3i

Bases: ArrayBase[Array3i, _Array3iCp, Int, _IntCp, Int, Array3i, Array3b]

class drjit.llvm.ad.Array4i

Bases: ArrayBase[Array4i, _Array4iCp, Int, _IntCp, Int, Array4i, Array4b]

class drjit.llvm.ad.ArrayXi

Bases: ArrayBase[ArrayXi, _ArrayXiCp, Int, _IntCp, Int, ArrayXi, ArrayXb]

class drjit.llvm.ad.Array0f64

Bases: ArrayBase[Array0f64, _Array0f64Cp, Float64, _Float64Cp, Float64, Array0f64, Array0b]

class drjit.llvm.ad.Array1f64

Bases: ArrayBase[Array1f64, _Array1f64Cp, Float64, _Float64Cp, Float64, Array1f64, Array1b]

class drjit.llvm.ad.Array2f64

Bases: ArrayBase[Array2f64, _Array2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.llvm.ad.Array3f64

Bases: ArrayBase[Array3f64, _Array3f64Cp, Float64, _Float64Cp, Float64, Array3f64, Array3b]

class drjit.llvm.ad.Array4f64

Bases: ArrayBase[Array4f64, _Array4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.llvm.ad.ArrayXf64

Bases: ArrayBase[ArrayXf64, _ArrayXf64Cp, Float64, _Float64Cp, Float64, ArrayXf64, ArrayXb]

class drjit.llvm.ad.Array0u64

Bases: ArrayBase[Array0u64, _Array0u64Cp, UInt64, _UInt64Cp, UInt64, Array0u64, Array0b]

class drjit.llvm.ad.Array1u64

Bases: ArrayBase[Array1u64, _Array1u64Cp, UInt64, _UInt64Cp, UInt64, Array1u64, Array1b]

class drjit.llvm.ad.Array2u64

Bases: ArrayBase[Array2u64, _Array2u64Cp, UInt64, _UInt64Cp, UInt64, Array2u64, Array2b]

class drjit.llvm.ad.Array3u64

Bases: ArrayBase[Array3u64, _Array3u64Cp, UInt64, _UInt64Cp, UInt64, Array3u64, Array3b]

class drjit.llvm.ad.Array4u64

Bases: ArrayBase[Array4u64, _Array4u64Cp, UInt64, _UInt64Cp, UInt64, Array4u64, Array4b]

class drjit.llvm.ad.ArrayXu64

Bases: ArrayBase[ArrayXu64, _ArrayXu64Cp, UInt64, _UInt64Cp, UInt64, ArrayXu64, ArrayXb]

class drjit.llvm.ad.Array0i64

Bases: ArrayBase[Array0i64, _Array0i64Cp, Int64, _Int64Cp, Int64, Array0i64, Array0b]

class drjit.llvm.ad.Array1i64

Bases: ArrayBase[Array1i64, _Array1i64Cp, Int64, _Int64Cp, Int64, Array1i64, Array1b]

class drjit.llvm.ad.Array2i64

Bases: ArrayBase[Array2i64, _Array2i64Cp, Int64, _Int64Cp, Int64, Array2i64, Array2b]

class drjit.llvm.ad.Array3i64

Bases: ArrayBase[Array3i64, _Array3i64Cp, Int64, _Int64Cp, Int64, Array3i64, Array3b]

class drjit.llvm.ad.Array4i64

Bases: ArrayBase[Array4i64, _Array4i64Cp, Int64, _Int64Cp, Int64, Array4i64, Array4b]

class drjit.llvm.ad.ArrayXi64

Bases: ArrayBase[ArrayXi64, _ArrayXi64Cp, Int64, _Int64Cp, Int64, ArrayXi64, ArrayXb]

2D arrays

class drjit.llvm.ad.Array22b

Bases: ArrayBase[Array22b, _Array22bCp, Array2b, _Array2bCp, Array2b, Array22b, Array22b]

class drjit.llvm.ad.Array33b

Bases: ArrayBase[Array33b, _Array33bCp, Array3b, _Array3bCp, Array3b, Array33b, Array33b]

class drjit.llvm.ad.Array44b

Bases: ArrayBase[Array44b, _Array44bCp, Array4b, _Array4bCp, Array4b, Array44b, Array44b]

class drjit.llvm.ad.Array22f16

Bases: ArrayBase[Array22f16, _Array22f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.llvm.ad.Array33f16

Bases: ArrayBase[Array33f16, _Array33f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.llvm.ad.Array44f16

Bases: ArrayBase[Array44f16, _Array44f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.llvm.ad.Array22f

Bases: ArrayBase[Array22f, _Array22fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.llvm.ad.Array33f

Bases: ArrayBase[Array33f, _Array33fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.llvm.ad.Array44f

Bases: ArrayBase[Array44f, _Array44fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.llvm.ad.Array22f64

Bases: ArrayBase[Array22f64, _Array22f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.llvm.ad.Array33f64

Bases: ArrayBase[Array33f64, _Array33f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.llvm.ad.Array44f64

Bases: ArrayBase[Array44f64, _Array44f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Special (complex numbers, etc.)

class drjit.llvm.ad.Complex2f

Bases: ArrayBase[Complex2f, _Complex2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.llvm.ad.Complex2f64

Bases: ArrayBase[Complex2f64, _Complex2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.llvm.ad.Quaternion4f

Bases: ArrayBase[Quaternion4f, _Quaternion4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.llvm.ad.Quaternion4f64

Bases: ArrayBase[Quaternion4f64, _Quaternion4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.llvm.ad.Matrix2f

Bases: ArrayBase[Matrix2f, _Matrix2fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.llvm.ad.Matrix3f

Bases: ArrayBase[Matrix3f, _Matrix3fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.llvm.ad.Matrix4f

Bases: ArrayBase[Matrix4f, _Matrix4fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.llvm.ad.Matrix2f64

Bases: ArrayBase[Matrix2f64, _Matrix2f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.llvm.ad.Matrix3f64

Bases: ArrayBase[Matrix3f64, _Matrix3f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.llvm.ad.Matrix4f64

Bases: ArrayBase[Matrix4f64, _Matrix4f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Tensors

class drjit.llvm.ad.TensorXb

Bases: ArrayBase[TensorXb, _TensorXbCp, TensorXb, _TensorXbCp, TensorXb, Bool, TensorXb]

class drjit.llvm.ad.TensorXf16

Bases: ArrayBase[TensorXf16, _TensorXf16Cp, TensorXf16, _TensorXf16Cp, TensorXf16, Float16, TensorXb]

class drjit.llvm.ad.TensorXf

Bases: ArrayBase[TensorXf, _TensorXfCp, TensorXf, _TensorXfCp, TensorXf, Float, TensorXb]

class drjit.llvm.ad.TensorXu

Bases: ArrayBase[TensorXu, _TensorXuCp, TensorXu, _TensorXuCp, TensorXu, UInt, TensorXb]

class drjit.llvm.ad.TensorXi

Bases: ArrayBase[TensorXi, _TensorXiCp, TensorXi, _TensorXiCp, TensorXi, Int, TensorXb]

class drjit.llvm.ad.TensorXf64

Bases: ArrayBase[TensorXf64, _TensorXf64Cp, TensorXf64, _TensorXf64Cp, TensorXf64, Float64, TensorXb]

class drjit.llvm.ad.TensorXu64

Bases: ArrayBase[TensorXu64, _TensorXu64Cp, TensorXu64, _TensorXu64Cp, TensorXu64, UInt64, TensorXb]

class drjit.llvm.ad.TensorXi64

Bases: ArrayBase[TensorXi64, _TensorXi64Cp, TensorXi64, _TensorXi64Cp, TensorXi64, Int64, TensorXb]

Textures

class drjit.llvm.ad.Texture1f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.
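The wrap modes mentioned above determine how out-of-range lookups are folded back into the valid index range. The following pure-Python sketch illustrates the presumed behavior of clamp, repeat, and mirror addressing (an illustration of the concept, not Dr.Jit's implementation):

```python
# Conceptual sketch of how a wrap mode maps an out-of-range texel
# index back into [0, n-1]. Mirrors the intent of drjit.WrapMode,
# but is an illustrative assumption, not the library's code.

def wrap(i: int, n: int, mode: str) -> int:
    if mode == "clamp":          # extend boundary colors indefinitely
        return min(max(i, 0), n - 1)
    elif mode == "repeat":       # tile the texture periodically
        return i % n
    elif mode == "mirror":       # reflect at each boundary
        period = 2 * n
        i = i % period
        return i if i < n else period - 1 - i
    raise ValueError(mode)

assert wrap(-3, 4, "clamp") == 0    # clamped to the left boundary
assert wrap(9, 4, "repeat") == 1    # 9 mod 4
assert wrap(5, 4, "mirror") == 2    # reflected back into range
```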

set_value(self, value: drjit.llvm.ad.Float16, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf16, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float16

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf16

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Evaluate the linear interpolant represented by this texture.
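Conceptually, a linear eval() in 1D reduces to a weighted average of the two nearest texels. The sketch below assumes positions in [0, 1] with texel centers at (i + 0.5)/n and clamp-style boundary handling; both are assumptions for illustration, not a statement of Dr.Jit's exact convention:

```python
# Minimal pure-Python sketch of 1D linear texture filtering, matching
# the presumed behavior of eval() with FilterMode.Linear and
# WrapMode.Clamp. Illustrative only -- not Dr.Jit's implementation.

def eval_linear(texels, pos):
    """Sample `texels` at pos in [0, 1]; texel centers at (i + 0.5)/n."""
    n = len(texels)
    x = pos * n - 0.5                 # continuous texel-space coordinate
    i0 = int(x // 1)                  # left neighbor (may be -1)
    t = x - i0                        # interpolation weight
    clamp = lambda i: min(max(i, 0), n - 1)
    return (1 - t) * texels[clamp(i0)] + t * texels[clamp(i0 + 1)]

tex = [0.0, 1.0, 2.0, 3.0]
assert eval_linear(tex, 0.125) == 0.0   # exactly on texel 0's center
assert eval_linear(tex, 0.375) == 1.0   # exactly on texel 1's center
assert eval_linear(tex, 0.25) == 0.5    # halfway between texels 0 and 1
```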

eval_fetch(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, the linear lookups are accelerated by hardware texture units, making this faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (hence the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.
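The reformulation described above can be verified in a few lines of pure Python: the four cubic B-Spline taps collapse into two linear lookups at shifted positions, weighted by the summed basis weights (a sketch of the technique from the cited chapter, not Dr.Jit's implementation):

```python
# Sketch of the trick from "Fast Third-Order Texture Filtering"
# (GPU Gems 2, Ch. 20): a 4-tap cubic B-Spline interpolation is
# rewritten as a weighted sum of two *linear* lookups, which CUDA
# texture units evaluate in hardware. Pure Python, for illustration.

def bspline_weights(t):
    # Uniform cubic B-Spline basis weights for fractional offset t
    return ((1 - t) ** 3 / 6,
            (3 * t**3 - 6 * t**2 + 4) / 6,
            (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6,
            t**3 / 6)

def linear(tex, x):
    # Linear lookup in texel space (integer x = texel center)
    i = int(x // 1); t = x - i
    return (1 - t) * tex[i] + t * tex[i + 1]

def cubic_direct(tex, x):
    # Direct 4-tap evaluation of the cubic B-Spline interpolant
    i = int(x // 1); t = x - i
    w = bspline_weights(t)
    return sum(w[k] * tex[i - 1 + k] for k in range(4))

def cubic_two_linear(tex, x):
    # Equivalent evaluation via two linear lookups at shifted positions
    i = int(x // 1); t = x - i
    w0, w1, w2, w3 = bspline_weights(t)
    g0, g1 = w0 + w1, w2 + w3
    h0 = i - 1 + w1 / g0
    h1 = i + 1 + w3 / g1
    return g0 * linear(tex, h0) + g1 * linear(tex, h1)

tex = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
for x in (1.25, 2.0, 3.75):
    assert abs(cubic_direct(tex, x) - cubic_two_linear(tex, x)) < 1e-12
```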

eval_cubic_grad(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.ad.Texture2f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.ad.Float16, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf16, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float16

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf16

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, the linear lookups are accelerated by hardware texture units, making this faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (hence the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.ad.Texture3f16
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf16, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.ad.Float16, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf16, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float16

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf16

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, the linear lookups are accelerated by hardware texture units, making this faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (hence the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.ad.Texture1f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.ad.Float, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, the linear lookups are accelerated by hardware texture units, making this faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used, as it is not linear with respect to the position (hence the default AD graph would give incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and Hessian are multiplied by the spatial extents to account for the transformation from the unit-size volume to the actual texture dimensions.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array1f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, the eval_cubic() function is faster than this simple implementation.

class drjit.llvm.ad.Texture2f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from tensor. It subsequently invokes set_tensor(tensor) to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.ad.Float, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array2f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail, and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

class drjit.llvm.ad.Texture3f
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from the tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.
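To make the linear-filtering and Clamp wrap-mode semantics above concrete, here is a minimal pure-Python sketch of a 1D lookup with texel centers at (i + 0.5)/n. This is a hypothetical illustration of the sampling convention, not Dr.Jit's actual implementation:

```python
def texture_lookup_clamp(data, pos):
    """Linearly interpolate `data` at unit coordinate `pos`, clamping
    out-of-range lookups to the boundary texels (WrapMode.Clamp)."""
    n = len(data)
    x = pos * n - 0.5                        # continuous texel coordinate
    i0 = int(x // 1)                         # left neighbor (may be out of range)
    t = x - i0                               # interpolation weight in [0, 1)
    clamp = lambda i: min(max(i, 0), n - 1)  # extend boundary colors indefinitely
    return (1 - t) * data[clamp(i0)] + t * data[clamp(i0 + 1)]

# Inside [0, 1), the lookup interpolates; outside, it clamps:
texture_lookup_clamp([0.0, 1.0], 0.5)   # → 0.5 (midway between the two texels)
texture_lookup_clamp([0.0, 1.0], 2.0)   # → 1.0 (clamped to the last texel)
```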

set_value(self, value: drjit.llvm.ad.Float, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.
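The regrouping described above can be sketched in plain Python for the 1D case. This is a hypothetical illustration of the technique, not Dr.Jit's implementation: the four B-Spline taps collapse into two linear fetches whose positions and grouped weights reproduce the exact cubic result.

```python
import math

def bspline_weights(t):
    # Cubic B-spline basis weights for fractional position t in [0, 1)
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t**3 / 6
    return w0, w1, w2, w3

def lerp_fetch(data, x):
    # Linear fetch at continuous texel index x, with clamped boundaries
    i0 = math.floor(x)
    t = x - i0
    clamp = lambda j: min(max(j, 0), len(data) - 1)
    return (1 - t) * data[clamp(i0)] + t * data[clamp(i0 + 1)]

def cubic_direct(data, x):
    # Reference: four explicit taps weighted by the basis functions
    i = math.floor(x)
    w = bspline_weights(x - i)
    clamp = lambda j: min(max(j, 0), len(data) - 1)
    return sum(wk * data[clamp(i - 1 + k)] for k, wk in enumerate(w))

def cubic_two_lookups(data, x):
    # Same value from just two linear fetches (hardware-friendly form)
    i = math.floor(x)
    w0, w1, w2, w3 = bspline_weights(x - i)
    g0, g1 = w0 + w1, w2 + w3            # grouped weights
    h0 = i - 1 + w1 / g0                 # position of the first linear fetch
    h1 = i + 1 + w3 / g1                 # position of the second linear fetch
    return g0 * lerp_fetch(data, h0) + g1 * lerp_fetch(data, h1)
```

In CUDA mode, the two lerp_fetch calls map to hardware-filtered texture reads, which is why linear filtering must be enabled to use eval_cubic().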

eval_cubic_grad(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.
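As a 1D sketch of what "explicit differentiated basis functions" means here (a hypothetical illustration, not Dr.Jit's code): differentiate the four B-spline weights with respect to the fractional position and reuse the same four taps.

```python
import math

def _clamp(j, n):
    return min(max(j, 0), n - 1)

def bspline_weights(t):
    # Cubic B-spline basis weights for fractional position t
    return ((1 - t) ** 3 / 6,
            (3 * t**3 - 6 * t**2 + 4) / 6,
            (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6,
            t**3 / 6)

def bspline_weights_grad(t):
    # Derivatives of the four weights with respect to t (they sum to zero)
    return (-(1 - t) ** 2 / 2,
            (3 * t**2 - 4 * t) / 2,
            (-3 * t**2 + 2 * t + 1) / 2,
            t**2 / 2)

def cubic_1d(data, x):
    i = math.floor(x)
    w = bspline_weights(x - i)
    return sum(wk * data[_clamp(i - 1 + k, len(data))] for k, wk in enumerate(w))

def cubic_grad_1d(data, x):
    # Positional gradient in texel units; multiplying by the resolution
    # expresses it with respect to the unit-size volume
    i = math.floor(x)
    dw = bspline_weights_grad(x - i)
    return sum(wk * data[_clamp(i - 1 + k, len(data))] for k, wk in enumerate(dw))
```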

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array3f, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail, and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

class drjit.llvm.ad.Texture1f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from the tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.ad.Float64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float64

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array1f64, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float64]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array1f64, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float64]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array1f64, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float64]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.ad.Array1f64, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array1f64, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array1f64, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float64]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail, and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

class drjit.llvm.ad.Texture2f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from the tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.ad.Float64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float64

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array2f64, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float64]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array2f64, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float64]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array2f64, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float64]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.ad.Array2f64, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array2f64, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array2f64, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float64]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail, and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

class drjit.llvm.ad.Texture3f64
__init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None
__init__(self, tensor: drjit.llvm.ad.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) None

Overloaded function.

  1. __init__(self, shape: collections.abc.Sequence[int], channels: int, use_accel: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Create a new texture with the specified size and channel count

On CUDA, this is a slow operation that synchronizes the GPU pipeline, so texture objects should be reused/updated via set_value() and set_tensor() as much as possible.

When use_accel is set to False in CUDA mode, the texture will not use hardware acceleration (for allocation and evaluation). In other modes, this argument has no effect.

The filter_mode parameter defines the interpolation method to be used in all evaluation routines. By default, the texture is linearly interpolated. Besides nearest/linear filtering, the implementation also provides a clamped cubic B-spline interpolation scheme in case a higher-order interpolation is needed. In CUDA mode, this is done using a series of linear lookups to optimally use the hardware (hence, linear filtering must be enabled to use this feature).

When evaluating the texture outside of its boundaries, the wrap_mode defines the wrapping method. The default behavior is drjit.WrapMode.Clamp, which indefinitely extends the colors on the boundary along each dimension.

  2. __init__(self, tensor: drjit.llvm.ad.TensorXf64, use_accel: bool = True, migrate: bool = True, filter_mode: drjit.FilterMode = FilterMode.Linear, wrap_mode: drjit.WrapMode = WrapMode.Clamp) -> None

Construct a new texture from a given tensor.

This constructor allocates texture memory with the shape information deduced from the tensor. It subsequently invokes set_tensor() to fill the texture memory with the provided tensor.

When both migrate and use_accel are set to True in CUDA mode, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_value(self, value: drjit.llvm.ad.Float64, migrate: bool = False) None

Override the texture contents with the provided linearized 1D array.

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

set_tensor(self, tensor: drjit.llvm.ad.TensorXf64, migrate: bool = False) None

Override the texture contents with the provided tensor.

This method updates the values of all texels. Changing the texture resolution or its number of channels is also supported. However, on CUDA, such operations have a significantly larger overhead (the GPU pipeline needs to be synchronized for new texture objects to be created).

In CUDA mode, when both the migrate argument and use_accel() are True, the texture exclusively stores a copy of the input data as a CUDA texture to avoid redundant storage. Note that the texture is still differentiable even when migrated.

value(self) drjit.llvm.ad.Float64

Return the texture data as an array object

tensor(self) drjit.llvm.ad.TensorXf64

Return the texture data as a tensor object

filter_mode(self) drjit.FilterMode

Return the filter mode

wrap_mode(self) drjit.WrapMode

Return the wrap mode

use_accel(self) bool

Return whether the texture uses the GPU for storage and evaluation

migrated(self) bool

Return whether textures with use_accel() set to True only store the data as a hardware-accelerated CUDA texture.

If False, a copy of the array data is additionally retained.

property shape

Return the texture shape

eval(self, pos: drjit.llvm.ad.Array3f64, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float64]

Evaluate the linear interpolant represented by this texture.

eval_fetch(self, pos: drjit.llvm.ad.Array3f64, active: drjit.llvm.ad.Bool = Bool(True)) list[list[drjit.llvm.ad.Float64]]

Fetch the texels that would be referenced in a texture lookup with linear interpolation without actually performing this interpolation.

eval_cubic(self, pos: drjit.llvm.ad.Array3f64, active: drjit.llvm.ad.Bool = Bool(True), force_drjit: bool = False) list[drjit.llvm.ad.Float64]

Evaluate a clamped cubic B-Spline interpolant represented by this texture

Instead of interpolating the texture via B-Spline basis functions, the implementation transforms this calculation into an equivalent weighted sum of several linear interpolant evaluations. In CUDA mode, this can then be accelerated by hardware texture units, which runs faster than a naive implementation. More information can be found in:

GPU Gems 2, Chapter 20, “Fast Third-Order Texture Filtering” by Christian Sigg.

When the underlying grid data and the query position are differentiable, this transformation cannot be used as it is not linear with respect to position (thus the default AD graph gives incorrect results). In that case, the implementation calls the eval_cubic_helper() function to replace the AD graph with a direct evaluation of the B-Spline basis functions.

eval_cubic_grad(self, pos: drjit.llvm.ad.Array3f64, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_hessian(self, pos: drjit.llvm.ad.Array3f64, active: drjit.llvm.ad.Bool = Bool(True)) tuple

Evaluate the positional gradient and hessian matrix of a cubic B-Spline

This implementation computes the result directly from explicit differentiated basis functions. It has no autodiff support.

The resulting gradient and hessian have been multiplied by the spatial extents to account for the transformation from the unit-size volume to the texture's actual extents.

eval_cubic_helper(self, pos: drjit.llvm.ad.Array3f64, active: drjit.llvm.ad.Bool = Bool(True)) list[drjit.llvm.ad.Float64]

Helper function to evaluate a clamped cubic B-Spline interpolant

This is an implementation detail, and should only be called by the eval_cubic() function to construct an AD graph. When only the cubic evaluation result is desired, eval_cubic() is faster than this simple implementation.

Random number generators

class drjit.llvm.ad.PCG32

Implementation of PCG32, a member of the PCG family of random number generators proposed by Melissa O’Neill.

PCG combines a Linear Congruential Generator (LCG) with a permutation function that yields high-quality pseudorandom variates while at the same time requiring very low computational cost and internal state (only 128 bits in the case of PCG32).

More detail on the PCG family of pseudorandom number generators can be found here.

The PCG32 class is implemented as a PyTree, which means that it is compatible with symbolic function calls, loops, etc.

__init__(self, size: int = 1, initstate: drjit.llvm.ad.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.llvm.ad.UInt64 = UInt64(0xda3e39cb94b95bdb)) None
__init__(self, arg: drjit.llvm.ad.PCG32) None

Overloaded function.

  1. __init__(self, size: int = 1, initstate: drjit.llvm.ad.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.llvm.ad.UInt64 = UInt64(0xda3e39cb94b95bdb)) -> None

Initialize a random number generator that generates size variates in parallel.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their default values are based on the original implementation.

The implementation of this routine internally calls seed(), with one small twist. When multiple random numbers are being generated in parallel, the constructor adds an offset equal to drjit.arange(UInt64, size) to both initstate and initseq to de-correlate the generated sequences.

  2. __init__(self, arg: drjit.llvm.ad.PCG32) -> None

Copy-construct a new PCG32 instance from an existing instance.

seed(self, initstate: drjit.llvm.ad.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.llvm.ad.UInt64 = UInt64(0xda3e39cb94b95bdb)) None

Seed the random number generator with the given initial state and sequence ID.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their values are the defaults from the original implementation.
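For reference, the seeding procedure and the XSH-RR output permutation can be written out in a few lines of pure Python, following O'Neill's original pcg32 C implementation. This is a standalone sketch, independent of the Dr.Jit types:

```python
M64 = (1 << 64) - 1  # emulate 64-bit unsigned wraparound

class PCG32Ref:
    MULT = 6364136223846793005  # LCG multiplier from the reference code

    def __init__(self, initstate=0x853c49e6748fea9b, initseq=0xda3e39cb94b95bdb):
        self.seed(initstate, initseq)

    def seed(self, initstate, initseq):
        # Same procedure as pcg32_srandom(): two warm-up steps
        self.state = 0
        self.inc = ((initseq << 1) | 1) & M64
        self.next_uint32()
        self.state = (self.state + initstate) & M64
        self.next_uint32()

    def next_uint32(self):
        # Advance the LCG, then permute the *previous* state into an output
        old = self.state
        self.state = (old * self.MULT + self.inc) & M64
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF
```

With the seed (42, 54) used by O'Neill's demo program, the first output of this sketch is 0xa15c02b7, matching the published pcg32 test vector.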

next_uint32(self) drjit.llvm.ad.UInt
next_uint32(self, arg: drjit.llvm.ad.Bool, /) drjit.llvm.ad.UInt

Generate a uniformly distributed unsigned 32-bit random number.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.
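The masked overload's semantics can be illustrated with a scalar loop. The real implementation is vectorized, and both the output value for masked-off entries and the helper signature below are assumptions of this sketch, not Dr.Jit's API:

```python
def next_uint32_masked(states, incs, mask, step):
    # Per-entry masking: entries with mask[i] == False keep their PRNG
    # state unchanged; this sketch returns 0 for their output slot
    # (the value produced for masked-off entries is unspecified).
    outs = []
    for i, m in enumerate(mask):
        if m:
            states[i], out = step(states[i], incs[i])
        else:
            out = 0
        outs.append(out)
    return outs
```

Here `step` stands for any scalar PCG32 step function mapping (state, inc) to (new_state, output).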

next_uint64(self) drjit.llvm.ad.UInt64
next_uint64(self, arg: drjit.llvm.ad.Bool, /) drjit.llvm.ad.UInt64

Generate a uniformly distributed unsigned 64-bit random number.

Internally, the function calls next_uint32() twice.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float32(self) drjit.llvm.ad.Float
next_float32(self, arg: drjit.llvm.ad.Bool, /) drjit.llvm.ad.Float

Generate a uniformly distributed single precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.
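Dr.Jit's exact bit-level construction is not spelled out here, so the sketch below shows two common ways of mapping one 32-bit draw to a float in [0, 1): a plain scaling, and the mantissa-splicing trick often used to obtain this cheaply in single precision. Both are illustrative assumptions, not the library's implementation:

```python
import struct

def uint32_to_float01(x: int) -> float:
    # Simplest construction: scale the integer into [0, 1).
    return x * (1.0 / (1 << 32))

def uint32_to_float01_bits(x: int) -> float:
    # Bit-level construction: splice the top 23 random bits into the
    # mantissa of a float in [1, 2), then subtract 1 to land in [0, 1).
    bits = 0x3F800000 | (x >> 9)
    return struct.unpack('<f', struct.pack('<I', bits))[0] - 1.0
```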

next_float64(self) drjit.llvm.ad.Float64
next_float64(self, arg: drjit.llvm.ad.Bool, /) drjit.llvm.ad.Float64

Generate a uniformly distributed double precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_uint32_bounded(self, bound: int, mask: drjit.llvm.ad.Bool = Bool(True)) drjit.llvm.ad.UInt

Generate a uniformly distributed unsigned 32-bit integer on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.

next_uint64_bounded(self, bound: int, mask: drjit.llvm.ad.Bool = Bool(True)) drjit.llvm.ad.UInt64

Generate a uniformly distributed unsigned 64-bit integer on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.

__add__(self, arg: drjit.llvm.ad.Int64, /) drjit.llvm.ad.PCG32

Advance the pseudorandom number generator.

This function implements a multi-step advance function that is equivalent to (but more efficient than) calling the random number generator arg times in sequence.

This is useful to advance a newly constructed PRNG to a certain known state.

__iadd__(self, arg: drjit.llvm.ad.Int64, /) drjit.llvm.ad.PCG32

In-place addition operator based on __add__().

__sub__(self, arg: drjit.llvm.ad.Int64, /) drjit.llvm.ad.PCG32
__sub__(self, arg: drjit.llvm.ad.PCG32, /) drjit.llvm.ad.Int64

Rewind the pseudorandom number generator.

This function implements the opposite of __add__ to step a PRNG backwards. It can also compute the difference (as counted by the number of internal next_uint32 steps) between two PCG32 instances, assuming that the two instances were consistently seeded (in particular, that they share the same sequence increment).
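For scalar 64-bit states, the difference computation can be sketched after the bit-by-bit distance algorithm used in the PCG reference code. The sketch assumes both states evolve under the same multiplier and increment, i.e., consistent seeding:

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005

def pcg32_step(state: int, inc: int) -> int:
    # One LCG step (the state-update half of next_uint32)
    return (state * PCG32_MULT + inc) & MASK64

def pcg32_distance(state_a: int, state_b: int, inc: int) -> int:
    """Number of steps needed to advance state_a to state_b."""
    cur_mult, cur_plus = PCG32_MULT, inc
    cur_state, the_bit, distance = state_a, 1, 0
    while cur_state != state_b:
        # Fix one bit of the distance per iteration, lowest bit first;
        # cur_mult/cur_plus always describe a jump of 2**k steps.
        if (cur_state ^ state_b) & the_bit:
            cur_state = (cur_state * cur_mult + cur_plus) & MASK64
            distance |= the_bit
        the_bit <<= 1
        cur_plus = ((cur_mult + 1) * cur_plus) & MASK64
        cur_mult = (cur_mult * cur_mult) & MASK64
    return distance
```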

__isub__(self, arg: drjit.llvm.ad.Int64, /) drjit.llvm.ad.PCG32

In-place subtraction operator based on __sub__().

property inc

Sequence increment of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

property state

Sequence state of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

CUDA array namespace (drjit.cuda)

The CUDA backend is vectorized, hence types listed as scalar actually represent an array of scalars partaking in a parallel computation (analogously, 1D arrays are arrays of 1D arrays, etc.).

Scalars

class drjit.cuda.Bool

Bases: ArrayBase[Bool, _BoolCp, bool, bool, Bool, Bool, Bool]

class drjit.cuda.Float

Bases: ArrayBase[Float, _FloatCp, float, float, Float, Float, Bool]

class drjit.cuda.Float64

Bases: ArrayBase[Float64, _Float64Cp, float, float, Float64, Float64, Bool]

class drjit.cuda.UInt

Bases: ArrayBase[UInt, _UIntCp, int, int, UInt, UInt, Bool]

class drjit.cuda.UInt64

Bases: ArrayBase[UInt64, _UInt64Cp, int, int, UInt64, UInt64, Bool]

class drjit.cuda.Int

Bases: ArrayBase[Int, _IntCp, int, int, Int, Int, Bool]

class drjit.cuda.Int64

Bases: ArrayBase[Int64, _Int64Cp, int, int, Int64, Int64, Bool]

1D arrays

class drjit.cuda.Array0b

Bases: ArrayBase[Array0b, _Array0bCp, Bool, _BoolCp, Bool, Array0b, Array0b]

class drjit.cuda.Array1b

Bases: ArrayBase[Array1b, _Array1bCp, Bool, _BoolCp, Bool, Array1b, Array1b]

class drjit.cuda.Array2b

Bases: ArrayBase[Array2b, _Array2bCp, Bool, _BoolCp, Bool, Array2b, Array2b]

class drjit.cuda.Array3b

Bases: ArrayBase[Array3b, _Array3bCp, Bool, _BoolCp, Bool, Array3b, Array3b]

class drjit.cuda.Array4b

Bases: ArrayBase[Array4b, _Array4bCp, Bool, _BoolCp, Bool, Array4b, Array4b]

class drjit.cuda.ArrayXb

Bases: ArrayBase[ArrayXb, _ArrayXbCp, Bool, _BoolCp, Bool, ArrayXb, ArrayXb]

class drjit.cuda.Array0f16

Bases: ArrayBase[Array0f16, _Array0f16Cp, Float16, _Float16Cp, Float16, Array0f16, Array0b]

class drjit.cuda.Array1f16

Bases: ArrayBase[Array1f16, _Array1f16Cp, Float16, _Float16Cp, Float16, Array1f16, Array1b]

class drjit.cuda.Array2f16

Bases: ArrayBase[Array2f16, _Array2f16Cp, Float16, _Float16Cp, Float16, Array2f16, Array2b]

class drjit.cuda.Array3f16

Bases: ArrayBase[Array3f16, _Array3f16Cp, Float16, _Float16Cp, Float16, Array3f16, Array3b]

class drjit.cuda.Array4f16

Bases: ArrayBase[Array4f16, _Array4f16Cp, Float16, _Float16Cp, Float16, Array4f16, Array4b]

class drjit.cuda.ArrayXf16

Bases: ArrayBase[ArrayXf16, _ArrayXf16Cp, Float16, _Float16Cp, Float16, ArrayXf16, ArrayXb]

class drjit.cuda.Array0f

Bases: ArrayBase[Array0f, _Array0fCp, Float, _FloatCp, Float, Array0f, Array0b]

class drjit.cuda.Array1f

Bases: ArrayBase[Array1f, _Array1fCp, Float, _FloatCp, Float, Array1f, Array1b]

class drjit.cuda.Array2f

Bases: ArrayBase[Array2f, _Array2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.cuda.Array3f

Bases: ArrayBase[Array3f, _Array3fCp, Float, _FloatCp, Float, Array3f, Array3b]

class drjit.cuda.Array4f

Bases: ArrayBase[Array4f, _Array4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.cuda.ArrayXf

Bases: ArrayBase[ArrayXf, _ArrayXfCp, Float, _FloatCp, Float, ArrayXf, ArrayXb]

class drjit.cuda.Array0u

Bases: ArrayBase[Array0u, _Array0uCp, UInt, _UIntCp, UInt, Array0u, Array0b]

class drjit.cuda.Array1u

Bases: ArrayBase[Array1u, _Array1uCp, UInt, _UIntCp, UInt, Array1u, Array1b]

class drjit.cuda.Array2u

Bases: ArrayBase[Array2u, _Array2uCp, UInt, _UIntCp, UInt, Array2u, Array2b]

class drjit.cuda.Array3u

Bases: ArrayBase[Array3u, _Array3uCp, UInt, _UIntCp, UInt, Array3u, Array3b]

class drjit.cuda.Array4u

Bases: ArrayBase[Array4u, _Array4uCp, UInt, _UIntCp, UInt, Array4u, Array4b]

class drjit.cuda.ArrayXu

Bases: ArrayBase[ArrayXu, _ArrayXuCp, UInt, _UIntCp, UInt, ArrayXu, ArrayXb]

class drjit.cuda.Array0i

Bases: ArrayBase[Array0i, _Array0iCp, Int, _IntCp, Int, Array0i, Array0b]

class drjit.cuda.Array1i

Bases: ArrayBase[Array1i, _Array1iCp, Int, _IntCp, Int, Array1i, Array1b]

class drjit.cuda.Array2i

Bases: ArrayBase[Array2i, _Array2iCp, Int, _IntCp, Int, Array2i, Array2b]

class drjit.cuda.Array3i

Bases: ArrayBase[Array3i, _Array3iCp, Int, _IntCp, Int, Array3i, Array3b]

class drjit.cuda.Array4i

Bases: ArrayBase[Array4i, _Array4iCp, Int, _IntCp, Int, Array4i, Array4b]

class drjit.cuda.ArrayXi

Bases: ArrayBase[ArrayXi, _ArrayXiCp, Int, _IntCp, Int, ArrayXi, ArrayXb]

class drjit.cuda.Array0f64

Bases: ArrayBase[Array0f64, _Array0f64Cp, Float64, _Float64Cp, Float64, Array0f64, Array0b]

class drjit.cuda.Array1f64

Bases: ArrayBase[Array1f64, _Array1f64Cp, Float64, _Float64Cp, Float64, Array1f64, Array1b]

class drjit.cuda.Array2f64

Bases: ArrayBase[Array2f64, _Array2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.cuda.Array3f64

Bases: ArrayBase[Array3f64, _Array3f64Cp, Float64, _Float64Cp, Float64, Array3f64, Array3b]

class drjit.cuda.Array4f64

Bases: ArrayBase[Array4f64, _Array4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.cuda.ArrayXf64

Bases: ArrayBase[ArrayXf64, _ArrayXf64Cp, Float64, _Float64Cp, Float64, ArrayXf64, ArrayXb]

class drjit.cuda.Array0u64

Bases: ArrayBase[Array0u64, _Array0u64Cp, UInt64, _UInt64Cp, UInt64, Array0u64, Array0b]

class drjit.cuda.Array1u64

Bases: ArrayBase[Array1u64, _Array1u64Cp, UInt64, _UInt64Cp, UInt64, Array1u64, Array1b]

class drjit.cuda.Array2u64

Bases: ArrayBase[Array2u64, _Array2u64Cp, UInt64, _UInt64Cp, UInt64, Array2u64, Array2b]

class drjit.cuda.Array3u64

Bases: ArrayBase[Array3u64, _Array3u64Cp, UInt64, _UInt64Cp, UInt64, Array3u64, Array3b]

class drjit.cuda.Array4u64

Bases: ArrayBase[Array4u64, _Array4u64Cp, UInt64, _UInt64Cp, UInt64, Array4u64, Array4b]

class drjit.cuda.ArrayXu64

Bases: ArrayBase[ArrayXu64, _ArrayXu64Cp, UInt64, _UInt64Cp, UInt64, ArrayXu64, ArrayXb]

class drjit.cuda.Array0i64

Bases: ArrayBase[Array0i64, _Array0i64Cp, Int64, _Int64Cp, Int64, Array0i64, Array0b]

class drjit.cuda.Array1i64

Bases: ArrayBase[Array1i64, _Array1i64Cp, Int64, _Int64Cp, Int64, Array1i64, Array1b]

class drjit.cuda.Array2i64

Bases: ArrayBase[Array2i64, _Array2i64Cp, Int64, _Int64Cp, Int64, Array2i64, Array2b]

class drjit.cuda.Array3i64

Bases: ArrayBase[Array3i64, _Array3i64Cp, Int64, _Int64Cp, Int64, Array3i64, Array3b]

class drjit.cuda.Array4i64

Bases: ArrayBase[Array4i64, _Array4i64Cp, Int64, _Int64Cp, Int64, Array4i64, Array4b]

class drjit.cuda.ArrayXi64

Bases: ArrayBase[ArrayXi64, _ArrayXi64Cp, Int64, _Int64Cp, Int64, ArrayXi64, ArrayXb]

2D arrays

class drjit.cuda.Array22b

Bases: ArrayBase[Array22b, _Array22bCp, Array2b, _Array2bCp, Array2b, Array22b, Array22b]

class drjit.cuda.Array33b

Bases: ArrayBase[Array33b, _Array33bCp, Array3b, _Array3bCp, Array3b, Array33b, Array33b]

class drjit.cuda.Array44b

Bases: ArrayBase[Array44b, _Array44bCp, Array4b, _Array4bCp, Array4b, Array44b, Array44b]

class drjit.cuda.Array22f16

Bases: ArrayBase[Array22f16, _Array22f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.cuda.Array33f16

Bases: ArrayBase[Array33f16, _Array33f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.cuda.Array44f16

Bases: ArrayBase[Array44f16, _Array44f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.cuda.Array22f

Bases: ArrayBase[Array22f, _Array22fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.cuda.Array33f

Bases: ArrayBase[Array33f, _Array33fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.cuda.Array44f

Bases: ArrayBase[Array44f, _Array44fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.cuda.Array22f64

Bases: ArrayBase[Array22f64, _Array22f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.cuda.Array33f64

Bases: ArrayBase[Array33f64, _Array33f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.cuda.Array44f64

Bases: ArrayBase[Array44f64, _Array44f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Special (complex numbers, etc.)

class drjit.cuda.Complex2f

Bases: ArrayBase[Complex2f, _Complex2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.cuda.Complex2f64

Bases: ArrayBase[Complex2f64, _Complex2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.cuda.Quaternion4f16

Bases: ArrayBase[Quaternion4f16, _Quaternion4f16Cp, Float16, _Float16Cp, Float16, Array4f16, Array4b]

class drjit.cuda.Quaternion4f

Bases: ArrayBase[Quaternion4f, _Quaternion4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.cuda.Quaternion4f64

Bases: ArrayBase[Quaternion4f64, _Quaternion4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.cuda.Matrix2f16

Bases: ArrayBase[Matrix2f16, _Matrix2f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.cuda.Matrix3f16

Bases: ArrayBase[Matrix3f16, _Matrix3f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.cuda.Matrix4f16

Bases: ArrayBase[Matrix4f16, _Matrix4f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.cuda.Matrix2f

Bases: ArrayBase[Matrix2f, _Matrix2fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.cuda.Matrix3f

Bases: ArrayBase[Matrix3f, _Matrix3fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.cuda.Matrix4f

Bases: ArrayBase[Matrix4f, _Matrix4fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.cuda.Matrix2f64

Bases: ArrayBase[Matrix2f64, _Matrix2f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.cuda.Matrix3f64

Bases: ArrayBase[Matrix3f64, _Matrix3f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.cuda.Matrix4f64

Bases: ArrayBase[Matrix4f64, _Matrix4f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Tensors

class drjit.cuda.TensorXb

Bases: ArrayBase[TensorXb, _TensorXbCp, TensorXb, _TensorXbCp, TensorXb, Bool, TensorXb]

class drjit.cuda.TensorXf16

Bases: ArrayBase[TensorXf16, _TensorXf16Cp, TensorXf16, _TensorXf16Cp, TensorXf16, Float16, TensorXb]

class drjit.cuda.TensorXf

Bases: ArrayBase[TensorXf, _TensorXfCp, TensorXf, _TensorXfCp, TensorXf, Float, TensorXb]

class drjit.cuda.TensorXu

Bases: ArrayBase[TensorXu, _TensorXuCp, TensorXu, _TensorXuCp, TensorXu, UInt, TensorXb]

class drjit.cuda.TensorXi

Bases: ArrayBase[TensorXi, _TensorXiCp, TensorXi, _TensorXiCp, TensorXi, Int, TensorXb]

class drjit.cuda.TensorXf64

Bases: ArrayBase[TensorXf64, _TensorXf64Cp, TensorXf64, _TensorXf64Cp, TensorXf64, Float64, TensorXb]

class drjit.cuda.TensorXu64

Bases: ArrayBase[TensorXu64, _TensorXu64Cp, TensorXu64, _TensorXu64Cp, TensorXu64, UInt64, TensorXb]

class drjit.cuda.TensorXi64

Bases: ArrayBase[TensorXi64, _TensorXi64Cp, TensorXi64, _TensorXi64Cp, TensorXi64, Int64, TensorXb]

Random number generators

class drjit.cuda.PCG32

Implementation of PCG32, a member of the PCG family of random number generators proposed by Melissa O’Neill.

PCG combines a Linear Congruential Generator (LCG) with a permutation function, yielding high-quality pseudorandom variates while requiring very little computation and internal state (only 128 bits in the case of PCG32).

More detail on the PCG family of pseudorandom number generators can be found on the PCG project website.

The PCG32 class is implemented as a PyTree, which means that it is compatible with symbolic function calls, loops, etc.

__init__(self, size: int = 1, initstate: drjit.cuda.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.cuda.UInt64 = UInt64(0xda3e39cb94b95bdb)) None
__init__(self, arg: drjit.cuda.PCG32) None

Overloaded function.

  1. __init__(self, size: int = 1, initstate: drjit.cuda.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.cuda.UInt64 = UInt64(0xda3e39cb94b95bdb)) -> None

Initialize a random number generator that generates size variates in parallel.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their default values are based on the original implementation.

The implementation of this routine internally calls :py:func:`seed`, with one small twist. When multiple random numbers are generated in parallel, the constructor adds an offset equal to drjit.arange(UInt64, size) to both initstate and initseq to decorrelate the generated sequences.

  2. __init__(self, arg: drjit.cuda.PCG32) -> None

Copy-construct a new PCG32 instance from an existing instance.

seed(self, initstate: drjit.cuda.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.cuda.UInt64 = UInt64(0xda3e39cb94b95bdb)) None

Seed the random number generator with the given initial state and sequence ID.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their values are the defaults from the original implementation.

next_uint32(self) drjit.cuda.UInt
next_uint32(self, arg: drjit.cuda.Bool, /) drjit.cuda.UInt

Generate a uniformly distributed unsigned 32-bit random number.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_uint64(self) drjit.cuda.UInt64
next_uint64(self, arg: drjit.cuda.Bool, /) drjit.cuda.UInt64

Generate a uniformly distributed unsigned 64-bit random number.

Internally, the function calls next_uint32() twice.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float32(self) drjit.cuda.Float
next_float32(self, arg: drjit.cuda.Bool, /) drjit.cuda.Float

Generate a uniformly distributed single precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float64(self) drjit.cuda.Float64
next_float64(self, arg: drjit.cuda.Bool, /) drjit.cuda.Float64

Generate a uniformly distributed double precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_uint32_bounded(self, bound: int, mask: drjit.cuda.Bool = Bool(True)) drjit.cuda.UInt

Generate a uniformly distributed unsigned 32-bit integer on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.
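In the PCG reference code, the debiasing scheme alluded to above is a rejection loop. A scalar sketch, with next_u32 standing in for any source of 32-bit variates:

```python
def next_uint32_bounded(next_u32, bound: int) -> int:
    """Unbiased variate in [0, bound) from a 32-bit source next_u32."""
    # A plain modulo would over-represent the lowest 2**32 % bound
    # residues; reject raw draws below that threshold so every residue
    # class is selected with equal probability.
    threshold = (1 << 32) % bound
    while True:
        r = next_u32()
        if r >= threshold:
            return r % bound
```

Since threshold < bound, fewer than half of all draws are ever rejected for any bound, which is why the loop typically finishes after one or two iterations.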

next_uint64_bounded(self, bound: int, mask: drjit.cuda.Bool = Bool(True)) drjit.cuda.UInt64

Generate a uniformly distributed unsigned 64-bit integer on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.

__add__(self, arg: drjit.cuda.Int64, /) drjit.cuda.PCG32

Advance the pseudorandom number generator.

This function implements a multi-step advance function that is equivalent to (but more efficient than) calling the random number generator arg times in sequence.

This is useful to advance a newly constructed PRNG to a certain known state.
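The multi-step advance corresponds to the standard jump-ahead technique for LCGs (Brown, "Random Number Generation with Arbitrary Strides"), which exponentiates the affine state update by repeated squaring. A scalar sketch:

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005

def pcg32_advance(state: int, inc: int, delta: int) -> int:
    """Jump the LCG state ahead by delta steps in O(log delta) time."""
    acc_mult, acc_plus = 1, 0           # accumulated affine map: identity
    cur_mult, cur_plus = PCG32_MULT, inc  # affine map for 2**k steps
    delta &= MASK64  # negative deltas wrap around, stepping backwards
    while delta:
        if delta & 1:
            # Compose the current power-of-two jump into the accumulator
            acc_mult = (acc_mult * cur_mult) & MASK64
            acc_plus = (acc_plus * cur_mult + cur_plus) & MASK64
        # Square the jump: f^(2**k) -> f^(2**(k+1))
        cur_plus = ((cur_mult + 1) * cur_plus) & MASK64
        cur_mult = (cur_mult * cur_mult) & MASK64
        delta >>= 1
    return (acc_mult * state + acc_plus) & MASK64
```

Since the generator's period is 2^64, advancing by -n (i.e., by 2^64 - n) rewinds the state by n steps, which is how the subtraction operators can be realized.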

__iadd__(self, arg: drjit.cuda.Int64, /) drjit.cuda.PCG32

In-place addition operator based on __add__().

__sub__(self, arg: drjit.cuda.Int64, /) drjit.cuda.PCG32
__sub__(self, arg: drjit.cuda.PCG32, /) drjit.cuda.Int64

Rewind the pseudorandom number generator.

This function implements the opposite of __add__ to step a PRNG backwards. It can also compute the difference (as counted by the number of internal next_uint32 steps) between two PCG32 instances, assuming that the two instances were consistently seeded (in particular, that they share the same sequence increment).

__isub__(self, arg: drjit.cuda.Int64, /) drjit.cuda.PCG32

In-place subtraction operator based on __sub__().

property inc

Sequence increment of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

property state

Sequence state of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

CUDA array namespace with automatic differentiation (drjit.cuda.ad)

The CUDA AD backend is vectorized, hence types listed as scalar actually represent an array of scalars partaking in a parallel computation (analogously, 1D arrays are arrays of 1D arrays, etc.).

Scalars

class drjit.cuda.ad.Bool

Bases: ArrayBase[Bool, _BoolCp, bool, bool, Bool, Bool, Bool]

class drjit.cuda.ad.Float

Bases: ArrayBase[Float, _FloatCp, float, float, Float, Float, Bool]

class drjit.cuda.ad.Float64

Bases: ArrayBase[Float64, _Float64Cp, float, float, Float64, Float64, Bool]

class drjit.cuda.ad.UInt

Bases: ArrayBase[UInt, _UIntCp, int, int, UInt, UInt, Bool]

class drjit.cuda.ad.UInt64

Bases: ArrayBase[UInt64, _UInt64Cp, int, int, UInt64, UInt64, Bool]

class drjit.cuda.ad.Int

Bases: ArrayBase[Int, _IntCp, int, int, Int, Int, Bool]

class drjit.cuda.ad.Int64

Bases: ArrayBase[Int64, _Int64Cp, int, int, Int64, Int64, Bool]

1D arrays

class drjit.cuda.ad.Array0b

Bases: ArrayBase[Array0b, _Array0bCp, Bool, _BoolCp, Bool, Array0b, Array0b]

class drjit.cuda.ad.Array1b

Bases: ArrayBase[Array1b, _Array1bCp, Bool, _BoolCp, Bool, Array1b, Array1b]

class drjit.cuda.ad.Array2b

Bases: ArrayBase[Array2b, _Array2bCp, Bool, _BoolCp, Bool, Array2b, Array2b]

class drjit.cuda.ad.Array3b

Bases: ArrayBase[Array3b, _Array3bCp, Bool, _BoolCp, Bool, Array3b, Array3b]

class drjit.cuda.ad.Array4b

Bases: ArrayBase[Array4b, _Array4bCp, Bool, _BoolCp, Bool, Array4b, Array4b]

class drjit.cuda.ad.ArrayXb

Bases: ArrayBase[ArrayXb, _ArrayXbCp, Bool, _BoolCp, Bool, ArrayXb, ArrayXb]

class drjit.cuda.ad.Array0f16

Bases: ArrayBase[Array0f16, _Array0f16Cp, Float16, _Float16Cp, Float16, Array0f16, Array0b]

class drjit.cuda.ad.Array1f16

Bases: ArrayBase[Array1f16, _Array1f16Cp, Float16, _Float16Cp, Float16, Array1f16, Array1b]

class drjit.cuda.ad.Array2f16

Bases: ArrayBase[Array2f16, _Array2f16Cp, Float16, _Float16Cp, Float16, Array2f16, Array2b]

class drjit.cuda.ad.Array3f16

Bases: ArrayBase[Array3f16, _Array3f16Cp, Float16, _Float16Cp, Float16, Array3f16, Array3b]

class drjit.cuda.ad.Array4f16

Bases: ArrayBase[Array4f16, _Array4f16Cp, Float16, _Float16Cp, Float16, Array4f16, Array4b]

class drjit.cuda.ad.ArrayXf16

Bases: ArrayBase[ArrayXf16, _ArrayXf16Cp, Float16, _Float16Cp, Float16, ArrayXf16, ArrayXb]

class drjit.cuda.ad.Array0f

Bases: ArrayBase[Array0f, _Array0fCp, Float, _FloatCp, Float, Array0f, Array0b]

class drjit.cuda.ad.Array1f

Bases: ArrayBase[Array1f, _Array1fCp, Float, _FloatCp, Float, Array1f, Array1b]

class drjit.cuda.ad.Array2f

Bases: ArrayBase[Array2f, _Array2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.cuda.ad.Array3f

Bases: ArrayBase[Array3f, _Array3fCp, Float, _FloatCp, Float, Array3f, Array3b]

class drjit.cuda.ad.Array4f

Bases: ArrayBase[Array4f, _Array4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.cuda.ad.ArrayXf

Bases: ArrayBase[ArrayXf, _ArrayXfCp, Float, _FloatCp, Float, ArrayXf, ArrayXb]

class drjit.cuda.ad.Array0u

Bases: ArrayBase[Array0u, _Array0uCp, UInt, _UIntCp, UInt, Array0u, Array0b]

class drjit.cuda.ad.Array1u

Bases: ArrayBase[Array1u, _Array1uCp, UInt, _UIntCp, UInt, Array1u, Array1b]

class drjit.cuda.ad.Array2u

Bases: ArrayBase[Array2u, _Array2uCp, UInt, _UIntCp, UInt, Array2u, Array2b]

class drjit.cuda.ad.Array3u

Bases: ArrayBase[Array3u, _Array3uCp, UInt, _UIntCp, UInt, Array3u, Array3b]

class drjit.cuda.ad.Array4u

Bases: ArrayBase[Array4u, _Array4uCp, UInt, _UIntCp, UInt, Array4u, Array4b]

class drjit.cuda.ad.ArrayXu

Bases: ArrayBase[ArrayXu, _ArrayXuCp, UInt, _UIntCp, UInt, ArrayXu, ArrayXb]

class drjit.cuda.ad.Array0i

Bases: ArrayBase[Array0i, _Array0iCp, Int, _IntCp, Int, Array0i, Array0b]

class drjit.cuda.ad.Array1i

Bases: ArrayBase[Array1i, _Array1iCp, Int, _IntCp, Int, Array1i, Array1b]

class drjit.cuda.ad.Array2i

Bases: ArrayBase[Array2i, _Array2iCp, Int, _IntCp, Int, Array2i, Array2b]

class drjit.cuda.ad.Array3i

Bases: ArrayBase[Array3i, _Array3iCp, Int, _IntCp, Int, Array3i, Array3b]

class drjit.cuda.ad.Array4i

Bases: ArrayBase[Array4i, _Array4iCp, Int, _IntCp, Int, Array4i, Array4b]

class drjit.cuda.ad.ArrayXi

Bases: ArrayBase[ArrayXi, _ArrayXiCp, Int, _IntCp, Int, ArrayXi, ArrayXb]

class drjit.cuda.ad.Array0f64

Bases: ArrayBase[Array0f64, _Array0f64Cp, Float64, _Float64Cp, Float64, Array0f64, Array0b]

class drjit.cuda.ad.Array1f64

Bases: ArrayBase[Array1f64, _Array1f64Cp, Float64, _Float64Cp, Float64, Array1f64, Array1b]

class drjit.cuda.ad.Array2f64

Bases: ArrayBase[Array2f64, _Array2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.cuda.ad.Array3f64

Bases: ArrayBase[Array3f64, _Array3f64Cp, Float64, _Float64Cp, Float64, Array3f64, Array3b]

class drjit.cuda.ad.Array4f64

Bases: ArrayBase[Array4f64, _Array4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.cuda.ad.ArrayXf64

Bases: ArrayBase[ArrayXf64, _ArrayXf64Cp, Float64, _Float64Cp, Float64, ArrayXf64, ArrayXb]

class drjit.cuda.ad.Array0u64

Bases: ArrayBase[Array0u64, _Array0u64Cp, UInt64, _UInt64Cp, UInt64, Array0u64, Array0b]

class drjit.cuda.ad.Array1u64

Bases: ArrayBase[Array1u64, _Array1u64Cp, UInt64, _UInt64Cp, UInt64, Array1u64, Array1b]

class drjit.cuda.ad.Array2u64

Bases: ArrayBase[Array2u64, _Array2u64Cp, UInt64, _UInt64Cp, UInt64, Array2u64, Array2b]

class drjit.cuda.ad.Array3u64

Bases: ArrayBase[Array3u64, _Array3u64Cp, UInt64, _UInt64Cp, UInt64, Array3u64, Array3b]

class drjit.cuda.ad.Array4u64

Bases: ArrayBase[Array4u64, _Array4u64Cp, UInt64, _UInt64Cp, UInt64, Array4u64, Array4b]

class drjit.cuda.ad.ArrayXu64

Bases: ArrayBase[ArrayXu64, _ArrayXu64Cp, UInt64, _UInt64Cp, UInt64, ArrayXu64, ArrayXb]

class drjit.cuda.ad.Array0i64

Bases: ArrayBase[Array0i64, _Array0i64Cp, Int64, _Int64Cp, Int64, Array0i64, Array0b]

class drjit.cuda.ad.Array1i64

Bases: ArrayBase[Array1i64, _Array1i64Cp, Int64, _Int64Cp, Int64, Array1i64, Array1b]

class drjit.cuda.ad.Array2i64

Bases: ArrayBase[Array2i64, _Array2i64Cp, Int64, _Int64Cp, Int64, Array2i64, Array2b]

class drjit.cuda.ad.Array3i64

Bases: ArrayBase[Array3i64, _Array3i64Cp, Int64, _Int64Cp, Int64, Array3i64, Array3b]

class drjit.cuda.ad.Array4i64

Bases: ArrayBase[Array4i64, _Array4i64Cp, Int64, _Int64Cp, Int64, Array4i64, Array4b]

class drjit.cuda.ad.ArrayXi64

Bases: ArrayBase[ArrayXi64, _ArrayXi64Cp, Int64, _Int64Cp, Int64, ArrayXi64, ArrayXb]

2D arrays

class drjit.cuda.ad.Array22b

Bases: ArrayBase[Array22b, _Array22bCp, Array2b, _Array2bCp, Array2b, Array22b, Array22b]

class drjit.cuda.ad.Array33b

Bases: ArrayBase[Array33b, _Array33bCp, Array3b, _Array3bCp, Array3b, Array33b, Array33b]

class drjit.cuda.ad.Array44b

Bases: ArrayBase[Array44b, _Array44bCp, Array4b, _Array4bCp, Array4b, Array44b, Array44b]

class drjit.cuda.ad.Array22f16

Bases: ArrayBase[Array22f16, _Array22f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.cuda.ad.Array33f16

Bases: ArrayBase[Array33f16, _Array33f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.cuda.ad.Array44f16

Bases: ArrayBase[Array44f16, _Array44f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.cuda.ad.Array22f

Bases: ArrayBase[Array22f, _Array22fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.cuda.ad.Array33f

Bases: ArrayBase[Array33f, _Array33fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.cuda.ad.Array44f

Bases: ArrayBase[Array44f, _Array44fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.cuda.ad.Array22f64

Bases: ArrayBase[Array22f64, _Array22f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.cuda.ad.Array33f64

Bases: ArrayBase[Array33f64, _Array33f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.cuda.ad.Array44f64

Bases: ArrayBase[Array44f64, _Array44f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Special (complex numbers, etc.)

class drjit.cuda.ad.Complex2f

Bases: ArrayBase[Complex2f, _Complex2fCp, Float, _FloatCp, Float, Array2f, Array2b]

class drjit.cuda.ad.Complex2f64

Bases: ArrayBase[Complex2f64, _Complex2f64Cp, Float64, _Float64Cp, Float64, Array2f64, Array2b]

class drjit.cuda.ad.Quaternion4f16

Bases: ArrayBase[Quaternion4f16, _Quaternion4f16Cp, Float16, _Float16Cp, Float16, Array4f16, Array4b]

class drjit.cuda.ad.Quaternion4f

Bases: ArrayBase[Quaternion4f, _Quaternion4fCp, Float, _FloatCp, Float, Array4f, Array4b]

class drjit.cuda.ad.Quaternion4f64

Bases: ArrayBase[Quaternion4f64, _Quaternion4f64Cp, Float64, _Float64Cp, Float64, Array4f64, Array4b]

class drjit.cuda.ad.Matrix2f16

Bases: ArrayBase[Matrix2f16, _Matrix2f16Cp, Array2f16, _Array2f16Cp, Array2f16, Array22f16, Array22b]

class drjit.cuda.ad.Matrix3f16

Bases: ArrayBase[Matrix3f16, _Matrix3f16Cp, Array3f16, _Array3f16Cp, Array3f16, Array33f16, Array33b]

class drjit.cuda.ad.Matrix4f16

Bases: ArrayBase[Matrix4f16, _Matrix4f16Cp, Array4f16, _Array4f16Cp, Array4f16, Array44f16, Array44b]

class drjit.cuda.ad.Matrix2f

Bases: ArrayBase[Matrix2f, _Matrix2fCp, Array2f, _Array2fCp, Array2f, Array22f, Array22b]

class drjit.cuda.ad.Matrix3f

Bases: ArrayBase[Matrix3f, _Matrix3fCp, Array3f, _Array3fCp, Array3f, Array33f, Array33b]

class drjit.cuda.ad.Matrix4f

Bases: ArrayBase[Matrix4f, _Matrix4fCp, Array4f, _Array4fCp, Array4f, Array44f, Array44b]

class drjit.cuda.ad.Matrix2f64

Bases: ArrayBase[Matrix2f64, _Matrix2f64Cp, Array2f64, _Array2f64Cp, Array2f64, Array22f64, Array22b]

class drjit.cuda.ad.Matrix3f64

Bases: ArrayBase[Matrix3f64, _Matrix3f64Cp, Array3f64, _Array3f64Cp, Array3f64, Array33f64, Array33b]

class drjit.cuda.ad.Matrix4f64

Bases: ArrayBase[Matrix4f64, _Matrix4f64Cp, Array4f64, _Array4f64Cp, Array4f64, Array44f64, Array44b]

Tensors

class drjit.cuda.ad.TensorXb

Bases: ArrayBase[TensorXb, _TensorXbCp, TensorXb, _TensorXbCp, TensorXb, Bool, TensorXb]

class drjit.cuda.ad.TensorXf16

Bases: ArrayBase[TensorXf16, _TensorXf16Cp, TensorXf16, _TensorXf16Cp, TensorXf16, Float16, TensorXb]

class drjit.cuda.ad.TensorXf

Bases: ArrayBase[TensorXf, _TensorXfCp, TensorXf, _TensorXfCp, TensorXf, Float, TensorXb]

class drjit.cuda.ad.TensorXu

Bases: ArrayBase[TensorXu, _TensorXuCp, TensorXu, _TensorXuCp, TensorXu, UInt, TensorXb]

class drjit.cuda.ad.TensorXi

Bases: ArrayBase[TensorXi, _TensorXiCp, TensorXi, _TensorXiCp, TensorXi, Int, TensorXb]

class drjit.cuda.ad.TensorXf64

Bases: ArrayBase[TensorXf64, _TensorXf64Cp, TensorXf64, _TensorXf64Cp, TensorXf64, Float64, TensorXb]

class drjit.cuda.ad.TensorXu64

Bases: ArrayBase[TensorXu64, _TensorXu64Cp, TensorXu64, _TensorXu64Cp, TensorXu64, UInt64, TensorXb]

class drjit.cuda.ad.TensorXi64

Bases: ArrayBase[TensorXi64, _TensorXi64Cp, TensorXi64, _TensorXi64Cp, TensorXi64, Int64, TensorXb]

Random number generators

class drjit.cuda.ad.PCG32

Implementation of PCG32, a member of the PCG family of random number generators proposed by Melissa O’Neill.

PCG combines a Linear Congruential Generator (LCG) with a permutation function, yielding high-quality pseudorandom variates while requiring very little computation and internal state (only 128 bits in the case of PCG32).

More detail on the PCG family of pseudorandom number generators can be found on the PCG project website.

The PCG32 class is implemented as a PyTree, which means that it is compatible with symbolic function calls, loops, etc.

__init__(self, size: int = 1, initstate: drjit.cuda.ad.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.cuda.ad.UInt64 = UInt64(0xda3e39cb94b95bdb)) None
__init__(self, arg: drjit.cuda.ad.PCG32) None

Overloaded function.

  1. __init__(self, size: int = 1, initstate: drjit.cuda.ad.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.cuda.ad.UInt64 = UInt64(0xda3e39cb94b95bdb)) -> None

Initialize a random number generator that generates size variates in parallel.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their default values are based on the original implementation.

The implementation of this routine internally calls :py:func:`seed`, with one small twist. When multiple random numbers are generated in parallel, the constructor adds an offset equal to drjit.arange(UInt64, size) to both initstate and initseq to decorrelate the generated sequences.

  2. __init__(self, arg: drjit.cuda.ad.PCG32) -> None

Copy-construct a new PCG32 instance from an existing instance.

seed(self, initstate: drjit.cuda.ad.UInt64 = UInt64(0x853c49e6748fea9b), initseq: drjit.cuda.ad.UInt64 = UInt64(0xda3e39cb94b95bdb)) None

Seed the random number generator with the given initial state and sequence ID.

The initstate and initseq inputs determine the initial state and increment of the linear congruential generator. Their values are the defaults from the original implementation.

next_uint32(self) → drjit.cuda.ad.UInt
next_uint32(self, arg: drjit.cuda.ad.Bool, /) → drjit.cuda.ad.UInt

Generate a uniformly distributed unsigned 32-bit random number.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.
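The underlying step (an LCG advance followed by PCG32's "XSH-RR" output permutation) can be sketched in pure Python. This is a reference of the algorithm itself, not the Dr.Jit API; the names are illustrative.

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005  # LCG multiplier from the PCG reference code

def pcg32_next_uint32(state: int, inc: int) -> tuple[int, int]:
    """Advance the LCG once and permute the *old* state into a 32-bit output."""
    old = state
    state = (old * PCG32_MULT + inc) & MASK64
    xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF  # "XSH": xorshift of high bits
    rot = old >> 59                                        # top 5 bits select the rotation
    out = ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF  # "RR"
    return out, state
```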

next_uint64(self) → drjit.cuda.ad.UInt64
next_uint64(self, arg: drjit.cuda.ad.Bool, /) → drjit.cuda.ad.UInt64

Generate a uniformly distributed unsigned 64-bit random number.

Internally, the function calls next_uint32() twice.

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_float32(self) → drjit.cuda.ad.Float
next_float32(self, arg: drjit.cuda.ad.Bool, /) → drjit.cuda.ad.Float

Generate a uniformly distributed single precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.
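One common way of mapping a 32-bit output onto \([0, 1)\) is to place its top 23 bits in the mantissa of a single-precision float in \([1, 2)\) and subtract 1. The sketch below illustrates that construction in pure Python; the exact conversion used internally may differ in detail.

```python
import struct

def uint32_to_unit_float(u: int) -> float:
    """Map a 32-bit unsigned integer to a single-precision float in [0, 1)."""
    bits = (u >> 9) | 0x3F800000                       # top 23 bits -> mantissa of [1, 2)
    f = struct.unpack('<f', struct.pack('<I', bits))[0]
    return f - 1.0
```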

next_float64(self) → drjit.cuda.ad.Float64
next_float64(self, arg: drjit.cuda.ad.Bool, /) → drjit.cuda.ad.Float64

Generate a uniformly distributed double precision floating point number on the interval \([0, 1)\).

Two overloads of this function exist: the masked variant does not advance the PRNG state of entries i where mask[i] == False.

next_uint32_bounded(self, bound: int, mask: drjit.cuda.ad.Bool = Bool(True)) → drjit.cuda.ad.UInt

Generate a uniformly distributed unsigned 32-bit integer on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.

next_uint64_bounded(self, bound: int, mask: drjit.cuda.ad.Bool = Bool(True)) → drjit.cuda.ad.UInt64

Generate a uniformly distributed unsigned 64-bit integer on the interval \([0, \texttt{bound})\).

To ensure an unbiased result, the implementation relies on an iterative scheme that typically finishes after 1-2 iterations.
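The unbiased iterative scheme referred to above is the usual rejection method: raw variates below \(2^{32} \bmod \texttt{bound}\) are discarded so that every residue class is equally likely. A self-contained pure-Python sketch (again illustrating the algorithm, not the Dr.Jit API), reusing a minimal PCG32 step:

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005

def _step(state: int, inc: int) -> tuple[int, int]:
    """Minimal PCG32 step: returns (32-bit output, new state)."""
    old = state
    state = (old * PCG32_MULT + inc) & MASK64
    x = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
    r = old >> 59
    return ((x >> r) | (x << ((-r) & 31))) & 0xFFFFFFFF, state

def next_uint32_bounded(state: int, inc: int, bound: int) -> tuple[int, int]:
    """Sample uniformly from [0, bound) without modulo bias."""
    threshold = (1 << 32) % bound   # outputs below this would make `% bound` biased
    while True:
        out, state = _step(state, inc)
        if out >= threshold:        # rejection loop; usually exits after 1-2 draws
            return out % bound, state
```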

__add__(self, arg: drjit.cuda.ad.Int64, /) → drjit.cuda.ad.PCG32

Advance the pseudorandom number generator.

This function implements a multi-step advance function that is equivalent to (but more efficient than) calling the random number generator arg times in sequence.

This is useful to advance a newly constructed PRNG to a certain known state.
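The efficiency comes from the fact that the state transition is affine (state → state·a + c mod 2⁶⁴): n applications compose into another affine map that can be computed in O(log n) squaring steps, following Brown's "Random Number Generation with Arbitrary Strides". A pure-Python sketch of the multi-step advance (illustrative, not the Dr.Jit API):

```python
MASK64 = (1 << 64) - 1
PCG32_MULT = 6364136223846793005

def pcg32_advance(state: int, inc: int, delta: int) -> int:
    """Advance the LCG state by `delta` steps in O(log delta) time."""
    acc_mult, acc_plus = 1, 0
    cur_mult, cur_plus = PCG32_MULT, inc
    delta &= MASK64                   # negative deltas wrap around, rewinding the PRNG
    while delta > 0:
        if delta & 1:                 # fold the current power of the map into the result
            acc_mult = (acc_mult * cur_mult) & MASK64
            acc_plus = (acc_plus * cur_mult + cur_plus) & MASK64
        cur_plus = ((cur_mult + 1) * cur_plus) & MASK64   # square the affine map
        cur_mult = (cur_mult * cur_mult) & MASK64
        delta >>= 1
    return (acc_mult * state + acc_plus) & MASK64
```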

__iadd__(self, arg: drjit.cuda.ad.Int64, /) → drjit.cuda.ad.PCG32

In-place addition operator based on __add__().

__sub__(self, arg: drjit.cuda.ad.Int64, /) → drjit.cuda.ad.PCG32
__sub__(self, arg: drjit.cuda.ad.PCG32, /) → drjit.cuda.ad.Int64

Rewind the pseudorandom number generator.

This function implements the opposite of __add__ to step a PRNG backwards. It can also compute the difference (as counted by the number of internal next_uint32 steps) between two PCG32 instances. This assumes that the two instances were consistently seeded.

__isub__(self, arg: drjit.cuda.ad.Int64, /) → drjit.cuda.ad.PCG32

In-place subtraction operator based on __sub__().

property inc

Sequence increment of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.

property state

Sequence state of the PCG32 PRNG (an unsigned 64-bit integer or integer array). Please see the original paper for details on this field.