C++ API#
Introduction#
aithree is a C++ library for creating custom implementations of common deep learning operations. The library provides a simple Tensor class and function declarations to be completed by the user.
To install with customized implementations, follow the instructions here.
Classes#
-
class Tensor#
Used for built-in and custom implementations of common deep learning operations.
Represents a multi-dimensional tensor which can be initialized from raw data or constructed from existing data with optional ownership semantics.
- Template Parameters:
dtype – The data type of the tensor elements
Public Functions
-
inline Tensor(const std::vector<uint> &s, ScalarType scalar_type)#
Allocates data to construct a Tensor with the given shape.
- Parameters:
s – Shape of the tensor
scalar_type – Type of the data to store
-
inline Tensor(const intptr_t data_address, const std::vector<uint> &s, ScalarType scalar_type)#
Allocates and copies data to construct a Tensor with the given shape and data.
- Parameters:
data_address – Address of the data to be copied to the Tensor
s – Shape of the tensor
scalar_type – Type of the data at data_address
-
inline Tensor(void *data, const std::vector<uint> &s, bool own, ScalarType scalar_type)#
Creates a Tensor from the given data, optionally taking ownership.
- Parameters:
data – Address of the data
s – Shape of the tensor
own – Whether the Tensor owns this data
scalar_type – Type of the data
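A minimal construction sketch under stated assumptions: the header name and the ScalarType::Float32 enumerator are placeholders not documented on this page, while the constructor signatures follow the declarations above.

#include <cstdint>
#include <vector>
// #include "tensor.h"  // hypothetical header name for the Tensor class

void construct_examples() {
    std::vector<uint> shape{1, 3, 4, 4};

    // Allocate storage for the given shape.
    Tensor a(shape, ScalarType::Float32);  // enumerator name is an assumption

    // Copy existing data into newly allocated storage.
    std::vector<float> values(1 * 3 * 4 * 4, 0.5f);
    Tensor b(reinterpret_cast<intptr_t>(values.data()), shape, ScalarType::Float32);

    // Wrap an existing buffer; own == false leaves lifetime with the caller.
    Tensor c(values.data(), shape, /*own=*/false, ScalarType::Float32);
}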
-
template<typename target_type>
inline Tensor to_type(ScalarType to_type) const#
Creates a new tensor from the existing one, with its data converted to the specified type.
- Template Parameters:
target_type – The type to which the data should be converted.
- Returns:
Tensor containing the converted data
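A usage sketch, given an existing Tensor t holding float data; the call shape follows the declaration above, but the ScalarType enumerator name is an assumption.

// Convert to double precision; the template argument and the ScalarType value
// both describe the target type.
Tensor as_double = t.to_type<double>(ScalarType::Float64);  // enumerator name assumed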
-
inline py::buffer_info buffer()#
Implementation of Python Buffer Protocol for interoperability with Python.
- Returns:
pybind11::buffer_info object containing the tensor’s data, shape, and strides.
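The library presumably registers its own Python bindings; the sketch below only illustrates how buffer() plugs into pybind11's buffer protocol so that NumPy can view the tensor's memory without copying. The module name is illustrative.

#include <pybind11/pybind11.h>
namespace py = pybind11;

PYBIND11_MODULE(example_module, m) {
    // Exposing the buffer lets Python code call numpy.asarray(tensor) zero-copy.
    py::class_<Tensor>(m, "Tensor", py::buffer_protocol())
        .def_buffer([](Tensor &t) -> py::buffer_info { return t.buffer(); });
}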
-
inline bool batched(const int data_dim) const#
- Parameters:
data_dim – The number of dimensions required per sample; see sample_dims
- Returns:
true if the tensor has a batch dimension, false otherwise
-
inline uint batch_size(const uint input_dims) const#
- Parameters:
input_dims – The number of dimensions required per sample; see sample_dims
- Returns:
The number of samples in a batched Tensor
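A sketch of distinguishing a single sample from a batch; the constant 3 stands in for the per-sample dimension count that sample_dims provides for image-shaped inputs (the exact constant names are not listed on this page).

void describe(const Tensor &t) {
    const int dims_per_sample = 3;  // e.g., channels x height x width for a 2D op
    if (t.batched(dims_per_sample)) {
        uint n = t.batch_size(dims_per_sample);
        // iterate over n samples ...
    } else {
        // single unbatched sample ...
    }
}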
Public Members
-
void *data#
Pointer to the tensor data.
-
std::vector<uint> shape#
Shape of the tensor.
Public Static Functions
-
static inline Tensor form_tensor(const intptr_t data_address, const std::vector<uint> &s, ScalarType scalar_type)#
Wraps existing data with a Tensor without allocating or copying.
- Parameters:
data_address – Address of the data
s – Shape of the tensor
scalar_type – Type of the data at data_address
- Returns:
A Tensor object wrapping the provided data
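A sketch of wrapping caller-owned memory; unlike the copying constructor above, no allocation happens, so the buffer must outlive the returned Tensor. The ScalarType enumerator name is an assumption.

std::vector<float> buf(16, 1.0f);
Tensor view = Tensor::form_tensor(reinterpret_cast<intptr_t>(buf.data()),
                                  {4, 4}, ScalarType::Float32);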
-
static inline std::optional<Tensor> from_optional(const std::optional<intptr_t> &data_address, const std::vector<uint> &s, ScalarType scalar_type, bool own = true)#
Creates a Tensor object depending on an optional data address.
If the data address is present, a Tensor is created with ownership determined by the own parameter. If the data address is not present, std::nullopt is returned.
- Parameters:
data_address – Optional address of the raw data.
s – Shape of the tensor.
scalar_type – Type of the data at data_address
own – Whether to take ownership of the data.
- Returns:
Tensor object if data_address has a value; otherwise, std::nullopt.
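A sketch of the typical use, an optional bias address supplied by a caller: when no address is present the result is simply std::nullopt. The ScalarType enumerator name is an assumption.

#include <optional>

std::optional<intptr_t> bias_addr = std::nullopt;  // or an address from the caller
std::optional<Tensor> bias =
    Tensor::from_optional(bias_addr, {64}, ScalarType::Float32, /*own=*/false);
if (bias) {
    // bias->data and bias->shape are usable here
}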
-
class Context#
Initializes and caches resources used by hardware acceleration backends.
Public Static Functions
-
static inline void *mps_graph_device()#
Returns the cached MPSGraphDevice, initializing it if necessary.
-
static inline void *mps_graph()#
Returns the cached MPSGraph, initializing it if necessary.
-
static inline void *mtl_device()#
Returns the cached MTLDevice, initializing it if necessary.
-
static inline void *cudnn_handle_t()#
Returns the cached cudnnHandle_t, initializing it if necessary.
-
static inline void *cublas_handle_t()#
Returns the cached cublasHandle_t, initializing it if necessary.
-
static inline void *sycl_queue()#
Returns the cached sycl::queue, initializing it if necessary.
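Each accessor returns a void* so the header stays backend-agnostic; callers on a given platform cast the result to the concrete handle type. A sketch for the cuDNN case, assuming a CUDA build where cudnn.h is available:

#include <cudnn.h>

void run_with_cudnn() {
    // Cast the cached opaque pointer back to the library's handle type.
    auto handle = static_cast<cudnnHandle_t>(Context::cudnn_handle_t());
    // pass handle to cuDNN calls, e.g. cudnnConvolutionForward(handle, ...)
}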
Available Customization#
- group custom
Custom implementations are created by completing the function outlines in the custom directory.
Functions
-
template<typename dtype>
Tensor adaptiveavgpool2d_custom(Tensor input, const std::optional<uint> output_h, const std::optional<uint> output_w)#
Function for custom implementations of AdaptiveAvgPool2D; see the built-in implementations here.
- Template Parameters:
dtype – The data type of the tensors
-
template<typename dtype>
Tensor avgpool2d_custom(Tensor input, const uint kernel_h, const uint kernel_w, const uint padding_h, const uint padding_w, const uint stride_h, const uint stride_w, const bool ceil_mode, const bool count_include_pad, const std::optional<int> divisor_override)#
Function for custom implementations of AvgPool2D; see the built-in implementations here.
- Template Parameters:
dtype – The data type of the tensors
-
template<typename dtype>
Tensor conv2d_custom(Tensor input, const Tensor &kernel, const std::optional<const Tensor> &bias, const uint padding_h, const uint padding_w, const uint stride_h, const uint stride_w, const uint dilation_h, const uint dilation_w, const PaddingMode padding_mode, uint groups)#
Function for custom implementations of Conv2D; see the built-in implementations here.
- Template Parameters:
dtype – The data type of the tensors
-
template<typename dtype>
Tensor flatten_custom(Tensor input, const uint start_dim, int end_dim_orig)#
Function for custom implementations of Flatten; see the built-in implementations here.
- Template Parameters:
dtype – The data type of the tensors
-
template<typename dtype>
Tensor linear_custom(Tensor input, const Tensor &weight, const std::optional<const Tensor> &bias)#
Function for custom implementations of Linear; see the built-in implementations here and the sketch after this list of functions.
- Template Parameters:
dtype – The data type of the tensors
-
template<typename dtype>
Tensor maxpool2d_custom(Tensor input, const uint kernel_h, const uint kernel_w, const uint padding_h, const uint padding_w, const uint stride_h, const uint stride_w, const uint dilation_h, const uint dilation_w, const bool ceil_mode)#
Function for custom implementations of MaxPool2D; see the built-in implementations here.
- Template Parameters:
dtype – The data type of the tensors
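A naive CPU sketch of what a completed linear_custom could look like, assuming a contiguous 2-D [batch, in_features] input. scalar_type_for<dtype>() is a hypothetical helper standing in for whatever mechanism maps dtype to a ScalarType value, since this page does not document one.

#include <optional>
#include <vector>

template<typename dtype>
Tensor linear_custom(Tensor input, const Tensor &weight,
                     const std::optional<const Tensor> &bias) {
    const uint batch = input.shape[0];
    const uint in_f  = input.shape[1];
    const uint out_f = weight.shape[0];

    Tensor output({batch, out_f}, scalar_type_for<dtype>());  // hypothetical helper
    const auto *x = static_cast<const dtype *>(input.data);
    const auto *w = static_cast<const dtype *>(weight.data);
    auto *y = static_cast<dtype *>(output.data);

    // out[n][o] = sum_i x[n][i] * w[o][i] (+ bias[o] if present)
    for (uint n = 0; n < batch; ++n) {
        for (uint o = 0; o < out_f; ++o) {
            dtype acc = bias ? static_cast<const dtype *>(bias->data)[o] : dtype(0);
            for (uint i = 0; i < in_f; ++i)
                acc += x[n * in_f + i] * w[o * in_f + i];
            y[n * out_f + o] = acc;
        }
    }
    return output;
}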
Variables
-
const bool DEFAULT_ADAPTIVEAVGPOOL2D = false#
Set this to true to use the custom AdaptiveAvgPool2D operation by default.
-
const bool DEFAULT_AVGPOOL2D = false#
Set this to true to use the custom AvgPool2D operation by default.
-
const bool DEFAULT_CONV2D = false#
Set this to true to use the custom Conv2D operation by default.
-
const bool DEFAULT_FLATTEN = false#
Set this to true to use the custom Flatten operation by default.
-
const bool DEFAULT_LINEAR = false#
Set this to true to use the custom Linear operation by default.
-
const bool DEFAULT_MAXPOOL2D = false#
Set this to true to use the custom MaxPool2D operation by default.
-
const bool DEFAULT_RELU = false#
Set this to true to use the custom ReLU operation by default.
-
namespace sample_dims#
Number of dimensions per sample for common deep learning operations.