package torch

Layer Types and Conversions

type t

A layer takes a tensor as input and returns a tensor through the forward function. Layers can hold variables; these are created and registered in a Var_store.t when the layer is built.

type t_with_training

A layer of type t_with_training is similar to a layer of type t, except that applying it to a tensor also takes a boolean argument specifying whether the layer is currently used in training or in testing mode. This is typically needed for batch normalization or dropout.

val with_training : t -> t_with_training

with_training t builds a training-aware layer from a standard layer t; the is_training argument is simply discarded. This is useful when sequencing a mix of plain and training-aware layers, e.g. via sequential_.
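
For example (a minimal sketch, assuming the enclosing Torch module is opened), the identity layer described below can be lifted so that it is accepted wherever a t_with_training is expected:

  open Torch

  (* Lift a plain layer; the resulting layer accepts an is_training flag
     and ignores it. *)
  let lifted : Layer.t_with_training = Layer.with_training Layer.id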

Basic Layer Creation

val id : t

The identity layer. forward id tensor returns tensor.

val id_ : t_with_training

The identity layer with an is_training argument.

val of_fn : (Tensor.t -> Tensor.t) -> t

of_fn f creates a layer based on a function from tensors to tensors.

val of_fn_ : (Tensor.t -> is_training:Base.bool -> Tensor.t) -> t_with_training

of_fn_ f creates a layer based on a function from tensors to tensors. f also has access to the is_training flag.
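
For example, both a plain activation and a dropout function can be wrapped as layers (a short sketch assuming open Torch; the Tensor.dropout call with ~p and ~is_training follows the pattern used in the ocaml-torch examples):

  open Torch

  (* A stateless activation layer built from a plain tensor function. *)
  let relu_layer = Layer.of_fn Tensor.relu

  (* A dropout layer: dropout is only applied when is_training is true. *)
  let dropout_layer =
    Layer.of_fn_ (fun xs ~is_training -> Tensor.dropout xs ~p:0.5 ~is_training)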

val sequential : t Base.list -> t

sequential ts applies a list of layers ts in sequence.

val sequential_ : t_with_training Base.list -> t_with_training

sequential_ ts applies a list of training-aware layers ts in sequence, passing the is_training flag to each of them.
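
For example (a sketch reusing the layers from the previous snippet), with_training lifts the plain ReLU layer so that both elements have type t_with_training:

  open Torch

  let block =
    Layer.sequential_
      [ Layer.with_training (Layer.of_fn Tensor.relu)
      ; Layer.of_fn_ (fun xs ~is_training ->
          Tensor.dropout xs ~p:0.5 ~is_training)
      ]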

val forward : t -> Tensor.t -> Tensor.t

forward t tensor applies layer t to tensor.

val forward_ : t_with_training -> Tensor.t -> is_training:Base.bool -> Tensor.t

forward_ t tensor ~is_training applies layer t to tensor with the specified is_training flag.
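
For example (a minimal sketch, also assuming Tensor.randn and Tensor.shape from the Tensor module), the identity layers return their input unchanged:

  open Torch

  let () =
    let xs = Tensor.randn [ 2; 3 ] in
    (* forward id xs returns xs as-is. *)
    let ys = Layer.forward Layer.id xs in
    (* t_with_training layers additionally require the is_training flag. *)
    let zs = Layer.forward_ Layer.id_ xs ~is_training:false in
    assert (Tensor.shape ys = Tensor.shape zs)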

Linear and Convolution Layers

type activation =
  | Relu
  | Softmax
  | Log_softmax
  | Tanh
  | Leaky_relu
  | Sigmoid
The different kinds of activations supported by the various layers below.

val linear : Var_store.t -> ?activation:activation -> ?use_bias:Base.bool -> ?w_init:Var_store.Init.t -> input_dim:Base.int -> Base.int -> t

linear vs ~input_dim output_dim returns a linear layer. When using forward, the input tensor must have shape batch_size * input_dim; the returned tensor has shape batch_size * output_dim.
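
For example, a small two-layer perceptron (a minimal sketch with arbitrary dimensions, assuming a variable store built with Var_store.create):

  open Torch

  let () =
    let vs = Var_store.create ~name:"mlp" () in
    let model =
      Layer.sequential
        [ Layer.linear vs ~activation:Relu ~input_dim:784 128
        ; Layer.linear vs ~input_dim:128 10
        ]
    in
    let xs = Tensor.randn [ 64; 784 ] in (* batch_size * input_dim *)
    let ys = Layer.forward model xs in
    assert (Tensor.shape ys = [ 64; 10 ])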

val conv2d : Var_store.t -> ksize:(Base.int * Base.int) -> stride:(Base.int * Base.int) -> ?activation:activation -> ?use_bias:Base.bool -> ?w_init:Var_store.Init.t -> ?padding:(Base.int * Base.int) -> ?groups:Base.int -> input_dim:Base.int -> Base.int -> t

conv2d vs ~ksize ~stride ~input_dim output_dim returns a 2D convolution layer. ksize specifies the kernel size and stride the stride. When using forward, the input tensor should have shape batch_size * input_dim * h * w; the returned tensor has shape batch_size * output_dim * h' * w'.

val conv2d_ : Var_store.t -> ksize:Base.int -> stride:Base.int -> ?activation:activation -> ?use_bias:Base.bool -> ?w_init:Var_store.Init.t -> ?padding:Base.int -> ?groups:Base.int -> input_dim:Base.int -> Base.int -> t

conv2d_ is similar to conv2d but uses the same kernel size, stride, and padding on both the height and width dimensions, so each of these parameters is given as a single integer.
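
For example (a minimal sketch): with kernel size 3, stride 2, and padding 1, the spatial dimensions of a 28x28 input are halved:

  open Torch

  let () =
    let vs = Var_store.create ~name:"conv" () in
    let conv = Layer.conv2d_ vs ~ksize:3 ~stride:2 ~padding:1 ~input_dim:1 8 in
    let xs = Tensor.randn [ 16; 1; 28; 28 ] in
    let ys = Layer.forward conv xs in
    (* (28 + 2*1 - 3) / 2 + 1 = 14 on each spatial dimension. *)
    assert (Tensor.shape ys = [ 16; 8; 14; 14 ])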

val conv_transpose2d : Var_store.t -> ksize:(Base.int * Base.int) -> stride:(Base.int * Base.int) -> ?activation:activation -> ?use_bias:Base.bool -> ?w_init:Var_store.Init.t -> ?padding:(Base.int * Base.int) -> ?output_padding:(Base.int * Base.int) -> ?groups:Base.int -> input_dim:Base.int -> Base.int -> t

conv_transpose2d creates a 2D transposed convolution layer; this is sometimes also called 'deconvolution'.

val conv_transpose2d_ : Var_store.t -> ksize:Base.int -> stride:Base.int -> ?activation:activation -> ?use_bias:Base.bool -> ?w_init:Var_store.Init.t -> ?padding:Base.int -> ?output_padding:Base.int -> ?groups:Base.int -> input_dim:Base.int -> Base.int -> t

conv_transpose2d_ is similar to conv_transpose2d but uses a single value for both the height and width dimensions of the kernel size, stride, padding, and output padding.
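
For example (a minimal sketch): kernel size 4, stride 2, and padding 1 is a common upsampling configuration that doubles the spatial dimensions:

  open Torch

  let () =
    let vs = Var_store.create ~name:"upsample" () in
    let up =
      Layer.conv_transpose2d_ vs ~ksize:4 ~stride:2 ~padding:1 ~input_dim:8 4
    in
    let xs = Tensor.randn [ 16; 8; 14; 14 ] in
    let ys = Layer.forward up xs in
    (* (14 - 1) * 2 - 2*1 + 4 = 28 on each spatial dimension. *)
    assert (Tensor.shape ys = [ 16; 4; 28; 28 ])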

Batch Normalization

val batch_norm2d : Var_store.t -> ?w_init:Var_store.Init.t -> ?cudnn_enabled:Base.bool -> ?eps:Base.float -> ?momentum:Base.float -> Base.int -> t_with_training

batch_norm2d vs dim creates a 2D batch-normalization layer. This layer applies Batch Normalization over a 4D input of shape batch_size * dim * h * w; the returned tensor has the same shape.
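
For example, a convolution followed by batch normalization (a minimal sketch; with_training lifts the plain convolution so the whole block is driven through forward_):

  open Torch

  let () =
    let vs = Var_store.create ~name:"bn-demo" () in
    let block =
      Layer.sequential_
        [ Layer.with_training
            (Layer.conv2d_ vs ~ksize:3 ~stride:1 ~padding:1 ~input_dim:3 16)
        ; Layer.batch_norm2d vs 16
        ]
    in
    let xs = Tensor.randn [ 8; 3; 32; 32 ] in
    (* Batch statistics are only updated when is_training is true. *)
    let ys = Layer.forward_ block xs ~is_training:true in
    assert (Tensor.shape ys = [ 8; 16; 32; 32 ])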

Recurrent Neural Networks

module Lstm : sig ... end

A Long Short-Term Memory (LSTM) recurrent neural network.

module Gru : sig ... end

A Gated Recurrent Unit (GRU) recurrent neural network.

Embeddings

val embeddings : ?sparse:Base.bool -> ?scale_grad_by_freq:Base.bool -> Var_store.t -> num_embeddings:Base.int -> embedding_dim:Base.int -> t

embeddings vs ~num_embeddings ~embedding_dim creates an embedding layer: a learned lookup table mapping integer indices in the range [0, num_embeddings) to vectors of size embedding_dim.
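
For example (a minimal sketch; Tensor.of_int2, used here to build the integer index tensor, is an assumption about the Tensor API):

  open Torch

  let () =
    let vs = Var_store.create ~name:"emb" () in
    let emb = Layer.embeddings vs ~num_embeddings:1000 ~embedding_dim:16 in
    (* A batch of two sequences of three token indices each; of_int2 is
       assumed to build an integer tensor of shape 2 * 3. *)
    let indices = Tensor.of_int2 [| [| 0; 1; 2 |]; [| 5; 6; 7 |] |] in
    let vectors = Layer.forward emb indices in
    (* Each index is replaced by its 16-dimensional embedding vector. *)
    assert (Tensor.shape vectors = [ 2; 3; 16 ])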