Custom Layers
AutoEncoderToolkit.jl
provides a set of commonly-used custom layers for building autoencoders. These layers must be explicitly defined if you want to save a trained model and load it later. For example, if the input to the encoder is an image in HWC format (height, width, channel), somewhere in the encoder there must be a function that flattens the input to a vector so that the mapping to the latent space is possible. If you defined this with a simple anonymous function, libraries used to save the model, such as JLD2 or BSON, would not be able to serialize it. This is why we provide this set of custom layers that work seamlessly with these libraries.
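To see why a named layer type serializes where an anonymous function does not, here is a minimal Base-only sketch. The struct name `MyFlatten` is hypothetical, chosen only for illustration; the actual layers provided by the package are documented below.

```julia
# An anonymous flattening function: serializers such as BSON or JLD2
# cannot reliably reconstruct this closure when the model is reloaded.
flatten_anon = x -> reshape(x, :)

# A named struct with a call method serializes cleanly, because the
# type definition is available again in code at load time.
# (Hypothetical stand-in type, for illustration only.)
struct MyFlatten end
(::MyFlatten)(x) = reshape(x, :)

f = MyFlatten()
f(rand(4, 3))  # 12-element Vector{Float64}
```

The callable-struct pattern shown here is exactly what the custom layers in this package rely on: the layer's behavior lives in a method attached to a named type, so saving and loading only needs the type definition to be in scope.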
Reshape
AutoEncoderToolkit.Reshape
— Type
Reshape(shape)
A custom layer for Flux that reshapes its input to a specified shape.
This layer is useful when you need to change the dimensions of your data within a Flux model. Unlike the built-in reshape operation in Julia, this custom layer can be saved and loaded using packages such as BSON or JLD2.
Arguments
shape: The target shape. This can be any tuple of integers and colons. Colons indicate dimensions whose size should be inferred so that the total number of elements remains the same.
Examples
julia> r = Reshape(10, :)
Reshape((10, :))
julia> r(rand(5, 2))
10×1 Matrix{Float64}:
Note
When saving and loading the model, make sure to include Reshape in the list of layers to be processed by BSON or JLD2.
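The colon-inference in the target shape behaves like Base's reshape. The following Base-only sketch (using a hypothetical stand-in type, since it mimics rather than imports the package's Reshape) shows the semantics:

```julia
# Hypothetical stand-in replicating Reshape's behavior with Base only.
struct ToyReshape{S<:Tuple}
    shape::S
end
(r::ToyReshape)(x) = reshape(x, r.shape...)

r = ToyReshape((10, :))
size(r(rand(5, 2)))  # (10, 1): the colon dimension is inferred as 1
```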
AutoEncoderToolkit.Reshape
— Method
Reshape(args...)
Constructor for the Reshape struct that takes variable arguments.
This function allows us to create a Reshape instance with any shape.
Arguments
args...: Variable arguments representing the dimensions of the target shape.
Returns
- A Reshape instance with the target shape set to the provided dimensions.
Examples
julia> r = Reshape(10, :)
Reshape((10, :))
AutoEncoderToolkit.Reshape — Method
(r::Reshape)(x)
This function is called during the forward pass of the model. It reshapes the input x to the target shape stored in the Reshape instance r.
Arguments
r::Reshape: An instance of the Reshape struct.
x: The input to be reshaped.
Returns
- The reshaped input.
Examples
julia> r = Reshape(10, :)
Reshape((10, :))
julia> r(rand(5, 2))
10×1 Matrix{Float64}:
...
Flatten
AutoEncoderToolkit.Flatten
— Type
Flatten()
A custom layer for Flux that flattens its input into a 1D vector.
This layer is useful when you need to change the dimensions of your data within a Flux model. Unlike the built-in flatten operation in Julia, this custom layer can be saved and loaded by packages such as BSON and JLD2.
Examples
julia> f = Flatten()
julia> f(rand(5, 2))
10-element Vector{Float64}:
Note
When saving and loading the model, make sure to include Flatten in the list of layers to be processed by BSON or JLD2.
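The flattening behavior can be replicated in Base Julia alone. The sketch below uses a hypothetical stand-in type (not the package's Flatten itself) and an image-shaped input in HWC format, matching the motivating example from the introduction:

```julia
# Hypothetical stand-in replicating Flatten's behavior with Base only.
struct ToyFlatten end
(::ToyFlatten)(x) = reshape(x, :)

f = ToyFlatten()
img = rand(28, 28, 1)   # HWC image: 28×28 pixels, one channel
length(f(img))          # 784: all dimensions collapsed into one vector
```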
AutoEncoderToolkit.Flatten
— Method
(f::Flatten)(x)
This function is called during the forward pass of the model. It flattens the input x into a 1D vector.
Arguments
f::Flatten: An instance of the Flatten struct.
x: The input to be flattened.
Returns
- The flattened input.
ActivationOverDims
AutoEncoderToolkit.ActivationOverDims
— Type
ActivationOverDims(σ::Function, dims::Int)
A custom layer for Flux that applies an activation function over specified dimensions.
This layer is useful when you need to apply an activation function over specific dimensions of your data within a Flux model. Unlike the built-in activation functions in Julia, this custom layer can be saved and loaded using the BSON or JLD2 package.
Arguments
σ::Function: The activation function to be applied.
dims: The dimensions over which the activation function should be applied.
Note
When saving and loading the model, make sure to include ActivationOverDims in the list of layers to be processed by BSON or JLD2.
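A dimension-wise activation can be sketched with Base Julia alone. The stand-in type and the hand-rolled softmax below are hypothetical, written only to show the pattern of storing the activation and its dims in a named struct:

```julia
# Hypothetical stand-in: a named layer that applies an activation
# over a stored set of dimensions.
struct ToyActivationOverDims{F}
    σ::F
    dims::Int
end
(a::ToyActivationOverDims)(x) = a.σ(x; dims=a.dims)

# Example activation: a softmax written with Base functions only.
mysoftmax(x; dims) = exp.(x) ./ sum(exp.(x); dims=dims)

a = ToyActivationOverDims(mysoftmax, 1)
sum(a(rand(3, 4)); dims=1)  # each column sums to ≈ 1.0
```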
AutoEncoderToolkit.ActivationOverDims
— Method
(σ::ActivationOverDims)(x)
This function is called during the forward pass of the model. It applies the activation function σ.σ over the dimensions σ.dims of the input x.
Arguments
σ::ActivationOverDims: An instance of the ActivationOverDims struct.
x: The input to which the activation function should be applied.
Returns
- The input x with the activation function applied over the specified dimensions.
Note
This custom layer can be saved and loaded using BSON or JLD2. When saving and loading the model, make sure to include ActivationOverDims in the list of layers to be processed by these packages.