Represents a neural network.
More...
#include <network.h>
Public Member Functions

Network (size_t inputSize, bool useGPU=true, long long seed=NO_SEED)
    Construct a new network. More...

void add (size_t numNeurons, const std::string &activation="linear")
    Add a new layer to the network. More...

sTensor forward (const sTensor &batch)
    Forward-propagate a batch through the network. More...

void train (sTensor &X, sTensor &y, int epochs, size_t batchSize, float learningRate, Loss *loss, std::vector< Metric * > &metrics)
    Train the network. More...

Private Member Functions

void processEpoch (std::vector< sTensor > &batches, std::vector< sTensor > &targets, std::vector< sTensor > &targetsOnHost, float learningRate, Loss *loss, std::vector< Metric * > &metrics)
    Trains the model on a single epoch. More...
Represents a neural network.
Examples
    MNIST and Titanic.
◆ Network()

Network::Network (size_t inputSize, bool useGPU = true, long long seed = NO_SEED)  [explicit]
Construct a new network.
The constructed network can use GPU acceleration if a GPU and CUDA are available and the useGPU parameter is set to true.
Parameters
    inputSize  The number of inputs to the neural network.
    useGPU     Whether the network should use GPU acceleration.
    seed       The seed to use for random initialization of the network.
◆ add()

void Network::add (size_t numNeurons, const std::string &activation = "linear")
Add a new layer to the network.
Three activation functions are available: Linear, ReLU, and Sigmoid. Select one by passing one of the strings "linear", "relu" or "sigmoid" as the activation parameter; any other string falls back to Linear activation.
Parameters
    numNeurons  The number of neurons the new layer should contain.
    activation  The activation function to use: "linear", "relu" or "sigmoid".
Examples
    MNIST and Titanic.
◆ forward()

sTensor Network::forward (const sTensor &batch)
Forward-propagate a batch through the network.
The samples should be aligned along the first axis.
Parameters
    batch  The batch to propagate.
Returns
    The output of the network; this is the Layer::aMatrix of the last layer.
◆ processEpoch()

void Network::processEpoch (std::vector< sTensor > &batches, std::vector< sTensor > &targets, std::vector< sTensor > &targetsOnHost, float learningRate, Loss *loss, std::vector< Metric * > &metrics)  [private]
Trains the model on a single epoch.
Helper method used in Network::train(). The method uses targetsOnHost, the target batches stored on the host; this is done for performance reasons, as some of the metrics require their input matrices to be located on the host.
Parameters
    batches        The list of batches to process, split in the Network::train() method.
    targets        The list of targets to process, split in the Network::train() method.
    targetsOnHost  The list of targets to process, stored on the host.
    learningRate   The learning rate used during training.
    loss           The loss function to use.
    metrics        The list of metrics to compute aside from the loss function.
◆ train()

void Network::train (sTensor &X, sTensor &y, int epochs, size_t batchSize, float learningRate, Loss *loss, std::vector< Metric * > &metrics)
Train the network.
Both X and y should have the data samples aligned on the first axis. Each row in X should be aligned with the corresponding row in y.
Parameters
    X             The data to train the network on.
    y             The targets of the network.
    epochs        The number of epochs to train the network for.
    batchSize     The size of each batch.
    learningRate  The learning rate of the algorithm.
    loss          The loss function to use.
    metrics       The list of metrics to compute aside from the loss function.
Examples
    MNIST and Titanic.
◆ location
The location of the network.
Specifies the location of all the data used by the network. See DataLocation for more info.
◆ previousSize

size_t Network::previousSize  [private]
Keeps track of the size of the previous layer.
Layers need to know the size of the input passed to them. This variable keeps track of the size of the previous layer (or of the network input, for the first layer), so each layer can pre-allocate the space it needs during initialization.
The documentation for this class was generated from the following files: