nnlib
GPU-accelerated, C/C++ neural network library.
Network Class Reference

Represents a neural network. More...

#include <network.h>

Public Member Functions

 Network (size_t inputSize, bool useGPU=true, long long seed=NO_SEED)
 Construct a new network. More...
 
void add (size_t numNeurons, const std::string &activation="linear")
 Add a new layer to the network. More...
 
sTensor forward (const sTensor &batch)
 Forward-propagate a batch through the network. More...
 
void train (sTensor &X, sTensor &y, int epochs, size_t batchSize, float learningRate, Loss *loss, std::vector< Metric * > &metrics)
 Train the network. More...
 

Private Member Functions

void processEpoch (std::vector< sTensor > &batches, std::vector< sTensor > &targets, std::vector< sTensor > &targetsOnHost, float learningRate, Loss *loss, std::vector< Metric * > &metrics)
 Trains the model on a single epoch. More...
 

Private Attributes

DataLocation location
 The location of the network. More...
 
std::vector< Layer > layers
 List of network layers.
 
long long seed
 Seed used for random initialization.
 
size_t previousSize
 Keeps track of the size of the previous layer. More...
 

Detailed Description

Represents a neural network.

Examples
MNIST and Titanic.

Constructor & Destructor Documentation

◆ Network()

Network::Network ( size_t  inputSize,
bool  useGPU = true,
long long  seed = NO_SEED 
)
explicit

Construct a new network.

The constructed network can use GPU acceleration if a CUDA-capable GPU is available and the useGPU parameter is set to true.

Parameters
inputSize	The number of inputs to the neural network.
useGPU	Whether the network should use GPU acceleration.
seed	The seed used for random initialization of the network.
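
A minimal construction sketch (illustrative only: it assumes nnlib's network.h is on the include path; the input size 784 and seed 42 are arbitrary choices, and the snippet will not compile without the library):

```cpp
#include <network.h>

int main() {
    // 784 inputs (e.g. flattened 28x28 images), GPU acceleration if
    // available, fixed seed for reproducible weight initialization.
    Network net(784, /*useGPU=*/true, /*seed=*/42);

    // CPU-only network with unseeded (random) initialization.
    Network cpuNet(784, /*useGPU=*/false);
    return 0;
}
```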

Member Function Documentation

◆ add()

void Network::add ( size_t  numNeurons,
const std::string &  activation = "linear" 
)

Add a new layer to the network.

Three activation functions can be used: Linear, ReLU and Sigmoid. The activation can be selected by specifying the activation parameter, using one of the strings: "linear", "relu" or "sigmoid". If any other string is specified, Linear activation will be used.

Parameters
numNeurons	The number of neurons the new layer should contain.
activation	The activation function to use. Can be "linear", "relu" or "sigmoid".
Examples
MNIST and Titanic.
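
A hedged sketch of stacking layers with the three supported activation strings (illustrative only; the layer sizes are arbitrary and the snippet requires nnlib's network.h):

```cpp
#include <network.h>

int main() {
    Network net(784);
    net.add(128, "relu");    // hidden layer with ReLU activation
    net.add(64, "sigmoid");  // hidden layer with Sigmoid activation
    net.add(10);             // output layer, defaults to "linear"

    // Note: an unrecognized string such as "tanh" would silently
    // fall back to Linear activation.
    return 0;
}
```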

◆ forward()

sTensor Network::forward ( const sTensor &  batch)

Forward-propagate a batch through the network.

The samples should be aligned along the first axis.

Parameters
batch	The batch to propagate.
Returns
The output of the network, i.e. the Layer::aMatrix of the last layer.
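
A hedged sketch of a forward pass (illustrative only; the shape-based sTensor constructor used here is an assumption about the tensor API, not something this page documents):

```cpp
#include <network.h>

int main() {
    Network net(4);
    net.add(8, "relu");
    net.add(1, "sigmoid");

    // Hypothetical batch of 32 samples with 4 features each; samples
    // are aligned along the first axis, as forward() requires.
    sTensor batch({32, 4});
    sTensor output = net.forward(batch);  // expected shape: (32, 1)
    return 0;
}
```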

◆ processEpoch()

void Network::processEpoch ( std::vector< sTensor > &  batches,
std::vector< sTensor > &  targets,
std::vector< sTensor > &  targetsOnHost,
float  learningRate,
Loss *  loss,
std::vector< Metric * > &  metrics 
)
private

Trains the model on a single epoch.

Helper method used in Network::train(). The method makes use of targetsOnHost, the target batches stored on the host. This is done for performance reasons, as some metrics require their input matrices to reside on the host.

Parameters
batches	The list of batches to process. These have been split in the Network::train() method.
targets	The list of targets to process. These have been split in the Network::train() method.
targetsOnHost	The list of targets to process, but stored on the host.
learningRate	The learning rate used during training.
loss	The loss function to use.
metrics	The list of metrics to compute aside from the loss function.
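
The batch splitting that Network::train() presumably performs before handing the lists to processEpoch() can be sketched as follows. This is a self-contained, hypothetical reconstruction: splitIntoBatches is an illustrative helper, not part of the nnlib API, and a "sample" is reduced to a plain vector of floats in place of an sTensor row.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Split a dataset into consecutive batches of at most batchSize samples.
// The final batch may be smaller when the sample count is not divisible
// by batchSize.
std::vector<std::vector<std::vector<float>>>
splitIntoBatches(const std::vector<std::vector<float>>& samples,
                 std::size_t batchSize) {
    std::vector<std::vector<std::vector<float>>> batches;
    if (batchSize == 0) return batches;  // guard against an empty split
    for (std::size_t i = 0; i < samples.size(); i += batchSize) {
        const std::size_t end = std::min(i + batchSize, samples.size());
        batches.emplace_back(samples.begin() + i, samples.begin() + end);
    }
    return batches;
}
```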

◆ train()

void Network::train ( sTensor &  X,
sTensor &  y,
int  epochs,
size_t  batchSize,
float  learningRate,
Loss *  loss,
std::vector< Metric * > &  metrics 
)

Train the network.

Both X and y should have the data samples aligned on the first axis. Each row in X should be aligned with the corresponding row in y.

Parameters
X	The data to train the network on.
y	The targets of the network.
epochs	The number of epochs to train the network for.
batchSize	The size of each batch.
learningRate	The learning rate of the algorithm.
loss	The loss function to use.
metrics	The list of metrics to compute aside from the loss function.
Examples
MNIST and Titanic.
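
A hedged end-to-end training sketch (illustrative only; the shape-based sTensor constructor is an assumption, concrete Loss and Metric subclasses are not documented on this page, and the hyperparameter values are arbitrary):

```cpp
#include <network.h>

int main() {
    Network net(784, /*useGPU=*/true, /*seed=*/42);
    net.add(128, "relu");
    net.add(10, "sigmoid");

    // X: (numSamples x 784), y: (numSamples x 10); row i of X must line
    // up with row i of y. Filling the tensors with real data is omitted.
    sTensor X({60000, 784});
    sTensor y({60000, 10});

    // Placeholder: substitute whichever concrete Loss implementation
    // nnlib provides. Passing a null loss would not actually train.
    Loss *loss = nullptr;
    std::vector<Metric *> metrics;  // optionally add Metric instances

    net.train(X, y, /*epochs=*/10, /*batchSize=*/32,
              /*learningRate=*/0.01f, loss, metrics);
    return 0;
}
```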

Member Data Documentation

◆ location

DataLocation Network::location
private

The location of the network.

Specifies the location of all the data used by the network. See DataLocation for more info.

◆ previousSize

size_t Network::previousSize
private

Keeps track of the size of the previous layer.

Layers need to know the size of the input passed to them. This variable keeps track of the size of the previous layer (or the input size, for the first layer), so that layers can pre-allocate the required space during initialization.
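
The bookkeeping can be sketched with a self-contained toy model. MiniNetwork and LayerShape below are illustrative names, not part of nnlib; the sketch only shows how a previousSize field could thread each layer's output size into the next layer's input size.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Shape of one layer: input width taken from previousSize when add()
// runs, output width becoming the next previousSize.
struct LayerShape {
    std::size_t inputSize;
    std::size_t numNeurons;
};

class MiniNetwork {
public:
    explicit MiniNetwork(std::size_t inputSize) : previousSize(inputSize) {}

    void add(std::size_t numNeurons) {
        // The new layer is shaped (previousSize x numNeurons), so its
        // weight matrix could be pre-allocated right here.
        layers.push_back({previousSize, numNeurons});
        previousSize = numNeurons;
    }

    std::vector<LayerShape> layers;
    std::size_t previousSize;
};
```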


The documentation for this class was generated from the following files: