Class MultiLayerPerceptron

  • All Implemented Interfaces:
    java.io.Serializable

    public class MultiLayerPerceptron
    extends SupervisedLearning
    Multilayer perceptron (MLP). The multilayer perceptron was the first architecture for which an algorithm was proposed to train multiple layers of neurons using the generalized delta rule (backpropagation). This implementation supports multiple backpropagation variants through the Backpropagation interface; see the package libai.nn.supervised.backpropagation for details about the supported implementations.
    See Also:
    Serialized Form
    • Constructor Detail

      • MultiLayerPerceptron

        public MultiLayerPerceptron(int[] nperlayer,
                                    Function[] funcs)
        Constructor. Creates a MultiLayerPerceptron with nperlayer.length layers. The number of neurons per layer is given by nperlayer, where nperlayer[0] is the size of the input layer. The neurons of each layer i apply the output function funcs[i]. These functions must be differentiable. The training algorithm is standard backpropagation.
        Parameters:
        nperlayer - Number of neurons per layer including the input layer.
        funcs - Function to apply per layer. funcs[0] may be null, since the input layer applies no function.
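
        For example, a minimal sketch of building a 2-3-1 network with sigmoid outputs; the Sigmoid class is assumed to exist in libai.common.functions (the exact function implementations available may differ):

            import libai.common.functions.Function;
            import libai.common.functions.Sigmoid; // assumed implementation of Function
            import libai.nn.supervised.MultiLayerPerceptron;

            public class BuildMlp {
                public static void main(String[] args) {
                    // Layer sizes: 2 inputs, 3 hidden neurons, 1 output.
                    int[] nperlayer = {2, 3, 1};

                    // One output function per layer; the input layer (index 0) uses none.
                    Function[] funcs = {null, new Sigmoid(), new Sigmoid()};

                    // Trains with standard backpropagation by default.
                    MultiLayerPerceptron mlp = new MultiLayerPerceptron(nperlayer, funcs);
                }
            }
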
      • MultiLayerPerceptron

        public MultiLayerPerceptron(int[] nperlayer,
                                    Function[] funcs,
                                    Backpropagation trainer)
        Constructor. Creates a MultiLayerPerceptron with nperlayer.length layers. The number of neurons per layer is given by nperlayer, where nperlayer[0] is the size of the input layer. The neurons of each layer i apply the output function funcs[i]. These functions must be differentiable. A different implementation of the backpropagation algorithm can be provided through the trainer object.
        Parameters:
        nperlayer - Number of neurons per layer including the input layer.
        funcs - Function to apply per layer. funcs[0] may be null, since the input layer applies no function.
        trainer - The backpropagation implementation to be used during training.
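
        A sketch of supplying a custom trainer; MomentumBackpropagation and its constructor taking the momentum coefficient are assumptions about the libai.nn.supervised.backpropagation package, so check that package for the actual classes:

            import libai.common.functions.Function;
            import libai.common.functions.Sigmoid; // assumed implementation of Function
            import libai.nn.supervised.MultiLayerPerceptron;
            import libai.nn.supervised.backpropagation.Backpropagation;
            import libai.nn.supervised.backpropagation.MomentumBackpropagation; // assumed class

            public class BuildWithTrainer {
                public static void main(String[] args) {
                    int[] nperlayer = {2, 3, 1};
                    Function[] funcs = {null, new Sigmoid(), new Sigmoid()};

                    // Assumed constructor: momentum coefficient beta = 0.4.
                    Backpropagation trainer = new MomentumBackpropagation(0.4);

                    MultiLayerPerceptron mlp =
                            new MultiLayerPerceptron(nperlayer, funcs, trainer);
                }
            }
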
      • MultiLayerPerceptron

        public MultiLayerPerceptron(int[] nperlayer,
                                    Function[] funcs,
                                    java.util.Random rand)
        Constructor. Creates a MultiLayerPerceptron with nperlayer.length layers. The number of neurons per layer is given by nperlayer, where nperlayer[0] is the size of the input layer. The neurons of each layer i apply the output function funcs[i]. These functions must be differentiable. The training algorithm is standard backpropagation. A Random generator can be passed to initialize the weight matrices.
        Parameters:
        nperlayer - Number of neurons per layer including the input layer.
        funcs - Function to apply per layer. funcs[0] may be null, since the input layer applies no function.
        rand - Random generator used for creating the matrices.
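
        Passing a seeded java.util.Random makes weight initialization deterministic, which helps with reproducible experiments. A minimal sketch, reusing the Sigmoid assumption from above:

            import java.util.Random;
            import libai.common.functions.Function;
            import libai.common.functions.Sigmoid; // assumed implementation of Function
            import libai.nn.supervised.MultiLayerPerceptron;

            public class ReproducibleMlp {
                public static void main(String[] args) {
                    int[] nperlayer = {2, 3, 1};
                    Function[] funcs = {null, new Sigmoid(), new Sigmoid()};

                    // A fixed seed yields the same initial weights on every run.
                    MultiLayerPerceptron mlp =
                            new MultiLayerPerceptron(nperlayer, funcs, new Random(42));
                }
            }
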
      • MultiLayerPerceptron

        public MultiLayerPerceptron(int[] nperlayer,
                                    Function[] funcs,
                                    Backpropagation trainer,
                                    java.util.Random rand)
        Constructor. Creates a MultiLayerPerceptron with nperlayer.length layers. The number of neurons per layer is given by nperlayer, where nperlayer[0] is the size of the input layer. The neurons of each layer i apply the output function funcs[i]. These functions must be differentiable. A different backpropagation implementation can be provided along with a random generator to initialize the matrices.
        Parameters:
        nperlayer - Number of neurons per layer including the input layer.
        funcs - Function to apply per layer. funcs[0] may be null, since the input layer applies no function.
        trainer - The backpropagation implementation to be used during training.
        rand - Random generator used for creating the matrices.
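
        This constructor combines the two previous ones, e.g. a custom trainer plus a seeded generator (same MomentumBackpropagation assumption as above):

            import java.util.Random;
            import libai.common.functions.Function;
            import libai.common.functions.Sigmoid; // assumed implementation of Function
            import libai.nn.supervised.MultiLayerPerceptron;
            import libai.nn.supervised.backpropagation.MomentumBackpropagation; // assumed class

            public class FullControlMlp {
                public static void main(String[] args) {
                    MultiLayerPerceptron mlp = new MultiLayerPerceptron(
                            new int[]{2, 3, 1},
                            new Function[]{null, new Sigmoid(), new Sigmoid()},
                            new MomentumBackpropagation(0.4), // assumed constructor
                            new Random(42));
                }
            }
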
    • Method Detail

      • train

        public void train(Column[] patterns,
                          Column[] answers,
                          double alpha,
                          int epochs,
                          int offset,
                          int length,
                          double minerror)
        Train the network using the standard backpropagation algorithm. Each pattern is propagated from the input layer to the output layer, and the error of the output layer is computed. The error is then propagated backward to the first hidden layer (backpropagation), computing the error term of each layer along the way. Finally, the weights and biases are updated using the delta rule:
        W[i] = W[i] + beta*(W[i] - Wprev[i]) - (1-beta)*alpha*d[i]*Y[i-1]^t
        B[i] = B[i] + beta*(B[i] - Bprev[i]) - (1-beta)*alpha*d[i]
        where d[i] is the error term of layer i, Y[i-1] is the output of the previous layer, Wprev[i] and Bprev[i] are the values from the previous update, alpha is the learning rate, and beta is the momentum coefficient.
        Specified by:
        train in class NeuralNetwork
        Parameters:
        patterns - The patterns to be learned.
        answers - The expected answers.
        alpha - The learning rate.
        epochs - The maximum number of training iterations.
        offset - The position of the first pattern to use.
        length - How many patterns will be used.
        minerror - The minimal error expected.
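
        A training sketch on the XOR problem. It assumes Column lives in libai.common.matrix and offers a (rows, values) constructor; the exact package and constructor may differ:

            import java.util.Random;
            import libai.common.functions.Function;
            import libai.common.functions.Sigmoid; // assumed implementation of Function
            import libai.common.matrix.Column;     // assumed package location
            import libai.nn.supervised.MultiLayerPerceptron;

            public class TrainXor {
                public static void main(String[] args) {
                    MultiLayerPerceptron mlp = new MultiLayerPerceptron(
                            new int[]{2, 3, 1},
                            new Function[]{null, new Sigmoid(), new Sigmoid()},
                            new Random(42));

                    // Four XOR patterns and their expected answers.
                    Column[] patterns = {
                            new Column(2, new double[]{0, 0}),
                            new Column(2, new double[]{0, 1}),
                            new Column(2, new double[]{1, 0}),
                            new Column(2, new double[]{1, 1})
                    };
                    Column[] answers = {
                            new Column(1, new double[]{0}),
                            new Column(1, new double[]{1}),
                            new Column(1, new double[]{1}),
                            new Column(1, new double[]{0})
                    };

                    // alpha = 0.1, at most 50000 epochs, all 4 patterns starting at
                    // offset 0, stop once the error drops below 1e-3.
                    mlp.train(patterns, answers, 0.1, 50000, 0, patterns.length, 1e-3);
                }
            }
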
      • simulate

        public Column simulate(Column pattern)
        Description copied from class: NeuralNetwork
        Calculates the output for the pattern.
        Specified by:
        simulate in class NeuralNetwork
        Parameters:
        pattern - Pattern to use as input.
        Returns:
        The output for the neural network.
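
        Continuing the training sketch above, a single query; position(int, int) as the element accessor of Column is an assumption:

            Column input = new Column(2, new double[]{1, 0}); // assumed constructor
            Column output = mlp.simulate(input);
            System.out.println("MLP(1,0) = " + output.position(0, 0)); // assumed accessor
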
      • simulate

        public void simulate(Column pattern,
                             Column result)
        Description copied from class: NeuralNetwork
        Calculates the output for the pattern and leaves the result in result.
        Specified by:
        simulate in class NeuralNetwork
        Parameters:
        pattern - Pattern to use as input.
        result - The output for the input.
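
        This overload writes into a preallocated Column, avoiding a fresh allocation per call, which is useful in a tight evaluation loop. A sketch continuing the examples above (same assumptions about Column):

            Column result = new Column(1); // assumed constructor taking the row count
            for (Column p : patterns) {
                mlp.simulate(p, result); // output is left in result
                System.out.println(p + " -> " + result.position(0, 0)); // assumed accessor
            }
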
      • getWeights

        public Matrix[] getWeights()
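
        Judging from the name and return type, this returns the weight matrices of the network, one per pair of consecutive layers; that reading, and the getRows()/getColumns() accessors below, are assumptions:

            Matrix[] weights = mlp.getWeights(); // Matrix from libai.common.matrix (assumed location)
            for (int i = 0; i < weights.length; i++) {
                // Each matrix connects layer i to layer i+1.
                System.out.printf("W[%d]: %dx%d%n",
                        i, weights[i].getRows(), weights[i].getColumns());
            }
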