
Multi-Layered Perceptron (MLP)

9taetae9 2023. 10. 16. 17:19

Limitations of the Simple Perceptron
The primary limitation of a simple perceptron is its inability to handle data that is not linearly separable. If no straight line (in two dimensions, or more generally no hyperplane) can separate the classes, the perceptron cannot classify the data correctly.
In terms of logical operations, a perceptron can model the AND and OR gates but fails on XOR, because the outputs of XOR are not linearly separable.
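To make this concrete, here is a minimal sketch (my own illustration, not from the post): a single perceptron with a hard-threshold activation can realize AND and OR with hand-picked weights, but a brute-force search over a grid of weights finds none that reproduces XOR. The weight values are illustrative choices.

```python
def perceptron(w1, w2, bias):
    """Return a 2-input perceptron with a hard threshold at 0."""
    return lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

AND = perceptron(1.0, 1.0, -1.5)   # fires only when both inputs are 1
OR  = perceptron(1.0, 1.0, -0.5)   # fires when at least one input is 1

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([AND(a, b) for a, b in inputs])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in inputs])   # [0, 1, 1, 1]

# Search a grid of weights/biases for a perceptron matching XOR.
# None exists in this grid (and, provably, none exists at all).
xor_target = [0, 1, 1, 0]
found = False
for w1 in range(-3, 4):
    for w2 in range(-3, 4):
        for b in [x / 2 for x in range(-6, 7)]:
            p = perceptron(w1, w2, b)
            if [p(a, c) for a, c in inputs] == xor_target:
                found = True
print(found)  # False
```

The general proof is short: requiring output 1 for (0,1) and (1,0) but 0 for (0,0) and (1,1) forces the weight inequalities to contradict each other, so no single linear boundary works.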

Multi-Layered Perceptron (MLP)
To overcome the limitations of a simple perceptron, the concept of MLP was introduced. As the name suggests, MLP has multiple layers: an input layer, one or more hidden layers, and an output layer.

Introducing hidden layers and non-linear activation functions enables the network to learn and model non-linear patterns.

Representational power of different layers in an MLP
Single Layer: Can represent half-spaces bounded by a hyperplane.
Two Layers: Can represent convex open or closed regions. With a suitable activation function, this is enough to model XOR.
Three Layers: Can approximate regions of arbitrary shape, with complexity limited by the number of nodes in the network.
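The two-layer case can be sketched directly (an illustration of mine, not code from the post): give the hidden layer one unit computing OR and one computing NAND, and let the output unit AND them, since XOR(a, b) = AND(OR(a, b), NAND(a, b)). All weights here are hand-chosen for clarity.

```python
def unit(w1, w2, bias):
    """A 2-input threshold unit, the same building block as a perceptron."""
    return lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

OR   = unit(1.0, 1.0, -0.5)
NAND = unit(-1.0, -1.0, 1.5)
AND  = unit(1.0, 1.0, -1.5)

def XOR(x1, x2):
    h1, h2 = OR(x1, x2), NAND(x1, x2)   # hidden layer
    return AND(h1, h2)                  # output layer

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", XOR(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

The hidden layer maps the four input points into a space where they become linearly separable, which is exactly what the single perceptron could not do on its own.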
Comparison with the Human Neural System:

The human visual system is incredibly complex and operates on principles that are both similar and different from artificial neural networks.
Rod and Cone cells: These are photoreceptor cells in the retina that detect light. Rods are responsible for low-light vision, while cones handle color and detail. They can be analogized to the input layer in an MLP as they collect and send the initial data.
Ganglion, Bipolar cells, etc.: These cells process the signals from the rods and cones and can be compared to the hidden layers in an MLP. They perform various transformations and computations on the data.


Image in Our Brain

After several stages of processing, the visual information is perceived as an image in our brains. This can be analogized to the output layer of an MLP. The actual intricacies of how we "see" and "interpret" are still areas of active research in neuroscience.
