# Description
The architecture of a neural network refers to how neurons are organized across its layers. When defining it, it is essential to consider the parameters that shape this configuration, such as the input (propagation) and activation functions of its neurons.
Neural networks typically consist of an input layer, one or more hidden layers (each a set of processing neurons), and finally an output layer.
Multilayer networks can contain an arbitrary number of hidden layers. Adding more neurons increases the network's predictive capacity, but it also raises the risk of overfitting and the computational cost.
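As a minimal sketch of this layered structure (not taken from the referenced text), the forward pass below uses plain NumPy; the layer sizes and the choice of ReLU for the hidden layers are illustrative assumptions:

```python
import numpy as np

def relu(x):
    """ReLU activation: keeps positive values, zeroes out negatives."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Propagate an input vector through a list of (weights, bias) layers."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)      # hidden layers apply an activation function
    W, b = layers[-1]
    return W @ x + b             # output layer left linear in this sketch

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]             # input -> two hidden layers -> output (assumed sizes)
layers = [(rng.standard_normal((n_out, n_in)) * 0.1, np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(4)       # one 4-dimensional input example
print(forward(x, layers))        # 2-dimensional output
```

Each `(W, b)` pair is one layer; appending entries to `sizes` adds hidden layers or neurons, which is exactly where the capacity/overfitting trade-off above comes into play.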
Networks in which the output of one layer serves as the input to the next are known as **Feedforward Neural Networks (FNNs)**. In contrast, **Recurrent Neural Networks (RNNs)** contain loops in their structure, allowing them to maintain a memory of previous inputs.
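By contrast with the feedforward pass above, a recurrent network feeds its own hidden state back in at every time step. The sketch below (again an illustrative assumption, not code from the book) shows the loop that gives an RNN its memory of previous inputs:

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Process a sequence; the hidden state h carries memory across steps."""
    h = np.zeros(W_hh.shape[0])
    for x in xs:                  # this loop is what makes the network recurrent
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h                      # final state summarizes the whole sequence

rng = np.random.default_rng(1)
input_dim, hidden_dim = 3, 5      # assumed dimensions for illustration
W_xh = rng.standard_normal((hidden_dim, input_dim)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_dim)

sequence = rng.standard_normal((6, input_dim))   # 6 time steps of input
print(rnn_forward(sequence, W_xh, W_hh, b_h))    # final hidden state
```

The `W_hh @ h` term is the structural loop: the same weights are reused at every step, and the output at each step depends on everything seen before.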
---
## References
- [[Deep learning - Anna Bosch Rué Jordi Casas Roma Toni Lozano Bagén]]