# Description
The decision boundary of a [[Perceptron]] has a geometric interpretation that follows from its output equation:
$
y = \sigma(\mathbf{w} \cdot \mathbf{x} + \alpha)
$
where $\sigma$ is the step activation function, $\mathbf{w}$ is the weight vector, $\mathbf{x}$ is the input vector, and $\alpha$ is the bias term. The weight vector is defined as:
$
\mathbf{w} = [w_1, w_2, \dots, w_n]
$
The output of the [[Perceptron]] is:
$
y =
\begin{cases}
1 & \text{if } \mathbf{w} \cdot \mathbf{x} + \alpha > 0 \\
0 & \text{if } \mathbf{w} \cdot \mathbf{x} + \alpha \leq 0
\end{cases}
$
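This step-function output can be sketched directly in code. The weights and bias below are hypothetical values chosen for illustration:

```python
import numpy as np

def perceptron_output(w, x, alpha):
    """Step activation: 1 if w·x + alpha > 0, else 0."""
    return 1 if np.dot(w, x) + alpha > 0 else 0

# Hypothetical weights and bias for a 2D input
w = np.array([1.0, -2.0])
alpha = 0.5

perceptron_output(w, np.array([2.0, 0.0]), alpha)  # 1·2 + (−2)·0 + 0.5 = 2.5 > 0 → 1
perceptron_output(w, np.array([0.0, 1.0]), alpha)  # 1·0 + (−2)·1 + 0.5 = −1.5 ≤ 0 → 0
```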
In the case of a two-dimensional plane, the decision boundary can be defined by:
$
w_1 x_1 + w_2 x_2 + \alpha = 0
$
For example, given the weight components $w_1$, $w_2$ and input components $x_1$, $x_2$, solving the boundary equation for $x_2$ (assuming $w_2 \neq 0$) gives the line separating the classes:
$
x_2 = -\frac{w_1}{w_2} x_1 - \frac{\alpha}{w_2}
$
This equation represents a linear boundary that separates the input space based on the weight vector and the bias term.
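A small sketch of this rearrangement, using hypothetical weights: it returns the slope $-w_1/w_2$ and intercept $-\alpha/w_2$ of the separating line, and checks that a point on that line lies exactly on the boundary $w_1 x_1 + w_2 x_2 + \alpha = 0$:

```python
def boundary_line(w1, w2, alpha):
    """Slope and intercept of the line w1*x1 + w2*x2 + alpha = 0, solved for x2.

    Assumes w2 != 0; otherwise the boundary is the vertical line x1 = -alpha/w1.
    """
    slope = -w1 / w2
    intercept = -alpha / w2
    return slope, intercept

# Hypothetical weights and bias
w1, w2, alpha = 1.0, 2.0, 4.0
slope, intercept = boundary_line(w1, w2, alpha)  # slope = -0.5, intercept = -2.0

# Any point on the line satisfies the boundary equation exactly
x1 = 3.0
x2 = slope * x1 + intercept
w1 * x1 + w2 * x2 + alpha  # = 0.0
```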
## References
- [[Deep learning - Anna Bosch Rué Jordi Casas Roma Toni Lozano Bagén]]