In deep learning, a **Loss Function** (or Cost Function) serves as the mathematical compass for a neural network. It quantifies the "error" by measuring the discrepancy between the model's current prediction ($\hat{y}$) and the actual ground truth ($y$). During training, the network's parameters are adjusted iteratively to minimize this value using an optimization algorithm such as Gradient Descent.
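The loop described above can be sketched in a few lines. This is a minimal illustration (not from the article): a one-parameter model $\hat{y} = w \cdot x$, a mean-squared-error loss, and plain gradient descent; the data, learning rate, and step count are all arbitrary choices for the example.

```python
def mse_loss(w, xs, ys):
    """Mean squared error between predictions w*x and targets y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mse_grad(w, xs, ys):
    """Analytic gradient of the MSE with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # ground truth: y = 2x
w, lr = 0.0, 0.05                          # initial weight, learning rate
for _ in range(200):                       # iterative minimization
    w -= lr * mse_grad(w, xs, ys)          # gradient descent step
print(round(w, 3))  # converges toward 2.0
```

Each step moves $w$ a small amount against the gradient of the loss, so the loss shrinks until the prediction matches the data.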
The choice of function fundamentally dictates how the model learns. **Regression tasks** (predicting continuous quantities like house prices) typically employ Mean Squared Error (MSE) or Mean Absolute Error (MAE) to measure numerical discrepancies. **Classification tasks** (predicting categories like "Spam" or "Not Spam") rely on Cross-Entropy to penalize the divergence between predicted probabilities and the true labels.
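The three losses named above can be written directly from their definitions. These are minimal sketches for illustration, not any particular library's API; the sample inputs are made up.

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: penalizes large errors quadratically."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error: less sensitive to outliers than MSE."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """Binary cross-entropy: heavily penalizes confident wrong predictions."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

print(mse([3.0, 5.0], [2.5, 5.5]))               # 0.25
print(mae([3.0, 5.0], [2.5, 5.5]))               # 0.5
print(binary_cross_entropy([1, 0], [0.9, 0.2]))  # small: both probs are good
```

Note how cross-entropy operates on probabilities rather than raw values: a confident prediction on the wrong side (e.g. $p = 0.99$ for a true label of 0) yields a very large loss, which is exactly the penalty behavior classification training relies on.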