What is Value at Risk (VaR)?
Value at Risk estimates the maximum loss a portfolio is not expected to exceed over a defined period at a specified confidence level. A 1-day 95% VaR of $1 million means there is a 5% chance the portfolio will lose more than $1 million in a single trading day. Developed at J.P. Morgan in the early 1990s and popularized through the RiskMetrics framework, VaR became the standard risk metric for banks, hedge funds, and regulators worldwide.
Calculation Methods
Three primary methods exist. The parametric (variance-covariance) approach assumes returns follow a normal distribution and calculates VaR using portfolio standard deviation; it is fast but underestimates tail risk. Historical simulation uses actual past returns without distributional assumptions, making it more robust for non-normal data. Monte Carlo simulation generates thousands of random scenarios based on statistical models, offering the most flexibility but requiring significant computational resources. For a $10 million equity portfolio with 15% annual volatility, the 1-day 95% parametric VaR is approximately $155,000: the daily volatility is 0.15/√252 ≈ 0.94%, and multiplying by the 95% z-score of 1.645 and the $10 million portfolio value gives roughly $155,000.
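The parametric calculation above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the function name and the hard-coded z-score table are choices made here, and the 252-trading-day convention is assumed.

```python
import math

def parametric_var(portfolio_value, annual_vol, confidence=0.95, horizon_days=1,
                   trading_days=252):
    """Parametric (variance-covariance) VaR assuming normally distributed returns."""
    # z-scores for common one-sided confidence levels (avoids a scipy dependency)
    z_scores = {0.90: 1.282, 0.95: 1.645, 0.99: 2.326}
    z = z_scores[confidence]
    # scale annual volatility to the horizon (square-root-of-time rule)
    daily_vol = annual_vol / math.sqrt(trading_days)
    return portfolio_value * daily_vol * math.sqrt(horizon_days) * z

# The example from the text: $10M portfolio, 15% annual volatility
var_95 = parametric_var(10_000_000, 0.15)  # roughly $155,000
```

The square-root-of-time scaling used here is itself an approximation that assumes independent, identically distributed returns; it is standard practice but degrades over longer horizons.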
Key Considerations
VaR tells you nothing about the magnitude of losses beyond the confidence threshold. During the 2008 financial crisis, many institutions experienced losses 5 to 10 times their reported VaR. Conditional VaR (CVaR), also called Expected Shortfall, addresses this by averaging losses in the tail beyond the VaR threshold. Regulators increasingly require CVaR reporting alongside traditional VaR to capture extreme-event exposure.
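The relationship between VaR and CVaR described above can be shown with a short historical-simulation sketch. This is an illustrative example with assumed function names; note that conventions differ on whether the tail average includes the VaR observation itself (it does here), and production systems typically use interpolated quantiles rather than a single sorted observation.

```python
def historical_var_cvar(returns, confidence=0.95):
    """Historical-simulation VaR and CVaR (Expected Shortfall).

    `returns` are fractional P&L values (e.g. -0.02 for a 2% loss);
    both outputs are positive loss fractions.
    """
    losses = sorted(-r for r in returns)      # losses as positive numbers, ascending
    cutoff = int(confidence * len(losses))    # index of the VaR quantile
    var = losses[cutoff]                      # loss exceeded (1 - confidence) of the time
    tail = losses[cutoff:]                    # losses at or beyond the VaR threshold
    cvar = sum(tail) / len(tail)              # average tail loss = Expected Shortfall
    return var, cvar

# Toy history: 95 quiet days and 5 bad days out of 100 observations
history = [0.001] * 95 + [-0.02, -0.03, -0.04, -0.05, -0.10]
var, cvar = historical_var_cvar(history)      # var = 2% loss, cvar = 4.8% loss
```

In this toy data the 95% VaR is the 2% loss, while the CVaR averages the five worst days to 4.8%, illustrating how CVaR captures tail magnitude that VaR alone misses.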