Module 06: Risk = Variance (But Not Really)
From Markowitz variance to modern downside risk measures
1. The Standard Story: Risk as Standard Deviation
In 1952, Harry Markowitz published his landmark paper on portfolio selection and gave finance a precise, mathematical definition of risk: the variance (or standard deviation) of returns. For a statistician, this is immediately familiar — risk is simply the second central moment of the return distribution.
Standard deviation σ is the square root of variance and is reported in the same units as returns (e.g., percent per year). An annualized standard deviation of 20% means the typical yearly return deviates about 20 percentage points from its mean.
In statistics, σ measures dispersion around the mean. In finance, σ measures risk. Same formula, different name. When a financial analyst says “volatility is 25%,” they literally mean σ = 0.25 annualized.
1.1 Why Markowitz Chose Variance
Markowitz needed a tractable, computable measure that could be plugged into an optimization framework. Variance has wonderful mathematical properties:
- Additivity under independence: Var(X + Y) = Var(X) + Var(Y) when X, Y are independent.
- Portfolio formula: Var(w'R) = w'Σw, a clean quadratic form in the weights.
- Sufficiency under normality: If returns are Gaussian, then (μ, σ²) fully characterize the distribution.
- Analytical tractability: Quadratic optimization has closed-form solutions.
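These properties are easy to verify numerically. The covariance matrix and weights below are made-up illustrative values, not estimates from data — a minimal sketch of the quadratic form w'Σw:

```python
import numpy as np

# Hypothetical 3-asset covariance matrix (annualized) and portfolio weights
Sigma = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.09, 0.02],
    [0.00, 0.02, 0.16],
])
w = np.array([0.5, 0.3, 0.2])

# Portfolio variance via the quadratic form w'Σw
port_var = w @ Sigma @ w
port_vol = np.sqrt(port_var)
print(f"Portfolio volatility: {port_vol:.2%}")

# With correlations below 1, this is less than the weighted average
# of the individual volatilities -- diversification in one line
avg_vol = w @ np.sqrt(np.diag(Sigma))
print(f"Weighted avg of individual vols: {avg_vol:.2%}")
```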
1.2 The Volatility Calculation in Practice
Given a series of daily log-returns r₁, r₂, …, r_T, the sample standard deviation is:

σ̂ = √[ (1/(T − 1)) · Σₜ₌₁ᵀ (rₜ − r̄)² ]

where r̄ is the sample mean. To annualize, multiply by √252 (the typical number of trading days per year): σ_annual = σ_daily × √252.
```python
import numpy as np
import yfinance as yf

# Download daily prices for a stock
# (auto_adjust=False keeps the "Adj Close" column; newer yfinance
#  versions adjust prices in place by default)
data = yf.download("AAPL", start="2020-01-01", end="2024-01-01",
                   auto_adjust=False)
prices = data["Adj Close"].squeeze()  # collapse to a Series if needed

# Compute daily log returns
log_returns = np.log(prices / prices.shift(1)).dropna()

# Daily and annualized volatility
daily_vol = log_returns.std()
annual_vol = daily_vol * np.sqrt(252)

print(f"Daily volatility: {daily_vol:.4f}")
print(f"Annualized volatility: {annual_vol:.4f}")
print(f"Annualized volatility: {annual_vol*100:.2f}%")
```
The √252 scaling assumes returns are i.i.d. — a strong assumption. If returns are positively autocorrelated, the true annualized volatility is higher than σ_daily × √252. If negatively autocorrelated (mean-reverting), it is lower. Always check the autocorrelation structure before blindly scaling.
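A quick simulation makes the point concrete. The AR(1) returns below are synthetic, with an arbitrary daily shock size; the comparison is between naive √252 scaling and the standard deviation of actual annual sums:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n, sigma_eps=0.01):
    """Simulate AR(1) daily returns: r_t = phi * r_{t-1} + eps_t."""
    r = np.zeros(n)
    eps = rng.normal(0, sigma_eps, n)
    for t in range(1, n):
        r[t] = phi * r[t - 1] + eps[t]
    return r

# Compare naive sqrt(252) scaling with the std of actual annual sums
results = {}
for phi in (-0.2, 0.0, 0.2):
    r = simulate_ar1(phi, 252 * 2000)           # 2000 simulated "years"
    naive = r.std() * np.sqrt(252)              # i.i.d. scaling
    annual_std = r.reshape(-1, 252).sum(axis=1).std()
    results[phi] = (naive, annual_std)
    print(f"phi={phi:+.1f}  naive={naive:.4f}  true annual std={annual_std:.4f}")
```

Positive autocorrelation makes the naive number an underestimate; mean reversion makes it an overestimate.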
2. The Asymmetry Problem: Variance Is Symmetric, Risk Isn't
Here is the fundamental critique of variance as a risk measure: variance treats upside and downside deviations equally. A stock that surges 30% above its mean contributes just as much to variance as one that drops 30% below. But no investor lies awake at night worrying about unexpectedly large gains.
2.1 Loss Aversion and Prospect Theory
Kahneman and Tversky's prospect theory (1979) showed that people feel losses roughly twice as intensely as equivalent gains. The value function is:
- Concave for gains (diminishing marginal utility of gains)
- Convex for losses (increasing marginal pain of losses)
- Steeper for losses than gains (loss aversion coefficient λ ≈ 2.25)
In statistical decision theory, symmetric loss functions (like squared error) penalize over- and under-prediction equally. But asymmetric loss functions — like LINEX (linear-exponential) loss — penalize one direction more heavily. Risk measurement in finance is precisely a case where asymmetric loss functions are appropriate.
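As a sketch, the Tversky–Kahneman value function with their commonly cited parameter estimates (curvature α ≈ 0.88, loss aversion λ ≈ 2.25) can be coded directly — the specific functional form and numbers are taken from the prospect-theory literature, not from this module's data:

```python
import numpy as np

def prospect_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains,
    convex and steeper (by a factor of lam) for losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**alpha)

gain = float(prospect_value(100))    # felt value of a 100-unit gain
loss = float(prospect_value(-100))   # felt value of a 100-unit loss
print(f"v(+100) = {gain:.1f}, v(-100) = {loss:.1f}")
print(f"The loss is felt {abs(loss)/gain:.2f}x as intensely as the gain")
```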
2.2 Return Distributions Are Not Gaussian
If returns were truly Gaussian, variance would be sufficient — the distribution is fully parameterized by (μ, σ²), and knowing variance tells you everything about tail behavior. But empirical return distributions exhibit:
- Excess kurtosis: Tails are heavier than Gaussian (“fat tails”). For the S&P 500, the kurtosis of daily returns is typically 8–12, versus 3 for a normal distribution.
- Negative skewness: Large drops are more common than large rallies, especially at the index level.
- Volatility clustering: Periods of high volatility cluster together (ARCH/GARCH effects).
```python
from scipy import stats

# Compute higher moments of the return distribution
skewness = log_returns.skew()
kurtosis = log_returns.kurtosis()  # excess kurtosis (pandas default)

print(f"Skewness: {skewness:.4f}")
print(f"Excess Kurtosis: {kurtosis:.4f}")
print("(Gaussian would be: skew=0, kurtosis=0)")

# Jarque-Bera test for normality
jb_stat, jb_pval = stats.jarque_bera(log_returns)
print(f"\nJarque-Bera statistic: {jb_stat:.2f}")
print(f"p-value: {jb_pval:.6f}")
if jb_pval < 0.05:
    print("=> Reject normality at 5% level")
```
Many financial models assume normality because it makes the math clean. But with excess kurtosis of 10+, the probability of a “3-sigma event” is far higher than the Gaussian model predicts. Moves during the 2007–2008 financial crisis were famously described as “25-sigma events” under Gaussian assumptions — an absurdity that reveals the model's failure, not the event's impossibility.
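To see how much the Gaussian understates tail risk, compare the probability of a 3-sigma move under a normal versus a unit-variance Student-t with df = 5 (excess kurtosis 6) — an illustrative fat-tailed choice, not a fitted model:

```python
import numpy as np
from scipy.stats import norm, t

p_norm = 2 * norm.sf(3)  # two-sided 3-sigma probability under the Gaussian

# Student-t with df=5, rescaled to unit variance (Var of t(df) is df/(df-2))
df = 5
scale = np.sqrt((df - 2) / df)
p_t = 2 * t.sf(3 / scale, df)

print(f"P(|X| > 3 sigma), Gaussian: {p_norm:.5f}")
print(f"P(|X| > 3 sigma), t(5):     {p_t:.5f}")
print(f"Fat tails make 3-sigma days {p_t / p_norm:.1f}x more likely")
```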
3. Semi-Variance: Measuring Only Downside Risk
The simplest fix for variance's symmetry problem is to consider only deviations below a threshold. This gives us the semi-variance (or downside variance):

SV = (1/T) · Σₜ₌₁ᵀ min(rₜ − τ, 0)²

where τ is the target or threshold return (often the mean return, zero, or the risk-free rate). The downside deviation is √SV.
Sortino Ratio: An improvement on the Sharpe ratio that uses downside deviation instead of total standard deviation: Sortino = (Rp − rf) / DD. This rewards strategies that have high upside volatility but limited downside risk.
Semi-variance is conceptually related to truncated moments — it is the lower partial moment of order 2. If you condition on the event {R < τ}, the second moment of the truncated distribution equals the semi-variance divided by P(R < τ). This connects to the broader toolkit of conditional expectations that we'll see again with CVaR.
3.1 The Sortino Ratio vs. the Sharpe Ratio
The Sharpe ratio penalizes all volatility equally. Consider two funds:
| Metric | Fund A | Fund B |
|---|---|---|
| Mean annual return | 12% | 12% |
| Standard deviation | 20% | 20% |
| Downside deviation (target = 0%) | 15% | 8% |
| Sharpe ratio (rf = 2%) | 0.50 | 0.50 |
| Sortino ratio (rf = 2%) | 0.67 | 1.25 |
Both funds look identical under the Sharpe ratio, but Fund B concentrates its volatility on the upside. The Sortino ratio correctly identifies Fund B as less risky.
```python
def semi_variance(returns, target=0.0):
    """Compute semi-variance (downside only) relative to a target."""
    downside = np.minimum(returns - target, 0)
    return np.mean(downside ** 2)

def downside_deviation(returns, target=0.0):
    """Square root of semi-variance."""
    return np.sqrt(semi_variance(returns, target))

def sortino_ratio(returns, rf=0.0, target=0.0):
    """Sortino ratio: excess return per unit of downside risk."""
    excess = returns.mean() - rf / 252  # convert annual rf to daily
    dd = downside_deviation(returns, target)
    return (excess / dd) * np.sqrt(252)  # annualized

def sharpe_ratio(returns, rf=0.0):
    """Sharpe ratio: excess return per unit of total risk."""
    excess = returns.mean() - rf / 252
    return (excess / returns.std()) * np.sqrt(252)

# Compare
sharpe = sharpe_ratio(log_returns, rf=0.02)
sortino = sortino_ratio(log_returns, rf=0.02, target=0.0)
print(f"Sharpe Ratio: {sharpe:.4f}")
print(f"Sortino Ratio: {sortino:.4f}")
```
4. Value at Risk (VaR): The Quantile of the Loss Distribution
Value at Risk answers a seemingly simple question: What is the maximum loss we can expect over a given time horizon at a given confidence level?
Value at Risk (VaR): A quantile of the loss distribution. The 95% VaR is the loss level such that there is only a 5% probability of exceeding it — the 95th percentile of losses, or equivalently the negated 5th percentile of returns. If the 1-day 95% VaR is $1 million, then on 95% of days, losses will not exceed $1 million.
VaR is literally a quantile. If L is the loss random variable and α is the tail probability (5% for a 95% VaR), then VaR_α = F_L⁻¹(1 − α), the inverse CDF of losses. Every concept you know about quantile estimation, confidence intervals for quantiles, and the Bahadur representation applies directly.
4.1 Three Methods for Computing VaR
Historical VaR
The simplest approach: take your historical return series, sort it, and read off the relevant quantile (for 95% VaR, negate the 5th percentile of returns). No distributional assumptions required.
- Pros: Model-free, captures fat tails and skewness automatically.
- Cons: Limited by historical sample size; a 1% VaR from 250 days of data is determined by just 2–3 observations.
Parametric (Gaussian) VaR
Assume returns are normally distributed. Then VaR is simply a function of μ and σ:

VaR_α = −(μ + z_α · σ)

where z_α is the standard normal quantile. For 95% VaR, z₀.₀₅ ≈ −1.645, so VaR ≈ 1.645σ − μ.
- Pros: Simple, requires only μ and σ.
- Cons: Assumes normality — underestimates tail risk when returns have fat tails.
Monte Carlo VaR
Simulate many possible future return paths from a fitted model (e.g., a GARCH model or a multivariate distribution), compute the portfolio loss for each path, and take the α-th percentile of the simulated loss distribution.
- Pros: Can handle any distributional assumption, non-linear portfolios, path dependence.
- Cons: Computationally intensive; results depend on the assumed model.
```python
from scipy.stats import norm

def historical_var(returns, alpha=0.05):
    """Historical VaR: the negated alpha-quantile of returns."""
    return -np.percentile(returns, alpha * 100)

def parametric_var(returns, alpha=0.05):
    """Parametric (Gaussian) VaR."""
    mu = returns.mean()
    sigma = returns.std()
    return -(mu + norm.ppf(alpha) * sigma)

def monte_carlo_var(returns, alpha=0.05, n_simulations=100_000):
    """Monte Carlo VaR assuming Gaussian returns."""
    mu = returns.mean()
    sigma = returns.std()
    simulated = np.random.normal(mu, sigma, n_simulations)
    return -np.percentile(simulated, alpha * 100)

# Compute all three at the 95% confidence level (alpha = 0.05)
h_var = historical_var(log_returns.values, 0.05)
p_var = parametric_var(log_returns.values, 0.05)
mc_var = monte_carlo_var(log_returns.values, 0.05)

print("1-day 95% VaR (as positive loss):")
print(f"  Historical:  {h_var:.4f} ({h_var*100:.2f}%)")
print(f"  Parametric:  {p_var:.4f} ({p_var*100:.2f}%)")
print(f"  Monte Carlo: {mc_var:.4f} ({mc_var*100:.2f}%)")
```
VaR tells you nothing about what happens beyond the quantile. A 95% VaR of 3% means that on the worst 5% of days, you lose at least 3% — but you could lose 5%, 10%, or 50%. VaR is like saying “you'll pass the exam 95% of the time” without specifying how badly you fail the other 5%.
4.2 VaR Is Not Subadditive
A well-behaved risk measure ρ should satisfy subadditivity: the risk of a combined portfolio should not exceed the sum of individual risks. Mathematically:

ρ(X + Y) ≤ ρ(X) + ρ(Y)
Diversification should reduce risk, and a subadditive measure reflects this. VaR, in general, does not satisfy subadditivity. It is possible to construct examples where combining two positions actually increases VaR beyond the sum of individual VaRs. This was a major theoretical critique of VaR in the risk management literature.
In statistics, this connects to the theory of coherent risk measures (Artzner et al., 1999). A coherent measure must be monotone, translation-invariant, positively homogeneous, and subadditive. VaR fails the last property. Variance, despite its other shortcomings, is subadditive (in fact, additive under independence).
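A classic style of counterexample can be simulated directly. The numbers below are illustrative, not drawn from any real bond: two independent bonds that each default with 4% probability — below the 5% VaR tail — so each looks riskless to VaR, while the combined portfolio does not:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Two independent bonds (illustrative numbers): each defaults with
# probability 4%, losing 100; otherwise it pays a coupon of 2.
# Loss convention: positive numbers are losses.
def bond_losses(n):
    default = rng.random(n) < 0.04
    return np.where(default, 100.0, -2.0)

loss_a, loss_b = bond_losses(n), bond_losses(n)

def value_at_risk(losses, alpha=0.05):
    """Historical VaR: the (1 - alpha)-quantile of losses."""
    return np.percentile(losses, 100 * (1 - alpha))

var_a, var_b = value_at_risk(loss_a), value_at_risk(loss_b)
var_ab = value_at_risk(loss_a + loss_b)

# Each bond alone: default prob (4%) is below the 5% tail, so VaR sees
# no risk. Combined: P(at least one default) ~ 7.8% > 5%, so VaR jumps.
print(f"VaR(A) = {var_a:.0f},  VaR(B) = {var_b:.0f},  VaR(A+B) = {var_ab:.0f}")
print(f"Subadditive here? {var_ab <= var_a + var_b}")
```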
5. CVaR / Expected Shortfall: What Happens in the Tail
Conditional Value at Risk (CVaR), also called Expected Shortfall (ES), addresses VaR's blind spot by asking: given that we are in the worst α% of outcomes, what is the expected loss?
CVaR is a conditional expectation — specifically, E[L | L > VaR_α], the expected loss given that the loss exceeds the VaR threshold. If you've worked with truncated distributions or tail conditional expectations, you already understand CVaR conceptually. For a Gaussian loss distribution with mean μ and standard deviation σ, CVaRα = μ + σ ⋅ φ(zα) / α, where φ is the standard normal PDF.
5.1 Why CVaR Is Superior to VaR
| Property | VaR | CVaR |
|---|---|---|
| Interpretation | Threshold loss at confidence level | Average loss in the tail beyond VaR |
| Subadditive? | No (in general) | Yes (always) |
| Coherent risk measure? | No | Yes |
| Sensitivity to tail shape | None (it's a single quantile) | Full (averages over the entire tail) |
| Convexity (for optimization) | Non-convex | Convex |
| Regulatory acceptance | Basel II, Basel III (being phased out) | Basel III.1 / FRTB (the new standard) |
```python
def historical_cvar(returns, alpha=0.05):
    """CVaR: average of returns below the alpha-quantile, negated."""
    var = np.percentile(returns, alpha * 100)
    tail_losses = returns[returns <= var]
    return -tail_losses.mean()

def parametric_cvar(returns, alpha=0.05):
    """Parametric CVaR under a Gaussian assumption."""
    mu = returns.mean()
    sigma = returns.std()
    z = norm.ppf(alpha)
    # E[R | R <= alpha-quantile] = mu - sigma * pdf(z) / alpha;
    # negate to express as a positive loss
    return -(mu - sigma * norm.pdf(z) / alpha)

h_cvar = historical_cvar(log_returns.values, 0.05)
p_cvar = parametric_cvar(log_returns.values, 0.05)

print("1-day 95% CVaR (Expected Shortfall):")
print(f"  Historical: {h_cvar:.4f} ({h_cvar*100:.2f}%)")
print(f"  Parametric: {p_cvar:.4f} ({p_cvar*100:.2f}%)")
print(f"\nNote: CVaR is always >= VaR")
print(f"  VaR:  {h_var:.4f}")
print(f"  CVaR: {h_cvar:.4f}")
print(f"  Ratio CVaR/VaR: {h_cvar/h_var:.2f}")
```
For a zero-mean normal distribution, CVaR5% / VaR5% ≈ 1.25. For fat-tailed distributions, this ratio can be much larger (2.0 or more). The ratio itself is a useful diagnostic for tail heaviness: the bigger it is, the more dangerous the tail.
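This diagnostic is easy to compute. Below, the analytic zero-mean normal ratio is compared with Monte Carlo estimates for Student-t distributions — an illustrative fat-tailed family, not a fitted model:

```python
import numpy as np
from scipy.stats import norm, t

alpha = 0.05

# Analytic ratio for a zero-mean Gaussian (both in units of sigma)
z = norm.ppf(alpha)
var_n = -z
cvar_n = norm.pdf(z) / alpha
print(f"Normal:   CVaR/VaR = {cvar_n / var_n:.3f}")

# Monte Carlo estimates for fat-tailed Student-t distributions
rng = np.random.default_rng(1)
ratios = {}
for df in (3, 5, 10):
    x = t.rvs(df, size=1_000_000, random_state=rng)
    var_t = -np.percentile(x, 100 * alpha)
    cvar_t = -x[x <= -var_t].mean()
    ratios[df] = cvar_t / var_t
    print(f"t(df={df:2d}): CVaR/VaR = {ratios[df]:.3f}")
```

The heavier the tail (smaller df), the larger the ratio.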
6. Maximum Drawdown: A Path-Dependent Risk Measure
All the measures above are distributional — they depend on the distribution of returns, not the sequence. Maximum drawdown (MDD) is fundamentally different: it is a path-dependent risk measure that captures the largest peak-to-trough decline in cumulative portfolio value.
Maximum Drawdown (MDD): The largest percentage decline from a historical peak. If a portfolio grows from $100 to $150, then falls to $90, the drawdown from the peak is ($150 − $90) / $150 = 40%. MDD captures the worst-case “buyer's remorse” — the maximum loss experienced by an investor who bought at the worst possible time.
Maximum drawdown is related to the theory of running maxima of stochastic processes. For a Brownian motion with drift μ and volatility σ, the distribution of the maximum drawdown is analytically tractable. MDD is also a functional of the cumulative-sum process: the drawdown at time t is the running maximum minus the current value, and MDD is the largest such gap over the whole path.
6.1 Why Practitioners Love Drawdowns
- Intuitive: “This fund fell 35% from peak” is immediately understandable.
- Path-dependent: Captures the investor's actual experience, not just statistical properties.
- Recovery time: A 50% drawdown requires a 100% gain to recover — drawdowns reveal the asymmetry of compounding.
- Behavioral relevance: Investors often abandon strategies during drawdowns, locking in losses.
| Drawdown | Gain Needed to Recover |
|---|---|
| 10% | 11.1% |
| 20% | 25.0% |
| 30% | 42.9% |
| 40% | 66.7% |
| 50% | 100.0% |
| 60% | 150.0% |
| 75% | 300.0% |
| 90% | 900.0% |
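The table above follows from a one-line identity: after a fractional drawdown d, remaining wealth is (1 − d), and restoring it to 1 requires a gain of g = d / (1 − d):

```python
def gain_to_recover(drawdown):
    """Gain required to recover from a fractional drawdown d: d / (1 - d)."""
    return drawdown / (1 - drawdown)

for d in (0.10, 0.20, 0.30, 0.50, 0.90):
    print(f"{d:.0%} drawdown -> {gain_to_recover(d):.1%} gain to recover")
```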
```python
import pandas as pd
import matplotlib.pyplot as plt

def compute_drawdowns(prices):
    """Compute the drawdown series and max drawdown."""
    cumulative_max = prices.cummax()
    drawdown = (prices - cumulative_max) / cumulative_max
    max_drawdown = drawdown.min()
    return drawdown, max_drawdown

# Compute drawdown series
dd_series, mdd = compute_drawdowns(prices)
print(f"Maximum Drawdown: {mdd:.4f} ({mdd*100:.2f}%)")

# Find the trough date, then the peak that preceded it
trough_date = dd_series.idxmin()
peak_date = prices[:trough_date].idxmax()
print(f"Peak:   {peak_date.date()} (price: ${prices[peak_date]:.2f})")
print(f"Trough: {trough_date.date()} (price: ${prices[trough_date]:.2f})")

# Plot price vs. running maximum, and the drawdown series
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 8), sharex=True)

ax1.plot(prices, color='#1a365d', linewidth=1)
ax1.fill_between(prices.index, prices, prices.cummax(),
                 alpha=0.3, color='#e53e3e')
ax1.set_title("Price and Drawdown Periods")
ax1.set_ylabel("Price ($)")

ax2.fill_between(dd_series.index, dd_series, 0,
                 color='#e53e3e', alpha=0.5)
ax2.set_title("Drawdown Series")
ax2.set_ylabel("Drawdown (fraction)")
ax2.set_xlabel("Date")

plt.tight_layout()
plt.savefig("drawdown_analysis.png", dpi=150, bbox_inches='tight')
plt.show()
```
7. Comparison: Risk Measures at a Glance
Each risk measure captures different aspects of the return distribution. There is no single “best” measure — the appropriate choice depends on the context.
| Measure | What It Captures | Statistical Analogue | Strengths | Weaknesses |
|---|---|---|---|---|
| Variance / σ | Total dispersion | Second central moment | Tractable, portfolio formula, optimization-friendly | Symmetric, penalizes upside |
| Semi-variance | Downside dispersion | Truncated second moment | Captures asymmetry, intuitive | Harder to optimize, loses portfolio formula |
| VaR | Threshold loss at α | Quantile function | Single number, regulatory standard | Not subadditive, blind to tail shape |
| CVaR / ES | Average tail loss | Conditional expectation | Coherent, convex, tail-sensitive | Harder to estimate accurately, requires more data |
| Max Drawdown | Worst peak-to-trough loss | Running max of stochastic process | Path-dependent, behaviorally relevant | Single event, hard to compare across time periods |
8. Putting It All Together: A Complete Risk Dashboard
Let's build a comprehensive risk analysis that computes all the measures we've discussed for a portfolio of stocks.
```python
import numpy as np
import pandas as pd
import yfinance as yf
from scipy.stats import norm
import matplotlib.pyplot as plt

# ── Download data ──────────────────────────────────────────
tickers = ["AAPL", "MSFT", "GOOGL", "AMZN", "JPM"]
data = yf.download(tickers, start="2019-01-01", end="2024-01-01",
                   auto_adjust=False)["Adj Close"]
log_returns = np.log(data / data.shift(1)).dropna()

# ── Equal-weight portfolio ─────────────────────────────────
weights = np.array([1/len(tickers)] * len(tickers))
portfolio_returns = log_returns @ weights

# ── Risk Measures Function ─────────────────────────────────
def full_risk_report(returns, name="Portfolio", alpha=0.05):
    """Compute a comprehensive set of risk measures."""
    mu = returns.mean() * 252
    sigma = returns.std() * np.sqrt(252)

    # Semi-variance / downside deviation (target = 0)
    dd_daily = np.sqrt(np.mean(np.minimum(returns, 0)**2))
    dd = dd_daily * np.sqrt(252)  # annualized downside deviation

    # VaR
    var_hist = -np.percentile(returns, alpha * 100)
    var_param = -(returns.mean() + norm.ppf(alpha) * returns.std())

    # CVaR
    threshold = np.percentile(returns, alpha * 100)
    tail = returns[returns <= threshold]
    cvar_hist = -tail.mean()
    z = norm.ppf(alpha)
    cvar_param = -(returns.mean() - returns.std() * norm.pdf(z) / alpha)

    # Maximum Drawdown (exponentiate cumulative log returns -> value path)
    cum_value = np.exp(returns.cumsum())
    running_max = cum_value.cummax()
    drawdown = (cum_value - running_max) / running_max
    max_dd = drawdown.min()

    # Ratios (2% annual risk-free rate, converted to daily)
    rf_daily = 0.02 / 252
    sharpe = (returns.mean() - rf_daily) / returns.std() * np.sqrt(252)
    sortino = ((returns.mean() - rf_daily) / dd_daily * np.sqrt(252)
               if dd_daily > 0 else np.inf)

    print(f"\n{'='*55}")
    print(f"  Risk Report: {name}")
    print(f"{'='*55}")
    print(f"  Annualized Return:       {mu*100:>8.2f}%")
    print(f"  Annualized Volatility:   {sigma*100:>8.2f}%")
    print(f"  Downside Deviation:      {dd*100:>8.2f}%")
    print(f"  Sharpe Ratio:            {sharpe:>8.4f}")
    print(f"  Sortino Ratio:           {sortino:>8.4f}")
    print(f"  {'─'*53}")
    print(f"  1-day {int((1-alpha)*100)}% VaR (Hist):   {var_hist*100:>8.4f}%")
    print(f"  1-day {int((1-alpha)*100)}% VaR (Param):  {var_param*100:>8.4f}%")
    print(f"  1-day {int((1-alpha)*100)}% CVaR (Hist):  {cvar_hist*100:>8.4f}%")
    print(f"  1-day {int((1-alpha)*100)}% CVaR (Param): {cvar_param*100:>8.4f}%")
    print(f"  CVaR / VaR Ratio:        {cvar_hist/var_hist:>8.4f}")
    print(f"  {'─'*53}")
    print(f"  Maximum Drawdown:        {max_dd*100:>8.2f}%")
    print(f"  Skewness:                {returns.skew():>8.4f}")
    print(f"  Excess Kurtosis:         {returns.kurtosis():>8.4f}")
    print(f"{'='*55}")

    return {
        'return': mu, 'vol': sigma, 'dd': dd,
        'sharpe': sharpe, 'sortino': sortino,
        'var_hist': var_hist, 'cvar_hist': cvar_hist,
        'max_dd': max_dd, 'skew': returns.skew(),
        'kurtosis': returns.kurtosis()
    }

# ── Run report for each stock and the portfolio ────────────
results = {}
for ticker in tickers:
    results[ticker] = full_risk_report(log_returns[ticker], name=ticker)
results['EW Portfolio'] = full_risk_report(portfolio_returns,
                                           name="Equal-Weight Portfolio")

# ── Summary comparison table ───────────────────────────────
summary = pd.DataFrame(results).T
summary.columns = [
    'Ann. Return', 'Ann. Vol', 'Down. Dev.',
    'Sharpe', 'Sortino', 'VaR (1d)', 'CVaR (1d)',
    'Max DD', 'Skewness', 'Kurtosis'
]
print("\n\nSummary Comparison Table:")
print(summary.round(4).to_string())
```
Notice that the equal-weight portfolio typically has lower volatility, VaR, CVaR, and maximum drawdown than any individual stock. This is diversification at work — the same variance-reduction principle that makes averaging estimators more efficient. Portfolio volatility is never greater than the weighted average of individual volatilities, and it is strictly smaller whenever pairwise correlations are below 1.
9. Risk Is Not Static: Rolling Measures and Regime Changes
All the measures above assume a stationary return distribution. In practice, risk varies dramatically over time — volatility clusters, correlations spike during crises, and tail risk waxes and wanes.
9.1 Rolling Volatility and VaR
```python
# Rolling risk measures with a 60-day window
window = 60
rolling_vol = portfolio_returns.rolling(window).std() * np.sqrt(252)
rolling_var = portfolio_returns.rolling(window).apply(
    lambda x: -np.percentile(x, 5), raw=True
)
rolling_cvar = portfolio_returns.rolling(window).apply(
    lambda x: -x[x <= np.percentile(x, 5)].mean(), raw=True
)

fig, axes = plt.subplots(3, 1, figsize=(12, 10), sharex=True)

axes[0].plot(rolling_vol, color='#1a365d', linewidth=1)
axes[0].set_title(f"{window}-Day Rolling Annualized Volatility")
axes[0].set_ylabel("Volatility")
axes[0].axhline(y=portfolio_returns.std()*np.sqrt(252),
                color='red', linestyle='--', alpha=0.5,
                label='Full-sample vol')
axes[0].legend()

axes[1].plot(rolling_var, color='#e53e3e', linewidth=1)
axes[1].set_title(f"{window}-Day Rolling 95% VaR (1-day)")
axes[1].set_ylabel("VaR (loss)")

axes[2].plot(rolling_cvar, color='#d69e2e', linewidth=1)
axes[2].set_title(f"{window}-Day Rolling 95% CVaR (1-day)")
axes[2].set_ylabel("CVaR (loss)")

plt.tight_layout()
plt.savefig("rolling_risk.png", dpi=150, bbox_inches='tight')
plt.show()
```
Rolling windows are the financial analogue of local estimation or kernel smoothing. The window size trades off bias and variance — short windows are noisy but responsive; long windows are smooth but lagged. The optimal window depends on the degree of nonstationarity, exactly as bandwidth selection depends on the smoothness of the underlying function.
9.2 The Volatility of Volatility
If volatility itself is random (which it is — see GARCH models), then point estimates of risk measures have their own uncertainty. The standard error of the sample standard deviation under normality is approximately:

SE(σ̂) ≈ σ / √(2T)

so even 1,000 observations pin down σ only to within about 2%.
With fat-tailed distributions, the standard error is even larger. This means risk estimates themselves carry substantial estimation risk, especially for tail measures like VaR and CVaR which depend on very few observations.
A 1% VaR estimated from one year (250 observations) relies on roughly 2.5 data points in the tail. The confidence interval around this estimate is enormous. When someone quotes a precise VaR number like “$1,234,567.89,” the false precision masks massive estimation uncertainty. Always think about the standard error of your risk estimates.
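A bootstrap makes that uncertainty visible. The sketch below redefines a small historical_var helper on simulated Gaussian returns (purely illustrative parameters), so the only uncertainty on display is sampling error:

```python
import numpy as np

rng = np.random.default_rng(7)

# One simulated "year" of daily returns -- Gaussian purely for illustration
returns = rng.normal(0.0005, 0.01, 250)

def historical_var(x, alpha=0.05):
    return -np.percentile(x, 100 * alpha)

point = historical_var(returns)

# Nonparametric bootstrap: resample the year with replacement
boot = np.array([
    historical_var(rng.choice(returns, size=len(returns), replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"95% VaR point estimate: {point:.4f}")
print(f"Bootstrap 95% CI:       [{lo:.4f}, {hi:.4f}]")
print(f"CI width / estimate:    {(hi - lo) / point:.1%}")
```

Even in this clean setting, the interval is wide relative to the point estimate — real, fat-tailed data only makes it wider.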
10. Chapter Summary
Risk in finance started as variance — a measure you know intimately from statistics. But financial risk is richer and more nuanced than simple dispersion:
- Variance treats upside and downside symmetrically, which conflicts with how humans perceive risk.
- Semi-variance fixes the asymmetry by focusing only on downside deviations.
- VaR is a quantile — it answers “how bad can it get?” at a given confidence level, but says nothing about what happens in the tail.
- CVaR (Expected Shortfall) is the conditional expectation in the tail — a coherent risk measure that is replacing VaR in regulatory frameworks.
- Maximum drawdown captures the worst cumulative decline, reflecting the actual investor experience.
- All risk measures are estimates with their own sampling uncertainty, and they change over time.
As a statistician, you are uniquely equipped to understand that risk measures are just statistics of the return distribution — quantiles, moments, conditional expectations, running extremes. The insight you bring is that all of these are estimated quantities, subject to sampling error, model misspecification, and nonstationarity. Healthy skepticism about point estimates of risk is one of the most valuable things you can contribute to financial practice.