Learn Without Walls

Module 9: The Efficient Market Hypothesis as a Statistical Claim

Testing whether returns are predictable using your time series toolkit

Part II of 5 · Module 9 of 22

1. What the Efficient Market Hypothesis Actually Says

The Efficient Market Hypothesis (EMH), formalized by Eugene Fama (1970), is one of the most debated propositions in finance. It is also one of the most misunderstood. Let's state it precisely, as a statistician would.

Finance Term

Efficient Market Hypothesis (EMH): Asset prices fully reflect all available information. Formally, the conditional expectation of the asset's return, given the information set Ω_t, equals the equilibrium expected return: E[r_{t+1} | Ω_t] = r^{eq}_{t+1}.

In plain language: you cannot systematically earn risk-adjusted excess returns using the information in Ω_t. Any information you have is already priced in. The returns you observe are “fair” compensation for the risk you bear.

Stats Bridge

The EMH is a statement about conditional expectations. It says that excess returns (after adjusting for risk) are a martingale difference sequence with respect to the information filtration Ω_t: E[ε_{t+1} | Ω_t] = 0. This is equivalent to saying that excess returns are unpredictable given the information set. Every test of EMH is a test of whether some function of Ω_t predicts future excess returns.
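
To make the martingale-difference idea concrete, here is a minimal simulation sketch (an ARCH(1)-style process, chosen purely for illustration): the series has zero conditional mean, so its level is unpredictable, yet its square is clearly predictable. This is exactly the distinction the EMH turns on.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 100_000

# A martingale difference sequence that is NOT i.i.d.:
# eps_t = sigma_t * z_t, with sigma_t driven by the previous shock.
# E[eps_t | past] = 0, yet eps_t^2 is predictable from the past.
z = rng.standard_normal(T)
eps = np.zeros(T)
for t in range(1, T):
    sigma2_t = 0.5 + 0.5 * eps[t-1]**2    # conditional variance
    eps[t] = np.sqrt(sigma2_t) * z[t]

def acf1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

print(f"lag-1 ACF of eps:   {acf1(eps):+.4f}")    # close to zero
print(f"lag-1 ACF of eps^2: {acf1(eps**2):+.4f}")  # clearly positive
```

The level is (approximately) white noise while the square is strongly autocorrelated — the pattern we will see in real return data in Section 2.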

1.1 The Three Forms of EMH

The three forms differ in what information set Ω_t contains:

| Form | Information Set Ω_t | Implication | Statistical Test |
|---|---|---|---|
| Weak | Past prices and returns only | Technical analysis (chart patterns) cannot beat the market | Autocorrelation tests, variance ratio tests, runs tests |
| Semi-strong | All publicly available information | Fundamental analysis (earnings, ratios) cannot beat the market | Event studies, cross-sectional return predictability |
| Strong | All information (including insider information) | Even insiders cannot beat the market | Insider trading studies |

Key Insight

Most academic research supports the weak form and a qualified version of the semi-strong form. The strong form is universally rejected — insiders do earn excess returns, which is why insider trading is illegal. The interesting debate is about the semi-strong form: can publicly available information (like accounting ratios or analyst forecasts) predict risk-adjusted returns?

1.2 What EMH Does NOT Say

Three common misreadings are worth ruling out. EMH does not say that prices are always "correct": it says only that mispricings cannot be identified in advance using the information in Ω_t. It does not say that no investor ever beats the market: with thousands of funds, some will outperform for years by chance alone. And it does not say that prices should be stable: efficient prices move whenever new information arrives, so high volatility is perfectly consistent with efficiency.

2. Testing the Weak Form: Autocorrelation and Serial Dependence

The weak form of EMH implies that past returns should not predict future returns. If returns are an i.i.d. sequence (or more generally, a martingale difference sequence), then the autocorrelation at any lag should be zero.

ρ(k) = Corr(r_t, r_{t−k}) = 0    for all k ≥ 1

2.1 The Autocorrelation Function Test

Stats Bridge

This is a direct test of white noise. Under the null of no autocorrelation, the sample autocorrelation ρ̂(k) is approximately N(0, 1/T). The Ljung-Box Q-statistic tests whether the first K autocorrelations are jointly zero: Q = T(T+2) Σ_{k=1}^{K} ρ̂(k)² / (T−k) ~ χ²(K). This is a standard time series diagnostic you've used many times.

Python

import numpy as np
import pandas as pd
import yfinance as yf
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf
from statsmodels.stats.diagnostic import acorr_ljungbox

# ── Download S&P 500 returns ──────────────────────────────
data = yf.download("SPY", start="2010-01-01", end="2024-01-01",
                   auto_adjust=False)     # keep the "Adj Close" column
prices = data["Adj Close"].squeeze()      # Series, even if yf returns a 1-col frame
returns = prices.pct_change().dropna()
log_returns = np.log(prices / prices.shift(1)).dropna()

# ── Autocorrelation analysis ──────────────────────────────
max_lag = 20
acf_values, confint = acf(returns, nlags=max_lag, alpha=0.05)

print("Autocorrelation of Daily Returns (SPY):")
print(f"{'Lag':>4s} {'ACF':>8s} {'95% CI Low':>11s} {'95% CI High':>12s} {'Signif?':>8s}")
print(f"{'-'*45}")
for k in range(1, max_lag + 1):
    signif = "*" if abs(acf_values[k]) > 1.96 / np.sqrt(len(returns)) else ""
    print(f"{k:>4d} {acf_values[k]:>8.4f} {confint[k][0]:>11.4f} "
          f"{confint[k][1]:>12.4f} {signif:>8s}")

# ── Ljung-Box test ────────────────────────────────────────
lb_result = acorr_ljungbox(returns, lags=[5, 10, 20], return_df=True)
print("\nLjung-Box Q-test:")
print(lb_result)

# ── Plot ACF ──────────────────────────────────────────────
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))

# ACF of returns
from statsmodels.graphics.tsaplots import plot_acf
plot_acf(returns, lags=max_lag, ax=ax1, alpha=0.05)
ax1.set_title("ACF of Daily Returns")
ax1.set_xlabel("Lag (days)")

# ACF of absolute returns (volatility clustering proxy)
plot_acf(returns.abs(), lags=max_lag, ax=ax2, alpha=0.05)
ax2.set_title("ACF of |Returns| (Volatility Clustering)")
ax2.set_xlabel("Lag (days)")

plt.tight_layout()
plt.savefig("acf_analysis.png", dpi=150, bbox_inches='tight')
plt.show()
Key Insight

A crucial subtlety: daily return autocorrelations for liquid stocks and indices are tiny (typically |ρ(1)| < 0.05) and often statistically insignificant. But the autocorrelation of absolute returns or squared returns is large and highly significant, often persisting for weeks or months. Returns are approximately uncorrelated (consistent with weak-form efficiency), but they are not independent — the volatility process has strong memory. The EMH is about the conditional mean, not the conditional variance.

3. The Variance Ratio Test

The variance ratio test, introduced by Lo and MacKinlay (1988), is one of the most elegant tests of the random walk hypothesis. The idea is simple:

If prices follow a random walk, then the variance of k-period returns should be exactly k times the variance of 1-period returns. This follows from the independence assumption:

Var(r_t(k)) = k · Var(r_t(1))

where r_t(k) = r_t + r_{t−1} + … + r_{t−k+1}

The variance ratio is:

VR(k) = Var(r_t(k)) / [k · Var(r_t(1))]

Under the random walk null, VR(k) = 1 for all k.

| VR(k) Value | Interpretation | Implication |
|---|---|---|
| VR(k) = 1 | Random walk (returns are uncorrelated) | Consistent with weak-form EMH |
| VR(k) > 1 | Positive autocorrelation (momentum/trending) | Past winners continue winning |
| VR(k) < 1 | Negative autocorrelation (mean reversion) | Past losers tend to reverse |

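
The interpretations above can be checked on simulated data. The sketch below uses a plain variance-ratio estimator (simpler than the bias-corrected Lo-MacKinlay statistic implemented further down) on i.i.d. noise, a positively autocorrelated AR(1), and a negatively autocorrelated AR(1):

```python
import numpy as np

rng = np.random.default_rng(7)
T, k = 50_000, 10

def variance_ratio(r, k):
    """VR(k) = Var(k-period sum) / (k * Var(1-period value))."""
    r = np.asarray(r, dtype=float)
    var1 = np.var(r, ddof=1)
    rk = np.convolve(r, np.ones(k), mode='valid')   # overlapping k-sums
    return np.var(rk, ddof=1) / (k * var1)

iid = rng.standard_normal(T)           # random-walk increments
ar_pos = np.zeros(T)                   # momentum: phi = +0.2
ar_neg = np.zeros(T)                   # mean reversion: phi = -0.2
for t in range(1, T):
    ar_pos[t] = 0.2 * ar_pos[t-1] + rng.standard_normal()
    ar_neg[t] = -0.2 * ar_neg[t-1] + rng.standard_normal()

vr_iid = variance_ratio(iid, k)        # close to 1
vr_pos = variance_ratio(ar_pos, k)     # > 1 (trending)
vr_neg = variance_ratio(ar_neg, k)     # < 1 (mean-reverting)
print(f"VR({k}) iid:      {vr_iid:.3f}")
print(f"VR({k}) AR(+0.2): {vr_pos:.3f}")
print(f"VR({k}) AR(-0.2): {vr_neg:.3f}")
```

For an AR(1) with coefficient φ, VR(k) = 1 + 2 Σ_{j=1}^{k−1} (1 − j/k) φ^j, so φ = +0.2 pushes VR(10) well above 1 and φ = −0.2 pushes it well below.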
Stats Bridge

The variance ratio statistic is built from the same sample autocorrelations as the white-noise tests above: VR(k) − 1 is approximately a weighted average of the first k − 1 autocorrelations, so this is another test of white noise. Under the null, the test statistic is asymptotically standard normal. Lo and MacKinlay developed both homoscedasticity-assuming and heteroscedasticity-robust versions — always use the robust one for financial data.

Python

def variance_ratio_test(returns, k, robust=True):
    """
    Lo-MacKinlay Variance Ratio test.

    H0: VR(k) = 1 (random walk)
    H1: VR(k) != 1

    Parameters:
      returns: array of log returns
      k: holding period (e.g., 5 for weekly)
      robust: if True, use heteroscedasticity-robust version
    """
    T = len(returns)
    mu = returns.mean()

    # Variance of 1-period returns
    sigma2_1 = np.sum((returns - mu)**2) / (T - 1)

    # Variance of k-period returns
    k_returns = pd.Series(returns).rolling(k).sum().dropna().values
    sigma2_k = np.sum((k_returns - k * mu)**2) / (T - k)

    # Variance ratio
    vr = sigma2_k / (k * sigma2_1)

    if robust:
        # Heteroscedasticity-robust test statistic
        # (Lo-MacKinlay, 1988, Theorem 2)
        delta_j = np.zeros(k - 1)
        for j in range(1, k):
            num = np.sum(
                (returns[j:] - mu)**2 * (returns[:-j] - mu)**2
            )
            den = (np.sum((returns - mu)**2))**2
            delta_j[j-1] = T * num / den

        weights = np.array([2 * (k - j) / k for j in range(1, k)])
        theta = np.sum(weights**2 * delta_j)
        z_stat = (vr - 1) / np.sqrt(theta)
    else:
        # Under homoscedasticity (Lo-MacKinlay, 1988):
        # Var(VR(k)) ~ 2(2k-1)(k-1) / (3kT)
        z_stat = (vr - 1) / np.sqrt(2 * (2*k - 1) * (k - 1) / (3 * k * T))

    from scipy.stats import norm
    p_value = 2 * (1 - norm.cdf(abs(z_stat)))

    return vr, z_stat, p_value

# ── Run variance ratio tests for multiple horizons ────────
print("Variance Ratio Tests for S&P 500 (SPY)")
print(f"{'k':>4s} {'VR(k)':>8s} {'z-stat':>8s} {'p-value':>10s} {'Result':>15s}")
print(f"{'-'*48}")

for k in [2, 5, 10, 20, 40, 60, 120]:
    vr, z, p = variance_ratio_test(log_returns.values, k, robust=True)
    result = "Reject H0" if p < 0.05 else "Fail to reject"
    print(f"{k:>4d} {vr:>8.4f} {z:>8.4f} {p:>10.4f} {result:>15s}")

# ── Plot variance ratio vs k ─────────────────────────────
ks = range(2, 121)
vrs = [variance_ratio_test(log_returns.values, k, robust=True)[0]
       for k in ks]

plt.figure(figsize=(10, 6))
plt.plot(list(ks), vrs, color='#1a365d', linewidth=1.5)
plt.axhline(y=1, color='#e53e3e', linestyle='--', label='Random Walk (VR=1)')
plt.fill_between(list(ks),
                 [1 - 1.96*np.sqrt(2*(2*k-1)*(k-1)/(3*k*len(log_returns)))
                  for k in ks],
                 [1 + 1.96*np.sqrt(2*(2*k-1)*(k-1)/(3*k*len(log_returns)))
                  for k in ks],
                 alpha=0.2, color='#e53e3e', label='95% CI (homoscedastic)')
plt.xlabel("Holding Period k (days)")
plt.ylabel("Variance Ratio VR(k)")
plt.title("Variance Ratio Test: SPY Daily Returns")
plt.legend()
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.savefig("variance_ratio.png", dpi=150, bbox_inches='tight')
plt.show()

4. The Runs Test for Randomness

The runs test is a non-parametric test for randomness. A run is a consecutive sequence of returns with the same sign (all positive or all negative). Under randomness, the expected number and length of runs have known distributions.

Stats Bridge

The Wald-Wolfowitz runs test is a standard non-parametric test you may have encountered in your first statistics course. It tests whether a binary sequence is random by counting the total number of runs. Under H0 (randomness), the number of runs R is approximately normal with known mean and variance for large samples.

E[R] = 2 n₊ n₋ / (n₊ + n₋) + 1

Var(R) = 2 n₊ n₋ (2 n₊ n₋ − n₊ − n₋) / [(n₊ + n₋)² (n₊ + n₋ − 1)]

where n₊ and n₋ count the positive and negative returns.

Python

def runs_test(returns):
    """
    Wald-Wolfowitz runs test for randomness.
    Tests whether the sequence of positive/negative returns is random.
    """
    from scipy.stats import norm

    signs = np.sign(returns)
    signs = signs[signs != 0]  # remove zeros

    n_pos = np.sum(signs > 0)
    n_neg = np.sum(signs < 0)
    n = n_pos + n_neg

    # Count runs
    runs = 1
    for i in range(1, len(signs)):
        if signs.iloc[i] != signs.iloc[i-1]:
            runs += 1

    # Expected runs and variance under H0
    expected_runs = (2 * n_pos * n_neg) / n + 1
    var_runs = (2 * n_pos * n_neg * (2 * n_pos * n_neg - n)) / \
               (n**2 * (n - 1))

    z_stat = (runs - expected_runs) / np.sqrt(var_runs)
    p_value = 2 * (1 - norm.cdf(abs(z_stat)))

    return {
        'n_observations': n,
        'n_positive': n_pos,
        'n_negative': n_neg,
        'observed_runs': runs,
        'expected_runs': expected_runs,
        'z_statistic': z_stat,
        'p_value': p_value
    }

# Run the test
result = runs_test(returns)

print("Runs Test for Randomness (SPY Daily Returns)")
print(f"{'='*50}")
for key, val in result.items():
    if isinstance(val, float):
        print(f"  {key:20s}: {val:.4f}")
    else:
        print(f"  {key:20s}: {val}")

if result['p_value'] < 0.05:
    if result['z_statistic'] < 0:
        print("\n  => Fewer runs than expected: POSITIVE autocorrelation")
        print("     (Trending behavior / momentum)")
    else:
        print("\n  => More runs than expected: NEGATIVE autocorrelation")
        print("     (Mean-reverting behavior)")
else:
    print("\n  => Cannot reject randomness at 5% level")
    print("     (Consistent with weak-form EMH)")

5. Event Studies: Testing Semi-Strong Form Efficiency

The event study methodology, pioneered by Fama, Fisher, Jensen, and Roll (1969), is the workhorse for testing semi-strong efficiency. The logic is:

  1. Define the event (earnings announcement, merger, dividend change).
  2. Estimate normal returns using a model (e.g., CAPM) over an estimation window before the event.
  3. Compute abnormal returns: ARt = rt − E[rt | model].
  4. Aggregate: Cumulative Abnormal Return (CAR) = ∑ ARt over the event window.
  5. Test: Is the CAR significantly different from zero?
Stats Bridge

An event study is a difference-in-means test — or more precisely, a test of whether the residuals from a regression model are systematically non-zero around a specific date. The CAR is a cumulative sum of regression residuals, and its t-test is straightforward. If you've done a pre-post analysis or a difference-in-differences study, you've done an event study.

AR_t = r_t − (α̂ + β̂ · r_{m,t})

CAR(t₁, t₂) = Σ_{t=t₁}^{t₂} AR_t

t_CAR = CAR / (σ̂_AR · √L)

where L = t₂ − t₁ + 1 is the length of the event window.

5.1 What EMH Predicts for Event Studies

| Finding | Consistent with EMH? | Explanation |
|---|---|---|
| Price jumps immediately at announcement | Yes | New information incorporated instantly |
| Price drifts slowly after announcement | No | Post-Earnings Announcement Drift (PEAD) — the most robust anomaly |
| Price moves before announcement | Maybe | Could be information leakage, or market anticipation |
| No price reaction to announcement | Yes (if priced in) | Market already knew the information |

Python

import numpy as np
import pandas as pd
import yfinance as yf
import statsmodels.api as sm
import matplotlib.pyplot as plt   # needed for the event-study plots below

def simple_event_study(stock_ticker, market_ticker, event_date,
                       estimation_window=120, event_window=10):
    """
    Conduct a simple event study around a given date.

    Parameters:
      stock_ticker: ticker of the stock to study
      market_ticker: ticker of the market proxy
      event_date: the event date (string, 'YYYY-MM-DD')
      estimation_window: days before event for model estimation
      event_window: days before and after event to analyze
    """
    # Download data with buffer
    start = pd.Timestamp(event_date) - pd.Timedelta(days=estimation_window*2)
    end = pd.Timestamp(event_date) + pd.Timedelta(days=event_window*3)
    data = yf.download([stock_ticker, market_ticker],
                       start=start, end=end,
                       auto_adjust=False)["Adj Close"]
    rets = data.pct_change().dropna()

    # Find the event date in trading calendar
    event_ts = pd.Timestamp(event_date)
    trading_dates = rets.index
    event_idx = trading_dates.get_indexer([event_ts], method='nearest')[0]

    # Estimation window: [-estimation_window, -event_window-1] relative to event
    est_start = event_idx - estimation_window
    est_end = event_idx - event_window - 1

    # Event window: [-event_window, +event_window]
    evt_start = event_idx - event_window
    evt_end = event_idx + event_window

    # Estimate CAPM in estimation window
    est_data = rets.iloc[est_start:est_end+1]
    X_est = sm.add_constant(est_data[market_ticker])
    y_est = est_data[stock_ticker]
    model = sm.OLS(y_est, X_est).fit()
    alpha_hat = model.params['const']
    beta_hat = model.params[market_ticker]
    sigma_hat = model.resid.std()

    # Compute abnormal returns in event window
    evt_data = rets.iloc[evt_start:evt_end+1]
    expected_returns = alpha_hat + beta_hat * evt_data[market_ticker]
    abnormal_returns = evt_data[stock_ticker] - expected_returns

    # Cumulative abnormal returns
    car = abnormal_returns.cumsum()

    # Relative day index
    relative_days = range(-event_window, event_window + 1)

    # t-test for CAR
    L = len(abnormal_returns)
    car_total = car.iloc[-1]
    t_stat = car_total / (sigma_hat * np.sqrt(L))
    from scipy.stats import t as t_dist
    p_value = 2 * (1 - t_dist.cdf(abs(t_stat), model.df_resid))

    # Print results
    print(f"\nEvent Study: {stock_ticker} around {event_date}")
    print(f"{'='*55}")
    print(f"  Estimation window: {est_data.index[0].date()} to "
          f"{est_data.index[-1].date()}")
    print(f"  CAPM alpha: {alpha_hat:.6f}, beta: {beta_hat:.4f}")
    print(f"  Residual std: {sigma_hat:.6f}")
    print(f"  {'─'*53}")
    print(f"  Event window: [{-event_window}, +{event_window}] days")
    print(f"  CAR over event window: {car_total*100:.2f}%")
    print(f"  t-statistic: {t_stat:.4f}")
    print(f"  p-value: {p_value:.4f}")
    print(f"{'='*55}")

    # Plot
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 8))

    ax1.bar(relative_days[:len(abnormal_returns)],
            abnormal_returns.values * 100,
            color=['#38a169' if x > 0 else '#e53e3e'
                   for x in abnormal_returns.values],
            alpha=0.7)
    ax1.axvline(x=0, color='black', linestyle='--', linewidth=1.5,
                label='Event date')
    ax1.set_ylabel("Abnormal Return (%)")
    ax1.set_title(f"Abnormal Returns: {stock_ticker}")
    ax1.legend()
    ax1.grid(True, alpha=0.3)

    ax2.plot(relative_days[:len(car)], car.values * 100,
             'o-', color='#1a365d', linewidth=2)
    ax2.axhline(y=0, color='gray', linestyle='-', alpha=0.5)
    ax2.axvline(x=0, color='black', linestyle='--', linewidth=1.5,
                label='Event date')
    ax2.fill_between(relative_days[:len(car)], 0, car.values * 100,
                     alpha=0.2, color='#1a365d')
    ax2.set_ylabel("Cumulative Abnormal Return (%)")
    ax2.set_xlabel("Days Relative to Event")
    ax2.set_title(f"CAR: {stock_ticker} (t-stat = {t_stat:.2f})")
    ax2.legend()
    ax2.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig(f"event_study_{stock_ticker}.png", dpi=150,
                bbox_inches='tight')
    plt.show()

    return abnormal_returns, car

# Example: Apple earnings announcement
# (Use a known date - adjust as needed)
ar, car = simple_event_study(
    "AAPL", "SPY", "2023-10-26",  # Example earnings date
    estimation_window=120, event_window=10
)
Common Pitfall

The event study assumes that the CAPM (or whatever model you use for expected returns) is correctly specified during the event window. If the model is wrong, the “abnormal returns” you measure might just be model errors. This is the joint hypothesis problem in action: you're simultaneously testing EMH and your return model.

6. The Joint Hypothesis Problem

This is the deepest conceptual issue in testing the EMH, and it's essentially a model identification problem:

Key Insight

You can never test EMH alone. Every test of market efficiency is simultaneously a test of the model used to define “expected returns.” If you find that stocks with low P/E ratios earn high returns, there are two explanations: (1) the market is inefficient (P/E predicts mispricing), or (2) the market is efficient, and low P/E is a proxy for some risk that the CAPM doesn't capture. You cannot distinguish between these without knowing the “true” model of expected returns — which you don't.

Stats Bridge

This is the omitted variable bias problem from econometrics. If the true model has K factors but you only use K−1, the remaining factor's effect shows up in the alpha — making it look like there's a predictable excess return when there isn't. You can never be sure you've included all relevant factors. This is why “proving” market inefficiency is extraordinarily difficult.
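
The omitted-factor logic can be demonstrated in a few lines. The sketch below is entirely synthetic (the factor premia are hypothetical numbers chosen for illustration): returns are generated from a two-factor model with a true alpha of zero, then regressed on only one factor, and the estimated alpha absorbs the omitted factor's premium.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50_000

# True model: r = 0.8*f1 + 0.5*f2 + noise, with TRUE alpha = 0.
# Both factors carry (hypothetical) positive daily premia.
f1 = 0.0003 + 0.01 * rng.standard_normal(T)
f2 = 0.0004 + 0.01 * rng.standard_normal(T)
r = 0.8 * f1 + 0.5 * f2 + 0.005 * rng.standard_normal(T)

def ols_alpha(y, X):
    """Intercept from OLS of y on a constant and the columns of X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef[0]

alpha_full = ols_alpha(r, np.column_stack([f1, f2]))
alpha_omit = ols_alpha(r, f1.reshape(-1, 1))   # f2 omitted

print(f"annualized alpha, both factors: {alpha_full * 252:+.2%}")  # near zero
print(f"annualized alpha, f2 omitted:   {alpha_omit * 252:+.2%}")  # positive
```

With the omitted factor, the "anomalous" alpha is just the premium of the missing factor wearing a disguise — and nothing in the one-factor regression output tells you which interpretation is right.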

The joint hypothesis problem means that every “anomaly” (pattern in returns that seems to violate EMH) has two possible interpretations:

| Anomaly | Inefficiency Interpretation | Risk-Based Interpretation |
|---|---|---|
| Value premium (low P/E outperforms) | Market underprices boring/distressed firms | Value stocks are riskier (financial distress risk) |
| Size premium (small caps outperform) | Market neglects small firms | Small stocks have liquidity risk, higher beta in bad times |
| Momentum (past winners keep winning) | Investors under-react to information | Momentum is compensation for crash risk |
| Low volatility anomaly (safe stocks outperform) | Investors overpay for lottery-like stocks | Leverage constraints create demand for high-beta stocks |

7. Predictability Does Not Equal Profitability

Even if you can statistically predict returns (reject the null of no predictability), it does not follow that you can make money from the prediction. Several wedges stand between statistical significance and economic significance: transaction costs, bid-ask spreads, market impact, borrowing and shorting costs, and the risk the strategy itself bears. The simplest wedge, a fixed cost per trade, is often enough on its own:

Python

def economic_significance(returns, transaction_cost_bps=10):
    """
    Compare statistical significance of autocorrelation
    with economic significance after transaction costs.
    """
    # Strategy: buy if yesterday's return was positive,
    # sell if negative (momentum at daily frequency)
    signals = np.sign(returns.shift(1))
    strategy_returns = signals * returns

    # Remove first observation (no signal)
    strategy_returns = strategy_returns.dropna()

    # Gross performance
    gross_return = strategy_returns.mean() * 252
    gross_sharpe = strategy_returns.mean() / strategy_returns.std() * np.sqrt(252)

    # Transaction costs: the sign signal flips on roughly half of days,
    # and each flip is a two-way trade, so charging one one-way cost per
    # day is a reasonable approximation of average turnover
    tc_daily = transaction_cost_bps / 10000
    net_returns = strategy_returns - tc_daily
    net_return = net_returns.mean() * 252
    net_sharpe = net_returns.mean() / net_returns.std() * np.sqrt(252)

    # Autocorrelation test
    from scipy.stats import pearsonr
    corr, pval = pearsonr(returns.iloc[:-1], returns.iloc[1:])

    print(f"Daily Momentum Strategy Analysis")
    print(f"{'='*50}")
    print(f"  Lag-1 autocorrelation:   {corr:.6f}")
    print(f"  p-value:                 {pval:.4f}")
    print(f"  Statistically signif.?   {'Yes' if pval < 0.05 else 'No'}")
    print(f"  {'─'*48}")
    print(f"  Gross annual return:     {gross_return*100:.2f}%")
    print(f"  Gross Sharpe ratio:      {gross_sharpe:.4f}")
    print(f"  {'─'*48}")
    print(f"  Transaction cost:        {transaction_cost_bps} bps per trade")
    print(f"  Net annual return:       {net_return*100:.2f}%")
    print(f"  Net Sharpe ratio:        {net_sharpe:.4f}")
    print(f"  Economically signif.?    {'Yes' if net_return > 0 else 'No'}")
    print(f"{'='*50}")

economic_significance(returns, transaction_cost_bps=5)
print()
economic_significance(returns, transaction_cost_bps=10)
print()
economic_significance(returns, transaction_cost_bps=20)
Key Insight

Statistical significance at the 5% level requires an effect size proportional to 1/√T. With T = 3,500 daily observations, an autocorrelation as tiny as 0.03 can be “statistically significant.” But a 0.03 autocorrelation generates perhaps 50 basis points of annual return before costs — not enough to cover transaction costs for most strategies. The market can be “statistically inefficient” but “economically efficient” once you account for the costs of exploiting the inefficiency.
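
The 1/√T arithmetic is worth checking directly. The quick sketch below prints the significance threshold for the lag-1 autocorrelation at several sample sizes, and the annual cost drag implied by the per-trade costs used above (assuming one trade per day, as the strategy code does):

```python
import numpy as np

# Smallest sample autocorrelation that clears |rho_hat| > 1.96/sqrt(T)
for T in (250, 1000, 3500, 10_000):
    print(f"T = {T:>6d}: significance threshold = {1.96 / np.sqrt(T):.4f}")

# Annual cost drag of trading once per day at a fixed one-way cost
for tc_bps in (5, 10, 20):
    drag = 252 * tc_bps / 10_000
    print(f"{tc_bps:>3d} bps/trade, daily turnover: {drag:.1%} annual drag")
```

With T = 3,500 the threshold is about 0.033, while daily trading at even 5 bps per trade costs over 12% per year — the gap between statistical and economic significance in two loops.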

8. Beyond EMH: The Adaptive Market Hypothesis

Andrew Lo (2004) proposed the Adaptive Market Hypothesis (AMH) as a reconciliation between EMH and behavioral finance. The key idea is that efficiency is not an all-or-nothing property but a time-varying one: the degree of predictability depends on how many competitors are exploiting an opportunity, how large it is, and how quickly market participants adapt. Anomalies can emerge, be arbitraged away, and later reappear as conditions change.

Stats Bridge

The AMH is fundamentally about nonstationarity. Rather than testing whether ρ(1) = 0 over the entire sample, we should test whether ρ(1) varies over time. Rolling-window autocorrelation tests, structural break tests, and regime-switching models are the appropriate statistical tools. The market might be efficient in one regime and inefficient in another — the stationary tests we've been running may miss this entirely.

Python

# Rolling autocorrelation to test the AMH
window = 252  # 1-year rolling window

rolling_acf1 = returns.rolling(window).apply(
    lambda x: x.autocorr(lag=1), raw=False
)

# Rolling variance ratio (k=5)
def rolling_vr(returns, window, k=5):
    vr_series = []
    for end in range(window, len(returns)):
        chunk = returns.iloc[end-window:end].values
        var_1 = np.var(chunk, ddof=1)
        k_rets = pd.Series(chunk).rolling(k).sum().dropna().values
        var_k = np.var(k_rets, ddof=1)
        vr = var_k / (k * var_1) if var_1 > 0 else 1
        vr_series.append(vr)
    return pd.Series(vr_series, index=returns.index[window:])

rolling_vr5 = rolling_vr(returns, window, k=5)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8), sharex=True)

ax1.plot(rolling_acf1, color='#1a365d', linewidth=1)
ax1.axhline(y=0, color='#e53e3e', linestyle='--')
ax1.fill_between(rolling_acf1.index,
                 -1.96/np.sqrt(window), 1.96/np.sqrt(window),
                 alpha=0.2, color='#e53e3e', label='95% CI')
ax1.set_ylabel("Lag-1 Autocorrelation")
ax1.set_title("Rolling 1-Year Autocorrelation of SPY Returns")
ax1.legend()
ax1.grid(True, alpha=0.3)

ax2.plot(rolling_vr5, color='#1a365d', linewidth=1)
ax2.axhline(y=1, color='#e53e3e', linestyle='--',
            label='Random Walk (VR=1)')
ax2.set_ylabel("Variance Ratio VR(5)")
ax2.set_title("Rolling 1-Year Variance Ratio (k=5)")
ax2.legend()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig("adaptive_market.png", dpi=150, bbox_inches='tight')
plt.show()

9. Chapter Summary

The Efficient Market Hypothesis is a statistical claim about the unpredictability of risk-adjusted returns. Testing it requires the full arsenal of time series analysis:

  1. Weak form: autocorrelation and Ljung-Box tests, variance ratio tests, and runs tests on past returns.
  2. Semi-strong form: event studies of abnormal returns and CARs around announcements.
  3. The joint hypothesis problem: every efficiency test is simultaneously a test of your expected-return model.
  4. Predictability vs. profitability: statistical significance does not survive transaction costs by default.
  5. The adaptive view: efficiency varies over time, so rolling-window and regime-aware tests matter.

Stats Bridge

The EMH debate is fundamentally about predictability — can you build a forecasting model for returns that has a positive out-of-sample R2 after accounting for transaction costs and risk? Your training in time series analysis, forecasting evaluation, cross-validation, and multiple testing correction gives you the exact tools to engage rigorously with this question. The answer, for what it's worth, is: slightly, sometimes, for some assets, if you're careful.
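
As a parting sketch of what "slightly, sometimes" looks like, here is a minimal out-of-sample R² calculation on simulated returns with a deliberately tiny AR(1) component (φ = 0.05, in-sample R² of roughly 0.25%). The benchmark forecast is the historical mean, a common convention in the return-predictability literature; everything here is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi, sigma = 4000, 0.05, 0.01

# Simulated daily returns with a tiny AR(1) component
r = np.zeros(T)
for t in range(1, T):
    r[t] = phi * r[t-1] + sigma * rng.standard_normal()

# Expanding-window one-step forecasts over the second half of the sample
start = T // 2
e_model, e_bench = [], []
for t in range(start, T):
    y, x = r[1:t], r[:t-1]
    xd = x - x.mean()
    beta = np.dot(xd, y - y.mean()) / np.dot(xd, xd)
    alpha = y.mean() - beta * x.mean()
    e_model.append(r[t] - (alpha + beta * r[t-1]))  # AR(1) forecast error
    e_bench.append(r[t] - y.mean())                 # historical-mean error

oos_r2 = 1 - np.sum(np.square(e_model)) / np.sum(np.square(e_bench))
print(f"out-of-sample R^2: {oos_r2:.4%}")   # tiny, and possibly negative
```

Even when genuine predictability is baked into the data, the out-of-sample R² hovers near zero once estimation error is paid — which is why careful forecast evaluation, not in-sample fit, is the standard of evidence in this debate.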