
Predicted-versus-Actual Studies: Why/how to do them and Lessons Learned

Ken Cervenka

Federal Transit Administration

TRB Transportation Planning Applications Conference in Houston, Texas

May 20, 2009


Topics

- Why do them?
- How to do them?
- Lessons learned (so far)


Why do them?

- Forecasts (should) matter
  - If they don't matter, what are we doing here?
  - Supports informed decision-making

- Poor track record for accurate predictions
  - Even aggregate numbers are often way off
    - Capital costs
    - Weekday traffic volumes
    - Weekday transit ridership (boardings)


Predicted-versus-Actual Traffic: 104 Completed Toll Road Projects

Data Source: Standard & Poor’s, 2005 (Robert Bain)

[Figure: histogram of Actual/Forecast Traffic ratios for the 104 projects, ranging from 0.0 to 1.6; Mean = 0.77]


Predicted-versus-Actual Ridership: 18 Transit Projects Completed 2003-2007

[Chart: actual/forecast ridership ratio by project; Average = 74.5%, 50th Percentile = 63.8%]
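The aggregate accuracy measures quoted on these slides (mean and median actual-to-forecast ratio) are simple summary statistics once project-level actuals and forecasts are assembled. A minimal Python sketch, using invented ridership numbers rather than the study data:

```python
# Hypothetical actual and forecast opening-year riderships for a set of
# projects (illustrative numbers only -- not the projects in the study).
actual = [12000, 8500, 30000, 4200, 15500]
forecast = [18000, 9000, 42000, 8000, 17000]

# Actual/forecast ratio per project, then the aggregate summaries the
# slides report (mean and 50th percentile).
ratios = sorted(a / f for a, f in zip(actual, forecast))
mean_ratio = sum(ratios) / len(ratios)

def percentile(sorted_vals, p):
    """Linear-interpolation percentile of an already-sorted list."""
    k = (len(sorted_vals) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

median_ratio = percentile(ratios, 50)
print(f"mean = {mean_ratio:.1%}, median = {median_ratio:.1%}")
```

A ratio below 1.0 means the project under-performed its forecast; the point of the charts is that the distribution sits well below 1.0.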


So why do them?

- Learn from past failures - and successes
  - What went wrong and what went right
  - More than just aggregate checks

- Avoid hand-waving speculation
  - Identify major drivers for errors (in either direction)

- Insights for improved prediction tools
  - Better understanding of uncertainties
  - More informed decision-making
  - Better use of limited funds


Predicted-versus-Actual: Some Examples

- New transportation project
  - Roadway and/or transit
- Policy change

- New developments
- Change over time ("validation")
  - 2009 forecast of 2005 base year
  - 2000 forecast of 2005 base year (backcast)


So why else do them? Well…

Required for FTA discretionary funding of major transit projects (New Starts program)

- Annual report to Congress
- Before-and-after comparisons
- Predicted-versus-actual (after) comparisons

- See Session 11 from FTA's March 2009 Travel Forecasting for New Starts workshop:
  http://www.fta.dot.gov/planning/newstarts/planning_environment_9547.html


How to do them

- Start with the big picture comparisons
  - What we thought would happen
  - What actually happened

- Gain insights by digging into the details
  - Not just traffic volumes and transit passenger boardings
  - Forensic analysis
    - Reasons for big picture misses
    - Confirmation of big picture successes

- Prepare new post-implementation forecasts
  - New model runs with corrected inputs
  - Special runs to track down sources for other errors


How to do them: Examples

- Assess the accuracy of forecast inputs
  - District-level demographics
  - Roadway system
  - Transit service levels and fares
  - Auto-related costs, etc.

- Check major transit rider travel patterns
  - District-to-district flows
  - Trip purpose and socio-economic class
  - Access and egress modes
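A district-to-district flow check of this kind can be sketched in a few lines. Everything below (district names, trip counts, the 25% tolerance) is invented for illustration:

```python
# Hypothetical predicted and observed (survey-expanded) district-to-district
# transit trip flows; district names and trip counts are invented.
predicted = {("North", "CBD"): 5200, ("South", "CBD"): 3100, ("East", "CBD"): 900}
observed = {("North", "CBD"): 4100, ("South", "CBD"): 3300, ("East", "CBD"): 350}

# Flag district pairs where the forecast missed by more than 25 percent --
# candidates for forensic analysis of inputs (demographics, networks, fares).
flagged = {}
for pair, pred in predicted.items():
    obs = observed.get(pair, 0)
    ratio = obs / pred if pred else float("inf")
    if abs(ratio - 1.0) > 0.25:
        flagged[pair] = round(ratio, 2)

print(flagged)  # only the pairs that missed badly, with their ratios
```

Flagging pairs rather than just the system total is the point of "more than just aggregate checks": offsetting errors can hide behind an accurate-looking total.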


How to do them: FTA's Approach

- Required: preservation of forecasts
  - Project scope, capital cost, service levels, O&M cost, and ridership (plus others as needed)
  - For different project planning milestones
  - Analysis/explanation of changes in forecasts

- Ability to replicate the forecasts, e.g. for ridership:
  - Input demographics/networks and outputs
  - Scripts and application documentation

DVDs to FTA do not include proprietary software


How to do them: FTA's Approach

- Approval of Before-and-After Study Plan
  - Required for grant approval
  - Data collection plan
    - E.g., before and after transit rider surveys
  - Analytical approaches

- Approval of Before-and-After Study Report
  - Required to close out a grant
  - Analysis/explanation of project impacts
  - Analysis/explanation of prediction errors


Lessons Learned: Roadway and Transit Forecasts

- Compounded optimism (optimism bias)
  - The "high side" of feasible assumptions
  - Vested interests, ethics, and objectivity

- Travel model inputs matter
  - Networks and demographics
  - Parking costs, fares, etc.
  - Value of time assumptions
  - Person trip tables (!)


Lessons Learned: Roadway and Transit Forecasts

- Quality control checks matter
  - Model testing
    - Data adequate for the task
    - Caution on over-specification
  - Plausibility checks of behaviors
    - Big picture insights for key markets


Lessons Learned: Roadway and Transit Forecasts

- Recognition of uncertainties
- Local predictions of new behaviors
  - Local calibration not possible
  - Toll roads in areas without toll roads
  - Choice riders in areas with few/no choice riders
  - Park-and-ride service
  - Premium service (e.g., light rail)


Lessons Learned: FTA Perspective

- Useful information for the region and the profession
  - Biggest missed opportunity of the New Starts program
  - A compelling case for clear explanations
- Early and often coordination
  - Clearly defined responsibilities and budgets
  - Preservation and analysis performed while memories are still fresh
  - Greater attention to opening-year forecasts