TRB Innovations in Travel Modeling Conference
Call for Content

Summer 2020 in Seattle, WA

The Transportation Research Board Innovations in Travel Modeling Conference will be held in the summer of 2020 in Seattle, WA. The Organizing Committee has selected four technical focus areas for the conference: data solutions, modeling approaches, useful forecasts, and getting there in the real world. The Committee is seeking content submissions that further our knowledge and understanding of these areas. Specific questions and issues for each topic area are detailed below. Submissions that respond to these questions and issues will be given priority.

Content Type

The Committee is seeking a variety of content types that respond to the focus issues and questions, as follows:

  • Discussion of a specific innovation or case study which you would like to put forward for review;
  • Discussion on a specific point of disagreement which is either beneficial to debate or critical to resolve;
  • Lecture on a well-established, fundamental topic in which the industry should be better versed;
  • Tutorial on an important analysis technique which can take place in real time with cross-platform, open-source tools;
  • Analysis competition which can be completed with available and open data;
  • A full session of your choosing that supports the goals of the conference, falls within its areas of focus, and has a well-articulated outcome; or
  • Discussion, lecture, or full session that focuses on synthesizing a topic area and moving it from research to practice.


Notes

  • The Committee wants to encourage dynamic discussion and does not plan to hold traditional lectern sessions. Accordingly, the Committee will group submissions detailing specific innovations or case studies into panel discussions or debates with a common theme. Session participants will work with the Committee in the lead-up to the conference to craft useful and engaging sessions.
  • The Committee welcomes both partially and fully formed ideas, so long as they can be documented far enough ahead of time to warrant an in-depth discussion.

Submission and Review Process

Content review will occur in two stages. Initial submissions should be in the form of a brief narrative that describes the motivation and outcomes of the proposed content, along with a description of the content itself of at most 2,000 characters (with spaces, approximately 300 words). Submissions should refer back to the technical focus areas of the conference listed above and describe how they relate to them. Priority will be given to sessions where interesting questions, often barely addressed in conference presentations, can be discussed and debated, and where clear outcomes can be synthesized. Initial submissions are due June 30th (midnight, anywhere in the world) and must be submitted electronically.

Initial submissions will be screened by the conference organizing committee based on how well the proposed content fits within the vision of the conference in addition to the quality of the content itself. Submissions that proceed from the screening stage will require an additional, longer-form submission, the format of which will be dependent upon the type of content that is proposed. For example, a proposed lecture would likely require a lecture outline or notes and a proposed discussion around an innovation would likely require a short technical paper.

Conference Focus Areas

Data Solutions

What are techniques and strategies to assess, acquire, process, analyze, visualize, and validate data of increasing volumes from heterogeneous sources, some with restrictions, alone or in combination with more traditional data sources? How do we correctly, safely, and expediently select, expand, and analyze data from the many available fire-hoses? How do we identify standards for reporting on data sources; guidelines for assessing and applying data and reporting on results; and best practices for identifying and addressing privacy concerns and for sharing lessons learned? More specifically:

Data Acquisition

  • What types of data sources are out there that we could be using, and for what purposes? What types of data are appropriate for which applications?
  • How do we evaluate those sources against our needs?
  • How do we increase confidence in the data for users as well as decision-makers? How do we assess the representativeness of a particular data source?
  • What questions should we be asking when evaluating data products? What are the barriers and challenges to acquiring data from various sources, and how can we address them?
  • How are “traditional” data sources and data collection methods changing in response to recent challenges (public perception of data privacy, response rates, etc.)?
  • What practices and frameworks support best practices? What common terms should we be using?

Data Processing and Application

  • How do we process the data for specific analyses, visualization or applications? What processes or procedures should we employ in order to ensure privacy?
  • How and when should new data be combined with traditional data sources? Which type of data can we confidently fuse (or not fuse)?
  • How do we validate the results?
  • What open source programming, software, or tools work well (or should be used with caution)?
  • What practices and frameworks support best practices?
  • How do we monitor and analyze transportation network and policy actions (e.g., building a connected bike network) and their resulting impacts?

Modeling Approaches

What are the strengths, weaknesses, and appropriate roles of data-driven versus behavior-driven models, and how might they work together? More specifically:

  • How are behavioral models and data-driven models complementary, interdependent, or mutually exclusive?
  • How reliable and useful are complex model systems given their cost and potential uncertainty propagation?
  • How well do we understand the limitations and assumptions of the data vis-a-vis the limitations and assumptions in model systems themselves?
  • What are existing and novel experimental setups that support important learning objectives?
  • What are systematic ways to know if a model is adequate for a given problem?
  • Are there ways to interpret the meaning of machine learning models? How and why is this important, or not?
  • When building a model, is it most effective to start with the questions, the data, or a framework? How would your answers to specific policy questions change based on each path?


The Committee seeks submissions which can lead to the following outcomes:

  • Standardized terms and notation (for example, what is model validation in a data-driven versus behavior-driven context?);
  • Guidelines (for example, guidelines for effective validation of models); and
  • Associated calls for action to researchers and practitioners.

Useful Forecasts

How do we develop forecasts with useful representations of risk and uncertainty? How do we assess how well we did at presenting a decision space (as opposed to validating existing behavior)? More specifically:

  • How do we ensure that the forecasting approach is the most useful and relevant to a particular decision-making context? Are different tools tuned to different planning questions? Are we getting to end goals like health and accessibility or only intermediate goals like mobility and delays?
  • What does accuracy look like from a historical perspective? What drives it? And how important is it to decisions at different points in the planning process?
  • What is the relationship between the complexity and realism of decision-making representations within models and the models' usefulness in answering questions?
  • Ensemble forecast techniques which incorporate a variety of modeling approaches (and the assumptions embedded in them). Examples of reconciling tools with different methods to tackle a question.
  • Tools and techniques to evaluate forecasts beyond single-point accuracy or to enable multiple scenarios/multiple futures.
  • Techniques to produce useful points of information when uncertainty is great and multi-dimensional. Which uncertainties should we be accounting for, and how?
  • Effective communication and visualization of model results with many dimensions of uncertainty and ranges.
  • Using visualizations to help understand trade-offs or see value in policies that are politically challenging.

Learning

The conference will devote several sessions to a “learning track” where we dive into the methods, tools and technologies that are broadly useful to our field. Previous conferences have had “code-alongs” for Git, R, and Python, for example. As part of this Call for Content we seek input and suggestions on the most timely and useful topics and the learning format (code-along, lectures, panels).

Some examples of learning topics we think people might be interested in:

  • Machine learning 101 – focusing on principles and when and how to use machine learning;
  • Managing large datasets – database 101, cloud platforms, case studies or sample project walkthroughs;
  • R data analysis (Tidyverse / Geospatial / Reproducible Science) tutorial;
  • Data visualization techniques, especially interactive visualization;
  • Ensemble modeling lecture or tutorial;
  • Hackathon session on a specific project, organizing attendees into teams to compete on potential solutions to tough problems; and
  • ActivitySim (or similar) tutorial.