ACTIVITY 7.3: Plan measure monitoring and evaluation


By Tom Wood / Updated: 28 Nov 2019


Monitoring and evaluation, of both the planning process and measure implementation, are crucial to the effectiveness of a Sustainable Urban Mobility Plan (SUMP).

Robust monitoring and evaluation processes help you to systematically learn from your experiences, to adjust and to improve your planning activities. Regular monitoring helps you ensure that you are making the necessary progress. Evaluation after implementation helps provide evidence of the effectiveness of the SUMP and its measures, which is essential for long-term success, as it allows decision makers to justify where money was spent and to avoid mistakes in future. Transparent reporting should ensure that evaluation results feed back into the public debate.

While strategic indicators and targets have already been defined earlier (see Activities 6.1 and 6.2), here the indicators at measure level are developed and the monitoring and evaluation activities are agreed in more detail. Defining monitoring arrangements early ensures that they become an integrated part of measure implementation.



  • Define a set of indicators that allow monitoring and evaluation of all main measures with reasonable effort.

  • Agree on suitable monitoring arrangements (including responsibilities and budget) to assess the status of measure implementation and target achievement, enabling timely and effective responses.

  • Make monitoring and evaluation arrangements an integral part of the further process.



  • Identify which information is needed to monitor and evaluate your measures.

    • Outcome: What impacts are expected from a measure? Define a suitable outcome or transport activity indicator for each main measure or measure package to be able to evaluate its success. Strategic outcome indicators on general progress towards sustainable mobility have already been selected in Activity 6.1. Here, more specific indicators on the objectives of individual measure packages are defined, e.g. emissions from buses, trucks and cars, number of accidents, or number of cycle trips in a certain area of the city.

    • Output: What policy, infrastructure or service is directly implemented in a measure? Define a suitable output indicator for each measure to be able to monitor the extent to which it has been carried out, e.g. km of new bus lanes or number of new buses in operation.

    • Input: What resources do you spend? Monitor the investment and maintenance costs (including labour costs) of each measure to react in time if costs get out of hand, and to be able to evaluate value for money.

  • Evaluate existing data sources, taking into account the results of previous data audits (see Activities 3.1 and 6.1). Identify gaps and, if necessary, develop or identify new sources of data (e.g. survey data, quantitative data from automatic measurements).

  • Before you start developing your own measure indicators, discuss the topic with key stakeholders and other organisations in your area, as they might already have adopted some. Progress is much easier to monitor if already implemented and accepted indicators are used.

  • Define a set of quantitative and qualitative measure indicators that provides sufficient information with reasonable effort. Take into account available data and limited resources for collection of new data when selecting indicators. Whenever possible, use standard indicators that are already well defined and where people know how to measure and analyse them.

  • Develop monitoring and evaluation arrangements for all selected indicators, both strategic and measure indicators. For each of them:

    • Develop a clear definition and reporting format, and specify how the data is measured, how the indicator value is calculated from the data, and how often it will be measured.

    • Establish a baseline value, i.e. a starting value and the expected development without SUMP measures, as well as a target value for the desired change.

  • Agree on clear responsibilities and a budget for monitoring and evaluation. Responsibility should lie with well-skilled staff members or an external partner, ideally an independent body. The budget for monitoring and evaluation should typically be at least 5% of the total SUMP development budget.
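The indicator arrangements described above (kind of indicator, unit, measurement frequency, baseline, target and responsibility) can be captured in a simple overview structure, similar to the table in Figure 33. The following is a minimal illustrative sketch in Python; the indicator names, values and responsibilities are hypothetical examples, not prescribed by the guidance:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One row of a monitoring overview table (cf. Figure 33)."""
    name: str
    kind: str          # "outcome", "output" or "input"
    unit: str
    frequency: str     # how often the value is measured
    baseline: float    # starting value before SUMP measures
    target: float      # desired value at the target year
    responsible: str   # who collects and reports the data

    def progress(self, current: float) -> float:
        """Share of the baseline-to-target change achieved so far."""
        if self.target == self.baseline:
            return 1.0
        return (current - self.baseline) / (self.target - self.baseline)

# Hypothetical output indicator, for illustration only
bus_lanes = Indicator(
    name="New bus lanes", kind="output", unit="km",
    frequency="yearly", baseline=0.0, target=20.0,
    responsible="City transport department",
)
print(round(bus_lanes.progress(5.0), 2))  # 0.25: a quarter of the target reached
```

Keeping the baseline, target and responsibility together with each indicator makes it straightforward to report implementation status regularly and to react in time when progress or costs deviate from the plan.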


Activities beyond essential requirements

  • Consider aligning your indicators to those of external funding bodies to make the measures attractive to funding. For example, measuring reductions in CO2 emissions might be required to get funding from national environmental agencies.

  • Integrate an assessment of costs and benefits of the SUMP development process.

  • Plan for stakeholder involvement in monitoring and evaluation.

  • Coordinate with relevant local and regional stakeholders on regional indicators.


Details on the tasks

Figure 31: Categories of indicators with examples (May, T., 2016. CH4LLENGE Measure selection Manual – Selecting the most effective packages of measures for Sustainable Urban Mobility Plans, p. 28.)


Timing and coordination

  • Once measures and measure packages have been defined.

  • To be updated when the final set of actions has been agreed on (Activity 8.3), if needed.

  • Make monitoring and evaluation arrangements, including responsibilities and budget, part of the SUMP document (Activity 9.1), see also Figure 32 below.



✔ Suitable set of measure indicators selected.
✔ Monitoring and evaluation arrangements for all indicators developed.
✔ Responsibilities and budget for monitoring and evaluation agreed on.


Figure 32: Monitoring and evaluation in the SUMP process


Figure 33: Overview table to plan monitoring and evaluation activities filled with example indicators


Ambitious monitoring process led by cross-institutional committees

The SUMP of Toulouse includes an ambitious plan for monitoring and evaluation. Several committees regularly monitor the SUMP and its measures and meet at least once a year. The committees are composed of different institutional, technical, civil society and research organisations. The committees are provided with different tools:
  • A SUMP observatory (for each measure: initial objectives, resources allocated, and expected results and indicators, which are updated by regular surveys);
  • A trip cost tool (per mode, for both users and society);
  • A mobility dashboard (tracking of individual measures).
The involvement of partners in the monitoring activities is identified as a success factor.
Authors: Mary Malicet and Christophe Doucet, Tisséo Collectivités, Toulouse