The combination of route alignment and facility type must be compatible with the needs of the particular cyclist target audience and the route itself should be compatible with the overall network, noting that different routes may have different target audiences.
This section describes how to evaluate routes or sections of routes identified in accordance with Identifying cycle route options.
Evaluation requires a multi-criteria analysis (MCA) to examine the effects of multiple factors. Multi-criteria analysis methods for cycling facilities are described in the section on Route option selection. In some cases, changing one aspect will affect the performance of other factors, so evaluation must be an iterative process. Planners and designers should be aware of the principles discussed in this section when considering the elements discussed in previous sections, so that clearly unsuitable options are weeded out before the evaluation stage. Any doubtful options should be included in the MCA so that the process is seen to be transparent when engaging with the community.
Two aspects stand out as being important in any cycling assessment:
A number of different methods for assessing cycle route options are presented below:
Guidance on how to choose between these methods and apply them, as well as other factors to consider, is presented in Recommendations for assessment, below.
The concept of LoS (a performance measure) has been discussed in People who cycle; it is a traffic engineering term that describes traffic quality. It is basically a user-satisfaction rating. In general terms, LoS A can be considered ‘very good’, B is ‘good’, C is ‘fair’, D is ‘poor’, E is ‘bad’ and F is ‘very bad’. Where appropriate, finer distinctions are possible, such as ‘B minus’ or ‘C plus’. However, it must be remembered that these ratings refer to a context, not simply a particular facility type; one must be careful not to say that a particular facility type is ‘good’ or ‘bad’, rather a particular provision in a particular context may offer a ‘good’ or ‘bad’ LoS rating for cycling.
Ultimately it is a political decision as to what constitutes an acceptable LoS for cycling; typically LoS B or C is aimed for. It may also be useful to have implementation targets aimed at progressively reducing the proportion of sites with, for example, LoS D or worse on the existing cycling network.
Cycling LoS assessment is based on a significant volume of empirical research on people’s views and reactions to specific cycling environments, conducted mostly since the mid-1990s. Such research focuses on translating these perception scores into definitions of the characteristics that define facilities in each of the LoS ratings. The main factors in people’s perceptions of LoS for cycling are related to safety, comfort and delay, thus characteristics used in LoS definitions may include traffic volumes and speeds, degree of separation from motor traffic, facility width with respect to user volumes, occurrences of delay from crossing roads or passing other users, pedestrian effects, and presence of parked cars or bus stops. Note that the relative importance of safety and delay aspects varies according to the cyclist type; by the Geller classification enthused and confident cyclists place a higher emphasis on delay (whilst still valuing safety, but having a different perception of the conditions required to feel safe) in comparison with interested but concerned cyclists.
Attractiveness, whilst not a measure used in traditional LoS evaluation, may also be included in evaluating LoS for cycling. On a route level, directness and coherence may factor into the LoS level definitions.
Danish research has been undertaken for segments of cycle routes between intersections (Jensen 2007) and later for intersections (Jensen 2012). United States research has been undertaken in Florida by Landis et al (2003), which forms the basis for a multi-modal LoS assessment reported in NCHRP (2010). These are the only published methods that are based on ratings by users experiencing and rating situations in real time, either in the road environment or in a video simulation. They are also the only methods that take into account the interactions between variables. For example, the effect of traffic volume, speed and heavy vehicle presence is much less on a separated facility than on a narrow roadway. LaMondia and Moore (2015) compared four ways of combining factors to achieve an LoS score; the method using interactions between factors (the Florida study) performed much better than all the alternatives in matching user perception surveys. There is an extensive database of real-time user ratings collected in New Zealand as part of the Cycle for Science project, described in Bezuidenhout (2005). It is intended to supplement this in the coming year with more data to develop a robust cycling LoS prediction method for New Zealand. Until then there are a range of available methods, all with their own strengths and weaknesses, which are discussed below.
Note that different assessment methods will not produce identical results.
The methods discussed here are: user satisfaction surveys; the Austroads (2015) network operations LoS framework; the Danish method (Jensen); the United States NCHRP/Highway capacity manual method (Landis); Transport for London's Cycling Level of Service (CLoS); the VicRoads/Bicycle Network Cyclist Level of Service Assessment Tool (CLOSAT); Munro's cyclist preference models; and the level of traffic stress method (Mekuria et al).
A user satisfaction survey of cyclists can be conducted for a particular facility. This involves a representative group of cyclists riding along the route and rating each part of it in real time under the appropriate traffic conditions. Regular users of the route can also be utilised. The survey forms and method are available from the Transport Agency national cycling team. The method is similar to the community street review process used for pedestrian LoS.
This is the only method that provides a direct user satisfaction score.
This method can only be used for existing situations.
There is reluctance to have users riding and rating environments that are considered to be unsafe; this is why video simulation methods were developed. However, there is likely to be some bias in video methods, as some of the experience is missing.
This is the preferred method of rating existing situations.
Austroads (2015) presents a LoS framework for network operations from the perspective of all road users, including motorists, public transport users, freight, pedestrians and cyclists. This framework is recommended by the NZ Transport Agency as the default for network operating frameworks for New Zealand localities.
It uses the standard scale from A to F, but is rated subjectively according to written criteria arrived at by a consensus of transport professionals. It deliberately does this to keep the assessment simple.
The mid-block LoS ratings used by Austroads (2015) build on those of Jensen (2007), who rated LoS based on motor vehicle volumes and speeds for different facility types in Denmark; the corresponding charts are reproduced in the diagram below. (Note that a preliminary comparison with the New Zealand Cycle for Science scores suggests that the Danish ratings are harsher than New Zealand user ratings for lower speed and volume situations.)
The framework is based on a series of ‘LoS needs’ (mobility, safety, access, information and amenity), which are each subdivided into ‘LoS measures’ specific to each road user type. Ratings (from A to F) are assigned according to various defined ‘service measure values’. Note that in some cases a range of LoS ratings (eg C–D) are assigned for a particular service measure value.
The LoS measures and their associated needs used are shown in Table 6.
Table 6: LoS measures and needs for cycling (Austroads 2015)
LoS need | LoS measures
Mobility |
Safety |
Access |
Information |
Amenity |
Table 7 gives an example of the service measure values corresponding to the range of LoS ratings; in this case for the risk of cycle-to-motor vehicle crash at mid-blocks, which is one of the LoS measures relating to the need of safety.
Table 7: Ratings and service measure values for LoS measure ‘risk of cycle-motor vehicle crash at mid-block’ relating to LoS need of safety (Austroads 2015)
Rating | Service measure value
A | Exclusive bicycle facility in a low-risk road environment
B | Exclusive bicycle facility in a low- to medium-risk road environment, or no bicycle facility in a low-risk road environment
C | Exclusive bicycle facility in a medium- to high-risk road environment, or no bicycle facility in a low- to medium-risk road environment
D | Exclusive bicycle facility in a medium- to high-risk road environment, or no bicycle facility in a medium-risk road environment
E | Bicycle-only lane (not a Copenhagen-style facility where the bicycle facility is behind a kerb) in a high-risk road environment, or no bicycle facility in a medium- to high-risk road environment
F | No bicycle facility in a high-risk road environment
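As an illustration of how a service measure table like Table 7 can be applied in a simple assessment script, the sketch below encodes the table as a lookup. The facility and road-risk labels and the function name are simplifications for this example only, not Austroads terminology.

```python
# Illustrative lookup of Table 7 (Austroads 2015): LoS rating for the measure
# 'risk of cycle-motor vehicle crash at mid-block'. The facility and road-risk
# category labels below are simplified for this sketch and are not Austroads terms.

TABLE_7 = {
    ("exclusive facility", "low"): "A",
    ("exclusive facility", "low-medium"): "B",
    ("no facility", "low"): "B",
    ("exclusive facility", "medium-high"): "C",  # Table 7 also lists this combination at D
    ("no facility", "low-medium"): "C",
    ("no facility", "medium"): "D",
    ("bicycle lane only", "high"): "E",
    ("no facility", "medium-high"): "E",
    ("no facility", "high"): "F",
}

def crash_risk_rating(facility: str, road_risk: str) -> str:
    """Return the mid-block crash-risk LoS rating, or 'unrated' if not tabulated."""
    return TABLE_7.get((facility, road_risk), "unrated")

if __name__ == "__main__":
    print(crash_risk_rating("exclusive facility", "low"))  # A
    print(crash_risk_rating("no facility", "high"))        # F
```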
Note that the Austroads LoS framework does not result in a single LoS classification for a particular facility; instead it gives LoS ratings for every LoS measure, of which there are 14 (as per Table 6). Similarly, the framework does not include a method of combining different LoS ratings for different user types.
Below is an example of the worksheet used to apply the Austroads framework, corresponding to the LoS measure ‘risk of cycle-to-cycle/pedestrian crash’ under the LoS need of safety.
Provides LoS ratings for all modes individually, so that compromises between modes can be understood and determined in relation to the network hierarchy.
Volume and speed information is based on Jensen (2007), which involved actual field testing of people cycling, rather than simply stated preference surveying.
Values for service measures that have not been tested in-field are qualitative, not quantitative; this is a more realistic approach, as there is not sufficient scientific understanding available to attempt to quantify most of the values.
The five main requirements for cycling (see General route requirements in People who cycle) are generally well covered within the various risk measures; however, there is no measure for route consistency (which comes under the requirement of coherence).
The framework has been developed at a link (ie between intersections) level, but the definition of 'link' also includes the downstream node, ie the intersection that the link feeds into. This may be an advantage, as it reflects to some extent that it is generally more difficult to provide a high LoS for cycling at intersections. However, it is possible to assess intersections and the sections between them separately, and by doing so their separate contributions can be assessed and attention focused on improving the 'weakest link'.
The fact that the framework definition of 'link' also includes the downstream intersection may lead to some misrepresentation in LoS scores for sections between intersections, as these have very different characteristics to intersections; it is therefore recommended that they be assessed separately.
Some of the rating scales need more development, for instance in relation to separated facilities, where the safety and delay issues at intersections are poorly considered.
The framework does not offer a method of combining LoS ratings for individual LoS measures to give an overall LoS for a particular link for a particular mode. Nor does it allow for combining the LoS ratings for all the links that make up a route. Whilst it is useful to consider individual measures of a homogenous link, it would also be beneficial to have an overall LoS for an entire route.
Whilst the Austroads (2015) report provides a comprehensive worksheet (see above for an excerpt) to evaluate the LoS for each measure, there is no associated electronic tool.
The Austroads LoS framework is suitable for the New Zealand context and can be usefully applied to identify the LoS of individual sections along a route, as long as suitable caution and professional judgement are applied. The individual scores can be gauged against a threshold LoS to determine whether each section meets the required LoS.
Intersections and mid-block sections should be treated separately.
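A minimal sketch of checking individual section ratings against a threshold LoS is shown below; the section names, ratings and the LoS C threshold are assumed purely for illustration.

```python
# Sketch: check each assessed section (mid-block or intersection) against a
# threshold LoS. Ratings use the standard A (best) to F (worst) scale; the
# example data and the LoS 'C' threshold are assumptions for illustration only.

LOS_ORDER = "ABCDEF"  # A is best, F is worst

def meets_threshold(rating: str, threshold: str = "C") -> bool:
    """True if the rating is at least as good as the threshold."""
    return LOS_ORDER.index(rating) <= LOS_ORDER.index(threshold)

sections = [
    ("Section 1 (mid-block)", "B"),
    ("Intersection 1", "D"),
    ("Section 2 (mid-block)", "C"),
]

for name, rating in sections:
    status = "meets" if meets_threshold(rating) else "BELOW"
    print(f"{name}: LoS {rating} - {status} threshold")
```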
This method covers the greatest range of facilities, but still does not cover all of the facility types covered in this guide. Of the published methods, it has the most rigorous development methodology. It is based on video simulation surveys of Copenhagen residents. The method for between intersections is described in Jensen (2007) and the intersection model in Jensen (2012). There is a spreadsheet available to perform the calculations. It uses the standard LoS rating scale from 'A' to 'F'.
The formulas used for calculating cycling LoS include interaction factors between variables, so they should more accurately reflect cyclist perceptions.
There is a spreadsheet to make it easy to enter the data and use the formulas. It also calculates pedestrian LoS, which can be confusing.
Only one overall score is generated for each section or intersection, so the various factors such as delay and comfort are not rated separately.
There are significant differences in road rules and driving culture around cyclists between Copenhagen and New Zealand cities, and the volume of cyclists is many times higher, so it is not certain that the ratings would be valid for New Zealand conditions and cyclists. So the method needs validation for New Zealand. A preliminary comparison with New Zealand Cycle for Science data suggests that the Danes are very used to separated facilities, and not as comfortable as New Zealand users with on-road facilities, even at lower volumes and speeds, so the rating formulas and charts would benefit from some adjustments.
The NZ Transport Agency National Cycling team is starting a research project to improve the cycling LoS guidance so watch this space for developments.
Use the Danish method within the framework of the Austroads LoS Metrics for Network Operations Planning, but use judgement to allow for its limitations, and the differences from the New Zealand context.
This method is the most widely used approach in the United States. It assesses bicycle LoS on links and straight through intersections as part of a multi-modal assessment of LoS. It is based on the research by Landis et al (2003). The method includes a computer program to simplify the calculations; refer to NCHRP (2010a). This method is now also used in the Highway capacity manual (TRB 2010).
It is based on perceptions of people who rode along the route and rated it in real time, and compared this with people who watched a video shot by a cyclist riding the same route.
It uses the standard six-point scale from A–F, so can rate existing situations and assess improvements in LoS from proposals.
LoS for other road users uses the same rating scale, so it can be used in network operating plans.
Includes a spreadsheet tool to facilitate use.
This research was based on a limited range of options, and therefore does not cover many of the facilities and contexts presented in this guide.
Developed in an American context (Florida), which may not be directly transferable to New Zealand.
This method may be considered, but it would be sensible to compare any results with those obtained through the Danish (Jensen) and Austroads (2015) methods as well.
Transport for London’s Cycling Level of Service (CLoS) assessment is a cycling project checking and rating tool. The tool document describes it this way: ‘A Cycling Level of Service (CLoS) assessment has been developed in order to set a common standard for the performance of cycling infrastructure for routes and schemes, and for individual junctions. The purpose of the CLoS assessment is to frame discussion about design options so that schemes are appealing for existing cyclists and can entice new cyclists onto the network.’
Like the Austroads (2015) metrics, it is structured around six design outcomes: safety, directness, coherence, comfort, attractiveness and adaptability. These are each broken down further into specific factors. Indicators that can be used to measure performance are specified for each factor through a set of descriptions accompanied by base scores ranging from 0 to 2, where 0 is basic LoS, 1 is good and 2 is the highest LoS. It is similar to a multi-criteria analysis with clear guidance on the rating scales used.
Certain factors of particular concern are identified as ‘critical’. The CLoS guidance suggests that the base score of critical factors should be multiplied by 3 to give them a greater weighting, and that designers must address any critical factors that have a score of zero. Critical factors may also be given a ‘critical’ rating for conditions that are worse than basic LoS (worse than 0, although this is not scored); such factors must be improved to achieve a score of at least 0 for a scheme to be funded.
The individual factor scores are then summed to give a total score out of a possible 100 points. Projects have to achieve a target score to be considered good enough to be funded; however, the target score required is not published in the guidance.
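The scoring logic described above can be illustrated with a short sketch; the factor names and scores below are invented examples, not TfL's published factor list.

```python
# Sketch of the CLoS scoring logic described above: base scores of 0-2 per factor,
# critical factors weighted by 3, critical factors scoring 0 flagged for redesign,
# and factor scores summed to a route total. Factor names and scores are
# hypothetical examples only.

factors = [
    # (name, base score 0-2, is_critical)
    ("Collision risk at side roads", 1, True),
    ("Kerbside activity conflict", 0, True),
    ("Effective width for cycling", 2, False),
    ("Surface quality", 1, False),
]

total = 0
must_address = []
for name, base, critical in factors:
    total += base * (3 if critical else 1)
    if critical and base == 0:
        must_address.append(name)

print(f"Total CLoS score: {total}")
if must_address:
    print("Critical factors scoring 0 (must be addressed):", ", ".join(must_address))
```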
It is intended that CLoS be used at several stages of a project: planning, design brief, preliminary design and post-completion.
Gives a numeric value for each measure (as opposed to the Austroads metrics which have a plus-minus system).
Scores for each measure can be summed to give an overall LoS.
Includes measures for both at intersections and between them, but the intersection rating is structured quite differently.
It is a comprehensive system for assessing new projects, and could be used as a multi-criteria analysis.
It cannot be used to rate the adequacy of poor infrastructure because it notes negative features but does not give them a score. So it cannot be used to measure the improvement in level of service being provided.
Only devised for cycling, with its own rating scales, so it is not consistent with ratings of LoS for other modes. However, it should be possible to convert scores to the standard LoS A–C ratings, although not to D–F.
The overall score is the sum of the scores for each factor. However, it is well known that some of the factors interact, and the relationship with cyclist perceptions is more complex.
The method does not appear to have been validated against surveys of cyclist perceptions.
London CLoS is not a true cycling LoS tool, so is not recommended as such. However, it is a useful checklist and rating tool for assessing new projects. Its greatest strength is the way it focuses attention on those aspects which need to be improved to create an acceptable project. It would be of greatest use for the cycling aspects of a multi-criteria analysis tool. With minor adaptation, London CLoS could be used to assess proposed cycling facilities in New Zealand.
Users could define and justify the acceptable route score to use as a benchmark for assessment.
VicRoads and Bicycle Network jointly developed the Cyclist Level of Service Assessment Tool (CLOSAT) for assessment of on-road and off-road bicycle facilities in Melbourne (Bicycle Victoria, 2012; Hollander, 2014). The tool developers make a very clear case for the importance of considering LoS to cyclists and why LoS is measured differently for cyclists than for motorists. The tool assesses intersections separately from sections between intersections. It gauges LoS based on a variety of factors including facility type, separation from traffic, geometry, speed of adjacent motor traffic and volume of adjacent motor traffic.
The tool developers state that the tool essentially measures a facility’s attractiveness to cyclists, although this definition of ‘attractiveness’ is quite different to the route requirement defined in General route requirements (in People who cycle), which is concerned with the wider environmental surroundings, and is more aligned with the requirements of safety and comfort.
The authors acknowledged the importance of identifying the types of cyclists and their various needs and developed a schematic of Geller’s (2009) classification as applied to the Melbourne network.
Applying CLOSAT over a route can identify the ‘weakest link(s)’, ie the sections with the lowest LoS for cycling; if a minimum acceptable LoS is defined it is then obvious which sections require improvement to achieve a suitable level of provision along the entire route. Applications of CLOSAT along Melbourne routes show that LoS for cycling is worst at intersections.
The images below illustrate the various conditions between intersections associated with the LoS ratings used in CLOSAT and give an example of the CLOSAT worksheet, corresponding to on-road cycle lanes.
Assesses intersections separately from non-intersection sections.
Provides an easy-to-use structure for more quantitatively comparing different scenarios – which can be useful in undertaking network fit or business case assessments, and in presenting proposals to a non-technical audience (eg politicians, general public).
While the scoring system seems plausible, and based on professional judgement, the method has not been validated through field testing involving end users.
A simple additive scoring system is a poor fit to user perceptions.
Beyond a spreadsheet used for analysis by VicRoads and several conference presentations, the method is not documented.
Does not include variables for traffic flow, percentage of heavy vehicles or volume of other path users.
CLOSAT is not recommended for use in New Zealand in its current form.
It, however, has useful elements that could be considered in future development of a cycling LoS tool.
Munro (2013) separates cyclists into ‘confident riders’ and ‘cautious riders’, with the understanding that each will have a different approach to defining LoS scores with respect to variations in facility type, delay, interactions with other users on paths, motor vehicle volumes, motor vehicle speeds, and presence of parking. This is a different approach from Austroads (2015) which does not define different target audiences.
A model for each of the two rider types is given, as well as a third model for all riders combined. The model(s) can be applied over a link (ie a homogenous section) and link scores can be aggregated to give the overall non-intersection LoS for an entire route. It has not, however, been developed for intersection LoS assessment. One insight from the aggregation of sections is that while people expect to be delayed for some time at odd points along their route, they prefer these places to be widely spaced, and do not like to be delayed repeatedly.
Has been developed from stated choice surveys, carefully designed to distinguish the effect of each characteristic and the user groups. It is therefore derived using a very different method to the other main LoS methods, and as a result may be oversensitive to some variables.
Has the ability to calculate LoS for a homogenous ‘link’, and also for a ‘route’ (which is, for this model, a series of links, but not including intersections).
Includes consideration of how user volumes affect LoS, in terms of meeting, passing and overtaking other facility users, both cyclists and pedestrians (for shared paths).
Can be incorporated into a probabilistic route choice model, as this was the way it was derived.
All the information necessary to implement the model is well documented in the research report that summarises the model development (Munro 2013). An associated online tool can be used to enter scenarios and vary them, but this acts as a black box. The model is technically advanced and may therefore seem too complicated for some users to understand.
It is based on people’s stated preferences when choosing between hypothetical facility scenarios, rather than actual experience.
This model has been developed for non-intersection sections only; it does not include intersections, which are generally the locations with the lowest LoS for cycling.
Along with other stated preference methods, this method provides some useful insights into user preferences.
The online tool may be useful for anyone wishing to quickly test the relative rating of some scenarios.
The online tool can be accessed by contacting Cameron Munro of CDM consulting.
Mekuria et al (2012) assume that a ‘large majority [of the population] is “traffic intolerant,” willing to tolerate only a small degree of traffic stress’ and equates these users to the ‘interested but concerned’ group as per the Geller (2009) classification (see People who cycle). It presumes such users will only ride if ‘Dutch’ standards of separation are provided. The method seeks to classify roads according to their level of ‘traffic-stress’ based on perceived danger and other stressors such as noise and exhaust fumes.
The standard level of service scale of A to F was deliberately abandoned on the basis that only traffic engineers understand what it means, and that the formulas used in the methods are an unintelligible black box. Instead, four levels of traffic stress are defined, which generally correspond to the four Geller categories, except that the ‘no way no how’ group is excluded and the ‘interested but concerned’ group is divided into adults and children. Each facility is scored 1 to 4 to correspond to the level of stress thought to be tolerated by the corresponding target audience. The authors use the analysis method to identify the level of connectivity provided and barriers to cycling in networks.
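The connectivity analysis can be illustrated with a minimal sketch in which each link carries a traffic stress score of 1 to 4 and a destination is considered reachable at a given stress level only if every link on some path is at or below that level. The network, scores and function names are assumptions for this example.

```python
# Sketch of a low-stress connectivity check in the spirit of Mekuria et al (2012):
# each link has a traffic stress score 1-4, and an origin connects to a destination
# at a chosen stress level only if a path exists using links at or below that level.
# The toy network and scores below are assumptions.

from collections import defaultdict, deque

# (node_a, node_b, traffic stress score)
links = [
    ("home", "school", 1),
    ("home", "shops", 3),
    ("shops", "station", 2),
    ("school", "station", 4),
]

def reachable(origin, max_stress):
    """Return the set of nodes reachable from origin using links with stress <= max_stress."""
    graph = defaultdict(list)
    for a, b, stress in links:
        if stress <= max_stress:
            graph[a].append(b)
            graph[b].append(a)
    seen, queue = {origin}, deque([origin])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# At stress level 2, only 'school' is reachable from 'home';
# 'shops' and 'station' are cut off, ie barriers exist in the low-stress network.
print(reachable("home", 2))
```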
Directly relates facilities to presumed user group preferences. Quantitatively relates user tolerance (and therefore, willingness to cycle) with road conditions.
Includes specific analysis for both intersections and crossings.
Only uses a few categories on a grossly aggregated level, rather than reflecting the fact that level of traffic stress is a continuum and that within any target group there is a range of risk tolerance.
Based on people’s stated preferences, rather than actual experience.
Simplified data needs mean that important variables like width and traffic volume are omitted, and the formulas ignore important interactions.
No electronic format is provided; users would have to develop their own tool based on the equations and tables provided in the report.
This method cannot be used for assessing cycling as part of a network operating plan as the scale is not compatible.
This method is not recommended for use in New Zealand.
However, it has some useful ideas, especially for intersections, that could be explored when developing a tool for the New Zealand context.
This is an assessment against the criteria presented in Safety issues for people who cycle in relation to each cyclist type and the route characteristics they need.
To permit a comparison, a summary for each option could be prepared in a standard format – and from this a conclusion or recommendation determined. This summary can be reported on a single page in a similar format to Table 1 (see Summary in People who cycle) as a table indicating how the proposal will suit each cyclist type.
This assessment provides an opportunity to consider all overarching issues, including intangible matters such as attractiveness and comfort.
This is a qualitative assessment.
Always perform a needs assessment. No other assessment satisfactorily considers the full range of needs of people who cycle. Include the outcome of other assessments, for example the LoS, in a needs assessment report. The best way of including a needs assessment, along with other assessments is to integrate them into the process for a multi-criteria analysis.
Audits are a formal process for identifying deficiencies in provision. They can be applied to existing facilities or new proposals and can be applied during all project phases, from concept to post-construction audit. They may be specifically for cycling, or encompass all modes of transport. They can also be applied to a specific facility, a route or a network.
Three different types of audit affect cycling:
Audits take a systematic approach to identifying safety and other problems. They provide a valuable ‘sanity check’ which helps to prevent inappropriate designs being constructed.
The quality of audit results under this method depends on the cycling experience and knowledge of the auditor(s).
While audits identify the deficiencies of an option, they do not distinguish between options or rate them.
A good audit cannot be relied upon on its own to deliver a good outcome; the underlying scheme development also needs to be sound. A poor scheme is seldom ‘rescued’ through the audit process.
An audit focused on safety may miss aspects related to other important cycling principles (directness, coherence, etc). Safety auditors may often identify some of these problems during their audit, but they lack an appropriate means of expressing their concerns within the safety audit framework.
Use cycle audits routinely in project development. Ensure that the audit process includes all the features of a cycle audit, whether as a stand-alone process or as part of a wider audit process.
Use a cycle audit to identify deficiencies on existing roads and paths.
Don’t use a cycle audit as a tool to evaluate and compare options.
Ensure that auditors have suitable specialist knowledge of cycle design.
This section offers guidance on the selection and application of the assessment methods presented above, along with other factors that should be included in the consideration of route options.
None of the methods listed above will give a complete evaluation of a cycle route or network; it is best to use a mix of methods and multi-criteria analysis. The following points help to select suitable methods:
GIS can be used to apply methods based on quantitative data (eg LoS methods using parameters such as traffic volumes, traffic speeds, etc). Outputs from qualitative assessments can also be entered into a GIS to be considered alongside other evaluations. Martin (2015) gives a good example of how GIS has been used to assess the appropriateness of a planned cycling network based on multiple criteria.
Routes should be assessed in their entirety wherever possible. However, it is not uncommon for the project scope to be limited for financial or other reasons. For example, a route may extend through more than one local authority’s area or depend on access to land under the control of another agency. In cases like this, any insurmountable issues with another authority may limit the route’s feasibility.
If the project scope means a route cannot be considered in its entirety, it is important to conduct a less rigorous review beyond the area of detailed assessment. This will help determine any likely physical, financial and political influences that could render a project unfeasible in the future.
Similarly, evaluators should consider access to/from the route at each end and at all locations along the way. For example, some projects stop immediately before an intersection where cycling access may be difficult. Furthermore, while some routes may provide good through-route connectedness, it may be difficult to turn on or off the route at some intermediate locations.
A technique for ensuring the whole route meets the required LoS to suit the intended target audience is to evaluate each individual homogenous section. A sub-standard section will become a barrier to cycling the entire route, and therefore cycling numbers will not be as high as anticipated. This is linked to the key route requirement of coherence (see General route requirements in People who cycle).
Due to their complexity, intersections and crossings commonly have lower LoS scores for cycling than sections between intersections. Therefore more attention needs to be given to them.
The underlying goal of any cycling project is to provide a suitable level of service (however this is defined) for the intended cycling target audience. A route’s potential to achieve this goal should be evaluated throughout the planning and design process, from the earliest stages. It is usually technically and financially easier to make significant changes at an earlier stage of a project than to correct or retro-fit things later.
Future demand should be considered: if provision cannot accommodate an increase in user volumes, the level of service will decrease over time. It is also important to consider future changes in the general transport network that may affect a route’s ability to provide for cycling.
In conjunction with the assessment methods presented above, there are several other factors that should be included throughout the evaluation process:
Any evaluation of cycle facilities must include consideration of the financial commitment required to implement them. Any measures must be both viable and represent value for money. Where financial assistance from the Transport Agency is sought, economic evaluations should follow the Monetised benefits and costs manual.
To assist planners and designers to understand costs for delivering cycle facilities, and to be able to compare costs of different facility types, the Cycle Facility Cost Estimation Tool [XLSX, 15 MB] can be used. Instructions on how to use the cost estimation tool are included within the spreadsheet tool.
Projects should also be assessed for their effects on the environment; ideally any adverse effects should be minimised and mitigated as much as possible.
Adverse effects on historic heritage should be avoided or minimised. An evaluation of cycle facilities should check whether there are any historic/heritage sites of interest within 200 metres of the planned route. Heritage features may add to a project, for example, heritage structures may be repurposed as part of a trail.
The Considering historic heritage in walking and cycling projects information sheet describes heritage considerations for cycling projects.
The effects of cycling projects on other road users, authorities or property owners should be considered. The network hierarchy may dictate the extent to which provision for other modes can be compromised due to a cycling project.
The political ‘climate’ and public views towards cycling can strongly dictate the amount of effort necessary to approve a proposal. Such factors will not influence the ideal solution, but they will dictate how much effort is required to convince others that this solution is the best.
Having identified a range of alternative route options, the cycle route option assessment process concludes with the selection of the preferred option(s).
The most commonly used assessment technique is a multi-criteria analysis (MCA). MCA establishes preferences between options by reference to an explicit set of criteria, usually the project objectives that have been identified at the outset. This process provides final decision makers and the community with a deeper level of transparency and ultimately confidence in the resulting outcome.
A standard feature of MCA is a performance matrix in which each row (or column) describes an option and each column (or row) describes the performance of the options against each objective/criterion.
Performance can be gauged either quantitatively or qualitatively. Quantitative assessments are based on measurable levels (eg ‘less than 5 predicted crashes per year’ or ‘3000–5000 vehicles per day’), whereas qualitative assessments generally rely on comparison with a general description of a category (eg ‘has high degree of safety’ or ‘moderate traffic volumes on road’). A system of scoring each criterion may be applied to give an overall score for a route and enable comparison between routes. In some cases, it may be necessary to satisfy key criteria (eg safety) before the others can be considered.
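A minimal sketch of a weighted performance matrix is shown below; the criteria, weights and option scores are illustrative assumptions only, and any weighting scheme used in practice should be agreed by the assessment team.

```python
# Sketch of an MCA performance matrix: rows are options, columns are criteria,
# each cell is a score (here 1-5), and criterion weights are applied before summing.
# Criteria, weights and scores are illustrative assumptions only.

criteria = ["safety", "directness", "coherence", "attractiveness", "comfort"]
weights = {"safety": 0.35, "directness": 0.20, "coherence": 0.15,
           "attractiveness": 0.15, "comfort": 0.15}

options = {
    "Route A": {"safety": 4, "directness": 3, "coherence": 4, "attractiveness": 3, "comfort": 4},
    "Route B": {"safety": 2, "directness": 5, "coherence": 3, "attractiveness": 4, "comfort": 3},
}

def weighted_score(scores):
    """Sum of criterion scores multiplied by their weights."""
    return sum(weights[c] * scores[c] for c in criteria)

# Example gate: an option must first score at least 3 on safety before its
# overall weighted score is considered (cf. the 'Is it safe?' test below).
eligible = {name: scores for name, scores in options.items() if scores["safety"] >= 3}
print("Options passing the safety gate:", list(eligible))

for name, scores in eligible.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```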
The Christchurch City Council (CCC) has adopted an assessment methodology for its Major Cycleway Routes [PDF, 4.7 MB] (MCR) that uses the five main cycle design objectives (safety, coherence, directness, attractiveness and comfort), supplemented with objectives related to risks to delivery and practical matters. The assessment used for the CCC route selection is based on a qualitative scoring as shown below, where an option must first pass the ‘Is it safe?’ test before being judged against other objectives. This approach is also used for assessing MCR facility types.
A key advantage of the MCA process is that a sensitivity analysis can determine how variations in scoring and weighting influence the overall outcome. This can be useful when the weighting and scoring of some criteria are debated by the assessment team or external parties. The assessment team should include a range of professional stakeholders and can include external stakeholders such as community representatives and elected members.
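A simple sensitivity check can be sketched as below: each criterion weight is scaled up and down in turn and the preferred option recomputed to see whether the ranking is stable. All data and the scaling factors are assumptions for illustration.

```python
# Sketch of a simple MCA sensitivity test: vary each criterion weight up and down,
# renormalise, and check whether the preferred option changes. Data is illustrative.

criteria = ["safety", "directness", "comfort"]
base_weights = {"safety": 0.5, "directness": 0.3, "comfort": 0.2}
options = {
    "Route A": {"safety": 4, "directness": 3, "comfort": 4},
    "Route B": {"safety": 3, "directness": 5, "comfort": 3},
}

def best_option(weights):
    """Return the option with the highest normalised weighted score."""
    total = sum(weights.values())
    scores = {name: sum(weights[c] / total * s[c] for c in criteria)
              for name, s in options.items()}
    return max(scores, key=scores.get)

baseline = best_option(base_weights)
for c in criteria:
    for factor in (0.5, 1.5):  # halve and increase each weight by 50%
        trial = dict(base_weights)
        trial[c] *= factor
        if best_option(trial) != baseline:
            print(f"Ranking changes when the weight on '{c}' is scaled by {factor}")
print(f"Baseline preferred option: {baseline}")
```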