Ethical, cultural, legal, social, reputational, environmental, contractual, financial and other considerations can also be relevant. An evaluation of the significance of a risk compared to other risks is often based on comparing an estimate of the magnitude of the risk with criteria that are directly related to thresholds set around the objectives of the organization.
Comparison with these criteria can inform an organization which risks should be focused on for treatment, based on their potential to drive outcomes outside of thresholds set around objectives. The magnitude of risk is seldom the only criterion relevant to decisions about the significance of risk; other relevant factors can include sustainability. For such decisions, several criteria might need to be met and trade-offs between competing objectives might be required. Techniques for evaluating the significance of risk are described in Annex B.
Criteria relevant to the decision should be identified, and the way in which criteria are to be weighted or trade-offs otherwise made should be decided, accounted for, and the information recorded and shared. In setting criteria, the possibility that costs and benefits may differ for different stakeholders should be considered. The way in which different forms of uncertainty are to be taken into account should be decided. Relevant techniques are described in Annex B. The information gathered provides an input to statistical analysis, models or to the techniques described in Annexes A and B.
In some cases, the information can be used by decision makers without further analysis. The information needed at each point depends on the results of earlier information gathering, the purpose and scope of the assessment, and the method or methods to be used for analysis.
The way information is to be collected, stored, and made available should be decided. The records of the outputs of the assessment that are to be kept should be decided, along with how those records are to be made, stored, updated and provided to those who might need them. Sources of information should always be indicated.
Data can be collected or derived, for example, from measurements, experiments, interviews and surveys. Typically, data directly or indirectly represent past losses or benefits. Examples include project failures or successes, the number of complaints, financial gains or losses, health impacts, injuries and fatalities, etc. Additional information might also be available such as the causes of failures or successes, sources of complaints, the nature of injuries, etc.
Data can also include the output from models or other analysis techniques. When the data to be analysed are obtained from sampling, the statistical confidence that is required should be stated so that sufficient data is collected.
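As an illustration only, the following minimal sketch (in Python, with hypothetical figures) shows one common way of checking that a stated confidence requirement translates into a sufficient sample size, using the normal approximation for a proportion; neither the method nor the numbers are prescribed by this document.

```python
import math

def required_sample_size(p_est, margin, z=1.96):
    """Sample size needed so that a proportion estimated as p_est is known
    to within +/- margin, at the confidence level implied by z
    (z = 1.96 corresponds to ~95 % confidence, normal approximation)."""
    return math.ceil(z ** 2 * p_est * (1.0 - p_est) / margin ** 2)

# Hypothetical example: pilot data suggest ~5 % of items are non-conforming,
# and the assessment needs that proportion known to within +/- 2 %.
n = required_sample_size(p_est=0.05, margin=0.02)
print(f"Collect at least {n} observations")   # -> 457
```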
Where no statistical analysis is needed this should be stated. If the data or results from previous assessments are available, it should first be established whether there has been any change in context and, if so, whether the earlier data or results remain relevant.
Limitations and uncertainties in data should be identified and understood. Past data cannot be assumed to continue to apply into the future, but they can give an indication to decision makers of what is more or less likely to occur in the future.
The purpose of a model is to transform what might be an inherently complex situation into simpler terms that can be analysed more easily. A model can be used to help understand the meaning of data and to simulate what might happen in practice under different conditions. A model may be physical, represented in software or be a set of mathematical relationships.
Each of these steps can involve approximations, assumptions and expert judgement and, if possible, they should be validated by people independent of the developers. Critical assumptions should be reviewed against available information to assess their credibility. Comprehensive documentation of the model and the theories and assumptions on which it is based should be kept, sufficient to enable validation of the model. Software programmes used for modelling and analysis often provide a simple user interface and a rapid output, but these characteristics might lead to invalid results that are unnoticed by the user.
Commercial software is often a "black box" (commercial in confidence) and might contain any of these errors. New software should be tested using a simple model with inputs that have a known output, before progressing to test more complex models.
The testing details should be retained for use on future version updates or for new software analysis programmes. Errors in the constructed model can be checked by increasing or decreasing an input value to determine whether the output responds as expected.
This can be applied to each of the various inputs. Data input errors are often identified when varying the data inputs. This approach also provides information on the sensitivity of the model to data variations. A good understanding of the mathematics relevant to the particular analysis is recommended to avoid erroneous conclusions. Not only are the errors described above likely, but the selection of a particular programme might also be inappropriate.
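A minimal sketch of the input-perturbation check described above, using a hypothetical expected-annual-loss model; the model, parameter names and values are illustrative assumptions only.

```python
def expected_annual_loss(frequency, cost_per_event, mitigation):
    """Hypothetical model: events per year x cost per event, reduced by a
    mitigation factor between 0 (no effect) and 1 (fully effective)."""
    return frequency * cost_per_event * (1.0 - mitigation)

base = {"frequency": 2.0, "cost_per_event": 50_000.0, "mitigation": 0.4}
base_out = expected_annual_loss(**base)

for name in base:
    for factor in (0.9, 1.1):                      # vary each input by +/- 10 %
        varied = dict(base, **{name: base[name] * factor})
        change = (expected_annual_loss(**varied) - base_out) / base_out * 100.0
        print(f"{name} x{factor}: output change {change:+.1f} %")
# A change in the unexpected direction, or no change at all, flags a possible
# modelling or data-input error; the size of the change indicates sensitivity.
```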
It is easy to follow a programme and assume that the answer will therefore be right. Evidence should be gathered to check that the outputs are reasonable. Factors to consider when selecting a particular technique for these activities are described in Clause 7. In general, analysis can be descriptive (such as a report of a literature review, a scenario analysis or a description of consequences) or quantitative, where data are analysed to produce numerical values.
In some cases, rating scales can be applied to compare particular risks. The way in which risk is assessed and the form of the output should be compatible with any defined criteria. For example, quantitative criteria require a quantitative analysis technique which produces an output with the appropriate units.
Mathematical operations should be used only if the chosen metrics allow. In general, mathematical operations should not be used with ordinal scales. Even with fully quantitative analysis, input values are usually estimates. A level of accuracy and precision should not be attributed to results beyond that which is consistent with the data and methods employed.
All sources of uncertainty and both beneficial and detrimental effects might be relevant, depending on the context and scope of the assessment. Techniques for identifying risk usually make use of the knowledge and experience of a variety of stakeholders (see Annex B). Physical surveys can also be useful in identifying sources of risk or early warning signs of potential consequences.
The output from risk identification can be recorded as a list of risks with events, causes and consequences specified, or using other suitable formats. Whatever techniques are used, risk identification should be approached methodically and iteratively so that it is thorough and efficient. Risk should be identified early enough to allow actions to be taken whenever possible.
However, there are occasions when some risks cannot be identified during a risk assessment. A mechanism should therefore be put in place for capturing emerging risks and recognizing early warning signs of potential success or failure. Techniques for identifying risk are described in Annex B. Sources of risk can include events, decisions, actions and processes, both favourable and unfavourable, as well as situations that are known to exist but where outcomes are uncertain.
Any form of uncertainty described in Clause 4 can be a source of risk. Events and consequences can have multiple causes or causal chains. Risk can often only be controlled by modifying risk drivers. They influence the status and development of risk exposures, and often affect more than one risk.
As a result, risk drivers often need more and closer attention than sources of individual risks. Techniques for determining sources, causes and drivers of risk are described in Annex B. NOTE A risk can have more than one control and controls can affect more than one risk. A distinction should be made between controls that change likelihood, consequences or both, and controls that change how the burden of risk is shared between stakeholders.
For example, insurance and other forms of risk financing do not directly affect the likelihood of an event or its outcomes but can make some of the consequences more tolerable to a particular stakeholder by reducing their extent or smoothing cash flow.
Any assumptions made during risk analysis about the actual effect and reliability of controls should be validated where possible, with a particular emphasis on individual or combinations of controls that are assumed to have a substantial modifying effect. This should take into account information gained through routine monitoring and review of controls.
Techniques for analysing controls are described in Annex B. Consequential effects (domino or knock-on effects, where one consequence leads to another) should be considered where relevant. The types of consequence to be analysed should have been decided when planning the assessment.
The context statement should be checked to ensure that the consequences to be analysed align with the purpose of the assessment and the decisions to be made. This can be revisited during the assessment as more is learned. The magnitude of consequences can be expressed quantitatively as a point value or as a distribution. Consideration of the full distribution associated with a consequence provides complete information. It is possible to summarize the distribution in the form of a point value such as the expected value (mean), variation (variance), or the percentage in the tail or some other relevant part of the distribution (percentile).
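The following sketch illustrates these alternative point summaries on a single hypothetical (and deliberately non-normal) consequence distribution; the distribution and its parameters are assumptions made only for illustration.

```python
import random
import statistics

random.seed(1)
# Hypothetical consequence distribution: a right-skewed (lognormal) cost impact.
losses = [random.lognormvariate(10.0, 1.0) for _ in range(10_000)]

mean_loss = statistics.fmean(losses)                 # expected value (mean)
variance = statistics.pvariance(losses)              # variation (variance)
p95 = sorted(losses)[int(0.95 * len(losses))]        # tail value (95th percentile)

print(f"mean {mean_loss:,.0f}, variance {variance:,.0f}, 95th percentile {p95:,.0f}")
# The mean alone hides the heavy tail that the 95th percentile reveals.
```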
It should not be assumed that data relevant to risk necessarily follows a normal distribution. In some cases information can be summarized as a qualitative or semi-quantitative rating which can be used when comparing risks.
The magnitude of consequences might also vary according to other parameters. For example, the health consequences of exposure to a chemical generally depend on the dose to which the person or other species is exposed. For this example, the risk is usually represented by a dose-response curve which depicts the probability of a specified end point as a function of dose.
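As a sketch of such a relationship, the following uses an illustrative logistic dose-response form; the function and its parameter values are hypothetical and are not drawn from any real toxicological data.

```python
import math

def p_endpoint(dose, ed50=10.0, slope=1.5):
    """Illustrative logistic dose-response curve: probability of the specified
    end point as a function of dose; ed50 is the dose giving a 50 % response."""
    if dose <= 0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-slope * (math.log(dose) - math.log(ed50))))

for dose in (1, 5, 10, 20, 50):
    print(f"dose {dose:>2}: P(end point) = {p_endpoint(dose):.2f}")
```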
Consequences might also change over time. For example, the adverse impacts of a fault might become more severe the longer the fault exists. Appropriate techniques should be selected to take this into account. Sometimes consequences result from exposures to multiple sources of risk: for example, environmental or human health effects from the exposure to biological, chemical, physical, and psychosocial sources of risk.
In considering multiple exposures the possibility of synergistic effects should be taken into account as well as the influence of the duration and extent of exposure.
The parameter to which a likelihood value applies should be explicitly stated and the event or consequence whose likelihood is being stated should be clearly and precisely defined. It can be necessary to include a statement about exposure and duration to fully define likelihood. Likelihood can be described in a variety of ways, including as an expected probability or frequency, or in descriptive terms.
There can be uncertainty in the likelihood which can be shown as a distribution of values representing the degree of belief that a particular value will occur. Where a percentage is used as a measure of likelihood the nature of the ratio to which the percentage applies should be stated.
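A minimal sketch of expressing likelihood as a distribution of degree of belief rather than as a single value, assuming hypothetical event counts and using a beta distribution purely for illustration.

```python
import random
import statistics

random.seed(2)
events, years = 3, 20          # hypothetical record: 3 events in 20 years

# Illustrative degree-of-belief distribution for the annual event probability:
# Beta(events + 1, non-event years + 1).
samples = sorted(random.betavariate(events + 1, years - events + 1)
                 for _ in range(10_000))

median = statistics.median(samples)
low, high = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"annual likelihood: median {median:.2f}, 90 % interval {low:.2f} to {high:.2f}")
# Reporting the interval, with the time period (per year) stated explicitly,
# conveys uncertainty that a single point value would hide.
```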
To minimize misinterpretations when expressing likelihood, either qualitatively or quantitatively, the time period and population concerned should be explicit and consistent with the scope of the particular assessment.
There are many possible biases which can influence estimates of likelihood. Furthermore, interpretation of the likelihood estimate can vary depending on the context within which it is framed.
Care should be taken to understand the possible effects of individual cognitive and cultural biases. Techniques for understanding consequences and likelihood are described in Annex B. Risks seldom occur in isolation: for example, multiple consequences can arise from a single cause, or a particular consequence might have multiple causes.
The occurrence of some risks may make the occurrence of others more or less likely, and these causal links can form cascades or loops. To achieve a more reliable assessment of risk where causal links between risks are significant, it can be useful to create a causal model that incorporates the risks in some form. Common themes can be sought within the risk information such as common causes or drivers of risk, or common outcomes.
Interactions between risks can have a range of impacts on decision making, for example, escalating the importance of activities which span multiple connected risks or increasing the attractiveness of one option over others. Risks might be susceptible to common treatments, or there can be situations such that treating one risk has positive or negative implications elsewhere. Treatment actions can be consolidated at times to significantly reduce the amount of work and more effectively balance available resources.
A coordinated treatment plan should take account of these factors rather than assuming that each risk should be treated independently. Techniques for analysing interactions and dependencies are described in Annex B.
Comparisons of risk can involve qualitative, semi-quantitative or quantitative measures. In a semi-quantitative approach, numeric descriptors are added to scale points, the meanings of which are described qualitatively; points on the scale are often set up to have a logarithmic relationship to fit with data. The use of semi-quantitative scales can lead to misinterpretations if the basis for any calculations is not explained carefully. Therefore, semi-quantitative approaches should be validated and used with caution.
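The following sketch of a hypothetical five-point likelihood scale with a logarithmic relationship to underlying frequencies illustrates why the basis for any calculation on scale points needs to be explained and validated.

```python
# Hypothetical 1-5 likelihood scale in which each point is ten times more
# frequent than the one below (a logarithmic relationship).
scale = {1: 1e-4, 2: 1e-3, 3: 1e-2, 4: 1e-1, 5: 1.0}   # events per year

a, b = 2, 4
print("sum of scale points:", a + b)                    # 6 -- no defined meaning
print("underlying frequencies:", scale[a], scale[b])
print("ratio of frequencies:", scale[b] / scale[a])     # a 100-fold difference
# Adding or averaging the scale points treats a 100-fold difference in frequency
# as if it were additive, which is why any such operation needs to be defined
# and validated before use.
```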
Where a risk is analysed in quantitative terms, it should be ensured that appropriate units and dimensions are used and carried over through the assessment. Qualitative and semi-quantitative techniques can be used only to compare risks with other risks measured in the same way or with criteria expressed in the same terms.
They cannot be used for directly combining or aggregating risks and they are very difficult to use in situations where there are both positive and negative consequences or when trade-offs are to be made between risks. When quantitative estimates for a consequence and its likelihood are combined as a simple product to provide a magnitude for a risk, information can be lost. In particular, there is no distinction between risks with high consequence and low likelihood and those with low consequences that occur frequently.
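As an illustration of this loss of information, the following sketch compares two hypothetical risks whose frequency-consequence products are identical but whose profiles are very different.

```python
# Two hypothetical risks with the same frequency x consequence product.
risk_a = {"frequency": 0.001, "consequence": 10_000_000}   # rare, catastrophic
risk_b = {"frequency": 100.0, "consequence": 100}          # frequent, minor

for name, r in (("A", risk_a), ("B", risk_b)):
    product = r["frequency"] * r["consequence"]
    print(f"risk {name}: frequency {r['frequency']}/year, "
          f"consequence {r['consequence']}, product {product:,.0f}/year")
# Both products equal 10,000 per year, yet risk A could be intolerable while
# risk B is routine: the single product value hides the distinction.
```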
To compensate for this, a weighting factor may be applied to either the consequence or the likelihood, but this should be used with care. Risk cannot always be adequately described or estimated as a single value representing the likelihood of a specific consequence. In such cases, estimating a valid magnitude for risk in terms of a single likelihood and consequence is not possible. When a risk has a distribution of possible consequences, a measure of risk can be obtained as the probability-weighted average of the consequences, i.e. the expected value.
However, this might not always be a good measure of risk because it reflects the mean consequence of the distribution. This results in loss of information about less likely consequences that can be severe and hence important for understanding risk. Techniques for dealing with extreme values are not included in this document. Consequence based metrics such as the maximum credible loss or probable maximum loss are mainly used when it is difficult to define which controls have the capability of failing or where there is insufficient data on which to base estimates of likelihood.
The magnitude of risk depends on the assumptions made about the presence and effectiveness of relevant controls. Practitioners often use terms such as inherent or gross risk (for the situation where those controls which can fail are assumed to do so) and residual or net risk (for the level of a risk when the controls are assumed to operate as intended).
However, it is difficult to define these terms unambiguously and it is therefore advisable to always state explicitly the assumptions made about controls. When reporting a magnitude of risk, either qualitatively or quantitatively, the uncertainties associated with assumptions and with the input and output parameters should be described. Provided the risks are characterized by a single consequence, measured in the same units, such as monetary value, they can in principle be combined.
That is, they can be combined only when consequences and likelihood are stated quantitatively and the units are consistent and correct. In some situations, a measure of utility can be used as a common scale to quantify and combine consequences that are measured in different units. Developing a single consolidated value for a set of more complex risks loses information about the component risks. In addition, unless great care is taken, the consolidated value can be inaccurate and has the potential to be misleading.
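A minimal sketch of using a utility score as a common scale for consequences measured in different units; the consequence types, scaling functions and weights are hypothetical and would in practice need to be agreed with stakeholders and recorded.

```python
# Hypothetical (dis)utility functions mapping unlike consequences onto a 0-1 scale.
def utility_cost(loss_eur, worst=1_000_000.0):
    return min(loss_eur / worst, 1.0)

def utility_delay(days, worst=180.0):
    return min(days / worst, 1.0)

# A risk with both a financial and a schedule consequence, combined with
# hypothetical weights of 0.6 and 0.4.
combined = 0.6 * utility_cost(250_000) + 0.4 * utility_delay(30)
print(f"combined disutility: {combined:.2f}")   # 0.6*0.25 + 0.4*0.17 = 0.22
# The weights encode a trade-off judgement and should be recorded, since the
# consolidated value hides the component consequences.
```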
All methods of aggregating risks to a single value have underlying assumptions which should be understood before being applied. Data should be analysed to seek correlations and dependencies which will affect how risks combine.
Modelling techniques used to produce an aggregate level of risk should be supported by scenario analysis and stress testing. Where models incorporate calculations involving distributions, they should include correlations between those distributions in an appropriate manner.
If correlation is not taken into account appropriately, the outcomes will be inaccurate and may be grossly misleading. Consolidating risks by simply adding them up is not a reliable basis for decision making and could lead to undesired results. Monte Carlo simulation can be used to combine distributions (see Annex B). Qualitative or semi-quantitative measures of risk cannot be directly aggregated. Equally, only general qualitative statements can be made about the relative effectiveness of controls based on qualitative or semi-quantitative measures of changes in level of risk.
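A minimal sketch, assuming NumPy is available, of combining two loss distributions by Monte Carlo simulation with and without correlation; the distributions and the correlation coefficient are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
mean, sd = [100.0, 100.0], 30.0       # two hypothetical loss distributions

def aggregate_p95(correlation):
    """95th percentile of the sum of two normal losses with the given correlation."""
    cov = np.array([[sd ** 2, correlation * sd ** 2],
                    [correlation * sd ** 2, sd ** 2]])
    losses = rng.multivariate_normal(mean, cov, size=n)
    return float(np.percentile(losses.sum(axis=1), 95))

print(f"95th percentile, independent:     {aggregate_p95(0.0):.0f}")
print(f"95th percentile, correlation 0.8: {aggregate_p95(0.8):.0f}")
# Treating correlated losses as independent understates the tail of the total.
```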
Relevant data about different risks can be brought together in a variety of ways to assist decision makers. It is possible to conduct a qualitative aggregation based on expert opinion, taking into account more detailed risk information.
The assumptions made and information used to conduct qualitative aggregations of risk should be clearly articulated. Risk to an individual and risk to a group or population can need to be evaluated differently: for example, an individual's risk of a fatality from an event such as a dam failure might need to be considered differently from the same event affecting a group of individuals together.
Societal risk is typically expressed and evaluated in terms of the relationship between the frequency of occurrence of a consequence (F) and the number of people bearing the consequences (N); see the description of F-N diagrams in Annex B. Techniques that provide a measure of risk are described in Annex B.
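As an illustration of the F-N representation of societal risk described above, the following sketch computes cumulative frequencies from a hypothetical set of scenarios.

```python
# Hypothetical scenarios: (annual frequency, number of fatalities N).
scenarios = [(1e-2, 1), (1e-3, 10), (1e-4, 50), (1e-5, 200)]

# F(N): cumulative annual frequency of events causing N or more fatalities.
for n_threshold in (1, 10, 50, 200):
    f_n = sum(freq for freq, n in scenarios if n >= n_threshold)
    print(f"F(N >= {n_threshold:>3}) = {f_n:.2e} per year")
# Plotting these (N, F) pairs on log-log axes gives an F-N diagram, which can
# then be compared with societal risk criteria lines.
```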
Verification involves checking that the analysis was done correctly; validation involves checking that the right analysis was done to achieve the required objectives. For some situations, verification and validation can involve independent review processes. Uncertainties and their implications should always be communicated to decision makers. When a lack of reliable data is recognized during the analysis, further data should be collected, if practicable. This can involve implementing new monitoring arrangements.
Alternatively, the analysis process should be adjusted to take account of the data limitations. A sensitivity analysis can be carried out to evaluate the significance of uncertainties in data or in the assumptions underlying the analysis. Sensitivity analysis involves determining the relative change to the results brought about by changes in individual input parameters. It is used to identify data that need to be accurate, and those that are less sensitive and hence have less effect upon overall accuracy.
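A minimal sketch of a one-at-a-time sensitivity analysis on a hypothetical net-present-value model of a risk treatment option; the model, parameters and ranges are illustrative assumptions.

```python
def npv_of_treatment(cost, annual_benefit, years, rate):
    """Hypothetical model: net present value of a risk treatment option."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1)) - cost

base = {"cost": 120_000.0, "annual_benefit": 40_000.0, "years": 5, "rate": 0.08}
ranges = [("cost", 100_000.0, 150_000.0),
          ("annual_benefit", 30_000.0, 50_000.0),
          ("rate", 0.05, 0.12)]

spans = {}
for name, low, high in ranges:                 # vary one parameter at a time
    outs = [npv_of_treatment(**dict(base, **{name: v})) for v in (low, high)]
    spans[name] = max(outs) - min(outs)

for name, span in sorted(spans.items(), key=lambda kv: -kv[1]):
    print(f"{name:>15}: output range {span:,.0f}")
# Parameters with the widest output range drive the result most; their data
# quality and on-going monitoring matter most.
```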
Parameters to which the analysis is sensitive and the degree of sensitivity should be stated where appropriate.
Parameters that are critical to the assessment and that are subject to change should be identified for on-going monitoring, so that the risk assessment can be updated, and, if necessary, decisions reconsidered. Where a sensitivity analysis indicates parameters of particular importance to the outcome of an analysis, these should also be considered for monitoring.
Assessments should be reviewed periodically to identify whether change has occurred, including changes in the context or in assumptions, and whether there is new information or new methods available. NOTE An understanding of risk can inform actions even where no explicit decision-making process is followed. The factors to consider when making decisions and any specific criteria should have been defined as part of establishing the context for the assessment (see Clause 6).
This provides an input into decisions about whether risk is acceptable or requires treatment, and any priorities for treatment. Some risks may be accepted for a finite time, for example to allow time to implement treatments.
The assessor should be clear about the mechanisms for temporarily accepting risks and the process to be used for subsequent reconsideration. This method has some limitations (see Annex B). Once risks have been evaluated and treatments decided, the risk assessment process can be repeated to check that proposed treatments have not created additional adverse risks and that the risk remaining after treatment is within the organization's risk appetite.
Techniques that can be used when comparing options that involve uncertainty are described in Annex B. The way in which records are to be reviewed and updated should be defined. It follows that any documentation or records should be provided in a timely manner and be in a form that can be understood by those who will read it. Documents should also provide the necessary technical depth for validation, and sufficient detail to preserve the assessment for future use.
The information provided should be sufficient to allow both the processes followed and the outcomes to be reviewed and validated. Assumptions made, limitations in data or methods, and reasons for any recommendations made should be clear. Risk should be expressed in understandable terms, and the units in which quantitative measures are expressed should be clear and correct.
Those presenting the results should characterize their confidence, or that of their team, in the accuracy and completeness of the results. Uncertainties should be adequately communicated so that the report does not imply a level of certainty beyond the reality. Techniques for recording and reporting are described in Annex B. Annexes A and B list and further explain some commonly used techniques. They describe the characteristics of each technique and its possible range of application, together with its inherent strengths and weaknesses.
Many of the techniques described in this document were originally developed for particular industries seeking to manage particular types of unwanted outcomes. Several of the techniques are similar, but use different terminologies, reflecting their independent development for a similar purpose in different sectors. Over time the application of many of the techniques has broadened, for example extending from technical engineering applications to financial or managerial situations, or to consider positive as well as negative outcomes.
New techniques have emerged and old ones have been adapted to new circumstances. The techniques and their applications continue to evolve. There is potential for enhanced understanding of risk by using techniques outside their original application.
Annexes A and B therefore indicate the characteristics of techniques that can be used to determine the range of circumstances to which they can be applied. In general terms, the number and type of technique selected should be scaled to the significance of the decision, and take into account constraints on time and other resources, and opportunity costs. In deciding whether a qualitative or quantitative technique is more appropriate, the main criteria to consider are the form of output of most use to stakeholders and the availability and reliability of data.
Quantitative techniques generally require high quality data if they are to provide meaningful results. However, in some cases where data is not sufficient, the rigour needed to apply a quantitative technique can provide an improved understanding of the risk, even though the result of the calculation might be uncertain.
There is often a choice of techniques relevant for a given circumstance. Several techniques might need to be considered, and applying more than one technique can sometimes provide useful additional understanding.
The characteristics of the techniques relevant to these requirements are listed in Table A. As the degree of uncertainty, complexity and ambiguity of the context increases, the need to consult a wider group of stakeholders will increase, with implications for the combination of techniques selected.
Some of the techniques described in this document can be applied during steps of the ISO risk management process in addition to their usage in risk assessment. Application of the techniques to the risk management process is illustrated in Figure A. Annex B contains an overview of each technique, its use, its inputs and outputs, its strengths and limitations and, where applicable, a reference for where further detail can be found.
Within each grouping, techniques are arranged alphabetically and no order of importance is implied. The majority of techniques in Annex B assume that risks or sources of risk can be identified.
There are also techniques which can be used to indirectly assess residual risk by considering controls and requirements that are in place see for example IEC [36]. While this document discusses and provides example techniques, the techniques described are non-exhaustive and no recommendation is made as to the efficacy of any given technique in any given circumstance.
Care should be taken in selecting any technique to ensure that it is appropriate, reliable and effective in the given circumstance. The techniques described represent structured ways of looking at the problem in hand that have been found useful in particular contexts.
The list is not intended to be comprehensive but covers a range of commonly used techniques from a variety of sectors. For simplicity the techniques are listed in alphabetical order without any priority. Each technique is described in more detail in Annex B, as referenced in column 1 of Table A. Fragments of the individual table entries note, for example, that a basic Bayesian network has variables representing uncertainties, while an extended version, known as an influence diagram, also includes variables representing consequences and actions; that decision tree outcomes are usually expressed in monetary terms or in terms of utility, and that an alternative representation of a decision tree is an influence diagram; that in the Delphi technique people participate individually but receive feedback on the responses of others after each set of questions; that variations of fault tree analysis include a success tree, where the top event is desired, and a cause tree used to investigate past events; and that in scenario analysis risk is considered for each of the scenarios developed. Some of the techniques are also used in other steps of the process. This is illustrated in Figure A.
Obtaining views from stakeholders and experts provides for a breadth of expertise and allows stakeholder involvement. Stakeholder and expert views can be obtained on an individual basis or as part of a group. Views can include disclosure of information, expressions of opinion or creative ideas. Relevant techniques are described in Annex B. In some situations stakeholders have a specific expertise and role, and there is little divergence of opinion.
However, sometimes significantly varying stakeholder views might be expected and there might be power structures and other factors operating that affect how people interact. These factors will affect the choice of method used. The number of stakeholders to be consulted, time constraints and the practicalities of getting all necessary people together at the same time will also influence the choice of method.
Where a group face-to-face method is used, an experienced and skilled facilitator is important to achieving good outputs. Checklists derived from classifications and taxonomies can be used as part of the process (see Annex B). Any technique for obtaining information that relies on people's perceptions and opinions has the potential to be unreliable and suffers from a variety of biases, such as availability bias (a tendency to over-estimate the likelihood of something which has just happened), clustering illusion (the tendency to overestimate the importance of small clusters in a large sample) or bandwagon effect (the tendency to do or believe things because others do or believe the same).
Guidance on function analysis, which can be used to reduce bias and focus creative thinking on aspects which have the greatest impact, is given in EN [4]. The information on which judgements were based and any assumptions made should be reported. In brainstorming, any analysis or critique of the ideas is carried out separately from the generation of ideas. The technique gives the best results when an expert facilitator is available who can provide necessary stimulation but does not limit thinking.
The facilitator stimulates the group to cover all relevant areas and makes sure that ideas from the process are captured for subsequent analysis. Brainstorming can be structured or unstructured. For structured brainstorming the facilitator breaks down the issue to be discussed into sections and uses prepared prompts to generate ideas on a new topic when one is exhausted.
Unstructured brainstorming is often less formal. In both cases the facilitator starts off a train of thought and everyone is expected to generate ideas. The pace is kept up to allow ideas to trigger lateral thinking. The facilitator can suggest a new direction, or apply a different creative thinking tool when one direction of thought is exhausted or discussion deviates too far.
The goal is to collect as many diverse ideas as possible for later analysis. It has been demonstrated that, in practice, groups generate fewer ideas than the same people working individually. Variants such as written or electronic brainstorming encourage more individual participation and can be set up to be anonymous, thus also avoiding personal, political and cultural issues. Quantitative use is possible, but only in its structured form, to ensure that biases are taken into account and addressed, especially when used to involve all stakeholders.
Brainstorming stimulates creativity and is therefore very useful when working on innovative designs, products and processes. Participants need to have between them the expertise, experience and range of viewpoints needed for the problem in hand. A skilled facilitator is normally necessary for brainstorming to be productive. The technique also has limitations, many of which can be overcome by effective facilitation. The Delphi technique is a method to collect and collate judgments on a particular topic through a set of sequential questionnaires.
An essential feature of the Delphi technique is that experts express their opinions individually, independently and anonymously while having access to the other experts' views as the process progresses. The group of experts who form the panel are independently provided with the question or questions to be considered. The information from the first round of responses is analysed and combined and circulated to panellists who are then able to reconsider their original responses.
Panellists respond and the process is repeated until consensus or quasi-consensus is reached. If one panellist or a minority of panellists consistently keep their response, it might indicate that they have important information or an important point of view. The technique can be used in forecasting and policy making, and to obtain consensus or to reconcile differences between experts. It can be used to identify risks (with positive and negative outcomes), threats and opportunities, and to gain consensus on the likelihood and consequences of future events.
It is usually applied at a strategic or tactical level. Its original application was for long-time-frame forecasting, but it can be applied to any time frame. The number of participants can range from a few to hundreds. Written questionnaires can be in pencil-and-paper form or distributed and returned using electronic communication tools including email and the internet. The use of technology systems helps to ensure agility and precision in the compilation of information at each cycle.
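A minimal sketch of how responses from successive Delphi rounds might be collated, using the median and interquartile range as one possible (illustrative) consensus measure; the figures are hypothetical.

```python
import statistics

def summarize(estimates):
    """Median and interquartile range of one round of panel estimates."""
    q1, _, q3 = statistics.quantiles(estimates, n=4)
    return statistics.median(estimates), q3 - q1

rounds = {   # hypothetical panel estimates of an event probability, by round
    1: [0.05, 0.20, 0.10, 0.40, 0.15, 0.30],
    2: [0.10, 0.18, 0.12, 0.25, 0.15, 0.20],
    3: [0.12, 0.16, 0.13, 0.18, 0.15, 0.17],
}
for r, estimates in rounds.items():
    median, iqr = summarize(estimates)
    print(f"round {r}: median {median:.2f}, interquartile range {iqr:.2f}")
# Feeding each round's summary back lets panellists reconsider their responses;
# a narrowing interquartile range is one possible (illustrative) stopping rule.
```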
In the nominal group technique, views are first sought individually, with no interaction between group members, and are then discussed by the group.
If group dynamics mean that some voices have more weight than others, ideas can be passed on to the facilitator anonymously.
Participants can then seek further clarification. The technique is also useful for prioritizing ideas within a group. A semi-structured interview is similar to a structured interview, but allows more freedom for a conversation to explore issues which arise.