Project Risk Management

Project risk management is an important aspect of project management. Project risk is defined by PMI as “an uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objectives.”

Project risk management remains a relatively undeveloped discipline, distinct from the risk management practiced in operational, financial and underwriting contexts. This gulf is due to several factors: risk aversion (especially public understanding of risk in social activities), confusion in the application of risk management to projects, and the additional sophistication of probability mechanics beyond those of accounting, finance and engineering.

In operational, financial and underwriting risk management, the concepts of risk, risk management and individual risks are nearly interchangeable, the impacts being personnel or monetary respectively. Impacts in project risk management are more diverse, overlapping monetary, schedule, capability, quality and engineering disciplines. For this reason, project risk management must distinguish among these impact types (a distinction paraphrased from the Department of Defense “Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs”).

An improvement on the PMBOK definition of risk management is to add a future date to the definition of a risk. Mathematically, a risk is then expressed as a probability multiplied by an impact, together with a future impact date and critical dates. This addition of future dates allows predictive approaches.
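For illustration, a minimal sketch in Python of such a risk register entry, with hypothetical probabilities, impacts and dates:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    name: str
    probability: float   # 0.0 .. 1.0
    impact: float        # e.g. cost impact in dollars
    impact_date: date    # future date when the impact would be felt

    def exposure(self) -> float:
        # Classic risk exposure: probability multiplied by impact.
        return self.probability * self.impact

# Hypothetical register entries.
register = [
    Risk("Supplier slips delivery", 0.30, 50_000, date(2024, 9, 1)),
    Risk("Key engineer leaves", 0.10, 120_000, date(2024, 6, 15)),
]

critical_date = date(2024, 8, 1)
# Predictive view: flag risks whose impact lands before a critical date.
for r in sorted(register, key=lambda r: r.impact_date):
    flag = "BEFORE critical date" if r.impact_date <= critical_date else "after"
    print(f"{r.name}: exposure ${r.exposure():,.0f}, impact {r.impact_date} ({flag})")
```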

Good Project Risk Management depends on supporting organizational factors, having clear roles and responsibilities, and technical analysis.

Chronologically, project risk management may begin with recognizing a threat or examining an opportunity, for example competitor developments or novel products. Because little is defined at this stage, the analysis is frequently performed qualitatively or semi-quantitatively, using product or averaging models. This approach is used to prioritize possible solutions where necessary, as in the sketch below.
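A minimal sketch of the product and averaging scoring models mentioned above, using hypothetical candidates scored on 1-5 ordinal scales:

```python
# Hypothetical candidate threats/opportunities scored on 1-5 ordinal scales.
candidates = {
    "Competitor launches rival product": (4, 3),  # (likelihood, consequence)
    "Novel material becomes available": (2, 5),
    "Regulation tightens next year": (3, 2),
}

# Product model: score = likelihood * consequence; averaging model for contrast.
for name, (likelihood, consequence) in sorted(
        candidates.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    product = likelihood * consequence
    average = (likelihood + consequence) / 2
    print(f"{name}: product={product}, average={average}")
```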

In some instances it is possible to begin an analysis of alternatives, generating cost and development estimates for potential solutions.

Once an approach is selected, more familiar risk management tools and a general project risk management process may be applied to the new project.

Finally, risks must be integrated to provide a complete picture, so projects should be incorporated into enterprise-wide risk management to seize opportunities related to the achievement of their objectives.

To make project management effective, managers use risk management tools. It is necessary to adopt measures that address the project’s risks while supporting the accomplishment of its objectives.

The project risk management (PRM) system should be based on the competences of employees willing to use them to achieve the project’s goal. The system should track all processes in the project and their exposure, as well as the circumstances that generate risk, and determine their effects. Nowadays, Big Data (BD) analysis appears to be an emerging method for creating knowledge from the data generated by different sources in production processes. According to Gorecki, BD seems to be an adequate tool for PRM.

Project Management Triangle

The Project Management Triangle (also called the Triple Constraint, Iron Triangle and “Project Triangle”) is a model of the constraints of project management. While its origins are unclear, it has been used since at least the 1950s. It contends that the quality of work is constrained by the project’s budget, deadlines and scope, that the project manager can trade between these constraints, and that changes in one constraint necessitate changes in the others to compensate, or quality will suffer.

For example, a project can be completed faster by increasing budget or cutting scope. Similarly, increasing scope may require equivalent increases in budget and schedule. Cutting budget without adjusting schedule or scope will lead to lower quality.

In practice, however, trading between constraints is not always possible. For example, throwing money (and people) at a fully staffed project can slow it down. Moreover, in poorly run projects it is often impossible to improve budget, schedule or scope without adversely affecting quality.

The Project Management Triangle is used to analyze projects. It is often misused to define success as delivering the required scope, at a reasonable quality, within the established budget and schedule. The Project Management Triangle is considered insufficient as a model of project success because it omits crucial dimensions of success including impact on stakeholders, learning and user satisfaction.

The time constraint refers to the amount of time available to complete a project. The cost constraint refers to the budgeted amount available for the project. The scope constraint refers to what must be done to produce the project’s end result. These three constraints are often competing constraints: increased scope typically means increased time and increased cost, a tight time constraint could mean increased costs and reduced scope, and a tight budget could mean increased time and reduced scope.

The discipline of project management is about providing the tools and techniques that enable the project team (not just the project manager) to organize their work to meet these constraints.

Another approach to project management is to consider the three constraints as finance, time and human resources. To finish a job in a shorter time, more people can be thrown at the problem, which in turn raises the cost of the project, unless doing the task more quickly reduces costs elsewhere in the project by an equal amount.

As a project management graphic aid, a triangle can show time, resources, and technical objective as the sides of a triangle, instead of the corners. John Storck, a former instructor of the American Management Association’s “Basic Project Management” course, used a pair of triangles called triangle outer and triangle inner to represent the concept that the intent of a project is to complete on or before the allowed time, on or under budget, and to meet or exceed the required scope. The distance between the inner and outer triangles illustrated the hedge or contingency for each of the three elements. Bias could be shown by the distance. His example of a project with a strong time bias was the Alaska pipeline which essentially had to be done on time no matter the cost. After years of development, oil flowed out the end of the pipe within four minutes of schedule. In this illustration, the time side of triangle inner was effectively on top of the triangle outer line. This was true of the technical objective line also. The cost line of triangle inner, however, was outside since the project ran significantly over budget.

James P. Lewis suggests that project scope represents the area of the triangle, and can be chosen as a variable to achieve project success. He calls this relationship PCTS (Performance, Cost, Time, Scope), and suggests that a project can pick any three.

The real value of the project triangle is to show the complexity present in any project. The plane area of the triangle represents the near-infinite variation of priorities that can exist between the three competing values. By acknowledging the limitless variety possible within the triangle, this graphic aid can facilitate better project decisions and planning and ensure alignment among team members and project owners.

The STR model is a mathematical model which views the “triangle model” as a graphic abstraction of the relationship Scope = Time × Resources.

Scope refers to complexity (which can also mean quality). Resources include humans (workers), finance and physical assets. Note that these values are not considered unbounded: if one baker can make a loaf of bread in an hour in an oven, that doesn’t mean ten bakers could make ten loaves in one hour in the same oven, because the oven’s capacity limits throughput.
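A minimal sketch of this bounded-resource point, using the baker-and-oven example:

```python
def loaves_per_hour(bakers: int, oven_capacity: int = 1) -> int:
    # Output scales with resources only up to a physical bound:
    # here the oven, not the number of bakers, limits throughput.
    return min(bakers, oven_capacity)

print(loaves_per_hour(1))    # 1 loaf/hour
print(loaves_per_hour(10))   # still 1 loaf/hour: resources are not unbounded
```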

For analytical purposes, the time required to produce a deliverable is estimated using several techniques. One method is to identify the tasks needed to produce the deliverables documented in a work breakdown structure (WBS). The work effort for each task is estimated, and those estimates are rolled up into the final deliverable estimate.
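A minimal sketch of such a roll-up, assuming a hypothetical WBS in which leaf tasks carry work-effort estimates in hours:

```python
# Hypothetical WBS: leaves carry work-effort estimates (hours);
# parent estimates are rolled up from their children.
wbs = {
    "Deliverable": {
        "Design": {"Wireframes": 16, "Review": 4},
        "Build": {"Backend": 40, "Frontend": 32},
        "Test": 24,
    }
}

def rollup(node):
    # A leaf is a number; an internal node is a dict of children.
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 116 hours for the final deliverable estimate
```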

The tasks are also prioritized, dependencies between tasks are identified, and this information is documented in a project schedule. The dependencies between tasks can affect the length of the overall project (dependency-constrained), as can the availability of resources (resource-constrained). Time is different from all other resources and cost categories.
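For illustration, a minimal sketch of how task dependencies constrain the overall project length; the tasks and durations are hypothetical:

```python
from functools import lru_cache

# Hypothetical tasks: (duration, predecessors). The longest dependency
# chain (the critical path) sets the minimum overall project length.
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name: str) -> int:
    duration, preds = tasks[name]
    return duration + max((earliest_finish(p) for p in preds), default=0)

print(max(earliest_finish(t) for t in tasks))  # 8: path A -> C -> D
```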

Analogous estimating uses the actual cost of previous, similar projects as the basis for estimating the cost of the current project.

According to the Project Management Body of Knowledge (PMBOK), the Project Time Management processes include: Plan Schedule Management, Define Activities, Sequence Activities, Estimate Activity Resources, Estimate Activity Durations, Develop Schedule, and Control Schedule.

Due to the complex nature of the ‘Time’ process group, the project management credential PMI Scheduling Professional (PMI-SP) was created.

Developing an approximation of a project’s cost depends on several variables, including resources, work packages such as labor rates, and mitigating or controlling the influencing factors that create cost variances. Tools used in cost estimation include risk management, cost contingency, cost escalation, and indirect costs. But beyond this basic accounting approach to fixed and variable costs, the economic cost that must be considered includes worker skill and productivity, which is calculated using various project cost estimating tools. This is important when companies hire temporary or contract employees or outsource work.

Project management software can be used to calculate the cost variances for a project.
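As a sketch of the arithmetic such software performs, the standard earned-value variances can be computed from a few hypothetical figures:

```python
# Minimal earned-value sketch with hypothetical figures.
budget_at_completion = 100_000   # BAC: total planned budget
percent_complete = 0.40          # physical work actually done
planned_percent = 0.50           # work scheduled to be done by now
actual_cost = 55_000             # AC: money spent so far

earned_value = percent_complete * budget_at_completion    # EV = 40,000
planned_value = planned_percent * budget_at_completion    # PV = 50,000

cost_variance = earned_value - actual_cost        # CV = -15,000 (over budget)
schedule_variance = earned_value - planned_value  # SV = -10,000 (behind schedule)
print(cost_variance, schedule_variance)
```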

Scope consists of the requirements specified to achieve the end result: the overall definition of what the project is supposed to accomplish, and a specific description of what the end result should be or accomplish. A major component of scope is the quality of the final product. The amount of time put into individual tasks determines the overall quality of the project. Some tasks may require a given amount of time to complete adequately, but given more time could be completed exceptionally. Over the course of a large project, quality can have a significant impact on time and cost (or vice versa).

Together, these three constraints have given rise to the phrase “On Time, On Spec, On Budget.” In this case, the term “scope” is substituted with “spec(ification).”

Traditionally the Project Constraint Model recognised three key constraints: “Cost”, “Time” and “Scope”. These constraints construct a triangle with geometric proportions illustrating the strong interdependent relationship between the factors. If there is a requirement to shift any one of these factors, then at least one of the other factors must also be manipulated.

With mainstream acceptance of the Triangle Model, “Cost” and “Time” appear to be represented consistently. “Scope”, however, is often used interchangeably with other terms, depending on the context of the triangle’s illustration or the perception of the respective project. Scope/Goal/Product/Deliverable/Quality are all relatively similar, generic variations of this, while the above suggestion of ‘People Resources’ offers a more specialised interpretation.

This widespread use of variations implies a level of ambiguity carried by the nuance of the third constraint term, and of course a level of value in the flexibility of the Triangle Model. This ambiguity allows blurred focus between a project’s output and a project’s process, with the example terms above having potentially different impetus in the two contexts. Both “Cost” and “Time”/“Delivery” represent top-level project inputs.

The ‘Project Diamond’ model perpetuates this blurred focus by including “Scope” and “Quality” separately as the ‘third’ constraint. While there is merit in the addition of “Quality” as a key constraining factor, acknowledging the increasing maturity of project management, this model still lacks clarity between output and process. Nor does the Diamond Model capture the analogy of the strong interrelation between the points of the triangle.

PMBOK 4.0 offered an evolved model based on the triple constraint, with six factors to be monitored and managed. This is illustrated as a six-pointed star that maintains the strength of the triangle analogy (two overlaid triangles), while at the same time representing the separation and relationship between project input/output factors on one triangle and project process factors on the other. The star variables are scope, quality, schedule, budget, resources, and risk.

When considering the ambiguity of the third constraint and the suggestions of the “Project Diamond”, it is possible to consider instead the Goal or Product of the project as the third constraint, made up of the sub-factors “Scope” and “Quality”. In terms of a project’s output, both “Scope” and “Quality” can be adjusted, resulting in an overall manipulation of the Goal/Product. This interpretation includes the four key factors in the original triangle’s input/output form. It can even be incorporated into the PMBOK star, illustrating that “Quality” in particular may be monitored separately in terms of project outputs and process. Further to this suggestion, the term “Goal” may best represent change initiative outputs, while “Product” may best represent more tangible outputs.

Design for Six Sigma

Design for Six Sigma (DFSS) is a business process management method related to traditional Six Sigma. It is used in many industries, such as finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. DFSS is relevant for relatively simple items/systems. It is used for product or process design, in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.

There are different options for the implementation of DFSS. Unlike Six Sigma, which is commonly driven via DMAIC (Define – Measure – Analyze – Improve – Control) projects, DFSS has spawned a number of stepwise processes, all in the style of the DMAIC procedure.

DMADV (define – measure – analyze – design – verify) is sometimes synonymously referred to as DFSS, although alternatives such as IDOV (Identify, Design, Optimize, Verify) are also used. The traditional DMAIC Six Sigma process, as usually practiced, focuses on evolutionary and continuous improvement of manufacturing or service process development, and usually occurs after initial system or product design and development have been largely completed. DMAIC Six Sigma as practiced is usually consumed with solving existing manufacturing or service process problems and removing the defects and variation associated with defects. It is clear that manufacturing variations may impact product reliability, so a clear link should exist between reliability engineering and Six Sigma (quality). In contrast, DFSS (or DMADV and IDOV) strives to generate a new process where none existed, or where an existing process is deemed inadequate and in need of replacement. DFSS aims to create a process with the end in mind of optimally building the efficiencies of Six Sigma methodology into the process before implementation; traditional Six Sigma seeks continuous improvement after a process already exists.

DFSS seeks to avoid manufacturing/service process problems by using advanced techniques to avoid process problems at the outset (e.g., fire prevention). When combined, these methods obtain the proper needs of the customer and derive engineering system parameter requirements that increase product and service effectiveness in the eyes of the customer and all other people. This yields products and services that provide great customer satisfaction and increased market share. These techniques also include tools and processes to predict, model and simulate the product delivery system (the processes/tools, personnel and organization, training, facilities, and logistics to produce the product/service). In this way, DFSS is closely related to operations research (e.g., solving the knapsack problem) and workflow balancing. DFSS is largely a design activity requiring tools including: quality function deployment (QFD), axiomatic design, TRIZ, Design for X, design of experiments (DOE), Taguchi methods, tolerance design, robustification and response surface methodology for single or multiple response optimization. While these tools are sometimes used in the classic DMAIC Six Sigma process, they are uniquely used by DFSS to analyze new and unprecedented products and processes. It is a concurrent analysis directed at manufacturing optimization related to the design.
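As one illustration of these tools, a minimal sketch of a two-level full-factorial design of experiments (DOE) with main-effects estimation; the factors and the response function are hypothetical stand-ins for real measurements:

```python
from itertools import product

# Hypothetical two-level full-factorial design: each factor at -1/+1.
factors = ["temperature", "pressure", "time"]
design = list(product([-1, +1], repeat=len(factors)))

# Hypothetical response in place of real measurements.
def response(t, p, s):
    return 50 + 4 * t - 2 * p + 1.5 * s

runs = [(levels, response(*levels)) for levels in design]

# Main effect of a factor: mean response at +1 minus mean response at -1.
for i, name in enumerate(factors):
    hi = [y for levels, y in runs if levels[i] == +1]
    lo = [y for levels, y in runs if levels[i] == -1]
    print(f"{name}: effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.1f}")
```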

Response surface methodology and other DFSS tools use statistical (often empirical) models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and the inadequacies of the model.

Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, George Box’s original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years.
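A minimal sketch of the response-surface idea: fit a quadratic model to hypothetical single-factor measurements and locate the estimated optimum, bearing in mind that the fitted optimum is only an approximation:

```python
import numpy as np

# Hypothetical single-factor experiment: fit a quadratic response surface
# y = b0 + b1*x + b2*x^2 and locate its stationary point.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([3.1, 6.0, 7.2, 6.1, 2.9])   # made-up measurements

X = np.column_stack([np.ones_like(x), x, x**2])
(b0, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)

x_star = -b1 / (2 * b2)   # stationary point of the fitted quadratic
print(f"fitted optimum near x = {x_star:.3f}")
# The estimated optimum is only as good as the model approximating reality.
```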

Proponents of DMAIC, DDICA (Design Develop Initialize Control and Allocate) and Lean techniques might claim that DFSS falls under the general rubric of Six Sigma or Lean Six Sigma (LSS). Both methodologies focus on meeting customer needs and business priorities as the starting-point for analysis.

The tools used for DFSS techniques often vary widely from those used for DMAIC Six Sigma. In particular, DMAIC and DDICA practitioners often use new or existing mechanical drawings and manufacturing process instructions as the originating information for their analysis, while DFSS practitioners often use simulations and parametric system design/analysis tools to predict both cost and performance of candidate system architectures. While it can be claimed that the two processes are similar, in practice the working medium differs enough that DFSS requires different tool sets to perform its design tasks. DMAIC, IDOV and Six Sigma may still be used during depth-first plunges into system architecture analysis and for “back end” Six Sigma processes; DFSS provides the system design processes used in front-end complex system designs. Back-front systems are also used. Done well, this yields 3.4 defects per million design opportunities.

Traditional Six Sigma methodology, DMAIC, has become a standard process optimization tool for the chemical process industries. However, it has become clear that the promise of Six Sigma, specifically 3.4 defects per million opportunities (DPMO), is simply unachievable after the fact. Consequently, there has been a growing movement to implement Six Sigma design, usually called Design for Six Sigma (DFSS), and DDICA tools. This methodology begins with defining customer needs and leads to the development of robust processes to deliver those needs.
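For reference, a minimal sketch of how a DPMO figure maps to a sigma level, using the conventional 1.5-sigma shift:

```python
from statistics import NormalDist

def sigma_level(defects: int, units: int, opportunities: int) -> float:
    # DPMO = defects per million opportunities.
    dpmo = defects / (units * opportunities) * 1_000_000
    # Conventional short-term sigma level includes the 1.5-sigma shift.
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# 3.4 DPMO corresponds to the classic "six sigma" performance level.
print(sigma_level(defects=34, units=100_000, opportunities=100))  # ~6.0
```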

Design for Six Sigma emerged from the Six Sigma and Define-Measure-Analyze-Improve-Control (DMAIC) quality methodologies, which were originally developed by Motorola to systematically improve processes by eliminating defects. Unlike its traditional Six Sigma/DMAIC predecessors, which are usually focused on solving existing manufacturing issues (i.e., “fire fighting”), DFSS aims at avoiding manufacturing problems by taking a more proactive approach to problem solving and engaging the company’s efforts at an early stage to reduce problems that could occur (i.e., “fire prevention”). The primary goal of DFSS is to achieve a significant reduction in the number of nonconforming units and in production variation. It starts from an understanding of customer expectations, needs and Critical to Quality issues (CTQs) before a design can be completed. Typically in a DFSS program, only a small portion of the CTQs are reliability-related (CTR), and therefore reliability does not get center-stage attention in DFSS. DFSS rarely looks at the long-term (after manufacturing) issues that might arise in the product (e.g. complex fatigue issues or electrical wear-out, chemical issues, cascade effects of failures, system-level interactions).

Arguments about what makes DFSS different from Six Sigma demonstrate the similarities between DFSS and other established engineering practices such as probabilistic design and design for quality. In general, Six Sigma with its DMAIC roadmap focuses on improvement of an existing process or processes, while DFSS focuses on the creation of new value with inputs from customers, suppliers and business needs. While traditional Six Sigma may also use those inputs, its focus is again on improvement and not on the design of some new product or system. This also shows the engineering background of DFSS. However, like other methods developed in engineering, there is no theoretical reason why DFSS cannot be used in areas outside of engineering.

Historically, although the first successful Design for Six Sigma projects in 1989 and 1991 predate establishment of the DMAIC process improvement process, Design for Six Sigma (DFSS) is accepted in part because Six Sigma organisations found that they could not optimise products past three or four Sigma without fundamentally redesigning the product, and because improving a process or product after launch is considered less efficient and effective than designing in quality. ‘Six Sigma’ levels of performance have to be ‘built-in’.

DFSS for software is essentially a non-superficial modification of “classical DFSS”, since the character and nature of software differ from those of other fields of engineering. The methodology describes the detailed process for successfully applying DFSS methods and tools throughout the software product design, covering the overall software development life cycle: requirements, architecture, design, implementation, integration, optimization, verification and validation (RADIOV). The methodology explains how to build predictive statistical models for software reliability and robustness, and shows how simulation and analysis techniques can be combined with structural design and architecture methods to effectively produce software and information systems at Six Sigma levels.

DFSS in software acts as a glue to blend the classical modelling techniques of software engineering such as object-oriented design or Evolutionary Rapid Development with statistical, predictive models and simulation techniques. The methodology provides Software Engineers with practical tools for measuring and predicting the quality attributes of the software product and also enables them to include software in system reliability models.
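A minimal sketch of that last point, assuming independent components in series so that system reliability is the product of the component reliabilities (all figures hypothetical):

```python
from math import prod

# Series-system sketch: a predicted software reliability figure can be
# folded into a system reliability model alongside hardware components.
components = {
    "sensor_hw": 0.999,
    "controller_hw": 0.998,
    "software": 0.995,   # e.g. from a predictive statistical model
}
print(f"system reliability: {prod(components.values()):.4f}")
```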

Although many tools used in DFSS consulting, such as response surface methodology, transfer functions via linear and non-linear modeling, axiomatic design, and simulation, have their origin in inferential statistics, statistical modeling may overlap with data analytics and mining.

However, despite DFSS having been successfully used as an end-to-end technical project framework for analytics and mining projects, domain experts have observed this use to be somewhat similar to CRISP-DM.

DFSS is claimed to be better suited for encapsulating and effectively handling a higher number of uncertainties, including missing and uncertain data, both in terms of acuteness of definition and their absolute total numbers with respect to analytics and data-mining tasks. Six Sigma approaches to data mining are popularly known as “DFSS over CRISP” (CRISP-DM referring to the data-mining application framework methodology of SPSS).

With DFSS, data mining projects have been observed to have a considerably shortened development life cycle. This is typically achieved by conducting data analysis against pre-designed template match tests via a techno-functional approach, using multilevel quality function deployment on the data set.

Practitioners claim that progressively complex KDD templates are created by multiple DOE runs on simulated complex multivariate data; the templates, along with logs, are then extensively documented via a decision tree based algorithm.
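A hedged sketch of that idea, using simulated multivariate data in place of real DOE runs and a decision tree to document the resulting rules (all names and parameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Simulated multivariate data standing in for DOE runs.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# A shallow decision tree whose printed rules serve as documentation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"x{i}" for i in range(4)]))
```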

DFSS uses Quality Function Deployment and SIPOC for feature engineering of known independent variables, thereby aiding in the techno-functional computation of derived attributes.

Once the predictive model has been computed, DFSS studies can also be used to provide stronger probabilistic estimations of predictive model rank in a real-world scenario.
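A minimal sketch of one such probabilistic estimate: bootstrapping a confidence interval for a model’s holdout accuracy (the outcomes are hypothetical):

```python
import random

random.seed(0)
# Hypothetical holdout outcomes: 1 = correct prediction (83% accuracy).
correct = [1] * 83 + [0] * 17

# Bootstrap resampling gives an empirical distribution of the accuracy.
boot = sorted(
    sum(random.choices(correct, k=len(correct))) / len(correct)
    for _ in range(10_000)
)
lo, hi = boot[249], boot[9749]   # empirical 95% interval
print(f"accuracy 0.83, 95% bootstrap CI ~ ({lo:.2f}, {hi:.2f})")
```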

The DFSS framework has been successfully applied to predictive analytics in the HR analytics field. This application field has traditionally been considered very challenging due to the peculiar complexities of predicting human behavior.