Project Management Triangle

The Project Management Triangle (also called the Triple Constraint, Iron Triangle and “Project Triangle”) is a model of the constraints of project management. While its origins are unclear, it has been used since at least the 1950s. It contends that:

1. The quality of work is constrained by the project’s budget, deadlines and scope (features).
2. The project manager can trade between constraints.
3. Changes in one constraint necessitate changes in others to compensate, or quality will suffer.

For example, a project can be completed faster by increasing budget or cutting scope. Similarly, increasing scope may require equivalent increases in budget and schedule. Cutting budget without adjusting schedule or scope will lead to lower quality.

In practice, however, trading between constraints is not always possible. For example, throwing money (and people) at a fully staffed project can slow it down. Moreover, in poorly run projects it is often impossible to improve budget, schedule or scope without adversely affecting quality.

The Project Management Triangle is used to analyze projects. It is often misused to define success as delivering the required scope, at a reasonable quality, within the established budget and schedule. The Project Management Triangle is considered insufficient as a model of project success because it omits crucial dimensions of success including impact on stakeholders, learning and user satisfaction.

The time constraint refers to the amount of time available to complete a project. The cost constraint refers to the budgeted amount available for the project. The scope constraint refers to what must be done to produce the project’s end result. These three constraints are often competing constraints: increased scope typically means increased time and increased cost, a tight time constraint could mean increased costs and reduced scope, and a tight budget could mean increased time and reduced scope.

The discipline of project management is about providing the tools and techniques that enable the project team (not just the project manager) to organize their work to meet these constraints.

Another approach to project management is to consider the three constraints as finance, time and human resources. If you need to finish a job in a shorter time, you can add more people to the problem, which in turn will raise the cost of the project, unless doing the task more quickly reduces costs elsewhere in the project by an equal amount.

As a project management graphic aid, a triangle can show time, resources, and technical objective as the sides of a triangle, instead of the corners. John Storck, a former instructor of the American Management Association’s “Basic Project Management” course, used a pair of triangles called triangle outer and triangle inner to represent the concept that the intent of a project is to complete on or before the allowed time, on or under budget, and to meet or exceed the required scope. The distance between the inner and outer triangles illustrated the hedge or contingency for each of the three elements. Bias could be shown by the distance. His example of a project with a strong time bias was the Alaska pipeline which essentially had to be done on time no matter the cost. After years of development, oil flowed out the end of the pipe within four minutes of schedule. In this illustration, the time side of triangle inner was effectively on top of the triangle outer line. This was true of the technical objective line also. The cost line of triangle inner, however, was outside since the project ran significantly over budget.

James P. Lewis suggests that project scope represents the area of the triangle, and can be chosen as a variable to achieve project success. He calls this relationship PCTS (Performance, Cost, Time, Scope), and suggests that a project can pick any three.

The real value of the project triangle is to show the complexity that is present in any project. The plane area of the triangle represents the near infinite variations of priorities that could exist between the three competing values. By acknowledging the limitless variety possible within the triangle, this graphic aid can facilitate better project decisions and planning and ensure alignment among team members and project owners.

The STR model is a mathematical model which views the “triangle model” as a graphic abstraction of the relationship Scope = Time × Resources.

Scope refers to complexity (which can also mean quality). Resources include humans (workers), financial resources, and physical resources. Note that these values are not considered unbounded. For instance, if one baker can make a loaf of bread in an hour in an oven, that doesn’t mean ten bakers could make ten loaves in one hour in the same oven, because the oven’s capacity limits throughput.
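
A minimal Python sketch of this bounded STR relationship follows; the function, names and numbers are illustrative assumptions, not part of any published STR formulation.

```python
# A sketch of Scope = Time x Resources with a capacity bound, illustrating
# the oven example: resources beyond the bottleneck add no throughput.

def achievable_scope(time_hours: float, workers: int, bottleneck_capacity: int) -> float:
    """Scope grows with time and resources, but resources are capped by the
    bottleneck (e.g., an oven that holds one loaf at a time)."""
    effective_workers = min(workers, bottleneck_capacity)
    return time_hours * effective_workers

# One baker, one oven, one hour -> 1 loaf of scope.
print(achievable_scope(time_hours=1, workers=1, bottleneck_capacity=1))   # 1
# Ten bakers, the same single oven, one hour -> still 1 loaf, not 10.
print(achievable_scope(time_hours=1, workers=10, bottleneck_capacity=1))  # 1
```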

For analytical purposes, the time required to produce a deliverable is estimated using several techniques. One method is to identify tasks needed to produce the deliverables documented in a work breakdown structure or WBS. The work effort for each task is estimated and those estimates are rolled up into the final deliverable estimate.
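
The roll-up can be pictured as a simple tree walk over the WBS. The sketch below assumes a nested-dict representation with illustrative task names and hour estimates; it is not drawn from any specific tool.

```python
# Roll up work-effort estimates through a WBS: leaves carry task estimates
# (in hours), and each branch's estimate is the sum of its children.

def rollup(node) -> float:
    """Return the node's estimate: a leaf's own hours, or the sum of its children."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return float(node)

wbs = {
    "Deliverable": {
        "Design": {"Wireframes": 16, "Review": 4},
        "Build": {"Coding": 40, "Unit tests": 12},
        "Acceptance": 8,
    }
}

print(rollup(wbs))  # 80.0 -- the final deliverable estimate
```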

The tasks are also prioritized, dependencies between tasks are identified, and this information is documented in a project schedule. The dependencies between the tasks can affect the length of the overall project (dependency constrained), as can the availability of resources (resource constrained). Time is different from all other resources and cost categories.
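
The dependency-constrained project length is a longest-path computation over the schedule, as in the critical path method. The following sketch uses invented tasks, durations and dependencies to show the idea.

```python
# Earliest finish of each task = its duration + the latest finish among its
# predecessors; the project length is the maximum earliest finish.

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

finish: dict[str, float] = {}

def earliest_finish(task: str) -> float:
    if task not in finish:
        start = max((earliest_finish(p) for p in predecessors[task]), default=0.0)
        finish[task] = start + durations[task]
    return finish[task]

project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 8 -- along the dependency-constrained path A -> C -> D
```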

One method, known as analogous estimating, uses the actual cost of previous, similar projects as the basis for estimating the cost of the current project.

According to the Project Management Body of Knowledge (PMBOK), the Project Time Management processes include: plan schedule management, define activities, sequence activities, estimate activity resources, estimate activity durations, develop schedule, and control schedule.

Due to the complex nature of the ‘Time’ process group the project management credential PMI Scheduling Professional (PMI-SP) was created.

Developing an approximation of a project’s cost depends on several variables, including resources, work packages such as labor rates, and mitigating or controlling the influencing factors that create cost variances. Tools used in cost estimation include risk management, cost contingency, cost escalation, and indirect costs. Beyond this basic accounting approach to fixed and variable costs, the economic cost that must be considered includes worker skill and productivity, which is calculated using various project cost estimate tools. This is important when companies hire temporary or contract employees or outsource work.

Project management software can be used to calculate the cost variances for a project.
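
As an illustration of the arithmetic such software automates, the sketch below computes the standard earned value management (EVM) variances; the monetary figures are invented for the example.

```python
# Standard EVM quantities and variances (figures illustrative).

planned_value = 50_000.0  # PV: budgeted cost of work scheduled to date
earned_value = 45_000.0   # EV: budgeted cost of work actually completed
actual_cost = 55_000.0    # AC: actual cost of the work completed

cost_variance = earned_value - actual_cost            # CV = EV - AC
schedule_variance = earned_value - planned_value      # SV = EV - PV
cost_performance_index = earned_value / actual_cost   # CPI = EV / AC

print(f"CV  = {cost_variance:+,.0f}")          # CV  = -10,000 (over budget)
print(f"SV  = {schedule_variance:+,.0f}")      # SV  = -5,000 (behind schedule)
print(f"CPI = {cost_performance_index:.2f}")   # CPI = 0.82
```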

Scope consists of the requirements specified to achieve the end result: the overall definition of what the project is supposed to accomplish, and a specific description of what the end result should be or accomplish. A major component of scope is the quality of the final product. The amount of time put into individual tasks determines the overall quality of the project. Some tasks may require a given amount of time to complete adequately, but given more time could be completed exceptionally. Over the course of a large project, quality can have a significant impact on time and cost (or vice versa).

Together, these three constraints have given rise to the phrase “On Time, On Spec, On Budget.” In this case, the term “scope” is substituted with “spec(ification).”

Traditionally the Project Constraint Model recognised three key constraints: “Cost”, “Time” and “Scope”. These constraints construct a triangle with geometric proportions illustrating the strong interdependent relationship between these factors. If there is a requirement to shift any one of these factors, then at least one of the other factors must also be manipulated.

With mainstream acceptance of the Triangle Model, “Cost” and “Time” appear to be represented consistently. “Scope”, however, is used interchangeably with several terms, depending on the context of the triangle’s illustration or the perception of the respective project. Scope, Goal, Product, Deliverable and Quality are all relatively similar, generic examples of this variation, while the above suggestion of “People Resources” offers a more specialised interpretation.

This widespread use of variations implies a level of ambiguity carried by the nuance of the third constraint term, and of course a level of value in the flexibility of the Triangle Model. This ambiguity allows blurred focus between a project’s output and a project’s process, with the example terms above having potentially different impetus in the two contexts. Both “Cost” and “Time”/“Delivery” represent the project’s top-level inputs.

The ‘Project Diamond’ model engenders this blurred focus through the inclusion of “Scope” and “Quality” separately as the ‘third’ constraint. While there is merit in the addition of “Quality” as a key constraining factor, acknowledging the increasing maturity of project management, this model still lacks clarity between output and process. The Diamond Model, however, does not capture the analogy of the strong interrelation between the points of a triangle.

PMBOK 4.0 offered an evolved model based on the triple constraint, with six factors to be monitored and managed. This is illustrated as a six-pointed star that maintains the strength of the triangle analogy (two overlaid triangles), while at the same time representing the separation and relationship between project inputs/outputs factors on one triangle and the project process factors on the other. The star variables are: scope, quality, schedule, budget, resources, and risk.

When considering the ambiguity of the third constraint and the suggestions of the “Project Diamond”, it is possible to consider instead the Goal or Product of the project as the third constraint, being made up of the sub-factors “Scope” and “Quality”. In terms of a project’s output, both “Scope” and “Quality” can be adjusted, resulting in an overall manipulation of the Goal/Product. This interpretation includes the four key factors in the original triangle inputs/outputs form. This can even be incorporated into the PMBOK Star, illustrating that “Quality” in particular may be monitored separately in terms of project outputs and process. Further to this suggestion, the use of the term “Goal” may best represent change initiative outputs, while “Product” may best represent more tangible outputs.

Integrated Workplace Management System (IWMS)

An integrated workplace management system (IWMS) is a software platform that helps organizations optimize the use of workplace resources, including the management of a company’s real estate portfolio, infrastructure and facilities assets.

IWMS technology is an advanced platform designed to help leading organizations manage their real estate and facilities management (RE/FM) and asset portfolios more effectively. IWMS solutions are commonly packaged as a fully integrated suite or as individual modules that can be scaled over time.

IWMS integrates five core functional areas within an enterprise which, prior to the advent of IWMS, were organizationally and operationally independent and showed minimal interdisciplinary synergy: real estate management, capital project management, facilities management, maintenance management, and sustainability and energy management.

This area involves activities associated with the acquisition (including purchase and lease), financial management and disposition of real property assets. Common IWMS features that support real estate management include strategic planning, transaction management, request for proposal (RFP) analysis, lease analysis, portfolio management, tax management, lease management, and lease accounting.

This area involves activities associated with the design and development of new facilities and the remodeling or enhancement of existing facilities, including their reconfiguration and expansion. Common IWMS features that support capital project management include capital planning, design, funding, bidding, procurement, cost and resource management, project documentation and drawing management, scheduling, and critical path analysis.

This area covers activities related to the operation and optimized utilization of facilities. Common IWMS features that support facility management include strategic facilities planning (including scenario modeling and analysis), CAD and BIM integration, space management, site and employee service management, resource scheduling, and move management.

This area covers activities related to the corrective and preventive maintenance and operation of facilities and assets. Common IWMS features that support maintenance management include asset management, work requests, preventive maintenance, work order administration, warranty tracking, inventory management, vendor management and facility condition assessment.

This area covers activities related to the measurement and reduction of resource consumption (including energy and water) and waste production (including greenhouse gas emissions) within facilities. Common IWMS features that support sustainability and energy management include integration with building management systems (BMS), sustainability performance metrics, energy benchmarking, carbon emissions tracking, and energy efficiency project analysis.

IWMS components can be implemented in any order—or all together as a single, comprehensive implementation—according to the organization’s needs. As an implementation best practice, a phased approach for implementing IWMS components sequentially is advised—though a multi-function approach can still be followed. Each IWMS functional area requires the same steps for its implementation, though extra care, coordination and project management will be necessary to ensure smooth functioning for more complex implementations.

Adoption of the as-shipped business processes included in the IWMS software over an organization’s existing business processes constitutes a “core success prerequisite and best practice” in the selection and implementation of IWMS software. As a result, organizations should limit customization to only the most compelling cases.

Since 2004, the IWMS market has been reported on by independent analyst firms Gartner Inc., IWMSconnect and IWMSNews.

Gartner publishes an annual Market Guide on the IWMS market (formerly the IWMS Magic Quadrant, or MQ) that evaluates vendors on two criteria: ‘completeness of vision’ and ‘ability to execute’.

The original author, Michael Bell, first described IWMS software as “integrated enterprise solutions that span the life cycle of facilities asset management, from acquisition and operations to disposition.” In this first market definition, Gartner identified critical requirements of an IWMS, including a common database, advanced web services technologies and a system architecture that enabled user-defined workflow processes and customized portal interfaces.

Gartner released updated IWMS Market Guide reports, as follows:

The latest Gartner analysis was released August 21, 2018. The current Gartner analyst responsible for publication of the IWMS MQ is Carol Rozwell, Distinguished VP Analyst.

The future of IWMS

While the core functions of the IWMS remain critical, today’s leaders expect more. They need a cloud-based software platform that is built with the workplace experience at the center. It needs to have an exceptional user interface, allowing employees to access a variety of workplace services from a mobile app, kiosk or desktop. In the latest Verdantix research, 80% of executives considering IWMS software said the quality of the user interface was the most important factor influencing their decision.

The IWMS of the future should serve as a digital workplace concierge, allowing employees to find people, reserve rooms, request service and receive mail or visitors.

Project Management 2.0

Project Management 2.0 (sometimes mistakenly called Social Project Management) is one branch of the evolution of project management practices enabled by the emergence of Web 2.0 technologies, such as blogs, wikis and collaborative software. Because of Web 2.0 technologies, small distributed and virtual teams can work together much more efficiently by using new-generation, usually low-cost or free, Web-based project management tools. These tools challenge the traditional view of the project manager, as Project Management 2.0 represents a dramatic increase in distributed teams’ ability to collaborate.

While traditional project management structures focused on the paradigm of the project manager as controller, Project Management 2.0 stresses the concept of distributed collaboration and the project manager as a leader. Project Management 2.0 advocates open communication. While traditional project management was often driven by formal reporting and hierarchical structures, Project Management 2.0 stresses the need for access to information for the whole team. This has led to one of the many criticisms of Project Management 2.0: that it cannot scale to large projects. However, for distributed teams performing agile development, which are often emergent structures, the use of rich collaborative software may enable the development of collective intelligence.

Common comparisons of traditional project management vs. project management 2.0 are listed in the table below.

Project Production Management

Project production management (PPM) is the application of operations management to the delivery of capital projects. The PPM framework is based on a project as a production system view, in which a project transforms inputs (raw materials, information, labor, plant & machinery) into outputs (goods and services).

The knowledge that forms the basis of PPM originated in the discipline of industrial engineering during the Industrial Revolution. During this time, industrial engineering matured and then found application in many areas such as military planning and logistics for both the First and Second World Wars and manufacturing systems. As a coherent body of knowledge began to form, industrial engineering evolved into various scientific disciplines including operations research, operations management and queueing theory, amongst other areas of focus. Project Production Management (PPM) is the application of this body of knowledge to the delivery of capital projects.

Project management, as defined by the Project Management Institute, specifically excludes operations management from its body of knowledge, on the basis that projects are temporary endeavors with a beginning and an end, whereas operations refer to activities that are either ongoing or repetitive. However, by looking at a large capital project as a production system, such as what is encountered in construction, it is possible to apply the theory and associated technical frameworks from operations research, industrial engineering and queuing theory to optimize, plan, control and improve project performance.

For example, Project Production Management applies tools and techniques typically used in manufacturing management, such as those described by Philip M. Morse or in Factory Physics, to assess the impact of variability and inventory on project performance. Although any variability in a production system degrades its performance, by understanding which variability is detrimental to the business and which is beneficial, steps can be implemented to reduce detrimental variability. After mitigation steps are put in place, the impact of any residual variability can be addressed by allocating buffers at select points in the project production system – a combination of capacity, inventory and time.

Scientific and Engineering disciplines have contributed to many mathematical methods for the design and planning in project planning and scheduling, most notably linear and dynamic programming yielding techniques such as the critical path method (CPM) and the program evaluation and review technique (PERT). The application of engineering disciplines, particularly the areas of operations research, industrial engineering and queueing theory have found much application in the fields of manufacturing and factory production systems. Factory Physics is an example of where these scientific principles are described as forming a framework for manufacturing and production management.  Just as Factory Physics is the application of scientific principles to construct a framework for manufacturing and production management, Project Production Management is the application of the very same operations principles to the activities in a project, covering an area that has been conventionally out of scope for project management.
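
As a concrete instance of the PERT technique mentioned above, the three-point estimate weights an optimistic (o), most likely (m), and pessimistic (p) duration. The sketch below uses invented estimates for a single task.

```python
# PERT three-point estimate: expected duration = (o + 4m + p) / 6,
# with standard deviation (p - o) / 6.

def pert_estimate(o: float, m: float, p: float) -> tuple[float, float]:
    """Return (expected duration, standard deviation) for one task."""
    expected = (o + 4 * m + p) / 6
    std_dev = (p - o) / 6
    return expected, std_dev

expected, std_dev = pert_estimate(o=4, m=6, p=14)
print(expected, round(std_dev, 2))  # 7.0 1.67 -- in the same time units as the inputs
```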

Modern project management theory and techniques started with Frederick Taylor and Taylorism/scientific management at the beginning of the 20th century, with the advent of mass manufacturing. It was refined further in the 1950s with techniques such as critical path method (CPM) and program evaluation and review technique (PERT). Use of CPM and PERT became more common as the computer revolution progressed. As the field of project management continued to grow, the role of the project manager was created and certifying organizations such as the Project Management Institute (PMI) emerged. Modern project management has evolved into a broad variety of knowledge areas described in the Guide to the Project Management Body of Knowledge (PMBOK).

Operations management (related to the fields of production management, operations research and industrial engineering) is a field of science that emerged from the modern manufacturing industry and focuses on modeling and controlling actual work processes. The practice is based upon defining and controlling production systems, which typically consist of a series of inputs, transformational activities, inventory and outputs. Over the last 50 years, project management and operations management have been considered separate fields of study and practice.

PPM applies the theory and results of the various disciplines known as operations management, operations research, queueing theory and industrial engineering to the management and execution of projects. By viewing a project as a production system, the delivery of capital projects can be analyzed for the impact of variability. The effects of variability can be summarized by VUT equation (specifically Kingman’s formula for G/G/1 queue). By using a combination of buffers – capacity, inventory and time – the impact of variability to project execution performance can be minimized.

A set of key results used to analyze and optimize the work in projects was originally articulated by Philip Morse, considered the father of operations research in the U.S., and summarized in his seminal volume. In introducing its framework for manufacturing management, Factory Physics summarizes these results:

There are key mathematical models that describe the relationships between buffers and variability. Little’s law – named after academic John Little – describes the relationship between throughput, cycle time and work-in-process (WIP) or inventory. The Cycle Time Formula summarizes how much time a set of tasks at a particular point in a project takes to execute. Kingman’s formula, also known as the VUT equation, summarizes the impact of variability on waiting time.
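
The sketch below states both results in code: Little’s law relates WIP, throughput and cycle time, and Kingman’s G/G/1 approximation decomposes waiting time into Variability, Utilization and process Time terms. The inputs are illustrative.

```python
# Little's law and Kingman's (VUT) approximation for a G/G/1 queue.

def littles_law_wip(throughput: float, cycle_time: float) -> float:
    """WIP (L) = throughput (lambda) x cycle time (W)."""
    return throughput * cycle_time

def kingman_wait(utilization: float, ca2: float, cs2: float, te: float) -> float:
    """Expected queue wait ~= [(ca^2 + cs^2) / 2] x [u / (1 - u)] x te,
    where ca2/cs2 are squared coefficients of variation of arrivals/service,
    u is utilization, and te is the mean effective process time."""
    v = (ca2 + cs2) / 2
    u = utilization / (1 - utilization)
    return v * u * te

print(littles_law_wip(throughput=4, cycle_time=2.5))          # 10 tasks in process
print(kingman_wait(utilization=0.9, ca2=1.0, cs2=1.0, te=1))  # 9.0: waits explode near full utilization
print(kingman_wait(utilization=0.5, ca2=1.0, cs2=1.0, te=1))  # 1.0: a capacity buffer cuts waiting
```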

The following academic journals publish papers pertaining to Operations Management issues:

Agile Software Development

Agile software development comprises various approaches to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s). It advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages rapid and flexible response to change.

The term agile (sometimes written Agile) was popularized, in this context, by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from and underpin a broad range of software development frameworks, including Scrum and Kanban.

While there is much anecdotal evidence that adopting agile practices and values improves the agility of software professionals, teams and organizations, some empirical studies have disputed that evidence.

Iterative and incremental development methods can be traced back as early as 1957, with evolutionary project management and adaptive software development emerging in the early 1970s.

During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods that critics described as overly regulated, planned, and micro-managed. These included: rapid application development (RAD), from 1991; the unified process (UP) and dynamic systems development method (DSDM), both from 1994; Scrum, from 1995; Crystal Clear and extreme programming (XP), both from 1996; and feature-driven development, from 1997. Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods. At the same time, similar changes were underway in manufacturing and aerospace.

In 2001, seventeen software developers met at a resort in Snowbird, Utah to discuss these lightweight development methods: Kent Beck, Ward Cunningham, Dave Thomas, Jeff Sutherland, Ken Schwaber, Jim Highsmith, Alistair Cockburn, Robert C. Martin, Mike Beedle, Arie van Bennekum, Martin Fowler, James Grenning, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, and Steve Mellor. Together they published the Manifesto for Agile Software Development.

In 2005, a group headed by Cockburn and Highsmith wrote an addendum of project management principles, the PM Declaration of Interdependence, to guide software project management according to agile software development methods.

In 2009, a group working with Martin wrote an extension of software development principles, the Software Craftsmanship Manifesto, to guide agile software development according to professional conduct and mastery.

In 2011, the Agile Alliance created the Guide to Agile Practices (renamed the Agile Glossary in 2016), an evolving open-source compendium of the working definitions of agile practices, terms, and elements, along with interpretations and experience guidelines from the worldwide community of agile practitioners.

Based on their combined experience of developing software and helping others do that, the seventeen signatories to the manifesto proclaimed that they value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is to say, the items on the left are valued more than the items on the right.

As Scott Ambler elucidated:

Some of the authors formed the Agile Alliance, a non-profit organization that promotes software development according to the manifesto’s values and principles. Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith said,

The Agile movement is not anti-methodology, in fact many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of never-maintained and rarely-used tomes. We plan, but recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as “hackers” are ignorant of both the methodologies and the original definition of the term hacker.

The Manifesto for Agile Software Development is based on twelve principles:

1. Customer satisfaction through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development.
3. Deliver working software frequently (weeks rather than months).
4. Close, daily cooperation between business people and developers.
5. Build projects around motivated individuals, who should be trusted.
6. Face-to-face conversation is the best form of communication.
7. Working software is the primary measure of progress.
8. Sustainable development, able to maintain a constant pace.
9. Continuous attention to technical excellence and good design.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. Regularly, the team reflects on how to become more effective, and adjusts accordingly.

Most agile development methods break product development work into small increments that minimize the amount of up-front planning and design. Iterations, or sprints, are short time frames (timeboxes) that typically last from one to four weeks. Each iteration involves a cross-functional team working in all functions: planning, analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration a working product is demonstrated to stakeholders. This minimizes overall risk and allows the product to adapt to changes quickly. An iteration might not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration. Multiple iterations might be required to release a product or new features. Working software is the primary measure of progress.

The principle of co-location is that co-workers on the same team should be situated together to better establish the identity as a team and to improve communication. This enables face-to-face interaction, ideally in front of a whiteboard, that reduces the cycle time typically taken when questions and answers are mediated through phone, persistent chat, wiki, or email.

No matter which development method is followed, every team should include a customer representative (“Product Owner” in Scrum). This person is agreed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer questions throughout the iteration. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (ROI) and ensuring alignment with customer needs and company goals.

In agile software development, an information radiator is a (normally large) physical display located prominently near the development team, where passers-by can see it. It presents an up-to-date summary of the product development status. A build light indicator may also be used to inform a team about the current status of their product development.

A common characteristic in agile software development is the daily stand-up (also known as the daily scrum). In a brief session, team members report to each other what they did the previous day toward their team’s iteration goal, what they intend to do today toward the goal, and any roadblocks or impediments they can see to the goal.

Specific tools and techniques, such as continuous integration, automated unit testing, pair programming, test-driven development, design patterns, behavior-driven development, domain-driven design, code refactoring and other techniques are often used to improve quality and enhance product development agility. This is predicated on designing and building quality in from the beginning and being able to demonstrate software for customers at any point, or at least at the end of every iteration.
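
A minimal sketch of the automated unit testing underlying practices like TDD and continuous integration follows; the function and its tests are invented for illustration, using only the standard library.

```python
# Tests written alongside (or before) the code act as a fast, repeatable
# safety net that a CI pipeline can run on every commit.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rejecting out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```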

Compared to traditional software engineering, agile software development mainly targets complex systems and product development with dynamic, non-deterministic and non-linear characteristics. Accurate estimates, stable plans, and predictions are often hard to get in early stages, and confidence in them is likely to be low. Agile practitioners will seek to reduce the leap-of-faith that is needed before any evidence of value can be obtained. Requirements and design are held to be emergent. Big up-front specifications would probably cause a lot of waste in such cases, i.e., are not economically sound. These basic arguments and previous industry experiences, learned from years of successes and failures, have helped shape agile development’s favor of adaptive, iterative and evolutionary development.

Development methods exist on a continuum from adaptive to predictive. Agile software development methods lie on the adaptive side of this continuum. One key of adaptive development methods is a rolling wave approach to schedule planning, which identifies milestones but leaves flexibility in the path to reach them, and also allows for the milestones themselves to change.

Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive team changes as well. An adaptive team has difficulty describing exactly what will happen in the future. The further away a date is, the more vague an adaptive method is about what will happen on that date. An adaptive team cannot report exactly what tasks they will do next week, but only which features they plan for next month. When asked about a release six months from now, an adaptive team might be able to report only the mission statement for the release, or a statement of expected value vs. cost.

Predictive methods, in contrast, focus on analysing and planning the future in detail and cater for known risks. In the extremes, a predictive team can report exactly what features and tasks are planned for the entire length of the development process. Predictive methods rely on effective early phase analysis and if this goes very wrong, the project may have difficulty changing direction. Predictive teams often institute a change control board to ensure they consider only the most valuable changes.

Risk analysis can be used to choose between adaptive (agile or value-driven) and predictive (plan-driven) methods. Barry Boehm and Richard Turner suggest that each side of the continuum has its own home ground. The home ground of agile methods: low criticality, senior developers, requirements that change often, a small number of developers, and a culture that thrives on chaos. The home ground of plan-driven methods: high criticality, junior developers, requirements that do not change often, a large number of developers, and a culture that demands order.

One of the differences between agile software development methods and waterfall is the approach to quality and testing. In the waterfall model, there is always a separate testing phase after a build phase; however, in agile software development testing is completed in the same iteration as programming.

Another difference is that traditional “waterfall” software development moves a project through various Software Development Lifecycle (SDLC) phases. One phase is completed in its entirety before moving on to the next phase.

Because testing is done in every iteration—which develops a small piece of the software—users can frequently use those new pieces of software and validate the value. After the users know the real value of the updated piece of software, they can make better decisions about the software’s future. Having a value retrospective and software re-planning session in each iteration—Scrum typically has iterations of just two weeks—helps the team continuously adapt its plans so as to maximize the value it delivers. This follows a pattern similar to the PDCA cycle, as the work is planned, done, checked (in the review and retrospective), and any changes agreed are acted upon.

This iterative approach supports a product rather than a project mindset, providing greater flexibility throughout the development process; on projects, by contrast, the requirements are defined and locked down from the very beginning, making it difficult to change them later. Iterative product development allows the software to evolve in response to changes in business environment or market requirements.

Because of the short iteration style of agile software development, it also has strong connections with the lean startup concept.

In a letter to IEEE Computer, Steven Rakitin expressed cynicism about agile software development, calling it “yet another attempt to undermine the discipline of software engineering” and translating “working software over comprehensive documentation” as “we want to spend all our time coding. Remember, real programmers don’t write documentation.”

This is disputed by proponents of agile software development, who state that developers should write documentation if that is the best way to achieve the relevant goals, but that there are often better ways to achieve those goals than writing static documentation.
Scott Ambler states that documentation should be “just barely good enough” (JBGE), that too much or comprehensive documentation would usually cause waste, and developers rarely trust detailed documentation because it’s usually out of sync with code, while too little documentation may also cause problems for maintenance, communication, learning and knowledge sharing. Alistair Cockburn wrote of the Crystal Clear method:

Crystal considers development a series of co-operative games, and intends that the documentation is enough to help the next win at the next game. The work products for Crystal include use cases, risk list, iteration plan, core domain models, and design notes to inform on choices…however there are no templates for these documents and descriptions are necessarily vague, but the objective is clear, just enough documentation for the next game. I always tend to characterize this to my team as: what would you want to know if you joined the team tomorrow.

Agile software development methods support a broad range of the software development life cycle. Some focus on the practices (e.g., XP, pragmatic programming, agile modeling), while some focus on managing the flow of work (e.g., Scrum, Kanban). Some support activities for requirements specification and development (e.g., FDD), while some seek to cover the full development life cycle (e.g., DSDM, RUP).

Notable agile software development frameworks include Scrum, Kanban, extreme programming (XP), dynamic systems development method (DSDM), feature-driven development (FDD), adaptive software development (ASD), Crystal, and lean software development.

Agile software development is supported by a number of concrete practices, covering areas like requirements, design, modeling, coding, testing, planning, risk management, process, and quality. Some notable agile software development practices include acceptance test-driven development, behavior-driven development, continuous integration, pair programming, planning poker, refactoring, retrospectives, test-driven development, timeboxing, and user stories.

In the literature, different terms refer to the notion of method adaptation, including ‘method tailoring’, ‘method fragment adaptation’ and ‘situational method engineering’. Method tailoring is defined as:

A process or capability in which human agents determine a system development approach for a specific project situation through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments.

Situation-appropriateness should be considered as a distinguishing characteristic between agile methods and more plan-driven software development methods, with agile methods allowing product development teams to adapt working practices according to the needs of individual products. Potentially, most agile methods could be suitable for method tailoring, such as DSDM tailored in a CMM context, and XP tailored with the Rule Description Practices (RDP) technique. Not all agile proponents agree, however, with Schwaber noting “that is how we got into trouble in the first place, thinking that the problem was not having a perfect methodology. Efforts [should] center on the changes [needed] in the enterprise”. Bas Vodde reinforced this viewpoint, suggesting that unlike traditional, large methodologies that require you to pick and choose elements, Scrum provides the basics on top of which you add additional elements to localise and contextualise its use. Practitioners seldom use system development methods, or agile methods specifically, by the book, often choosing to omit or tailor some of the practices of a method in order to create an in-house method.

In practice, methods can be tailored using various tools. Generic process modeling languages such as Unified Modeling Language can be used to tailor software development methods. However, dedicated tools for method engineering such as the Essence Theory of Software Engineering of SEMAT also exist.

Agile software development has been widely seen as highly suited to certain types of environments, including small teams of experts working on greenfield projects, and the challenges and limitations encountered in the adoption of agile software development methods in a large organization with legacy infrastructure are well-documented and understood.

In response, a range of strategies and patterns has evolved for overcoming challenges with large-scale development efforts (>20 developers) or distributed (non-colocated) development teams, amongst other challenges; and there are now several recognised frameworks that seek to mitigate or avoid these challenges.

There are many conflicting viewpoints on whether all of these are effective or indeed fit the definition of agile development, and this remains an active and ongoing area of research.

When agile software development is applied in a distributed setting (with teams dispersed across multiple business locations), it is commonly referred to as distributed agile development. The goal is to leverage the unique benefits offered by each approach. Distributed development allows organizations to build software by strategically setting up teams in different parts of the globe, virtually building software round-the-clock (more commonly referred to as follow-the-sun model). On the other hand, agile development provides increased transparency, continuous feedback and more flexibility when responding to changes.

Agile software development methods were initially seen as best suited to non-critical product developments, and were thereby excluded from use in regulated domains such as medical devices, pharmaceuticals, finance, nuclear systems, automotive, and avionics. However, in the last several years, there have been several initiatives to adapt agile methods for these domains.

There are numerous standards that may apply in regulated domains, including ISO 26262, ISO 9000, ISO 9001, and ISO/IEC 15504.
A number of key concerns are of particular importance in regulated domains:

Although agile software development methods can be used with any programming paradigm or language in practice, they were originally closely associated with object-oriented environments such as Smalltalk and Lisp and later Java. The initial adopters of agile methods were usually small to medium-sized teams working on unprecedented systems with requirements that were difficult to finalize and likely to change as the system was being developed. This section describes common problems that organizations encounter when they try to adopt agile software development methods as well as various techniques to measure the quality and performance of agile teams.

The best agile practitioners have always emphasized thorough engineering principles. As a result, there are a number of best practices and tools for measuring the performance of agile software development and teams.

The Agility measurement index, amongst others, rates developments against five dimensions of product development (duration, risk, novelty, effort, and interaction). Other techniques are based on measurable goals and one study suggests that velocity can be used as a metric of agility. There are also agile self-assessments to determine whether a team is using agile software development practices (Nokia test, Karlskrona test, 42 points test).
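
To make the velocity metric concrete, the sketch below averages completed story points per iteration and uses the average to forecast the iterations remaining; all numbers are illustrative.

```python
# Velocity: story points completed per iteration, averaged and used to
# forecast how many iterations remain for a given backlog.

import math

completed_points = [21, 18, 24, 19]   # points finished in recent iterations
velocity = sum(completed_points) / len(completed_points)  # 20.5

backlog_points = 123                  # estimated work remaining
iterations_left = math.ceil(backlog_points / velocity)

print(f"velocity ~= {velocity:.1f} points/iteration")
print(f"forecast: about {iterations_left} iterations to clear the backlog")  # 6
```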

One of the early studies reporting gains in quality, productivity, and business satisfaction by using agile software development methods was a survey conducted by Shine Technologies from November 2002 to January 2003.

A similar survey, the State of Agile, is conducted every year starting in 2006 with thousands of participants from around the software development community. This tracks trends on the benefits of agility, lessons learned, and good practices. Each survey has reported increasing numbers saying that agile software development helps them deliver software faster; improves their ability to manage changing customer priorities; and increases their productivity. Surveys have also consistently shown better results with agile product development methods compared to classical project management. On balance, there are reports that some feel that agile development methods are still too young to enable extensive academic research of their success.

Organizations and teams implementing agile software development often face difficulties transitioning from more traditional methods such as waterfall development, such as teams having an agile process forced on them. These are often termed agile anti-patterns or more commonly agile smells. Below are some common examples:

A goal of agile software development is to focus more on producing working software and less on documentation. This is in contrast to waterfall models where the process is often highly controlled and minor changes to the system require significant revision of supporting documentation. However, this does not justify completely doing without any analysis or design at all. Failure to pay attention to design can cause a team to proceed rapidly at first but then to have significant rework required as they attempt to scale up the system. One of the key features of agile software development is that it is iterative. When done correctly, design emerges as the system is developed, and commonalities and opportunities for re-use are discovered.

In agile software development, stories (similar to use case descriptions) are typically used to define requirements and an iteration is a short period of time during which the team commits to specific goals. Adding stories to an iteration in progress is detrimental to a good flow of work. These should be added to the product backlog and prioritized for a subsequent iteration or in rare cases the iteration could be cancelled.

This does not mean that a story cannot expand. Teams must deal with new information, which may produce additional tasks for a story. If the new information prevents the story from being completed during the iteration, then it should be carried over to a subsequent iteration. However, it should be prioritized against all remaining stories, as the new information may have changed the story’s original priority.

Agile software development is often implemented as a grassroots effort in organizations by software development teams trying to optimize their development processes and ensure consistency in the software development life cycle. By not having sponsor support, teams may face difficulties and resistance from business partners, other development teams and management. Additionally, they may suffer without appropriate funding and resources. This increases the likelihood of failure.

A survey performed by VersionOne found respondents cited insufficient training as the most significant cause for failed agile implementations. Teams have fallen into the trap of assuming that the reduced processes of agile software development, compared to other methodologies such as waterfall, mean that there are no actual rules for agile software development.

The product owner is responsible for representing the business in the development activity and is often the most demanding role.

A common mistake is to have the product owner role filled by someone from the development team. This requires the team to make its own decisions on prioritization without real feedback from the business. They try to solve business issues internally or delay work as they reach outside the team for direction. This often leads to distraction and a breakdown in collaboration.

Agile software development requires teams to meet product commitments, which means they should focus only on work for that product. However, team members who appear to have spare capacity are often expected to take on other work, which makes it difficult for them to help complete the work to which their team had committed.

Teams may fall into the trap of spending too much time preparing or planning. This is a common trap for teams less familiar with agile software development where the teams feel obliged to have a complete understanding and specification of all stories. Teams should be prepared to move forward only with those stories in which they have confidence, then during the iteration continue to discover and prepare work for subsequent iterations (often referred to as backlog refinement or grooming).

A daily standup should be a focused, timely meeting where all team members disseminate information. If problem-solving occurs, it often can only involve certain team members and potentially is not the best use of the entire team’s time. If during the daily standup the team starts diving into problem-solving, it should be set aside until a sub-team can discuss, usually immediately after the standup completes.

One of the intended benefits of agile software development is to empower the team to make choices, as they are closest to the problem. Additionally, they should make choices as close to implementation as possible, to use more timely information in the decision. If team members are assigned tasks by others or too early in the process, the benefits of localized and timely decision making can be lost.

Being assigned work also constrains team members into certain roles (for example, team member A must always do the database work), which limits opportunities for cross-training. Team members themselves can choose to take on tasks that stretch their abilities and provide cross-training opportunities.

Another common pitfall is for a scrum master to act as a contributor. While not prohibited by the Scrum methodology, the scrum master needs to ensure they have the capacity to act in the role of scrum master first, rather than working on development tasks. A scrum master’s role is to facilitate the process rather than create the product.

Having the scrum master also multitasking may result in too many context switches to be productive. Additionally, as a scrum master is responsible for ensuring roadblocks are removed so that the team can make forward progress, the benefit gained by individual tasks moving forward may not outweigh roadblocks that are deferred due to lack of capacity.

Due to the iterative nature of agile development, multiple rounds of testing are often needed. Automated testing helps reduce the impact of repeated unit, integration, and regression tests and frees developers and testers to focus on higher value work.

Test automation also supports continued refactoring required by iterative software development. Allowing a developer to quickly run tests to confirm refactoring has not modified the functionality of the application may reduce the workload and increase confidence that cleanup efforts have not introduced new defects.

Focusing on delivering new functionality may result in increased technical debt. The team must allow themselves time for defect remediation and refactoring. Technical debt hinders planning abilities by increasing the amount of unscheduled work as production defects distract the team from further progress.

As the system evolves it is important to refactor as entropy of the system naturally increases. Over time the lack of constant maintenance causes increasing defects and development costs.

A common misconception is that agile software development allows continuous change; however, an iteration backlog is an agreement of what work can be completed during an iteration. Having too much work-in-progress (WIP) results in inefficiencies such as context-switching and queueing. The team must avoid feeling pressured into taking on additional work.

Agile software development fixes time (iteration duration), quality, and ideally resources in advance (though maintaining fixed resources may be difficult if developers are often pulled away from tasks to handle production incidents), while the scope remains variable. The customer or product owner often pushes for a fixed scope for an iteration. However, teams should be reluctant to commit to a locked combination of time, resources and scope (commonly known as the project management triangle). Efforts to add scope to the fixed time and resources of agile software development may result in decreased quality.

Due to the focused pace and continuous nature of agile practices, there is a heightened risk of burnout among members of the delivery team.

The term agile management is applied to an iterative, incremental method of managing the design and build activities of engineering, information technology and other business areas that aim to provide new product or service development in a highly flexible and interactive manner, based on the principles expressed in the Manifesto for Agile Software Development.

Agile X techniques may also be called extreme project management. It is a variant of iterative life cycle where deliverables are submitted in stages. The main difference between agile and iterative development is that agile methods complete small portions of the deliverables in each delivery cycle (iteration), while iterative methods evolve the entire set of deliverables over time, completing them near the end of the project. Both iterative and agile methods were developed as a reaction to various obstacles that developed in more sequential forms of project organization. For example, as technology projects grow in complexity, end users tend to have difficulty defining the long-term requirements without being able to view progressive prototypes. Projects that develop in iterations can constantly gather feedback to help refine those requirements.

Agile management also offers a simple framework promoting communication and reflection on past work amongst team members. Teams who were using traditional waterfall planning and adopted the agile way of development typically go through a transformation phase and often take help from agile coaches who help guide the teams through a smooth transformation. There are typically two styles of agile coaching: push-based and pull-based agile coaching. Agile management approaches have also been employed and adapted to the business and government sectors. For example, within the federal government of the United States, the United States Agency for International Development (USAID) is employing a collaborative project management approach that focuses on incorporating collaborating, learning and adapting (CLA) strategies to iterate and adapt programming.

Agile methods are mentioned in the Guide to the Project Management Body of Knowledge (PMBOK Guide) under the Project Lifecycle definition:

Adaptive project life cycle, a project life cycle, also known as change-driven or agile methods, that is intended to facilitate change and require a high degree of ongoing stakeholder involvement. Adaptive life cycles are also iterative and incremental, but differ in that iterations are very rapid (usually 2-4 weeks in length) and are fixed in time and resources.

According to Jean-Loup Richet (Research Fellow at ESSEC Institute for Strategic Innovation & Services) “this approach can be leveraged effectively for non-software products and for project management in general, especially in areas of innovation and uncertainty.” The end result is a product or project that best meets current customer needs and is delivered with minimal costs, waste, and time, enabling companies to achieve bottom line gains earlier than via traditional approaches.

Agile software development methods have been extensively used for development of software products and some of them use certain characteristics of software, such as object technologies. However, these techniques can be applied to the development of non-software products, such as computers, motor vehicles, medical devices, food, clothing, and music. Agile software development methods have been used in non-development IT infrastructure deployments and migrations. Some of the wider principles of agile software development have also found application in general management (e.g., strategy, governance, risk, finance) under the terms business agility or agile business management.

Under an agile business management model, agile software development techniques, practices, principles and values are expressed across five domains.

Agile software development paradigms can be used in other areas of life such as raising children. Its success in child development might be founded on some basic management principles; communication, adaptation, and awareness. In a TED Talk, Bruce Feiler shared how he applied basic agile paradigms to household management and raising children.

Agile practices can be inefficient in large organizations and certain types of developments. Many organizations believe that agile software development methodologies are too extreme and adopt a hybrid approach that mixes elements of agile software development and plan-driven approaches. Some methods, such as dynamic systems development method (DSDM), attempt this in a disciplined way, without sacrificing fundamental principles.

The increasing adoption of agile practices has also been criticized as being a management fad that simply describes existing good practices under new jargon, promotes a one size fits all mindset towards development strategies, and wrongly emphasizes method over results.

Alistair Cockburn organized a celebration of the 10th anniversary of the Manifesto for Agile Software Development in Snowbird, Utah on 12 February 2011, gathering some 30+ people who had been involved at the original meeting and since. A list of about 20 elephants in the room (‘undiscussable’ agile topics/issues) was collected, covering aspects such as the alliances, failures and limitations of agile software development practices and context (possible causes: commercial interests, decontextualization, no obvious way to make progress based on failure, limited objective evidence, cognitive biases and reasoning fallacies), politics and culture. As Philippe Kruchten wrote:

The agile movement is in some ways a bit like a teenager: very self-conscious, checking constantly its appearance in a mirror, accepting few criticisms, only interested in being with its peers, rejecting en bloc all wisdom from the past, just because it is from the past, adopting fads and new jargon, at times cocky and arrogant. But I have no doubts that it will mature further, become more open to the outside world, more reflective, and therefore, more effective.

Business Process Interoperability (BPI)

Business process interoperability (BPI) is a property referring to the ability of diverse business processes to work together, that is, to “inter-operate”. It is a state that exists when a business process can meet a specific objective automatically, utilizing essential human labor only. Typically, BPI is present when a process conforms to standards that enable it to achieve its objective regardless of ownership, location, make, version, or design of the computer systems used.

The main attraction of BPI is that a business process can start and finish at any point worldwide regardless of the types of hardware and software required to automate it. Because of its capacity to offload human “mind” labor, BPI is considered by many to be the final stage in the evolution of business computing. BPI’s twin criteria of specific objective and essential human labor are both subjective.

The objectives of BPI vary, but tend to fall into the following categories:

Business process interoperability is limited to enterprise software systems in which functions are designed to work together, such as a payroll module and a general ledger module that are part of the same program suite, and to controlled software environments that use EDI. Interoperability is also present between incompatible systems where middleware has been applied. In each of these cases, however, the processes seldom meet the test of BPI because they are constrained by information silos and the systems’ inability to communicate freely with one another.

The term “Business process interoperability” (BPI) was coined in the late 1990s, mostly in connection with the value chain in electronic commerce. BPI has been utilized in promotional materials by various companies, and appears as a subject of research at organizations concerned with computer science ontologies.

Despite the attention it has received, business process interoperability has not been applied outside of limited information system environments. A possible reason is that BPI requires universal conformance to standards so that a business process can start and finish at any point worldwide. The standards themselves are fairly straightforward—organizations use a finite set of shared processes to manage most of their operations. Bringing enterprises together to create and adopt the standards is another matter entirely. The world of management systems is, after all, characterized by information silos. Moving away from silos requires organizations to deal with cultural issues such as ownership and sharing of processes and data, competitive forces and security, not to mention the effect of automation on their work forces.

While the timetable for adoption of BPI cannot be predicted, it remains a subject of interest in organizations and think tanks alike.

To test for BPI, an organization analyzes a business process to determine if it can meet its specific objective utilizing essential human labor only.

The specific objective must be clearly defined from start to finish. Start and finish are highly subjective, however. In one organization, a process may start when a customer orders a product and finish when the product is delivered to the customer. In another organization, the same process may be preceded by product manufacture and distribution, and may be followed by management of after-sale warranty and repairs.

Essential human labor includes:

To qualify for BPI, every process task must be taken into account from start to finish, including the labor that falls between the cracks created by incompatible software applications, such as gathering data from one system and re-inputting it in another, and preparing reports that include data from disparate systems. The process must flow uninterrupted regardless of the underlying computerized systems used. If non-essential human labor exists at any point, the process fails the test of BPI.

To assure that business processes can meet their specific objectives automatically, utilizing essential human labor only, BPI takes a “service-oriented architecture” (SOA) approach, which focuses on the processes rather than on the technologies required to automate them. A widely used SOA is an effective way to address the problems caused by the disparate systems at the heart of information silos.

SOA makes practical sense because organizations cannot be expected to replace or modify their current enterprise software to achieve BPI, regardless of the benefits involved. Many workers’ jobs are built around the applications they use, and most organizations have sizable investments in their current information infrastructures which are so complex that even the smallest modification can be very costly, time-consuming and disruptive. Even if software makers were to unite and conform their products to a single set of standards, the problem would not be solved. Besides software from well-known manufacturers, organizations use a great many legacy software systems, custom applications, manual procedures and paper forms. Without SOA, streamlining such a huge number of disparate internal processes so that they interoperate across the entire global enterprise spectrum is simply out of the question.

To create an SOA for widespread use, BPI relies on a centralized database repository containing shared data and procedures common to applications in every industry and geographical area. In essence, the repository serves as a top application layer, enabling organizations to export their data to its distributed database and obtain the programs they need by simply logging on via a portal. To assure security and commercial neutrality, the repository conforms to standards promulgated by the community of BPI stakeholders.

Organizations and interest groups that wish to achieve business process interoperability begin by establishing one or more BPI initiatives.

Project Portfolio Management (PPM)

Project Portfolio Management (PPM) is the centralized management of the processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization’s operational and financial goals, while honouring constraints imposed by customers, strategic objectives, or external real-world factors. An international standard defines the framework of project portfolio management.

PPM provides program and project managers in large, program/project-driven organizations with the capabilities needed to manage the time, resources, skills, and budgets necessary to accomplish all interrelated tasks. It provides a framework for issue resolution and risk mitigation, as well as the centralized visibility to help planning and scheduling teams identify the fastest, cheapest, or most suitable approach to deliver projects and programs. Portfolio managers define key performance indicators (KPIs) and the strategy for their portfolio.

Pipeline management involves steps to ensure that an adequate number of project proposals are generated and evaluated to determine whether (and how) a set of projects in the portfolio can be executed with finite development resources in a specified time. There are three major sub-components to pipeline management: ideation, work intake processes, and Phase-Gate reviews. Fundamental to pipeline management is the ability to align the decision-making process for estimating and selecting new capital investment projects with the strategic plan.

Resource management focuses on the efficient and effective deployment of an organization’s resources where and when they are needed. These resources can include financial resources, inventory, human resources, technical skills, production, and design. In addition to project-level resource allocation, users can also model ‘what-if’ resource scenarios and extend this view across the portfolio.

Change control involves the capture and prioritization of change requests, which can include new requirements, features, functions, operational constraints, regulatory demands, and technical enhancements. PPM provides a central repository for these change requests and the ability to match available resources to evolving demand within the financial and operational constraints of individual projects.

With PPM, the Office of Finance can improve its accuracy in estimating and managing the financial resources of a project or group of projects. In addition, the value of projects can be demonstrated in relation to the strategic objectives and priorities of the organization through financial controls, and progress can be assessed through earned value and other project financial techniques.
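
As a concrete illustration of the earned value techniques mentioned above, the following minimal Python sketch computes the standard earned-value indices; the project figures are hypothetical.

    def earned_value_indices(bac, percent_complete, planned_value, actual_cost):
        """Standard earned value metrics: EV, cost and schedule performance indices."""
        ev = bac * percent_complete   # earned value: budget for work actually done
        cpi = ev / actual_cost        # cost performance index (>1 means under budget)
        spi = ev / planned_value      # schedule performance index (>1 means ahead)
        return ev, cpi, spi

    # Example: a $200k project, 40% complete, $90k planned to date, $100k spent.
    print(earned_value_indices(200_000, 0.40, 90_000, 100_000))  # (80000.0, 0.8, 0.888...)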

Risk management involves an analysis of the risk sensitivities residing within each project as the basis for determining confidence levels across the portfolio. The integration of cost and schedule risk management with techniques for determining contingency and risk response plans enables organizations to gain an objective view of project uncertainties.

In the early 2000s, many PPM vendors realized that project portfolio reporting services only addressed part of a wider need for PPM in the marketplace. Another more senior audience had emerged, sitting at management and executive levels above detailed work execution and schedule management, who required a greater focus on process improvement and ensuring the viability of the portfolio in line with overall strategic objectives. In addition, as the size, scope, complexity, and geographical spread of organizations’ project portfolios continued to grow, greater visibility was needed of project work across the enterprise, allied to improved resource utilization and capacity planning.

Enterprise Project Portfolio Management (EPPM) is a top-down approach to managing all project-intensive work and resources across the enterprise. This contrasts with the traditional approach of combining manual processes, desktop project tools, and PPM applications for each project portfolio environment.

The PPM landscape is evolving rapidly as a result of the growing preference for managing multiple capital investment initiatives from a single, enterprise-wide system. This more centralized approach, and resulting ‘single version of the truth’ for project and project portfolio information, provides the transparency of performance needed by management to monitor progress versus the strategic plan.

The key aims of EPPM can be summarized as follows:

A key result of PPM is to decide which projects to fund in an optimal manner. Project Portfolio Optimization (PPO) is the effort to make the best decisions possible under these conditions.
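
As an illustration of what “optimal” can mean here, the following minimal Python sketch selects the subset of candidate projects that maximizes expected value within a fixed budget. All project names and figures are hypothetical, and real PPO models typically add many more constraints (resources, dependencies, risk).

    from itertools import combinations

    # Hypothetical candidate projects: (name, cost, expected strategic value)
    projects = [
        ("CRM upgrade", 40, 70),
        ("New product line", 55, 90),
        ("Process automation", 25, 40),
        ("Market expansion", 30, 45),
    ]
    budget = 100

    # Exhaustive search over all subsets: fine for small portfolios; larger
    # portfolios call for dynamic programming or integer programming instead.
    best_value, best_set = 0, ()
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            cost = sum(p[1] for p in subset)
            value = sum(p[2] for p in subset)
            if cost <= budget and value > best_value:
                best_value, best_set = value, subset

    print("Fund:", [p[0] for p in best_set], "value:", best_value)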

Management System

A management system is a set of policies, processes and procedures used by an organization to ensure that it can fulfill the tasks required to achieve its objectives. These objectives cover many aspects of the organization’s operations (including financial success, safe operation, product quality, client relationships, legislative and regulatory conformance and worker management). For instance, an environmental management system enables organizations to improve their environmental performance and an occupational health and safety management system (OHSMS) enables an organization to control its occupational health and safety risks, etc.

Many parts of the management system are common to a range of objectives, but others may be more specific.

A simplification of the main aspects of a management system is the 4-element “Plan, Do, Check, Act” approach. A complete management system covers every aspect of management and focuses on supporting performance management to achieve the objectives. The management system should be subject to continuous improvement as the organization learns.

Elements may include:

Examples of management system standards include:

Critical Path Method (CPM)

The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities. It is commonly used in conjunction with the program evaluation and review technique (PERT). A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish.

The critical path method (CPM) is a project modeling technique developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley Jr. of Remington Rand. Kelley and Walker related their memories of the development of CPM in 1989. Kelley attributed the term “critical path” to the developers of the Program Evaluation and Review Technique which was developed at about the same time by Booz Allen Hamilton and the U.S. Navy. The precursors of what came to be known as Critical Path were developed and put into practice by DuPont between 1940 and 1943 and contributed to the success of the Manhattan Project.

Critical Path Analysis is commonly used with all forms of projects, including construction, aerospace and defense, software development, research projects, product development, engineering, and plant maintenance, among others. Any project with interdependent activities can apply this method of mathematical analysis. The first time CPM was used for major skyscraper development was in 1966, while constructing the former World Trade Center Twin Towers in New York City. Although the original CPM program and approach are no longer used, the term is generally applied to any approach used to analyze a project network logic diagram.

The essential technique for using CPM is to construct a model of the project that includes the following: a list of all activities required to complete the project, the time (duration) that each activity will take, the dependencies between the activities, and logical end points such as milestones or deliverables.

Using these values, CPM calculates the longest path of planned activities to logical end points or to the end of the project, and the earliest and latest that each activity can start and finish without making the project longer. This process determines which activities are “critical” (i.e., on the longest path) and which have “total float” (i.e., can be delayed without making the project longer). In project management, a critical path is the sequence of project network activities which add up to the longest overall duration, regardless of whether that longest duration has float or not. This determines the shortest time possible to complete the project.

There can be ‘total float’ (unused time) within the critical path. For example, if a project is testing a solar panel and task ‘B’ requires ‘sunrise’, a scheduling constraint on the testing activity could prevent it from starting until the scheduled time of sunrise. This might insert dead time (total float) into the schedule for the activities on that path prior to the sunrise, while waiting for that event. This path, with the constraint-generated total float, would actually make the path longer, with the total float being part of the shortest possible duration for the overall project. In other words, individual tasks on the critical path prior to the constraint might be able to be delayed without elongating the critical path; this is the ‘total float’ of those tasks. However, the time added to the project duration by the constraint is actually critical path drag, the amount by which the project’s duration is extended by each critical path activity and constraint.
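
To make these calculations concrete, here is a minimal Python sketch of the CPM forward and backward passes on a small, hypothetical activity-on-node network; the activities and durations are illustrative only.

    from collections import defaultdict

    # Hypothetical network: {activity: (duration, predecessors)},
    # listed in topological order (predecessors before successors).
    activities = {
        "A": (3, []),
        "B": (4, ["A"]),
        "C": (2, ["A"]),
        "D": (5, ["B", "C"]),
        "E": (1, ["D"]),
    }

    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    for act, (dur, preds) in activities.items():
        es[act] = max((ef[p] for p in preds), default=0)
        ef[act] = es[act] + dur

    project_duration = max(ef.values())

    # Backward pass: latest finish (LF) and latest start (LS).
    successors = defaultdict(list)
    for act, (_, preds) in activities.items():
        for p in preds:
            successors[p].append(act)

    lf, ls = {}, {}
    for act in reversed(list(activities)):
        dur = activities[act][0]
        lf[act] = min((ls[s] for s in successors[act]), default=project_duration)
        ls[act] = lf[act] - dur

    # Total float = LS - ES; activities with zero float are on the critical path.
    for act in activities:
        total_float = ls[act] - es[act]
        print(act, "ES", es[act], "EF", ef[act], "float", total_float,
              "critical" if total_float == 0 else "")

Running this on the sample network yields the critical path A-B-D-E (13 time units), with activity C carrying 2 units of total float.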

A project can have several parallel, near-critical paths, and some or all of the tasks could have ‘free float’ and/or ‘total float’. An additional parallel path through the network with a total duration shorter than the critical path is called a sub-critical or non-critical path. Activities on sub-critical paths have no drag, as they do not extend the project’s duration.

CPM analysis tools allow a user to select a logical end point in a project and quickly identify its longest series of dependent activities (its longest path). These tools can display the critical path (and near critical path activities if desired) as a cascading waterfall that flows from the project’s start (or current status date) to the selected logical end point.

Although the activity-on-arrow diagram (PERT Chart) is still used in a few places, it has generally been superseded by the activity-on-node diagram, where each activity is shown as a box or node and the arrows represent the logical relationships going from predecessor to successor as shown here in the “Activity-on-node diagram”.

In this diagram, Activities A, B, C, D, and E comprise the critical or longest path, while Activities F, G, and H are off the critical path with floats of 15 days, 5 days, and 20 days respectively. Whereas activities that are off the critical path have float and therefore do not delay completion of the project, those on the critical path will usually have critical path drag, i.e., they delay project completion. The drag of a critical path activity can be computed as follows: if the activity has nothing else in parallel, its drag is equal to its own duration; otherwise, its drag is the lesser of its duration and the smallest total float among the activities running in parallel with it.
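
A minimal Python sketch of that drag rule, reusing the float values from the diagram above (15, 5, and 20 days):

    def critical_path_drag(duration, parallel_floats):
        """Drag of a critical-path activity: its duration if nothing runs in
        parallel, otherwise the lesser of its duration and the smallest total
        float among the parallel activities."""
        if not parallel_floats:
            return duration
        return min(duration, min(parallel_floats))

    # Example: a 10-day critical activity running parallel to activities with
    # total floats of 15, 5, and 20 days has a drag of 5 days.
    print(critical_path_drag(10, [15, 5, 20]))  # 5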
These results, including the drag computations, allow managers to prioritize activities for the effective management of the project, and to shorten the planned critical path of a project by pruning critical path activities, by “fast tracking” (i.e., performing more activities in parallel), and/or by “crashing the critical path” (i.e., shortening the durations of critical path activities by adding resources).

Critical path drag analysis has also been used to optimize schedules in processes outside of strict project-oriented contexts, such as to increase manufacturing throughput by using the technique and metrics to identify and alleviate delaying factors and thus reduce assembly lead time.

Crash duration is a term referring to the shortest possible time for which an activity can be scheduled. It can be achieved by shifting more resources towards the completion of that activity, resulting in decreased time spent and often a reduced quality of work, as the premium is set on speed.
Crash duration is typically modeled as a linear relationship between cost and activity duration; however, in many cases a convex function or a step function is more applicable.
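
Under the linear assumption just described, the cost of crashing is usually summarized as a cost slope, i.e., the extra cost per unit of time saved. A minimal sketch with hypothetical figures:

    def crash_cost_slope(normal_cost, crash_cost, normal_dur, crash_dur):
        """Cost per unit of time saved, assuming a linear time-cost trade-off."""
        return (crash_cost - normal_cost) / (normal_dur - crash_dur)

    # Example: crashing an activity from 8 days to 5 days raises its cost
    # from $10,000 to $14,500, i.e. $1,500 per day saved.
    print(crash_cost_slope(10_000, 14_500, 8, 5))  # 1500.0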

Originally, the critical path method considered only logical dependencies between terminal elements. Since then, it has been expanded to allow for the inclusion of resources related to each activity, through processes called activity-based resource assignments and resource leveling. A resource-leveled schedule may include delays due to resource bottlenecks (i.e., unavailability of a resource at the required time), and may cause a previously shorter path to become the longest or most “resource critical” path. A related concept is called the critical chain, which attempts to protect activity and project durations from unforeseen delays due to resource constraints.

Since project schedules change on a regular basis, CPM allows continuous monitoring of the schedule, which allows the project manager to track the critical activities, and alerts the project manager to the possibility that non-critical activities may be delayed beyond their total float, thus creating a new critical path and delaying project completion. In addition, the method can easily incorporate the concepts of stochastic predictions, using the program evaluation and review technique (PERT) and event chain methodology.
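
As one example of such stochastic prediction, PERT’s classic three-point (beta) estimate combines optimistic, most likely, and pessimistic durations into an expected duration and a standard deviation. A minimal sketch with illustrative numbers:

    def pert_estimate(optimistic, most_likely, pessimistic):
        """Classic PERT three-point (beta) estimate of an activity's duration."""
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    # Example: O=4, M=6, P=14 days -> expected 7.0 days, std dev ~1.67 days.
    print(pert_estimate(4, 6, 14))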

Currently, there are several software solutions available in industry that use the CPM method of scheduling; see list of project management software. The method currently used by most project management software is based on a manual calculation approach developed by Fondahl of Stanford University.

A schedule generated using critical path techniques is often not realized precisely, as estimates are used to calculate times: if one mistake is made, the results of the analysis may change. This could disrupt the implementation of a project if the estimates are blindly believed, and if changes are not addressed promptly. However, the structure of critical path analysis is such that the variance from the original schedule caused by any change can be measured, and its impact either ameliorated or adjusted for. Indeed, an important element of project postmortem analysis is the as-built critical path (ABCP), which analyzes the specific causes and impacts of changes between the planned schedule and the schedule as actually implemented.

Management Process

A management process is a process of setting goals, planning, and/or controlling the organizing and leading of the execution of any type of activity, such as:

An organization’s senior management is responsible for carrying out its management process. However, this is not the case for all management processes; for example, it is the responsibility of the project manager to carry out the project management process.

Planning: determining the objectives, evaluating the different alternatives, and choosing the best one.

Organizing: defining the group’s functions, establishing relationships, and defining authority and responsibility.

Staffing: recruitment or placement, and selection or training, for the development of members of the firm.

Directing: giving direction to the employees.