Project Portfolio Management (PPM)


Project Portfolio Management (PPM) is the centralized management of the processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization’s operational and financial goals, while honouring constraints imposed by customers, strategic objectives, or external real-world factors. The international standard ISO 21504 defines a framework for project portfolio management.

PPM provides program and project managers in large, program/project-driven organizations with the capabilities needed to manage the time, resources, skills, and budgets necessary to accomplish all interrelated tasks. It provides a framework for issue resolution and risk mitigation, as well as the centralized visibility to help planning and scheduling teams identify the fastest, cheapest, or most suitable approach to deliver projects and programs. Portfolio managers define key performance indicators and the strategy for their portfolio.

Pipeline management involves steps to ensure that an adequate number of project proposals are generated and evaluated to determine whether (and how) a set of projects in the portfolio can be executed with finite development resources in a specified time. There are three major sub-components to pipeline management: ideation, work intake processes, and Phase-Gate reviews. Fundamental to pipeline management is the ability to align the decision-making process for estimating and selecting new capital investment projects with the strategic plan.

Resource management focuses on the efficient and effective deployment of an organization’s resources where and when they are needed. These resources can include financial resources, inventory, human resources, technical skills, production, and design. In addition to project-level resource allocation, users can also model ‘what-if’ resource scenarios and extend this view across the portfolio.

Change control involves the capture and prioritization of change requests, which can include new requirements, features, functions, operational constraints, regulatory demands, and technical enhancements. PPM provides a central repository for these change requests and the ability to match available resources to evolving demand within the financial and operational constraints of individual projects.

With PPM, the office of finance can improve its accuracy in estimating and managing the financial resources of a project or group of projects. In addition, the value of projects can be demonstrated in relation to the strategic objectives and priorities of the organization through financial controls, and progress can be assessed through earned value and other project financial techniques.

Risk analysis in PPM examines the risk sensitivities residing within each project as the basis for determining confidence levels across the portfolio. The integration of cost and schedule risk management with techniques for determining contingency and risk response plans enables organizations to gain an objective view of project uncertainties.

In the early 2000s, many PPM vendors realized that project portfolio reporting services only addressed part of a wider need for PPM in the marketplace. Another, more senior audience had emerged, sitting at management and executive levels above detailed work execution and schedule management, who required a greater focus on process improvement and on ensuring the viability of the portfolio in line with overall strategic objectives. In addition, as the size, scope, complexity, and geographical spread of organizations’ project portfolios continued to grow, greater visibility of project work was needed across the enterprise, allied to improved resource utilization and capacity planning.

Enterprise Project Portfolio Management (EPPM) is a top-down approach to managing all project-intensive work and resources across the enterprise. This contrasts with the traditional approach of combining manual processes, desktop project tools, and PPM applications for each project portfolio environment.

The PPM landscape is evolving rapidly as a result of the growing preference for managing multiple capital investment initiatives from a single, enterprise-wide system. This more centralized approach, and resulting ‘single version of the truth’ for project and project portfolio information, provides the transparency of performance needed by management to monitor progress versus the strategic plan.

The key aims of EPPM can be summarized as follows: to provide enterprise-wide visibility of project work, a ‘single version of the truth’ for project and portfolio information, and transparency of performance against the strategic plan.

A key result of PPM is to decide which projects to fund in an optimal manner. Project Portfolio Optimization (PPO) is the effort to make the best decisions possible under these conditions.
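Project portfolio optimization is often formalized as a constrained selection problem. The sketch below, with entirely hypothetical project names, costs, and values, treats funding as a 0/1 knapsack: choose the subset of candidate projects that maximizes expected value without exceeding the budget. Real PPO tools weigh many more factors (risk, interdependencies, strategic fit), so this illustrates the shape of the decision, not a complete method.

```python
from itertools import combinations

# Hypothetical candidates: cost and expected value in arbitrary units.
projects = {
    "CRM upgrade": {"cost": 400, "value": 700},
    "New plant":   {"cost": 900, "value": 1200},
    "Mobile app":  {"cost": 300, "value": 550},
    "ERP rollout": {"cost": 600, "value": 800},
}
budget = 1200

# Exhaustive search is fine for a handful of candidates; larger portfolios
# would use dynamic programming or integer programming instead.
best_value, best_set = 0, ()
for r in range(1, len(projects) + 1):
    for subset in combinations(projects, r):
        cost = sum(projects[p]["cost"] for p in subset)
        value = sum(projects[p]["value"] for p in subset)
        if cost <= budget and value > best_value:
            best_value, best_set = value, subset

print(best_set, best_value)
```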

Management System

A management system is a set of policies, processes and procedures used by an organization to ensure that it can fulfill the tasks required to achieve its objectives. These objectives cover many aspects of the organization’s operations (including financial success, safe operation, product quality, client relationships, legislative and regulatory conformance and worker management). For instance, an environmental management system enables organizations to improve their environmental performance and an occupational health and safety management system (OHSMS) enables an organization to control its occupational health and safety risks, etc.

Many parts of the management system are common to a range of objectives, but others may be more specific.

A simplification of the main aspects of a management system is the 4-element “Plan, Do, Check, Act” approach. A complete management system covers every aspect of management and focuses on supporting the performance management to achieve the objectives. The management system should be subject to continuous improvement as the organization learns.

Elements may include the organization’s policies, processes, and procedures, the objectives they are intended to achieve, and the means of measuring progress towards those objectives.

Examples of management system standards include ISO 9001 for quality management, ISO 14001 for environmental management, ISO 45001 for occupational health and safety, and ISO/IEC 27001 for information security.

Critical Path Method (CPM)


The critical path method (CPM), or critical path analysis (CPA), is an algorithm for scheduling a set of project activities. It is commonly used in conjunction with the program evaluation and review technique (PERT). A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish.

The critical path method (CPM) is a project modeling technique developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley Jr. of Remington Rand. Kelley and Walker related their memories of the development of CPM in 1989. Kelley attributed the term “critical path” to the developers of the Program Evaluation and Review Technique which was developed at about the same time by Booz Allen Hamilton and the U.S. Navy. The precursors of what came to be known as Critical Path were developed and put into practice by DuPont between 1940 and 1943 and contributed to the success of the Manhattan Project.

Critical Path Analysis is commonly used with all forms of projects, including construction, aerospace and defense, software development, research projects, product development, engineering, and plant maintenance, among others. Any project with interdependent activities can apply this method of mathematical analysis. The first time CPM was used for major skyscraper development was in 1966 while constructing the former World Trade Center Twin Towers in New York City. Although the original CPM program and approach is no longer used, the term is generally applied to any approach used to analyze a project network logic diagram.

The essential technique for using CPM is to construct a model of the project that includes the following: a list of all activities required to complete the project (typically categorized within a work breakdown structure), the time (duration) that each activity will take to complete, the dependencies between the activities, and logical end points such as milestones or deliverable items.

Using these values, CPM calculates the longest path of planned activities to logical end points or to the end of the project, and the earliest and latest that each activity can start and finish without making the project longer. This process determines which activities are “critical” (i.e., on the longest path) and which have “total float” (i.e., can be delayed without making the project longer). In project management, a critical path is the sequence of project network activities which add up to the longest overall duration, regardless of whether that longest duration has float or not. This determines the shortest time possible to complete the project.

There can be ‘total float’ (unused time) within the critical path. For example, if a project is testing a solar panel and task ‘B’ requires ‘sunrise’, a scheduling constraint on the testing activity could ensure that it does not start until the scheduled time for sunrise. Waiting for this event might insert dead time (total float) into the activities on that path prior to the sunrise. This path, with its constraint-generated total float, would actually make the path longer, with the total float being part of the shortest possible duration for the overall project. In other words, individual tasks on the critical path prior to the constraint might be able to be delayed without elongating the critical path; this is the ‘total float’ of that task. However, the time added to the project duration by the constraint is actually critical path drag, the amount by which the project’s duration is extended by each critical path activity and constraint.
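As a concrete illustration of the forward and backward passes just described, the sketch below computes earliest/latest start and finish times and total float for a small, hypothetical activity-on-node network. It assumes activities are listed in topological order (every predecessor appears before its successors); production schedulers handle arbitrary orderings, calendars, and constraint types.

```python
# {activity: (duration, [predecessors])}; hypothetical example data.
activities = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for act, (dur, preds) in activities.items():
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_end = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS).
lf, ls = {}, {}
for act in reversed(list(activities)):
    dur, _ = activities[act]
    succs = [s for s, (_, ps) in activities.items() if act in ps]
    lf[act] = min((ls[s] for s in succs), default=project_end)
    ls[act] = lf[act] - dur

# Total float = LS - ES; activities with zero float are critical.
for act in activities:
    slack = ls[act] - es[act]
    print(act, es[act], ef[act], slack, "critical" if slack == 0 else "")
```

For this data the forward pass gives a project duration of 12, and A, B and D come out as the critical path.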

A project can have several parallel, near-critical paths, and some or all of the tasks could have ‘free float’ and/or ‘total float’. An additional parallel path through the network with a total duration shorter than the critical path is called a sub-critical or non-critical path. Activities on sub-critical paths have no drag, as they are not extending the project’s duration.

CPM analysis tools allow a user to select a logical end point in a project and quickly identify its longest series of dependent activities (its longest path). These tools can display the critical path (and near critical path activities if desired) as a cascading waterfall that flows from the project’s start (or current status date) to the selected logical end point.

Although the activity-on-arrow diagram (PERT Chart) is still used in a few places, it has generally been superseded by the activity-on-node diagram, where each activity is shown as a box or node and the arrows represent the logical relationships going from predecessor to successor as shown here in the “Activity-on-node diagram”.

In this diagram, Activities A, B, C, D, and E comprise the critical or longest path, while Activities F, G, and H are off the critical path with floats of 15 days, 5 days, and 20 days respectively. Whereas activities that are off the critical path have float and are therefore not delaying completion of the project, those on the critical path will usually have critical path drag, i.e., they delay project completion. The drag of a critical path activity can be computed using the following formula:
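As commonly formulated (the drag metric was introduced by Stephen Devaux): if a critical path activity has nothing else in parallel, its drag equals its own duration; if other activities run in parallel to it, its drag is the lesser of its own duration and the total float of the parallel activity with the least total float.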

These results, including the drag computations, allow managers to prioritize activities for the effective management of the project, and to shorten the planned critical path of a project by pruning critical path activities, by “fast tracking” (i.e., performing more activities in parallel), and/or by “crashing the critical path” (i.e., shortening the durations of critical path activities by adding resources).

Critical path drag analysis has also been used to optimize schedules in processes outside of strict project-oriented contexts, such as to increase manufacturing throughput by using the technique and metrics to identify and alleviate delaying factors and thus reduce assembly lead time.

Crash duration is a term referring to the shortest possible time for which an activity can be scheduled. It can be achieved by shifting more resources towards the completion of that activity, resulting in decreased time spent and often a reduced quality of work, as the premium is set on speed.
Crash duration is typically modeled as a linear relationship between cost and activity duration; however, in many cases a convex function or a step function is more applicable.
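In the linear case, the trade-off is usually summarized as a cost slope, the extra cost per unit of time saved: cost slope = (crash cost − normal cost) / (normal duration − crash duration). For example (hypothetical figures), an activity that normally takes 10 days at a cost of 5,000 but can be crashed to 7 days for 8,000 has a cost slope of 1,000 per day saved, so crashing it by two days would be expected to add about 2,000 to its cost.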

Originally, the critical path method considered only logical dependencies between terminal elements. Since then, it has been expanded to allow for the inclusion of resources related to each activity, through processes called activity-based resource assignments and resource leveling. A resource-leveled schedule may include delays due to resource bottlenecks (i.e., unavailability of a resource at the required time), and may cause a previously shorter path to become the longest or most “resource critical” path. A related concept is called the critical chain, which attempts to protect activity and project durations from unforeseen delays due to resource constraints.

Since project schedules change on a regular basis, CPM allows continuous monitoring of the schedule, which allows the project manager to track the critical activities, and alerts the project manager to the possibility that non-critical activities may be delayed beyond their total float, thus creating a new critical path and delaying project completion. In addition, the method can easily incorporate the concepts of stochastic predictions, using the program evaluation and review technique (PERT) and event chain methodology.
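The stochastic element usually enters through PERT’s three-point estimates: given optimistic, most likely, and pessimistic durations O, M and P, the expected duration is taken as E = (O + 4M + P) / 6 with standard deviation (P − O) / 6. For example, O = 4, M = 6 and P = 14 days give E = 7 days and a standard deviation of roughly 1.7 days, which can then be propagated along the critical path.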

Currently, there are several software solutions available in industry that use the CPM method of scheduling; see list of project management software. The method currently used by most project management software is based on a manual calculation approach developed by Fondahl of Stanford University.

A schedule generated using the critical path techniques often is not realized precisely, as estimations are used to calculate times: if one mistake is made, the results of the analysis may change. This could cause an upset in the implementation of a project if the estimates are blindly believed, and if changes are not addressed promptly. However, the structure of critical path analysis is such that the variance from the original schedule caused by any change can be measured, and its impact either ameliorated or adjusted for. Indeed, an important element of project postmortem analysis is the as built critical path (ABCP), which analyzes the specific causes and impacts of changes between the planned schedule and eventual schedule as actually implemented.

Management Process

A management process is a process of setting goals, planning, and/or controlling the organizing and leading of the execution of any type of activity, such as a project (project management process) or a process (process management process).

An organization’s senior management is responsible for carrying out its management process. However, this is not always the case for all management processes, for example, it is the responsibility of the project manager to carry out a project management process.

Planning: determining the objectives, evaluating the different alternatives, and choosing the best one.

Organizing: defining the group’s functions, establishing relationships, and defining authority and responsibility.

Staffing: recruitment or placement and selection or training, for the development of the firm’s members.

Directing: giving direction to the employees.

Technology Management

Technology management is a set of management disciplines that allows organizations to manage their technological fundamentals to create competitive advantage. Typical concepts used in technology management are technology strategy (the role of technology in the organization), technology forecasting (identification of possible relevant technologies for the organization), technology roadmapping (mapping technologies to business and market needs), and the technology project portfolio (the set of technology projects under development).

The role of the technology management function in an organization is to understand the value of certain technology for the organization. Continuous development of technology is valuable as long as there is value for the customer, and therefore the technology management function in an organization should be able to argue when to invest in technology development and when to withdraw.

Technology management can also be defined as the integrated planning, design, optimization, operation and control of technological products, processes and services; a better definition would be the management of the use of technology for human advantage.

The Association of Technology, Management, and Applied Engineering defines technology management as the field concerned with the supervision of personnel across the technical spectrum and a wide variety of complex technological systems. Technology management programs typically include instruction in production and operations management, project management, computer applications, quality control, safety and health issues, statistics, and general management principles.

Perhaps the most authoritative input to our understanding of technology is the diffusion of innovations theory developed in the first half of the twentieth century. It suggests that all innovations follow a similar diffusion pattern – best known today in the form of an “s” curve though originally based upon the concept of a standard distribution of adopters. In broad terms the “s” curve suggests four phases of a technology life cycle – emerging, growth, mature and aging.

These four phases are coupled to increasing levels of acceptance of an innovation or, in our case, a new technology. In recent times, for many technologies an inverse curve, which corresponds to a declining cost per unit, has been postulated. This may not prove to be universally true, though for information technology, where much of the cost is in the initial phase, it has been a reasonable expectation.

The second major contribution to this area is the Carnegie Mellon Capability Maturity Model. This model proposes that a series of progressive capabilities can be quantified through a set of threshold tests. These tests determine repeatability, definition, management and optimization. The model suggests that any organization has to master one level before being able to proceed to the next.

The third significant contribution comes from Gartner, the research service: the hype cycle. This suggests that our modern approach to marketing technology results in the technology being over-hyped in the early stages of growth. Taken together, these fundamental concepts provide a foundation for formalizing the approach to managing technology.

Mobile device management (MDM) is the administrative area dealing with deploying, securing, monitoring, integrating and managing mobile devices, such as smartphones, tablets and laptops, in the workplace and other areas. The intent of MDM is to optimize the functionality and security of mobile devices within the enterprise, while simultaneously protecting the corporate network. MDM is usually implemented with the use of a third party product that has management features for particular vendors of mobile devices.

Modern mobile device management products support tablets as well as Windows 10 and macOS computers. The practice of using MDM to control PCs is also known as unified endpoint management.

The Association of Technology, Management, and Applied Engineering (ATMAE), accredits selected collegiate programs in technology management. An instructor or graduate of a technology management program may choose to become a Certified Technology Manager (CTM) by sitting for a rigorous exam administered by ATMAE covering production planning & control, safety, quality, and management/supervision.

ATMAE program accreditation is recognized by the Council for Higher Education Accreditation (CHEA) for accrediting technology management programs. As of 2011, CHEA recognizes ATMAE in the U.S. for accrediting associate, baccalaureate, and master’s degree programs in technology, applied technology, engineering technology, and technology-related disciplines delivered by nationally or regionally accredited institutions in the United States.

Project Workforce Management (PWM)

Project workforce management is the practice of combining the coordination of all logistic elements of a project through a single software application (or workflow engine). This includes planning and tracking of schedules and milestones, cost and revenue, and resource allocation, as well as overall management of these project elements. Efficiency is improved by eliminating manual processes, like spreadsheet tracking, to monitor project progress. It also allows for at-a-glance status updates and ideally integrates with existing legacy applications in order to unify ongoing projects, enterprise resource planning (ERP) and broader organizational goals. A project involves many logistic elements; different team members are responsible for managing each of them, and the organisation may also have mechanisms for managing some logistic areas.

By coordinating these various components of project management, workforce management and financials through a single solution, the process of configuring and changing project and workforce details is simplified.

A project workforce management system defines project tasks, project positions, and assigns personnel to the project positions. The project tasks and positions are correlated to assign a responsible project position or even multiple positions to complete each project task. Because each project position may be assigned to a specific person, the qualifications and availabilities of that person can be taken into account when determining the assignment. By associating project tasks and project positions, a manager can better control the assignment of the workforce and complete the project more efficiently.
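A minimal sketch of that correlation is shown below; the position, task, and staff records are hypothetical and not drawn from any particular product, but they illustrate how qualifications and availability can drive the assignment.

```python
# Hypothetical positions, tasks correlated to positions, and personnel.
positions = {
    "project manager": {"required_skill": "planning"},
    "tester":          {"required_skill": "testing"},
}
tasks = {
    "schedule sprint": "project manager",
    "write test plan": "tester",
}
people = [
    {"name": "Asha", "skills": {"planning"}, "available": True},
    {"name": "Ben",  "skills": {"testing"},  "available": True},
]

def assign(position):
    """Return the first available person qualified for the position."""
    need = positions[position]["required_skill"]
    for person in people:
        if person["available"] and need in person["skills"]:
            return person["name"]
    return None

for task, position in tasks.items():
    print(f"{task} -> {position} -> {assign(position)}")
```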

Project workforce management is thus about managing all the logistic aspects of a project or an organisation through a software application. Usually, this software has a defined workflow engine, so all the logistic processes take place in the workflow engine.


Because software is used, project workflow management tasks can be largely automated, leaving few manual tasks for project managers. This brings high efficiency to project management when it comes to project tracking purposes. In addition to different tracking mechanisms, project workforce management software also offers a dashboard for the project team. Through the dashboard, the project team has an at-a-glance view of the overall progress of the project elements.

Most of the time, project workforce management software can work with existing legacy software systems such as ERP (enterprise resource planning) systems. This easy integration allows the organisation to use a combination of software systems for management purposes.

Good project management is an important factor for the success of a project. A project may be thought of as a collection of activities and tasks designed to achieve a specific goal of the organisation, with specific performance or quality requirements, while meeting any applicable time and cost constraints. Project management refers to managing the activities that lead to the successful completion of a project. Furthermore, it focuses on finite deadlines and objectives. A number of tools may be used to assist with this, as well as with assessment.

Project management may be used when planning personnel resources and capabilities. The project may be linked to the objects in a professional services life cycle and may accompany the objects from opportunity through quotation, contract, time and expense recording, billing, and period-end activities to the final reporting. Naturally, the project becomes even more detailed as it moves through this cycle.

For any given project, several project tasks should be defined. Project tasks describe the activities and phases that have to be performed in the project, such as writing layouts, customising, and testing. What is needed is a system that allows project positions to be correlated with project tasks. Project positions describe project roles like project manager, consultant, tester, etc. Project positions are typically arranged linearly within the project. By correlating project tasks with project positions, the qualifications and availability of personnel assigned to the project positions may be considered.

Good project management should ensure that the project’s goal is achieved within its time and cost constraints, at the required level of performance and quality.


The regular and most common types of tasks handled by project workforce management software or a similar workflow engine are the planning and tracking of schedules and milestones, cost and revenue accounting, and resource allocation.

Regularly monitoring a project’s schedule performance can provide early indications of possible activity-coordination problems, resource conflicts, and cost overruns. To monitor schedule performance, information must be collected and evaluated; doing so ensures accuracy.

The project schedule outlines the intended result of the project and what is required to bring it to completion. The schedule needs to include all the resources involved and the cost and time constraints, expressed through a work breakdown structure (WBS). The WBS outlines all the tasks and breaks them down into specific deliverables.
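A toy example of that breakdown, with hypothetical tasks and durations, shows how leaf-level deliverables roll up through the WBS:

```python
# Hypothetical WBS: tasks broken into deliverables with durations in days.
wbs = {
    "Website relaunch": {
        "Design": {"Wireframes": 5, "Visual design": 8},
        "Build":  {"Front end": 15, "Back end": 20},
        "Launch": {"Content migration": 4, "Go-live checks": 2},
    }
}

def rollup(node):
    """Sum leaf durations beneath a WBS node."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

print(rollup(wbs))  # 54 days of effort across all deliverables
```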

The importance of tracking actual costs and resource usage in projects depends upon the project situation.

Tracking actual costs and resource usage is an essential aspect of the project control function.

Organisational profitability is directly connected to project management efficiency and optimal resource utilisation. Organisations that struggle with either or both of these core competencies typically experience cost overruns, schedule delays, and unhappy customers.

The focus for project management is the analysis of project performance to determine whether a change is needed in the plan for the remaining project activities to achieve the project goals.

Risk identification consists of determining which risks are likely to affect the project and documenting the characteristics of each.

Project communication management is about how communication is carried out during the course of the project.

It is of no use completing a project within the set time and budget if the final product is of poor quality. The project manager has to ensure that the final product meets the quality expectations of the stakeholders. This is done through good quality planning, quality assurance, and quality control.

There are three main differences between Project Workforce Management and traditional project management and workforce management disciplines and solutions:

All project and workforce processes are designed, controlled, and audited using a built-in graphical workflow engine. Users can design, control, and audit the different processes involved in the project. The graphical workflow gives the system’s users a clear picture of how each process flows through the workflow engine.

Project Workforce Management provides organization and work breakdown structures to create, manage, and report on functional and approval hierarchies, and to track information at any level of detail. Users can create, manage, edit, and report on work breakdown structures. Work breakdown structures have different abstraction levels, so information can be tracked at any level. Usually, project workforce management includes approval hierarchies: each workflow created goes through several reviews before it becomes an organisational or project standard. This helps the organisation reduce process inefficiencies, as each workflow is audited by many stakeholders.

Unlike traditional disconnected project, workforce and billing management systems that are solely focused on tracking IT projects, internal workforce costs or billable projects, Project Workforce Management is designed to unify the coordination of all project and workforce processes, whether internal, shared (IT) or billable.


Project workflow management is one of the best methods for managing the different aspects of a project. The more complex the project, the more effective project workforce management is likely to be.

For simple projects or small organisations, project workflow management may not add much value, but for more complex projects and big organisations, managing the project workflow makes a big difference. This is because small organisations and projects do not carry significant overhead when it comes to managing processes. There are many project workforce management products, but many organisations prefer to adopt unique solutions.

Therefore, organisations engage software development companies to develop custom project workflow management systems for them. This has proved to be a suitable way of acquiring the project workforce management system that best fits the company.

Project Network


A project network is a graph (weighted directed graph) depicting the sequence in which a project’s terminal elements are to be completed by showing terminal elements and their dependencies. It is always drawn from left to right to reflect project chronology.

The work breakdown structure or the product breakdown structure show the “part-whole” relations. In contrast, the project network shows the “before-after” relations.

The most popular form of project network is activity on node (AON); the other is activity on arrow (AOA).

The condition for a valid project network is that it does not contain any circular references; in other words, the graph must be acyclic.
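That acyclicity condition is easy to check mechanically: a topological sort succeeds exactly when the network contains no cycle. Below is a small sketch using Python’s standard-library graphlib (Python 3.9+), with a hypothetical predecessor table:

```python
from graphlib import TopologicalSorter, CycleError

# Each activity maps to its predecessors (hypothetical example network).
network = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

try:
    order = list(TopologicalSorter(network).static_order())
    print("valid network; one possible order:", order)
except CycleError as err:
    print("invalid network; circular reference found:", err)
```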

Project dependencies can also be depicted by a predecessor table. Although such a form is very inconvenient for human analysis, project management software often offers such a view for data entry.

An alternative way of showing and analyzing the sequence of project work is the design structure matrix or dependency structure matrix.

Design Methods

Design methods are procedures, techniques, aids, or tools for designing. They offer a number of different kinds of activities that a designer might use within an overall design process. Conventional procedures of design, such as drawing, can be regarded as design methods, but since the 1950s new procedures have been developed that are more usually grouped together under the name of “design methods”. What design methods have in common is that they “are attempts to make public the hitherto private thinking of designers; to externalise the design process”.

Design methodology is the broader study of method in design: the study of the principles, practices and procedures of designing.

Design methods originated in new approaches to problem solving developed in the mid-20th Century, and also in response to industrialisation and mass-production, which changed the nature of designing. A “Conference on Systematic and Intuitive Methods in Engineering, Industrial Design, Architecture and Communications”, held in London in 1962 is regarded as a key event marking the beginning of what became known within design studies as the “design methods movement”, leading to the founding of the Design Research Society and influencing design education and practice. Leading figures in this movement in the UK were J. Christopher Jones at the University of Manchester and L. Bruce Archer at the Royal College of Art.

The movement developed through further conferences on new design methods in the UK and USA in the 1960s. The first books on rational design methods, and on creative methods also appeared in this period.

New approaches to design were developing at the same time in Germany, notably at the Ulm School of Design (Hochschule für Gestaltung, HfG Ulm) (1953-1968) under the leadership of Tomás Maldonado. Design teaching at Ulm integrated design with science (including social sciences) and introduced new fields of study such as cybernetics, systems theory and semiotics into design education. Bruce Archer also taught at Ulm, and another influential teacher was Horst Rittel. In 1963 Rittel moved to the School of Architecture at the University of California, Berkeley, where he helped found the Design Methods Group, a society focused on developing and promoting new methods especially in architecture and planning.

At the end of the 1960s two influential, but quite different works were published: Herbert A. Simon’s The Sciences of the Artificial and J. Christopher Jones’s Design Methods. Simon proposed the “science of design” as “a body of intellectually tough, analytic, partly formalizable, partly empirical, teachable doctrine about the design process”, whereas Jones catalogued a variety of approaches to design, both rational and creative, within a context of a broad, futures creating, systems view of design.

The 1970s saw some reaction against the rationality of design methods, notably from two of its pioneers, Christopher Alexander and J. Christopher Jones. Fundamental issues were also raised by Rittel, who characterised design and planning problems as wicked problems, un-amenable to the techniques of science and engineering, which deal with “tame” problems. The criticisms turned some in the movement away from rationalised approaches to design problem solving and towards “argumentative”, participatory processes in which designers worked in partnership with the problem stakeholders (clients, customers, users, the community). This led to participatory design, user centered design and the role of design thinking as a creative process in problem solving and innovation.

However, interest in systematic and rational design methods continued to develop strongly in engineering design during the 1980s; for example, through the Conference on Engineering Design series of The Design Society and the work of the Verein Deutscher Ingenieure association in Germany, and also in Japan, where the Japanese Society for the Science of Design had been established as early as 1954. Books on systematic engineering design methods were published in Germany and the UK. In the USA the American Society of Mechanical Engineers Design Engineering Division began a stream on design theory and methodology within its annual conferences. The interest in systematic, rational approaches to design has led to design science and design science (methodology) in engineering and computer science.

The development of design methods has been closely associated with prescriptions for a systematic process of designing. These process models usually comprise a number of phases or stages, beginning with a statement or recognition of a problem or a need for a new design and culminating in a finalised solution proposal. In his ‘Systematic Method for Designers’ L. Bruce Archer produced a very elaborate, 229 step model of a systematic design process for industrial design, but also a summary model consisting of three phases: Analytical phase (programming and data collection, analysis), Creative phase (synthesis, development), and Executive phase (communication). The UK’s Design Council models the creative design process in four phases: Discover (insight into the problem), Define (the area to focus upon), Develop (potential solutions), Deliver (solutions that work). A systematic model for engineering design by Pahl and Beitz has phases of Clarification of the task, Conceptual design, Embodiment design, and Detail design. A less prescriptive approach to designing a basic design process for oneself has been outlined by J. Christopher Jones.

In the engineering design process systematic models tend to be linear, in sequential steps, but acknowledging the necessity of iteration. In architectural design, process models tend to be cyclical and spiral, with iteration as essential to progression towards a final design. In industrial and product design, process models tend to comprise a sequence of stages of divergent and convergent thinking. The Dubberly Design Office has compiled examples of more than 80 design process models, but it is not an exhaustive list.

Within these process models there are numerous design methods that can be applied. In his book of ‘Design Methods’ J. C. Jones grouped 26 methods according to their purposes within a design process: Methods of exploring design situations (e.g. Stating Objectives, Investigating User Behaviour, Interviewing Users), Methods of searching for ideas (e.g. Brainstorming, Synectics, Morphological Charts), Methods of exploring problem structure (e.g. Interaction Matrix, Functional Innovation, Information Sorting), Methods of evaluation (e.g. Checklists, Ranking and Weighting).

Nigel Cross outlined eight stages in a process of engineering product design, each with an associated method: Identifying Opportunities – User Scenarios; Clarifying Objectives – Objectives Tree; Establishing Functions – Function Analysis; Setting Requirements – Performance Specification; Determining Characteristics – Quality Function Deployment; Generating Alternatives – Morphological Chart; Evaluating Alternatives – Weighted Objectives; Improving Details – Value Engineering.

Many design methods still currently in use originated in the design methods movement of the 1960s and 70s, adapted to modern design practices. Recent developments have seen the introduction of more qualitative techniques, including ethnographic methods such as cultural probes and situated methods.

The design methods movement had a profound influence on the development of academic interest in design and designing and the emergence of design research and design studies. Arising directly from the 1962 Conference on Design Methods, the Design Research Society (DRS) was founded in the UK in 1966. The purpose of the Society is to promote “the study of and research into the process of designing in all its many fields” and is an interdisciplinary group with many professions represented.

In the USA, a similar Design Methods Group (DMG) was also established in 1966 by Horst Rittel and others at the University of California, Berkeley. The DMG held a conference at MIT in 1968 with a focus on environmental design and planning, and that led to the foundation of the Environmental Design Research Association (EDRA), which held its first conference in 1969. A group interested in design methods and theory in architecture and engineering formed at MIT in the early 1980s, including Donald Schön, who was studying the working practices of architects, engineers and other professionals and developing his theory of reflective practice. In 1984 the National Science Foundation created a Design Theory and Methodology Program to promote methods and process research in engineering design.

Meanwhile, in Europe, Vladimir Hubka established the Workshop Design-Konstruktion (WDK), which led to a series of International Conferences on Engineering Design (ICED) beginning in 1981, and later became the Design Society.

Academic research journals in design also began publication. DRS initiated Design Studies in 1979, Design Issues appeared in 1984, and Research in Engineering Design in 1989.

Several pioneers of design methods developed their work in association with industry. The Ulm school established a significant partnership with the German consumer products company Braun through their designer Dieter Rams. J. Christopher Jones began his approach to systematic design as an ergonomist at the electrical engineering company AEI. L. Bruce Archer developed his systematic approach in projects for medical equipment for the UK National Health Service.

In the USA, designer Henry Dreyfuss had a profound impact on the practice of industrial design by developing systematic processes and promoting the use of anthropometrics, ergonomics and human factors in design, including through his 1955 book ‘Designing for People’. Another successful designer, Jay Doblin, was also influential on the theory and practice of design as a systematic process.

Much of current design practice has been influenced and guided by design methods. For example, the influential IDEO consultancy uses design methods extensively in its ‘Design Kit’ and ‘Method Cards’. Increasingly, the intersections of design methods with business and government through the application of design thinking have been championed by numerous consultancies within the design profession. Wide influence has also come through Christopher Alexander’s pattern language method, originally developed for architectural and urban design, which has been adopted in software design, interaction design, pedagogical design and other domains.

Acquisition Initiation Within the Information Services Procurement Library (ISPL)


Acquisition Initiation is the initial process within the Information Services Procurement Library (ISPL) and is executed by a customer organization intending to procure information services. The process is composed of two main activities: the making of the acquisition goal definition and the making of the acquisition plan. During the acquisition initiation, an iterative process arises in which questions about the goal of the acquisition are usually asked. In response to these questions the Library provides details of the requirements, covering areas such as cost, feasibility and timelines. An example of such requirements is the “planning of the acquisition”, a component that may also lead to more questions about the acquisition goal (thus, it is reasonable to state that a relationship exists between the acquisition goal and the acquisition planning).

ISPL captures the acquisition initiation stages in a process-data model, which shows both the process and the data ensuing from it; the concepts and data found in the model are explained in separate tables. A textual, and more thorough, explanation of the activities and concepts that make up the Acquisition Initiation process can be found in the remainder of this article.

The draft acquisition goal is the description of the global goal that is to be achieved by starting procurement. It is inspired by the business needs or business strategy. It is similar to, though simpler than, the concept of the project brief in PRINCE2. It is the first draft of the acquisition goal, containing at least a (short) problem definition and a (short) goal definition. The draft acquisition goal is meant to give the main reasons and the main goals to those people who will have to make the decision whether or not to actually start the acquisition. It may thus also encapsulate items like a cost-benefit analysis, stakes & stakeholders, and other items that will be further refined during the actual Acquisition Initiation.

It is, in this sense, not an activity of the acquisition initiation process, but it is the input for starting the process.

The problem definition is a statement about the problem that could be resolved by starting the acquisition process.

The goal definition is a statement about the goal that will have to be reached when the acquisition is executed.

The draft acquisition goal can be made as short or as long as the organization needs, as long as it serves as a good basis for the initial decision to start an acquisition process.

When the decision is made to start an acquisition process, the first activity of the acquisition initiation is to define the acquisition goal.

The acquisition goal is the whole of defined systems and services requirements, attributed by costs & benefits and stakes & stakeholders, with a defined target domain serving as the boundary. The acquisition goal is the basis of the acquisition process and for formulating the acquisition strategy during the acquisition planning. Input can be the draft acquisition goal and the business needs (of the target domain). The business needs can be derived from strategic business plans or information system plans.

The target domain is that part of the customer organization that is involved in, or influenced by an information service. It is defined in terms of business processes, business actors, business information, business technology and the relations between these four aspects. The target domain is identified to ensure a fitting acquisition goal with fitting requirements for the systems and services to be acquired.

A limited description of a target domain can thus be, for example: the part of the customer organization that uses the software program MarketingUnlimited (fictional), involving the marketing process, several types of information related to the marketing department of the customer organization, the employees working in the marketing department, and the production platform for MarketingUnlimited consisting of an application server, several workstations, and tools related to MarketingUnlimited.

The acquisition goal is then described by system descriptions and service descriptions.

Other information on how ISPL defines the deliverables of the acquisition can be found in the general ISPL entry.

Cost-benefit analysis concerns the analysis of the expected costs and benefits of the acquisition, in order to successfully evaluate its investment issues.

It is important to identify all actors affected, and in what way they are affected (their stake), because a negative attitude of actors may negatively influence the success of the acquisition. ISPL proposes to perform a SWOT analysis (Strengths, Weaknesses, Opportunities and Threats) for the actors involved to properly and thoroughly identify the stakes of the actors. The results of this analysis can serve as input for the situation & risk analysis.

The information contained within the Acquisition Goal is used to produce the acquisition plan. The acquisition plan is the master plan of the entire acquisition. In this plan, delivery scenarios are determined and the situation and risks are evaluated. Based on the scenarios, situations, and risks, a strategy is formed to manage the acquisition. Furthermore, the acquisition plan will hold the main decision points, in which decisions are made about the deliverables within the acquisition and the acquisition organization (similar to a project organization) is formed.

On the basis of the acquisition goal, which contains among others the systems and services requirements and descriptions, scenarios can be formed for the deliveries of the information services to be acquired. Multiple scenarios may be built, which will then be evaluated and used in the design of the acquisition strategy. The scenarios are built with the priorities and interdependence of the deliverables in mind.

Priorities are related to the importance of each delivery: which delivery has time-preference over another.

Some deliveries may be dependent on each other, requiring one service to be delivered before the next one can be.

Example:

During a project to make ISPL more specific for generic product software implementations, a number of scenarios were made. One of these was a scenario for a one-shot implementation of software.

In this example, the general deliverables within an implementation of software are mentioned, executed and delivered differently as time goes by, mainly because of dependencies between deliverables. For instance, the actual go for the delivery of the software and the other related deliverables are dependent on the outcome of the deliverable “Proeftuin”. “Proeftuin” is an agreed period in which the software is extensively tested by the customer organization, provided and supported by the supplier. The customer organization can get a feel for the use of the software in the target domain, to guarantee that the software is a “fit” within the organization.

ISPL adheres to a situational approach to manage an acquisition. The situational approach takes into account properties of the problem situation, which are called situational factors. ISPL provides a number of these situational factors. Some of these situational factors affect events that have adverse consequences: the risks. Thus, the situational factors and risks within ISPL are related to each other. This makes it possible to provide a number of heuristics on which factors have an influence on certain risks.
First the situation is analysed, then ISPL proposes a number of risks which may arise from the situation at hand. With this information, an acquisition strategy can be formed to mitigate both the situation and risks in a number of areas.
Other information on the situation & risk analysis of ISPL can be found in the ISPL entry.

The acquisition strategy within ISPL acts as the design of a risk management strategy. The risk management strategy provides choices for options that reduce the probability and/or effect of risks. ISPL provides several options, divided over four classes.

Options are chosen based on their efficiency, costs and the related delay for delivery.

Based upon all the previous activities, the decision points planning is made. This is a time-set planning of decision points.

Each decision point is described in a decision point description.

In a decision points planning, the decision points are planned through time. One such planning was produced during a study to make ISPL more specific for product software implementations; it was aimed at a part of the process of a software implementation.

A shortened example of a decision point description was produced in the same study.

Finally, the acquisition organization is set up; it is similar to a project organization, although the acquisition organization is more focused on the (legal) relationship that it has to maintain with the supplier organization.

Design for Six Sigma

Design for Six Sigma (DFSS) is a business process management method related to traditional Six Sigma. It is used in many industries, like finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. DFSS is relevant for relatively simple items/systems. It is used for product or process design, in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.

There are different options for the implementation of DFSS. Unlike Six Sigma, which is commonly driven via DMAIC (Define – Measure – Analyze – Improve – Control) projects, DFSS has spawned a number of stepwise processes, all in the style of the DMAIC procedure.

DMADV, define – measure – analyze – design – verify, is sometimes synonymously referred to as DFSS, although alternatives such as IDOV (Identify, Design, Optimize, Verify) are also used. The traditional DMAIC Six Sigma process, as it is usually practiced, which is focused on evolutionary and continuous improvement of manufacturing or service process development, usually occurs after initial system or product design and development have been largely completed. DMAIC Six Sigma as practiced is usually consumed with solving existing manufacturing or service process problems and with removal of the defects and variation associated with defects. It is clear that manufacturing variations may impact product reliability, so a clear link should exist between reliability engineering and Six Sigma (quality). In contrast, DFSS (or DMADV and IDOV) strives to generate a new process where none existed, or where an existing process is deemed to be inadequate and in need of replacement. DFSS aims to create a process with the end in mind of optimally building the efficiencies of Six Sigma methodology into the process before implementation; traditional Six Sigma seeks continuous improvement after a process already exists.

DFSS seeks to avoid manufacturing/service process problems by using advanced techniques to avoid process problems at the outset (e.g., fire prevention). When combined, these methods obtain the proper needs of the customer, and derive engineering system parameter requirements that increase product and service effectiveness in the eyes of the customer and all other people. This yields products and services that provide great customer satisfaction and increased market share. These techniques also include tools and processes to predict, model and simulate the product delivery system (the processes/tools, personnel and organization, training, facilities, and logistics to produce the product/service). In this way, DFSS is closely related to operations research (e.g., solving the knapsack problem) and workflow balancing. DFSS is largely a design activity requiring tools including: quality function deployment (QFD), axiomatic design, TRIZ, Design for X, design of experiments (DOE), Taguchi methods, tolerance design, robustification and response surface methodology for single or multiple response optimization. While these tools are sometimes used in the classic DMAIC Six Sigma process, they are uniquely used by DFSS to analyze new and unprecedented products and processes. It is a concurrent analysis directed at manufacturing optimization related to the design.

Response surface methodology and other DFSS tools use statistical (often empirical) models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and of the inadequacies of the model.
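A minimal single-factor illustration of the idea, with synthetic data, fits a second-order model by least squares and reads off the stationary point as the estimated optimum; real response-surface work uses designed experiments over several factors, but the caveat above applies in exactly the same way.

```python
import numpy as np

# Synthetic observations of a response y at settings x (hypothetical data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.8, 6.2, 5.9, 4.0])

# Fit y = b2*x^2 + b1*x + b0 by least squares (coefficients high-to-low).
b2, b1, b0 = np.polyfit(x, y, deg=2)

# Stationary point of the fitted quadratic: the estimated optimum setting.
x_opt = -b1 / (2 * b2)
print("estimated optimum setting:", round(float(x_opt), 2))
```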

Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, George Box’s original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years.

Proponents of DMAIC, DDICA (Design Develop Initialize Control and Allocate) and Lean techniques might claim that DFSS falls under the general rubric of Six Sigma or Lean Six Sigma (LSS). Both methodologies focus on meeting customer needs and business priorities as the starting-point for analysis.

The tools used for DFSS techniques often vary widely from those used for DMAIC Six Sigma. In particular, DMAIC and DDICA practitioners often use new or existing mechanical drawings and manufacturing process instructions as the originating information to perform their analysis, while DFSS practitioners often use simulations and parametric system design/analysis tools to predict both cost and performance of candidate system architectures. While the two processes can be claimed to be similar, in practice the working medium differs enough that DFSS requires different tool sets in order to perform its design tasks. DMAIC, IDOV and Six Sigma may still be used during depth-first plunges into the system architecture analysis and for “back end” Six Sigma processes; DFSS provides system design processes used in front-end complex system designs. Combined back-end and front-end approaches are also used; done well, this yields on the order of 3.4 defects per million design opportunities.

Traditional Six Sigma methodology, DMAIC, has become a standard process optimization tool for the chemical process industries. However, it has become clear that the promise of Six Sigma, specifically 3.4 defects per million opportunities (DPMO), is simply unachievable after the fact. Consequently, there has been a growing movement to implement Six Sigma design, usually called Design for Six Sigma (DFSS), and DDICA tools. This methodology begins with defining customer needs and leads to the development of robust processes to deliver those needs.
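The arithmetic behind those DPMO figures is simple, and a short sketch makes the 3.4 target concrete. The defect counts below are hypothetical, and the sigma-level conversion assumes the conventional 1.5-sigma long-term shift (scipy is assumed to be available).

```python
from scipy.stats import norm

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Short-term sigma level for a long-term DPMO, using the 1.5-sigma shift."""
    return norm.ppf(1 - dpmo_value / 1_000_000) + shift

print(dpmo(defects=7, units=500, opportunities_per_unit=4))  # 3500.0
print(round(sigma_level(3.4), 2))  # about 6.0
```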

Design for Six Sigma emerged from the Six Sigma and the Define-Measure-Analyze-Improve-Control (DMAIC) quality methodologies, which were originally developed by Motorola to systematically improve processes by eliminating defects. Unlike its traditional Six Sigma/DMAIC predecessors, which are usually focused on solving existing manufacturing issues (i.e., “fire fighting”), DFSS aims at avoiding manufacturing problems by taking a more proactive approach to problem solving and engaging the company efforts at an early stage to reduce problems that could occur (i.e., “fire prevention”). The primary goal of DFSS is to achieve a significant reduction in the number of nonconforming units and production variation. It starts from an understanding of the customer expectations, needs and Critical to Quality issues (CTQs) before a design can be completed. Typically in a DFSS program, only a small portion of the CTQs are reliability-related (CTR), and therefore, reliability does not get center stage attention in DFSS. DFSS rarely looks at the long-term (after manufacturing) issues that might arise in the product (e.g. complex fatigue issues or electrical wear-out, chemical issues, cascade effects of failures, system level interactions).

Arguments about what makes DFSS different from Six Sigma demonstrate the similarities between DFSS and other established engineering practices such as probabilistic design and design for quality. In general Six Sigma with its DMAIC roadmap focuses on improvement of an existing process or processes. DFSS focuses on the creation of new value with inputs from customers, suppliers and business needs. While traditional Six Sigma may also use those inputs, the focus is again on improvement and not design of some new product or system. It also shows the engineering background of DFSS. However, like other methods developed in engineering, there is no theoretical reason why DFSS cannot be used in areas outside of engineering.

Historically, although the first successful Design for Six Sigma projects in 1989 and 1991 predate establishment of the DMAIC process improvement process, Design for Six Sigma (DFSS) is accepted in part because Six Sigma organisations found that they could not optimise products past three or four Sigma without fundamentally redesigning the product, and because improving a process or product after launch is considered less efficient and effective than designing in quality. ‘Six Sigma’ levels of performance have to be ‘built-in’.

DFSS for software is essentially a non-superficial modification of “classical DFSS”, since the character and nature of software differ from those of other fields of engineering. The methodology describes the detailed process for successfully applying DFSS methods and tools throughout the software product design, covering the overall software development life cycle: requirements, architecture, design, implementation, integration, optimization, verification and validation (RADIOV). The methodology explains how to build predictive statistical models for software reliability and robustness and shows how simulation and analysis techniques can be combined with structural design and architecture methods to effectively produce software and information systems at Six Sigma levels.

DFSS in software acts as a glue to blend the classical modelling techniques of software engineering such as object-oriented design or Evolutionary Rapid Development with statistical, predictive models and simulation techniques. The methodology provides Software Engineers with practical tools for measuring and predicting the quality attributes of the software product and also enables them to include software in system reliability models.

Although many tools used in DFSS consulting, such as response surface methodology, transfer functions via linear and non-linear modeling, axiomatic design, and simulation, have their origin in inferential statistics, statistical modeling may overlap with data analytics and mining.

However, despite the fact that DFSS as a methodology has been successfully used as an end-to-end technical project framework for analytics and mining projects, this has been observed by domain experts to be somewhat similar to the lines of CRISP-DM.

DFSS is claimed to be better suited for encapsulating and effectively handling a higher number of uncertainties, including missing and uncertain data, both in terms of acuteness of definition and their absolute total numbers with respect to analytics and data-mining tasks. Six Sigma approaches to data mining are popularly known as DFSS over CRISP (CRISP-DM referring to the data-mining application framework methodology of SPSS).

With DFSS, data-mining projects have been observed to have a considerably shortened development life cycle. This is typically achieved by conducting data analysis against pre-designed template match tests via a techno-functional approach, using multilevel quality function deployment on the data set.

Practitioners claim that progressively complex KDD templates are created by multiple DOE runs on simulated complex multivariate data, and that the templates, along with logs, are then extensively documented via a decision-tree-based algorithm.

DFSS uses quality function deployment and SIPOC for feature engineering of known independent variables, thereby aiding in the techno-functional computation of derived attributes.

Once the predictive model has been computed, DFSS studies can also be used to provide stronger probabilistic estimations of predictive-model rank in a real-world scenario.

The DFSS framework has been successfully applied to predictive analytics in the field of HR analytics. This application field has traditionally been considered very challenging due to the peculiar complexities of predicting human behavior.