Project Workforce Management (PWM)

Project workforce management is the practice of coordinating all logistic elements of a project through a single software application (or workflow engine). This includes planning and tracking of schedules and milestones, cost and revenue, and resource allocation, as well as overall management of these project elements. Efficiency is improved by eliminating manual processes, such as spreadsheet tracking, used to monitor project progress. It also allows for at-a-glance status updates and ideally integrates with existing legacy applications in order to unify ongoing projects, enterprise resource planning (ERP) and broader organisational goals. A project has many logistic elements: different team members are responsible for managing each of them, and the organisation may have mechanisms of its own for managing some logistic areas as well.

Coordinating these various components of project management, workforce management and financials through a single solution simplifies the process of configuring and changing project and workforce details.

A project workforce management system defines project tasks, project positions, and assigns personnel to the project positions. The project tasks and positions are correlated to assign a responsible project position or even multiple positions to complete each project task. Because each project position may be assigned to a specific person, the qualifications and availabilities of that person can be taken into account when determining the assignment. By associating project tasks and project positions, a manager can better control the assignment of the workforce and complete the project more efficiently.
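
To make this correlation concrete, the sketch below (hypothetical names and a deliberately simple matching rule, not any particular product's data model) shows one way tasks, positions and personnel might be represented, with assignments taking qualifications and availability into account:

```python
# Minimal sketch of correlating project tasks, project positions and personnel.
# All class names, fields and the matching rule are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    qualifications: set[str]
    available: bool = True

@dataclass
class ProjectPosition:
    role: str                                   # e.g. "consultant", "tester"
    assignee: Person | None = None

@dataclass
class ProjectTask:
    description: str
    required_qualifications: set[str]
    responsible_positions: list[ProjectPosition] = field(default_factory=list)

def staff_task(task: ProjectTask, position: ProjectPosition, candidates: list[Person]) -> bool:
    """Assign the first available, suitably qualified person to the position,
    then make that position responsible for the task."""
    for person in candidates:
        if person.available and task.required_qualifications <= person.qualifications:
            position.assignee = person
            person.available = False
            task.responsible_positions.append(position)
            return True
    return False

people = [Person("Avery", {"testing"}), Person("Sam", {"testing", "customising"})]
task = ProjectTask("Acceptance testing", {"testing"})
staff_task(task, ProjectPosition("tester"), people)
print(task.responsible_positions[0].assignee.name)      # Avery
```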

Project workforce management, then, is about managing all the logistic aspects of a project or an organisation through a software application. This software usually has a workflow engine defined within it, so all the logistic processes take place in the workflow engine.

Project workforce management systems and methods are, in this sense, software-based systems and methods for combined project and workforce management.

Because the process is software-driven, most project workforce management tasks can be automated, leaving few manual tasks for project managers and making project tracking considerably more efficient. In addition to its various tracking mechanisms, project workforce management software also offers a dashboard for the project team, which gives the team an at-a-glance view of the overall progress of the project elements.

In most cases, project workforce management software can work with existing legacy software systems such as enterprise resource planning (ERP) systems. This ease of integration allows the organisation to use a combination of software systems for management purposes.

Good project management is an important factor in the success of a project. A project may be thought of as a collection of activities and tasks designed to achieve a specific goal of the organisation, with specific performance or quality requirements, while meeting applicable time and cost constraints. Project management refers to managing the activities that lead to the successful completion of a project, and it focuses on finite deadlines and objectives. A number of tools may be used to assist with this as well as with assessment.

Project management may be used when planning personnel resources and capabilities. The project may be linked to the objects in a professional services life cycle and may accompany those objects from the opportunity through quotation, contract, time and expense recording, billing and period-end activities to the final reporting. Naturally, the project becomes more detailed as it moves through this cycle.

For any given project, several project tasks should be defined. Project tasks describe the activities and phases that have to be performed in the project, such as writing of layouts, customising and testing. What is needed is a system that allows project positions to be correlated with project tasks. Project positions describe project roles such as project manager, consultant or tester, and are typically arranged linearly within the project. By correlating project tasks with project positions, the qualifications and availability of personnel assigned to the project positions may be considered.

Good project management should:

The regular and most common types of tasks handled by project workforce management software or a similar workflow engine are:

Regularly monitoring a project’s schedule performance can provide early indications of possible activity-coordination problems, resource conflicts and potential cost overruns. To monitor schedule performance, information must be collected and evaluated on a regular basis so that the picture of the project remains accurate.

The project schedule outlines the intended result of the project and what is required to bring it to completion. The schedule needs to include all the resources involved, together with cost and time constraints, through a work breakdown structure (WBS). The WBS outlines all the tasks and breaks them down into specific deliverables.
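
As a toy illustration (with made-up names and figures, and a deliberately naive sequential roll-up), a WBS can be modelled as a tree in which cost and duration are aggregated from the specific deliverables at the leaves:

```python
# Hypothetical sketch of a work breakdown structure (WBS) as a tree of elements.
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    name: str
    cost: float = 0.0              # cost of this element itself
    duration_days: int = 0         # duration of this element itself
    children: list["WBSElement"] = field(default_factory=list)

    def total_cost(self) -> float:
        return self.cost + sum(child.total_cost() for child in self.children)

    def total_duration(self) -> int:
        # naive roll-up that assumes child deliverables are done sequentially
        return self.duration_days + sum(child.total_duration() for child in self.children)

project = WBSElement("Website relaunch", children=[
    WBSElement("Design", cost=5000, duration_days=10),
    WBSElement("Implementation", children=[
        WBSElement("Backend", cost=12000, duration_days=20),
        WBSElement("Frontend", cost=9000, duration_days=15),
    ]),
])
print(project.total_cost(), project.total_duration())   # 26000.0 45
```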

The importance of tracking actual costs and resource usage depends on the project situation, but in general it is an essential aspect of the project control function.

Organisational profitability is directly connected to project management efficiency and optimal resource utilisation. Organisations that struggle with either or both of these core competencies typically experience cost overruns, schedule delays and unhappy customers.

The focus for project management is the analysis of project performance to determine whether a change is needed in the plan for the remaining project activities to achieve the project goals.

Risk identification consists of determining which risks are likely to affect the project and documenting the characteristics of each.

Project communication management is about how communication is carried out during the course of the project.

It is of no use completing a project within the set time and budget if the final product is of poor quality. The project manager has to ensure that the final product meets the quality expectations of the stakeholders. This is done by good:

There are three main differences between Project Workforce Management and traditional project management and workforce management disciplines and solutions:

All project and workforce processes are designed, controlled and audited using a built-in graphical workflow engine. Users can design, control and audit the different processes involved in the project, and the graphical representation of the workflow gives them a clear picture of how each process runs.

Project Workforce Management provides organization and work breakdown structures to create, manage and report on functional and approval hierarchies, and to track information at any level of detail. Users can create, manage, edit and report on work breakdown structures, which have different levels of abstraction so that information can be tracked at any level. Project workforce management usually includes approval hierarchies: each workflow created passes through several approvals before it becomes an organisational or project standard. This helps the organisation reduce process inefficiencies, because each workflow is audited by many stakeholders.

Unlike traditional disconnected project, workforce and billing management systems that are solely focused on tracking IT projects, internal workforce costs or billable projects, Project Workforce Management is designed to unify the coordination of all project and workforce processes, whether internal, shared (IT) or billable.

Project workforce management is one of the best methods for managing the different aspects of a project. The more complex the project, the more effective project workforce management tends to be.

For simple projects or small organisations, project workforce management may not add much value, because such organisations and projects carry little overhead in managing their processes; for more complex projects and larger organisations, however, managing the project workflow makes a significant difference. There are many project workforce management tools available, but many organisations prefer to adopt solutions of their own.

Such organisations therefore engage software development companies to build custom project workforce management systems for them. This has often proved to be the most suitable way for a company to acquire the project workforce management system that best fits it.

Project Network

A project network is a graph (weighted directed graph) depicting the sequence in which a project’s terminal elements are to be completed by showing terminal elements and their dependencies. It is always drawn from left to right to reflect project chronology.

The work breakdown structure or the product breakdown structure show the “part-whole” relations. In contrast, the project network shows the “before-after” relations.

The most popular form of project network is activity on node (AON); the other is activity on arrow (AOA).

The condition for a valid project network is that it doesn’t contain any circular references.

Project dependencies can also be depicted by a predecessor table. Although such a form is very inconvenient for human analysis, project management software often offers such a view for data entry.
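
As a small, hypothetical illustration, a predecessor table can be represented directly as a mapping from each activity to its predecessors, and the validity condition (no circular references) checked by attempting a topological ordering:

```python
# Build a project network from a predecessor table and check that it is valid,
# i.e. contains no circular references. Activity names are illustrative only.
from graphlib import TopologicalSorter, CycleError

# predecessor table: activity -> set of activities that must finish beforehand
predecessors = {
    "A": set(),
    "B": {"A"},
    "C": {"A"},
    "D": {"B", "C"},
}

try:
    order = list(TopologicalSorter(predecessors).static_order())
    print("Valid project network; one feasible order:", order)
except CycleError as exc:
    print("Invalid project network; circular reference detected:", exc)
```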

An alternative way of showing and analyzing the sequence of project work is the design structure matrix or dependency structure matrix.

Design Methods

Design methods are procedures, techniques, aids, or tools for designing. They offer a number of different kinds of activities that a designer might use within an overall design process. Conventional procedures of design, such as drawing, can be regarded as design methods, but since the 1950s new procedures have been developed that are more usually grouped together under the name of “design methods”. What design methods have in common is that they “are attempts to make public the hitherto private thinking of designers; to externalise the design process”.

Design methodology is the broader study of method in design: the study of the principles, practices and procedures of designing.

Design methods originated in new approaches to problem solving developed in the mid-20th century, and also in response to industrialisation and mass production, which changed the nature of designing. A “Conference on Systematic and Intuitive Methods in Engineering, Industrial Design, Architecture and Communications”, held in London in 1962, is regarded as a key event marking the beginning of what became known within design studies as the “design methods movement”, leading to the founding of the Design Research Society and influencing design education and practice. Leading figures in this movement in the UK were J. Christopher Jones at the University of Manchester and L. Bruce Archer at the Royal College of Art.

The movement developed through further conferences on new design methods in the UK and USA in the 1960s. The first books on rational design methods, and on creative methods also appeared in this period.

New approaches to design were developing at the same time in Germany, notably at the Ulm School of Design (Hochschule für Gestaltung – HfG Ulm) (1953–1968) under the leadership of Tomás Maldonado. Design teaching at Ulm integrated design with science (including social sciences) and introduced new fields of study such as cybernetics, systems theory and semiotics into design education. Bruce Archer also taught at Ulm, and another influential teacher was Horst Rittel. In 1963 Rittel moved to the School of Architecture at the University of California, Berkeley, where he helped found the Design Methods Group, a society focused on developing and promoting new methods especially in architecture and planning.

At the end of the 1960s two influential, but quite different works were published: Herbert A. Simon’s The Sciences of the Artificial and J. Christopher Jones’s Design Methods. Simon proposed the “science of design” as “a body of intellectually tough, analytic, partly formalizable, partly empirical, teachable doctrine about the design process”, whereas Jones catalogued a variety of approaches to design, both rational and creative, within a context of a broad, futures creating, systems view of design.

The 1970s saw some reaction against the rationality of design methods, notably from two of its pioneers, Christopher Alexander and J. Christopher Jones. Fundamental issues were also raised by Rittel, who characterised design and planning problems as wicked problems, un-amenable to the techniques of science and engineering, which deal with “tame” problems. The criticisms turned some in the movement away from rationalised approaches to design problem solving and towards “argumentative”, participatory processes in which designers worked in partnership with the problem stakeholders (clients, customers, users, the community). This led to participatory design, user centered design and the role of design thinking as a creative process in problem solving and innovation.

However, interest in systematic and rational design methods continued to develop strongly in engineering design during the 1980s; for example, through the Conference on Engineering Design series of The Design Society and the work of the Verein Deutscher Ingenieure association in Germany, and also in Japan, where the Japanese Society for the Science of Design had been established as early as 1954. Books on systematic engineering design methods were published in Germany and the UK. In the USA the American Society of Mechanical Engineers Design Engineering Division began a stream on design theory and methodology within its annual conferences. The interest in systematic, rational approaches to design has led to design science and design science methodology in engineering and computer science.

The development of design methods has been closely associated with prescriptions for a systematic process of designing. These process models usually comprise a number of phases or stages, beginning with a statement or recognition of a problem or a need for a new design and culminating in a finalised solution proposal. In his ‘Systematic Method for Designers’, L. Bruce Archer produced a very elaborate, 229-step model of a systematic design process for industrial design, but also a summary model consisting of three phases: Analytical phase (programming and data collection, analysis), Creative phase (synthesis, development), and Executive phase (communication). The UK’s Design Council models the creative design process in four phases: Discover (insight into the problem), Define (the area to focus upon), Develop (potential solutions), Deliver (solutions that work). A systematic model for engineering design by Pahl and Beitz has phases of Clarification of the task, Conceptual design, Embodiment design, and Detail design. A less prescriptive approach to designing a basic design process for oneself has been outlined by J. Christopher Jones.

In the engineering design process systematic models tend to be linear, in sequential steps, but acknowledging the necessity of iteration. In architectural design, process models tend to be cyclical and spiral, with iteration as essential to progression towards a final design. In industrial and product design, process models tend to comprise a sequence of stages of divergent and convergent thinking. The Dubberly Design Office has compiled examples of more than 80 design process models, but it is not an exhaustive list.

Within these process models there are numerous design methods that can be applied. In his book ‘Design Methods’, J. C. Jones grouped 26 methods according to their purposes within a design process: Methods of exploring design situations (e.g. Stating Objectives, Investigating User Behaviour, Interviewing Users), Methods of searching for ideas (e.g. Brainstorming, Synectics, Morphological Charts), Methods of exploring problem structure (e.g. Interaction Matrix, Functional Innovation, Information Sorting), and Methods of evaluation (e.g. Checklists, Ranking and Weighting).

Nigel Cross outlined eight stages in a process of engineering product design, each with an associated method: Identifying Opportunities – User Scenarios; Clarifying Objectives – Objectives Tree; Establishing Functions – Function Analysis; Setting Requirements – Performance Specification; Determining Characteristics – Quality Function Deployment; Generating Alternatives – Morphological Chart; Evaluating Alternatives – Weighted Objectives; Improving Details – Value Engineering.
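
As a simple illustration of one such method, a morphological chart lists possible means for each product function, and candidate design alternatives are the combinations of one means per function. The tiny sketch below (with invented functions and means) enumerates those combinations:

```python
# Hypothetical morphological chart: each function maps to its possible means.
from itertools import product

morphological_chart = {
    "power source": ["battery", "mains", "solar"],
    "user input":   ["buttons", "touchscreen"],
    "enclosure":    ["plastic", "aluminium"],
}

# A candidate design alternative picks one means for every function.
alternatives = [dict(zip(morphological_chart, combination))
                for combination in product(*morphological_chart.values())]
print(len(alternatives))    # 3 * 2 * 2 = 12 candidate alternatives
print(alternatives[0])      # {'power source': 'battery', 'user input': 'buttons', 'enclosure': 'plastic'}
```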

Many design methods still currently in use originated in the design methods movement of the 1960s and 70s, adapted to modern design practices. Recent developments have seen the introduction of more qualitative techniques, including ethnographic methods such as cultural probes and situated methods.

The design methods movement had a profound influence on the development of academic interest in design and designing and the emergence of design research and design studies. Arising directly from the 1962 Conference on Design Methods, the Design Research Society (DRS) was founded in the UK in 1966. The purpose of the Society is to promote “the study of and research into the process of designing in all its many fields” and is an interdisciplinary group with many professions represented.

In the USA, a similar Design Methods Group (DMG) was also established in 1966 by Horst Rittel and others at the University of California, Berkeley. The DMG held a conference at MIT in 1968 with a focus on environmental design and planning, and that led to the foundation of the Environmental Design Research Association (EDRA), which held its first conference in 1969. A group interested in design methods and theory in architecture and engineering formed at MIT in the early 1980s, including Donald Schön, who was studying the working practices of architects, engineers and other professionals and developing his theory of reflective practice. In 1984 the National Science Foundation created a Design Theory and Methodology Program to promote methods and process research in engineering design.

Meanwhile in Europe, Vladimir Hubka established the Workshop Design-Konstruktion (WDK), which led to a series of International Conferences on Engineering Design (ICED) beginning in 1981 and later became the Design Society.

Academic research journals in design also began publication. DRS initiated Design Studies in 1979, Design Issues appeared in 1984, and Research in Engineering Design in 1989.

Several pioneers of design methods developed their work in association with industry. The Ulm school established a significant partnership with the German consumer products company Braun through their designer Dieter Rams. J. Christopher Jones began his approach to systematic design as an ergonomist at the electrical engineering company AEI. L. Bruce Archer developed his systematic approach in projects for medical equipment for the UK National Health Service.

In the USA, designer Henry Dreyfuss had a profound impact on the practice of industrial design by developing systematic processes and promoting the use of anthropometrics, ergonomics and human factors in design, including through his 1955 book ‘Designing for People’. Another successful designer, Jay Doblin, was also influential on the theory and practice of design as a systematic process.

Much of current design practice has been influenced and guided by design methods. For example, the influential IDEO consultancy uses design methods extensively in its ‘Design Kit’ and ‘Method Cards’. Increasingly, the intersections of design methods with business and government through the application of design thinking have been championed by numerous consultancies within the design profession. Wide influence has also come through Christopher Alexander’s pattern language method, originally developed for architectural and urban design, which has been adopted in software design, interaction design, pedagogical design and other domains.

Acquisition Initiation Within the Information Services Procurement Library (ISPL)

Acquisition Initiation is the initial process within the Information Services Procurement Library (ISPL) and is executed by a customer organization intending to procure information services. The process is composed of two main activities: the making of the acquisition goal definition and the making of the acquisition plan. During acquisition initiation, an iterative process arises in which questions about the goal of the acquisition are usually asked. In response to these questions, the library provides details of the requirements, covering areas such as cost, feasibility and timelines. An example of such a requirement is the planning of the acquisition, a component that may itself lead to further questions about the acquisition goal (so it is reasonable to state that a relationship exists between the acquisition goal and the acquisition planning).

The process-data model shown in the following section displays the acquisition initiation stages. It shows both the process and the data ensuing from the process, and parts of the image will also be used as references in the body of this article. The concepts and data found in the model are explained in separate tables which can be found in the section immediately following the model. A textual, and more thorough, explanation of the activities and concepts that make up the Acquisition Initiation process can be found in the remainder of this article.

The draft acquisition goal is the description of the global goal that is to be achieved by starting procurement. It is inspired by the business needs or business strategy, and is similar to, though simpler than, the concept of the project brief in PRINCE2. It is the first draft of the acquisition goal, containing at least a (short) problem definition and a (short) goal definition. The draft acquisition goal is meant to give the main reasons and the main goals to the people who will have to decide whether or not to actually start the acquisition. It may thus also encapsulate items such as a cost-benefit analysis, stakes & stakeholders and other items that will be further refined during the actual Acquisition Initiation.

It is, in this sense, not an activity of the acquisition initiation process, but it is the input for starting the process.

The problem definition is a statement about the problem that could be resolved by starting the acquisition process.

The goal definition is a statement about the goal that will have to be reached when the acquisition is executed:

The draft acquisition goal can be made as short or as long as the organization needs, as long as it serves as a good basis for the initial decision to start an acquisition process.

When the decision is made to start an acquisition process, the first activity of the acquisition initiation is to define the acquisition goal.

The acquisition goal is the whole of the defined systems and services requirements, together with their costs & benefits and stakes & stakeholders, with a defined target domain serving as the boundary. The acquisition goal is the basis of the acquisition process and of the acquisition strategy formulated during the acquisition planning. Input can be the draft acquisition goal and the business needs (of the target domain); the business needs can be derived from strategic business plans or information system plans.

The target domain is that part of the customer organization that is involved in, or influenced by an information service. It is defined in terms of business processes, business actors, business information, business technology and the relations between these four aspects. The target domain is identified to ensure a fitting acquisition goal with fitting requirements for the systems and services to be acquired.

A limited description of a target domain could thus be, for example: the part of the customer organization that uses the software program MarketingUnlimited (fictional), involving the marketing process, several types of information related to the marketing department of the customer organization, the employees working in the marketing department, and the production platform for MarketingUnlimited consisting of an application server, several workstations, and tools related to MarketingUnlimited.

The acquisition goal is then described by system descriptions and service descriptions:

Other information on how ISPL defines the deliverables of the acquisition can be found in the general ISPL entry.

Cost-benefit analysis concerns the analysis of:

to successfully evaluate the investment issues of the acquisition.

It is important to identify all actors affected, and in what way they are affected (their stake), because a negative attitude of actors may negatively influence the success of the acquisition. ISPL proposes to perform a SWOT analysis (Strengths, Weaknesses, Opportunities and Threats) for the actors involved to properly and thoroughly identify the stakes of the actors. The results of this analysis can serve as input for the situation & risk analysis.

The information contained within the acquisition goal is used to produce the acquisition plan. The acquisition plan is the master plan of the entire acquisition: delivery scenarios are determined, the situation and risks are evaluated, and based on the scenarios, situation and risks a strategy is formed to manage the acquisition. Furthermore, the acquisition plan holds the main decision points, at which decisions are made about the deliverables within the acquisition, and the acquisition organization (similar to a project organization) is formed.

On the basis of the acquisition goal, which contains among other things the systems and services requirements and descriptions, scenarios can be formed for the deliveries of the information services to be acquired. Multiple scenarios may be built, which are then evaluated and used in the design of the acquisition strategy. The scenarios are built with the priorities and interdependence of the deliverables in mind.

Priorities are related to the importance of each delivery: which delivery has time-preference over another.

Some deliveries may be dependent on each other, requiring one service to be delivered before the next one can be.

Example:

During a project to make ISPL more specific for generic product software implementations, a number of scenarios were made. One of these described the approach for a one-shot implementation and is used as the example here.

In this example, the general deliverables within an implementation of software are mentioned, executed and delivered differently as time goes by, mainly because of dependencies between deliverables. For instance, the actual go for the delivery of the software and the other related deliverables are dependent on the outcome of the deliverable “Proeftuin”. “Proeftuin” is an agreed period in which the software is extensively tested by the customer organization, provided and supported by the supplier. The customer organization can get a feel for the use of the software in the target domain, to guarantee that the software is a “fit” within the organization.

ISPL adheres to a situational approach to manage an acquisition. The situational approach takes into account properties of the problem situation, which are called situational factors. ISPL provides a number of these situational factors. Some of these situational factors affect events that have adverse consequences: the risks. Thus, the situational factors and risks within ISPL are related to each other. This makes it possible to provide a number of heuristics on which factors have an influence on certain risks.
First the situation is analysed; then ISPL proposes a number of risks that may arise from the situation at hand. With this information, an acquisition strategy can be formed to address the situation and mitigate the risks in a number of areas.
Other information on the situation & risk analysis of ISPL can be found in the ISPL entry.

The acquisition strategy within ISPL acts as the design of a risk management strategy. The risk management strategy provides choices for options that reduce the probability and/or effect of risks. ISPL provides several options, divided over four classes:

Options are chosen based on their efficiency, costs and the related delay for delivery.
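
A minimal sketch of such a choice might score each candidate option on those three criteria. The options, figures and weights below are entirely illustrative and are not taken from ISPL:

```python
# Hypothetical risk-mitigation options scored on efficiency, cost and delay.
options = [
    {"name": "prototype first",       "efficiency": 0.8, "cost": 20000, "delay_weeks": 4},
    {"name": "fixed-price contract",  "efficiency": 0.5, "cost": 5000,  "delay_weeks": 0},
    {"name": "extra acceptance test", "efficiency": 0.6, "cost": 8000,  "delay_weeks": 2},
]

def score(option, cost_budget=25_000, max_delay_weeks=6):
    """Higher is better: reward risk-reduction efficiency, penalise cost and delay.
    The weights are made up; a real acquisition would calibrate them."""
    return (option["efficiency"]
            - 0.5 * option["cost"] / cost_budget
            - 0.3 * option["delay_weeks"] / max_delay_weeks)

best = max(options, key=score)
print(best["name"], round(score(best), 2))
```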

Based upon all the previous activities, the decision points planning is made. This is a time-set planning of decision points.

A decision point is described by:

An example of a decision points planning can be found through the thumbnail on the right. In this planning the decision points are planned through time (top to bottom, left to right). This planning was taken from a study to make ISPL more specific for product software implementations. The decision points planning is thus aimed at a part of the process of a software implementation.

A shortened example of a decision point description can be found in the image below. This description was taken from the same study to make ISPL more specific for product software implementations.

Finally, the acquisition organization is set up, which is similar to a project organization, although the acquisition organization is more focused on the (legal) relationship that it has to maintain with the supplier organization.

Design for Six Sigma

Design for Six Sigma (DFSS) is a business process management method related to traditional Six Sigma. It is used in many industries, like finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. DFSS is relevant for relatively simple items / systems. It is used for product or process design in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.

There are different options for the implementation of DFSS. Unlike Six Sigma, which is commonly driven via DMAIC (Define – Measure – Analyze – Improve – Control) projects, DFSS has spawned a number of stepwise processes, all in the style of the DMAIC procedure.

DMADV (define – measure – analyze – design – verify) is sometimes synonymously referred to as DFSS, although alternatives such as IDOV (identify, design, optimize, verify) are also used. The traditional DMAIC Six Sigma process, as it is usually practiced, is focused on evolutionary and continuous improvement of manufacturing or service processes, and usually occurs after initial system or product design and development have been largely completed. DMAIC Six Sigma as practiced is usually consumed with solving existing manufacturing or service process problems and with removing the defects and variation associated with defects. Manufacturing variations may clearly impact product reliability, so a clear link should exist between reliability engineering and Six Sigma (quality). In contrast, DFSS (or DMADV and IDOV) strives to generate a new process where none existed, or where an existing process is deemed inadequate and in need of replacement. DFSS aims to create a process with the end in mind of optimally building the efficiencies of Six Sigma methodology into the process before implementation; traditional Six Sigma seeks continuous improvement after a process already exists.

DFSS seeks to avoid manufacturing/service process problems by using advanced techniques to avoid process problems at the outset (e.g., fire prevention). When combined, these methods capture the proper needs of the customer and derive engineering system parameter requirements that increase product and service effectiveness in the eyes of the customer and other stakeholders. This yields products and services that provide great customer satisfaction and increased market share. These techniques also include tools and processes to predict, model and simulate the product delivery system (the processes/tools, personnel and organization, training, facilities, and logistics to produce the product/service). In this way, DFSS is closely related to operations research (e.g., solving the knapsack problem) and workflow balancing. DFSS is largely a design activity requiring tools including: quality function deployment (QFD), axiomatic design, TRIZ, Design for X, design of experiments (DOE), Taguchi methods, tolerance design, robustification and response surface methodology for single or multiple response optimization. While these tools are sometimes used in the classic DMAIC Six Sigma process, they are uniquely used by DFSS to analyze new and unprecedented products and processes. It is a concurrent analysis directed at manufacturing optimization related to the design.
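
As a small illustration of the empirical modelling behind tools such as design of experiments and response surface methodology, the sketch below fits a quadratic response surface to made-up data from a small experimental design (the factor settings, responses and design are all hypothetical):

```python
# Fit a quadratic response surface y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to (fictional) designed-experiment data using ordinary least squares.
import numpy as np

# factor settings from a small central-composite-style design (made up)
x1 = np.array([-1, -1,  1,  1,  0,  0, -1.4, 1.4, 0.0])
x2 = np.array([-1,  1, -1,  1,  0,  0,  0.0, 0.0, 1.4])
y  = np.array([52, 60, 58, 70, 75, 74, 55,   68,  66])   # measured responses

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coefficients, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", coefficients.round(2))
```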

Response surface methodology and other DFSS tools use statistical (often empirical) models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and the inadequacies of the model.

Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, George Box’s original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years.

Proponents of DMAIC, DDICA (Design Develop Initialize Control and Allocate) and Lean techniques might claim that DFSS falls under the general rubric of Six Sigma or Lean Six Sigma (LSS). Both methodologies focus on meeting customer needs and business priorities as the starting-point for analysis.

The tools used for DFSS techniques are often seen to vary widely from those used for DMAIC Six Sigma. In particular, DMAIC and DDICA practitioners often use new or existing mechanical drawings and manufacturing process instructions as the originating information to perform their analysis, while DFSS practitioners often use simulations and parametric system design/analysis tools to predict both cost and performance of candidate system architectures. While it can be claimed that the two processes are similar, in practice the working medium differs enough that DFSS requires different tool sets in order to perform its design tasks. DMAIC, IDOV and Six Sigma may still be used during depth-first plunges into the system architecture analysis and for “back end” Six Sigma processes; DFSS provides the system design processes used in front-end complex system designs. Combinations of the back-end and front-end approaches are also used. Done well, this yields on the order of 3.4 defects per million design opportunities.

Traditional six sigma methodology, DMAIC, has become a standard process optimization tool for the chemical process industries.
However, it has become clear that the promise of six sigma, specifically 3.4 defects per million opportunities (DPMO), is simply unachievable after the fact. Consequently, there has been a growing movement to implement six sigma design, usually called Design for Six Sigma (DFSS), together with DDICA tools. This methodology begins with defining customer needs and leads to the development of robust processes to deliver those needs.
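
The 3.4 DPMO figure follows from the conventional assumption of a 1.5 sigma long-term shift, which leaves 4.5 standard deviations between the process mean and the nearest specification limit; it can be checked directly:

```python
# Defects per million opportunities for a "six sigma" process with a 1.5 sigma shift.
from scipy.stats import norm

sigma_level = 6.0
long_term_shift = 1.5
dpmo = norm.sf(sigma_level - long_term_shift) * 1_000_000   # tail beyond 4.5 sigma
print(round(dpmo, 1))   # ~3.4
```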

Design for Six Sigma emerged from the Six Sigma and the Define-Measure-Analyze-Improve-Control (DMAIC) quality methodologies, which were originally developed by Motorola to systematically improve processes by eliminating defects. Unlike its traditional Six Sigma/DMAIC predecessors, which are usually focused on solving existing manufacturing issues (i.e., “fire fighting”), DFSS aims at avoiding manufacturing problems by taking a more proactive approach to problem solving and engaging the company's efforts at an early stage to reduce problems that could occur (i.e., “fire prevention”). The primary goal of DFSS is to achieve a significant reduction in the number of nonconforming units and in production variation. It starts from an understanding of customer expectations, needs and critical-to-quality issues (CTQs) before a design can be completed. Typically in a DFSS program, only a small portion of the CTQs are reliability-related (CTR), and therefore reliability does not get center-stage attention in DFSS. DFSS rarely looks at the long-term (after manufacturing) issues that might arise in the product (e.g. complex fatigue issues or electrical wear-out, chemical issues, cascade effects of failures, system-level interactions).

Arguments about what makes DFSS different from Six Sigma demonstrate the similarities between DFSS and other established engineering practices such as probabilistic design and design for quality. In general Six Sigma with its DMAIC roadmap focuses on improvement of an existing process or processes. DFSS focuses on the creation of new value with inputs from customers, suppliers and business needs. While traditional Six Sigma may also use those inputs, the focus is again on improvement and not design of some new product or system. It also shows the engineering background of DFSS. However, like other methods developed in engineering, there is no theoretical reason why DFSS cannot be used in areas outside of engineering.

Historically, although the first successful Design for Six Sigma projects in 1989 and 1991 predate the establishment of the DMAIC process improvement methodology, Design for Six Sigma (DFSS) is accepted in part because Six Sigma organisations found that they could not optimise products past three or four sigma without fundamentally redesigning the product, and because improving a process or product after launch is considered less efficient and effective than designing in quality. ‘Six Sigma’ levels of performance have to be ‘built in’.

DFSS for software is essentially a non-superficial modification of “classical DFSS”, since the character and nature of software differ from those of other fields of engineering. The methodology describes the detailed process for successfully applying DFSS methods and tools throughout software product design, covering the overall software development life cycle: requirements, architecture, design, implementation, integration, optimization, verification and validation (RADIOV). The methodology explains how to build predictive statistical models for software reliability and robustness, and shows how simulation and analysis techniques can be combined with structural design and architecture methods to effectively produce software and information systems at Six Sigma levels.

DFSS in software acts as a glue to blend the classical modelling techniques of software engineering such as object-oriented design or Evolutionary Rapid Development with statistical, predictive models and simulation techniques. The methodology provides Software Engineers with practical tools for measuring and predicting the quality attributes of the software product and also enables them to include software in system reliability models.

Many tools used in DFSS consulting, such as response surface methodology, transfer functions via linear and non-linear modeling, axiomatic design and simulation, have their origin in inferential statistics, and statistical modeling may overlap with data analytics and mining.

However, despite DFSS having been successfully used as an end-to-end methodology for analytic and data-mining projects, domain experts have observed this use to be somewhat similar in outline to CRISP-DM.

DFSS is claimed to be better suited to encapsulating and effectively handling a higher number of uncertainties, including missing and uncertain data, both in terms of acuteness of definition and in their absolute total numbers, with respect to analytics and data-mining tasks. Six sigma approaches to data mining are accordingly popularly known as “DFSS over CRISP” (CRISP-DM referring to the data-mining application framework methodology of SPSS).

With DFSS, data-mining projects have been observed to have a considerably shortened development life cycle. This is typically achieved by conducting data analysis against pre-designed template match tests via a techno-functional approach, using multilevel quality function deployment on the data set.

Practitioners claim that progressively complex KDD templates are created by multiple DOE runs on simulated complex multivariate data, and that the templates, along with logs, are extensively documented via a decision-tree-based algorithm.

DFSS uses quality function deployment and SIPOC for feature engineering of known independent variables, thereby aiding in the techno-functional computation of derived attributes.

Once the predictive model has been computed, DFSS studies can also be used to provide stronger probabilistic estimations of predictive model rank in a real-world scenario.

The DFSS framework has been successfully applied to predictive analytics in the field of HR analytics. This application field has traditionally been considered very challenging due to the peculiar complexities of predicting human behavior.