Management Process

A management process is the process of setting goals, planning and/or controlling the organizing and leading of the execution of any type of activity, such as a project or a business process.

An organization’s senior management is responsible for carrying out its management process. This is not true of every management process, however; carrying out a project management process, for example, is the responsibility of the project manager.

The main functions of the management process include:

Planning: determining the objectives, evaluating the different alternatives and choosing the best one.

Organizing: defining the group’s functions, establishing relationships and defining authority and responsibility.

Staffing: recruitment, placement, selection and training for the development of the firm’s members.

Directing: giving direction to the employees.

Technology Management

Technology management is a set of management disciplines that allows organizations to manage their technological fundamentals to create competitive advantage. Typical concepts used in technology management include the technology life cycle, capability maturity and the hype cycle, each discussed below.

The role of the technology management function in an organization is to understand the value of certain technology for the organization. Continuous development of technology is valuable as long as it creates value for the customer, and therefore the technology management function in an organization should be able to argue when to invest in technology development and when to withdraw.

Technology management can also be defined as the integrated planning, design, optimization, operation and control of technological products, processes and services; a better definition would be the management of the use of technology for human advantage.

The Association of Technology, Management, and Applied Engineering defines technology management as the field concerned with the supervision of personnel across the technical spectrum and a wide variety of complex technological systems. Technology management programs typically include instruction in production and operations management, project management, computer applications, quality control, safety and health issues, statistics, and general management principles.

Perhaps the most authoritative input to our understanding of technology is the diffusion of innovations theory developed in the first half of the twentieth century. It suggests that all innovations follow a similar diffusion pattern, best known today in the form of an “s” curve, though originally based upon the concept of a normal distribution of adopters. In broad terms the “s” curve suggests four phases of a technology life cycle: emerging, growth, mature and aging.

These four phases are coupled to increasing levels of acceptance of an innovation or, in our case a new technology. In recent times for many technologies an inverse curve – which corresponds to a declining cost per unit – has been postulated. This may not prove to be universally true though for information technology where much of the cost is in the initial phase it has been a reasonable expectation.
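As a rough sketch of these ideas, the cumulative adoption underlying the “s” curve can be modelled with a logistic function. The midpoint, rate and phase thresholds below are illustrative assumptions, not part of the theory:

```python
# A minimal sketch of the "s" curve of technology adoption, assuming a
# logistic function; the midpoint, rate and phase boundaries are invented
# for illustration.
import math

def adoption(t, midpoint=10.0, rate=0.6):
    """Cumulative adoption share at time t (a logistic s-curve)."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def life_cycle_phase(share):
    """Map cumulative adoption share to the four phases named above."""
    if share < 0.10:
        return "emerging"
    if share < 0.50:
        return "growth"
    if share < 0.90:
        return "mature"
    return "aging"

for t in range(0, 21, 4):
    s = adoption(t)
    print(f"t={t:2d}  adoption={s:6.1%}  phase={life_cycle_phase(s)}")
```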

The second major contribution to this area is the Carnegie Mellon Capability Maturity Model. This model proposes that a series of progressive capabilities can be quantified through a set of threshold tests. These tests determine repeatability, definition, management and optimization. The model suggests that any organization has to master one level before being able to proceed to the next.
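The gating idea, that an organization must master one level before proceeding to the next, can be sketched as follows. The threshold tests here are simple stand-ins for the model's detailed criteria:

```python
# A minimal sketch of the Capability Maturity Model's gating rule. The
# tests are stand-ins: the real model defines key process areas per level.
LEVEL_TESTS = ["repeatable", "defined", "managed", "optimizing"]

def maturity_level(passed_tests):
    """Return the highest level reached; each test gates the next level.

    Level 1 ("initial") is the floor every organization starts from.
    """
    level = 1
    for test in LEVEL_TESTS:
        if test not in passed_tests:
            break
        level += 1
    return level

print(maturity_level({"repeatable", "defined"}))             # 3
print(maturity_level({"defined", "managed", "optimizing"}))  # 1: level 2 not mastered
```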

The third significant contribution comes from Gartner, the research service: the hype cycle. This suggests that our modern approach to marketing technology results in the technology being over-hyped in the early stages of growth. Taken together, these fundamental concepts provide a foundation for formalizing the approach to managing technology.

Mobile device management (MDM) is the administrative area dealing with deploying, securing, monitoring, integrating and managing mobile devices, such as smartphones, tablets and laptops, in the workplace and other areas. The intent of MDM is to optimize the functionality and security of mobile devices within the enterprise, while simultaneously protecting the corporate network. MDM is usually implemented with the use of a third party product that has management features for particular vendors of mobile devices.

Modern mobile device management products support tablets as well as Windows 10 and macOS computers. The practice of using MDM to control PCs is also known as unified endpoint management.

The Association of Technology, Management, and Applied Engineering (ATMAE), accredits selected collegiate programs in technology management. An instructor or graduate of a technology management program may choose to become a Certified Technology Manager (CTM) by sitting for a rigorous exam administered by ATMAE covering production planning & control, safety, quality, and management/supervision.

ATMAE program accreditation is recognized by the Council for Higher Education Accreditation (CHEA) for accrediting technology management programs. CHEA recognizes ATMAE in the U.S. for accrediting associate, baccalaureate, and master’s degree programs in technology, applied technology, engineering technology, and technology-related disciplines delivered by nationally or regionally accredited institutions in the United States.

Project Workforce Management (PWM)

Project workforce management is the practice of combining the coordination of all logistic elements of a project through a single software application (or workflow engine). This includes planning and tracking of schedules and milestones, cost and revenue, resource allocation, as well as overall management of these project elements. Efficiency is improved by eliminating manual processes, like spreadsheet tracking, to monitor project progress. It also allows for at-a-glance status updates and ideally integrates with existing legacy applications in order to unify ongoing projects, enterprise resource planning (ERP) and broader organizational goals. A project involves many logistic elements; different team members are responsible for managing each of them, and the organisation may also have mechanisms for managing some logistic areas.

By coordinating these various components of project management, workforce management and financials through a single solution, the process of configuring and changing project and workforce details is simplified.

A project workforce management system defines project tasks, project positions, and assigns personnel to the project positions. The project tasks and positions are correlated to assign a responsible project position or even multiple positions to complete each project task. Because each project position may be assigned to a specific person, the qualifications and availabilities of that person can be taken into account when determining the assignment. By associating project tasks and project positions, a manager can better control the assignment of the workforce and complete the project more efficiently.
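A minimal sketch of this correlation might look like the following. The class and field names are illustrative, not taken from any particular product:

```python
# A sketch of correlating project tasks with project positions and staffing
# positions by qualification and availability.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Person:
    name: str
    skills: set
    available: bool = True

@dataclass
class ProjectPosition:
    role: str                               # e.g. "tester"
    required_skills: set
    assignee: Optional[Person] = None

@dataclass
class ProjectTask:
    description: str                        # e.g. "integration testing"
    positions: list = field(default_factory=list)   # responsible position(s)

def staff(position, candidates):
    """Assign the first available, qualified candidate to the position."""
    for person in candidates:
        if person.available and position.required_skills <= person.skills:
            position.assignee = person
            person.available = False
            return
    raise LookupError(f"no qualified candidate for role {position.role!r}")

tester = ProjectPosition("tester", {"test automation"})
task = ProjectTask("integration testing", positions=[tester])
staff(tester, [Person("A. Lee", {"test automation"})])
print(task.positions[0].assignee.name)      # A. Lee
```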

When it comes to project workforce management, it is all about managing all the logistic aspects of a project or an organisation through a software application. Usually, this software includes a workflow engine, in which all the logistic processes take place.


Thanks to this software, project workforce management tasks can be largely automated, leaving fewer routine tasks for project managers and making project tracking far more efficient. In addition to different tracking mechanisms, project workforce management software also offers a dashboard for the project team, which gives the team an at-a-glance view of the overall progress of the project elements.

In most cases, project workforce management software can work with existing legacy software systems such as ERP (enterprise resource planning) systems. This easy integration allows the organisation to use a combination of software systems for management purposes.

Good project management is an important factor for the success of a project. A project may be thought of as a collection of activities and tasks designed to achieve a specific goal of the organisation, with specific performance or quality requirements, while meeting time and cost constraints. Project management refers to managing the activities that lead to the successful completion of a project; it focuses on finite deadlines and objectives. A number of tools may be used to assist with this, as well as with assessment.

Project management may be used when planning personnel resources and capabilities. The project may be linked to the objects in a professional services life cycle and may accompany those objects from opportunity through quotation, contract, time and expense recording, billing and period-end activities to final reporting. Naturally, the project becomes more detailed as it moves through this cycle.

For any given project, several project tasks should be defined. Project tasks describe the activities and phases that have to be performed in the project, such as writing of layouts, customising and testing. A system is therefore needed that allows project positions to be correlated with project tasks. Project positions describe project roles such as project manager, consultant or tester, and are typically arranged linearly within the project. By correlating project tasks with project positions, the qualifications and availability of personnel assigned to the project positions may be considered.

Good project management should address all of these aspects.


Project workforce management software, or a similar workflow engine, typically handles the organization’s regular and most common types of tasks, such as time and expense recording and the associated approvals; a minimal sketch of such an engine follows.
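The sketch below assumes a simple timesheet approval flow; the states and actions are invented for illustration and do not come from any particular product:

```python
# A minimal workflow engine: each item moves through a fixed set of states
# via allowed transitions, and every transition is recorded for auditing.
TRANSITIONS = {
    "draft":     {"submit": "submitted"},
    "submitted": {"approve": "approved", "reject": "draft"},
    "approved":  {},                        # terminal state
}

class WorkflowItem:
    def __init__(self, name):
        self.name = name
        self.state = "draft"
        self.history = []                   # audit trail of transitions

    def apply(self, action):
        allowed = TRANSITIONS[self.state]
        if action not in allowed:
            raise ValueError(f"{action!r} not allowed from state {self.state!r}")
        self.history.append((self.state, action))
        self.state = allowed[action]

timesheet = WorkflowItem("week 12 timesheet")
timesheet.apply("submit")
timesheet.apply("approve")
print(timesheet.state, timesheet.history)
```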

Regularly monitoring your project’s schedule performance can provide early indications of possible activity-coordination problems, resource conflicts and cost overruns. Monitoring schedule performance means collecting information and evaluating it to ensure the project’s accuracy.

The project schedule outlines the intended result of the project and what’s required to bring it to completion. In the schedule, we need to include all the resources involved and cost and time constraints through a work breakdown structure (WBS). The WBS outlines all the tasks and breaks them down into specific deliverables.
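A WBS can be sketched as a tree in which a parent’s cost rolls up from its breakdown; the structure and figures below are invented for illustration:

```python
# A sketch of a work breakdown structure (WBS) with cost rolled up from
# the leaf deliverables.
from dataclasses import dataclass, field

@dataclass
class WBSNode:
    name: str
    cost: float = 0.0                       # direct cost of a leaf deliverable
    children: list = field(default_factory=list)

    def total_cost(self):
        """A parent's cost is the sum over its breakdown."""
        if not self.children:
            return self.cost
        return sum(child.total_cost() for child in self.children)

project = WBSNode("Website relaunch", children=[
    WBSNode("Design", children=[
        WBSNode("Wireframes", cost=4_000),
        WBSNode("Visual design", cost=6_000),
    ]),
    WBSNode("Implementation", cost=25_000),
])
print(project.total_cost())                 # 35000.0
```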

Tracking actual costs and resource usage is an essential aspect of the project control function, although how important it is depends upon the project situation.

Organisational profitability is directly connected to project management efficiency and optimal resource utilisation. Organisations that struggle with either or both of these core competencies typically experience cost overruns, schedule delays and unhappy customers.

The focus for project management is the analysis of project performance to determine whether a change is needed in the plan for the remaining project activities to achieve the project goals.

Risk identification consists of determining which risks are likely to affect the project and documenting the characteristics of each.

Project communication management is about how communication is carried out during the course of the project.

It is of no use completing a project within the set time and budget if the final product is of poor quality. The project manager has to ensure that the final product meets the quality expectations of the stakeholders. This is done by good quality planning, quality assurance and quality control.

There are three main differences between Project Workforce Management and traditional project management and workforce management disciplines and solutions:

All project and workforce processes are designed, controlled and audited using a built-in graphical workflow engine. Users can design, control and audit the different processes involved in the project, and the graphical representation gives them a clear view of how each process flows through the engine.

Project Workforce Management provides organization and work breakdown structures to create, manage and report on functional and approval hierarchies, and to track information at any level of detail. Users can create, manage, edit and report on work breakdown structures, which have different abstraction levels, so information can be tracked at any level. Project workforce management systems usually include approval hierarchies: each workflow created goes through several reviews before it becomes an organisational or project standard. This helps the organisation reduce inefficiencies in the process, as it is audited by many stakeholders.

Unlike traditional disconnected project, workforce and billing management systems that are solely focused on tracking IT projects, internal workforce costs or billable projects, Project Workforce Management is designed to unify the coordination of all project and workforce processes, whether internal, shared (IT) or billable.


Project workflow management is one of the best methods for managing different aspects of a project. The more complex the project, the more effective project workforce management will be.

For simple projects or small organisations, project workflow management may not add much value, because such organisations and projects carry little process-management overhead; for more complex projects and large organisations, managing the project workflow makes a big difference. There are many project workforce management solutions available, but many organisations prefer to adopt unique solutions.

Therefore, organisations engage software development companies to develop custom project workflow management systems for them. This has proved to be the most suitable way of acquiring the best project workforce management system for the company.

Project Network


A project network is a graph (weighted directed graph) depicting the sequence in which a project’s terminal elements are to be completed by showing terminal elements and their dependencies. It is always drawn from left to right to reflect project chronology.

The work breakdown structure or the product breakdown structure show the “part-whole” relations. In contrast, the project network shows the “before-after” relations.

The most popular form of project network is activity on node (AON); the other is activity on arrow (AOA).

The condition for a valid project network is that it doesn’t contain any circular references.

Project dependencies can also be depicted by a predecessor table. Although such a form is very inconvenient for human analysis, project management software often offers such a view for data entry.
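The two representations connect naturally: a predecessor table can serve as the input from which a network is built, and the validity condition (no circular references) can be checked by attempting a topological sort. The activities below are illustrative:

```python
# Build a project network from a predecessor table and validate it using
# Kahn's algorithm; a cycle leaves some activities unordered.
from collections import deque

# predecessor table: activity -> activities that must finish first
predecessors = {
    "A": [],
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],
}

def topological_order(pred):
    """Return a left-to-right ordering, or raise if the network is cyclic."""
    indegree = {node: len(p) for node, p in pred.items()}
    successors = {node: [] for node in pred}
    for node, ps in pred.items():
        for p in ps:
            successors[p].append(node)
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for s in successors[node]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if len(order) != len(pred):
        raise ValueError("invalid project network: circular reference")
    return order

print(topological_order(predecessors))      # e.g. ['A', 'B', 'C', 'D']
```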

An alternative way of showing and analyzing the sequence of project work is the design structure matrix or dependency structure matrix.

Design Methods

Design methods are procedures, techniques, aids, or tools for designing. They offer a number of different kinds of activities that a designer might use within an overall design process. Conventional procedures of design, such as drawing, can be regarded as design methods, but since the 1950s new procedures have been developed that are more usually grouped together under the name of “design methods”. What design methods have in common is that they “are attempts to make public the hitherto private thinking of designers; to externalise the design process”.

Design methodology is the broader study of method in design: the study of the principles, practices and procedures of designing.

Design methods originated in new approaches to problem solving developed in the mid-20th Century, and also in response to industrialisation and mass-production, which changed the nature of designing. A “Conference on Systematic and Intuitive Methods in Engineering, Industrial Design, Architecture and Communications”, held in London in 1962 is regarded as a key event marking the beginning of what became known within design studies as the “design methods movement”, leading to the founding of the Design Research Society and influencing design education and practice. Leading figures in this movement in the UK were J. Christopher Jones at the University of Manchester and L. Bruce Archer at the Royal College of Art.

The movement developed through further conferences on new design methods in the UK and USA in the 1960s. The first books on rational design methods, and on creative methods also appeared in this period.

New approaches to design were developing at the same time in Germany, notably at the Ulm School of Design (Hochschule für Gestaltung, HfG Ulm) (1953-1968) under the leadership of Tomás Maldonado. Design teaching at Ulm integrated design with science (including social sciences) and introduced new fields of study such as cybernetics, systems theory and semiotics into design education. Bruce Archer also taught at Ulm, and another influential teacher was Horst Rittel. In 1963 Rittel moved to the School of Architecture at the University of California, Berkeley, where he helped found the Design Methods Group, a society focused on developing and promoting new methods especially in architecture and planning.

At the end of the 1960s two influential, but quite different works were published: Herbert A. Simon’s The Sciences of the Artificial and J. Christopher Jones’s Design Methods. Simon proposed the “science of design” as “a body of intellectually tough, analytic, partly formalizable, partly empirical, teachable doctrine about the design process”, whereas Jones catalogued a variety of approaches to design, both rational and creative, within a context of a broad, futures creating, systems view of design.

The 1970s saw some reaction against the rationality of design methods, notably from two of its pioneers, Christopher Alexander and J. Christopher Jones. Fundamental issues were also raised by Rittel, who characterised design and planning problems as wicked problems, un-amenable to the techniques of science and engineering, which deal with “tame” problems. The criticisms turned some in the movement away from rationalised approaches to design problem solving and towards “argumentative”, participatory processes in which designers worked in partnership with the problem stakeholders (clients, customers, users, the community). This led to participatory design, user centered design and the role of design thinking as a creative process in problem solving and innovation.

However, interest in systematic and rational design methods continued to develop strongly in engineering design during the 1980s; for example, through the Conference on Engineering Design series of The Design Society and the work of the Verein Deutscher Ingenieure association in Germany, and also in Japan, where the Japanese Society for the Science of Design had been established as early as 1954. Books on systematic engineering design methods were published in Germany and the UK. In the USA the American Society of Mechanical Engineers Design Engineering Division began a stream on design theory and methodology within its annual conferences. The interest in systematic, rational approaches to design has led to design science and design science (methodology) in engineering and computer science.

The development of design methods has been closely associated with prescriptions for a systematic process of designing. These process models usually comprise a number of phases or stages, beginning with a statement or recognition of a problem or a need for a new design and culminating in a finalised solution proposal. In his ‘Systematic Method for Designers’ L. Bruce Archer produced a very elaborate, 229 step model of a systematic design process for industrial design, but also a summary model consisting of three phases: Analytical phase (programming and data collection, analysis), Creative phase (synthesis, development), and Executive phase (communication). The UK’s Design Council models the creative design process in four phases: Discover (insight into the problem), Define (the area to focus upon), Develop (potential solutions), Deliver (solutions that work). A systematic model for engineering design by Pahl and Beitz has phases of Clarification of the task, Conceptual design, Embodiment design, and Detail design. A less prescriptive approach to designing a basic design process for oneself has been outlined by J. Christopher Jones.

In the engineering design process systematic models tend to be linear, in sequential steps, but acknowledging the necessity of iteration. In architectural design, process models tend to be cyclical and spiral, with iteration as essential to progression towards a final design. In industrial and product design, process models tend to comprise a sequence of stages of divergent and convergent thinking. The Dubberly Design Office has compiled examples of more than 80 design process models, but it is not an exhaustive list.

Within these process models there are numerous design methods that can be applied. In his book of ‘Design Methods’ J. C. Jones grouped 26 methods according to their purposes within a design process: Methods of exploring design situations (e.g. Stating Objectives, Investigating User Behaviour, Interviewing Users), Methods of searching for ideas (e.g. Brainstorming, Synectics, Morphological Charts), Methods of exploring problem structure (e.g. Interaction Matrix, Functional Innovation, Information Sorting), Methods of evaluation (e.g. Checklists, Ranking and Weighting).

Nigel Cross outlined eight stages in a process of engineering product design, each with an associated method: Identifying Opportunities – User Scenarios; Clarifying Objectives – Objectives Tree; Establishing Functions – Function Analysis; Setting Requirements – Performance Specification; Determining Characteristics – Quality Function Deployment; Generating Alternatives – Morphological Chart; Evaluating Alternatives – Weighted Objectives; Improving Details – Value Engineering.

Many design methods still currently in use originated in the design methods movement of the 1960s and 70s, adapted to modern design practices. Recent developments have seen the introduction of more qualitative techniques, including ethnographic methods such as cultural probes and situated methods.

The design methods movement had a profound influence on the development of academic interest in design and designing and the emergence of design research and design studies. Arising directly from the 1962 Conference on Design Methods, the Design Research Society (DRS) was founded in the UK in 1966. The purpose of the Society is to promote “the study of and research into the process of designing in all its many fields” and is an interdisciplinary group with many professions represented.

In the USA, a similar Design Methods Group (DMG) was also established in 1966 by Horst Rittel and others at the University of California, Berkeley. The DMG held a conference at MIT in 1968 with a focus on environmental design and planning, and that led to the foundation of the Environmental Design Research Association (EDRA), which held its first conference in 1969. A group interested in design methods and theory in architecture and engineering formed at MIT in the early 1980s, including Donald Schön, who was studying the working practices of architects, engineers and other professionals and developing his theory of reflective practice. In 1984 the National Science Foundation created a Design Theory and Methodology Program to promote methods and process research in engineering design.

Meanwhile, in Europe, Vladimir Hubka established the Workshop Design-Konstruktion (WDK), which led to a series of International Conferences on Engineering Design (ICED) beginning in 1981 and later became the Design Society.

Academic research journals in design also began publication. DRS initiated Design Studies in 1979, Design Issues appeared in 1984, and Research in Engineering Design in 1989.

Several pioneers of design methods developed their work in association with industry. The Ulm school established a significant partnership with the German consumer products company Braun through their designer Dieter Rams. J. Christopher Jones began his approach to systematic design as an ergonomist at the electrical engineering company AEI. L. Bruce Archer developed his systematic approach in projects for medical equipment for the UK National Health Service.

In the USA, designer Henry Dreyfuss had a profound impact on the practice of industrial design by developing systematic processes and promoting the use of anthropometrics, ergonomics and human factors in design, including through his 1955 book ‘Designing for People’. Another successful designer, Jay Doblin, was also influential on the theory and practice of design as a systematic process.

Much of current design practice has been influenced and guided by design methods. For example, the influential IDEO consultancy uses design methods extensively in its ‘Design Kit’ and ‘Method Cards’. Increasingly, the intersections of design methods with business and government through the application of design thinking have been championed by numerous consultancies within the design profession. Wide influence has also come through Christopher Alexander’s pattern language method, originally developed for architectural and urban design, which has been adopted in software design, interaction design, pedagogical design and other domains.

Acquisition Initiation Within the Information Services Procurement Library (ISPL)


Acquisition Initiation is the initial process within the Information Services Procurement Library (ISPL) and is executed by a customer organization intending to procure Information Services. The process is composed of two main activities: the making of the acquisition goal definition and the making of the acquisition planning. During the acquisition initiation, an iterative process arises in which questions about the goal of the acquisition are usually asked. In response to these questions the Library provides details of the requirements, covering areas such as cost, feasibility and timelines. An example of such requirements is the “planning of the acquisition”, a component that may also lead to more questions about the acquisition goal (thus, it is reasonable to state that a relationship exists between the acquisition goal and the acquisition planning).

The process-data model shown in the following section displays the acquisition initiation stages, showing both the process and the data ensuing from it. The concepts and data found in the model are explained in separate tables in the section immediately following the model. A textual, and more thorough, explanation of the activities and concepts that make up the Acquisition Initiation process can be found in the remainder of this article.

The draft acquisition goal is the description of the global goal that is to be achieved by starting procurement. It is inspired by the business needs or business strategy. It is similar to, though simpler than, the concept of the project brief in PRINCE2. It is the first draft of the acquisition goal, containing at least a (short) problem definition and a (short) goal definition. The draft acquisition goal is meant to give the main reasons and the main goals to the people who will have to decide whether or not to actually start the acquisition. It may thus also encapsulate items like a cost-benefit analysis, stakes & stakeholders and other items that will be further refined during the actual Acquisition Initiation.

It is, in this sense, not an activity of the acquisition initiation process, but it is the input for starting the process.

The problem definition is a statement about the problem that could be resolved by starting the acquisition process.

The goal definition is a statement about the goal that will have to be reached when the acquisition is executed.

The draft acquisition goal can be made as short or as long as the organization needs, as long as it serves as a good basis for making the initial decision to start an acquisition process.

When the decision is made to start an acquisition process, the first activity of the acquisition initiation is to define the acquisition goal.

The acquisition goal is the whole of defined systems and services requirements, attributed by costs & benefits and stakes & stakeholders, with a defined target domain serving as the boundary. The acquisition goal is the basis of the acquisition process and for formulating the acquisition strategy during the acquisition planning. Input can be the draft acquisition goal and the business needs (of the target domain). The business needs can be derived from strategic business plans or information system plans.

The target domain is that part of the customer organization that is involved in, or influenced by an information service. It is defined in terms of business processes, business actors, business information, business technology and the relations between these four aspects. The target domain is identified to ensure a fitting acquisition goal with fitting requirements for the systems and services to be acquired.

A limited description of a target domain can thus be, for example: the part of the customer organization that uses the software program MarketingUnlimited (fictional), involving the marketing process, several types of information related to the marketing department of the customer organization, the employees working in the marketing department, and the production platform for MarketingUnlimited consisting of an application server, several workstations, and tools related to MarketingUnlimited.

The acquisition goal is then described by system descriptions and service descriptions.

Other information on how ISPL defines the deliverables of the acquisition can be found in the general ISPL entry.

Cost-benefit analysis concerns the analysis of the expected costs and benefits of the acquisition, in order to successfully evaluate its investment issues.
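ISPL does not mandate a particular formula, but one common way to evaluate such investment issues is a net present value comparison of the cost and benefit streams. The figures below are invented for illustration:

```python
# A sketch of a cost-benefit evaluation via net present value (NPV).
def npv(cashflows, rate):
    """Discount a list of yearly net cashflows (year 0 first)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

costs    = [120_000, 20_000, 20_000, 20_000]   # initial investment + support
benefits = [0, 70_000, 80_000, 80_000]         # efficiency gains per year
net = [b - c for b, c in zip(benefits, costs)]
print(f"NPV at 8%: {npv(net, 0.08):,.0f}")      # positive -> worth pursuing
```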

It is important to identify all actors affected, and in what way they are affected (their stake), because a negative attitude among actors may undermine the success of the acquisition. ISPL proposes performing a SWOT analysis (strengths, weaknesses, opportunities and threats) for the actors involved to properly and thoroughly identify their stakes. The results of this analysis can serve as input for the situation & risk analysis.

The information contained within the Acquisition Goal is used to produce the acquisition plan. The acquisition plan is the master plan of the entire acquisition. In this plan, delivery scenarios are determined and the situation and risks are evaluated. Based on the scenarios, situations, and risks, a strategy is formed to manage the acquisition. Furthermore, the acquisition plan will hold the main decision points, in which decisions are made about the deliverables within the acquisition and the acquisition organization (similar to a project organization) is formed.

On the basis of the acquisition goal, which contains among others the systems and services requirements and descriptions, scenarios can be formed for the deliveries of the information services to be acquired. Multiple scenarios may be built, which will then be evaluated and used in the design of the acquisition strategy. The scenarios are built with the priorities and interdependence of the deliverables in mind.

Priorities are related to the importance of each delivery: which delivery has time-preference over another.

Some deliveries may be dependent on each other, requiring one service to be delivered before the next one can be.

Example:

During a project to make ISPL more specific for generic product software implementations, a number of scenarios were made. One of these described an approach of a one-shot implementation.

In this example, the general deliverables within an implementation of software are executed and delivered at different points in time, mainly because of dependencies between deliverables. For instance, the actual go-ahead for the delivery of the software and the other related deliverables depends on the outcome of the deliverable “Proeftuin”: an agreed period in which the software is extensively tested by the customer organization, and provided and supported by the supplier. The customer organization can get a feel for the use of the software in the target domain, to guarantee that the software is a “fit” within the organization.

ISPL adheres to a situational approach to managing an acquisition. The situational approach takes into account properties of the problem situation, which are called situational factors, and ISPL provides a number of these. Some situational factors affect events that have adverse consequences: the risks. The situational factors and risks within ISPL are thus related to each other, which makes it possible to provide a number of heuristics on which factors influence which risks. First the situation is analysed; then ISPL proposes a number of risks that may arise from the situation at hand. With this information, an acquisition strategy can be formed to address the situation and mitigate the risks in a number of areas. Other information on the situation & risk analysis of ISPL can be found in the ISPL entry.
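Such factor-to-risk heuristics might be encoded as in the sketch below. The factor names, scores and risk wordings are invented for illustration and are not ISPL’s actual lists:

```python
# A sketch of a situational approach: score situational factors, then let
# simple heuristics propose likely risks.
situation = {
    "complexity of business processes": 4,   # 1 (low) .. 5 (high)
    "availability of domain experts": 2,
    "stability of requirements": 2,
}

HEURISTICS = [
    (lambda s: s["complexity of business processes"] >= 4,
     "delays due to poorly understood processes"),
    (lambda s: s["availability of domain experts"] <= 2,
     "wrong requirements captured"),
    (lambda s: s["stability of requirements"] <= 2,
     "costly rework late in the acquisition"),
]

risks = [risk for test, risk in HEURISTICS if test(situation)]
print(risks)
```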

The acquisition strategy within ISPL acts as the design of a risk management strategy, which provides choices for options that reduce the probability and/or effect of risks. ISPL provides several options, divided into four classes.

Options are chosen based on their efficiency, costs and the related delay for delivery.

Based upon all the previous activities, the decision points planning is made. This is a time-set planning of decision points.

A decision point is described by a number of standard attributes.

In a decision points planning, the decision points are planned through time. One such planning was produced during a study to make ISPL more specific for product software implementations; it was aimed at a part of the process of a software implementation.

A shortened example of a decision point description was produced in the same study.

The acquisition organization, which is similar to a project organization, is then set up, although the acquisition organization is more focused on the (legal) relationship that it has to maintain with the supplier organization.

Design for Six Sigma

Design for Six Sigma (DFSS) is a business process management method related to traditional Six Sigma. It is used in many industries, like finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. DFSS is relevant for relatively simple items / systems. It is used for product or process design in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.

There are different options for the implementation of DFSS. Unlike Six Sigma, which is commonly driven via DMAIC (Define – Measure – Analyze – Improve – Control) projects, DFSS has spawned a number of stepwise processes, all in the style of the DMAIC procedure.

DMADV (define – measure – analyze – design – verify) is sometimes synonymously referred to as DFSS, although alternatives such as IDOV (identify, design, optimize, verify) are also used. The traditional DMAIC Six Sigma process, as it is usually practiced, is focused on evolutionary and continuous improvement of manufacturing or service process development, and usually occurs after initial system or product design and development have been largely completed. DMAIC Six Sigma as practiced is usually concerned with solving existing manufacturing or service process problems and with removing the defects and variation associated with them. Manufacturing variations clearly may impact product reliability, so a clear link should exist between reliability engineering and Six Sigma (quality). In contrast, DFSS (or DMADV and IDOV) strives to generate a new process where none existed, or where an existing process is deemed inadequate and in need of replacement. DFSS aims to create a process with the end in mind of optimally building the efficiencies of Six Sigma methodology into the process before implementation; traditional Six Sigma seeks continuous improvement after a process already exists.

DFSS seeks to avoid manufacturing/service process problems by using advanced techniques to avoid process problems at the outset (e.g., fire prevention). When combined, these methods obtain the proper needs of the customer, and derive engineering system parameter requirements that increase product and service effectiveness in the eyes of the customer and all other people. This yields products and services that provide great customer satisfaction and increased market share. These techniques also include tools and processes to predict, model and simulate the product delivery system (the processes/tools, personnel and organization, training, facilities, and logistics to produce the product/service). In this way, DFSS is closely related to operations research (e.g., solving the knapsack problem) and workflow balancing. DFSS is largely a design activity requiring tools including: quality function deployment (QFD), axiomatic design, TRIZ, Design for X, design of experiments (DOE), Taguchi methods, tolerance design, robustification and response surface methodology for single or multiple response optimization. While these tools are sometimes used in the classic DMAIC Six Sigma process, they are uniquely used by DFSS to analyze new and unprecedented products and processes. It is a concurrent analysis directed at manufacturing optimization related to the design.

Response surface methodology and other DFSS tools use statistical (often empirical) models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. An estimated optimum point need not be optimum in reality, because of the errors of the estimates and the inadequacies of the model.

Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, George Box’s original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years.
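As an illustration of the technique (not of Box's original study), a quadratic response surface can be fitted to a single factor by least squares, with the stationary point of the fit taken as the estimated optimum. The data are invented:

```python
# Fit a one-factor quadratic response surface and estimate the optimum
# setting; real studies use several factors and designed experiments.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # factor setting
y = np.array([2.1, 5.0, 6.2, 5.8, 3.9])    # measured response

c2, c1, c0 = np.polyfit(x, y, deg=2)        # y ~ c2*x**2 + c1*x + c0
x_opt = -c1 / (2 * c2)                      # stationary point of the fit
print(f"fitted optimum near x = {x_opt:.2f}")
# The estimate carries the caveats above: the model only approximates
# reality and the coefficients carry estimation error.
```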

Proponents of DMAIC, DDICA (Design Develop Initialize Control and Allocate) and Lean techniques might claim that DFSS falls under the general rubric of Six Sigma or Lean Six Sigma (LSS). Both methodologies focus on meeting customer needs and business priorities as the starting-point for analysis.

The tools used for DFSS techniques often vary widely from those used for DMAIC Six Sigma. In particular, DMAIC and DDICA practitioners often use new or existing mechanical drawings and manufacturing process instructions as the originating information for their analysis, while DFSS practitioners often use simulations and parametric system design/analysis tools to predict both cost and performance of candidate system architectures. While the two processes are similar in intent, in practice the working medium differs enough that DFSS requires different tool sets to perform its design tasks. DMAIC, IDOV and Six Sigma may still be used during depth-first plunges into system architecture analysis and for “back end” Six Sigma processes; DFSS provides the system design processes used in front-end complex system designs. Done well, this yields 3.4 defects per million design opportunities.

Traditional Six Sigma methodology, DMAIC, has become a standard process optimization tool for the chemical process industries. However, it has become clear that the promise of Six Sigma, specifically 3.4 defects per million opportunities (DPMO), is simply unachievable after the fact. Consequently, there has been a growing movement to implement Six Sigma design, usually called Design for Six Sigma (DFSS), and DDICA tools. This methodology begins with defining customer needs and leads to the development of robust processes to deliver those needs.
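The arithmetic behind the 3.4 DPMO figure is worth making concrete. The defect counts below are invented, and the conventional 1.5-sigma long-term shift is assumed:

```python
# Compute defects per million opportunities (DPMO) and the corresponding
# sigma level, assuming the conventional 1.5-sigma long-term shift.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

d = dpmo(defects=17, units=1_000, opportunities_per_unit=5)
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")   # 3400, ~4.21
print(f"six sigma: {sigma_level(3.4):.2f}")                    # ~6.00
```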

Design for Six Sigma emerged from the Six Sigma and the Define-Measure-Analyze-Improve-Control (DMAIC) quality methodologies, which were originally developed by Motorola to systematically improve processes by eliminating defects. Unlike its traditional Six Sigma/DMAIC predecessors, which are usually focused on solving existing manufacturing issues (i.e., “fire fighting”), DFSS aims at avoiding manufacturing problems by taking a more proactive approach to problem solving and engaging the company efforts at an early stage to reduce problems that could occur (i.e., “fire prevention”). The primary goal of DFSS is to achieve a significant reduction in the number of nonconforming units and production variation. It starts from an understanding of the customer expectations, needs and Critical to Quality issues (CTQs) before a design can be completed. Typically in a DFSS program, only a small portion of the CTQs are reliability-related (CTR), and therefore, reliability does not get center stage attention in DFSS. DFSS rarely looks at the long-term (after manufacturing) issues that might arise in the product (e.g. complex fatigue issues or electrical wear-out, chemical issues, cascade effects of failures, system level interactions).

Arguments about what makes DFSS different from Six Sigma demonstrate the similarities between DFSS and other established engineering practices such as probabilistic design and design for quality. In general Six Sigma with its DMAIC roadmap focuses on improvement of an existing process or processes. DFSS focuses on the creation of new value with inputs from customers, suppliers and business needs. While traditional Six Sigma may also use those inputs, the focus is again on improvement and not design of some new product or system. It also shows the engineering background of DFSS. However, like other methods developed in engineering, there is no theoretical reason why DFSS cannot be used in areas outside of engineering.

Historically, although the first successful Design for Six Sigma projects in 1989 and 1991 predate establishment of the DMAIC process improvement process, Design for Six Sigma (DFSS) is accepted in part because Six Sigma organisations found that they could not optimise products past three or four Sigma without fundamentally redesigning the product, and because improving a process or product after launch is considered less efficient and effective than designing in quality. ‘Six Sigma’ levels of performance have to be ‘built-in’.

DFSS for software is essentially a non-superficial modification of “classical DFSS”, since the character and nature of software differ from those of other fields of engineering. The methodology describes the detailed process for successfully applying DFSS methods and tools throughout the software product design, covering the overall software development life cycle: requirements, architecture, design, implementation, integration, optimization, verification and validation (RADIOV). The methodology explains how to build predictive statistical models for software reliability and robustness and shows how simulation and analysis techniques can be combined with structural design and architecture methods to effectively produce software and information systems at Six Sigma levels.

DFSS in software acts as a glue to blend the classical modelling techniques of software engineering such as object-oriented design or Evolutionary Rapid Development with statistical, predictive models and simulation techniques. The methodology provides Software Engineers with practical tools for measuring and predicting the quality attributes of the software product and also enables them to include software in system reliability models.

Although many tools used in DFSS consulting, such as response surface methodology, transfer functions via linear and non-linear modeling, axiomatic design and simulation, have their origin in inferential statistics, statistical modeling may overlap with data analytics and mining.

Despite this, DFSS has been successfully used as an end-to-end technical project framework for analytics and data-mining projects, and domain experts have observed that it is somewhat similar in outline to CRISP-DM.

DFSS is claimed to be better suited for encapsulating and effectively handling a higher number of uncertainties, including missing and uncertain data, both in terms of acuteness of definition and in their absolute total numbers with respect to analytics and data-mining tasks. Six Sigma approaches to data mining of this kind are popularly known as “DFSS over CRISP” (CRISP-DM being the data-mining application framework methodology of SPSS).

With DFSS, data mining projects have been observed to have a considerably shortened development life cycle. This is typically achieved by conducting data analysis to pre-designed template match tests via a techno-functional approach using multilevel quality function deployment on the data set.

Practitioners claim that progressively complex KDD templates are created by multiple DOE runs on simulated complex multivariate data, and that the templates, along with logs, are extensively documented via a decision-tree-based algorithm.

DFSS uses quality function deployment and SIPOC for feature engineering of known independent variables, thereby aiding in the techno-functional computation of derived attributes.

Once the predictive model has been computed, DFSS studies can also be used to provide stronger probabilistic estimations of the predictive model’s rank in a real-world scenario.

The DFSS framework has been successfully applied to predictive analytics in the field of HR analytics, which has traditionally been considered very challenging due to the peculiar complexities of predicting human behavior.

Service Design

Service design is the activity of planning and organizing people, infrastructure, communication and material components of a service in order to improve its quality and the interaction between the service provider and its customers. Service design may function as a way to inform changes to an existing service or create a new service entirely.

The purpose of service design methodologies is to establish best practices for designing services according to both the needs of customers and the competencies and capabilities of service providers. If a successful method of service design is adopted, the service will be user-friendly and relevant to the customers, while being sustainable and competitive for the service provider. For this purpose, service design uses methods and tools derived from different disciplines, ranging from ethnography to information and management science to interaction design. Service design concepts and ideas are typically portrayed visually, using different representation techniques according to the culture, skill and level of understanding of the stakeholders involved in the service processes (Krucken and Meroni, 2006).

Service design practice is the specification and construction of processes that deliver valuable capacities for action to a particular customer. Service design practice can be both tangible and intangible, and it can involve artifacts or other elements such as communication, environment and behaviors. Several authors of service design theory, including Pierre Eiglier, Richard Normann and Nicola Morelli, emphasize that services come into existence at the same moment they are being provided and used. In contrast, products are created and “exist” before being purchased and used. While a designer can prescribe the exact configuration of a product, s/he cannot prescribe in the same way the result of the interaction between customers and service providers, nor can s/he prescribe the form and characteristics of any emotional value produced by the service.

Consequently, service design is an activity that, among other things, suggests behavioral patterns or “scripts” to the actors interacting in the service. Understanding how these patterns interweave and support each other is an important aspect of the character of design and service. This allows greater customer freedom, and better provider adaptability to the customers’ behavior.

Early contributions to service design were made by G. Lynn Shostack, a bank and marketing manager and consultant, in the form of written articles and books. The activity of designing service was considered to be part of the domain of marketing and management disciplines in the early years. For instance, in 1982 Shostack proposed the integration of the design of material components (products) and immaterial components (services). This design process, according to Shostack, can be documented and codified using a “service blueprint” to map the sequence of events in a service and its essential functions in an objective and explicit manner. A service blueprint is an extension of a customer journey map, and this document specifies all the interactions a customer has with an organization throughout their customer lifecycle.

Servicescape is a model developed by B.H. Booms and Mary Jo Bitner to emphasize the impact of the physical environment in which a service process takes place and to explain the behavior of people within the service environment, with a view to designing environments that accomplish organizational goals in terms of achieving desired behavioral responses.

In 1991, service design was first introduced as a design discipline by professors Michael Erlhoff and Birgit Mager at Köln International School of Design (KISD). In 2004, the Service Design Network was launched by Köln International School of Design, Carnegie Mellon University, Linköpings Universitet, Politecnico di Milano and Domus Academy in order to create an international network for service design academics and professionals.

In 2001, Livework, the first service design and innovation consultancy, opened for business in London. In 2003, Engine, initially founded in 2000 in London as an ideation company, positioned themselves as a service design consultancy.

The 2018 book, This Is Service Design Doing: Applying Service Design Thinking in the Real World, by Adam Lawrence, Jakob Schneider, Marc Stickdorn, and Markus Edgar Hormess, proposes six service design principles: human-centred, collaborative, iterative, sequential, real and holistic.

In the 2011 book, This Is Service Design Thinking: Basics, Tools, Cases, the first principle is “user-centred”. “User” refers to any user of the service system, including the organization’s customers and employees. The authors therefore renamed “user-centred” to “human-centred” in their newer book, This Is Service Design Doing, to make clear that “human” includes service providers, customers and all other relevant stakeholders. For instance, service design must consider not only the customer experience, but also the interests of all relevant people in retailing.

“Collaborative” and “iterative” come from the principle “co-creative” in This Is Service Design Thinking. A service exists only with the participation of customers, and it is created by a group of people from different backgrounds. In most cases, people focus only on the meaning of “collaborative”, emphasizing the collaborative and interdisciplinary nature of service design, while ignoring that a service exists only with the participation of a customer. Therefore, in the new definition of the principles, “co-creative” is divided into the two principles “collaborative” and “iterative”. “Collaborative” indicates the process of creation by all stakeholders from different backgrounds, while “iterative” describes service design as an iterative process that keeps evolving to adapt to changes in the business.

“Sequential” means that services need to be displayed logically, rhythmically and visually. Service design is a dynamic process that unfolds over a period of time, and the timeline is important for customers in the service system. For example, when a customer shops at an online website, the first information shown should be the regions to which the products can be delivered. In this way, if customers find that the products cannot be delivered to their region, they will not continue browsing the products on the website.

Service is often invisible and occurs in a state that the user cannot perceive. “Real” means that the intangible service needs to be displayed in a tangible way. For example, when people order food in a restaurant, they cannot perceive the various attributes of the food. If the restaurant shows the cultivation and picking of its vegetables, people can perceive the intangible services backstage, such as the cultivation of organic vegetables, and get a quality service experience. This also helps the restaurant establish a natural and organic brand image with customers.

Thinking in a holistic way is the cornerstone of service design. Holistic thinking needs to consider both intangible and tangible service, and to ensure that every moment the user interacts with the service, each such moment being called a touchpoint, is considered and optimized. Holistic thinking also needs to recognise that customers follow multiple logics to complete an experience process. Thus, service designers should think about each aspect from different perspectives to ensure that no needs are missing.

Together with the most traditional methods used for product design, service design requires methods and tools to control new elements of the design process, such as time and the interaction between actors. An overview of methodologies for designing services was proposed by Nicola Morelli in 2006, comprising three main directions:

Analytical tools refer to anthropology, social studies, ethnography and social construction of technology. Appropriate elaborations of those tools have been proposed with video-ethnography and different observation techniques to gather data about users’ behavior. Other methods, such as cultural probes, have been developed in the design discipline, which aim to capture information on customers in their context of use (Gaver, Dunne et al. 1999; Lindsay and Rocchi 2003).

Design tools aim at producing a blueprint of the service, which describes the nature and characteristics of the interaction in the service. Design tools include service scenarios (which describe the interaction) and use cases (which illustrate the detail of time sequences in a service encounter). Both techniques are already used in software and systems engineering to capture the functional requirements of a system. However, when used in service design, they have been adapted to include more information concerning the material and immaterial components of a service, as well as time sequences and physical flows. Crowdsourced information has been shown to be highly beneficial in providing such information for service design purposes, particularly when the information has either a very low or very high monetary value. Other techniques, such as IDEF0, just-in-time and total quality management, are used to produce functional models of the service system and to control its processes. However, such tools may prove too rigid to describe services in which customers are supposed to have an active role, because of the high level of uncertainty related to the customer’s behavior.
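
As an illustration of what such a blueprint can capture, the following is a rough Python sketch of a service-blueprint data model that records the time sequence of a service encounter together with its material (frontstage) and immaterial (backstage) components. The field names and the restaurant example are illustrative assumptions, not a standard blueprint schema.

```python
# A rough sketch of a service-blueprint data model, capturing the time
# sequence of a service encounter together with its material (frontstage)
# and immaterial (backstage) components. Field names are illustrative
# assumptions, not a standard blueprint schema.

from dataclasses import dataclass, field

@dataclass
class BlueprintStep:
    minute: int                 # position in the time sequence
    customer_action: str        # what the customer does
    frontstage: str             # visible touchpoint (people, props, screens)
    backstage: str = ""         # invisible supporting process
    physical_evidence: list[str] = field(default_factory=list)

restaurant_blueprint = [
    BlueprintStep(0, "enters restaurant", "host greets and seats guest",
                  physical_evidence=["signage", "menu"]),
    BlueprintStep(5, "orders a dish", "waiter takes the order",
                  backstage="order relayed to kitchen"),
    BlueprintStep(25, "eats the meal", "dish served at the table",
                  backstage="kitchen prepares organic vegetables",
                  physical_evidence=["plating", "video of vegetable farm"]),
]

# Reading the steps in minute order recovers the sequential view of the service.
for step in sorted(restaurant_blueprint, key=lambda s: s.minute):
    print(f"t+{step.minute:>2} min | {step.customer_action} | {step.frontstage}")
```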

Because of the need for communication between inner mechanisms of services and actors (such as final users), representation techniques are critical in service design. For this reason, storyboards are often used to illustrate the interaction of the front office. Other representation techniques have been used to illustrate the system of interactions or a “platform” in a service (Manzini, Collina et al. 2004). Recently, video sketching (Jegou 2009, Keitsch et al. 2010) and prototypes (Blomkvist 2014) have also been used to produce quick and effective tools to stimulate customers’ participation in the development of the service and their involvement in the value production process.

Public sector service design is associated with civic technology, open government and e-government, and can involve either government-led or citizen-led initiatives. The public sector is the part of the economy composed of public services and public enterprises. Public services include public goods and governmental services such as the military, police, infrastructure (public roads, bridges, tunnels, water supply, sewers, electrical grids, telecommunications, etc.), public transit, public education, along with health care and those working for the government itself, such as elected officials. Due to new investments in hospitals, schools, cultural institutions and security infrastructure in the last few years, the public sector has expanded. The number of jobs in public services has also grown; such growth can be associated with large and rapid social change that calls for reorganization. In this context, governments are considering service design as a means of reorganizing public services.

In 2002 MindLab, a public sector innovation and service design unit, was established by the Danish ministries of Business and Growth, Employment, and Children and Education. MindLab was one of the world’s first public sector design innovation labs, and its work inspired the proliferation of similar labs and user-centered design methodologies in many countries worldwide. The design methods used at MindLab typically follow an iterative approach of rapid prototyping and testing, applied not just to government projects but also to government organizational structure, using ethnographic-inspired user research, creative ideation processes, and visualization and modeling of service prototypes. In Denmark, design within the public sector has been applied to a variety of projects, including rethinking Copenhagen’s waste management, improving social interactions between convicts and guards in Danish prisons, transforming services in Odense for mentally disabled adults, and more.

In 2007 and 2008, documents from the British government explored the concept of “user-driven public services” and scenarios of highly personalized public services. The documents proposed a new view of the roles of service providers and users in the development of new, highly customized public services built on user involvement. This view has since been explored through initiatives in the UK. Under the influence of the European Union, the possibilities of service design for the public sector are being researched, taken up, and promoted in countries such as Belgium.

The Behavioural Insights Team (BIT), also known as the Nudge Unit, was originally part of the UK Cabinet Office. It was founded in 2010 to apply nudge theory to improve UK government policy and services and to save money. In 2014, BIT became a decentralized, semi-privatized company, with Nesta (a charity), BIT employees and the UK government each owning a third of the new business. That same year a nudge unit was added to the United States government under President Obama, referred to as the ‘US Nudge Unit’, working within the White House Office of Science and Technology Policy.

Clinical service redesign is an approach to improving quality and productivity in healthcare. A redesign is clinically led and involves all stakeholders (e.g. primary and secondary care clinicians, senior management, patients and commissioners) to ensure that national and local clinical standards are set and communicated across care settings. By following the patient’s journey, or pathway, the team can focus on improving both the patient experience and the outcomes of care.

A practical example of service design thinking can be found at the Myyrmanni shopping mall in Vantaa, Finland. The management attempted to improve customer flow to the second floor, as there were queues at the landscape lifts while the KONE steel-car lifts were ignored. In 2010, KONE applied its ‘People Flow’ service design thinking by turning the elevators into a hall of fame for the ‘Incredibles’ comic-strip characters. Making the elevators more attractive to the public solved the flow problem. This case of service design thinking by the KONE elevator company is used in the literature as an example of extending products into services.

Retail Design

Retail design is a creative and commercial discipline that combines several different areas of expertise in the design and construction of retail space. Retail design is primarily a specialized practice of architecture and interior design; however, it also incorporates elements of industrial design, graphic design, ergonomics, and advertising.

Retail design is a very specialized discipline due to the heavy demands placed on retail space. Because the primary purpose of retail space is to stock and sell product to consumers, the spaces must be designed in a way that promotes an enjoyable and hassle-free shopping experience for the consumer.
For example, research shows that male and female shoppers who were accidentally touched from behind by other shoppers left a store earlier than people who had not been touched, and evaluated brands more negatively. The space must also be specially tailored to the kind of product being sold there; for example, a bookstore requires many large shelving units to accommodate small products that can be arranged categorically, while a clothing store requires more open space to fully display its product.

Retail spaces, especially when they form part of a retail chain, must also be designed to draw people into the space to shop. The storefront must act as a billboard for the store, often employing large display windows that allow shoppers to see into the space and the product inside. In the case of a retail chain, the individual spaces must be unified in their design.

Retail design first began to grow in the middle of the 19th century, with stores such as Bon Marche and Printemps in Paris, “followed by Marshall Fields in Chicago, Selfridges in London and Macy’s in New York.” These early department stores were soon followed by an innovation: the chain store.

The first known chain department stores were established in Belgium in 1868, when Isidore, Benjamin and Modeste Dewachter incorporated Dewachter frères (Dewachter Brothers), selling ready-to-wear clothing for men and children and specialty clothing such as riding apparel and beachwear. The firm opened with four locations and, by 1904, Maison Dewachter (House of Dewachter) had stores in 20 cities and towns in Belgium and France, with multiple stores in some cities. Isidore’s eldest son, Louis Dewachter, managed the chain at its peak and also became an internationally known landscape artist, painting under the pseudonym Louis Dewis.

The first retail chain store in the United States was opened in the late 19th century by Frank Winfield Woolworth, and it quickly grew into a chain across the US. Other chain stores began growing in places like the UK a decade or so later, with stores like Boots. After World War II, a new type of retail building known as the shopping centre came into being. This type of building took two different paths in the US and Europe: shopping centres in the United States were built out of town to serve the suburban family, while Europe placed shopping centres in the middle of town. The first shopping centre in the Netherlands was built in the 1950s, as retail design ideas spread east.

The next evolution of retail design was the creation of the boutique in the 1960s, which emphasized retail design run by individuals. Some of the earliest examples of boutiques are the Biba boutique created by Barbara Hulanicki and the Habitat line of stores made by Terence Conran. The rise of the boutique was followed, over the next two decades, by an overall increase in consumer spending across the developed world. This rise pushed retail design to accommodate more customers and alternative focuses, and many retail stores redesigned themselves over the period to keep up with changing consumer tastes. On one side, these changes resulted in the creation of multiple “expensive, one-off designer shops” catering to specific fashion designers and retailers.

The rise of the internet and internet retailing in the latter part of the 20th century and into the 21st century prompted another change in retail design. Many sectors unrelated to the internet turned to retail design and its practices to lure online shoppers back into physical shops, where retail design can be properly utilized.

A retail designer must create a thematic experience for the consumer, using spatial cues to entertain as well as entice the consumer to purchase goods and interact with the space. The success of these designs is measured not by design critics but by the store’s records, which compare foot traffic against overall productivity. Retail designers have an acute awareness that the store and its design are the background to the merchandise, there only to represent and create the best possible environment in which to present the merchandise to the target consumer group.

As the evolution of retail design and its impact on productivity have become clear, a series of standardisations in techniques and design qualities has emerged. These standardisations cover the structure of the space, entrances, circulation systems, atmospheric qualities (light and sound) and materiality. By applying these standardisations, retail design gives the consumer a thematic experience that entices them to purchase the merchandise. It is also important to acknowledge that a retail space must combine permanent and non-permanent features that allow it to change as the needs of the consumer and the merchandise change (e.g. per season).

The structure of the retail space creates the constraints of the overall design; often the spaces already exist and have had many prior uses. It is at this stage that logistics must be determined: structural features such as columns, stairways, ceiling height, windows and emergency exits all must be factored into the final design. In retail, one hundred percent of the space must be utilised and have a purpose. The floor plan creates the circulation, which directly controls the direction of the traffic flow based on the studied psychology of consumer movement patterns within a retail space. Circulation is important because it ensures that the consumer moves through the store from front to back, guiding them to important displays and, in the end, to the cashier; a simple model of this idea is sketched below. There are six basic store layouts and circulation plans, each providing a different experience.
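
As a purely illustrative sketch of that idea, the following Python snippet models a floor plan as a directed graph of zones and checks that every zone lies on a route from the entrance and that the flow ends at the cashier. The zone names and the layout itself are invented for the example.

```python
# An illustrative sketch of circulation planning: model the floor plan as a
# directed graph of zones and check that the plan carries the consumer from
# the entrance past key displays to the cashier. The zone names and layout
# are invented for illustration.

from collections import deque

# adjacency list: zone -> zones the aisles lead to next
floor_plan = {
    "entrance": ["new arrivals"],
    "new arrivals": ["centre aisle"],
    "centre aisle": ["seasonal display", "back wall"],
    "seasonal display": ["cashier"],
    "back wall": ["cashier"],
    "cashier": [],
}

def reachable_zones(plan: dict[str, list[str]], start: str) -> set[str]:
    """Breadth-first search over the circulation graph."""
    seen, queue = {start}, deque([start])
    while queue:
        zone = queue.popleft()
        for nxt in plan[zone]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

seen = reachable_zones(floor_plan, "entrance")
# Every zone should be on some route from the entrance: 100% of the space used.
print("unreachable:", set(floor_plan) - seen)   # ideally empty
print("cashier reachable:", "cashier" in seen)  # flow ends at the cashier
```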

Once the overall structure and circulation of the space have been determined, the atmosphere and thematics of the space must be created through lighting, sound, materials and visual branding. Together, these design elements have the greatest impact on the consumer and thus on the level of productivity that can be achieved.

Lighting can have a dramatic impact on a space. It needs to be functional but should also complement the merchandise and emphasize key points throughout the store. The lighting should be layered, with a variety of intensities and fixtures. First, examine the natural light and the impact it has on the space. Natural light adds interest and clarity, and consumers prefer to examine the quality of merchandise in natural light. If no natural light exists, a skylight can be used to introduce it into the retail space. The lighting of the ceiling and roof is the next consideration: it should wash the structural features while creating vectors that direct the consumer to key merchandise selling areas. The next layer should emphasize the selling areas; these lights should be direct but not too bright or harsh. Poor lighting can cause eye strain and an uncomfortable experience for the consumer, so to minimize the possibility of eye strain the ratio of luminance between merchandise selling areas should be kept low (a toy version of this check is sketched below). The next layer complements and brings focus onto the merchandise; this lighting should be flattering to both the merchandise and the consumer. The final layer is functional lighting such as clear exit signs.
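
The luminance-ratio guideline can be illustrated with a toy check like the one below. The luminance values and the 3:1 threshold are assumed example numbers, not a published lighting standard.

```python
# A toy check of the lighting guideline above: keep the luminance ratio
# between adjacent merchandise selling areas modest so the eye is not
# strained moving between them. The 3:1 threshold and the measurements
# are assumed example values, not a published standard.

selling_areas = {            # hypothetical measured luminance, cd/m^2
    "window display": 900,
    "front tables": 450,
    "centre racks": 250,
    "fitting rooms": 120,
}

areas = list(selling_areas.items())
for (name_a, lum_a), (name_b, lum_b) in zip(areas, areas[1:]):
    ratio = max(lum_a, lum_b) / min(lum_a, lum_b)
    flag = "ok" if ratio <= 3.0 else "too harsh"
    print(f"{name_a} -> {name_b}: ratio {ratio:.1f} ({flag})")
```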

Ambiance can then be developed within the atmosphere through sound and audio. The music played within the store should reflect what the target market would be drawn to, and it should also be shaped by the merchandise being marketed. In a lingerie store the music might be soft, feminine and romanticized, whereas in a technology department the music would be more upbeat and more masculine.

Materiality is another key selling tool. The choices made must not only be aesthetically pleasing and persuasive but also functional, with a minimal need for maintenance. Retail spaces are high-traffic areas and are thus exposed to a lot of wear, which means the finishes of the materials should be durable. The warmth of a material will make the space more inviting, and a floor that is firm yet somewhat buoyant will be more comfortable for the consumer to walk on, allowing them to take longer exploring the store. By switching materials throughout the store, zones and areas can be defined; for example, making the path one material and contrasting it against another for the selling areas helps guide the consumer through the store. Colour is also important to consider: it must not overpower or clash with the merchandise, but rather create a complementary background for it. As merchandise changes seasonally, the interior colours should not be trend-based but should have timeless appeal, such as neutral-based colours.

Visual branding of the store will ensure a memorable experience for the consumer to take with them once they leave the store, ensuring that they will want to return. The key factor is consistency: exterior branding and signage should continue into the interior, and together they should attract, stimulate and dramatise the store. To ensure consistency, the typeface should stay the same, with only the font size altering. The interior branding should allow consumers to easily direct themselves through the store, with proper placement of sales signs that draw the consumer in and show exactly where the cashier is located. The branding should reflect what the merchandise is and what the target market would be drawn to.

The final element of a well-executed retail space is the staging of the consumer’s perspective. It is the role of retail design to have total control of the view that the consumer will have of the retail space. From the exterior of a retail store the consumer should have a clear unobstructed view into the interior.

Open-Design Movement (ODM)

The open-design movement involves the development of physical products, machines and systems through the use of publicly shared design information. This includes the making of both free and open-source software (FOSS) and open-source hardware. The process is generally facilitated by the Internet and often performed without monetary compensation. The goals and philosophy of the movement are identical to those of the open-source movement, but are applied to the development of physical products rather than software. Open design is a form of co-creation in which the final product is designed by the users, rather than by an external stakeholder such as a private company.

Sharing of manufacturing information can be traced back to the 18th and 19th centuries. Aggressive patenting put an end to that period of extensive knowledge sharing.
More recently, principles of open design have been related to the free and open-source software movements. In 1997 Eric S. Raymond, Tim O’Reilly and Larry Augustin established “open source” as an alternative expression to “free software,” and Bruce Perens published the Open Source Definition. In late 1998, Dr. Sepehr Kiani (who holds a PhD in mechanical engineering from MIT) realized that designers could benefit from open-source policies, and in early 1999 he convinced Dr. Ryan Vallance and Dr. Samir Nayfeh of the potential benefits of open design in machine design applications. Together they established the Open Design Foundation (ODF) as a non-profit corporation and set out to develop an Open Design Definition.

The idea of open design was taken up, either simultaneously or subsequently, by several other groups and individuals. The principles of open design are similar to those of open-source hardware design, which emerged in March 1998 when Reinoud Lamberts of the Delft University of Technology proposed, on his “Open Design Circuits” website, the creation of a hardware design community in the spirit of free software.

Ronen Kadushin coined the title “Open Design” in his 2004 Master’s thesis, and the term was later formalized in the 2010 Open Design Manifesto.

The open-design movement currently unites two trends. On one hand, people apply their skills and time to projects for the common good, perhaps where funding or commercial interest is lacking, for developing countries, or to help spread ecological or cheaper technologies. On the other hand, open design may provide a framework for developing advanced projects and technologies that might be beyond the resources of any single company or country, involving people who, without the copyleft mechanism, might not otherwise collaborate. A third trend has now emerged, in which these two methods come together to use high-tech open-source tools (e.g. 3D printing) for customized local solutions for sustainable development. Open design holds great potential for driving future innovation, as recent research has shown that stakeholder users working together produce more innovative designs than designers consulting users through more traditional means.

The open-design movement is currently fairly nascent but holds great potential for the future. In some respects design and engineering are even better suited to open collaborative development than the increasingly common open-source software projects, because with 3D models and photographs the concept can often be understood visually. It is not even necessary for project members to speak the same language to collaborate usefully.

However, open design faces certain barriers that software development does not, since software enjoys mature, widely used tools and the duplication and distribution of code cost next to nothing. Creating, testing and modifying physical designs is not quite so straightforward because of the effort, time and cost required to create the physical artefact, although with access to emerging flexible computer-controlled manufacturing techniques the complexity and effort of construction can be significantly reduced (see the tools mentioned in the fab lab article).

Open design is currently a fledgling movement consisting of several unrelated or loosely related initiatives. Many of these organizations are single, funded projects, while a few focus on an area needing development. In some cases (e.g. Thingiverse for 3D-printable designs or Appropedia for open-source appropriate technology), organizations have made an effort to create a centralized open-source design repository, since this enables innovation. Notable organizations include: