Process Optimization – Workshop 2 (Process-Oriented Thinking)
The Appleton Greene Corporate Training Program (CTP) for Process Optimization is provided by Dr. Ogunbiyi, Certified Learning Provider (CLP). Program Specifications: Monthly cost USD$2,500.00; Monthly Workshops 6 hours; Monthly Support 4 hours; Program Duration 12 months; Program orders subject to ongoing availability.
If you would like to view the Client Information Hub (CIH) for this program, please Click Here
Learning Provider Profile
Dr. Ogunbiyi is a Certified Six Sigma Master Black Belt and entrepreneur with two decades of experience in the financial and public service sectors, harnessing the interplay between technology and process to improve operational outcomes. He is the founder of a boutique consultancy specialising in business process management and co-founder of a Software-as-a-Service (SaaS) company that enables public service providers to continuously and measurably improve their interactions with the public.
He has a proven track record of delivering a variety of successful strategic, global, cross-functional programmes. To date, he has led process optimization initiatives that have yielded tens of millions of euros in savings.
In addition, Dr. Ogunbiyi is an academic researcher who has made original contributions to the field of process mining and monitoring. His research interests include how contextual (i.e., case, process, social, and external) factors contribute to the predictive power of process mining models, as well as causal process mining and object-centric process mining, among others.
He obtained a BSc in Computing Science from the University of Greenwich, an MBA from Imperial College Business School and his PhD in Computing Science from the University of Westminster, where he currently serves as a part-time visiting lecturer.
MOST Analysis
Mission Statement
To cultivate an understanding and management of processes for improved outcomes through process-oriented thinking.
To equip participants with the skills and knowledge to effectively design new processes and optimize processes using Lean and Six Sigma methodologies whilst following a structured Scrum change process.
Objectives
01. To explore the essence and advantages of process-oriented thinking in optimizing operations and fostering innovation within organizational structures and roles.
02. To harness Information Technology (IT) as a critical enabler for optimizing processes across diverse industries by improving process sequences, data management, and stakeholder connectivity.
03. To manage and optimize organizational processes throughout their lifecycle by employing Design for Six Sigma methodologies, ensuring high reliability and optimized costs from inception.
04. To optimize existing business processes within organizations using the Six Sigma DMAIC methodology, focusing on the Define and Analyze phases to discover the current state process and quantify associated issues.
05. To enhance process performance and sustainability through the DMAIC methodology, focusing on the Analyze, Improve, and Control phases to identify root causes, develop solutions, and ensure long-term improvement.
06. To implement lean management principles within the organization to maximize customer value and minimize waste, drawing from core practices like waste reduction, 5S, visual management, and the PDCA lean methodology.
07. To enhance the effectiveness of process improvement initiatives by adopting agile change methodologies, focusing on Scrum for its ability to accommodate and adapt to change efficiently.
08. To enhance team collaboration and the delivery of process optimization projects by thoroughly understanding and applying Scrum artifacts, ceremonies, and roles.
09. To enhance project management efficiency, flexibility, and productivity by optimizing Scrum team performance.
Strategies
01. Leverage insights from the seminal research by Davenport and Short, contrasting process-oriented approaches with function-oriented ones to highlight the benefits of improved coordination and IT-enabled optimization.
02. Integrate advanced IT solutions such as Robotic Process Automation (RPA), Intelligent Automation, and low-code platforms to automate routine tasks, enhance data capture and validation, and enable flexible process management and collaboration across large distances.
03. Adopt a disciplined, data-driven approach focusing on the Design for Six Sigma methodologies, specifically IDOV and DMADV, to prevent defects and enhance quality, efficiency, and customer satisfaction across various sectors.
04. Implement the DMAIC (Define, Measure, Analyze, Improve, Control) framework as a structured problem-solving method, leveraging statistical data to inform decisions across each phase, analogous to diagnosing and treating a patient, to guide process optimization projects effectively.
05. Employ statistical analysis and visualization tools in the Analyze phase to identify root causes, innovate and test solutions in the Improve phase, and implement control mechanisms in the Control phase to sustain improvements.
06. Adopt a systematic approach to organizational operations by focusing on continuous improvement and efficiency enhancement, leveraging lean tools such as waste reduction techniques, implementing the 5S framework for workspace optimization, and utilizing visual management for clearer communication and process monitoring.
07. Implement the Scrum framework by focusing on the detailed exploration and application of Scrum artifacts like the Project Charter and Product Backlog, alongside effective engagement in Scrum ceremonies such as Sprint Planning and Daily Scrums, to guide, prioritize, and track project progress.
08. Focus on establishing an optimal team size, prioritizing collective performance and transparency, and incorporating effective communication, agility, and cross-functional collaboration within the team.
Tasks
01. Investigate process optimization techniques, including assessing current states, identifying optimization needs, recognizing IT levers, and implementing process prototyping while considering organizational implications and roles.
02. Develop and implement IT-enabled processes that streamline data collection, ensure accurate data validation, and leverage automation and advanced analytics to improve efficiency, reduce costs, and enhance customer experiences while maintaining data security and ethical standards.
03. Implement the IDOV methodology by identifying customer requirements, designing efficient processes to meet those needs, optimizing for maximum efficiency, and validating the process through pilot testing and capability analysis to ensure it meets Six Sigma quality levels.
04. Conduct a comprehensive process discovery using manual methods such as interviews, workshops, observation, and document review to clearly define the problem, understand current process states, and set the foundation for subsequent phases of measurement, analysis, improvement, and control as per the DMAIC methodology.
05. Utilize Exploratory Data Analysis and Pareto Analysis to pinpoint process deficiencies, develop prioritization matrices, conduct pilot tests for chosen improvements, and establish control charts and response plans to monitor and maintain process enhancements.
06. Conduct regular “waste walks” to identify and eliminate non-value-added activities, organize and maintain work environments using the 5S method, integrate visual management tools for better process visibility, and apply the PDCA cycle for ongoing process evaluation and improvement.
07. Introduce Scrum methodologies into the process improvement workflow, starting with training teams on its core principles, practices, and roles, and then applying these through iterative sprints that focus on continuous delivery of value and adaptability to change.
08. Educate and enable the project team and stakeholders to maintain Scrum artifacts for transparency and value delivery, and actively participate in Scrum ceremonies to ensure continuous improvement and alignment with project objectives.
09. Implement Scrum methodologies by maintaining team sizes of seven or fewer members, fostering a culture of collective achievement over individual accomplishments, and ensuring transparent project management processes.
Introduction
This workshop is designed to cultivate an understanding and management of processes for improved outcomes through process-oriented thinking. It also aims to equip participants with the skills and knowledge to effectively design new processes and optimize existing processes using Lean and Six Sigma methodologies whilst following a structured Scrum change process. The first course outlines the shift from a function-oriented to a process-oriented approach in organizational management, emphasizing the need for detailed understanding and management of processes to identify areas for improvement and innovation. It highlights the benefits of process orientation, such as improved coordination across departments and IT-enabled optimization, over the function-oriented approach, which often results in siloed departments and inefficiencies. It introduces roles like the Process Owner and Case Manager, which are essential for overseeing process efficiency and effectiveness. It emphasizes the need for management buy-in and skills development across the organization to support continuous improvement.
Subsequently, the following course emphasizes the critical role of Information Technology (IT) in enhancing process optimization across various sectors by integrating detailed information into processes, modifying sequences for efficiency, and enabling precise tracking and connections among involved parties. It delves into IT’s transformative impact through automation, highlighting Robotic Process Automation (RPA), Intelligent Automation, and low-code platforms that democratize automation by making it accessible to non-technical users. This course also addresses the challenges and risks associated with IT, including security threats, data privacy concerns, reliance on IT systems, and the ethical use of data, stressing the importance of strategic planning, regular updates, and comprehensive training to mitigate these risks and ensure effective use of IT in process optimization.
The next section in the workshop outlines a comprehensive approach to understanding and implementing Design for Six Sigma (DFSS) within the context of the process lifecycle. Initially introduced in the 1980s by Bill Smith at Motorola, Six Sigma has evolved into a crucial strategy for achieving near-perfect quality across various sectors beyond its manufacturing origins. It operates on two main fronts: DMAIC, for optimizing existing processes, and Design for Six Sigma (DFSS), which focuses on embedding quality into new processes through methodologies like DMADV (Define, Measure, Analyze, Design, Verify) and IDOV (Identify, Design, Optimize, Verify). DFSS, in particular, aims to design processes that are robust against variations, thereby “designing it right the first time” to meet customer needs with high reliability and optimized costs, using detailed methodologies for process design, optimization, and validation to ensure processes meet stringent Six Sigma quality levels.
This is followed by two courses that examine the optimization of existing processes using DMAIC (Define, Measure, Analyze, Improve, Control). The first of these courses provides a detailed exploration of the DMAIC phases, emphasizing the importance of statistical data in informed decision-making across each phase. The Define phase identifies the process’s current state and customer requirements using tools like SIPOC diagrams and Voice of the Customer analysis. The Measure phase quantifies problems through precise data collection, which is crucial for identifying and eliminating inefficiencies. The course addresses challenges in manual process discovery, such as fragmented process knowledge and lack of familiarity with process modeling notation.
The second course delves into the Analyze, Improve, and Control phases, each integral for adjusting and enhancing process life cycles. The Analyze phase scrutinizes collected data to identify root causes of defects using statistical tools like Exploratory Data Analysis and Pareto Analysis to pinpoint necessary process optimizations. The Improve phase then develops and implements solutions to these root causes, employing prioritization matrices and Failure Mode and Effects Analysis (FMEA), focusing on ideation and innovation to enhance process efficiency and effectiveness. Finally, the Control phase ensures these improvements are sustained over time through monitoring mechanisms like control charts and response plans, embedding the changes into the organization’s culture and maintaining the gains to prevent regression. This structured approach facilitates continuous process optimization, aligning closely with strategic goals and enhancing overall operational efficiency.
The next course explores Lean management, a systematic approach aimed at maximizing customer value while minimizing waste, a concept that originated from the Toyota Production System and is now applied across various industries. It encompasses a range of practices, including waste reduction, 5S and visual management, and the PDCA (Plan-Do-Check-Act) methodology, focusing on continuous improvement in efficiency and effectiveness. Lean tools such as waste walks, Kaizen, and visual management tools are instrumental in identifying inefficiencies and fostering a culture of continuous improvement and employee engagement. Additionally, lean management’s integration with methodologies like Six Sigma enhances its effectiveness, allowing organizations to optimize processes by eliminating waste and reducing variability, thereby achieving higher efficiency, quality, and customer satisfaction.
The final section of the workshop explores the crucial role of agile change methodologies, particularly Scrum, in managing process optimization projects, highlighting the importance of structured change processes to mitigate risks, enhance communication, and foster adaptability. Agile principles advocate for customer satisfaction through early, continuous value delivery and embrace change, promoting frequent, collaborative, and iterative work with a focus on empowerment and trust. Scrum, an agile framework, exemplifies these principles through iterative sprints, collaboration, and responsiveness to change, contrasting sharply with traditional waterfall methodologies’ linear, sequential nature. It emphasizes the importance of regular feedback, risk management through early discovery, and the team’s ability to adapt and overcome challenges. The course material covers agile and Scrum’s theoretical underpinnings, practical applications, and comparative analysis with waterfall methodologies. It aims to equip participants with the knowledge to apply these frameworks effectively in optimizing processes and managing change within their organizations.
The second course in this section outlines Scrum’s essential components: artifacts, ceremonies, and roles, each serving a specific purpose in the project management and execution process. Scrum artifacts, such as the Project Charter, Product (Process) Backlog, Sprint Backlog, and Increment, ensure project transparency and guide the team towards value delivery. The course emphasizes the significance of Scrum ceremonies, including Sprint Planning, Daily Scrum, Sprint Review, and Sprint Retrospective, which provide the structural rhythm necessary for navigating complex projects and achieving success. Furthermore, it details the critical roles within Scrum: the Product (Process) Owner, who bridges stakeholders and the development team; the Scrum Master, who ensures adherence to Scrum practices; and the Team Members, who are central to delivering improvements. Collectively, these elements facilitate a dynamic, flexible approach to project management, encouraging continuous improvement and alignment with business values and customer needs.
The final course of the workshop emphasizes Scrum’s role in project management, focusing on efficiency, flexibility, and productivity by optimizing team performance. It outlines the critical factors behind this optimization: optimal team size, an emphasis on collective performance, and transparency. With the ideal team size being seven or fewer members to enhance communication, flexibility, and collaboration, the course highlights the importance of focusing on the team’s collective achievements over individual accomplishments. It advocates for transparency in Scrum through open information sharing, which fosters trust, enables accurate decision-making, and facilitates adaptability. Furthermore, it touches on the concept of Sprint deliverables and the Minimum Viable Process Change (MVPC) as methods to minimize waste and de-risk project delivery by allowing for early and continuous feedback. The course also integrates Scrum with the OODA loop and PDCA cycle for improved agility and decision-making and discusses strategies to increase sprint velocity for better planning and efficiency improvements.
Executive Summary
Chapter 1: What is Process-Oriented Thinking?
Process-oriented thinking emphasizes the importance of understanding and managing the steps leading to a process outcome, focusing on improvement, efficiency gains, or innovation. This approach is critical for optimizing processes and significantly impacts organizational structure and roles, contrasting with the function-oriented approach that organizes a company based on specialized departments. The process-oriented approach fosters improved coordination and IT-enabled optimization across departments, preventing silo mentalities and encouraging holistic system designs that facilitate cross-departmental and even external collaboration.
The importance of process orientation is underscored by the discussion of its advantages over the function-oriented approach, particularly in facilitating better process coordination and leveraging information technology for optimization. The function-oriented approach often leads to isolated departments working without collaboration, while the process-oriented strategy ensures activities are coordinated across various actors, thereby avoiding conflicting decisions and inefficiencies. It promotes designing IT systems that support processes across different departments, thereby enhancing overall efficiency and reducing the risk of defects associated with non-integrated systems.
The course introduces Davenport and Short’s five-step approach to process optimization: create a business vision, identify processes needing optimization, recognize and assess the current state, leverage IT for optimization, and prototype the process. These steps aim to align process optimization initiatives with strategic goals, identify and prioritize processes for optimization, assess and understand the current state to identify gaps, leverage IT advancements like generative AI to enhance processes, and implement changes iteratively while incorporating stakeholder feedback.
Organizational implications of adopting a process-oriented approach include creating new roles, such as the Process Owner and Case Manager, each with specific responsibilities to ensure process efficiency and effectiveness. This approach necessitates a shift towards a matrix organizational structure that balances process and function orientations, requiring management buy-in and cross-functional support. It also underscores the importance of developing hard and soft skills across the workforce to support ongoing process optimization, ensuring processes meet organizational objectives and adapt to changing business needs.
Chapter 2: Technology Enablers
Information Technology (IT) serves as a critical enabler across various industries, significantly enhancing process execution through automation, data capture and validation, and the integration of comprehensive information into organizational processes. IT revolutionizes business operations by implementing tools like Robotic Process Automation (RPA), Intelligent Automation, and low-code platforms, collectively contributing to increased efficiency, productivity, and cost savings. These technologies automate routine tasks, enable handling complex processes through AI and ML, and democratize automation by allowing users with limited technical expertise to create applications and automate processes.
The course further explores IT’s role in improving operational efficiency through effective data capture and validation techniques. It emphasizes the importance of capturing accurate and reliable input data at the process inception and validates this data to ensure its quality. Online forms, surveys, and interactive customer interfaces are highlighted as crucial methods facilitated by IT for efficient data collection. Moreover, data validation techniques such as restricting incomplete form submissions and utilizing drop-down lists enhance the quality of the captured data, thereby supporting the successful execution of processes.
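To make these data capture and validation techniques concrete, here is a minimal Python sketch of the kind of server-side checks described above: rejecting incomplete submissions and constraining inputs to the values a drop-down list would allow. The field names, allowed values, and rules are illustrative assumptions, not part of the course material.

```python
# Minimal sketch of input validation at process inception (hypothetical fields/rules).
ALLOWED_REGIONS = {"EMEA", "APAC", "AMER"}  # mirrors a drop-down list in the form

def validate_submission(form: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the data is accepted."""
    errors = []
    # Reject incomplete submissions: every required field must be present and non-empty.
    for field in ("customer_id", "region", "amount"):
        if not str(form.get(field, "")).strip():
            errors.append(f"Missing required field: {field}")
    # Constrain free text to known values, as a drop-down list would.
    if form.get("region") and form["region"] not in ALLOWED_REGIONS:
        errors.append(f"Unknown region: {form['region']}")
    # Type and range checks improve downstream data quality.
    try:
        if float(form.get("amount", 0)) <= 0:
            errors.append("Amount must be positive")
    except ValueError:
        errors.append("Amount must be numeric")
    return errors

print(validate_submission({"customer_id": "C-17", "region": "EMEA", "amount": "120.50"}))  # []
print(validate_submission({"region": "MARS", "amount": "-3"}))  # three errors
```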
In addition to optimizing data capture and validation, IT significantly enriches process information through advanced data analytics, cloud computing, and sophisticated databases, allowing organizations to access detailed data for informed decision-making, forecasting, and strategic planning, transforming traditional processes into data-driven operations. The discussion includes the discovery and enrichment of data through techniques like data mining and machine learning, with knowledge graphs exemplified as powerful tools for organizing and representing data relationships, thereby aiding in improved data integration, information retrieval, and predictive analytics.
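As a small illustration of how a knowledge graph organizes data relationships, the sketch below stores subject-predicate-object triples in plain Python and traverses them to answer a simple integration question. The entities and relations are hypothetical.

```python
# Toy knowledge graph as subject-predicate-object triples (hypothetical entities).
triples = [
    ("Order-1001", "placed_by", "Customer-17"),
    ("Order-1001", "contains", "Product-A"),
    ("Customer-17", "located_in", "EMEA"),
    ("Product-A", "supplied_by", "Supplier-3"),
]

def related(entity: str) -> list[tuple[str, str, str]]:
    """Retrieve every triple in which the entity appears as subject or object."""
    return [t for t in triples if entity in (t[0], t[2])]

# Simple traversal: which suppliers are reachable from a customer via their orders?
for order, _, customer in [t for t in triples if t[1] == "placed_by"]:
    products = [o for s, p, o in triples if s == order and p == "contains"]
    suppliers = [o for s, p, o in triples if s in products and p == "supplied_by"]
    print(customer, "->", order, "->", suppliers)  # Customer-17 -> Order-1001 -> ['Supplier-3']
```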
Despite the substantial benefits IT brings to process optimization, it also introduces certain limitations and risks, such as security threats, data privacy concerns, over-reliance and dependency on IT systems, and ethical considerations in data use. Organizations are urged to navigate these challenges through strategic planning, risk management, regular updates and maintenance, and comprehensive employee training and support. These measures aim to mitigate risks, ensure the longevity and effectiveness of IT systems, and balance leveraging technology for process improvement and managing the associated risks responsibly.
Chapter 3: Process Design
The process lifecycle, introduced earlier in this training program, is essential for organizations to manage, optimize, and adapt their processes effectively from design to termination or radical redesign. This understanding forms the basis for exploring various tools and methodologies for delivering each lifecycle stage, with a particular focus in this course on Design for Six Sigma (DFSS). DFSS aims to embed quality into the design of new processes, contrasting with optimizing existing processes using the DMAIC methodology. The history of Six Sigma traces back to the 1980s at Motorola, and it has since evolved into a critical strategy for achieving near-perfect quality across various sectors. It was influenced by previous quality improvement methodologies and was significantly adopted and adapted by corporations like General Electric and Honeywell in the 1990s.
Six Sigma’s benefits include improved quality and efficiency, cost reduction, enhanced customer satisfaction, increased employee engagement through training and involvement in continuous improvement efforts, strategic planning capabilities, and flexibility across various sectors. DFSS, with its methodologies such as IDOV (Identify, Design, Optimize, Verify) and DMADV (Define, Measure, Analyze, Design, Verify), is a proactive approach applied to the design of new processes, products, or services. These methodologies ensure Six Sigma quality levels are designed into the process, aiming to prevent defects and reduce variability through detailed, customer-centric process design and optimization.
The DFSS methodologies, IDOV and DMADV, offer frameworks for process design, emphasizing efficiency, thorough analysis, optimization, and verification. IDOV is suitable for streamlined design efforts focusing on customer needs, while DMADV is preferred for projects requiring extensive analysis and verification, offering a comprehensive approach to designing and testing new products or processes. These methodologies guide organizations through identifying customer requirements, designing to meet those needs, optimizing for efficiency, and validating the process design through pilot testing and customer feedback.
The optimization and validation phases in DFSS focus on refining the design for maximum efficiency and effectiveness and ensuring the designed process meets specified limits and customer requirements through rigorous testing and feedback. Techniques such as mistake-proofing (Poka-Yoke) and capability analysis are utilized to minimize defects and assess the process’s capability to produce output within specified limits. Ultimately, the success of the DFSS approach in delivering high-quality, efficient, and customer-satisfying processes depends on a thorough understanding of customer needs, rigorous design and testing methodologies, and continuous improvement based on feedback and analysis.
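The capability analysis mentioned above can be illustrated with a short calculation of the standard Cp and Cpk indices, which compare the process spread and centering against the specification limits. The limits and sample measurements below are invented for illustration.

```python
import statistics

# Capability analysis sketch: can the designed process stay within spec limits?
# Specification limits and sample data are illustrative, not from the course.
LSL, USL = 9.5, 10.5                     # lower/upper specification limits
samples = [10.02, 9.98, 10.11, 9.87, 10.05, 9.93, 10.08, 9.96, 10.01, 10.04]

mu = statistics.mean(samples)
sigma = statistics.stdev(samples)        # sample standard deviation

cp = (USL - LSL) / (6 * sigma)               # potential capability (spread vs. tolerance)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)  # actual capability (accounts for centering)

print(f"mean={mu:.3f}, sigma={sigma:.3f}, Cp={cp:.2f}, Cpk={cpk:.2f}")
# Six Sigma quality corresponds to Cp >= 2.0 (and Cpk >= 1.5 allowing a 1.5-sigma shift).
```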
Chapter 4: Process Discovery
The introduction sets the stage for discussing the necessity of optimizing existing business processes within organizations, highlighting that new processes are rarely designed from scratch except in response to new regulations or market changes. This segment introduces the Six Sigma DMAIC methodology as a systematic, data-driven approach to improve, optimize, and stabilize business processes. The DMAIC methodology, which stands for Define, Measure, Analyze, Improve, and Control, is detailed as a structured problem-solving technique guiding process optimization projects. The analogy of diagnosing and treating a patient explains the sequential phases of DMAIC, emphasizing the importance of statistical data in making informed decisions throughout the process.
The Define phase aims to understand the current state of the process, identify the problem or improvement opportunity, and understand customer requirements using tools like SIPOC diagrams and Voice of the Customer (VOC) analysis. The exploration of manual process discovery methods such as interviews, workshops, observation, and document review is elaborated, underlining the significance of these methods in capturing a comprehensive view of the process. Each method’s preparation, execution, and analysis stages are discussed, revealing how they contribute to identifying patterns, inconsistencies, and areas for process improvement.
Challenges associated with manual process discovery, including fragmented process knowledge, lack of generalization, and unfamiliarity with process modeling notation, are addressed, acknowledging the limitations of manual discovery in achieving an effective and standardized understanding of business processes. These challenges underscore the importance of moving towards automated tools, like descriptive process mining, to mitigate the risks associated with manual process discovery. The Measure phase is then detailed, focusing on quantifying problems identified in the Define phase and setting the foundation for eliminating defects or inefficiencies. The discussion includes the conceptual formula Y = f(x), which expresses a process output Y as a function of its input and process variables x; types and sources of process variation; the significance of distinguishing between common and special cause variations; and the structured approach to data collection planning to ensure relevant, accurate, and sufficient data for analysis.
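As a concrete reading of Y = f(x), the sketch below (Python 3.10+) fits a simple least-squares model of one output against one input and inspects the residual scatter, which corresponds to common-cause variation; a point far outside that scatter would hint at a special cause. The data are illustrative, standing in for what a real data collection plan would gather.

```python
import statistics

# The conceptual formula Y = f(x) made concrete: process output Y (cycle time)
# modeled as a function of one input x (batch size). Data are illustrative.
batch_size = [5, 10, 15, 20, 25, 30]               # input variable x
cycle_time = [12.1, 18.4, 25.2, 31.0, 38.3, 44.1]  # output Y, in hours

fit = statistics.linear_regression(batch_size, cycle_time)  # least-squares fit
print(f"Y ~= {fit.slope:.2f} * x + {fit.intercept:.2f}")

# Residual scatter around the fitted line is common-cause variation; a point far
# outside that scatter would suggest a special cause worth investigating.
residuals = [y - (fit.slope * x + fit.intercept) for x, y in zip(batch_size, cycle_time)]
print("residual std dev:", round(statistics.stdev(residuals), 2))
```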
The course emphasizes the critical role of the Measure phase in the DMAIC methodology, where quantifying the problem through precise measurement tools and understanding process variations lay the groundwork for subsequent analysis and improvement efforts. The Measure phase involves intricate planning for data collection, selecting appropriate variables, understanding types of process variation, and employing suitable data collection tools and sampling methods. This foundation supports the targeted improvement actions in later phases, aiming for a thorough and data-driven approach to optimizing business processes, highlighting the systematic and structured nature of DMAIC as a practical framework for process optimization within organizations.
Chapter 5: Process Adjustment
The DMAIC methodology’s Analyze, Improve, and Control phases are essential for identifying root causes, developing and implementing solutions, and ensuring improvements are sustained. The Analyze phase uses statistical analysis and data visualization tools to examine collected data and identify the root causes of defects or problems. It distinguishes between numeric and categorical data types, employing Exploratory Data Analysis (EDA), and highlights the importance of understanding data distributions and utilizing Pareto Analysis for prioritizing issues. This phase is foundational in pinpointing what needs improvement, setting the stage for devising actionable strategies.
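A minimal sketch of the Pareto Analysis described above: rank defect categories by frequency and accumulate their share to isolate the “vital few” that drive roughly 80% of occurrences. The categories and counts are hypothetical.

```python
from collections import Counter

# Pareto analysis sketch: rank defect categories and find the "vital few"
# that account for ~80% of occurrences. Counts are illustrative.
defects = Counter({"data entry error": 142, "missing approval": 97, "system timeout": 38,
                   "wrong routing": 21, "duplicate record": 12, "other": 9})

total = sum(defects.values())
cumulative = 0
for category, count in defects.most_common():
    cumulative += count
    share = 100 * cumulative / total
    flag = "  <- vital few" if share <= 80 else ""
    print(f"{category:20s} {count:4d}  cum {share:5.1f}%{flag}")
```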
In the Improve phase, the focus shifts to ideating and innovating solutions to the previously identified root causes, employing tools like prioritization and pay-off matrices to assess and select the most effective solutions given limited resources. This phase is critical for generating and refining solutions that can enhance efficiency, reduce costs, and improve customer satisfaction, ensuring that the chosen improvements are aligned with strategic goals and are feasible within time, budget, and personnel constraints. Through pilot testing and simulations, the effectiveness of these solutions is validated, preparing the groundwork for widespread implementation.
The Control phase is dedicated to embedding the improvements within the organization’s operations to maintain the gains achieved. This involves using control charts to monitor process performance and implementing response plans and continuous process monitoring strategies to address deviations or variations. The focus here is on ensuring that the improvements are effective in the short term and sustainable in the long term, thereby securing the benefits of the DMAIC process adjustments. This phase is crucial for the ongoing success of the process improvements, embedding changes into the organization’s culture and practices to achieve lasting benefits.
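The monitoring mechanism at the heart of the Control phase can be sketched as an individuals (I-MR) control chart: estimate sigma from the average moving range, set limits at the mean plus or minus three sigma, and flag any point outside them for the response plan. The daily measurements below are illustrative.

```python
import statistics

# Individuals (I-MR) control chart sketch: estimate sigma from the average moving
# range (MR-bar / 1.128), set 3-sigma limits, and flag out-of-control points.
# Measurements are illustrative daily cycle times after the improvement.
x = [4.1, 3.9, 4.0, 4.2, 3.8, 4.1, 4.0, 5.6, 4.0, 3.9]

center = statistics.mean(x)
mr_bar = statistics.mean(abs(b - a) for a, b in zip(x, x[1:]))
sigma = mr_bar / 1.128                      # standard estimator for individuals charts
ucl, lcl = center + 3 * sigma, center - 3 * sigma

for day, value in enumerate(x, start=1):
    status = "ok" if lcl <= value <= ucl else "OUT OF CONTROL -> invoke response plan"
    print(f"day {day:2d}: {value:4.1f}  {status}")   # day 8 breaches the upper limit
```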
Each phase of the DMAIC methodology—Analyze, Improve, and Control—plays a pivotal role in process optimization, from identifying and analyzing root causes of inefficiencies to implementing solutions and ensuring their sustainability. The methodology emphasizes a structured approach to problem-solving, prioritizing issues based on their impact and feasibility and maintaining improvements through continuous monitoring and Control. This comprehensive approach ensures that process adjustments lead to significant and sustained improvements, driving efficiency, reducing costs, and enhancing customer satisfaction.
Chapter 6: Lean Management
Lean management is a comprehensive approach emphasizing continuous improvement, aiming to deliver maximum value to customers while minimizing waste. Originating from the Toyota Production System, it has become a universally applied methodology across various industries. Its core lies in maximizing customer value by identifying and eliminating waste. To achieve its objectives, lean management incorporates several tools and practices, such as waste reduction, 5S, visual management, and the PDCA (Plan-Do-Check-Act) cycle. These tools help organizations improve efficiency, reduce costs, and enhance customer satisfaction by systematically addressing and mitigating inefficiencies.
Waste reduction is a cornerstone of lean management, focusing on eliminating non-value-adding activities across eight identified types of waste: defects, overproduction, waiting, unutilized talent, transportation, inventory excess, motion, and extra-processing. Organizations can significantly improve their operational efficiency by identifying and eliminating these wastes. Techniques like “waste walks” are used to observe and record inefficiencies, engaging employees to identify and prioritize areas for improvement. This process streamlines operations and enhances product and service quality, directly benefiting the customer.
The 5S system and visual management are vital lean tools that organize the workplace and make information about processes and performance readily accessible and understandable. 5S stands for Sort, Set in Order, Shine, Standardize, and Sustain, each step aimed at creating and maintaining an organized, efficient, and safe work environment. Visual management transforms complex data and workflows into visual formats that are easy to comprehend, fostering a culture of transparency, accountability, and continuous improvement. These practices help maintain focus on efficiency and workflow optimization, contributing to a culture where lean principles are embedded in daily operations.
Lean methodologies, particularly the PDCA cycle, offer a structured problem-solving and continuous improvement approach. This cycle facilitates the identification and resolution of problems through a systematic process of planning, doing, checking, and acting. Lean’s integration with Six Sigma, known as Lean Six Sigma, combines Lean’s focus on efficiency and Six Sigma’s emphasis on quality and precision. This fusion enables organizations to achieve faster, more efficient processes without compromising quality, using tools and techniques from both methodologies to optimize operations. The application of Lean Six Sigma principles allows for eliminating waste and reducing process variation, leading to improved process flow, quality, and overall operational excellence.
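As a conceptual sketch only, the PDCA cycle can be expressed as an iteration that trials a countermeasure, checks the measured result against a target, and either standardizes the change or adjusts and repeats. The metric, target, and countermeasures below are hypothetical.

```python
# Conceptual PDCA loop (illustrative only; real cycles involve people, not functions).
def run_pdca(baseline: float, target: float, countermeasures: list[dict]) -> float:
    """Iterate Plan-Do-Check-Act until the metric reaches the target (lower is better)."""
    metric = baseline
    for cm in countermeasures:
        # Plan: select a countermeasure; Do: trial it and measure the effect.
        metric -= cm["measured_gain"]
        # Check: compare against the target; Act: standardize, or adjust and repeat.
        if metric <= target:
            print(f"{cm['name']}: {metric} meets target {target} -> standardize")
            return metric
        print(f"{cm['name']}: {metric} misses target {target} -> adjust, next cycle")
    return metric

run_pdca(
    baseline=42.0,  # defects per month (hypothetical)
    target=30.0,
    countermeasures=[
        {"name": "checklist at intake", "measured_gain": 6},   # first cycle: misses
        {"name": "poka-yoke form field", "measured_gain": 7},  # second cycle: meets
    ],
)
```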
Chapter 7: Change Process
The workshop materials transition from focusing on optimizing specific processes to examining the “process of change” through agile change methodologies, particularly highlighting the necessity of a structured change process. Without such a process, organizations might face unassessed risks, communication gaps, and resistance to change, potentially jeopardizing the success of change initiatives. Agile methodologies, emphasizing Scrum, are presented as revolutionary approaches to managing and executing projects, prioritizing flexibility, team collaboration, and continuous improvement over the traditional, linear waterfall methodology. This shift aims to address the shortcomings of unstructured change processes by promoting adaptability and transparency throughout the change management process.
Based on the Agile Manifesto, Agile principles prioritize customer satisfaction, embrace change, and encourage frequent delivery of value, collaboration, and maintaining a sustainable pace. As a subset of Agile, Scrum operationalizes these principles through iterative change, collaboration, responding to change, and empowering teams to self-organize and make decisions. The comparison with the waterfall methodology highlights Scrum’s advantages in managing change, including its flexibility to adapt to late-stage changes, incorporate regular feedback, and more effectively manage risks through its iterative nature. This contrasts with the waterfall’s linear approach, which often struggles with change and delays feedback until project completion.
Scrum planning encompasses several stages, from broad aggregate planning to detailed sprint planning, emphasizing setting direction and vision while allowing for flexibility in execution. Key components include roadmap, release, sprint, and capacity planning, ensuring that Scrum teams have a clear yet adaptable action plan. This planning process enables teams to define sprint goals, break down tasks, and assess team capacity to ensure realistic and achievable sprint plans. Additionally, effective task estimation and user stories facilitate clarity, focus on value, and enhanced communication, contributing to a more efficient and effective project management process.
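Capacity planning, one of the components above, often comes down to simple arithmetic: available person-days times productive hours times a focus factor. The sketch below uses illustrative figures; the focus factor and hours-per-day values are assumptions, not Scrum-mandated numbers.

```python
# Sprint capacity planning sketch: available effort for the coming sprint.
# All figures are illustrative assumptions.
team = [
    {"name": "analyst A", "days_available": 9},   # out of a 10-day sprint
    {"name": "SME B", "days_available": 7},       # two days of planned leave
    {"name": "analyst C", "days_available": 10},
]
HOURS_PER_DAY = 6      # productive hours left after meetings and ceremonies
FOCUS_FACTOR = 0.7     # share of time realistically spent on sprint work

capacity_hours = sum(m["days_available"] for m in team) * HOURS_PER_DAY * FOCUS_FACTOR
print(f"sprint capacity: about {capacity_hours:.0f} hours")  # 26 days * 6 h * 0.7 = ~109 h

# The team commits only to backlog items whose summed estimates fit this capacity.
```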
The workshop materials delve into Scrum estimation challenges and solutions, such as Planning Poker and using the Fibonacci sequence for relative sizing to improve accuracy and mitigate cognitive biases. The concept of user story readiness, incorporating the INVEST criteria and the Definition of Done, ensures that user stories are well-prepared for implementation within sprints. These methodologies foster a collaborative, adaptive approach to project management, contrasting with the segmented, sequential process typical of the waterfall methodology. By prioritizing flexibility, team empowerment, and customer collaboration, Scrum emerges as a preferred framework for organizations seeking efficiency and adaptability in their project management processes.
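A small sketch of a Planning Poker round using the Fibonacci scale: estimators reveal cards simultaneously, a wide spread triggers discussion and a re-vote, and a converged round yields the estimate. The convergence rule used here (largest vote no more than twice the smallest) is an illustrative choice, not a fixed Scrum rule.

```python
import statistics

# Planning Poker sketch: Fibonacci-scale votes, convergence check, and estimate.
FIB_SCALE = [1, 2, 3, 5, 8, 13, 21]

def poker_round(votes: list[int]) -> str:
    assert all(v in FIB_SCALE for v in votes), "votes must come from the scale"
    if max(votes) / min(votes) > 2:          # spread too wide: no consensus yet
        return "discuss outliers and re-vote"
    # Converged enough: take the median and snap it up to the nearest scale value.
    median = statistics.median(votes)
    return f"estimate: {min(v for v in FIB_SCALE if v >= median)} points"

print(poker_round([3, 5, 13]))   # discuss outliers and re-vote
print(poker_round([5, 5, 8]))    # estimate: 5 points
```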
Chapter 8: Scrum Practices
The Scrum framework, an Agile project management approach, enhances teamwork and project delivery through structured roles, artifacts, and ceremonies. This framework facilitates transparency, communication, and value delivery by maintaining key artifacts such as the Project Charter, Product (Process) Backlog, Sprint Backlog, and the Increment. These artifacts serve as tools for guiding, prioritizing, and tracking project progress. The Project Charter, for example, outlines the project’s purpose, scope, objectives, and stakeholders, providing a clear direction and ensuring alignment among all stakeholders. The Product (Process) Backlog, a prioritized list of process improvement ideas, acts as the single source of requirements for changes, while the Increment represents the tangible outcome of the sprint’s efforts, demonstrating progress toward the final optimized process.
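To ground the Product (Process) Backlog as a data structure, the sketch below models items with relative value and effort and keeps the list ordered by a simple value-for-effort rule. The fields and the prioritization rule are illustrative; in practice the backlog is ordered by the Process Owner's judgment.

```python
from dataclasses import dataclass

# Sketch of a Product (Process) Backlog as a prioritized list (illustrative fields).
@dataclass
class BacklogItem:
    title: str
    business_value: int      # relative value to stakeholders (higher is better)
    effort_points: int       # relative size estimate

    @property
    def priority(self) -> float:
        return self.business_value / self.effort_points  # simple value-for-effort rule

backlog = [
    BacklogItem("Automate invoice matching", business_value=8, effort_points=5),
    BacklogItem("Simplify approval routing", business_value=9, effort_points=3),
    BacklogItem("Standardize intake form", business_value=5, effort_points=2),
]

# The Process Owner keeps the backlog ordered; the team pulls from the top each sprint.
for item in sorted(backlog, key=lambda i: i.priority, reverse=True):
    print(f"{item.priority:4.1f}  {item.title}")
```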
Scrum ceremonies create a routine and structure for team collaboration, ensuring projects stay on track and continuous improvement is achieved. These ceremonies, including Sprint Planning, Daily Scrum, Sprint Review, and Sprint Retrospective, facilitate team alignment, project tracking, and the adoption of improvements. Sprint Planning allows teams to decide on their goals for the upcoming sprint, while the Daily Scrum ensures day-to-day synchronization and problem-solving. The Sprint Review and Retrospective focus on reviewing accomplishments, adapting plans as necessary, and reflecting on internal processes for future enhancements.
Three primary roles are central to Scrum’s successful implementation: the Product (Process) Owner, the Scrum Master, and the Team Member. Each role comes with distinct responsibilities crucial for smooth operation. The Product (Process) Owner bridges stakeholders and the development team, prioritizing tasks to maximize business value. The Scrum Master facilitates Scrum practices, ensuring the team adheres to Agile methodologies and remains focused and efficient. Team Members, including SMEs and analysts, are responsible for delivering incremental process improvements, embodying self-organization, cross-functionality, and continuous learning.
The synergy between the Scrum roles of Process Owner, Scrum Master, and Team Members drives the framework’s success. The Process Owner sets the vision for process optimization, the Scrum Master supports the team in adhering to Scrum practices, and the Team Members execute the work, delivering improvements. This collaborative and structured approach enables teams to tackle complex projects effectively, ensuring continuous improvement and the delivery of value to customers. Through these defined roles, artifacts, and ceremonies, Scrum fosters an environment of efficiency, adaptability, and focused goal achievement.
Chapter 9: Scrum Optimization
The Scrum framework, grounded in Agile methodologies, emphasizes efficiency, flexibility, and productivity in project management by optimizing team performance through various factors, including team size, collective performance emphasis, and transparency. It posits that an optimal Scrum team comprises seven or fewer members to ensure effective communication, increase flexibility and agility, and foster better focus and collaboration. This smaller team size simplifies the complexity of communication channels, facilitating more manageable and effective interactions among team members, which is crucial for the dynamic and rapid adjustments Scrum teams are often required to make.
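One common way to quantify why seven or fewer members works well, often cited alongside Scrum guidance though not stated in the course text, is the number of pairwise communication channels, n(n-1)/2, which grows quadratically with team size:

```python
# Why seven or fewer? Pairwise communication channels grow quadratically
# with team size: n(n-1)/2.
for n in [3, 5, 7, 9, 12]:
    channels = n * (n - 1) // 2
    print(f"team of {n:2d}: {channels:3d} communication channels")
# A team of 7 has 21 channels; a team of 12 has 66, more than three times as many.
```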
Emphasizing collective performance over individual achievements, Scrum encourages collaboration and knowledge sharing, leading to more innovative solutions and reducing undue pressure on individual team members. The framework supports self-organization, allowing teams the autonomy to manage their work and make decisions. This autonomy increases accountability and empowers team members, enhancing motivation and satisfaction while naturally managing individual performance. Additionally, transcendent purpose and cross-functionality within teams ensure that all efforts are aligned towards a common objective, fostering a sense of working towards a greater good and leveraging diverse skills for more effective problem-solving.
Transparency is integral to the success of Scrum initiatives, fostering trust among team members and stakeholders, enabling accurate decision-making, and facilitating rapid adaptation to change. Scrum artifacts, such as the Kanban board and the Burndown chart, play a significant role in maintaining transparency. These tools provide real-time, visual representations of the team’s workflow, help identify bottlenecks, and allow stakeholders to gauge project progress quickly. This openness is critical for the Scrum framework, as it ensures all team members have access to the same information, aligning efforts towards common goals and enhancing collaboration.
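A Burndown chart, mentioned above, is simple to compute: remaining story points per day plotted against an ideal straight line from the committed total down to zero. The sprint figures below are illustrative.

```python
# Burndown chart sketch: remaining story points per day vs. an ideal line.
# Figures are illustrative for a 10-day sprint committing 40 points.
committed, sprint_days = 40, 10
remaining = [40, 38, 35, 35, 30, 26, 22, 17, 11, 6, 0]  # day 0..10

for day, left in enumerate(remaining):
    ideal = committed * (1 - day / sprint_days)
    trend = "behind" if left > ideal else "on/ahead of plan"
    bar = "#" * left                       # crude text rendering of the chart
    print(f"day {day:2d}: {left:2d} pts  {trend:16s} {bar}")
```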
Sprint deliverables, focusing on Minimum Viable Process Changes (MVPC), aim to minimize inventory waste and de-risk project delivery by ensuring continuous value delivery to the customer. This approach encourages efficient use of resources, early and continuous feedback, and course correction based on stakeholder and customer feedback. Additionally, aligning Scrum with decision-making processes like the OODA loop and the PDCA cycle enhances team agility and responsiveness, fostering a culture of continuous improvement and facilitating the identification and elimination of waste. This integration ensures that Scrum teams can adapt quickly, make informed decisions, and continuously refine their processes to increase efficiency, productivity, and project success.
Curriculum
Process Optimization – WDP2 – Process-Oriented Thinking
- What is Process-Oriented Thinking?
- Technology Enablers
- Process Design
- Process Discovery
- Process Adjustment
- Lean Management
- Change Process
- Scrum Practices
- Scrum Optimization
Distance Learning
Introduction
Welcome to Appleton Greene and thank you for enrolling on the Process Optimization corporate training program. You will be learning through our unique facilitation via distance-learning method, which will enable you to practically implement everything that you learn academically. The methods and materials used in your program have been designed and developed to ensure that you derive the maximum benefits and enjoyment possible. We hope that you find the program challenging and fun to do. However, if you have never been a distance-learner before, you may be experiencing some trepidation at the task before you. So we will get you started by giving you some basic information and guidance on how you can make the best use of the modules, how you should manage the materials and what you should be doing as you work through them. This guide is designed to point you in the right direction and help you to become an effective distance-learner. Take a few hours or so to study this guide and your guide to tutorial support for students, while making notes, before you start to study in earnest.
Study environment
You will need to locate a quiet and private place to study, preferably a room where you can easily be isolated from external disturbances or distractions. Make sure the room is well-lit and incorporates a relaxed, pleasant feel. If you can spoil yourself within your study environment, you will have much more of a chance to ensure that you are always in the right frame of mind when you do devote time to study. For example, a nice fire, the ability to play soft soothing background music, soft but effective lighting, perhaps a nice view if possible and a good size desk with a comfortable chair. Make sure that your family know when you are studying and understand your study rules. Your study environment is very important. The ideal situation, if at all possible, is to have a separate study, which can be devoted to you. If this is not possible then you will need to pay a lot more attention to developing and managing your study schedule, because it will affect other people as well as yourself. The better your study environment, the more productive you will be.
Study tools & rules
Try and make sure that your study tools are sufficient and in good working order. You will need to have access to a computer, scanner and printer, with access to the internet. You will need a very comfortable chair, which supports your lower back, and you will need a good filing system. It can be very frustrating if you are spending valuable study time trying to fix study tools that are unreliable, or unsuitable for the task. Make sure that your study tools are up to date. You will also need to consider some study rules. Some of these rules will apply to you and will be intended to help you to be more disciplined about when and how you study. This distance-learning guide will help you and after you have read it you can put some thought into what your study rules should be. You will also need to negotiate some study rules for your family, friends or anyone who lives with you. They too will need to be disciplined in order to ensure that they can support you while you study. It is important to ensure that your family and friends are an integral part of your study team. Having their support and encouragement can prove to be a crucial contribution to your successful completion of the program. Involve them in as much as you can.
Successful distance-learning
Distance-learners are freed from the necessity of attending regular classes or workshops, since they can study in their own way, at their own pace and for their own purposes. But unlike traditional internal training courses, it is the student’s responsibility, with a distance-learning program, to ensure that they manage their own study contribution. This requires strong self-discipline and self-motivation skills and there must be a clear will to succeed. Those students who are used to managing themselves, who are good at managing others and who enjoy working in isolation, are more likely to be good distance-learners. It is also important to be aware of the main reasons why you are studying and of the main objectives that you are hoping to achieve as a result. You will need to remind yourself of these objectives at times when you need to motivate yourself. Never lose sight of your long-term goals and your short-term objectives. There is nobody available here to pamper you, or to look after you, or to spoon-feed you with information, so you will need to find ways to encourage and appreciate yourself while you are studying. Make sure that you chart your study progress, so that you can be sure of your achievements and re-evaluate your goals and objectives regularly.
Self-assessment
Appleton Greene training programs are in all cases post-graduate programs. Consequently, you should already have obtained a business-related degree and be an experienced learner. You should therefore already be aware of your study strengths and weaknesses. For example, which time of the day are you at your most productive? Are you a lark or an owl? What study methods do you respond to the most? Are you a consistent learner? How do you discipline yourself? How do you ensure that you enjoy yourself while studying? It is important to understand yourself as a learner and so some self-assessment early on will be necessary if you are to apply yourself correctly. Perform a SWOT analysis on yourself as a student. List your internal strengths and weaknesses as a student and your external opportunities and threats. This will help you later on when you are creating a study plan. You can then incorporate features within your study plan that can ensure that you are playing to your strengths, while compensating for your weaknesses. You can also ensure that you make the most of your opportunities, while avoiding the potential threats to your success.
Accepting responsibility as a student
Training programs invariably require a significant investment, both in terms of what they cost and in the time that you need to contribute to study, and the responsibility for successful completion of training programs rests entirely with the student. This is never more apparent than when a student is learning via distance-learning. Accepting responsibility as a student is an important step towards ensuring that you can successfully complete your training program. It is easy to instantly blame other people or factors when things go wrong. But the fact of the matter is that if a failure is your failure, then you have the power to do something about it; it is entirely in your own hands. If it is always someone else’s failure, then you are powerless to do anything about it. All students study in entirely different ways; this is because we are all individuals, and what is right for one student is not necessarily right for another. In order to succeed, you will have to accept personal responsibility for finding a way to plan, implement and manage a personal study plan that works for you. If you do not succeed, you only have yourself to blame.
Planning
By far the most critical contributor to stress is the feeling of not being in control. In the absence of planning we tend to be reactive and can stumble from pillar to post in the hope that things will turn out fine in the end. Invariably they don’t! In order to be in control, we need to have firm ideas about how and when we want to do things. We also need to consider as many possible eventualities as we can, so that we are prepared for them when they happen. Prescriptive Change is far easier to manage and control than Emergent Change. The same is true with distance-learning. It is much easier and much more enjoyable if you feel that you are in control and that things are going to plan. Even when things do go wrong, you are prepared for them and can act accordingly without any unnecessary stress. It is important therefore that you do take time to plan your studies properly.
Management
Once you have developed a clear study plan, it is of equal importance to ensure that you manage the implementation of it. Most of us usually enjoy planning, but it is usually during implementation when things go wrong. Targets are not met and we do not understand why. Sometimes we do not even know if targets are being met. It is not enough for us to conclude that the study plan just failed. If it is failing, you will need to understand what you can do about it. Similarly if your study plan is succeeding, it is still important to understand why, so that you can improve upon your success. You therefore need to have guidelines for self-assessment so that you can be consistent with performance improvement throughout the program. If you manage things correctly, then your performance should constantly improve throughout the program.
Study objectives & tasks
The first place to start is developing your program objectives. These should feature your reasons for undertaking the training program in order of priority. Keep them succinct and to the point in order to avoid confusion. Do not just write the first things that come into your head because they are likely to be too similar to each other. Make a list of possible departmental headings, such as: Customer Service; E-business; Finance; Globalization; Human Resources; Technology; Legal; Management; Marketing and Production. Then brainstorm for ideas by listing as many things that you want to achieve under each heading and later re-arrange these things in order of priority. Finally, select the top item from each department heading and choose these as your program objectives. Try and restrict yourself to five because it will enable you to focus clearly. It is likely that the other things that you listed will be achieved if each of the top objectives are achieved. If this does not prove to be the case, then simply work through the process again.
Study forecast
As a guide, the Appleton Greene Process Optimization corporate training program should take 12-18 months to complete, depending upon your availability and current commitments. The reason why there is such a variance in time estimates is because every student is an individual, with differing productivity levels and different commitments. These differences are then exaggerated by the fact that this is a distance-learning program, which incorporates the practical integration of academic theory as a part of the training program. Consequently all of the project studies are real, which means that important decisions and compromises need to be made. You will want to get things right and will need to be patient with your expectations in order to ensure that they are. We would always recommend that you are prudent with your own task and time forecasts, but you still need to develop them and have a clear indication of what are realistic expectations in your case.
With reference to your time planning:
- consider the time that you can realistically dedicate towards study with the program every week;
- calculate how long it should take you to complete the program, using the guidelines featured here;
- break the program down into logical modules and allocate a suitable proportion of time to each of them; these will be your milestones;
- create a time plan by using a spreadsheet on your computer, or a personal organizer such as MS Outlook; you could also use financial forecasting software;
- break your time forecasts down into manageable chunks of time; the more specific you can be, the more productive and accurate your time management will be;
- finally, use formulas where possible to do your time calculations for you, because this will help later on when your forecasts need to change in line with actual performance.
With reference to your task planning:
- refer to your list of tasks that need to be undertaken in order to achieve your program objectives;
- with reference to your time plan, calculate when each task should be implemented;
- remember that you are not estimating when your objectives will be achieved, but when you will need to focus upon implementing the corresponding tasks;
- ensure that each task is implemented in conjunction with the relevant training modules;
- break each single task down into a list of specific to-dos, say approximately ten for each task, and enter these into your study plan;
- once again, you could use MS Outlook to incorporate both your time and task planning, and this could constitute your study plan; you could also use project management software like MS Project.
You should now have a clear and realistic forecast detailing when you can expect to be able to do something about undertaking the tasks to achieve your program objectives.
Performance management
It is one thing to develop your study forecast, it is quite another to monitor your progress. Ultimately it is less important whether you achieve your original study forecast and more important that you update it so that it constantly remains realistic in line with your performance. As you begin to work through the program, you will begin to have more of an idea about your own personal performance and productivity levels as a distance-learner. Once you have completed your first study module, you should re-evaluate your study forecast for both time and tasks, so that they reflect your actual performance level achieved. In order to achieve this you must first time yourself while training by using an alarm clock. Set the alarm for hourly intervals and make a note of how far you have come within that time. You can then make a note of your actual performance on your study plan and then compare your performance against your forecast. Then consider the reasons that have contributed towards your performance level, whether they are positive or negative and make a considered adjustment to your future forecasts as a result. Given time, you should start achieving your forecasts regularly.
With reference to time management:
* time yourself while you are studying and make a note of the actual time taken in your study plan;
* consider your successes with time-efficiency and the reasons for them, and take these into consideration when reviewing future time planning;
* consider your failures with time-efficiency and the reasons for them, and take these into consideration when reviewing future time planning;
* re-evaluate your study forecast in relation to time planning for the remainder of your training program, to ensure that you continue to be realistic about your time expectations.
You need to be consistent with your time management, otherwise you will never complete your studies. This will be either because you are not contributing enough time to your studies, or because you become less efficient with the time that you do allocate to them. Remember, if you are not in control of your studies, they can become yet another cause of stress for you.
With reference to your task management:
* time yourself while you are studying and make a note of the actual tasks that you have undertaken in your study plan;
* consider your successes with task-efficiency and the reasons for them, and take these into consideration when reviewing future task planning;
* consider your failures with task-efficiency and the reasons for them, and take these into consideration when reviewing future task planning;
* re-evaluate your study forecast in relation to task planning for the remainder of your training program, to ensure that you continue to be realistic about your task expectations.
You need to be consistent with your task management, otherwise you will never know whether you are achieving your program objectives or not.
Keeping in touch
You will have access to qualified and experienced professors and tutors who are responsible for providing tutorial support for your particular training program. So don’t be shy about letting them know how you are getting on. We keep electronic records of all tutorial support emails so that professors and tutors can review previous correspondence before considering an individual response. It also means that there is a record of all communications between you and your professors and tutors and this helps to avoid any unnecessary duplication, misunderstanding, or misinterpretation. If you have a problem relating to the program, share it with them via email. It is likely that they have come across the same problem before and are usually able to make helpful suggestions and steer you in the right direction. To learn more about when and how to use tutorial support, please refer to the Tutorial Support section of this student information guide. This will help you to ensure that you are making the most of tutorial support that is available to you and will ultimately contribute towards your success and enjoyment with your training program.
Work colleagues and family
You should certainly discuss your program study progress with your colleagues, friends and family. Appleton Greene training programs are very practical. They require you to seek information from other people, to plan, develop and implement processes with other people, and to obtain feedback from other people in relation to viability and productivity. You will therefore have plenty of opportunities to test your ideas and enlist the views of others. People tend to be sympathetic towards distance-learners, so don't bottle it all up. Get out there and share it! It is also likely that your family and colleagues are going to benefit from your labors with the program, so they are likely to be much more interested in being involved than you might think. Be bold about delegating work to those who might benefit themselves. This is a great way to achieve understanding and commitment from people who you may later rely upon for process implementation. Share your experiences with your friends and family.
Making it relevant
The key to successful learning is to make it relevant to your own individual circumstances. At all times you should be trying to make bridges between the content of the program and your own situation. Whether you achieve this through quiet reflection or through interactive discussion with your colleagues, client partners or your family, remember that it is the most important and rewarding aspect of translating your studies into real self-improvement. You should be clear about how you want the program to benefit you. This involves setting clear study objectives in relation to the content of the course in terms of understanding, concepts, completing research or reviewing activities and relating the content of the modules to your own situation. Your objectives may understandably change as you work through the program, in which case you should enter the revised objectives on your study plan so that you have a permanent reminder of what you are trying to achieve, when and why.
Distance-learning check-list
Prepare your study environment, your study tools and rules.
Undertake detailed self-assessment in terms of your ability as a learner.
Create a format for your study plan.
Consider your study objectives and tasks.
Create a study forecast.
Assess your study performance.
Re-evaluate your study forecast.
Be consistent when managing your study plan.
Use your Appleton Greene Certified Learning Provider (CLP) for tutorial support.
Make sure you keep in touch with those around you.
Tutorial Support
Programs
Appleton Greene uses standard and bespoke corporate training programs as vessels to transfer business process improvement knowledge into the heart of our clients’ organizations. Each individual program focuses upon the implementation of a specific business process, which enables clients to easily quantify their return on investment. There are hundreds of established Appleton Greene corporate training products now available to clients within customer services, e-business, finance, globalization, human resources, information technology, legal, management, marketing and production. It does not matter whether a client’s employees are located within one office, or an unlimited number of international offices, we can still bring them together to learn and implement specific business processes collectively. Our approach to global localization enables us to provide clients with a truly international service with that all important personal touch. Appleton Greene corporate training programs can be provided virtually or locally and they are all unique in that they individually focus upon a specific business function. They are implemented over a sustainable period of time and professional support is consistently provided by qualified learning providers and specialist consultants.
Support available
You will have a designated Certified Learning Provider (CLP) and an Accredited Consultant and we encourage you to communicate with them as much as possible. In all cases tutorial support is provided online because we can then keep a record of all communications to ensure that tutorial support remains consistent. You will also forward your work to the tutorial support unit for evaluation and assessment. You will receive individual feedback on all of the work that you undertake on a one-to-one basis, together with specific recommendations for anything that may need to be changed in order to achieve a pass with merit or a pass with distinction, and you then have as many opportunities as you may need to re-submit project studies until they meet the required standard. Consequently, the only reason that you should really fail your (CLP) training program is if you do not do the work. It makes no difference to us whether a student takes 12 months or 18 months to complete the program; what matters is that in all cases the same quality standard will have been achieved.
Support Process
Please forward all of your future emails to the designated (CLP) Tutorial Support Unit email address that has been provided and please do not duplicate or copy your emails to other AGC email accounts as this will just cause unnecessary administration. Please note that emails are always answered as quickly as possible but you will need to allow a period of up to 20 business days for responses to general tutorial support emails during busy periods, because emails are answered strictly within the order in which they are received. You will also need to allow a period of up to 30 business days for the evaluation and assessment of project studies. This does not include weekends or public holidays. Please therefore kindly allow for this within your time planning. All communications are managed online via email because it enables tutorial service support managers to review other communications which have been received before responding and it ensures that there is a copy of all communications retained on file for future reference. All communications will be stored within your personal (CLP) study file here at Appleton Greene throughout your designated study period. If you need any assistance or clarification at any time, please do not hesitate to contact us by forwarding an email and remember that we are here to help. If you have any questions, please list and number your questions succinctly and you can then be sure of receiving specific answers to each and every query.
Time Management
It takes approximately 1 year to complete the Process Optimization corporate training program, incorporating 12 x 6-hour monthly workshops. Each student will also need to contribute approximately 4 hours per week over 1 year of their personal time. Students can study from home or work at their own pace and are responsible for managing their own study plan. There are no formal examinations and students are evaluated and assessed based upon their project study submissions, together with the quality of their internal analysis and supporting documents. They can contribute more time towards study when they have the time to do so and less time when they are busy. All students tend to be in full-time employment while studying and the Process Optimization program is purposely designed to accommodate this, so there is plenty of flexibility in terms of time management. It makes no difference to us at Appleton Greene whether individuals take 12-18 months to complete this program. What matters is that in all cases the same standard of quality will have been achieved with the standard and bespoke programs that have been developed.
Distance Learning Guide
The distance learning guide should be your first port of call when starting your training program. It will help you when you are planning how and when to study, how to create the right environment and how to establish the right frame of mind. If you can lay the foundations properly during the planning stage, then it will contribute to your enjoyment and productivity while training later. The guide helps to change your lifestyle in order to accommodate time for study and to cultivate good study habits. It helps you to chart your progress so that you can measure your performance and achieve your goals. It explains the tools that you will need for study and how to make them work. It also explains how to translate academic theory into practical reality. Spend some time now working through your distance learning guide and make sure that you have firm foundations in place so that you can make the most of your distance learning program. There is no requirement for you to attend training workshops or classes at Appleton Greene offices. The entire program is undertaken online, program course manuals and project studies are administered via the Appleton Greene web site and via email, so you are able to study at your own pace and in the comfort of your own home or office as long as you have a computer and access to the internet.
How To Study
The how to study guide provides students with a clear understanding of the Appleton Greene facilitation via distance learning training methods and enables students to obtain a clear overview of the training program content. It enables students to understand the step-by-step training methods used by Appleton Greene and how course manuals are integrated with project studies. It explains the research and development that is required and the need to provide evidence and references to support your statements. It also enables students to understand precisely what will be required of them in order to achieve a pass with merit and a pass with distinction for individual project studies and provides useful guidance on how to be innovative and creative when developing your Unique Program Proposition (UPP).
Tutorial Support
Tutorial support for the Appleton Greene Process Optimization corporate training program is provided online either through the Appleton Greene Client Support Portal (CSP), or via email. All tutorial support requests are facilitated by a designated Program Administration Manager (PAM), who is responsible for deciding which professor or tutor is the most appropriate option for the support required; the tutorial support request is then forwarded on to them. Once the professor or tutor has completed the tutorial support request and answered any questions that have been asked, this communication is returned to the student via email by the designated Program Administration Manager (PAM). This enables all tutorial support, between students, professors and tutors, to be facilitated efficiently and securely through the email account.
You will therefore need to allow a period of up to 20 business days for responses to general support queries and up to 30 business days for the evaluation and assessment of project studies, because all tutorial support requests are answered strictly within the order in which they are received. This does not include weekends or public holidays. Consequently you need to put some thought into the management of your tutorial support procedure, in order to ensure that your study plan is feasible and to obtain the maximum possible benefit from tutorial support during your period of study.
Please retain copies of your tutorial support emails for future reference. Please ensure that ALL of your tutorial support emails are set out using the format suggested within your guide to tutorial support. Your tutorial support emails need to be referenced clearly to the specific part of the course manual or project study which you are working on at any given time. You also need to list and number any questions that you would like to ask, up to a maximum of five questions within each tutorial support email. Remember, the more specific you can be with your questions, the more specific your answers will be, and this will help you to avoid any unnecessary misunderstanding, misinterpretation, or duplication.
The guide to tutorial support is intended to help you to understand how and when to use support in order to ensure that you get the most out of your training program. Appleton Greene training programs are designed to enable you to do things for yourself. They provide you with a structure or a framework and we use tutorial support to facilitate students while they practically implement what they learn. In other words, we are enabling students to do things for themselves. The benefits of distance learning via facilitation are considerable and are much more sustainable in the long-term than traditional short-term knowledge sharing programs. Consequently you should learn how and when to use tutorial support so that you can maximize the benefits from your learning experience with Appleton Greene. This guide describes the purpose of each training function, how to use it, and how to use tutorial support in relation to each aspect of the training program. It also provides useful tips and guidance with regard to best practice.
Tutorial Support Tips
Students are often unsure about how and when to use tutorial support with Appleton Greene. This Tip List will help you to understand more about how to achieve the most from using tutorial support. Refer to it regularly to ensure that you are continuing to use the service properly. Tutorial support is critical to the success of your training experience, but it is important to understand when and how to use it in order to maximize the benefit that you receive. It is no coincidence that those students who succeed are those that learn how to be positive, proactive and productive when using tutorial support.
Be positive and friendly with your tutorial support emails
Remember that if you forward an email to the tutorial support unit, you are dealing with real people. “Do unto others as you would expect others to do unto you”. If you are positive, complimentary and generally friendly in your emails, you will generate a similar response in return. This will be more enjoyable, productive and rewarding for you in the long-term.
Think about the impression that you want to create
Every time that you communicate, you create an impression, which can be either positive or negative, so put some thought into the impression that you want to create. Remember that copies of all tutorial support emails are stored electronically and tutors will always refer to prior correspondence before responding to any current emails. Over a period of time, a general opinion will be arrived at in relation to your character, attitude and ability. Try to manage your own frustrations, mood swings and temperament professionally, without involving the tutorial support team. Demonstrating frustration or a lack of patience is a weakness and will be interpreted as such. The good thing about communicating in writing, is that you will have the time to consider your content carefully, you can review it and proof-read it before sending your email to Appleton Greene and this should help you to communicate more professionally, consistently and to avoid any unnecessary knee-jerk reactions to individual situations as and when they may arise. Please also remember that the CLP Tutorial Support Unit will not just be responsible for evaluating and assessing the quality of your work, they will also be responsible for providing recommendations to other learning providers and to client contacts within the Appleton Greene global client network, so do be in control of your own emotions and try to create a good impression.
Remember that quality is preferred to quantity
Please remember that when you send an email to the tutorial support team, you are not using Twitter or Text Messaging. Try not to forward an email every time that you have a thought. This will not prove to be productive either for you or for the tutorial support team. Take time to prepare your communications properly, as if you were writing a professional letter to a business colleague and make a list of queries that you are likely to have and then incorporate them within one email, say once every month, so that the tutorial support team can understand more about context, application and your methodology for study. Get yourself into a consistent routine with your tutorial support requests and use the tutorial support template provided with ALL of your emails. The (CLP) Tutorial Support Unit will not spoon-feed you with information. They need to be able to evaluate and assess your tutorial support requests carefully and professionally.
Be specific about your questions in order to receive specific answers
Try not to write essays, thinking as you go, when composing tutorial support emails; this can leave the tutorial support unit unclear about what you are actually asking, or what you are looking to achieve. Be specific about the questions that you want answers to, and number your questions. You will then receive specific answers to each and every question. This is the main purpose of tutorial support via email.
Keep a record of your tutorial support emails
It is important that you keep a record of all tutorial support emails that are forwarded to you. You can then refer to them when necessary and it avoids any unnecessary duplication, misunderstanding, or misinterpretation.
Individual training workshops or telephone support
Please be advised that Appleton Greene does not provide separate or individual tutorial support meetings, workshops, or provide telephone support for individual students. Appleton Greene is an equal opportunities learning and service provider and we are therefore understandably bound to treat all students equally. We cannot therefore broker special financial or study arrangements with individual students regardless of the circumstances. All tutorial support is provided online and this enables Appleton Greene to keep a record of all communications between students, professors and tutors on file for future reference, in accordance with our quality management procedure and your terms and conditions of enrolment. All tutorial support is provided online via email because it enables us to have time to consider support content carefully, it ensures that you receive a considered and detailed response to your queries. You can number questions that you would like to ask, which relate to things that you do not understand or where clarification may be required. You can then be sure of receiving specific answers to each individual query. You will also then have a record of these communications and of all tutorial support, which has been provided to you. This makes tutorial support administration more productive by avoiding any unnecessary duplication, misunderstanding, or misinterpretation.
Tutorial Support Email Format
You should use this tutorial support format if you need to request clarification or assistance while studying with your training program. Please note that ALL of your tutorial support request emails should use the same format. You should therefore set up a standard email template, which you can then use as and when you need to. Emails that are forwarded to Appleton Greene, which do not use the following format, may be rejected and returned to you by the (CLP) Program Administration Manager. A detailed response will then be forwarded to you via email usually within 20 business days of receipt for general support queries and 30 business days for the evaluation and assessment of project studies. This does not include weekends or public holidays. Your tutorial support request, together with the corresponding TSU reply, will then be saved and stored within your electronic TSU file at Appleton Greene for future reference.
Subject line of your email
Please insert: Appleton Greene (CLP) Tutorial Support Request: (Your Full Name) (Date), within the subject line of your email.
Main body of your email
Please insert:
1. Appleton Greene Certified Learning Provider (CLP) Tutorial Support Request
2. Your Full Name
3. Date of TS request
4. Preferred email address
5. Backup email address
6. Course manual page name or number (reference)
7. Project study page name or number (reference)
Subject of enquiry
Please insert a maximum of 50 words (please be succinct)
Briefly outline the subject matter of your inquiry, or what your questions relate to.
Question 1
Maximum of 50 words (please be succinct)
Question 2
Maximum of 50 words (please be succinct)
Question 3
Maximum of 50 words (please be succinct)
Question 4
Maximum of 50 words (please be succinct)
Question 5
Maximum of 50 words (please be succinct)
Please note that a maximum of 5 questions is permitted with each individual tutorial support request email.
Procedure
* List the questions that you want to ask first, then re-arrange them in order of priority. Make sure that you reference them, where necessary, to the course manuals or project studies.
* Make sure that you are specific about your questions and number them. Try to plan the content within your emails to make sure that it is relevant.
* Make sure that your tutorial support emails are set out correctly, using the Tutorial Support Email Format provided here.
* Save a copy of your email and incorporate the date sent after the subject title. Keep your tutorial support emails within the same file and in date order for easy reference.
* Allow up to 20 business days for a response to general tutorial support emails and up to 30 business days for the evaluation and assessment of project studies, because detailed individual responses will be made in all cases and tutorial support emails are answered strictly within the order in which they are received.
* Emails can and do get lost. So if you have not received a reply within the appropriate time, forward another copy or a reminder to the tutorial support unit to be sure that it has been received but do not forward reminders unless the appropriate time has elapsed.
* When you receive a reply, save it immediately featuring the date of receipt after the subject heading for easy reference. In most cases the tutorial support unit replies to your questions individually, so you will have a record of the questions that you asked as well as the answers offered. With project studies however, separate emails are usually forwarded by the tutorial support unit, so do keep a record of your own original emails as well.
* Remember to be positive and friendly in your emails. You are dealing with real people who will respond to the same things that you respond to.
* Try not to repeat questions that have already been asked in previous emails. If this happens the tutorial support unit will probably just refer you to the appropriate answers that have already been provided within previous emails.
* If you lose your tutorial support email records you can write to Appleton Greene to receive a copy of your tutorial support file, but a separate administration charge may be levied for this service.
How To Study
Your Certified Learning Provider (CLP) and Accredited Consultant can help you to plan a task list for getting started so that you can be clear about your direction and your priorities in relation to your training program. It is also a good way to introduce yourself to the tutorial support team.
Planning your study environment
Your study conditions are of great importance and will have a direct effect on how much you enjoy your training program. Consider how much space you will have, whether it is comfortable and private and whether you are likely to be disturbed. The study tools and facilities at your disposal are also important to the success of your distance-learning experience. Your tutorial support unit can help with useful tips and guidance, regardless of your starting position. It is important to get this right before you start working on your training program.
Planning your program objectives
It is important that you have a clear list of study objectives, in order of priority, before you start working on your training program. Your tutorial support unit can offer assistance here to ensure that your study objectives have been afforded due consideration and priority.
Planning how and when to study
Distance-learners are freed from the necessity of attending regular classes, since they can study in their own way, at their own pace and for their own purposes. This approach is designed to let you study efficiently away from the traditional classroom environment. It is important however, that you plan how and when to study, so that you are making the most of your natural attributes, strengths and opportunities. Your tutorial support unit can offer assistance and useful tips to ensure that you are playing to your strengths.
Planning your study tasks
You should have a clear understanding of the study tasks that you should be undertaking and the priority associated with each task. These tasks should also be integrated with your program objectives. The distance learning guide and the guide to tutorial support for students should help you here, but if you need any clarification or assistance, please contact your tutorial support unit.
Planning your time
You will need to allocate specific times during your calendar when you intend to study if you are to have a realistic chance of completing your program on time. You are responsible for planning and managing your own study time, so it is important that you are successful with this. Your tutorial support unit can help you with this if your time plan is not working.
Keeping in touch
Consistency is the key here. If you communicate too frequently in short bursts, or too infrequently with no pattern, then your ability to manage your studies will be questioned, both by you and by your tutorial support unit. It is obvious when a student is in control and when one is not, and this will depend on how able you are at sticking to your study plan. Inconsistency invariably leads to non-completion.
Charting your progress
Your tutorial support team can help you to chart your own study progress. Refer to your distance learning guide for further details.
Making it work
To succeed, all that you will need to do is apply yourself to undertaking your training program and interpreting it correctly. Success or failure lies in your hands and your hands alone, so be sure that you have a strategy for making it work. Your Certified Learning Provider (CLP) and Accredited Consultant can guide you through the process of program planning, development and implementation.
Reading methods
Interpretation is often unique to the individual but it can be improved and even quantified by implementing consistent interpretation methods. Interpretation can be affected by outside interference such as family members, TV, or the Internet, or simply by other thoughts which are demanding priority in our minds. One thing that can improve our productivity is using recognized reading methods. This helps us to focus and to be more structured when reading information for reasons of importance, rather than relaxation.
Speed reading
When reading through course manuals for the first time, subconsciously set your reading speed to be just fast enough that you cannot dwell on individual words or tables. With practice, you should be able to read an A4 sheet of paper in one minute. You will not achieve much in the way of a detailed understanding, but your brain will retain a useful overview. This overview will be important later on and will enable you to keep individual issues in perspective with a more generic picture because speed reading appeals to the memory part of the brain. Do not worry about what you do or do not remember at this stage.
Content reading
Once you have speed read everything, you can then start work in earnest. You now need to read a particular section of your course manual thoroughly, by making detailed notes while you read. This process is called Content Reading and it will help to consolidate your understanding and interpretation of the information that has been provided.
Making structured notes on the course manuals
When you are content reading, you should be making detailed notes, which are both structured and informative. Make these notes in a MS Word document on your computer, because you can then amend and update these as and when you deem it to be necessary. List your notes under three headings: 1. Interpretation – 2. Questions – 3. Tasks. The purpose of the 1st section is to clarify your interpretation by writing it down. The purpose of the 2nd section is to list any questions that the issue raises for you. The purpose of the 3rd section is to list any tasks that you should undertake as a result. Anyone who has graduated with a business-related degree should already be familiar with this process.
Organizing structured notes separately
You should then transfer your notes to a separate study notebook, preferably one that enables easy referencing, such as an MS Word document, an MS Excel spreadsheet, an MS Access database, or a personal organizer on your cell phone. Transferring your notes gives you the opportunity to cross-check and verify them, which assists considerably with understanding and interpretation. You will also find that the better you are at doing this, the more chance you will have of ensuring that you achieve your study objectives.
Question your understanding
Do challenge your understanding. Explain things to yourself in your own words by writing things down.
Clarifying your understanding
If you are at all unsure, forward an email to your tutorial support unit and they will help to clarify your understanding.
Question your interpretation
Do challenge your interpretation. Qualify your interpretation by writing it down.
Clarifying your interpretation
If you are at all unsure, forward an email to your tutorial support unit and they will help to clarify your interpretation.
Qualification Requirements
The student will need to successfully complete the project study and all of the exercises relating to the Process Optimization corporate training program, achieving a pass with merit or distinction in each case, in order to qualify as an Accredited Process Optimization Specialist (APTS). All monthly workshops need to be tried and tested within your company. These project studies can be completed in your own time and at your own pace and in the comfort of your own home or office. There are no formal examinations; assessment is based upon the successful completion of the project studies. They are called project studies because, unlike case studies, these projects are not theoretical: they incorporate real program processes that need to be properly researched and developed. The project studies assist us in measuring your understanding and interpretation of the training program and enable us to assess qualification merits. All of the project studies are based entirely upon the content within the training program and they enable you to integrate what you have learnt into your corporate training practice.
Process Optimization – Grading Contribution
Project Study – Grading Contribution
Customer Service – 10%
E-business – 05%
Finance – 10%
Globalization – 10%
Human Resources – 10%
Information Technology – 10%
Legal – 05%
Management – 10%
Marketing – 10%
Production – 10%
Education – 05%
Logistics – 05%
TOTAL GRADING – 100%
Qualification grades
A mark of 90% = Pass with Distinction.
A mark of 75% = Pass with Merit.
A mark of less than 75% = Fail.
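As a minimal illustration of how the grading contributions and grade bands above combine, the following Python sketch computes a weighted final mark; the component marks are invented for the example and are not real results.

# Weighted-grade sketch using the grading contributions listed above.
# The component marks are invented purely for illustration.
weights = {"Customer Service": 0.10, "E-business": 0.05, "Finance": 0.10,
           "Globalization": 0.10, "Human Resources": 0.10,
           "Information Technology": 0.10, "Legal": 0.05, "Management": 0.10,
           "Marketing": 0.10, "Production": 0.10, "Education": 0.05,
           "Logistics": 0.05}
marks = {component: 80 for component in weights}   # e.g. a uniform 80% per study

final = sum(weights[c] * marks[c] for c in weights)
if final >= 90:
    grade = "Pass with Distinction"
elif final >= 75:
    grade = "Pass with Merit"
else:
    grade = "Fail"
print(f"Final mark: {final:.0f}% -> {grade}")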
If you fail to achieve a mark of 75% with a project study, you will receive detailed feedback from the Certified Learning Provider (CLP) and/or Accredited Consultant, together with a list of tasks which you will need to complete, in order to ensure that your project study meets with the minimum quality standard that is required by Appleton Greene. You can then re-submit your project study for further evaluation and assessment. Indeed you can re-submit as many drafts of your project studies as you need to, until such a time as they eventually meet with the required standard by Appleton Greene, so you need not worry about this, it is all part of the learning process.
When marking project studies, Appleton Greene is looking for sufficient evidence of the following:
Pass with merit
A satisfactory level of program understanding
A satisfactory level of program interpretation
A satisfactory level of project study content presentation
A satisfactory level of Unique Program Proposition (UPP) quality
A satisfactory level of the practical integration of academic theory
Pass with distinction
An exceptional level of program understanding
An exceptional level of program interpretation
An exceptional level of project study content presentation
An exceptional level of Unique Program Proposition (UPP) quality
An exceptional level of the practical integration of academic theory
Preliminary Analysis
Additional Reading
THE NEW INDUSTRIAL ENGINEERING: INFORMATION TECHNOLOGY AND BUSINESS PROCESS REDESIGN
Paper by Thomas H. Davenport and James E. Short
At the turn of the century, Frederick Taylor revolutionized the design and improvement of work with his ideas on work organization, task decomposition and job measurement. Taylor's basic aim was to increase organizational productivity by applying to human labor the same engineering principles that had proven so successful in solving technical problems in the workplace. The same approaches that had transformed mechanical activity could also be used to structure jobs performed by people. Taylor, rising from worker to chief engineer at Midvale Iron Works, came to symbolize the ideas and practical realizations in industry that we now call industrial engineering (IE), or the scientific school of management. In fact, though work design remains a contemporary IE concern, no subsequent concept or tool has rivaled the power of Taylor's mechanizing vision. As we enter the 1990s, however, two newer tools of the "information age" are beginning to transform organizations to the degree that Taylorism did earlier. These are information technology – the capabilities offered by computers, software applications, and telecommunications – and business process redesign – the analysis and design of work flows and processes within an organization. The ideas and capabilities offered by these two tools working together have the potential to create a new type of industrial engineering, changing the way the discipline is practiced and the skills necessary to practice it. This article explores in detail the relationship between information technology (IT) and business process redesign (BPR). We report on research conducted in nineteen companies, including detailed case studies from five firms engaged in substantial process redesign. After defining business processes in greater detail, we extract from the experiences of the companies we studied a generic five-step approach to redesigning processes with IT. We then define the major types of processes, along with the primary role of IT in each type of process. Examples are provided throughout of specific efforts within these firms to use IT to radically redesign and upgrade particularly important business processes – some as part of a total business redesign, others as more isolated but still valuable efforts. Finally, management issues encountered at our research sites in using IT to redesign business processes are considered.
SCRUM: THE ART OF DOING TWICE THE WORK IN HALF THE TIME
Book by Jeff Sutherland
Preface
Why Scrum?
I first created Scrum, with Ken Schwaber, twenty years ago, as a faster, more reliable, more effective way to create software in the tech industry. Up to that point – and even as late as 2005 – most software development projects were created using the Waterfall method, where a project was completed in distinct stages and moved step by step toward ultimate release to consumers or software users. The process was slow, unpredictable, and often never resulted in a product that people wanted or would pay to buy. Delays of months or even years were endemic to the process. The early step-by-step plans, laid out in comforting detail in Gantt charts, reassured management that we were in control of the development process, but almost without fail we would fall quickly behind schedule and disastrously over budget.
To overcome those faults, in 1993 I invented a new way of doing things: Scrum. It is a radical change from the prescriptive, top-down project management methodologies of the past. Scrum, instead, is akin to evolutionary, adaptive, and self-correcting systems. Since its inception, the Scrum framework has become the way the tech industry creates new software and products. But while Scrum has become famously successful in managing software and hardware projects in Silicon Valley, it remains relatively unknown in general business practice. And that is why I wrote Scrum: to reveal and explain the Scrum management system to businesses outside the world of technology. In the book, I talk about the origins of Scrum in the Toyota Production System and the OODA loop of combat aviation. I discuss how we organize projects around small teams – and why that is such an effective way to work. I explain how we prioritize projects, how we set up one-week to one-month “sprints” to gain momentum and hold everyone on the team accountable, how we conduct brief daily stand ups to keep tabs on what has been done and on the challenges that have inevitably cropped up. And how Scrum incorporates the concepts of continuous improvement and minimum viable products to get immediate feedback from consumers, rather than waiting until a project is finished. As you’ll see in the pages that follow, we’ve used Scrum to build everything from affordable 100-mile-per-gallon cars to bringing the FBI database systems into the twenty-first century.
Read on. I think you’ll see how Scrum can help transform how your company works, creates, plans and thinks. I firmly believe that Scrum can help to revolutionize how business works in virtually every industry, just as it has revolutionized innovation and speed to market at a dazzling array of new companies and a breathtaking range of new products emerging out of Silicon Valley and the world of technology.
Recommended Pre-Workshop Activities
For key organizational processes, collect existing organizational metrics or KPIs that shed light on the following:
o Voice of the Customer data collected using any of the following methods:
i. Surveys and Questionnaires
ii. Interviews and Focus Groups
iii. Observation
iv. Feedback and Complaints Analysis
v. Market Research
o Voice of the Process data, including (see the computation sketch after this list):
i. Input variables
ii. Outcome measures
iii. Process Variation
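As a minimal example of preparing Voice of the Process data, the Python sketch below computes the mean and spread of an outcome measure; the cycle-time figures are invented for illustration only.

# Voice-of-the-Process sketch: summarizing variation in an outcome measure.
# The cycle times (in days) are invented purely for illustration.
import statistics

cycle_times_days = [3.2, 4.1, 2.8, 6.5, 3.9, 4.4, 5.0, 3.1]
mean = statistics.mean(cycle_times_days)
spread = statistics.stdev(cycle_times_days)   # sample standard deviation
print(f"Mean cycle time: {mean:.2f} days")
print(f"Process variation (std dev): {spread:.2f} days")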
Course Manuals 1-9
Course Manual 1: What is Process-Oriented Thinking?
What is Process-Oriented Thinking?
1.1. Introduction
Process-oriented thinking is an approach that emphasizes the importance of understanding and managing the series of activities or steps that lead to a particular process outcome. It involves a detailed analysis of these processes to identify areas for improvement, efficiency gains, or innovation.
In this course manual, we will explore why the process-oriented approach is essential, how it enables optimization and the implications of this approach on organizational structure and roles.
1.2. The Importance of Process Orientation
For the remainder of the course manual, we will draw heavily from the findings of a seminal research study by Thomas Davenport and James Short entitled "The New Industrial Engineering: Information Technology and Business Process Redesign". This section discusses how the process-oriented approach contrasts with another widely adopted approach – the function-oriented approach – which advocates organizing a company around specialized functions or departments, such as finance, marketing, operations, and human resources. Each department specializes in its respective area, aiming to optimize its tasks and responsibilities. This approach emphasizes departmental expertise and efficiency in each functional area.
Below, we briefly discuss two advantages of the process-oriented approach:
i. Improved Process Coordination
While the function-oriented approach offers some advantages, including the development of in-depth expertise in specific business functions and clarity of roles and responsibilities, it often leads to a silo mentality with departments working in isolation, reducing collaboration and cross-functional cooperation.
This silo mentality can present a significant problem, as the execution of processes usually requires the involvement of actors across several departments. To illustrate, consider the Credit Card Application-to-Activation process introduced in Workshop 1. An essential activity in this process is performing a Know-Your-Customer (KYC) check, which aims to verify the identity of customers, assess their suitability and determine the potential risks of illegal intentions towards the business relationship (see Figure 1.1). The Risk Assessment activity (see Figure 1.1, Step 3) may unearth certain risk factors, and depending on the nature of the risk (e.g., sanctions, fraud, money laundering, etc.), it may be referred to specialist risk teams to assess and decide on the significance of the risk. Where there are multiple risk factors, however, a function orientation may lead to conflicting decisions, with one team recommending rejection of the application while another recommends acceptance. The process-oriented approach avoids this problem by ensuring the activity is holistically coordinated regardless of which actors execute it.
[Figure 1.1: The Know-Your-Customer (KYC) check within the Credit Card Application-to-Activation process]
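To make the coordination point concrete, here is a minimal, hypothetical Python sketch; the team names and decision rules are assumptions for illustration, not the actual KYC logic of any provider. It shows how a process-oriented design can consolidate the verdicts of several specialist risk teams into one consistent, process-level decision.

# Hypothetical sketch of holistic risk-decision coordination in a KYC check.
# Team names and decision rules are illustrative assumptions only.
def coordinate_kyc_decision(team_verdicts: dict[str, str]) -> str:
    # Consolidate specialist-team verdicts into a single decision:
    # any rejection rejects the application, any review request escalates it,
    # otherwise the application proceeds.
    if "reject" in team_verdicts.values():
        return "reject application"
    if "review" in team_verdicts.values():
        return "escalate for joint review"
    return "accept application"

verdicts = {"sanctions": "accept", "fraud": "review", "aml": "accept"}
print(coordinate_kyc_decision(verdicts))   # -> escalate for joint review

Because the consolidation rule lives at the process level rather than inside any one department, conflicting recommendations surface as an explicit escalation instead of two contradictory outcomes.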
ii. IT-Enabled Optimization
As highlighted in Workshop 1, Information Technology (IT) is a critical enabler which supports and facilitates the successful execution of processes. Course Manual 2 develops this point in additional detail. However, a function-oriented approach encourages each department to build and maintain its own IT systems, which often do not interface with the systems developed by other departments. This results in rework (e.g. re-keying data into other IT systems), which is inefficient and increases the likelihood of defects.
However, the process-oriented approach avoids this issue by promoting the design of IT systems to enable processes even when actors spanning different departments execute the various activities.
The point above is all the more significant given that processes often involve not just several internal organizational departments but also actors outside the organization. For example, revisiting the Credit Card Application-to-Activation process from Workshop 1, the credit card provider executes the Credit Check Request-to-Report sub-process when they request a credit check on an applicant from a credit agency (or bureau), which is typically external to the credit card provider. The implication is that organizations need to develop the competencies to effectively manage the performance of activities executed outside of their organization to ensure that they contribute to delivering positive process outcomes. This extends the point made in Workshop 1 that the execution of activities progressively increases the value of the inputs as these are transformed, leading to the concepts of a value chain and value network. Workshop 3 further explores these concepts (which describe the activities within and around an organization that collectively create a product or service). Effectively managing these requires process-oriented thinking.
Exercise 1.1
1.3. Process-Oriented Thinking and Process Optimization
Davenport and Short proposed a five-step approach to redesign processes around IT capabilities. We extend these steps to deal with process optimization as follows:
1.3.1. Create a business vision and process goals
The objective of improving a specific metric in isolation or removing certain inefficiencies often drives process optimization initiatives. However, as discussed in Workshop 1, process excellence can act as a competitive lever enabling the achievement of strategic goals linked to key stakeholder groups (e.g. customers, regulators, shareholders, etc). Process optimization initiatives should be launched as vehicles for achieving these strategic goals.
Management often determines the strategic vision and cascades it to employees. However, multiple dimensions frequently have to be considered simultaneously, resulting in performance targets being set for prioritized processes along the four dimensions identified in Workshop 1 Course Manual 3, i.e. quality, cost, time and flexibility.
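As a small, hypothetical illustration, the sketch below shows what process targets along the four dimensions might look like when recorded in code; the metrics and target values are invented examples, not prescriptions.

# Illustrative performance targets along the four dimensions
# (quality, cost, time, flexibility). All values are invented examples.
targets = {
    "quality":     {"metric": "first-pass yield",    "target": 0.98},
    "cost":        {"metric": "cost per case (USD)", "target": 12.50},
    "time":        {"metric": "cycle time (days)",   "target": 5},
    "flexibility": {"metric": "variants supported",  "target": 3},
}
for dimension, t in targets.items():
    print(f'{dimension}: {t["metric"]} -> {t["target"]}')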
1.3.2. Determine Which Processes Need to Be Optimized
Due to the scarcity of resources required to optimize processes, the organization must make choices regarding the scope and order of optimization for its processes. Davenport and Short's study proposed two approaches to this problem. The first, which they called the exhaustive approach, entails identifying and prioritizing every organizational process. The second, dubbed the high-impact approach, seeks to identify only the most critical processes or those most at odds with the business vision. Their study recommends the high-impact approach, arguing that it is wasteful to identify all processes when the organization is unlikely to possess the resources to optimize them all. However, due to the interconnected nature of processes, we believe it is essential to identify the complete process landscape: problems which manifest in one process may have their root cause in the outputs of an upstream process, and a seemingly trouble-free process could be causing issues in a connected process. As such, we recommend the exhaustive approach (a minimal prioritization sketch follows below). In Workshop 3, we will examine in detail an approach for efficiently identifying the processes across the organization and prioritizing the order in which they should be optimized based on the overall value delivered to the organization.
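The following minimal Python sketch shows one way an exhaustive process inventory might be prioritized; the process names, impact scores and performance gaps are invented, and in practice the criteria would be derived from the business vision.

# Hypothetical prioritization of a process inventory. Impact scores (1-5)
# and performance gaps (0-1) are invented purely for illustration.
processes = [
    {"name": "Application-to-Activation", "impact": 5, "gap": 0.6},
    {"name": "Complaint-to-Resolution",   "impact": 4, "gap": 0.8},
    {"name": "Invoice-to-Payment",        "impact": 3, "gap": 0.2},
]

# One simple scheme: priority = strategic impact x performance gap.
for p in sorted(processes, key=lambda p: p["impact"] * p["gap"], reverse=True):
    print(f'{p["name"]}: priority score {p["impact"] * p["gap"]:.1f}')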
1.3.3. Recognizing and Assessing Current State
To optimize processes effectively, it is essential to assess the current state of the process to determine the gap between expected and actual performance and, ideally, to identify the root cause(s) of the deviation. This ensures that any changes made to the process prevent these problems from recurring. Workshop 7 will explore in detail how to use Diagnostic Process Mining techniques to explain why problems occur and to estimate the effects of the various causal factors.
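As a minimal illustration of current-state assessment, the sketch below derives per-case cycle times from a timestamped event log and compares them with a target; the log and the 5-day target are invented, and real diagnostic process mining (covered in Workshop 7) goes much further.

# Current-state assessment sketch: per-case cycle time from an event log.
# The event log and the 5-day target are invented for illustration.
from datetime import datetime

events = [  # (case id, activity, timestamp)
    ("C1", "Application Received", datetime(2024, 1, 1)),
    ("C1", "Card Activated",       datetime(2024, 1, 4)),
    ("C2", "Application Received", datetime(2024, 1, 2)),
    ("C2", "Card Activated",       datetime(2024, 1, 9)),
]

target_days = 5
cases: dict[str, list[datetime]] = {}
for case_id, _activity, timestamp in events:
    cases.setdefault(case_id, []).append(timestamp)

for case_id, stamps in cases.items():
    cycle = (max(stamps) - min(stamps)).days
    status = "OK" if cycle <= target_days else "GAP - investigate root cause"
    print(f"{case_id}: {cycle} days ({status})")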
However, as mentioned during the examination of the process lifecycle in Workshop 1, it is necessary to radically redesign the process when it is deemed that it significantly fails to meet its critical requirements. Course Manual 3 of this workshop examines a methodology and tool for radical process redesign.
1.3.4. Recognizing IT Levers
As mentioned earlier, IT systems are a powerful enabler for process optimization. However, even when processes are built from a process-oriented perspective, it is vital to consider the ever-increasing capabilities of IT to optimize processes. For example, recent breakthroughs in generative AI have contributed to optimizing customer-facing processes by generating personalized responses or recommendations, resulting in enhanced customer experience while streamlining operations. Generative AI also frees up human resources for more strategic, creative tasks by automating routine tasks, thereby increasing overall productivity and innovation. The ability of generative AI to learn and improve over time means that it continually enhances its process optimization capabilities.
The second course manual of this workshop explores the various ways IT acts as a process enabler.
1.3.5. Process Prototyping
This last step focuses on implementing the changes to the process iteratively, ensuring that regular feedback from key process stakeholders (including process performers) is received and incorporated into the changes. As a result, the value of the changes is delivered incrementally throughout this step rather than in a “big bang” at the end, effectively de-risking the change. Course Manuals 7-9 of this workshop will explore the Scrum Methodology, which encapsulates this approach to delivering process change.
1.4. Organizational Implications of Process-Oriented Thinking
Adopting a process-oriented approach will have implications for the organization, including (but not restricted to) creating new roles. Below, we discuss some of these implications.
1.4.1. Organizational Design
The execution of processes is likely to traverse multiple functions, so it is often necessary to assign someone to manage designated processes and ensure they consistently deliver positive outcomes. Whilst a potential solution to this problem is to configure the organization along process lines, as we shall later establish, process orientation encourages continuous improvement. Hence, a purely process-oriented organizational structure would likely change frequently, potentially resulting in employee uncertainty and morale issues.
The organizational design favoured by most organizations is a matrix of process and function-oriented structures. A key player is the Process Owner, who assumes accountability for process outcomes. As this person will often have to influence and motivate employees who do not report to them, they will require strong powers of persuasion and facilitation. Course Manual 8 further considers the attributes of an ideal Process Owner.
1.4.2. Management Buy-In
As established in Workshop 1, it is essential to secure buy-in from organizational leaders to secure and maintain a commitment to optimizing processes, as management sets the tone for the organization. However, a process optimization initiative spearheaded by a single business function (perhaps because the 'pain' is most acute there) may encounter resistance from other functions impacted by the process. For example, the output format produced by process performers in one department may need to be changed to make it easier for performers in a different department to consume, resulting in improved overall process outcomes; however, the upstream department may resist this change as it requires additional effort. This issue highlights the need for cross-functional management support. As discussed earlier, establishing the link between positive process outcomes and the strategic vision helps obtain this buy-in.
1.4.3. Process Roles
i. Process Owner:
This role is crucial for maintaining the health and efficiency of designated business processes, ensuring they meet organizational objectives and adapt to changing business needs. The person who is assigned this role is accountable for the following:
a. Oversight and Management: The Process Owner oversees a specific process within the organization. They ensure that the process is aligned with the organization’s goals and is functioning efficiently.
b. Process Improvement: They continuously monitor and analyze the process to identify areas for improvement. The Process Owner coordinates the implementation of changes and enhancements to optimize the process, increase efficiency, reduce costs, and improve quality.
c. Defining and Documenting: The Process Owner is accountable for defining the process, including its start and end points, key steps, inputs, and outputs. They are also accountable for ensuring the process is correctly documented, including creating guidelines and manuals containing its execution details.
d. Setting Performance Metrics: They establish performance metrics to measure the effectiveness of the process. These metrics help monitor the process’s performance and guide decision-making for improvements.
e. Training and Communication: The Process Owner ensures that all stakeholders, including employees involved in the process, are trained and informed about the process, its objectives, and any changes or updates.
f. Cross-functional Coordination: They often work across multiple departments, coordinating with various stakeholders to ensure the process integrates seamlessly with other business operations.
g. Compliance and Standards Adherence: Ensuring the process complies with relevant laws, regulations, and industry standards is a critical part of their role.
h. Resource Allocation: The Process Owner determines and allocates the necessary resources, including personnel and technology, to support the process effectively.
i. Problem-Solving: They play a crucial role in addressing any issues or obstacles that arise, employing problem-solving skills to find effective solutions.
j. Stakeholder Engagement: Engaging with key stakeholders, including senior management and process performers, to communicate the value of the process and gather feedback for continuous improvement.
ii. Case Manager
This role is vital in coordinating and overseeing all aspects of a particular case (or process instance), typically for complex processes. Their primary responsibilities include:
a. Assessment: Conducting thorough assessments of the individual’s or client’s needs, including understanding the specific situation, identifying necessary services, and determining the best course of action.
b. Planning: Developing a tailored plan for each case, including setting goals, outlining steps, and determining resources needed to achieve the desired outcome.
c. Coordination: As the central point of communication, they coordinate all activities and services involved in the case. To illustrate, for a process in the healthcare sector, this could include working with various professionals like doctors, therapists, social workers, or legal experts.
d. Monitoring and Follow-up: Regularly monitoring the case's progress and adjusting the plan as needed, which involves staying in regular contact with the customer and other professionals involved in the case.
e. Advocacy: Acting as an advocate for the customer, ensuring that their needs are met and their rights are respected. This might involve liaising with institutions, agencies, or family members.
f. Documentation: Keeping detailed records of all aspects of the case, including plans, services provided, communications, and progress notes.
g. Resource Connection: Connecting customers with appropriate resources and services that they need, which may include medical care, counseling, social services, or legal assistance.
h. Crisis Intervention: Providing support and intervention in crisis situations, helping to stabilize the case and ensure the safety and well-being of the client.
1.4.4. Skills Development
The final organizational implication is developing process design, monitoring, and improvement skills across the general organizational workforce to ensure that processes are continually optimized. These include hard skills, such as the ability to effectively utilize the tools and methodologies detailed in subsequent course manuals (e.g. CTQ trees, FMEA and statistical techniques), and softer skills, such as facilitation and influencing. The softer skills encompass many of the techniques discussed in Workshop 1 for activating the seeking system to motivate others (e.g., facilitating experimentation and serious play). Recognizing the importance of these skills ensures their inclusion in skills gap analyses and that appropriate measures are taken to develop and retain them within the organization.
Case Study: Rank Xerox UK (RXUK)
Problem
Shortly after his appointment, the Managing Director of RXUK identified two critical issues related to the company’s operations. First, the company needed to concentrate on marketing “office systems” instead of its customary reprographics products; second, its strong functional culture and ineffective business processes would seriously impede its expansion.
Process-Oriented Solution
The senior management team of RXUK reviewed the firm’s external environment and mission over a series of off-site meetings. They also identified the critical business processes that the company had to execute to fulfil its purpose. Starting with cross-functional processes, the group reorganized the organization by defining high-level goals and forming teams to specify the data and other resources needed for each process. Rather than focusing on hierarchical authority, they developed career frameworks emphasizing facilitation abilities and cross-functional management.
Because functional skills would still be required in a process organisation and a completely new structure could have required too much organisational change, the MD chose to maintain a somewhat functional formal structure.
However, there was still a great deal of change. Several high-ranking executives left because they could not function in the modified organization. Two new cross-functional senior roles were created, dubbed “Facilitating Directors”; one for business and organisational growth and the other for quality, information systems, and process management.
The organization started considering how internal and external IT systems could enable and assist in process optimization.
The Facilitating Director of Processes and Systems determined that creating information systems centred around processes required a new strategy. He collaborated with an external consultant to fine-tune and validate the identification of relevant processes. Eighteen “macro” business processes (e.g. logistics) and 145 distinct “micro” processes (e.g. fleet management) were the output of the process identification activity.
To determine the prioritization order of processes for system development, the senior management team met again and decided that seven macro processes—installed equipment management, customer order life cycle, customer satisfaction, integrated planning, logistics, financial management, and personnel management—were especially crucial.
With the help of the Information Engineering Facility product’s automated code generation features, the personnel management system was completed in considerably less time than conventional techniques would have required.
Results
After revamping its business processes, RXUK began to see improvements in its financial position. Following a protracted period of stagnation, the company experienced a 20% increase in revenue. The number of jobs that did not entail direct client contact was cut from 1,100 to 800, and the average order delivery time was shortened from 33 days to 6 days. RXUK’s MD attributes much of the improvement to the process change, even though numerous other factors were influencing RXUK’s markets simultaneously.
After learning of RXUK’s success with process redesign, other Xerox divisions started similar initiatives. Sizable cross-functional teams in Xerox’s U.S. product development and marketing departments subsequently optimized their internal processes.
Additionally, senior corporate management at Xerox became more committed to IT-driven process optimization.
Exercise 1.2
Course Manual 2: Technology Enablers
1.1 Introduction
As earlier highlighted in Workshop 1, Information Technology (IT) is a vital enabler which supports and facilitates the successful execution of processes across diverse industries. This course manual delves into how IT, through its vast capabilities, brings detailed information into processes, modifies process sequences, enables efficient capture and dissemination of information, offers precise process tracking, and connects various parties within a process.
Additionally, it explores IT’s impact on transforming unstructured processes, transferring information across large distances, and automating and facilitating complex analytical methods.
1.2 Automation
Information Technology (IT) has revolutionized business operations by significantly contributing to process optimization through automation. Below, we explore the various facets of this transformation, focusing on Robotic Process Automation (RPA), Intelligent Automation, and the role of low-code platforms in democratizing automation.
2.2.1. Robotic Process Automation (RPA)
RPA involves using software robots, or ‘bots’, to automate highly repetitive, routine tasks traditionally performed by human workers, e.g., data extraction and updates or account generation. This technology excels in environments with rule-based, structured processes. Bots can run in attended mode, triggered or managed by humans, or in unattended mode, where they are launched and coordinated automatically (a minimal sketch of such a rule-based task follows the list below).
Key contributions of RPA to process optimization include:
i. Increased Efficiency and Productivity: RPA bots work tirelessly around the clock, significantly reducing the time required to complete tasks, leading to higher throughput and productivity.
ii. Accuracy and Compliance: RPA ensures higher task accuracy by reducing human error. It also helps maintain compliance, as bots adhere strictly to the programmed rules and regulations.
iii. Cost Reduction: Automating routine tasks with RPA can lead to substantial cost savings as it reduces the need for manual labor.
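To make the idea concrete, below is a minimal Python sketch of the kind of rule-based, repetitive task an unattended bot might perform. The account fields, inactivity rule, and data are hypothetical; a real RPA deployment would use a vendor platform driving the account system’s user interface or API rather than plain Python.

```python
from datetime import date

# Hypothetical rule: accounts inactive for more than 365 days are flagged
# for review. A real bot would apply the same fixed rule via the account
# system's UI or API; plain Python is used here purely for illustration.
INACTIVITY_LIMIT_DAYS = 365

def flag_dormant_accounts(rows, today=None):
    """Apply one fixed rule to every record, as an unattended bot would."""
    today = today or date.today()
    flagged = []
    for row in rows:
        last_activity = date.fromisoformat(row["last_activity"])
        if (today - last_activity).days > INACTIVITY_LIMIT_DAYS:
            flagged.append(row["account_id"])
    return flagged

sample = [
    {"account_id": "A-001", "last_activity": "2021-03-01"},
    {"account_id": "A-002", "last_activity": "2024-01-15"},
]
print(flag_dormant_accounts(sample, today=date(2024, 2, 1)))  # ['A-001']
```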
2.2.2. Intelligent Automation
While RPA is largely rule-based, Intelligent Automation combines RPA with Artificial Intelligence (AI) and Machine Learning (ML). This integration allows for automating more complex processes requiring decision-making and learning from unstructured data. The impact of Intelligent Automation includes:
i. Handling Complex Tasks: By integrating AI, these systems can make decisions, process language, recognize images, and handle unstructured data.
ii. Continuous Improvement: Machine Learning algorithms enable these systems to learn from past decisions and continuously improve their performance.
iii. Enhanced Customer Experience: Intelligent Automation can personalize customer interactions, predict customer needs, and provide more efficient services.
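As an illustration of this pattern, the sketch below shows a common intelligent-automation design: a classifier handles high-confidence cases automatically and escalates the rest to a human. The confidence threshold and the toy stand-in for a trained model are assumptions; a real system would call a trained NLP/ML model or service.

```python
# Intelligent-automation pattern: auto-handle high-confidence predictions,
# escalate the rest to a person.
CONFIDENCE_THRESHOLD = 0.90  # assumed threshold

def route_request(text, classify):
    label, confidence = classify(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-handled as '{label}'"
    return "escalated to a human agent"

def toy_classifier(text):
    """Stand-in for a trained model returning (label, confidence)."""
    if "refund" in text.lower():
        return "refund_request", 0.95
    return "general_enquiry", 0.60

print(route_request("Please process my refund", toy_classifier))
print(route_request("Hello, I have an unusual question", toy_classifier))
```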
2.2.3. Low-Code Platforms
Low-code platforms have played a pivotal role in democratizing automation by making it accessible to a broader range of users, including those with limited technical expertise. These platforms provide a visual development environment to create applications and automate processes. The democratization through low-code platforms is characterized by:
i. Ease of Use: Drag-and-drop interfaces and pre-built templates allow non-technical users to create applications and automate processes without deep programming knowledge.
ii. Rapid Development and Deployment: Low-code platforms enable quick development and deployment of applications, significantly reducing the time-to-market.
iii. Empowering Business Users: These platforms empower business users to take charge of their automation needs, leading to solutions that align with business requirements.
1.3 Data Capture and Validation
Business processes typically require capturing, storing, and disseminating information throughout the execution lifecycle. IT enables these through communication tools, collaborative platforms, and information management systems, ensuring that critical information is shared and accessible across different levels of an organization. The birth of the digital era has accelerated this trend, introducing a paradigm shift in how data is processed, stored, and accessed, leading to significant improvements in operational efficiency.
This capability is crucial in maintaining transparency, enhancing collaboration, and ensuring all stakeholders align with process goals.
Below, we explore the pivotal role of IT in enhancing process optimization, particularly focusing on how it facilitates data capture from process inception and improves input quality through data validation techniques.
2.3.1. Data Capture
One of the key areas where IT contributes to process optimization is the capture of input data, which triggers the process. This data is critical for the successful execution of the process and is typically transformed throughout its lifecycle. IT enables various methods for data capture, such as online forms, surveys, and feedback tools, making it easier and more convenient for customers to provide their information.
i. Online Forms and Surveys
IT enables the creation of dynamic online forms and surveys that customers can easily access. These tools make it simple for customers to submit their information and businesses to collect and organize data efficiently, e.g., on customer experience.
ii. Interactive Customer Interfaces
Interactive customer interfaces such as chatbots and virtual assistants leverage IT to engage customers conversationally, allowing for a more natural and effective data collection process.
2.3.2. Data Validation
IT has also provided the capability to validate data at the point of entry, which is crucial for ensuring the quality and reliability of the data captured. IT facilitates various data validation methods to enhance input quality, illustrated by the techniques below and the short sketch that follows them.
i. Restricting Incomplete Form Submissions
One common technique is not permitting forms with missing fields to be submitted, ensuring that all necessary data is collected before processing. This is a form of mistake-proofing, a concept which ideally prevents defects from occurring or ensures that they are caught close to the source of the defect when they occur. Mistake-proofing (also referred to as poka-yoke) will be explored in further detail in Course Manual 3.
ii. Use of Drop-down Lists
Drop-down lists restrict input to predefined options, ensuring that the data entered is valid and consistent, improving data quality and simplifying user data entry.
iii. Data Derivation Techniques
Data derivation involves using existing data to infer or calculate additional information, significantly saving processing time and reducing the need for manual data entry. An example is using a customer’s postal code to automatically fill city and state fields.
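The short Python sketch below illustrates all three techniques against a hypothetical application form: required-field checks block incomplete submissions, an allowed-values list plays the role of a drop-down, and a toy postcode table derives location fields. All field names and data are assumptions for illustration.

```python
# Hypothetical form validation combining the three techniques above.
REQUIRED_FIELDS = {"name", "email", "postal_code", "product"}
ALLOWED_PRODUCTS = {"mortgage", "loan", "savings"}        # the 'drop-down' options
POSTCODE_TABLE = {"SW1A": ("London", "Greater London")}   # toy derivation lookup

def validate_submission(form):
    missing = REQUIRED_FIELDS - form.keys()
    if missing:  # mistake-proofing: reject incomplete submissions at the source
        raise ValueError(f"Missing required fields: {sorted(missing)}")
    if form["product"] not in ALLOWED_PRODUCTS:
        raise ValueError(f"Invalid product: {form['product']!r}")
    # Data derivation: infer city/region from the postcode prefix.
    city, region = POSTCODE_TABLE.get(form["postal_code"][:4], ("unknown", "unknown"))
    return {**form, "city": city, "region": region}

print(validate_submission({"name": "A. Client", "email": "a@example.com",
                           "postal_code": "SW1A 1AA", "product": "loan"}))
```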
1.4 Enriching Process Information
Processes in the service sector often add value by transforming data into higher-value information. One of the critical contributions of IT to process optimization is its ability to integrate extensive and detailed information into organizational processes. Through data analytics, cloud computing, and sophisticated databases, IT provides access to granular data that can be utilized for decision-making, forecasting, and strategic planning. This infusion of data transforms traditional processes into more informed, data-driven ones.
Below, we explore this topic in additional detail, focusing on tools such as knowledge graphs, social networks, and use cases that have been applied to increase quality and save time.
2.4.1. Discovering and Enriching Data
Recent IT innovations have made it possible to efficiently gather and analyze vast amounts of data, leading to more informed decision-making. This discovery involves identifying data sources, extracting relevant information, and preparing the data for analysis. Data enrichment, on the other hand, involves enhancing, refining, or improving raw data, making it more valuable for specific purposes. IT facilitates these processes through advanced algorithms, data mining techniques, and machine learning models.
Knowledge Graphs
Knowledge graphs have emerged as powerful tools for organizing and representing data relationships. They are essentially large networks of interconnected data points that help understand the context and relationships between different data elements. Knowledge graphs consist of nodes representing entities such as individuals, locations, documents, other resources, etc. Links (or lines) between the nodes (also called edges) represent the relationship between them. These relationships may be named and possess a direction (See Figure 2.1). This combination of nodes and edges allows for more intuitive data organization and retrieval, making performing complex queries and analysis easier.
In process optimization, knowledge graphs enable:
i. Improved Data Integration: By connecting disparate data sources, knowledge graphs provide a more unified view of data, aiding in more cohesive analysis and decision-making.
ii. Enhanced Information Retrieval: They improve the accuracy and relevance of information retrieved, as the context and relationships between data points are better understood.
iii. Predictive Analytics: Knowledge graphs, combined with AI, can predict trends and patterns, leading to proactive rather than reactive decision-making.
Knowledge graphs have enhanced process execution of the following use cases, among others:
i. Information Search: Knowledge graphs have transformed information search, particularly when combined with AI and machine learning algorithms. Search algorithms are now more sophisticated, providing more accurate and relevant results based on the user’s context and past behavior.
ii. Data Cataloging: Knowledge graphs have automated the process of data cataloging, making it easier to store, retrieve, and manage data, leading to better data governance and compliance.
iii. Single View of X: Whether it’s a single view of the customer, product, or supplier, knowledge graphs enable organizations to consolidate data from multiple sources into a unified view. This comprehensive perspective is crucial for personalized marketing, inventory management, and customer service.
Figure 2.1: Example Knowledge Graph
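To illustrate the structure described above, the sketch below represents a tiny knowledge graph as (subject, relation, object) triples with named, directed edges, and answers a simple “single view of the customer” query by following edges out of one node. The entities are hypothetical, and a production system would normally use a graph database or a dedicated graph library rather than plain Python.

```python
# A tiny knowledge graph as (subject, relation, object) triples.
triples = [
    ("Ada Lopez", "PLACED", "Order-1001"),
    ("Order-1001", "CONTAINS", "Laptop X1"),
    ("Laptop X1", "SUPPLIED_BY", "Acme Ltd"),
    ("Ada Lopez", "LIVES_IN", "Leeds"),
]

def neighbours(entity, relation=None):
    """Follow named, directed edges out of a node."""
    return [obj for subj, rel, obj in triples
            if subj == entity and (relation is None or rel == relation)]

# 'Single view of the customer': everything directly linked to one node.
print(neighbours("Ada Lopez"))               # ['Order-1001', 'Leeds']
print(neighbours("Order-1001", "CONTAINS"))  # ['Laptop X1']
```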
1.5 Transforming Unstructured Activities and Processes
Previously, human judgment and decision-making were required to execute unstructured activities, i.e., activities that are not predefined or predictable. Examples of these activities include responding to customer queries, problem-solving, and managing issues.
However, IT tools and systems, like artificial intelligence (AI) and machine learning, have become proficient at analyzing and learning from unstructured data, including identifying patterns and insights that were previously inaccessible. This capability allows organizations to structure and optimize these processes for greater efficiency and effectiveness.
For instance, AI-driven customer service chatbots can handle many customer queries, learning and adapting over time, speeding up response times and freeing human agents to focus on more complex issues.
1.6 Transferring Information Across Large Distances
The ability to rapidly and reliably transfer information across large distances is critical in a globalized economy, e.g., for sharing data between different branches of a multinational corporation or collaborating with international partners.
Technologies like cloud computing and advanced telecommunications networks (such as 5G) have revolutionized this aspect. They enable real-time data sharing and collaboration, irrespective of geographical barriers.
These technologies have enabled companies to operate 24/7 (i.e., a “follow-the-sun” model), leveraging talent and resources from across the globe, thus optimizing productivity and innovation.
1.7 Enabling Changes in Process Sequences
IT facilitates flexibility in process management. With advanced software solutions, processes that were once linear and rigid can now be re-sequenced and modified to improve efficiency and output. For example, IT systems enable just-in-time inventory management in manufacturing, allowing companies to reduce holding costs and respond swiftly to market demands.
In Workshop 10, we will explore prescriptive process monitoring approaches, which recommend appropriate interventions for process performers to proactively prevent the occurrence of problems, e.g., altering the order in which activities are executed to accelerate the completion of a case.
Finally, in Workshop 11, we examine Augmented Process Management, which enables the design of an autonomous system that can act independently within a defined framework, adjust its actions to improve process performance continuously and react to changes in its environment.
1.8 Detailed Process Tracking
IT enables meticulous tracking of various processes. Through real-time monitoring and reporting tools, businesses can track the progress of each stage in a process, identify bottlenecks, and take corrective actions promptly. In Workshop 9, we will explore predictive process monitoring, which combines historical process data with data from in-flight cases to provide operational support by predicting a process metric of interest (e.g. remaining time or cost) or the future state of a process instance (e.g. outcome or next step). This aspect of IT ensures process adherence and helps continuously improve process efficiency.
Organizations may also make some of this information available to customers, e.g., providing case status via self-service portals. As established in Workshop 1 Course Manual 5, customers value the transparency and accessibility such features offer.
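As a simple illustration of what such tracking enables, the sketch below takes a hypothetical event log of (case, activity, timestamp) records and computes the average time cases spend in each stage, surfacing the bottleneck. In practice, this data would come from a workflow system or a process mining tool; the activities and timestamps here are invented.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case_id, activity, timestamp).
events = [
    ("C1", "Received", "2024-01-01 09:00"), ("C1", "Reviewed", "2024-01-01 17:00"),
    ("C1", "Approved", "2024-01-03 09:00"),
    ("C2", "Received", "2024-01-02 09:00"), ("C2", "Reviewed", "2024-01-02 11:00"),
    ("C2", "Approved", "2024-01-05 11:00"),
]

by_case = defaultdict(list)
for case_id, activity, ts in events:
    by_case[case_id].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))

# Time spent in a stage = gap between entering it and entering the next one.
stage_hours = defaultdict(list)
for steps in by_case.values():
    steps.sort()
    for (t0, stage), (t1, _) in zip(steps, steps[1:]):
        stage_hours[stage].append((t1 - t0).total_seconds() / 3600)

for stage, hours in stage_hours.items():
    print(f"{stage}: average {sum(hours) / len(hours):.1f} hours in stage")
# Here 'Reviewed' dominates the cycle time, flagging it as the bottleneck.
```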
1.9 Connecting Parties to a Process
Perhaps one of the most significant contributions of IT is its ability to connect disparate parties involved in a process. Whether suppliers, customers, or internal teams, IT systems provide interaction, negotiation, and transaction platforms. This interaction typically occurs throughout the process lifecycle (e.g. notification to the customer at crucial process milestones).
Collaborative tools such as email, instant messaging applications and process portals ensure that the various process actors can interact seamlessly. This connectivity provides a cohesive process flow and enhances the overall efficiency of operations.
Exercise 2.1
1.10 Limitations and Risks
While IT significantly enhances process efficiency and effectiveness, it also introduces specific limitations and risks that organizations must navigate, several of which are discussed below:
2.10.1. Security Risks
i. Cybersecurity threats
Cybersecurity threats are a predominant and ever-evolving risk in IT. These threats encompass various malicious activities aimed at accessing, altering, or destroying sensitive information, extorting money from users, or interrupting normal business processes. One of the most common cybersecurity threats is malware, including viruses, worms, and ransomware, which can infiltrate and compromise systems. Phishing attacks, another prevalent threat, involve tricking recipients into revealing personal or financial information using fraudulent emails or phone calls. Additionally, advanced persistent threats (APTs) pose significant risks as they involve prolonged and targeted cyberattacks, where attackers infiltrate a network and remain undetected for an extended period to gain confidential information continuously. The complexity and sophistication of these threats and the rapid pace of technological advancements create a challenging landscape for IT professionals who must constantly adapt their security measures to protect against these evolving risks.
ii. Data Privacy Concerns
IT data privacy concerns are primarily focused on protecting personal and sensitive data from unauthorized access and exposure. Organizations typically collect, store, and process vast quantities of personal data, making data privacy critical. Risks to data privacy can arise from various sources, including inadequate data protection policies, weak cybersecurity measures, and human error. Data breaches, where sensitive information is accessed or disclosed without authorization, can have severe consequences, ranging from identity theft to financial loss for individuals and reputational damage and legal repercussions for organizations. Furthermore, the emergence of technologies such as cloud computing and the Internet of Things (IoT) has expanded the potential attack surface for data breaches, complicating data privacy efforts. Regulatory compliance, such as adherence to the General Data Protection Regulation (GDPR) in the European Union, is a significant aspect of addressing data privacy concerns, necessitating robust data protection strategies and practices to safeguard personal information against unauthorized access and ensure privacy rights are respected.
2.10.2. Reliance and Dependency
The over-reliance on IT systems in modern organizational operations presents notable risks, particularly concerning system failures and a diminished understanding of manual processes. This dependence often leads to a scenario where critical business functions are tightly coupled with IT systems, making them highly vulnerable to disruptions caused by system failures. Such failures, whether due to technical faults, cyber-attacks, or software glitches, can lead to significant operational downtime, financial losses, and damage to customer trust. Moreover, an over-reliance on automated processes can result in a workforce that lacks the knowledge or skills to manage or control these processes manually. This gap in understanding becomes particularly problematic when IT systems fail or when unique situations arise that automated systems are not equipped to handle. The absence of manual process proficiency hinders the ability to respond effectively to IT system failures and limits the organization’s flexibility and adaptability in managing unforeseen challenges. Therefore, while IT systems offer efficiency and precision, balancing them with a competent understanding of manual processes and contingency planning is crucial to mitigate these risks.
2.10.3. Ethical Use of Data
The ethical use of data analytics and artificial intelligence (AI) in business processes is a critical consideration, as these technologies have the power to significantly impact the business landscape and society. Ethical data analytics and AI practices revolve around principles of fairness, transparency, accountability, and respect for privacy. Organizations need to ensure that data is collected, analyzed, and used in a manner that respects individual privacy rights and avoids biases that could lead to discriminatory outcomes. Transparency in how AI algorithms make decisions and use data is vital to maintain trust and enable users to understand and challenge outcomes. Moreover, businesses must be accountable for the decisions made by their AI systems, particularly in areas affecting people’s lives and livelihoods, like employment, healthcare, and access to services. Ethical AI also involves ongoing monitoring to identify and correct unintended consequences, ensuring that AI-driven decisions support equitable and beneficial outcomes for all stakeholders. Adopting these ethical practices is not just a regulatory compliance issue but a commitment to corporate social responsibility, fostering a technology-driven future that is inclusive, fair, and respectful of human rights.
2.10.4. Mitigating Risks and Overcoming Limitations
i. Strategic Planning and Risk Management
Strategic planning and risk management form the backbone of effective IT management. This approach involves identifying potential IT risks, including cybersecurity threats, data breaches, system failures, and technological obsolescence, and then developing a comprehensive strategy to address them. Effective strategic planning requires a clear understanding of the organization’s objectives, the role of IT in achieving these objectives, and the potential barriers that IT-related issues can pose. On the other hand, risk management involves implementing policies and procedures to reduce the impact of identified risks, including establishing a robust IT governance framework, conducting regular risk assessments, and developing disaster recovery and business continuity plans. By proactively managing risks, organizations can ensure that their IT systems support and enhance their operational resilience and strategic objectives.
ii. Regular Updates and Maintenance
Regular updates and maintenance of IT systems are critical in mitigating risks and ensuring the longevity and effectiveness of these systems. Outdated software and hardware pose security risks, leading to inefficiencies and incompatibility issues. Regular updates ensure systems are protected against the latest cyber threats and run on the most current versions, often including performance improvements and new features. Maintenance involves routine checks and repairs to prevent system failures and downtime. This proactive approach can identify potential issues before they become significant problems, minimizing disruptions to business operations. Regular updates and maintenance optimize system performance, ensuring that IT infrastructure remains robust and reliable.
iii. Training and Support
Training and support play a crucial role in maximizing the effectiveness of IT systems and minimizing risks. Training ensures that employees know the latest IT tools and practices, enabling them to use technology efficiently and securely. A well-trained workforce is less likely to fall prey to cybersecurity threats like phishing scams and is better equipped to utilize IT resources effectively. Regular training sessions on IT policies, data protection, and emerging technologies can foster a culture of IT proficiency and security awareness within the organization. Additionally, robust IT support is essential for resolving technical issues quickly and efficiently, minimizing downtime, and helping employees navigate complex IT systems. Effective support services also include providing guidance and assistance in using IT applications, ensuring employees can leverage technology to its fullest potential.
Exercise 2.2
Case Study: Camping World, USA
Problem
As the world’s largest retailer of recreational vehicles (RVs), Camping World recognizes that maintaining a competitive edge depends on offering outstanding customer service. To provide this service, the company depends primarily on its contact centres; however, a spike in activity after the COVID-19 outbreak exposed several shortcomings in its existing infrastructure. Gaps in agent management and response times became increasingly noticeable as volume and traffic rose.
The company realized that customer response times were unsatisfactory, driven by increased volume and a lack of transparency regarding average response times, agent performance, and chat capacity.
Solution
The solution expanded the scope of query handling and phone capabilities by integrating a conversational cloud platform and deploying it across the organization’s internal network. This enabled customers to communicate with a virtual agent, freeing human agents to handle more complicated conversations. The virtual agent, named Arvee, has capacity management and dynamic routing features that guarantee quicker and more effective response times. Arvee’s lead generation feature made it easy for live agents to monitor and proactively follow up on client questions, particularly outside office hours.
When the automated routing assistant detects a caller’s intent, the caller can choose to be connected to an available live agent.
The solution has also made live agents more efficient at handling online and SMS messages, as the agent desktop integration and Arvee’s proactive client data collection allow them to manage numerous chats simultaneously.
Result
Customer engagement rates have increased dramatically since the installation, and fewer interactions are abandoned. Customers experience shorter wait times and quicker responses, and agent efficiency has risen significantly: able to handle multiple chats simultaneously, agents achieved a 33% increase in productivity. Customer engagement scores rose by 40%, and wait times at Camping World decreased to 33 seconds.
Around eight thousand retail chat interactions (57% of total interactions) were resolved by the chatbot and did not require a transfer to a live agent.
Course Manual 3: Process Design
1.11 Introduction
The concept of the process lifecycle was introduced in Workshop 1 Course Manual 1, which describes the progression of a process from its design to its termination (or radical redesign) – See Figure 3.1. Organizations must understand this lifecycle to manage, optimize, and adapt these processes effectively.
The following five course manuals will examine several tools and methodologies for delivering each lifecycle stage. In this course manual, we will explore the design of processes using Design for Six Sigma (DFSS).
1.12 History of Six Sigma
Six Sigma, a methodology developed to improve business processes, has evolved into a key strategy for companies striving for near-perfect quality. The inception of Six Sigma dates back to the 1980s at Motorola, where engineer Bill Smith, seeking a method to improve quality and reduce defects, pioneered the approach. The idea was initially to enhance manufacturing processes, but the methodology quickly expanded beyond this scope. Earlier quality improvement methodologies like Total Quality Management (TQM) and statistical process control heavily influenced Six Sigma.
The 1990s witnessed the adoption of Six Sigma by major corporations like General Electric (GE) and Honeywell, primarily credited to Jack Welch, then-CEO of GE. Welch’s implementation of Six Sigma transformed it into a corporate culture rather than a mere process improvement technique, proving its effectiveness in various sectors, not just manufacturing.
Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects in any process – from manufacturing to transactional and from product to service. The central idea is to measure how many “defects” are present in a process and systematically figure out how to eliminate them, getting as close to “zero defects” as possible. This goal is achieved through a two-pronged approach:
i. Design for Six Sigma, which aims to “bake” quality into the design of new processes using methodologies such as DMADV (Define, Measure, Analyze, Design, Verify) and IDOV (Identify, Design, Optimize, Verify). This approach will be the focus of this course manual.
ii. Optimizing existing processes using the DMAIC (Define, Measure, Analyze, Improve, Control) methodology. Subsequent course manuals explore this approach.
1.13 Benefits of Six Sigma
i. Improved Quality and Efficiency: Six Sigma methodologies help identify and eliminate the causes of defects and errors, leading to a significant improvement in the quality and efficiency of output.
ii. Cost Reduction: Six Sigma can save considerable costs by reducing defects and improving processes. It helps identify and eliminate waste and non-value-adding activities.
iii. Customer Satisfaction: Enhanced quality and efficiency directly translate to improved customer satisfaction, which is crucial for the success and growth of any business.
iv. Employee Engagement and Training: Six Sigma involves training employees at various organizational levels and fostering a culture of continuous improvement. It also enhances employee engagement, as employees are directly involved in problem-solving and process improvements.
v. Strategic Planning: Six Sigma tools can be used for strategic planning at an organizational level, helping to set goals and determine the course of action to achieve them.
vi. Flexibility and Adaptability: Although it started in manufacturing, Six Sigma has proven its adaptability by being effective in various sectors, including healthcare, finance, IT, and more.
1.14 Design for Six Sigma
Design for Six Sigma (DFSS) is a proactive approach that is applied to designing new processes, products, or services with Six Sigma quality levels from the ground up. It aims to prevent defects by designing processes that are robust and less prone to variations by “designing it right the first time.” This approach ensures that the products or services meet customer needs and expectations with high reliability and optimized costs.
3.4.1. DFSS Methodologies
The IDOV and DMADV methodologies are frameworks used in process design with DFSS. They are used in different scenarios and have distinct focuses and steps as follows:
IDOV Methodology
i. Identify: Define the project goals and customer (internal or external) requirements.
ii. Design: Develop the process to meet the customer needs.
iii. Optimize: Refine the process for maximum efficiency without compromising the design.
iv. Verify: Ensure the process performs reliably in the intended environment.
IDOV is primarily focused on designing new processes or products. It emphasizes understanding customer needs and creating a process that efficiently meets them. The optimization step is crucial in ensuring the process is effective and efficient.
DMADV Methodology
i. Define: Identify the project goals, scope, and customer requirements.
ii. Measure: Assess customer needs and specifications.
iii. Analyze: Develop design alternatives, then create and evaluate a high-level design.
iv. Design: Develop detailed designs, optimize the design, and plan for verification. This step often involves simulations or prototypes.
v. Verify: Test and validate the full-scale process, implement it, and hand it over to the process owner.
DMADV is also used to create new products or processes, similar to IDOV, but it places a stronger emphasis on the analysis, design, and verification stages. It is comprehensive in ensuring that the design meets customer needs and is the best possible solution among alternatives.
Below, we contrast both methodologies in terms of focus, optimization and verification:
o Focus: IDOV is more streamlined towards efficient design, whereas DMADV is more thorough, emphasizing analysis and verification.
o Optimization: IDOV explicitly includes an optimization phase, whereas DMADV incorporates optimization within the design phase.
o Verification: DMADV strongly emphasizes verification and testing of the design.
Choosing between IDOV and DMADV depends on the specific context. If the goal is to design a new process or product with a strong emphasis on efficiency and a streamlined approach, IDOV would be suitable. However, where the project requires thorough analysis, consideration of multiple design alternatives, and extensive verification, DMADV would be more appropriate.
For projects where the design needs to be rigorously tested and validated, such as for mission-critical processes, and where there are multiple potential solutions to be considered, we recommend DMADV due to its comprehensive approach to design and verification. However, this course will focus on the IDOV methodology, which is suitable for most process design requirements. Below, we examine the methodology in further detail, exploring the tools used in each phase:
3.4.2. Identify
The first phase involves identifying customer requirements and market trends and defining the design goals. This phase lays the foundation for successful process design by specifying the Critical to Quality (CTQ) characteristics, which are vital for meeting customer expectations. It involves extensive research, including Voice of the Customer (VoC) techniques, to accurately capture customer requirements.
A critical objective of this phase is understanding and ensuring that the process design meets customer requirements. The Voice of the Customer describes the process of capturing customers’ expectations, preferences, and aversions. Translating VoC into CTQ attributes ensures that a product or service meets customer demands. Below, we explore how to effectively gather VoC requirements and translate these into CTQs, with a particular focus on using CTQ trees.
i. Identify the Customer
Determine who the customers are for the product or service. As established in Workshop 1, Course Manual 1, the (internal or external) customer is the person or entity that consumes a process’s output. It is worth noting that a process may have multiple customers. In cases where direct access to customers is difficult, it is worth identifying internal and external stakeholders who may act as proxies for them. For example, Workshop 1 Course Manual 6 mentioned the case study of a UK food delivery company whose delivery drivers had developed a keen awareness of customer needs due to their regular interaction, making them good candidates to act as customer proxies.
ii. Gathering VoC Data
Subsequently, collect VoC data using the following methods:
a. Surveys and Questionnaires: Structured tools to collect quantitative and qualitative customer data.
b. Interviews and Focus Groups: Direct interaction to gain deeper insights into customer needs.
c. Observation: Studying how customers interact with a product or service in their environment.
d. Feedback and Complaints Analysis: Reviewing customer feedback (including those obtained from customer interaction) and complaints to identify common issues or suggestions.
e. Market Research: Analyzing market trends, customer reviews, and competitor offerings.
iii. Documenting and Analyzing VoC Data
a. Capture all data and insights systematically. For example, interviews and focus group sessions should be video recorded (where possible) and the data transcribed for analysis.
b. Use analytical tools like affinity diagrams to categorize and prioritize customer needs. See Figure 3.2 for an example of classifying hypothetical VoC data for customers who applied for mortgages from a hypothetical financial services organization.
iv. Translating VoC into CTQs
CTQs are the key measurable characteristics of a product or service that directly impact meeting customer needs.
The VoC data needs to be mapped to CTQs using the following steps:
a. For each identified customer need, define specific, measurable attributes that will satisfy that need.
b. Ensure that CTQs are actionable and directly related to customer satisfaction.
A valuable tool for performing this translation is a CTQ Tree, a visual tool that helps break down complex customer needs into specific, measurable quality attributes.
Follow the steps below to create a CTQ tree; a short illustrative sketch follows these steps:
a. At the top of the tree, list broad customer needs identified from the VoC data.
b. Break down needs into requirements. For each need, identify more specific requirements.
c. Translate requirements into CTQs. Further break down the requirements into quantifiable CTQs.
d. Define measurable specifications for each CTQ that align with customer expectations.
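Because a CTQ tree is naturally hierarchical, it can be sketched as a nested data structure. The example below follows the four steps above for one need from the mortgage scenario; the requirements, CTQs, and figures are illustrative assumptions.

```python
# One branch of a CTQ tree: need -> requirements -> measurable CTQs.
ctq_tree = {
    "need": "Timely receipt of mortgage offer",
    "requirements": [
        {"requirement": "Fast application processing",
         "ctqs": [
             {"ctq": "Days from application to offer",
              "specification": "80% of offers issued within 28 days"},
             {"ctq": "Applications returned for correction",
              "specification": "less than 5% of submissions"},
         ]},
    ],
}

def print_ctq_tree(tree):
    """Walk the tree top-down, mirroring how it is read on paper."""
    print(f"Need: {tree['need']}")
    for req in tree["requirements"]:
        print(f"  Requirement: {req['requirement']}")
        for ctq in req["ctqs"]:
            print(f"    CTQ: {ctq['ctq']} -> {ctq['specification']}")

print_ctq_tree(ctq_tree)
```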
Challenges in Translating VoC to CTQs
i. Subjectivity of Customer Perceptions:
Customers may have subjective or emotional needs that are hard to quantify.
ii. Balancing Diverse Needs:
Different customer segments might have conflicting requirements.
iii. Continuous Evolution:
Customer needs can evolve rapidly, requiring ongoing adjustment of CTQs.
Despite the challenges listed above, gathering VoC requirements and translating them into CTQs is critical to ensure that the process meets and exceeds customer expectations. CTQ trees provide a structured and effective way to break down complex customer requirements into specific, measurable quality attributes. While there are challenges in this translation process, careful and continuous analysis, effective communication and feedback loops with customers can lead to the successful alignment of product or service offerings with customer expectations. Ultimately, the goal is to deliver value to the customer, and understanding and implementing their voice into the quality attributes is critical to achieving this objective.
The following are deliverables (i.e. outputs) from the Identify Phase:
o CTQ Trees
o Affinity Diagram
o VoC-to-CTQ Translation Tables
Exercise 3.1
3.4.3. Design
In this phase, potential designs and solutions are conceptualized based on the requirements identified in the previous phase. Various tools such as Quality Function Deployment (QFD), Failure Modes and Effects Analysis (FMEA), and concept generation techniques are used.
Briefly, these tools add value in the design phase of IDOV as follows: QFD translates customer requirements into design specifications; concept generation techniques then help explore a wide range of potential solutions; finally, FMEA evaluates these concepts for potential risks and failure modes, ensuring a robust design.
Below, we describe how each tool can be used to create a detailed process design that meets the identified customer needs and specifications.
i. Quality Function Deployment
QFD is a customer-driven planning tool that establishes a relationship between customer needs (Voice of the Customer) and process specifications. It prioritizes customer wants and needs and translates them into specific plans to produce products to meet those needs.
Various methods exist for accomplishing this, including Kano Analysis (see Workshop 1 Course Manual 5), the Simple L Matrix and the House of Quality. The latter two are matrices that help designers connect customer needs with process specifications.
Below, we detail the steps for creating the Simple L Matrix; a small scoring sketch follows this discussion.
a. Determine customer priorities:
Based on customer feedback (e.g. gathered from a survey or focus group), rank the identified needs in order of importance. Extending the previous example, customers who applied for a mortgage valued the following features:
o Timely receipt of offers (from the time of application submission)
o Competitive mortgage rates
o Broad product range
o Prompt release of funds
o Choice of service channels (flexibility)
o High service quality
Based on the feedback, indicate the relative importance of each identified customer need on a scale of 1 to 5. Receiving several 5s or 4s is acceptable because customers may rank numerous attributes as highly important. Additionally, ratings do not have to be whole numbers.
List these needs on the vertical axis of the matrix, together with the ratings.
b. Enumerate process specifications:
Identify and list the process design requirements, including processing cost, access channels, risk appetite and technology enablers.
c. Examine how client requirements and process specifications relate to one another:
The relationship matrix indicates how, and to what extent, each process design requirement influences customer needs: which design specifications will contribute to or hinder customer needs, and whether they will do so significantly. The relationship symbols used are shown in Table 3.2.
For example, the process specification to provide digital-only access will significantly impact processing time (as it will reduce the number of hand-offs) but will likely negatively impact digitally excluded customers (see Table 3.2 – cells highlighted yellow and purple).
By using QFD in the design phase, designers can ensure that the final design aligns closely with what customers value the most. Revisiting the example above, we discern that the specification for offshore processing conflicts significantly with customer requirements, whilst the use of on-site underwriters contributes the most (see Table 3.2 – cells highlighted red and green respectively).
The QFD offers the following benefits:
o Enhanced focus on customer requirements.
o Improved communication and collaboration across different departments.
o Identification and prioritization of critical features and improvements.
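The scoring behind a Simple L Matrix can be sketched in a few lines: each customer need carries a 1-to-5 importance weight, each need/specification cell holds a relationship strength (the 9/3/1 convention used here is a common but not universal choice), and the weighted column totals indicate which process specifications matter most. The needs, specifications, and strengths below are illustrative, not those of Table 3.2.

```python
# Customer needs with 1-5 importance weights (the matrix's vertical axis).
needs = {
    "Timely receipt of offers": 5,
    "Competitive mortgage rates": 4,
    "Choice of service channels": 3,
}
specs = ["Digital-only access", "On-site underwriters", "Offshore processing"]

# (need, spec) -> relationship strength; omitted pairs count as no relationship.
relationship = {
    ("Timely receipt of offers", "Digital-only access"): 9,
    ("Timely receipt of offers", "On-site underwriters"): 9,
    ("Competitive mortgage rates", "Offshore processing"): 3,
    ("Choice of service channels", "Digital-only access"): 1,
}

# Weighted column totals rank specifications by contribution to customer needs.
for spec in specs:
    score = sum(weight * relationship.get((need, spec), 0)
                for need, weight in needs.items())
    print(f"{spec}: weighted score {score}")
```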
ii. Concept Generation Techniques
Concept generation in the design phase involves ideating with techniques such as brainstorming, mind mapping, and TRIZ to develop potential solutions that satisfy the identified customer needs while addressing the process design requirements.
Below, we describe these techniques and how they are used in this context.
a. Brainstorming:
Brainstorming serves as a powerful tool for generating ideas. In process design, it involves gathering diverse individuals to freely share thoughts, solutions, and suggestions without fear of criticism. This collective ideation phase is crucial for uncovering hidden opportunities and challenges within the process. The key to successful brainstorming is to encourage unrestrained participation, allowing a large quantity of ideas to flow, which can later be refined and evaluated for feasibility. This method ensures a wide range of solutions is considered, setting a solid foundation for the following design stages.
b. Mind Mapping:
Mind mapping is a visual tool for structuring information, making a subject easier to analyze and comprehend. It takes the ideas generated from brainstorming and organizes them into a visual format. This step is vital for structuring thoughts and uncovering relationships between different ideas. In process design, a mind map can help designers visualize the process flow and identify critical areas for improvement or innovation. By laying out ideas in diagrammatic form, mind mapping facilitates easier understanding and communication among team members, enabling more effective collaboration and decision-making. It acts as a bridge between the initial ideation and the detailed analysis required for process optimization.
See below for an example mind map for the design of a new website to support a redesigned process.
Example Mind Map for a New Website Design
c. TRIZ:
TRIZ, or the Theory of Inventive Problem Solving, is a systematic innovation and problem-solving methodology that Genrich Altshuller and his colleagues developed. It is based on analyzing a vast body of patents and identifying patterns in the problems and solutions across different industries and scientific fields. TRIZ encompasses 40 principles that guide users in overcoming design contradictions and finding innovative solutions.
![](https://www.appletongreene.com/wp-content/uploads/Figure3_8.jpg)
TRIZ Problem Solving Method
Below are brief descriptions of some of the TRIZ principles, along with examples of how each can be employed in process design:
Principle of Segmentation: This principle suggests dividing an object into independent parts or making it easy to disassemble. In process design, this could mean modularizing a manufacturing process so that each module can be independently optimized, maintained, or updated without disrupting the entire system. For example, different processing stages (such as washing, cutting, and packaging) can be designed as separate modules in a food processing plant. This modular approach facilitates easier maintenance, scalability, and the potential integration of new technologies into each stage without a complete overhaul of the entire process.
Principle of Local Quality: This principle involves changing an object’s structure from uniform to non-uniform, making each part of an object function in conditions most suitable for its operation. This could be applied in process design by customizing the working conditions for different parts of a production line to optimize performance. For instance, in a chemical manufacturing process, reactants in a reactor might be mixed more efficiently by varying the speed or direction of the mixing blades in different sections of the reactor, thus improving reaction rates and product quality.
Principle of Prior Action: This principle suggests that necessary actions should be carried out beforehand to make the final and main actions easier to achieve. In process design, this can be employed by pre-heating raw materials before they enter a reaction chamber in a chemical plant, thereby reducing the energy required for the reaction and increasing efficiency.
Principle of Asymmetry: The asymmetry principle proposes changing an object’s shape from symmetrical to asymmetrical. When applied to process design, this could mean designing the flow of materials through a system in a non-linear, optimized path that reduces bottlenecks and improves efficiency. For example, in a wastewater treatment facility, the layout of treatment tanks might be designed asymmetrically to match the natural flow of water and reduce the energy needed for pumping.
Principle of Nested Doll: This principle involves placing one object inside another, which is inside another, etc. In process design, this could be seen in the development of compact, integrated systems where processes that are typically separate are combined into a single piece of equipment. An example would be a multi-stage reactor where different reactions occur in concentric chambers, allowing for a more efficient process footprint and reduced cross-contamination between stages.
Employing these TRIZ principles in process design encourages innovative thinking and problem-solving. By systematically analyzing and applying these principles, designers can overcome challenges and improve process efficiency, effectiveness, and innovation.
iii. FMEA
Failure Mode and Effects Analysis (FMEA) is a systematic technique for identifying potential failure modes within a system, classifying them according to their severity, occurrence, and detectability.
During the design phase, FMEA helps anticipate potential points of failure of the proposed process design and allows designers to mitigate these risks early in the development process.
It involves creating an artifact to list possible failure modes, their effects on the system, causes, current controls, and the Risk Priority Number (RPN).
Table 3.3 below describes the components of the analysis together with their descriptions.
![](https://www.appletongreene.com/wp-content/uploads/Table3_3.jpg)
FMEA Components and Descriptions
Below are the steps for completing the analysis:
a. Define the Scope and Team: Begin by clearly defining the scope of the FMEA, which will typically be the specific process to be designed. Assemble a cross-functional team with expertise relevant to the scope.
b. Identify the Components: Break down the process into its constituent activities (steps) or sub-processes, which assists in a detailed analysis and ensures that all aspects are considered.
c. Outcome Analysis: For each process step, identify its intended outcome. For example, the intended outcome of the Credit Check Request-to-Report sub-process (see Workshop 1 Course Manual 3) is to verify each applicant’s credit history. Understanding what each part is supposed to do is crucial for identifying when and how it might fail.
d. Failure Mode Identification: For each component or process step, list all the ways (modes) in which it could fail to perform its intended function. These are the failure modes, and they should be identified comprehensively.
e. Failure Effects Analysis: For each failure mode, identify all the potential effects on the process, customer, environment and relevant stakeholder groups (see Workshop 1, Course Manual 3, Section 3.3). This step assesses the severity of each failure mode’s impact.
f. Failure Causes Analysis: Identify the potential causes for each failure mode, which involves understanding why a failure could happen, whether due to design flaws, material defects, process variability, etc.
g. Risk Priority Number (RPN) Calculation: For each failure mode, score three factors on a scale from 1 to 10: severity (S), the impact of the failure; occurrence (O), the frequency with which the failure might happen; and detection (D), the likelihood of the failure escaping detection before it reaches the customer. A score of 1 represents the least severe impact, the least frequent occurrence, or the most effective detection controls, and 10 the most severe, most frequent, or least detectable. Multiply these three numbers to get the Risk Priority Number: RPN = S × O × D. The RPN helps prioritize which failure modes require the most urgent attention (a short sketch of this calculation follows these steps).
h. Action Plan Development: For failure modes with high RPNs or those deemed unacceptable, develop action plans to mitigate or eliminate the risks, which could involve design changes, process improvements, additional quality controls, or other corrective actions.
i. Implement Actions: Carry out the action plans developed in the previous step, which may involve design changes, process modifications, or other interventions.
j. Results Review and Monitoring: After implementing the actions, review the results to ensure that the intended improvements have been achieved. Monitor the system, process, or product for ongoing effectiveness and make further adjustments as necessary.
k. Documentation: Throughout the FMEA process, document every step, decision, and action taken. This documentation is crucial for traceability, future reference, and ongoing quality improvement efforts.
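Below is a minimal sketch of the RPN calculation from step g, ranking hypothetical failure modes for the credit check sub-process mentioned earlier. The failure modes and scores are illustrative only.

```python
# Hypothetical failure modes scored 1-10 for severity (S), occurrence (O),
# and detection (D); RPN = S * O * D ranks them for attention.
failure_modes = [
    {"mode": "Credit report not returned",  "S": 8, "O": 3, "D": 2},
    {"mode": "Applicant identity mismatch", "S": 9, "O": 2, "D": 4},
    {"mode": "Stale credit data used",      "S": 6, "O": 5, "D": 6},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Highest RPN first: these modes need action plans most urgently (step h).
for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f"{fm['mode']}: RPN = {fm['RPN']}")
```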
Performing FMEA unlocks the following benefits:
o Proactive identification and mitigation of potential failure points.
o Prioritizing risks based on their impact leads to more focused and effective problem-solving.
iv. Tolerance Specification
Often, the CTQ (i.e. the quantitative measure) will fall within a given range. Tolerance specification involves defining the acceptable limits of this range. For example, for the mortgage application-to-funds release process, it is specified that 80% of offers must be made to customers within 28 days (see Table 3.1); the acceptable range (typically determined from VoC data) may be 14 to 28 business days.
Below are some definitions associated with tolerance specification (a short worked example follows these definitions):
Target Specification: The desired value for the metric. For example, the organization may expect most mortgage offers to be made to the customer within 18 days of receiving the application.
Mean Specification: The mean (or average) value of the upper and lower specification values. In the example above, the mean specification is 21 days, i.e. (14 + 28) / 2 = 21 days
Upper Specification Limit: The highest allowable value, also known as the upper tolerance. For the mortgage process, the upper specification indicates that no applications are expected to take more than 28 days.
Lower Specification Limit: The lowest value permitted. For the mortgage process, this will be 14 days as the lender needs to perform affordability checks and mortgage valuation on the property that serves as the collateral for the mortgage.
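The sketch below applies the mortgage example’s limits (lower 14 days, upper 28 days, target 18 days) to a set of hypothetical case durations, reporting how many fall within specification and how the observed mean compares with the target and the mean specification.

```python
# Tolerance limits from the mortgage example (in business days).
LOWER_SPEC, UPPER_SPEC, TARGET = 14, 28, 18
MEAN_SPEC = (LOWER_SPEC + UPPER_SPEC) / 2   # (14 + 28) / 2 = 21 days

durations = [12, 17, 18, 25, 31]            # hypothetical offer times
in_spec = [d for d in durations if LOWER_SPEC <= d <= UPPER_SPEC]

print(f"{len(in_spec)} of {len(durations)} cases within specification")
observed_mean = sum(durations) / len(durations)
print(f"Observed mean {observed_mean:.1f} days "
      f"vs target {TARGET} and mean specification {MEAN_SPEC:.0f} days")
```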
The following are outputs from the Design Phase:
o Simple L Matrix (or House of Quality)
o FMEA
o SIPOC (see Workshop 1, Course Manual 2, Section 2.5)
o Proposed Process Model (see Workshop 1, Course Manual 2, Section 2.5)
o Process Tolerance Specification
Exercise 3.2
3.4.4. Optimize
Optimization involves refining the design to ensure maximum efficiency and effectiveness while minimizing defects. The goal is to optimize performance and reliability, ensuring that the designed process can be produced consistently at Six Sigma quality levels.
The Optimize phase in the IDOV methodology is where the designed process is fine-tuned. The primary objective here is to enhance the process to ensure it meets and exceeds customer requirements while maintaining efficiency. This phase involves rigorous analysis and adjustment of the process to eliminate waste, reduce variability, and improve overall performance.
Data will likely exist for existing processes (see Figure 3.1) to determine the “Voice of the Process”. Additional tools (some of which will be explored in Course Manual 5) can be utilized to optimize the designed process. However, in this section, we will assume we are optimizing a “greenfield” process, i.e., one developed entirely from scratch. Below, we examine in detail a process design optimization tool, namely Mistake Proofing (aka poka-yoke).
i. Mistake Proofing (Poka-Yoke)
Poka-Yoke is a Japanese term that roughly translates to “mistake-proofing”. Though it is typically viewed as a lean management tool (see Course Manual 6), it is a valuable tool for reducing defects in the design of a process. Poka-yoke refers to any mechanism in a process that helps avoid mistakes by preventing, correcting, or drawing attention to human errors. Developed by Shigeo Shingo as part of the Toyota Production System, it is an integral part of a broader philosophy that seeks to eliminate waste and improve the quality and efficiency of processes.
Poka-Yoke focuses on error prevention, which is far more effective and less costly than error detection and correction. It aims to eliminate defects by preventing human errors before they occur or create a defect. This is achieved through the design of processes or using specific tools that make mistakes impossible or at least very difficult.
There are three primary types of poka-yoke “devices”:
a. Contact Type: These devices detect whether a physical attribute of a product is as it should be, such as a part being present or correctly positioned.
For example, real-time error detection mechanisms are used in online forms or applications. If a required field remains blank or is filled incorrectly (like an improperly formatted email address), the system immediately highlights the error and prompts the user to correct it before submission (see Figure 3.9 and the validation sketch after this list), preventing incomplete or incorrect information from being submitted.
![](https://www.appletongreene.com/wp-content/uploads/Figure3_9.jpg)
Contact Type Poka Yoke Example
b. Fixed-Value Type: This kind checks for a predetermined number of movements or actions, ensuring a process is completed correctly. In banking, a fixed-value type poka-yoke can be implemented using transaction counters. For example, if a bank teller must process a set number of documents or checks, the system can alert the teller whenever the actual number processed is fewer or more than the required number, ensuring that no documents are missed and that each transaction is completed with the necessary checks.
c. Motion-Step Type: These ensure that the correct sequence of process steps is followed.
In healthcare, particularly in surgical or clinical procedures, motion-step type poka-yoke can be crucial. A checklist that must be followed for each procedure ensures that no step is missed. For example, before surgery, a checklist might include steps like confirming the patient’s identity, the surgical site, the type of surgery, and whether all necessary equipment is available and sterile. This sequential checking ensures that each critical step is completed in the correct order, reducing the risk of errors.
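To make the contact-type example above (Figure 3.9) concrete, below is a minimal Python sketch of a form-validation poka-yoke; the field names and the simple email pattern are illustrative assumptions, not a production-grade validator.

```python
import re

# Deliberately simple email pattern for illustration; real systems use stricter checks.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_form(form: dict) -> list[str]:
    """Return a list of error messages; an empty list means the form may be submitted."""
    errors = []
    if not form.get("name"):
        errors.append("Name is required.")
    if not EMAIL_PATTERN.match(form.get("email", "")):
        errors.append("Email address is missing or improperly formatted.")
    return errors

# The poka-yoke: submission is blocked until every error is corrected.
print(validate_form({"name": "A. Client", "email": "client[at]example.com"}))
```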
The steps for implementing poka-yoke in the Optimize phase are as follows:
1. Identify which errors are likely to occur in the designed process. The identified failure modes from the FMEA will provide a list of these errors.
2. Create a poka-yoke mechanism that prevents, detects, or draws immediate attention to the error.
3. During pilot testing (see Validate Phase), implement the solution, test it, and refine it based on feedback and results.
The following is the main deliverable from the Optimize Phase:
o Poka-Yoke devices (ideally addressing each failure mode identified in the FMEA)
3.4.5. Validate
The final phase of the IDOV methodology is about validating the designed process, including rigorous testing to ensure that the product or service meets the defined specifications and customer requirements. Validation is critical to confirm that the design can deliver the desired performance in the real world. This phase may include pilot runs, quality assurance tests, and customer feedback sessions to ensure the final product meets or exceeds expectations.
This phase helps identify any residual issues that may not have been apparent in the controlled conditions of the design and optimization phases.
Below, we discuss critical tools employed in the Validate Phase.
i. Pilot Testing
Pilot testing involves implementing the process on a small scale to observe its performance in a controlled yet realistic environment. This step is crucial for identifying unforeseen problems and assessing the practicality of the process. Feedback from pilot testing can lead to final adjustments before full-scale implementation.
The steps below outline how to conduct a pilot test effectively:
1. Define Objectives
Clearly define the aims of the pilot test. Objectives could include validating the process design, identifying bottlenecks, testing the process under real-life conditions, or assessing feasibility and efficiency.
2. Select a Pilot Site
Choose a suitable location for the pilot test that closely mirrors the conditions of the full-scale implementation. The site should allow for controlled experimentation and data collection.
3. Develop a Test Plan
Create a detailed test plan that outlines the scope of the pilot, including the duration, the specific processes to be tested, the equipment and resources required, and the data to be collected. Define key performance indicators (KPIs) to evaluate the process.
4. Prepare the Pilot Site
Ensure the pilot site is ready for testing, which may involve setting up equipment, configuring process parameters, training staff involved in the pilot, and ensuring all safety measures are in place.
5. Conduct Training
Train the personnel who will be involved in the pilot test. They should understand the process, the purpose of the pilot, their roles and responsibilities, and how to collect and record data.
6. Execute the Pilot Test
Carry out the test according to the plan. Monitor the process closely and collect data on the defined KPIs. Ensure that all variations and observations are recorded accurately.
7. Monitor and Document
Continuously monitor the process during the pilot test. Document any issues, unexpected results, or deviations from the plan. This documentation will be invaluable for analyzing the test outcomes and making improvements.
8. Analyze Results
After completing the pilot test, analyze the collected data to assess how well the process performed against the objectives. Look for areas of success as well as opportunities for improvement.
9. Make Adjustments
Based on the analysis, identify any necessary adjustments to the process design or operation, which might involve tweaking process parameters, altering workflows, or addressing unforeseen issues.
10. Report Findings
Compile a comprehensive report detailing the pilot test’s objectives, methodology, results, and conclusions. Include recommendations for full-scale implementation and any identified areas for improvement.
11. Plan for Full-Scale Implementation
Use the insights gained from the pilot test to plan the implementation of the full-scale process. This should include a detailed rollout plan, considering any adjustments identified during the pilot.
12. Validate Adjustments
If significant changes are made post-pilot, consider conducting a follow-up test or validation step to ensure that these adjustments achieve the desired outcomes before proceeding with full-scale implementation.
Performing a pilot test is an iterative process that may require several rounds of testing and adjustment to optimize the process design fully. This step is crucial for mitigating risks and ensuring the process performs as expected when fully implemented.
ii. Capability Analysis
Process capability analysis is a statistical tool used to measure the ability of a process to produce output within specified limits. Just as it is helpful for a coach to measure how well a basketball player can score points from different positions on a court, process capability is a valuable tool for assessing the data collected from pilot testing to determine how well a process meets its requirements in real-world conditions.
Key Process Capability Metrics:
Two key metrics are central to capability analysis: Cp (Process Capability Ratio) and Cpk (Process Capability Index). Below is a brief description of both metrics:
Cp (Process Capability Ratio): This metric tells you how well the designed process can fit within the specified limits without considering the process’s mean (average) position relative to these limits. It’s like measuring if the basketball player has the potential to score from anywhere on the court, not considering the accuracy of shots.
The following formula calculates it:
Cp = (USL – LSL) / (6σ)
Where USL is the Upper Specification Limit, LSL is the Lower Specification Limit (see Tolerance Specification above), and σ (sigma) represents the standard deviation of the process, i.e., a measure of how dispersed the data are around the mean. Data are tightly clustered around the mean when the standard deviation is small and more widely distributed when it is large.
If Cp is greater than 1, the process’s potential capability is good and likely to meet the specification limits.
For example, for the mortgage process, the USL is 28 days, the LSL is 14 business days, and assuming the standard deviation of the process is 0.7 days, then:
Cp = (28-14) / (6 * 0.7) = 14 / 4.2 or approximately 3.33, meaning the process potentially fits within the specification limits quite well.
Cpk (Process Capability Index): This metric further extends the process capability ratio. It considers how centered the process is relative to the specification limits. Revisiting the analogy of the basketball player, it indicates not only whether they can score from anywhere on the court, but also whether their average shot lands in the basket.
The following formula calculates it:
Cpk = min [(USL – μ) / (3σ), (μ – LSL) / (3σ)]
Where μ (mu) represents the process mean.
Note: min[x,y,z] is a mathematical operation indicating that the minimum value in the set of numbers contained in square brackets should be selected. For example, min [2,3,4] = 2
If Cpk exceeds 1, the process is considered capable and well-centered within the specification limits.
For example, continuing with the mortgage process, if the average time to offer is 17 days, then:
Cpk = min [(28-17)/(3*0.7) , (17-14)/(3*0.7)] = min[5.24, 1.43] = 1.43
As the process capability index is greater than 1, it suggests that the process fits within the specification limits and centers well, indicating that it is both capable and reliable.
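For completeness, the sketch below reproduces the mortgage-process figures above (USL = 28, LSL = 14, μ = 17, σ = 0.7) in Python; in practice, the mean and standard deviation would be estimated from pilot-test data rather than assumed.

```python
def process_capability(usl: float, lsl: float, mean: float, sigma: float):
    """Return (Cp, Cpk) for the given specification limits and process statistics."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
    return cp, cpk

cp, cpk = process_capability(usl=28, lsl=14, mean=17, sigma=0.7)
print(f"Cp  = {cp:.2f}")   # ~3.33: the process potentially fits within the limits
print(f"Cpk = {cpk:.2f}")  # ~1.43: the process is also reasonably well-centered
```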
iii. Customer Feedback
Direct feedback from end-users on the designed process will provide insights into how the process is perceived in a real-world scenario. The tools utilized to gather VoC data (i.e., surveys, focus groups, and user testing sessions) can also be used to collect customer feedback.
In conclusion, it should be borne in mind that validation is often an iterative process. Feedback from each tool should be used to make incremental improvements. Effective validation also requires collaboration between different teams. However, a well-executed Validate phase is critical to achieving excellence in process design, ensuring that the outcomes are theoretically sound, practically viable, and customer-centric.
The following are deliverables from the Validate Phase:
o Pilot test report
o Capability Analysis Metrics
o VoC data on designed process
Case Study: Job Requisition-to-Hire Process, Pharmaceutical Company
Problem
The time required to recruit a sales representative (calculated from when a sales rep left the role to the new sales rep’s first day on the job) was 60 days (on average), which stakeholders considered excessive.
The company needed to organise its hiring process to be as effective and efficient as possible because it expected to raise the sales force by 10% over the next three years due to a growing market.
The amount of revenue lost daily due to the absence of a sales representative in a particular region was significant.
Process Redesign Activities
The process redesign project team adopted the Identify-Design-Optimize-Validate (IDOV) methodology. Below is a summary of key activities undertaken in each phase and the outputs:
i. Identify
• The team collected data from the Voice-of-the-Customer (VOC), and the attributes that were most critical to the quality of the process outputs (CTQs) were summarized in a design scorecard.
• An initial baseline that demonstrated the performance of the current process against the requirements was produced using historical data.
ii. Design
• The team defined the high-level process activities without considering low-level solutions, detailed concepts, or a detailed process design.
• The team identified the high-level activities that contributed most to satisfying the customer’s requirements, which enabled them to narrow their focus on three of these activities, namely:
o Create Requisition
o Interview Candidates
o Prepare Compensation and Benefits Package
![](https://www.appletongreene.com/wp-content/uploads/Figure3_10.jpg)
High-level Process Activities
iii. Optimize
More detailed activities, roles and duties, information systems, human resources, templates and tools, and supplier quality were specified and documented for the three prioritised high-level activities.
The Failure-Mode-and-Effects-Analysis (FMEA) methodology was used to conduct a risk analysis for the new process to anticipate future failure modes and their causes so that the right mitigation measures could be agreed.
![](https://www.appletongreene.com/wp-content/uploads/Figure3_11.jpg)
Detailed Process Activities for ‘Planning and Conducting the Candidate Interview’
![](https://www.appletongreene.com/wp-content/uploads/Table3_5.jpg)
FMEA for ‘Planning and Conducting the Candidate Interview’
iv. Validate
The project team implemented a pilot of the newly designed process in three regions and refreshed the data for the CTQs for comparison against the baseline.
Additionally, they obtained and examined data about the FMEA’s highest-priority failure modes to confirm which process steps were crucial and needed to be regulated to indicate the process performance early.
The team made final adjustments based on the pilot’s results.
Results
The new process was transitioned to the Head of HR (the Process Owner) after the pilot phase had been completed.
Post-implementation process outcomes (measured after six months) indicated that all stakeholder expectations were surpassed.
Exercise 3.3
Course Manual 4: Process Discovery
1.15 Introduction
In many organizations, existing business processes likely require optimization. Entirely new business processes are rarely designed from scratch, as this typically happens only in response to new regulations or market pivots, which are generally infrequent.
This course manual will explore the Six Sigma DMAIC methodology – a set of systematic, data-driven techniques and tools for improving, optimizing and stabilizing existing business processes. Below, we detail each phase and the associated tools.
1.16 DMAIC Methodology
DMAIC stands for Define, Measure, Analyze, Improve, and Control. It is a structured problem-solving technique that guides process optimization projects. Each phase in DMAIC serves a distinct purpose in the overall process improvement journey, which emphasizes the use of statistical data to make informed decisions.
A helpful analogy to explain the DMAIC methodology is the process of diagnosing and treating a patient. The treatment process begins with a diagnosis, where a healthcare professional determines the patient’s condition based on their symptoms and medical history (Define Phase), followed by various tests conducted to measure the patient’s baseline health status (Measure Phase). These assessments can include blood tests, imaging, and other diagnostics to understand the severity and specifics of the condition. Subsequently, doctors analyze the diagnostic results to identify the underlying cause of a patient’s symptoms, which might involve differential diagnosis techniques in which multiple potential causes are considered and either confirmed or ruled out (Analyze Phase).
Once a diagnosis is confirmed, a treatment plan is developed and implemented. This plan could involve medication, surgery, lifestyle changes, or a combination of treatments aimed at addressing the root cause of the patient’s condition (Improve Phase). In patient care, the treatment process often concludes with a plan for ongoing management and follow-up, including regular check-ups, medication adjustments, and lifestyle management to ensure the patient’s condition remains stable or improves over time (Control Phase).
Below, we delve into each of these phases for a process optimization initiative.
4.2.1. Define
This initial phase aims to discover the current state of the process, which involves clearly defining the problem (or the process improvement opportunity initiative) and the internal or external customer requirements. Tools such as process models, SIPOC (Suppliers, Inputs, Process, Outputs, Customers) diagrams (discussed in Workshop 1 Course Manual 2), and Voice of the Customer (VOC) analysis (discussed in the preceding course manual) are commonly used to do this. Additionally, in a typical DMAIC process initiative, the phase also focuses on identifying the goals and scope of the process optimization initiative. However, we will address this in a future course manual.
This section will explain the primary tools used in the Define phase and delve into manual process discovery methods, including interviews, workshops, and document exploration.
Manual Process Discovery Methods
Manual process discovery involves identifying and documenting the various processes that occur within an organization. Below, we outline four methods for performing manual process discovery: interviews, workshops, observation, and document review.
i. Interviews
Interviews are a cornerstone of manual process discovery, providing direct insights from those who perform and manage the processes.
a. Preparation: Before conducting interviews, it’s essential to identify the right stakeholders and prepare open-ended questions covering all aspects of the process.
b. Execution: During interviews, encourage interviewees to detail their daily routines, challenges, and any improvisations they make in their processes. Active listening and follow-up questions are crucial to uncovering hidden details.
c. Analysis: Post-interview, analyze the responses to identify patterns, inconsistencies, and areas for improvement. Collating insights from various interviews can offer a comprehensive view of the process.
ii. Workshops
Workshops bring together multiple stakeholders, fostering a collaborative environment for process discovery.
a. Planning: Define clear objectives for the workshop and ensure representation from all relevant departments. Prepare interactive activities that encourage participation and discussion.
b. Conducting: Use brainstorming, role-playing, and process modeling techniques to facilitate a deep dive into processes. Encourage open communication and the sharing of different perspectives.
c. Follow-Up: Document the workshop outcomes and share them with participants for validation. This step ensures that the collective understanding of the process is accurate.
iii. Observation
Direct observation provides an unfiltered view of how processes are carried out (i.e., the de facto process) as opposed to how they should be performed (i.e., the normative process).
a. Setting: Choose a representative period for observation to ensure that the process is observed under typical conditions.
b. Methodology: Employ techniques like shadowing or time-motion studies to gain insights into the process flow, time taken for each step, and potential bottlenecks.
c. Analysis: Analyze the observed data to identify discrepancies between the intended process and actual practice. This analysis can reveal inefficiencies and areas for process optimization. This analysis is also referred to as conformance checking.
iv. Document Review
Reviewing existing documentation can offer insights into the formalized aspect of processes.
a. Collection: Gather all relevant documents such as process manuals, flowcharts, standard operating procedures (SOPs), and previous process audits.
b. Examination: Critically examine these documents to understand the officially documented process. Pay attention to any outdated information or inconsistencies with current practices.
c. Comparison: Compare the insights gained from documents with those obtained from interviews, workshops, and observations. This comparison helps identify gaps between formal procedures and actual practices.
Challenges of Manual Process Discovery
Traditionally, process discovery has been completed manually using the methods outlined above. While organizations must understand how to discover processes manually, it is also worth acknowledging that this approach is fraught with challenges that can undermine its effectiveness. Dumas et al. describe three significant challenges associated with manual process discovery as follows:
i. Fragmented Process Knowledge
One of the primary challenges of manual process discovery is the fragmentation of process knowledge. In most organizations, knowledge about processes is often dispersed among various individuals and departments. This dispersion creates several issues:
a. Inconsistency in Understanding: Different employees may have varying perceptions of the same process, leading to inconsistencies in how processes are understood and documented.
b. Knowledge Hoarding: Some employees might hold crucial process knowledge that they do not readily share due to job security concerns or lack of proper communication channels.
c. Time-Consuming Data Collection: Gathering information from multiple sources is time-consuming and often extends the time required to model the process.
ii. Lack of Generalization
Another significant challenge is the lack of generalization in manual process discovery. When processes are discovered manually, they often reflect the specific experiences and biases of the individuals involved. This subjectivity can lead to several problems:
a. Process Variability: Different individuals might document the same process differently, leading to a lack of standardization.
b. Ineffectiveness in Diverse Scenarios: Manually discovered processes may not be applicable or efficient in contexts other than those where they were discovered.
c. Difficulty in Scaling: Processes that are too specific are difficult to scale or adapt to changing business environments.
iii. Lack of Familiarity with Process Modeling Notation
Finally, the effectiveness of manual process discovery is often hampered by a lack of familiarity with process modeling notation (e.g., BPMN) among those involved in process discovery. This unfamiliarity leads to:
a. Inadequate Documentation: Process documentation can become inconsistent and difficult to interpret without a standardized modelling notation.
b. Limited Communication: The inability to use a common modeling notation limits the ability of different stakeholders to communicate effectively about processes.
c. Barrier to Automation: Non-standardized or poorly documented processes pose challenges when organizations attempt to transition to automated process discovery tools.
In Workshop 6, we will explore descriptive process mining, which aims to automatically discover process models and mitigate some of the abovementioned risks.
The following are outputs from the Define Phase:
o SIPOC (see Workshop 1, Course Manual 2, Section 2.5)
o Current State Process Model (see Workshop 1, Course Manual 2, Section 2.5)
o CTQ Trees
o VoC-to-CTQ Translation Tables
Exercise 4.1
4.2.2. Measure
The Measure phase in the DMAIC methodology focuses on quantifying the problem stated in the Define phase by collecting data using precise measurement tools to ensure the data’s accuracy and reliability. The tools typically utilized in this phase include Measurement Systems Analysis (MSA) and data collection plans, among others.
The Measure phase sets the foundation for identifying, analyzing, and eventually eliminating the root causes of defects or inefficiencies. Below, we delve into the concept of Y=f(x), understanding the types and sources of variation, distinguishing between common and special causes of variation, and learning how to select variables to measure.
i. Y=f(x) Concept
At the heart of the Measure phase, and indeed the DMAIC methodology, lies the formula Y=f(x), which succinctly captures the essence of process improvement. In this equation, ‘Y’ represents one or more process outcomes, while the ‘x’ components are the input variables or process steps that can be adjusted or controlled to affect those outcomes. This formula emphasizes that every outcome is the result of specific inputs, and by understanding and controlling these inputs, one can influence the desired outcomes. For example, ‘Y’ could be the customer satisfaction score in a customer service call centre, which may be influenced by various ‘x’ factors like call handling time, agent courtesy, and resolution efficiency.
Let’s revisit the mortgage application-to-funds release process, where a primary outcome measure (Y) is the time to make an offer. Potential input variables (X’s) could include the number of missing requirements at submission, the time taken to obtain a valuation report and the Defects Per Million Opportunities (DPMO) of the mortgage underwriting sub-process. By measuring these variables, the mortgage provider can identify which inputs most significantly impact the approval time and target these for improvement.
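As a hedged illustration of Y = f(x), the sketch below fits a simple least-squares model on invented data to estimate how strongly each candidate X drives the offer time; a real project would use actual process data and the more rigorous root-cause techniques discussed later.

```python
import numpy as np

# Invented data: each row is one mortgage application.
# X columns: missing requirements at submission, days to obtain the valuation report.
X = np.array([[2, 5], [0, 3], [4, 7], [1, 4], [3, 6], [0, 2]], dtype=float)
y = np.array([21, 15, 27, 17, 24, 14], dtype=float)  # Y: days to make an offer

# Fit Y = b0 + b1*x1 + b2*x2 by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"Intercept: {coeffs[0]:.2f} days")
print(f"Per missing requirement: {coeffs[1]:+.2f} days")
print(f"Per valuation day: {coeffs[2]:+.2f} days")
```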
ii. Types of Process Variation
In any process, variation is inevitable, but identifying and reducing unwanted variation is crucial for process improvement. Variations can broadly be classified into common causes (inherent to the process) and special causes (external to the process and often sporadic).
Common Cause Variation: These are the natural, predictable variations within a process. In a service setting, this might include the time of day affecting call volumes in a customer service center. Such variations are systemic; reducing them requires changing the process itself.
Special Cause Variation: These variations are unpredictable and not inherent to the process. They are often due to specific, identifiable events. For instance, a sudden spike in call volume due to a product recall announcement is a special cause that is not part of the normal process variability.
Common vs. Special Cause Variation
The distinction between common and special cause variation is critical for process improvement. Common cause variation requires a strategic approach, often involving changes to the process or system to reduce the variation. On the other hand, special cause variation calls for identifying and addressing the specific external factor(s) causing the variation. Misidentifying these causes can lead to ineffective solutions, such as making unnecessary changes to a process when a simple, specific fix is needed.
![](https://www.appletongreene.com/wp-content/uploads/Figure4_1-1-300x111.jpg)
Common vs Special Cause Variation
iii. Sources of Process Variation
Item-to-Item Variation
Item-to-item variation refers to the differences observed between individual outputs or products of a process. This variation can stem from multiple sources, including variability in raw materials, operator performance, or environmental conditions. For example, in a service setting such as a fast food restaurant, the item-to-item variation could manifest in a specific menu item’s preparation time and quality. Understanding and reducing this type of variation is crucial for improving consistency and customer satisfaction.
Measurement System Variation
Measurement system variation concerns the differences in data collected due to the measurement process rather than variations in the actual measured process. This type of variation can significantly impact the interpretation of data and subsequent decisions made during the DMAIC project. It encompasses two main components:
Repeatability: The variation observed when the same operator measures the same item or characteristic multiple times using the same measurement tool. Lack of repeatability can signal issues with the measurement tool or its use.
Reproducibility: The variation observed when different operators measure the same item or characteristic under the same conditions with the same measurement tool. Differences in reproducibility can indicate training gaps or subjective measurement techniques.
Addressing measurement system variation is pivotal before making any conclusions about process improvements, as decisions based on flawed data can lead to misguided efforts.
Understanding these variations is essential for deciding which aspects of a process need standardization and which require more detailed investigation.
![](https://www.appletongreene.com/wp-content/uploads/Figure4_3-400x289.jpg)
Total Process Variation Breakdown
iv. Data Collection Planning
Data collection planning is a structured approach to deciding what data will be collected, how, by whom, and when. This plan ensures that the data collected will be relevant, accurate, and sufficient for analysis.
Selecting the right variables to measure is pivotal in the Measure phase. The chosen variables should be closely linked to the process’s critical outcomes (Y’s), which are identified in the Define phase. The selection process involves:
a. Identifying Critical to Quality (CTQ) Attributes: These are the attributes most important to the customer, which the Y variables represent.
b. Modeling the Process: Understanding the process flow helps identify potential X variables influencing the Y.
c. Statistical Analysis: Preliminary data collection and analysis can help identify which variables show the most significant correlation with the Y variables and, therefore, should be measured in detail.
For instance, if customer satisfaction (Y) is primarily affected by wait time and food quality, these are the variables to measure in a restaurant. Initial data might show that the variability in customer satisfaction correlates more strongly with wait time than other factors, indicating where to focus improvement efforts. Having said this, as we shall later discuss, correlation does not imply causation. We shall examine proven techniques for identifying and validating the actual root causes of problems in Workshop 7.
v. Sampling
A critical component of data collection planning involves selecting a subset of data from a process that is representative of the entire process. Sampling methods include:
Random Sampling: Every item or event has an equal chance of being selected. This method reduces bias but may not always be practical in every situation.
Stratified Sampling: The process is divided into subgroups (strata), and samples are taken from each subgroup. This method ensures representation across different segments of the process.
Systematic Sampling: Items are selected at regular intervals throughout the process. While easier to implement, it may introduce bias if a pattern coincides with the sampling interval.
Considerations in sampling include ensuring the sample size is sufficient to draw statistically significant conclusions and that the sampling method does not introduce bias.
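The three sampling schemes can be sketched in a few lines of Python. The population below is a hypothetical list of 100 transaction records, and the 'region' stratification key is an assumption for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible
population = [{"id": i, "region": "North" if i % 2 else "South"} for i in range(100)]

# Random sampling: every record has an equal chance of selection.
random_sample = random.sample(population, k=10)

# Stratified sampling: sample separately within each subgroup (stratum).
strata: dict[str, list[dict]] = {}
for record in population:
    strata.setdefault(record["region"], []).append(record)
stratified_sample = [r for group in strata.values() for r in random.sample(group, k=5)]

# Systematic sampling: take every k-th record after a random start.
interval = 10
start = random.randrange(interval)
systematic_sample = population[start::interval]

print(len(random_sample), len(stratified_sample), len(systematic_sample))  # 10 10 10
```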
vi. Data Collection Tools
The right data collection tools are crucial for effectively capturing the necessary information. Common tools include check sheets, surveys, interviews, time recording devices, and software tools for automated data capture. The choice of tool depends on the nature of the data being collected, the process being analyzed, and the resources available for the project.
In Workshop 5, we will explore how data can be extracted from “process-aware” information systems that support most organizational processes to provide valuable insights into the performance and efficiency of these processes, identify areas for improvement and optimize operations.
The following are deliverables from the Measure Phase:
o Data Collection Plan
o Collected Data
Case Study: City of San Antonio, Texas, USA
Problem
The City of San Antonio’s network of on-call contractors, which it used for roadway repair projects, frequently complained that they were not being paid for their work promptly, with reimbursements for completed work often taking more than 30 days to process.
Contractors would inevitably stop accepting bids on City of San Antonio projects if the city did nothing to address the delayed compensation situation, adversely impacting the time taken to address roadway maintenance concerns across the entire city. This would, in turn, have a detrimental impact on San Antonio’s infrastructure and raise concerns about public safety.
How Six Sigma Helped
Based on available metrics, the city usually completed payments to its contractors in an average of 13 days. However, this time frame measured processing time from when the city received all required documentation. In reality, a sizable portion of the payment for completed work could not be processed in that time frame due to problems such as rejected invoices.
The project manager created a plan for gathering data for the Measure phase. The data collected included the number of payments completed, payments made on time, rejection rate, types of rejections, and total number of rejections by type, among others.
During the Analyze stage, multiple underlying reasons for the rejection of payment requests were found. The team determined that the contractor amounts in the payment requests did not match the verified quantities that the City of San Antonio had agreed upon, which might be explained by the variance in the consistency of documentation by the city’s inspectors. This analysis revealed a need for uniform reporting and documentation among the inspectors.
Changes implemented during the Improve phase included a modification of the project management system’s workflow to notify contractors of rejected orders. The project management software was amended to include a tolerance threshold for quantity yield computations to reduce the number of rejections. Introducing a standardized daily log for recording quantities, together with a published procedure for reporting them, addressed the root cause identified in the Analyze phase.
Results
The average number of monthly payments processed increased from 97 to 116, indicating a spike in overall payments. Additionally, the enhancements decreased the average number of monthly declined payments from 17 to 12. The percentage of rejections attributable to disputed quantity amounts fell from 58% on average to 42% on average.
Exercise 4.2
Course Manual 5: Process Adjustment
1.17 Introduction
In this course manual, we continue our examination of the DMAIC methodology focusing on the last three phases – Analyze, Improve and Control. These phases correspond to the process adjustment activities in the process lifecycle discussed earlier.
Below, we discuss each of these phases in detail, including their objectives, the tools and techniques employed, and the expected deliverables from each phase.
1.18 DMAIC Continued
5.2.1. Analyze
The Analyze phase involves examining the data collected (in the Measure phase) to identify the root causes of defects or other identified problems. This phase utilizes various statistical analysis and data visualization tools, such as Exploratory Data Analysis (EDA) and Pareto Analysis, to provide insight into sub-optimal process performance. The goal is to pinpoint precisely what needs to be improved to optimize the process.
Below, we delve into a detailed examination of these tools, commencing with an understanding of different data types and implications for analysis.
i. Understanding Data Types
Data in the Analyze phase can be broadly classified into numeric and categorical data.
Numeric Data: This includes any data that is quantifiable and measurable. It can be further divided into:
o Nominal: These are numbers used solely for identification purposes, such as a customer ID or a transaction number.
o Ratio: Ratio data has a true zero (i.e., a total absence of the variable of interest) and can be used in calculations. Examples include the years of experience a process performer has or the cost of items processed.
o Count: Count data represents the number of occurrences, such as the number of complaints received in a month.
Categorical Data: This type of data represents categories or groups, e.g. the type of service requested by customers (e.g., banking, insurance) or the region where the service is provided.
o Ordinal Data: This is a type of categorical data where the categories have a meaningful order or ranking, but the differences between these categories are not necessarily consistent or measurable. This characteristic distinguishes ordinal data from nominal data (which doesn’t have a natural order) and ratio data (which has order and quantifiable differences between measurements).
An example of ordinal data in a service process can be found in customer satisfaction surveys on a numerical scale (e.g. from 1-5) or on a scale from “Very Unsatisfied” to “Very Satisfied.”
Understanding data types lets us know what mathematical operations can be logically performed on the collected data. See Table 5.1 below for permissible operations on the various data types. For example, adding customer IDs or performing division on ordinal data is not permissible; i.e., a customer who gives a service experience score of ‘4’ in a satisfaction survey is not twice as satisfied as a customer who scored the same experience ‘2’.
![](https://www.appletongreene.com/wp-content/uploads/Table5_1.jpg)
Permissible mathematical operations by data type
Continuous Data
Continuous data can take any value within a given range or interval and represent measurements. These data are not restricted to defined separate values but can occupy any value over a continuous range. For instance, temperature, height, weight, and time are examples of continuous data. Because continuous data can theoretically assume an infinite number of values, they allow for a very precise level of measurement. When graphically represented, continuous data often take the form of line graphs or histograms with a smooth curve, indicating the flow of data points seamlessly without gaps.
Discrete Data
Discrete data, on the other hand, consists of distinct, separate values. This data type can only take specific values and cannot be subdivided meaningfully. Discrete data typically represent counts of items or occurrences and can be categorized into nominal or ordinal categories. Examples include the number of students in a class, the number of cars in a parking lot, or the results of a roll of a die. Discrete data are often represented graphically as bar charts or pie charts, emphasizing the separate, distinct values that the data can take.
ii. Exploratory Data Analysis (EDA)
EDA involves using statistical figures and visualizations to summarize the main characteristics of the data.
Measures of Central Tendency: In statistics, measures of central tendency are metrics that describe a dataset’s center point or typical value. The three most common measures are the mean, median, and mode. Each has its unique method of calculation and applicability, offering various insights into data distribution. Understanding these measures is crucial in fields ranging from business analytics to social sciences, as they help summarize complex datasets with a single, informative value.
Credit Card-to-Activation Process: Time to Decision Sample Data
Mean
The mean, often called the average, is calculated by adding up all the values in a dataset and dividing by the number of values. For the sample data above (see Table 5.2), the mean time to provide a decision to the customer is 2.9 days (3 + 4 + 0 + 1 + 1 + 6 + 5 + 3 = 23; 23 / 8 ≈ 2.9).
Pros:
o The mean includes all values in the dataset, making it a comprehensive measure.
o It is useful for further statistical analyses, such as variance and standard deviation.
Cons:
o The mean is sensitive to outliers, which can skew the result.
o It may not accurately represent the central value in a skewed distribution.
Median
The median is the middle value in a dataset when the values are arranged in ascending or descending order. If the dataset has an even number of observations, the median is the average of the two middle numbers. The median effectively splits the dataset into two equal halves. For the example above, the median is 3 (i.e., (3 + 3) / 2), as the two middle numbers (when the data is arranged in ascending order) are both 3.
In a process cycle time analysis, the median time can indicate the typical service duration, even if there are extreme outliers due to unexpected delays.
Pros:
o The median is unaffected by outliers or skewed data, making it a robust measure of central tendency.
o It provides a better central value for skewed distributions than the mean.
Cons:
o The median does not consider the value of all data points, which can be a limitation in evenly distributed datasets.
o It is less useful for further statistical analyses.
Mode
The mode is the value that appears most frequently in a dataset. A dataset can have more than one mode if multiple values have the same highest frequency.
For the example above, there are two modes (1 and 3), each occurring twice in the data.
In analyzing common service requests, the mode can identify the most frequently requested service type, offering insights into customer needs and preferences.
Pros:
o The mode is the only measure of central tendency that can be used with nominal data.
o It is not affected by extreme values or outliers.
Cons:
o In datasets with no repeated values, the mode is not defined.
o When data have multiple modes, it can be less informative about the center of the distribution.
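All three measures can be reproduced from the Table 5.2 sample using Python's standard library; note that statistics.multimode is used so that both modes are reported.

```python
import statistics

times_to_decision = [3, 4, 0, 1, 1, 6, 5, 3]  # days, per the Table 5.2 sample

print(statistics.mean(times_to_decision))       # 2.875, i.e. ~2.9 days
print(statistics.median(times_to_decision))     # 3.0
print(statistics.multimode(times_to_decision))  # [3, 1], the two modes
```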
iii. Data Visualization
Data visualization plays a crucial role in data analysis and decision-making processes by transforming complex datasets into intuitive visual formats that facilitate understanding and insight. Among the myriad visualization tools available, histograms, scatter plots, and box plots stand out for their ability to convey detailed data distributions, relationships, and variations. Below, we delve into these three types of visualizations, discussing their purposes, how to interpret them, and their significance in data analysis.
Histograms
A histogram is a type of bar chart representing a dataset’s frequency distribution. Each bar in a histogram corresponds to a range of values, known as a bin, and the height of the bar indicates the number of observations within that bin. Histograms are ideal for understanding the shape, central tendency, and spread of continuous data distributions.
For example, Figure 5.1 below shows a histogram displaying the frequency of hourly defect rates, helping identify the most common rate (i.e., three defects per hour) and the distribution’s skewness towards high or low defect rates.
Below are some critical characteristics of histograms that facilitate interpretation:
o Shape: The overall shape of a histogram can indicate whether a distribution is symmetric, skewed, bimodal, etc. This helps identify patterns and anomalies in data.
o Central Tendency: Although histograms do not directly show the mean or median, these can be inferred from the distribution’s center.
o Spread: The range and distribution of the bars indicate the variability of the data. Wider spreads suggest higher variability.
Histogram of Hourly Defect Rates
Scatter Plots
Scatter plots typically display numerical values for two variables in a dataset. The position of each dot on the horizontal and vertical axis indicates values for an individual data point. Scatter plots are used to observe relationships, trends, and potential correlations between variables, e.g. the relationship between a process input variable and the output variable.
In Figure 5.2 below, the scatter plot examines the relationship between delivery time and distance, revealing that longer delivery times correlate with larger distances.
Below are three characteristics of scatter plots that help to interpret them:
o Trend: A pattern in the points can suggest a relationship; an upward trend indicates a positive correlation, while a downward trend suggests a negative correlation.
o Strength: The closer the points are to forming a straight line, the stronger the relationship between the variables.
o Outliers: Points that deviate significantly from the overall pattern can be identified as outliers.
Box Plots
Box plots, or box-and-whisker plots, provide a five-number summary of a dataset: the minimum, first quartile (Q1), median, third quartile (Q3), and maximum. The “box” represents the interquartile range (IQR), and the “whiskers” extend to the minimum and maximum values within a specific range, typically 1.5 times the IQR. Points outside this range are considered outliers.
For example, Figure 5.3 below analyses the number of invoices processed by department locations. The box plot summarizes processing volumes’ central tendency and variability across different locations, highlighting departments with unusual spreads or outliers (e.g. Dept Location D).
Below are some essential characteristics of boxplots that facilitate interpretation:
o Central Tendency and Spread: The box shows the median, highlighting the data’s central tendency and the IQR, indicating the data’s spread.
o Skewness: Asymmetry in the box and whiskers can indicate skewness in the distribution.
o Outliers: Points outside the whiskers are outliers and can be significant in understanding data anomalies.
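For readers who want to generate these three chart types themselves, below is a minimal matplotlib sketch using synthetic data invented to echo the examples above (hourly defect rates, delivery distances, and departmental invoice volumes).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3.5))

# Histogram: distribution of hourly defect rates (synthetic counts).
ax1.hist(rng.poisson(3, size=200), bins=range(0, 10))
ax1.set(title="Histogram", xlabel="Defects per hour", ylabel="Frequency")

# Scatter plot: delivery time vs distance (synthetic positive correlation).
distance = rng.uniform(1, 50, size=100)
ax2.scatter(distance, 10 + 1.5 * distance + rng.normal(0, 8, size=100))
ax2.set(title="Scatter plot", xlabel="Distance (km)", ylabel="Delivery time (min)")

# Box plots: invoices processed across four department locations (synthetic).
ax3.boxplot([rng.normal(100, spread, size=50) for spread in (5, 10, 15, 30)])
ax3.set_xticks([1, 2, 3, 4], labels=["A", "B", "C", "D"])
ax3.set(title="Box plots", xlabel="Dept location", ylabel="Invoices processed")

plt.tight_layout()
plt.show()
```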
iv. Understanding Data Distributions:
Data distribution refers to how data points are spread out or clustered over a range of values. Distributions can be visualized using various graphs, such as histograms, box plots, or density plots, each offering insights into the dataset’s characteristics, such as its central tendency, variability, and outliers.
Understanding data distributions is crucial for selecting appropriate statistical methods for analysis, interpreting results, and making informed decisions.
The calculation or representation of a data distribution often starts with organizing data points within a defined range. For continuous data, this involves dividing the range of data into bins or intervals and counting the number of data points that fall into each bin. The resulting histogram visually represents the distribution, showing the frequency (or probability) of data points within each interval.
For discrete data, distributions can be represented using frequency tables or bar charts, where each unique value is counted and displayed.
Skewness in Data Distributions
Skewness measures the asymmetry of the probability distribution of a real-valued random variable about its mean. In simpler terms, skewness indicates whether the data points are spread out more on one side of the mean than the other.
Positive Skew (Right-Skewed): The tail on the right side of the distribution is longer or fatter than the left side, indicating that a majority of the data points are concentrated on the lower side of the scale (see Figure 5.4 below).
![](https://www.appletongreene.com/wp-content/uploads/Figure5_4-400x245.jpg)
Right–Skewed Distribution
Negative Skew (Left-Skewed): The tail on the left side of the distribution is longer or fatter than the right side, suggesting that most data points are concentrated on the higher side of the scale. (see Figure 5.5 below).
Skewness significantly affects data analysis and interpretation. For instance, in a positively skewed distribution, the mean will be higher than the median, which can impact the analysis if the mean is used to measure central tendency.
![](https://www.appletongreene.com/wp-content/uploads/Figure5_5-400x256.jpg)
Left–Skewed Distribution
The Normal Distribution
The normal distribution, also known as the Gaussian distribution, is a symmetric distribution where the mean, median, and mode are all equal and located at the center of the distribution. It is characterized by its bell-shaped curve, where data points are symmetrically distributed around the mean, decreasing in frequency as they move away from the center.
The Six Sigma methodology uses the sigma rating to indicate the maturity of a process by measuring how many defects it produces. The term “Six Sigma” refers to six standard deviations (sigma) between the mean of a process and the nearest specification limit. Practically, achieving Six Sigma means striving for 3.4 defects per million opportunities (DPMO), representing a nearly defect-free process (see Figure 5.6). The sigma level of a process indicates how many standard deviations separate the process mean (or target) from the nearest specification limit, and therefore how much variation the process exhibits. The goal is to have as small a variation as possible, corresponding to a higher sigma level.
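The DPMO-to-sigma-level conversion can be sketched with scipy. Note that the 1.5-sigma shift used below is a widely cited industry convention rather than a mathematical necessity, so treat that constant as an assumption.

```python
from scipy.stats import norm

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Convert DPMO to a sigma level using the conventional 1.5-sigma shift."""
    return norm.ppf(1 - dpmo_value / 1_000_000) + shift

print(f"{sigma_level(3.4):.2f} sigma")   # ~6.00: the Six Sigma benchmark
print(f"{dpmo(12, 1000, 5):,.0f} DPMO")  # 2,400 defects per million opportunities
```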
v. Pareto Analysis:
Pareto Analysis, often encapsulated by the “80/20 rule,” is a statistical technique in decision-making used to identify a limited number of factors that produce the majority of an effect. It is named after Vilfredo Pareto, an Italian economist who observed that 80% of Italy’s land was owned by 20% of the population in 1906. This principle was later generalized to imply that a small number of causes often lead to a large portion of the effect, a concept widely applicable across various domains, including service processes.
The core idea behind Pareto Analysis is to prioritize efforts on those few causes that have the most significant impact on an issue rather than dispersing efforts thinly over many. In the context of process optimization, it can be a powerful tool to identify critical areas that need improvement, optimize resources, and enhance customer satisfaction efficiently.
Below are the steps for conducting a Pareto Analysis:
a. Identify and List Problems: Gather data on service process issues.
b. Score Problems: Quantify the impact of each problem, often by frequency of occurrence or financial cost.
c. Rank Problems: Order the issues from the highest score to the lowest.
d. Create a Pareto Chart: A visual representation with problems displayed on the horizontal axis and their scores on the vertical axis, typically as a bar graph. A cumulative line graph often overlays this to indicate the cumulative impact.
e. Analyze and Take Action: Focus on the issues that contribute most significantly to the problem (typically the leftmost bars, which should ideally account for roughly 80% of the impact).
Consider Figure 5.7, which displays the relative frequency of causes of errors on a website. The Pareto chart indicates that broken links, spelling errors and missing title tags account for 80% of the errors, while the other seven causes comprise the remaining 20%. By creating a Pareto Chart, the developers can visually assess these contributions and decide to prioritize addressing those top three issues first.
![](https://www.appletongreene.com/wp-content/uploads/Figure5_7.jpg)
Pareto Chart for Error Causes on a Website
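The cumulative-percentage arithmetic behind a Pareto chart is straightforward, as the sketch below shows; the error counts are invented to loosely mirror the website example.

```python
# Invented error counts echoing the website example in Figure 5.7.
causes = {
    "Broken links": 80, "Spelling errors": 52, "Missing title tags": 28,
    "Slow pages": 12, "Bad redirects": 9, "Layout issues": 7, "Other": 12,
}

total = sum(causes.values())
cumulative = 0
for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:20s} {count:4d}  {count / total:6.1%}  cumulative {cumulative / total:6.1%}")
```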
The following are outputs from the Analyze Phase:
o Process Capability Analysis (see Course Manual 4, Section 4.4.5)
o Analysis Report (utilizing the tools above to identify root causes)
5.2.2. Improve
Based on the analysis, this phase focuses on developing and implementing solutions to address the root causes identified. It involves ideation and innovation to find the best solutions, which are subsequently tested through pilots or simulations. Techniques typically utilized in this phase include prioritization and pay-off matrices, future state process definition (see Workshop 1, Course Manual 2, Section 2.5) and FMEA (see Course Manual 4, Section 4.4.3). Understanding these tools can significantly enhance the effectiveness of process improvement initiatives.
During this phase, it is not unusual to be inundated with a plethora of ideas to enhance efficiency, reduce costs, and improve customer satisfaction. However, resources such as time, budget, and personnel are limited, making it imperative to prioritize initiatives effectively. This is where prioritization and pay-off matrices become invaluable tools, helping decision-makers objectively evaluate and rank process improvement ideas according to their potential impact and feasibility. Below, we examine each tool in turn.
i. Prioritization Matrix
A prioritization matrix (also referred to as a weighted criteria matrix) provides an objective, repeatable way to evaluate and rank competing improvement ideas against weighted criteria.
Creating an effective prioritization matrix involves the following steps:
a. Define Criteria: The first step is to establish the criteria against which ideas will be evaluated. Common criteria include impact on business goals, cost, ROI, feasibility, and alignment with strategic objectives. It’s crucial that these criteria are tailored to the organization’s specific context and goals. In Table 5.3 below, three criteria – business value, cost of implementing the solution and implementation risk – are selected.
b. Gather Improvement Ideas (or Solutions): Collect a comprehensive list of process improvement ideas from various sources within the organization, including frontline employees, management, and customers. In the example below (see Table 5.3), three solutions – Adapt Product to French Market, Develop Mobile App and User Onboarding 2.0 – are being evaluated.
c. Weight the Criteria: Not all criteria will be equally important. Assign weights to each criterion to reflect its relative importance to the organization. This step ensures that the scoring system aligns with organizational priorities. In Table 5.3 below, the criteria (i.e., business value, cost of implementing the solution and implementation risk) are weighted 5, 2 and 3, respectively.
d. Score Each Idea: Rate each idea against the predefined criteria using a consistent scoring system, such as a scale from 1 to 5 or 1 to 10. This step may require quantitative data analysis, expert judgment, or a combination of both. In the example below, the three solutions are scored 3, 5 and 1, respectively, for Business Value (i.e., the multiplier for the weighting).
e. Calculate Scores: Multiply the score of each idea by the weight of each criterion and sum these to get a total score for each idea. This quantifiable score will help in comparing ideas (a worked sketch follows this list).
f. Rank the Ideas: Organize the ideas in descending order based on their total scores. This ranking will reveal which ideas are most worth pursuing based on your established criteria and weights.
g. Review and Select Ideas: With the prioritized list or matrix, stakeholders can decide which ideas to implement. This step may also involve discussions to consider any qualitative factors not captured in the scoring.
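The arithmetic in steps c to e reduces to a weighted sum of products, as the sketch below shows. It reuses the Table 5.3 weights (5, 2 and 3); the 1 to 5 scores are invented, with cost and risk scored inversely so that higher is always better.

```python
# Criteria weights from Table 5.3: business value, implementation cost, risk.
weights = {"business_value": 5, "cost": 2, "risk": 3}

# Illustrative 1-5 scores per solution (cost/risk scored inversely: 5 = cheap/low risk).
solutions = {
    "Adapt Product to French Market": {"business_value": 3, "cost": 2, "risk": 4},
    "Develop Mobile App": {"business_value": 5, "cost": 1, "risk": 2},
    "User Onboarding 2.0": {"business_value": 1, "cost": 5, "risk": 5},
}

# Total score per idea: sum of (criterion weight x criterion score).
totals = {
    name: sum(weights[criterion] * score for criterion, score in scores.items())
    for name, scores in solutions.items()
}

for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{total:3d}  {name}")
```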
ii. Payoff Matrix
A Payoff Matrix is a tool for evaluating and prioritizing process improvement ideas based on their potential impact and required effort. This matrix typically divides improvement initiatives into four quadrants along two axes: payoff (or impact – high or low) on the horizontal axis and difficulty (or effort – high or low) on the vertical axis. By categorizing ideas into these quadrants, decision-makers can more effectively allocate resources and prioritize actions that offer the best return on investment.
Below are the steps for creating a payoff matrix:
a. Identify Improvement Ideas: Begin by listing potential improvement ideas. These can emerge from brainstorming sessions, customer feedback, employee suggestions, or analysis of performance data.
b. Define Impact and Effort: Clearly define what constitutes “high” and “low” payoff and difficulty within the context of your organization. The payoff could be measured in terms of customer satisfaction, cost reduction, revenue increase, or efficiency gains. Difficulty could encompass time, cost, manpower, or technical complexity required to implement the idea.
c. Assess Each Idea: Evaluate each improvement idea against the predefined criteria for payoff and difficulty. This assessment often requires input from cross-functional teams to ensure accurate evaluation.
d. Plot on the Matrix: Place each idea on the matrix according to its assessed impact and effort. This visual representation helps in comparing the initiatives relative to each other.
![](https://www.appletongreene.com/wp-content/uploads/Figure5_8-400x277.jpg)
Payoff Matrix
The four quadrants should be analyzed as follows (see Figure 5.8):
o Implement (High Payoff, Low Difficulty) – Quick Wins: These initiatives are highly desirable as they promise significant benefits without requiring substantial resources. They should be prioritized and implemented first.
o Challenge (High Payoff, High Difficulty) – Major Projects: These ideas could transform the business but need significant resources. They require thorough planning and might be executed in phases.
o Possible (Low Payoff, Low Difficulty) – Fill-ins: While these initiatives don’t offer major benefits, their low cost and ease of implementation make them worthwhile as secondary projects.
o Kill (Low Payoff, High Difficulty) – Thankless Tasks: These are the least desirable and should be reconsidered or dropped, as they consume resources without substantial benefits.
The Payoff Matrix not only aids in prioritizing process improvement ideas but also facilitates strategic discussions about resource allocation and project scheduling. Below is a guide for how to interpret and use the matrix:
Prioritize and Implement Quick Wins: Initiatives in this quadrant are ideal starting points. They build momentum and can often fund or justify more significant projects by demonstrating value early on.
Challenge and Strategically Plan Major Projects: Given their resource intensity, these projects need to be thoroughly scrutinized for value. If given the go-ahead, they require careful planning. They may also be broken down into smaller, more manageable pieces that fit into other quadrants.
Leverage Fill-ins: These initiatives can be implemented as and when resources allow without detracting from more impactful projects.
Kill or Reassess Thankless Tasks: Initiatives in this quadrant should be critically reassessed and preferably avoided. If they must be implemented (e.g. due to regulatory or compliance requirements), consider ways to minimize their resource drain.
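As a minimal illustration of the quadrant logic described above, the following Python sketch maps a team’s high/low assessments onto the four quadrants. The example ideas and their ratings are hypothetical.

```python
# Map a team's high/low payoff and difficulty assessments onto the
# four payoff-matrix quadrants. Idea names and ratings are hypothetical.
def quadrant(payoff: str, difficulty: str) -> str:
    mapping = {
        ("high", "low"): "Implement (Quick Win)",
        ("high", "high"): "Challenge (Major Project)",
        ("low", "low"): "Possible (Fill-in)",
        ("low", "high"): "Kill (Thankless Task)",
    }
    return mapping[(payoff.lower(), difficulty.lower())]

ideas = [
    ("Automate status notifications", "high", "low"),
    ("Replace the legacy core system", "high", "high"),
    ("Re-order an internal report layout", "low", "low"),
    ("Manually reconcile archived records", "low", "high"),
]
for name, payoff, difficulty in ideas:
    print(f"{name}: {quadrant(payoff, difficulty)}")
```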
The following are outputs from the Improve Phase:
o Future State (‘To-Be’) Process Model (see Workshop 1, Course Manual 2, Section 2.5)
o SIPOC (see Workshop 1, Course Manual 2, Section 2.5)
o FMEA (see Course Manual 4, Section 4.4.3)
o Prioritization Matrix
o Payoff Matrix
o Pilot test plan and report
Exercise 5.1
5.2.3. Control
The final phase ensures that the improvements are sustained over time. Control mechanisms are implemented to monitor the process and maintain the gains, including control charts, response plans, and continuous process monitoring. The Control phase is crucial for embedding the changes into the organization’s culture and practices.
Below, these tools are explored in detail:
i. Control Charts
Control Charts are a type of run chart used to monitor process performance over time. They graphically display data points in time order and are accompanied by control limits representing the process variation. These charts are pivotal for distinguishing between normal process variation (common cause) and variation indicating a process change (special cause). When a data point falls outside the control limits or a pattern emerges within the limits, it signals that the process may be out of control and requires investigation. Control Charts are essential for maintaining process stability and are a foundational quality management tool.
Below are steps for using control charts:
a. Data Collection: Collect data from the process. This data should be representative of the process performance after improvements have been implemented.
b. Selecting the Right Chart: Choose the appropriate control chart type based on the data type (attribute or variable) and the data distribution.
c. Setting Control Limits: Calculate the upper and lower control limits. These are typically set at ±3 standard deviations from the process mean. Any point outside of these limits indicates a potential issue (a short computational sketch follows the control chart figure below).
d. Plotting Data: Plot the collected data over time against the control limits.
e. Interpreting the Chart: Regularly review the chart to identify any points outside the control limits or patterns that indicate a shift in the process (like trends or cycles).
![](https://www.appletongreene.com/wp-content/uploads/Figure5_9.jpg)
Control Chart
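To illustrate the ±3 standard deviation calculation in step c, below is a minimal Python sketch for an individuals (X) chart, using assumed post-improvement sample data. Note that in practice, control limits for individuals charts are often estimated from moving ranges (as in an I-MR chart) rather than the sample standard deviation used here for simplicity.

```python
import statistics

# Assumed post-improvement measurements (e.g. processing times in minutes).
samples = [12.1, 11.8, 12.4, 12.0, 12.3, 11.9, 12.2, 12.5, 11.7, 12.0]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)  # sample standard deviation (simplified)

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit
print(f"centre line = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

# Step e: flag any point outside the limits (possible special-cause variation).
out_of_control = [x for x in samples if not lcl <= x <= ucl]
print("points to investigate:", out_of_control)
```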
ii. Response Plans
A Response Plan outlines specific actions to respond to process variations identified through control charts or other monitoring tools. A predetermined set of steps guides team members on what to do when a potential issue is detected, ensuring that responses are quick and effective to prevent defects. The plan typically includes identifying the potential variation or issue, corrective actions to be taken, responsible parties for each action, and communication protocols. Response Plans help minimize downtime and mitigate the impact of process variations, ensuring that the process continues to operate within the desired specifications.
Below are steps for developing a Response Plan:
a. Identify Potential Issues: Based on the improvements made and historical data, identify areas where the process could deviate or fail.
b. Define Response Actions: For each potential issue, define specific actions that should be taken to address it. This might include adjusting a machine, retraining staff, or conducting a root cause analysis.
c. Assign Responsibilities: Assign who is responsible for monitoring the process and who is responsible for implementing the response actions.
d. Document the Plan: Ensure the response plan is documented and accessible to all relevant personnel.
e. Train the Team: Train the team on the response plan to ensure they understand how to act when an issue arises.
iii. Continuous Process Monitoring
Continuous Process Monitoring involves the ongoing collection and analysis of process performance data to ensure that the process remains under control and continues to meet customer requirements. Continuous monitoring enables organizations to identify trends, predict potential issues before they occur, and implement preventive measures. It is a dynamic approach that supports continuous improvement by constantly seeking opportunities to optimize process performance.
Below are steps for continuous process monitoring:
a. Establish Regular Review Cycles: Set up a schedule for regular process reviews. This could be daily, weekly, or monthly, depending on the nature of the process.
b. Utilize Real-Time Data: Real-time data monitoring tools should be used (where possible) to provide ongoing visibility of the process performance.
c. Engage Process Owners: Involve those who work with the process daily in the monitoring efforts. They can provide valuable insights and quick detection of anomalies.
d. Feedback Loop: Establish a feedback loop where insights and data from the monitoring process are used to make further improvements or adjustments.
The following are outputs from the Control Phase:
o Control Charts
o Response Plans
o Continuous Process Monitoring Plan
Case Study: Baggage Handling Process, Kenya Airways
Problem
Kenya Airways, the country’s premier airline, plays a significant role in the Kenyan economy. However, in recent years, competition has been growing, requiring the airline to respond appropriately.
For example, a poll to determine the primary reasons underlying consumer and employee dissatisfaction revealed that both stakeholders were displeased with baggage handling process outcomes. This customer dissatisfaction threatened operating profitability, sales growth and competitive advantage. From an employee perspective, workers were unhappy with the extra effort required to clear backlogs. These issues resulted in additional expenses for the company associated with customer reimbursement costs and employee overtime pay.
How Six Sigma Helped
The Six Sigma DMAIC approach helped shed light on the nature of the issue and the best course of action. Customer feedback from the Voice of the Customer exercise indicated that 64% of complaints were about inadequate luggage handling. This data, which was acquired via social media surveys, supported the Define stage. Kenya Airways examined the data collected and verified its accuracy for the Measure phase to create a baseline.
The team examined the top five causes of baggage delays during the Analyze phase.
These top five reasons accounted for 87% of the overall annual delay time (702 hours). Of these, load connection was the most significant contributor, accounting for 536 hours, or 67% of the overall delay time.
Improving communication between the dispatchers and the ramp division and ensuring that they were using the same system was a solution identified as significantly impacting the overall delay time.
A pilot test was conducted for inbound flights from selected locations (e.g. Dubai). After a successful pilot trial run, the process was rolled out to every flight departure and arrival.
Results
The airline reported a reduction in baggage connection delays by 65% after the implementation of these changes. Increased cohesion between the dispatch and ramp teams is another project outcome. The speed at which baggage is loaded and unloaded has increased dramatically. A plan for continuously monitoring the process was also implemented to ensure that the improvements were sustained.
Exercise 5.2
Course Manual 6: Lean Management
Lean management is a systematic approach to running an organization that supports continuous improvement and focuses on delivering maximum value to customers while minimizing waste. This course manual will explore the core aspects of lean management practices, including waste reduction, 5S and visual management. It will also examine the Plan-Do-Check-Act (PDCA) lean methodology in detail.
Lean management originated from the Toyota Production System (TPS) and has since evolved into a universally applied methodology across various industries. The central focus is maximizing customer value while minimizing waste, creating a more efficient and effective process.
Below, we examine several critical lean tools.
1.19 Lean Tools
6.1.1. Waste Reduction
Waste reduction is a fundamental aspect of lean management. Waste (or ‘Muda’ in Japanese) is any activity that consumes resources but creates no value for the customer. Lean management identifies eight types of waste: defects, overproduction, waiting, unutilized talent, transportation, unnecessary inventory, unnecessary motion and extra processing. Organizations can improve efficiency, reduce costs, and increase customer satisfaction by eliminating these wastes.
Below, we explore these eight types of waste, providing real-world examples to highlight how organizations can identify and mitigate inefficiencies in their processes.
i. Defects
Defects refer to any work or output that is flawed and does not meet quality standards, necessitating rework or scrap. For instance, a defect in a car manufacturing plant could be a misaligned door panel. This type of waste leads to time and resources spent on inspection, rework, or replacement, directly impacting profitability and customer satisfaction.
ii. Overproduction
Overproduction is producing more than is needed, faster than required, or before it is required, e.g., a bakery producing more bread than it can sell in a day, resulting in unsold goods going stale and being discarded. Overproduction ties up capital in unsold inventory and increases storage costs.
iii. Waiting
Waiting waste occurs when employees or machinery are idle due to unbalanced workloads or inefficient process flows. An example is workers waiting for materials to arrive on an assembly line. This downtime is a lost opportunity for productivity and delays the overall process.
iv. Non-Utilized Talent
Non-utilized talent refers to the underutilization of employees’ skills, abilities, and knowledge. For example, a highly skilled technician performing basic tasks that are not commensurate with their expertise wastes human potential and results in less effective operations.
v. Transportation
Transportation waste is the unnecessary movement of products or materials between processes. In a distribution center, the excessive movement of goods from one end to another without adding value is an example of this type of waste. This increases the risk of damage and adds to the cycle time.
vi. Inventory Excess
Excess inventory encompasses any supplies above what is required to meet immediate needs. A retail store stocking more items than it can reasonably sell showcases this waste. It ties up capital, increases storage costs, and risks obsolescence.
vii. Motion
Motion refers to unnecessary movements by people. An example is a worker walking back and forth to retrieve tools due to a poorly organized workspace. Such inefficiencies lead to increased processing time and worker fatigue.
viii. Extra-Processing
Extra-processing waste involves doing more work or using more resources than necessary to produce a product or service. An instance of this is double-checking paperwork due to a lack of trust in the process’s accuracy, resulting in additional labor costs and time without adding value to the end product.
Teams should conduct regular “waste walks”, which are observational tours of the physical or digital workspace to identify non-value-added activities or waste. By critically examining how work is being done, observers can see firsthand how processes are executed, where inefficiencies lie, and how waste manifests in various forms.
Below are the steps for conducting a lean waste walk:
a. Prepare for the Walk:
Define Objectives: Clearly define what you aim to achieve with the walk. Objectives could range from identifying waste in a specific process to understanding the flow of materials through a department.
Assemble a Team: Select a cross-functional team with members from different departments. Diversity in the team ensures a broad perspective on the processes being reviewed.
Educate Participants: Ensure all participants understand the Lean principles, especially the categories of waste. This knowledge is crucial for identifying waste effectively during the walk.
b. Conduct the Walk:
Observe and Record: Walk through the targeted area and observe the processes in action. Take notes and record observations without interfering with the work being done. It’s essential to approach this step with an open mind and resist the urge to jump to solutions immediately. See Table 6.1 below for a completed waste walk template for baking a custom cake.
Engage with Employees: While the primary goal is observation, engaging with employees can provide invaluable insights. Ask questions to understand their perspective on the process, challenges they face, and ideas for improvement.
c. Analyze Findings:
Review Observations: Post-walk, compile and review all observations with the team. Classify the identified waste according to the relevant waste categories.
Prioritize Issues: Not all identified wastes are equal. Prioritize them based on their impact on the process and ease of elimination.
d. Develop and Implement Action Plans:
Brainstorm Solutions: For each priority waste, brainstorm potential solutions with the team. Consider involving employees who perform the work in this step as well.
Plan Implementation: Develop action plans for the agreed-upon solutions. Assign responsibilities and timelines for each task.
Monitor Progress: Implement the solutions and monitor their progress over time. Adjust plans as necessary based on feedback and results.
e. Reflect and Repeat:
Review the Impact: After implementing changes, review their impact on the process. Have the changes reduced or eliminated the identified waste?
Continuous Improvement: Lean is about continuous improvement. Regularly scheduled Lean Waste Walks can help maintain focus on waste reduction and process improvement.
6.1.2. 5S
5S represents a disciplined approach to organizing and maintaining a productive work environment. It is commonly used in manufacturing, warehousing, and office settings. The five S’s stand for Sort, Set in Order, Shine, Standardize, and Sustain.
Below, we explore each ‘S’ in detail, providing examples to illustrate their application in a virtual or physical workplace setting.
i. Sort
The first step, Sort, involves differentiating between necessary and unnecessary items and removing the latter. This step reduces clutter, frees up space, and lessens the risk of distraction or misplacement. For instance, tools and materials not essential for current production processes are removed or stored elsewhere in a manufacturing plant. This results in a more organized and efficient workspace where workers can immediately access what they need. An example of sorting a digital workspace would involve deleting or archiving redundant files to reduce clutter.
ii. Set in Order
Once sorting is done, the next step is to organize the remaining items. This involves arranging tools and materials in a manner that promotes workflow efficiency. The principle of “a place for everything, and everything in its place” is critical here. For example, a mechanic’s shop might arrange tools according to their usage frequency, with the most commonly used tools being the most accessible. Labels and color coding can be used to facilitate quick identification and return of items to their designated spots. An example of setting a digital workspace in order might involve ensuring that files are moved to the correct folder where they logically belong.
iii. Shine
Shine emphasizes keeping the workplace clean and orderly. Regular cleaning and inspection are crucial to prevent equipment malfunctions and maintain a pleasant work environment. For instance, a daily cleaning routine in a restaurant kitchen ensures hygiene standards are met and kitchen equipment is in good working order. This step goes beyond mere cleanliness, fostering a sense of pride and care among employees for their workspace.
iv. Standardize
Standardization establishes norms and procedures to maintain the first three S’s. This step ensures that Sort, Set in Order, and Shine are not one-time activities but are integrated into daily work routines. An example would be a retail store implementing daily checklists for employees to ensure that all items are correctly sorted, organized, and the store is clean before opening hours. Such practices ensure consistency in maintaining order and cleanliness.
v. Sustain
The final step, Sustain, involves developing a culture where 5S becomes a way of life rather than a one-off project. This is often the most challenging step, requiring ongoing commitment and discipline. It includes regular training, continuous improvement, and the involvement of everyone in the organization. For example, a company might hold monthly 5S training sessions and encourage employee suggestions for continuous improvement.
6.1.3. Kaizen
A Japanese term meaning “change for the better” or “continuous improvement,” kaizen is a core lean management principle. Below, we delve into the concept of Kaizen in the Lean management system, exploring its fundamental principles and implementation strategies.
At the heart of Kaizen are several guiding principles:
o Continuous Improvement: The fundamental principle of Kaizen is the belief that there is always room for improvement in any process or product, no matter how efficient it currently seems.
o Employee Involvement: Kaizen requires the active participation of employees at all levels. It values their insights and encourages them to propose and implement improvements.
o Customer Orientation: Understanding and meeting customer needs is a crucial driver of Kaizen, ensuring that improvements align with customer satisfaction.
o Process Orientation: Focusing on processes rather than outcomes, Kaizen seeks to identify inefficiencies and bottlenecks to streamline operations.
o Standardization and Stability: Standardizing successful practices ensures that improvements are maintained, creating a stable platform for future changes.
Implementing Kaizen within an organization involves several steps:
a. Cultivating a Kaizen Mindset: The first step is fostering a culture that embraces change and continuous improvement, involving training and education to inculcate Kaizen values among employees.
b. Employee Empowerment: Employees are encouraged to identify areas for improvement and suggest changes. This empowerment increases their engagement and commitment to the process.
c. Small Changes: Unlike radical transformations, Kaizen focuses on small, manageable improvements that collectively lead to significant changes over time.
d. Regular Reviews: Continuous monitoring and review of the processes are essential to identify further improvement areas and to institutionalize successful changes.
6.1.4. Visual Management
Lean visual management is predicated on the principle that transparent, easily accessible, and straightforward information aids decision-making, problem-solving, and continuous improvement. It transforms abstract data and workflows into tangible, visual formats that are immediately understandable to anyone within the organization, irrespective of their role or level of expertise. This method fosters a culture of clarity, accountability, and engagement, aligning all team members towards common goals.
Implementing lean visual management is grounded in several fundamental principles: simplicity, relevance, visibility, and adaptability. Simplicity ensures that visual tools are easy to understand and use. Relevance guarantees that only pertinent information is displayed, avoiding information overload. Visibility ensures that information is accessible to everyone who needs it, and adaptability allows the visual management systems to evolve with changing business needs and objectives.
Lean visual management employs a variety of tools and techniques, each designed to streamline processes and enhance efficiency. Two of the most widely used include:
Andon Systems
These provide real-time alerts and status updates on processes, allowing for swift identification and resolution of issues. Andon systems are composed of three main components:
Signal: Providing process performers with the authority to activate the andon cord, which stops the production line in an emergency, ensuring that problems do not cascade downstream in the process where they are more difficult and costly to resolve.
Alert: Making the queue status (the andon light and boards – see Figure 6.1) extremely visible, especially when there is an issue.
Resolve: Promoting collaboration between supervisors and operators to identify the underlying source of the issue and ensure a long-term solution is put in place.
Empowerment is one of the fundamental tenets of Andon systems as it gives process performers the power and accountability to halt the production process and request help.
![](https://www.appletongreene.com/wp-content/uploads/Figure6_1-1.jpg)
Andon Board
Gemba Walks
While not a tool but a practice, Gemba Walks involve regular, structured walks through the workplace. Leaders observe and discuss visual management tools in use, fostering a culture of continuous improvement and problem-solving.
In summary, lean management is not just a set of tools but a mindset focused on creating a culture of continuous improvement. Its practices are instrumental in helping organizations achieve operational excellence and deliver maximum value to their customers.
In the following section, we explore a lean implementation methodology: PDCA (Plan-Do-Check-Act).
Exercise 6.1
1.20 Lean Methodologies
The Lean PDCA (Plan-Do-Check-Act) methodology is a continuous improvement process used in business and manufacturing to improve processes and products. It is derived from the Deming Cycle, also known as the Shewhart cycle, and integrates principles from Lean Management. The PDCA cycle is central to Lean thinking and management, providing a simple but effective problem-solving and continuous improvement approach.
6.2.1. Plan
The first phase of the Lean PDCA cycle is ‘Plan’. This stage involves identifying a problem or an opportunity for improvement. The key activities in this phase include:
i. Defining the Problem: Clearly stating the problem or improvement area.
ii. Collecting Data: Gathering relevant data to understand the current situation.
iii. Analyzing the Problem: Using tools like Root Cause Analysis or techniques previously discussed (see Course Manual 5, Section 5.2.1) to identify the underlying causes of the problem.
iv. Developing Hypotheses: Formulating possible solutions or improvements.
v. Planning for Implementation: Creating a detailed action plan, including resources needed, timelines, and responsibilities.
6.2.2. Do
The ‘Do’ phase is about implementing the plan. This stage is characterized by:
i. Testing the Solution: To test its effectiveness, the solution is implemented on a small scale, using a pilot program or a trial run.
ii. Documenting the Process: Keeping detailed records of the implementation process and any deviations from the plan.
iii. Engaging the Team: Involving employees and stakeholders in the implementation process ensures everyone understands their roles.
6.2.3. Check
In the ‘Check’ phase, the implementation results are evaluated. This involves:
i. Analyzing Results: Comparing the outcomes of the test implementation against the expected results.
ii. Identifying Learnings: Understanding what worked and what didn’t, and why.
iii. Gathering Feedback: Soliciting feedback from stakeholders and team members.
6.2.4. Act
The final phase, ‘Act’, focuses on standardizing and implementing the successful solution on a broader scale. This includes:
i. Standardizing Successful Practices: If the solution proves successful, standardizing the process and integrating it into regular operations.
ii. Adjusting the Plan: If the results are unexpected, refine the plan based on learnings and insights.
iii. Continuous Improvement: Applying the PDCA cycle continuously to other areas of the process or organization for ongoing improvement.
Lean PDCA integrates seamlessly with Lean principles such as waste elimination, value stream mapping, and continuous flow. By applying PDCA in the context of these Lean principles, organizations can ensure that their improvement efforts are aligned with the overall objectives of reducing waste, improving efficiency, and increasing customer value.
1.21 Integration of Lean and Six Sigma
Lean and Six Sigma are methodologies originating from distinct historical and operational contexts. The integration of these two methodologies began as organizations recognized that Lean and Six Sigma principles could complement each other. While Lean focuses on speed and efficiency by eliminating waste, Six Sigma focuses on quality and precision by reducing variability. Fusing these two approaches allows organizations to achieve faster and more efficient processes without compromising quality. Many organizations utilize Lean Six Sigma – a combination of tools and techniques from both methodologies – to drive process optimization initiatives.
Just as a skilled builder surveys their project, assessing the materials, the structure’s purpose, and the environmental challenges before selecting the right tools from their toolbox, a process optimization team operates in a remarkably similar manner when choosing between Lean and Six Sigma methodologies to ensure the success of a project. For example, a sledgehammer might be selected for demolition, while a more precise hammer might be used for finishing work. In process optimization, Lean tools are selected to optimize efficiency and flow, e.g., eliminate non-value-added activities (waste), streamline processes, and improve workflow. On the other hand, Six Sigma is akin to choosing precision tools that identify and reduce variability and defects in a process. Tools from the Six Sigma methodology, like DMAIC, are selected for projects that require detailed statistical analysis to improve quality and consistency.
The core principles of Lean Six Sigma revolve around eliminating waste (Lean) and reducing process variation (Six Sigma). This integrated approach focuses on improving process flow and quality simultaneously, leading to higher efficiency and effectiveness in operations. Lean Six Sigma utilizes tools and techniques from both methodologies, such as value stream mapping from Lean and statistical process control from Six Sigma. The DMAIC (Define, Measure, Analyze, Improve, Control) framework from Six Sigma provides a structured methodology for tackling problems and improving processes. At the same time, Lean principles guide identifying and eliminating non-value-adding activities.
Case Study: Invoice Capture-To-Payment Execution Process
Problem
About 360 full-time employees and 180 contractors worked at the Chesapeake, Virginia, centre, which offered a variety of accounting transaction processing and financial statement preparation services to the Transportation Security Administration (TSA), Coast Guard, and Department of Homeland Security (DHS). The centre also started providing full accounting services for the Domestic Nuclear Detection Office. Every year, the centre handles about 2.5 million transactions.
The complexity of the process increased rapidly in a relatively short period, resulting in overworked and stressed employees and systems, which led to an increase in rework cycles, delays, mistakes, fines, duplicate payments, and other issues. However, budgetary constraints meant addressing the problem by recruiting additional employees was not an option.
How Lean Six Sigma Helped
The process optimization project started by creating a value stream map to determine where non-value-added time was spent (see Figure 6.3). The group then created a measurement system to quantify the time spent on these non-value-added activities. The group also started tracking queue volumes in relation to process activities over time.
This facilitated process transparency, e.g., establishing that the process cycle time was 14 days with a process sigma level of 1. The project also collected VoC data by surveying TSA clients over the phone.
Lean Six Sigma tools, including process mapping, basic statistical analysis, FMEA, cause-and-effect analysis, etc., were extensively used to determine the root cause of identified problems and suitable solutions.
Two Kaizen events were also held. One of these events focused on resolving an issue with the Authorised Certifying Officer (ACO) invoice approval queue, which the value stream mapping exercise had identified as a bottleneck due to the large number of items in it (an average of 175 invoices per day, with occasional spikes to almost 700).
As a result of the focus at this kaizen event, the queue dropped to an all-time low of one invoice.
Subsequently, a plan was implemented to maintain an appropriate amount of daily work in progress (WIP). The group calculated that 80 invoices in the queue would equal one day of work in progress. The finance centre management teams now use control charts to manage the work in progress (WIP) to no more than 80 invoices daily with a stretch target of 50.
Result
Since the kaizen event, the approval queue volume has not exceeded 83 invoices. The improved process has resulted in a reduction in overtime and a decrease in interest and late fines.
The more profound understanding of the underlying source of the issue obtained from this initiative has also provided a foundation for the finance centre to devise improvement strategies that simplify and streamline the overall process.
Exercise 6.2
Course Manual 7: Change Process
1.22 Introduction
In Workshop 1, Course Manuals 10-12, we explored the pivotal element of people in the process excellence journey. For the remainder of this workshop, we will examine the “process of change” instead of the optimization of a specific process as we have done thus far in this workshop (see top right circle in Figure 7.1). Specifically, we will focus on agile change methodologies.
Without a structured change process, the organization might not adequately assess the risks associated with the change, leading to unforeseen challenges that could jeopardize the change initiative’s success. Additionally, the absence of a structured approach can hinder communication and transparency, leaving employees feeling uninformed and resistant to change. This resistance can slow down or even derail the implementation of necessary changes. Finally, neglecting to follow a structured change process can impact the organization’s ability to learn and adapt, as it misses out on opportunities to evaluate the effectiveness of the change and incorporate feedback into future initiatives.
Agile methodologies have revolutionized the way projects are managed and executed. This workshop will focus on how they can be used to manage process optimization projects. Among these methodologies, Scrum stands out as a prominent framework. This course manual aims to explore the core principles of Agile and how Scrum is derived from these principles. It will also compare Scrum with the traditional waterfall methodology, an alternative change management methodology.
1.23 Agile Principles
Agile is based on twelve core principles outlined in the Agile Manifesto. These principles focus on:
o Customer satisfaction through early and continuous delivery of value
o Embracing change, even in late development stages
o Delivering valuable change frequently
o Close collaboration between business stakeholders and change agents
o Supporting and trusting individuals to get the job done
o Face-to-face conversation as the best form of communication
o Measuring progress primarily through delivered change
o Maintaining a sustainable pace of change
o Continuous attention to excellence and good design
o Simplicity
o Self-organizing teams
o Regular reflection on how to become more effective
7.2.1. Scrum Derivation from Agile
Scrum is a subset of Agile and embodies its principles through its practices and roles, as follows:
o Iterative Change: Reflecting Agile’s emphasis on frequent delivery, Scrum uses short, time-boxed iterations called Sprints, typically lasting 2-4 weeks, to deliver incremental change.
o Collaboration and Communication: Scrum enhances team collaboration, a core Agile principle, through daily stand-up meetings, sprint planning, and reviews.
o Responding to Change: Scrum teams regularly reflect on their effectiveness and adjust accordingly, aligning with Agile’s focus on adapting to change.
o Empowerment and Trust: Scrum empowers teams to self-organize and make decisions, resonating with Agile’s principle of building projects around motivated individuals.
7.2.2. Comparison with Waterfall Methodology
The term ‘Scrum’ is borrowed from rugby, representing a team moving down the field together. This metaphor is apt for the Scrum methodology, where the team works in unison towards a common goal, supports each other, and adapts to overcome obstacles, much like a rugby team in a scrum situation. This approach contrasts with a relay race, which represents a more segmented, sequential process like the Waterfall methodology (see Figures 7.2 and 7.3).
o Sequential vs. Iterative: Waterfall is a linear and sequential approach, where each phase must be completed before the next begins. In contrast, Scrum is iterative, allowing for simultaneous phases of work and more flexibility.
o Change Management: Waterfall struggles with late-stage changes, whereas Scrum is designed to accommodate and adapt to changes even in later stages of development.
o Feedback and Testing: In Waterfall, feedback is typically received after the completion of the project. Scrum, aligning with Agile principles, incorporates regular feedback throughout the project lifecycle.
o Risk Management: Scrum’s regular iterations allow for the early discovery of issues and risks, whereas in Waterfall, risks may not become apparent until the testing phase.
Scrum, rooted deeply in Agile principles, offers a dynamic and collaborative approach to project management, starkly contrasting to the linear, structured Waterfall methodology. The rugby metaphor aptly describes Scrum’s team-centric, adaptive, and iterative nature. By focusing on flexibility, team empowerment, and customer collaboration, Scrum, as an Agile framework, has become a preferred choice for many organizations seeking efficiency and adaptability in their project management processes.
Exercise 7.1
1.24 Scrum Planning
In Scrum, aggregate planning involves laying out the broader objectives and goals of the project. This high-level planning is less about specifics and more about setting the direction and vision of what needs to be achieved. It typically encompasses the following elements, starting from broader to more granular planning:
7.3.1. Roadmap Planning
This is the strategic view that outlines the major objectives or goals of the process optimization project. It sets the stage for what the Scrum team aims to achieve in the long run.
7.3.2. Release Planning
While still high-level, release planning is more granular than roadmap planning. It focuses on what incremental process improvements will be delivered to the customers and when. This phase often involves forecasting and considering business priorities, market conditions, and resource availability.
7.3.3. Sprint Planning
The sprint – a time-boxed period, usually two to four weeks, during which specific process changes will be implemented – is a crucial concept in Scrum.
Sprint planning is where Scrum teams plan their work in detail for the next sprint. During sprint planning, the following occurs:
o Sprint Goal Definition: The team and the Product Owner establish a sprint goal that aligns with the process optimization vision and roadmap.
o Task Breakdown: The team selects items from the product (process) backlog and breaks them down into specific tasks. This process ensures that the team understands what needs to be done and can realistically commit to completing these items within the sprint.
7.3.4. Capacity Planning
The team assesses its capacity, taking into account team members’ availability and other commitments, to ensure that the sprint plan is achievable.
1.25 Scrum Estimation
Effective task estimation is critical to successful project management in Agile methodologies. However, it is often fraught with challenges, such as cognitive biases like the halo and bandwagon effects. Agile teams have developed several techniques to counter these issues, such as Planning Poker and using user stories instead of tasks. This section explores these challenges and techniques, the components of a user story, story readiness, and the definition of ‘Done’.
7.4.1. Estimation Challenges
i. The Halo Effect occurs when past performance or perceptions of an individual or team unduly influence the estimation of future tasks. For example, if a team member previously delivered high-quality work under tight deadlines, their tasks might be underestimated in the future, disregarding the complexity of the new tasks.
ii. The Bandwagon Effect is a groupthink phenomenon where individuals go along with the majority view during estimations rather than contribute their independent analysis, leading to inaccurate estimations as dissenting, yet potentially accurate, opinions are not considered.
Planning Poker is an Agile estimation technique that mitigates these biases. It involves team members making estimates independently using numbered cards, followed by a discussion to reach a consensus. This process ensures diverse viewpoints are considered, reducing the impact of individual biases and promoting a more accurate estimation.
Scrum encourages relative sizing for estimating work, which involves assessing the size of a task in comparison to other tasks rather than assigning absolute hours or days to it. This more flexible method acknowledges the inherent uncertainty in estimating complex work.
The Fibonacci series (1, 2, 3, 5, 8, 13, 21, 34, 55, 89…) is often used in this context. This sequence is preferred because as the numbers grow, they do so in a way that reflects the increasing uncertainty with larger tasks. This non-linear progression encourages more accurate estimation as it avoids the false precision of linear scales and helps teams make better judgments about the relative effort required for backlog items.
To illustrate, we revisit the Credit Card Application-to-Activation process. Assume the effort required to implement two improvement solutions – (1) simplification of the online application form by reducing the number of fields and (2) offering potential applicants a quick pre-approval process that uses a soft credit check – are being estimated. Rather than estimate the effort (e.g. time required) for each solution, the project team may rank the first idea 3 points and the second 5 points, indicating that the team believes the second idea requires more effort than the first.
Note: Using the ten numbers in the sequence, as shown above, is recommended so that the solution that requires the least relative effort is rated as ‘1’ and the solutions that require the most relative effort are rated ‘89’. This keeps the estimation effort manageable.
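For illustration, the short Python sketch below snaps a raw, discussion-derived effort estimate onto the recommended ten-number scale. The helper function and sample values are assumptions for demonstration, not part of any standard Scrum tooling.

```python
# Snap a raw, discussion-derived estimate onto the ten-number Fibonacci
# scale recommended above. This helper is illustrative, not standard tooling.
FIB_SCALE = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

def to_story_points(raw_estimate: float) -> int:
    # Pick the scale value closest to the raw estimate;
    # ties resolve to the smaller value.
    return min(FIB_SCALE, key=lambda p: abs(p - raw_estimate))

print(to_story_points(4))   # -> 3
print(to_story_points(11))  # -> 13
```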
7.4.2. User Stories
In Agile, work items are often specified as user stories instead of tasks. User stories are small, manageable pieces of work that can be completed within a sprint. They are detailed enough to be actionable and testable.
This approach has several advantages:
i. Focus on Value: User stories are framed from the user’s perspective, ensuring the team focuses on delivering value to the customer.
ii. Clarity and Context: User stories provide more context and clarity, detailing what needs to be done, for whom and why.
iii. Enhanced Communication: Stories facilitate better communication and understanding among team members, stakeholders, and customers.
A well-written user story typically includes three components (see Figure 7.4):
o Who (Personas): This identifies the user or persona for whom the story is written, providing insight into their needs and motivations.
o What: This describes what the user wants the product to do.
o Why: This explains the user’s need, providing context and justification.
The Critical-To-Quality (CTQ) measures discussed in Course Manual 3, Section 3.4.2 can assist in writing user stories. For example, revisiting the Mortgage Application-to-Offer process, the user story could read as follows:
As a mortgage applicant,
I want an offer to my submitted mortgage application within 28 days,
so that I can proceed to the completion of my mortgage promptly.
In contrast to a user story, an epic is a more extensive work item that cannot be completed in a single sprint. Epics are broken down into smaller user stories for execution.
7.4.3. User Story Readiness
In Agile and Scrum frameworks, the concept of story readiness is critical to the success of a Sprint. It refers to the state where user stories are sufficiently prepared for implementation. Two key concepts integral to story readiness are the INVEST criteria and the Definition of Done. If a sprint is executed with user stories that are not ready, the risk exists that the project team will lack clarity on whether the user story has been completed or not.
Below, we explore how these concepts guide teams in preparing and executing user stories effectively within a Sprint.
The INVEST Criteria
The INVEST criteria, an acronym coined by Bill Wake, outline the characteristics of a good user story. The acronym stands for Independent, Negotiable, Valuable, Estimable, Small, and Testable.
i. Independent: Each story should be self-contained, with minimal dependencies on other stories. This independence facilitates easier planning and testing.
ii. Negotiable: Good stories leave room for discussion and adaptation. They are not overly prescriptive but a starting point for conversation about requirements.
iii. Valuable: A story must deliver value to the customer. The focus should be on the outcome and benefit for the user, not just on completing a set of tasks.
iv. Estimable: Teams must be able to estimate the effort required to complete a story. If a story is too vague or complex, it needs further refinement.
v. Small: Stories should be small enough to be easily manageable and completed within a single Sprint but large enough to provide significant value.
vi. Testable: There must be clear acceptance criteria to evaluate whether the story is completed as intended, ensuring quality and meeting the user’s needs.
The Definition of Done
The Definition of Done is a clear and concise list of criteria that indicates when a user story is considered complete. It ensures consistency and quality in the deliverables and helps prevent work from being carried over into subsequent Sprints.
i. Clarity and Consensus: The Definition of Done must be clearly defined and agreed upon by the entire Scrum team, including Subject Matter Experts (SMEs), process analysts, and the product (process) owner.
ii. Quality Assurance: It should include criteria that ensure the quality of the work, such as passing all tests, meeting defined standards, and satisfying documentation requirements.
iii. Alignment with User Acceptance: The criteria should align with the user acceptance criteria defined in the user story, ensuring that the work meets customer expectations.
iv. Review and Adaptation: The Definition of Done should be periodically reviewed and adapted as the team understands the project and its standards evolve.
The quantitative CTQ measures should be closely aligned with story readiness. Extending the example of the Mortgage Application-to-Offer process, the acceptance criteria could read as follows:
Given the receipt of a set of mortgage applications in a given period
When an offer was made to the applicants
Then, for at least 80% of these applications, the time from receipt to offer should be no more than 28 days.
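To show how such an acceptance criterion could be verified quantitatively, below is a minimal Python sketch that checks the 80%-within-28-days target against a set of assumed application-to-offer durations.

```python
# Check the criterion: at least 80% of applications should move from
# receipt to offer within 28 days. Durations (in days) are assumed data.
durations_days = [25, 30, 21, 27, 35, 14, 28, 26, 22, 29]

within_target = sum(1 for d in durations_days if d <= 28)
pct = 100 * within_target / len(durations_days)

print(f"{pct:.0f}% of applications received an offer within 28 days")
print("criterion met" if pct >= 80 else "criterion not met")
```

In this assumed sample, only 70% of applications meet the 28-day target, so the story would not yet satisfy its Definition of Done.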
In Sprint planning, the INVEST criteria and the Definition of Done complement each other to ensure story readiness. The INVEST criteria guide the creation and refinement of user stories, ensuring they are well-structured and prepared for implementation, while the Definition of Done serves as a benchmark for the completion of these stories, setting clear expectations for the team.
Doing so ensures the following benefits:
Prioritization and Refinement: During backlog refinement and Sprint planning, the INVEST criteria help prioritize and break down stories to ensure they are ready for Sprint.
Quality and Consistency: The Definition of Done ensures that each story meets a consistent level of quality, reducing the risk of rework and ensuring customer satisfaction.
Efficient Execution: With clear and well-defined stories and completion criteria, the team can work more efficiently, focusing on delivering value rather than clarifying ambiguities.
7.4.4. Sprint Velocity
Sprint velocity is a critical metric in Scrum and Agile methodologies, used to quantify the amount of work a team can complete in a single sprint, typically measured in story points or any other unit of measure the team uses to estimate user stories. This metric is crucial for planning and forecasting future work, as it provides a realistic picture of the team’s capacity and productivity over time.
To determine sprint velocity, a team should follow these steps (a worked sketch follows the list):
i. Estimation of User Stories: During the sprint planning phase, user stories are estimated in story points or other units typically using the numbers in the Fibonacci series (1, 2, 3, 5, 8, 13, … – see Section 7.4.1). This estimation reflects the complexity, effort, and time required to complete each task.
ii. Completion of User Stories: At the end of the sprint, the team calculates the total number of points for all the user stories that have been completed. User stories that are not fully completed are excluded from the velocity calculation.
iii. Average Calculation: To get a more accurate picture, calculate the average velocity over several sprints, which smoothens any anomalies due to unusual sprints (e.g., ones with more holidays or unexpected challenges).
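As a worked illustration of steps ii and iii, the following Python sketch averages completed story points over several sprints and uses the result for a simple backlog forecast. The sprint data and backlog size are hypothetical.

```python
import math

# Completed story points from the last five sprints (hypothetical data).
completed_points_per_sprint = [21, 18, 24, 19, 22]

# Step iii: average the completed points to obtain the sprint velocity.
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
print(f"average velocity: {velocity:.1f} story points per sprint")

# A simple forecast: sprints needed to clear a backlog of a given size.
backlog_points = 160
print(f"estimated sprints to complete backlog: {math.ceil(backlog_points / velocity)}")
```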
In conclusion, Scrum planning effectively balances long-term vision with the flexibility needed for agile execution. At the aggregate level, it sets the strategic direction, while sprint planning focuses on the immediate tasks with a high level of detail. Relative sizing and the Fibonacci series in estimation acknowledge the complexity and uncertainty inherent in process optimization, allowing for more realistic and adaptable planning. This blend of strategic foresight and adaptable execution makes Scrum a practical framework for managing simple and complex process optimization projects.
Case Study: Claim Submission-to-Settlement Process, Medical Division of a Major Insurance Provider, Hong Kong
Problem
The organization experienced increased operating costs due to its high claim processing costs. Its siloed way of working also led to the organization becoming overwhelmed with claims, leading to poor customer experience. Finally, the lack of a suitable data capture mechanism resulted in a lack of transparency regarding progress assessment and benefit tracking.
How Agile Helped
The CEO launched an agile transformation program to tackle these problems. However, the multi-disciplinary Scrum team required for the program was geographically dispersed and, as previously mentioned, worked in departmental silos. Hence, a design thinking workshop was organized at the start of the initiative as a means of bringing the team together and helping them comprehend the Voice of the Customer. The Scrum team consisted of eight people, including the CEO, the Head of Data Analytics, two data scientists, a medical doctor, and a nurse.
The team aimed to use Scrum to develop a customer-centric organization and improve its culture, including increased transparency and a change to a more flexible working style.
The Scrum team worked in 2-week sprints and collectively established their goals, which they worked towards each sprint. Additionally, they held all the Scrum Ceremonies, which improved communication within the dispersed team. Due to the COVID-19 pandemic, they could not collaborate in person frequently and worked primarily virtually.
The following were some of the criteria that the teams adopted as their acceptance criteria (i.e. Definition of Done):
o New or improved processes would adhere to published guidelines and not breach regulatory standards.
o All knowledge-sharing sessions, conclusions, or training should be documented.
o All data visualization dashboard designs should adhere to the guidelines established by the Global Data Analytics Team.
The Scrum team held multiple knowledge-sharing meetings amongst the data science & analytics, clinical & network, and proposal & customer service teams to achieve greater team collaboration and transparency. This allowed them to share best practices and improve communication between the various departments while also better understanding the customer journey and business process.
Results
The Scrum Team delivered several improvements to the organization’s business processes. For example, they developed a data model to assess the expense of medical claims and determine which operation had the highest number of claims. This data model resulted in a unified data view, leading to better measurement, transparency, and decision-making.
The team also created a framework and data model to collect cost information for each episode of care (EOC).
Additionally, team members experienced increased happiness and empowerment (assessed via a post-implementation survey).
Exercise 7.2
Course Manual 8: Scrum Practices
8.1 Introduction
As discussed in the previous course manual, Scrum is an Agile project management framework that helps teams work together to deliver and sustain process optimization projects of varying complexity. In this course manual, we will examine Scrum artifacts and ceremonies. Additionally, we will explore Scrum roles in detail and explain the key responsibilities associated with each role.
8.2 Scrum Artifacts
Scrum artifacts provide vital information that the project team and stakeholders require to successfully guide, prioritize, and track the progress of their projects. By maintaining these artifacts, Scrum teams ensure transparency, foster communication, and focus on delivering value to the customer.
The primary Scrum artifacts include the Project Charter, Product (Process) Backlog, Sprint Backlog, and the Increment.
Below, we examine each artifact in detail and explain the purpose it serves:
8.2.1. Project Charter
The Project Charter is a document that outlines the project’s purpose, scope, objectives, and stakeholders. It includes the problem statement, goals, timeline, and team roles. The charter provides a clear direction and ensures alignment among all stakeholders.
The Project Charter is typically created at the start of the project and shared with team members and stakeholders for transparency, comments, corrections, etc. However, as the Agile methodology recommends review at the end of every sprint cycle, the project milestones, objectives and acceptance criteria are liable to change. When these occur, a new version of the project charter should be created and distributed.
The Project Charter typically contains the following components:
o Executive Summary
o Project Scope
o Assumptions
o Expected Deliverables
o Key Stakeholders
o Team Members
o Project Resources
o Communication Plan
o Risks
8.2.2. Product (Process) Backlog
The Product (Process) Backlog is a prioritized list of agreed process improvement ideas or solutions and is the single source of requirements for any changes to be made to the process. Though Scrum typically refers to a product backlog, as this artifact will be utilized to optimize processes, we will refer to it as the process backlog for the remainder of this training program.
The process backlog contains:
o List of Agreed Process Improvement Solutions: Detailed descriptions of the improvement solutions that the process requires.
o Process Fixes: Identified issues or finetuning of changes previously implemented that need addressing.
o Implementation Work: Necessary tasks required to implement process changes, e.g. technical work or conducting process performer training.
o Knowledge Acquisition: Work to learn more about customers, technology, or the domain.
The Product (Process) Owner dynamically prioritizes the process backlog based on business value, customer needs, urgency, and the potential to deliver a cohesive product increment. The Scrum team subsequently uses the top-priority items in the backlog to plan their sprints.
8.2.3. Increment
The increment artifact in Scrum encapsulates the culmination of completed Process Backlog items during a sprint, combined with the increment value from all previous sprints. It represents the tangible outcome of the current sprint’s efforts, demonstrating progress towards the final optimized process.
The Increment is essential for ensuring that potential value-adding changes are continually being made to the process, reflecting the Scrum team’s commitment to delivering incremental value to the customer.
This artifact is a critical measure of progress and success, as it must meet the Definition of Done agreed upon by the team and be in a usable condition. Though the Product (Process) Owner may decide to keep it as an internal project document (as opposed to sharing it with project stakeholders), it is recommended that this artifact be shared widely, as the Increment fosters transparency, enabling stakeholders to see real progress and facilitating feedback for future iterations.
Exercise 8.1
8.3 Scrum Ceremonies
Integral to the Scrum framework are its ceremonies or rituals, which create a routine and structure for the work. Scrum ceremonies provide the structure and rhythm needed for teams to navigate complex projects and achieve successful outcomes.
These ceremonies are essential for keeping the team aligned, keeping the project on track, and ensuring continuous improvement. This section explores the key Scrum ceremonies: Sprint Planning Meetings, Daily Scrum, Sprint Review, and Sprint Retrospective.
8.3.1. Sprint Planning
The sprint planning meeting is held at the beginning of each Sprint to allow the team to discuss and decide what to accomplish in the upcoming Sprint.
The Product (Process) Owner (PO) presents the prioritized process backlog items (PBIs) to the team, which then selects the PBIs they can commit to completing during the Sprint, breaking them down into tasks and estimating their effort.
8.3.2. Daily Scrum (Huddle)
The Daily Scrum (also referred to as a “huddle” or “stand-up”) is a short, daily meeting, often held at the same time and place every day, lasting no longer than 15 minutes, designed to synchronize activities and create a plan for the next 24 hours.
The purpose of this meeting is not simply for team members to provide updates; it also offers an opportunity for collaboration and problem-solving.
During the meeting, each team member typically answers three questions:
1. What did I accomplish yesterday? This helps in understanding progress and setting the context for current priorities.
2. What will I do today? This focuses on immediate actions and how they align with the sprint goal.
3. What obstacles are impeding my progress? Identifying and addressing impediments promptly is crucial in maintaining the momentum of the sprint.
These answers ensure that every team member knows what needs to be done and is actively engaged in finding solutions to challenges. This active engagement is crucial in driving the project forward and achieving the sprint goals.
8.3.3. Sprint Review
Held at the end of each Sprint, the Sprint Review is a working session where the Scrum Team and stakeholders review their accomplishments during the Sprint. This session aims to inspect the incremental changes made to the process and adapt the Process Backlog if needed.
During the Sprint Review, the team demonstrates the work accomplished, and the Product (Process) Owner discusses the current state of the Process Backlog. The team and stakeholders collaborate on what to do next.
8.3.4. Sprint Retrospective
This ceremony occurs after the Sprint Review and before the next Sprint Planning Meeting.
The Retrospective allows the team to inspect its internal processes and create a plan for improvements to be enacted during the next Sprint.
The team discusses what went well during the Sprint, what problems it encountered, and how those problems were (or were not) solved.
8.4 Scrum Roles
Iterative and incremental practices characterize Scrum. Central to its successful implementation are three primary roles: the Product (Process) Owner, the Scrum Master, and the Team Member. Each role comes with distinct responsibilities and is essential for the smooth functioning of the Scrum process.
Below, we examine each of these roles in detail, together with its associated responsibilities.
8.4.1. The Product (Process) Owner
The Product (Process) Owner plays a crucial role in the Scrum framework. They act as a bridge between the stakeholders and the development team, ensuring that the team works on tasks that bring the most value to the business. The Process Owner (see Course Manual 1, Section 1.4.3) will typically fulfil this role; hence, we will refer to the Process (rather than the Product) Owner for the remainder of this workshop.
Responsibilities of the Process Owner
o Defining the Process Vision: The Process Owner is responsible for outlining the process vision, ensuring that it aligns with the company’s goals and customer needs.
o Managing the Process Backlog: They prioritize and maintain the process backlog, a list of all agreed improvement solutions and requirements for the process. This includes ensuring user stories and acceptance criteria are complete and meet the required standard.
o Prioritizing Needs: They prioritize the needs based on business value and ensure the team knows the priorities.
o Stakeholder Communication: The Process Owner is the primary communicator with stakeholders, gathering feedback and conveying it to the team.
o Making Decisions: They have the authority to decide what process changes are implemented.
The Process Owner bridges the gap between customer needs and the development team.
Three key attributes are essential for an effective Process Owner:
i. Close to the Customer
A Process Owner must deeply understand the customer’s needs, preferences, and pain points. This understanding enables them to articulate customer demands accurately and ensures that the process delivers real value.
They must engage continuously with customers and stakeholders to gather feedback and insights, which are critical in shaping the process vision and optimization efforts.
ii. Responsible for the Process Backlog
The Process Owner is the custodian of the process backlog, which includes prioritizing and refining backlog items to ensure clarity and alignment with customer needs.
They must balance factors such as market trends, business priorities, and technical feasibility while prioritizing backlog items, ensuring the team is always working on the most valuable tasks.
iii. Available to the Team
The Process Owner must be readily available to the project team to provide guidance, clarify requirements, and make swift decisions, thereby preventing delays and ensuring smooth progress.
Their constant interaction and collaboration with the team help build a shared understanding of process goals and maintain alignment throughout the optimization journey.
8.4.2. The Scrum Master
The Scrum Master is akin to a coach for the Scrum Team, ensuring that the Scrum practices and rules are followed. They facilitate communication and cooperation among all participants of the project.
Responsibilities of the Scrum Master
o Facilitating Scrum Ceremonies: They facilitate key Scrum meetings, including sprint planning, daily stand-ups, sprint reviews, and retrospectives.
o Removing Obstacles: The Scrum Master helps remove impediments or guides the team to remove obstacles that might affect their performance.
o Coaching Team Members: They ensure the team understands and adheres to Scrum theory, practices, and rules.
o Shielding the Team: The Scrum Master protects the team from external interruptions and distractions.
o Continuous Improvement: They encourage and help the team continuously improve its processes and practices.
Key attributes for a Scrum Master include:
i. Servant Leadership
The Scrum Master should exhibit a servant leadership style (see Workshop 1 Course Manual 10 Section 10.4), serving the team by removing impediments, facilitating processes, and ensuring the team can work effectively without external distractions or blockers.
They lead by example and place the team’s needs above their own, fostering an environment of collaboration, empowerment, and continuous improvement.
ii. Facilitation Skills
A Scrum Master must effectively facilitate Scrum ceremonies (such as daily stand-ups, sprint planning, reviews, and retrospectives), ensuring they are productive and focused.
They should be skilled in conflict resolution, helping the team navigate disagreements and fostering a culture of open communication.
iii. Agile Advocate
The Scrum Master champions Agile principles and practices, ensuring the team adheres to Scrum methodologies.
They educate the team, stakeholders, and the organization about Scrum, promoting its adoption and understanding.
8.4.3. Team Members
Team members are the individuals who deliver improvements to the process. They include Subject Matter Experts (SMEs), Process Analysts and other specialists who contribute to the optimization of the process.
Responsibilities of the Team Member:
o Delivering Incremental Process Improvement: They are responsible for delivering potential value-adding incremental improvements to the process during each sprint.
o Self-Organizing: Team Members are expected to be self-organizing, managing their workload and collaboratively deciding how to achieve the sprint’s goals.
o Cross-functionality: They should have diverse skills or be willing to learn new skills to help the team be cross-functional.
o Participating in Scrum Ceremonies: Each member actively participates in all the Scrum ceremonies and shares the responsibility of meeting the sprint goals.
o Continuous Learning and Adaptation: Team Members should continually seek ways to improve themselves and their work process.
In conclusion, the roles of Process Owner, Scrum Master, and Team Member are interdependent. The Process Owner defines the process optimization vision, the Scrum Master facilitates the Scrum process, and the Team Members deliver the required improvements. Effective collaboration and an understanding of each role's responsibilities are essential for the success of a Scrum project. By fulfilling their unique roles and working together as a cohesive unit, these three Scrum roles drive successful, efficient, and adaptive process optimization.
Case Study: Medical Bill Creation-to-Payment Process, Hospital Sírio-Libanês, Brazil
Problem
Hospital Sírio-Libanês is a world-renowned Brazilian medical facility which treats more than 120,000 patients annually across 40 specialities. However, given the significant number of patients treated, the hospital struggled to respond to their needs while operating efficiently. Projects established to address this problem utilized the waterfall methodology, resulting in inconsistent and error-prone tracking.
A collision of working models also occurred. The organization prioritized local issues over global ones and operated in hierarchical, departmental silos, with each group possessing its unique ways of working.
How Agile Helped
To address these and other related challenges, the hospital began its journey to become an Agile organization. The objectives of the agile transformation included maximizing value delivery by reducing waste and other non-value activities.
Establishing an Agile Centre of Excellence was the first step in their transformation journey. Scrum was selected as their primary framework due to its simplicity and perceived organizational fit.
The first project selected involved several organizational departments and was quite complex. The project’s goal was to shorten the average time to receive bills and decrease the number of initial rejections reported by health operators. The process backlog included items related to process automation, protocol enhancements, database registration improvements for medications and medical supplies, and product package improvements. Five Scrum teams were created to deliver this project in parallel.
The Product (Process) Owner was in charge of optimizing the value of the Scrum team’s work and had an extensive understanding of the business and the process. As the spokesperson for organizational stakeholders, the Product (Process) Owner acted as an effective liaison between them and the Scrum teams.
The COVID-19 pandemic impacted the Scrum teams after a month and a half of work, forcing them to switch to remote working. However, the impact was lessened because the transparency enabled by Scrum provided visibility into the day-to-day work of each team and team member. The Scrum ceremonies facilitated teamwork and communication, which safeguarded the delivery schedule.
Results
The deliverables that the Scrum teams produced included:
o Products that significantly reduced the time to receive payment and decreased the number of charge rejections by automating various information exchange processes with operators. The initial loss of revenue was reduced by R$4.5 million/month, from 10% in 2019 to 7.5% in 2021.
o Solutions which were built in 4 days, enabling rapid response to the unexpected service demands created by the COVID-19 pandemic, such as an Emergency Care flow control solution. These solutions maximized care capacity and reduced patient wait times, resulting in a better customer experience.
Exercise 8.2
Course Manual 9: Scrum Optimization
9.1 Introduction
As earlier established, Scrum, as an Agile framework, emphasizes efficiency, flexibility, and productivity in project management. One way it achieves this is by optimizing the Scrum team’s performance. In this course manual, we will examine the factors that enable this optimization, namely the team size, an emphasis on collective performance and transparency.
Below, we examine each of these factors in detail.
9.2 Team Effectiveness
9.2.1. Optimal Team Size
The optimal size of a Scrum team is seven or fewer members. This recommendation is rooted in the principles of effective communication and collaboration. Below, we explore why a team size of seven or fewer is considered ideal in Scrum, including the impact of team size on communication channels.
In Scrum, smaller teams are preferred for several reasons, as outlined below:
i. Enhanced Communication: Smaller teams facilitate more manageable and effective communication. As team size increases, the number of communication channels grows rapidly (quadratically, as shown in point iv below), making it harder to maintain clear and concise communication.
ii. Increased Flexibility and Agility: Smaller teams can swiftly adapt and respond to changes. In larger groups, the agility and speed that Scrum aims to achieve can be hindered by the sheer logistics of coordinating many people.
iii. Better Focus and Collaboration: With fewer people, it’s easier to align on goals and work collaboratively. Each team member’s contribution becomes more significant, and there’s a stronger sense of accountability and ownership.
iv. Impact on Communication Channels: The number of potential communication channels in a team can be calculated using the formula n x (n-1)/2, where ‘n’ is the number of team members. For example, a team of 7 has 21 potential communication channels (7 x 6 / 2), while a team of 10 has 45 (10 x 9 / 2). This quadratic growth illustrates how larger teams can face significant challenges in maintaining effective communication, potentially leading to misunderstandings, inefficiencies, and a dilution of communication clarity and purpose.
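The formula is easy to verify programmatically; the short sketch below simply evaluates n x (n-1)/2 for a range of team sizes.

```python
def communication_channels(n: int) -> int:
    """Number of pairwise communication channels in a team of n members."""
    return n * (n - 1) // 2

for size in (3, 5, 7, 10, 15):
    print(f"Team of {size:2d}: {communication_channels(size):3d} channels")

# Output:
# Team of  3:   3 channels
# Team of  5:  10 channels
# Team of  7:  21 channels
# Team of 10:  45 channels
# Team of 15: 105 channels
```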
9.2.2. Emphasis on Collective Performance
In the Agile methodology, particularly in Scrum, the focus is predominantly on the team’s performance as a whole rather than individual accomplishments. This team-centric approach is a fundamental shift from traditional project management methodologies that often emphasize individual contributions. This section delves into how focusing on teams rather than individuals can significantly enhance performance and discusses the concepts of self-organization, transcendent purpose, and the necessity of being cross-functional in a Scrum team.
i. Team Performance Over Individual Achievements
In Scrum, the success of a project is attributed to the team’s collective efforts rather than the prowess of individual members. This focus brings several benefits:
o Encourages Collaboration: By valuing team performance, Scrum encourages collaboration, knowledge sharing, and mutual support, leading to more innovative and comprehensive solutions.
o Reduces Pressure on Individuals: Shifting the focus from individual performance to team output helps reduce undue pressure on single team members, fostering a more supportive and less stressful work environment.
o Leverages Diverse Skills: A team-based approach allows for the amalgamation of diverse skills and perspectives, leading to more well-rounded and effective problem-solving.
ii. Self-Organization and Individual Performance
Self-organization is a fundamental principle in Scrum, where teams are given the autonomy to manage their work and make decisions. This autonomy results in:
o Increased Accountability: Teams that organize their work tend to take greater ownership of the outcomes, leading to improved performance.
o Empowerment of Team Members: Self-organization empowers team members, which can lead to increased motivation and satisfaction.
o Natural Management of Individual Performance: In a self-organizing team, members naturally hold each other accountable, and performance issues are often addressed internally without external management intervention.
iii. The Role of Transcendent Purpose
A transcendent purpose, or a sense of working towards a greater good, which was discussed in Workshop 1 (see Course Manuals 11 and 12), is also crucial in Scrum teams for several reasons:
o Enhances Motivation: A shared, meaningful goal beyond completing tasks can significantly boost the team’s motivation and commitment.
o Aligns Efforts: A transcendent purpose ensures that all team members are aligned in their efforts, working collaboratively towards a common objective.
o Improves Resilience: Teams with a clear, compelling purpose are more resilient in the face of challenges and are better equipped to navigate through obstacles.
iv. Cross-Functional Teams
Cross-functionality is another essential aspect of Scrum teams. When selecting team members, care should be taken to ensure that the various perspectives, knowledge, and skillsets required for delivery are represented in the team. Where this is missing, team members must be willing to acquire the required knowledge or skills. The benefits of cross-functional teams include:
o Comprehensive Skill Set: Cross-functional teams possess a wide range of skills, enabling them to handle various aspects of a project without depending on external resources.
o Faster Problem-Solving: With diverse expertise within the team, problems can be addressed more quickly and efficiently.
o Enhanced Learning Environment: Working in a cross-functional team provides learning and skill development opportunities, as team members share knowledge and learn from each other’s areas of expertise.
9.2.3. Transparency
Transparency is a prerequisite for the success of Scrum initiatives, and it is built into the framework through its artifacts, ceremonies, and other constructs.
Transparency in Scrum involves openly sharing information about the project’s progress, obstacles, and processes. This openness is crucial for several reasons:
o Fosters Trust: Transparency builds trust among team members and with stakeholders. When everyone has access to the same information, it creates a sense of fairness and collective responsibility.
o Enables Accurate Decision-Making: Transparent practices ensure that all decision-makers have the necessary information to make informed decisions.
o Facilitates Adaptation: In Agile and Scrum, adaptability is key. Transparency clearly explains where changes are needed, allowing teams to adapt rapidly and effectively.
o Enhances Collaboration: When team members fully understand what others are working on, collaboration is encouraged and helps align efforts towards common goals.
Below, we discuss two key Scrum artifacts that enhance transparency: the Kanban board and the Burndown chart.
i. Kanban Board
The Kanban board is an essential Scrum artifact for visualizing work and ensuring transparency. A typical Kanban board includes three columns: To Do, Doing, and Done (see Figure 9.1). The board may be a physical or virtual artifact, but it must be easily accessible to team members and stakeholders.
o To Do: This column lists all the tasks that must be completed. It provides a clear view of the workload and priorities.
o Doing: Here, tasks that are currently in progress are displayed. This column is crucial for understanding team capacity and tracking ongoing work progress.
o Done: Completed tasks move to this column. It serves as a clear indicator of progress and accomplishment.
The Kanban board is effective in promoting transparency because:
o It provides a real-time, visual representation of the team’s workflow.
o It helps identify bottlenecks and impediments, as team members can immediately see where tasks pile up.
o It enables stakeholders to quickly gauge project progress without needing to delve into detailed reports.
![](https://www.appletongreene.com/wp-content/uploads/Figure9_1-1-300x200.jpg)
Figure 9.1 – Kanban Board
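To make the board’s mechanics concrete, the following minimal sketch models the three columns as lists and caps concurrent work in the Doing column. Note that explicit WIP limits come from the Kanban method rather than from Scrum itself, and the column contents, task names, and limit value here are illustrative assumptions.

```python
# A minimal, illustrative Kanban board: three columns held as lists.
board = {
    "To Do": ["Map current process", "Draft training plan"],
    "Doing": [],
    "Done": [],
}
WIP_LIMIT = 2  # hypothetical cap on concurrent work in "Doing"

def move(task: str, source: str, target: str) -> None:
    """Move a task between columns, enforcing the WIP limit on Doing."""
    if target == "Doing" and len(board["Doing"]) >= WIP_LIMIT:
        raise RuntimeError("WIP limit reached: finish work before starting more")
    board[source].remove(task)
    board[target].append(task)

move("Map current process", "To Do", "Doing")
move("Map current process", "Doing", "Done")
print(board)
# {'To Do': ['Draft training plan'], 'Doing': [], 'Done': ['Map current process']}
```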
ii. The Burndown Chart
The Burndown chart is another critical artifact that facilitates transparency in Scrum. It is a graphical representation that shows the amount of work left to do versus the time remaining for a Sprint.
The vertical axis typically represents the amount of work (often measured in hours or story points), while the horizontal axis represents time (see Figure 9.2).
As the Sprint progresses, the chart shows the remaining work decreasing, ideally reaching zero by the end of the Sprint.
The Burndown chart enhances transparency by:
o Providing a clear and straightforward visual of whether the Sprint is on track. If the work is not “burning down” at the expected rate, it’s a clear signal that issues may need attention.
o Helping teams adjust their workload and priorities. Additional tasks might be pulled from the backlog if the work is being completed faster than anticipated.
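As a concrete illustration of the data behind such a chart, the sketch below compares the actual remaining story points each day against an ideal straight-line burn. The figures, for a hypothetical 10-day, 40-point sprint, are invented for illustration.

```python
# Illustrative burndown data for a 10-day sprint of 40 story points.
sprint_days = 10
total_points = 40
# Remaining points observed at the end of each day, day 0 through day 10 (invented).
remaining_by_day = [40, 38, 35, 35, 30, 26, 22, 20, 14, 8, 0]

for day, remaining in enumerate(remaining_by_day):
    ideal = total_points * (1 - day / sprint_days)  # ideal straight-line burn
    status = "on track" if remaining <= ideal else "behind"
    print(f"Day {day:2d}: remaining={remaining:2d}, ideal={ideal:4.1f} -> {status}")
```

A mid-sprint “behind” reading, such as day 5 in this invented data, is exactly the early signal the chart is meant to surface, prompting the team to adjust workload or priorities before the sprint ends.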
9.3 Sprint Deliverables
Sprint deliverables are the tangible outputs produced at the end of a Sprint. These deliverables are small, incremental parts of the larger project and are expected to be usable and potentially value-adding changes to the process, ensuring continuous progress towards the final optimized process. Sprint deliverables are based on completing small, manageable portions of work, allowing for more focused and efficient development. Additionally, because Sprints are short, teams can adapt and change direction based on feedback or changing requirements.
Central to this approach is the development of Minimum Viable Process Change(s) (MVPC), which is instrumental in minimizing inventory waste and de-risking project delivery. Below, we explore the concept of Sprint deliverables, the role and importance of the MVPC, and how this approach contributes to reducing waste and de-risking the delivery process through regular feedback.
9.3.1. Minimum Viable Process Change(s) (MVPC)
The MVPC is a critical concept in Agile methodologies, particularly concerning Sprint deliverables. It refers to the most basic process change(s) that can be implemented during the sprint and released at its end to provide value to the customer. Note that in the case of process design (see Course Manual 3), Sprint deliverables can be considered the Minimum Viable Process (MVP). However, as most process optimization initiatives focus on existing processes, we will refer to the MVPC for the remainder of this course manual.
The MVPC approach offers the following benefits:
o Minimizing Inventory Waste: The MVPC approach minimizes inventory waste (one of the seven wastes in Lean methodology) by avoiding implementing process changes that customers do not want or need.
o Efficient Use of Resources: By focusing on the minimum necessary process changes, resources are efficiently utilized, focusing only on what is essential to meet customer needs and receive feedback.
o Early and Continuous Delivery: The MVPC allows for the early and continuous delivery of value to the customer, ensuring that implementation efforts are aligned with customer requirements from the outset.
![](https://www.appletongreene.com/wp-content/uploads/Figure9_3-1-300x169.jpg)
Figure 9.3 – Minimum Viable Process Change(s) Loop
9.3.2. De-risking Delivery
One of the fundamental advantages of focusing on Sprint deliverables and the MVPC is de-risking the project delivery process, which is enabled by:
o Early and Regular Feedback: Regular reviews at the end of each Sprint, along with the early release of the MVPC, provide opportunities for early and continuous feedback from stakeholders and customers.
o Course Correction and Adaptability: This feedback enables the team to make necessary adjustments quickly, reducing the risk of prolonged change implementation in the wrong direction.
o Validation of Customer Needs: Regularly delivering working value-adding changes to the process ensures that the team consistently meets and validates customer needs and expectations.
Exercise 9.1
9.4 Alignment with OODA Loop and PDCA Cycle
The OODA (Observe, Orient, Decide, Act) loop is a decision-making process originally developed by military strategist John Boyd. It is a dynamic, iterative approach emphasising speed, agility, and adaptability. The Scrum framework shares several core principles with the OODA loop, making the integration of the two beneficial for enhancing the agility and responsiveness of Scrum teams.
Below, we compare the OODA loop with the Scrum framework, highlighting similarities:
o Observe: The first step in the OODA loop involves gathering information about the environment. In Scrum, this is akin to the Review and Retrospective phases, where teams assess their progress and the challenges faced during the Sprint. For example, the Process Owner might bring to the Scrum team’s attention certain changes in market conditions or regulations that impact the process.
o Orient: This step involves understanding the information within the context of the environment and one’s own goals. In Scrum, this is reflected in the Sprint Planning sessions where teams prioritize tasks and align them with the project objectives. Continuing the example above, the Process Owner and team might conclude that certain changes in the process backlog are more critical than previously thought, giving the team an opportunity to re-prioritize those changes.
o Decide: Here, a decision is made based on the current understanding. In Scrum, this decision-making happens throughout the Sprint, as teams self-manage and adapt their approach to work.
o Act: Finally, the decision is implemented. In Scrum, this is the execution of the Sprint itself, where the team works on the tasks and delivers incremental value, factoring in the learning.
The PDCA (Plan-Do-Check-Act) cycle, which was introduced in Section 6.2, also aligns closely with both OODA and Scrum, as described below:
o Plan: Similar to the Orient phase of OODA, planning in PDCA and Sprint Planning in Scrum involves setting objectives and deciding the course of action.
o Do: This stage, akin to the Act phase in OODA, involves implementing the plan. In Scrum, this is the execution of the Sprint.
o Check: Corresponding to the Observe phase in OODA, this stage in PDCA involves monitoring and evaluating the results. In Scrum, this is mirrored in the Review and Retrospective phases.
o Act: This is about taking action based on the evaluations made in the Check phase, similar to the Decide phase in OODA. In Scrum, this informs the next Sprint’s planning and task prioritization.
Integrating OODA with Scrum and PDCA offers numerous benefits:
o Enhanced Agility: OODA’s rapid and iterative nature complements Scrum’s focus on flexibility and adaptability, allowing teams to respond quickly to changes.
o Improved Decision-Making: The OODA loop’s emphasis on continuous observation and orientation helps teams in Scrum make more informed decisions.
o Better Risk Management: The iterative approach of observing and orienting allows teams to identify and mitigate risks more effectively.
o Continuous Improvement: By linking OODA and PDCA within the Scrum framework, teams establish a culture of continuous improvement, regularly assessing and refining their processes.
9.5 Waste Reduction in Scrum
Lean management concepts (discussed in Course Manual 6) can be readily integrated with Scrum to identify and eliminate various forms of waste, thereby enhancing productivity and value delivery.
Waste can erode a team’s sense of purpose and motivation. When team members engage in tasks that do not contribute to the overall goal, or are overburdened, it can lead to a lack of engagement and a feeling that their work is not meaningful, which is detrimental to the collaborative and driven spirit that Scrum aims to foster.
9.5.1. Identifying Waste in Scrum
Three forms of waste are particularly relevant to Scrum. These are discussed below:
i. Mura (Inconsistency): In Scrum, Mura can manifest as uneven workloads or inconsistent practices across sprints. This inconsistency can lead to periods of high stress followed by underutilization, disrupting the rhythm and flow of the team.
ii. Muda (Waste): Muda refers to activities that consume resources but do not add value. The eight waste types discussed in Section 6.1.1 are all manifestations of Muda. In Scrum, this could include unnecessary documentation, over-engineering, or prolonged meetings that don’t contribute to the sprint goal.
iii. Muri (Overburden): Muri in Scrum is often seen as overloading team members, setting unrealistic sprint goals, or expecting them to work on multiple tasks simultaneously, which reduces efficiency and can lead to burnout and decreased morale.
During the Sprint Retrospective, the team should actively seek to identify manifestations of these waste forms in their process and collaboratively find solutions. This establishes a culture of continuous assessment, adaptation, and a shared commitment to excellence and efficiency.
9.5.2. Strategies for Reducing Waste in Scrum
The following recommendations will facilitate the reduction of waste in the Scrum process:
o Avoiding Context Switching: Encourage team members to focus on one task at a time. Context switching, or multitasking, can lead to Muda as it often reduces efficiency and increases the chance of errors.
o Capturing and Fixing Defects Close to Source: Immediate attention to defects prevents the amplification of errors and reduces the waste of reworking. This approach aligns with the Agile principle of maintaining a sustainable pace and delivering high-quality work.
o Reducing Emotional Waste of Unreasonable Expectations: Set realistic goals and avoid practices that do not add value, such as unnecessary meetings or excessive reporting. It is important to align expectations with the team’s capacity and the sprint’s objectives.
o Measuring Output and Outcomes Rather Than Hours Worked: Focus on the results rather than the time spent. This shift in perspective encourages efficiency and aligns team efforts with the end goals, reducing Muda and Muri.
o Clear Definition of Sprint Goals: Ensure that each sprint has a well-defined goal that all team members understand, reducing the risk of irrelevant work (Muda).
o Sustainable Pace: Avoid overloading sprints with more work than can reasonably be completed, which addresses Muri.
o Empowering Teams: Allow teams to self-manage and make decisions about their work, which can reduce unnecessary oversight and increase efficiency.
o Continuous Improvement Culture: Foster a culture where the identification and elimination of waste is continuous, and team members are encouraged to provide feedback and suggestions.
9.6 Eliminating Complacency
Over time, Scrum teams risk getting stuck in a “happy bubble”, a state of complacency where teams, satisfied with their current performance, become resistant to change and improvement. In this section, we discuss how teams can avoid the trap of complacency.
9.6.1. Busting the Happy Bubble
The Sprint Retrospective is a critical Scrum ceremony for fostering continuous improvement and preventing team complacency. This ceremony, held at the end of each Sprint, is not just a routine meeting but a powerful tool that drives teams towards higher efficiency and productivity.
Below, we explore how the Sprint Retrospective helps break the ‘happy bubble’ of complacency, encourages continuous improvement, and empowers teams to complete more stories in successive Sprints.
The Sprint Retrospective plays a crucial role in addressing complacency by:
o Encouraging Open Feedback: By creating a safe space for team members to openly express their views and concerns, retrospectives prevent issues from being overlooked or ignored.
o Reflecting on Challenges: Teams are encouraged to discuss their successes, failures, and challenges, fostering a culture of transparency and continuous learning.
o Identifying Areas for Improvement: Regular retrospectives help identify even the smallest areas for improvement, ensuring that the team does not settle into complacency but is always looking for ways to enhance their performance. Figure 9.4 shows a Starfish Sprint Retrospective template to assist the team in identifying things to start, stop, keep and do more or less of.
o Continuous Feedback Loop: A regular feedback loop keeps teams alert to their performance and open to changes.
o Adaptation to Change: It fosters an environment where adaptation and embracing change become the norm rather than the exception.
o Prevention of Stagnation: By continuously challenging the status quo, retrospectives ensure that teams do not become stagnant but are constantly evolving and improving.
The Sprint Retrospective is also a key driver of continuous improvement in the Scrum process, fostering the following:
o Iterative Learning: Each retrospective allows teams to reflect on their processes, practices, and interactions, learning from each Sprint to make the next one better.
o Actionable Insights: By focusing on actionable insights, retrospectives ensure that the lessons learned are translated into concrete actions for future Sprints.
o Efficiency Enhancements: By regularly examining their work methods, teams can identify inefficiencies and bottlenecks, leading to more streamlined workflows. In the following section, this topic will be explored in additional detail.
o Better Planning and Estimation: Retrospectives help teams refine their planning and estimation skills, leading to a more realistic and achievable commitment of user stories in future Sprints.
o Team Dynamics and Morale: Discussing successes and failures at the Retrospective can improve team dynamics, morale, and collaboration, which are critical for increasing productivity.
9.7 Increasing Sprint Velocity
Sprint velocity is a metric used in the Scrum framework to quantify the work a team can complete in a single sprint. It is calculated by summing the points for all items completed in the sprint; these points typically represent a combination of the effort, complexity, and work required for each task or user story. Velocity is used to predict how much work a team can complete in future sprints, supporting planning and efficiency improvements.
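To make the calculation concrete, below is a minimal sketch, using invented sprint figures, of how a team might compute a rolling-average velocity and use it purely for forecasting (never as a performance measure, as cautioned later in this section).

```python
import math

# Story points completed in the last five sprints (invented figures).
completed_points = [21, 18, 24, 22, 20]

# A rolling average over recent sprints smooths out one-off highs and lows.
velocity = sum(completed_points[-3:]) / 3
print(f"Average velocity (last 3 sprints): {velocity:.1f} points")

# Forecast: how many sprints to clear a 130-point process backlog?
backlog_points = 130
print(f"Estimated sprints remaining: {math.ceil(backlog_points / velocity)}")
```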
Increasing sprint velocity involves a mix of better estimation, process improvement, and team dynamics enhancement.
Below are some recommendations for achieving this objective:
o Refining Estimation Techniques: Teams should regularly review and refine their estimation techniques to ensure they are as accurate as possible. Techniques like Planning Poker can facilitate consensus-based, reliable estimations (see the sketch after this list).
o Limiting Work in Progress (WIP): Implementing WIP limits prevents teams from spreading themselves too thin and helps maintain focus, thereby increasing efficiency and velocity.
o Effective Sprint Planning: Ensuring that thorough and realistic sprint planning sets the stage for a successful sprint. Matching the sprint’s workload with the team’s capacity is essential.
o Regular Retrospectives: These meetings allow teams to reflect on what worked well and what didn’t. Implementing lessons learned from retrospectives can lead to continuous improvement in process and productivity.
o Improving Team Skills and Cross-functionality: Investing in the skills development of team members and encouraging cross-functionality can lead to a more versatile and efficient team, which can positively impact velocity.
o Minimizing External Interruptions: Shielding the team from unnecessary interruptions allows them to focus more on sprint tasks, which can improve velocity.
o Using Velocity for Forecasting, Not Performance Measurement: Velocity should be used as a planning tool, not a performance indicator. Using it to measure team performance can lead to inflated estimates and reduced accuracy.
o Focus on Quality and Sustainable Pace: Prioritizing quality and maintaining a sustainable working pace ensures that the team doesn’t face burnout and that the velocity achieved is sustainable over the long term.
o Effective Communication and Collaboration: Enhancing communication and collaboration within the team and with stakeholders can streamline the work process, improving velocity.
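Picking up the Planning Poker point above, here is a minimal sketch, assuming invented names, story-point estimates, and an arbitrary divergence threshold, of how a team might check whether a round of estimates has converged or whether the outliers should explain their reasoning before a re-vote.

```python
# One illustrative Planning Poker round: story-point estimates per member.
estimates = {"Ada": 5, "Ben": 8, "Chi": 5, "Dee": 13}

low, high = min(estimates.values()), max(estimates.values())
if high > 2 * low:  # arbitrary divergence threshold, for illustration only
    outliers = [name for name, e in estimates.items() if e in (low, high)]
    print(f"Estimates diverge ({low} vs {high}): "
          f"ask {', '.join(outliers)} to explain their reasoning, then re-vote.")
else:
    consensus = sorted(estimates.values())[len(estimates) // 2]
    print(f"Estimates have converged near {consensus} points.")
```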
Determining and improving sprint velocity is an iterative process that requires a balance of accurate estimation, efficient processes, team skill development, and a focus on quality. It is essential to bear in mind that the goal is not to chase higher velocity numbers but to achieve a sustainable and realistic pace that ensures high-quality output and team well-being. By regularly evaluating and adapting their approach, teams can use sprint velocity effectively for planning and continuous improvement.
Case Study: Software Requirement-to-Release Process, Parmenion, United Kingdom
Problem
Parmenion provides a proprietary software platform that enables financial advisory firms to develop bespoke investment offerings. Users highly regard the platform, which has won numerous industry accolades. Parmenion manages over £9 billion in assets for about 80,000 retail clients of UK Independent Financial Advisers (IFAs). Their typical client is in their early 60s and invests about £250,000.
Parmenion continuously improves its investment platform, processes and offerings to maintain its competitive advantage. However, even though they delivered releases regularly, there were occasionally long lead times due to imprecise requirements and incomplete acceptance criteria.
How Agile Helped
The CEO and leadership team sponsored a Scrum initiative to address these problems, which commenced with training for about a third of the organization’s employees. This was combined with group sessions that sought to involve everyone in the company in the change initiative. For example, an open invitation was extended to all employees to attend Sprint Reviews. These reviews became valuable occasions to observe the teams’ progress and to gather input on future tasks from a broad spectrum of stakeholders, ranging from support teams to executives. Approximately one-third of the workforce participated in the Sprint Reviews, freely exchanging thoughts and asking questions. People felt comfortable voicing their opinions, which enhanced trust within the company.
Two Scrum Masters led five internal Scrum teams and one external Scrum team from a development partner. The teams were cross-functional (except for one team dedicated to data analytics) and successfully completed all tasks required for delivery, including software development, testing, release engineering, and business analysis.
The teams were organized around a single process and value flow: every task followed a single workflow with clearly defined entry and exit points. Larger tasks were tracked on a Kanban board at the portfolio level, and the four Kanban metrics (Work in Progress (WIP) count, Cycle Time, Throughput, and Work Item Age) were tracked at each level of the portfolio.
Results
From the commencement of the initiative, several benefits were evident, such as:
o Greater participation from all employees in the business’s product delivery, with employees openly vocalizing their enthusiasm for attending the Sprint Review.
o Greater transparency around the status of project activities.
o A decrease in the quantity of work in progress.
o A quicker turnaround time for work.
o Increased employee satisfaction.
o Decreased delay costs, as decisions around the streamlined single-stream process were made in days rather than weeks.
The metrics demonstrated that:
o The number of shipped features climbed from single digits to roughly 30–40 every month, suggesting a shift from massive, ongoing projects to a more iterative way of working.
o More than 50% of backlog items now satisfy the agreed-upon Definition of Ready, up from 18%.
o The average time to resolve blockers was reduced from c. 19 days to c. 4 days, and most internal blockers were resolved within a day or two.
o Over 18 months, delivery team engagement metrics increased by 24%.
Exercise 9.2
Project Studies
Project Study (Part 1) – Customer Service
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 2) – Finance
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 3) – Human Resources
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 4) – Marketing
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 5) – Information Technology
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 6) – Operations
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 7) – Management
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 8) – Legal
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 9) – Compliance
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Project Study (Part 10) – Strategy
The Head of this department is to provide a detailed report which demonstrates that the learnings from Process-Oriented Thinking have been implemented and are fully operational within their department. The report should describe how it was implemented, which resources were utilized, challenges that were encountered and how they were resolved, as well as implementation results.
01. What is Process-Oriented Thinking?
02. Technology Enablers
03. Process Design
04. Process Discovery
05. Process Adjustment
06. Lean Management
07. Change Process
08. Scrum Practices
09. Scrum Optimization
Please include the results of the initial evaluation and assessment.
Program Benefits
Management
- Data-driven insights
- Operational transparency
- Performance optimization
- Cost reduction
- Process automation
- Risk mitigation
- Compliance assurance
- Resource allocation
- Decision support
- Continuous improvement
Finance
- Fraud detection
- Cash flow optimization
- Cost containment
- Financial compliance
- Revenue forecasting
- Working capital
- Expense analysis
- Invoice processing
- Financial visibility
- Budget control
Operations
- Process efficiency
- Bottleneck identification
- Resource optimization
- Lead time reduction
- Workforce productivity
- Quality improvement
- Inventory management
- Supplier collaboration
- Workflow automation
- Capacity planning
Client Telephone Conference (CTC)
If you have any questions or if you would like to arrange a Client Telephone Conference (CTC) to discuss this particular Unique Consulting Service Proposition (UCSP) in more detail, please CLICK HERE.