Agile vs Lean Six Sigma
November 2nd 2011
Are these two related in any way? Is Lean Six Sigma some ancient process improvement method that has nothing to do with software development? Now that I have covered Kanban vs. Scrum, I thought it would be time to compare Agile to Lean Six Sigma.
Five domains of system categorization
Before we dig into Lean Six Sigma, I should clarify some terms that will be used. The article "A Leader's Framework for Decision Making" lays out the basics of the Cynefin framework, which is available at Cognitive Edge. This sense-making framework sorts any system into one of five categories: simple, complicated, complex, chaotic, and disorder. In essence, it says that work in different domains should be managed differently. The fastest way to get a grasp of the ideas is to watch this video:<iframe width="560" height="315" src="http://www.youtube.com/embed/N7oz366X0-8" frameborder="0" allowfullscreen=""></iframe>
What is Lean Six Sigma?
As most of you already know what Agile software development is all about, I won't go into that. Lean Six Sigma, however, is something most people have no idea about. Some see Lean Six Sigma as a desperate attempt to bolt the old-fashioned, scientific Six Sigma method onto Lean, now that Lean is the hot topic. Lean was derived from the Toyota Production System and Six Sigma has nothing to do with that, right? Not so fast. The Six Sigma method was originally developed at Motorola in the 1980s as a systematic way of improving processes. Later, Lean concepts were added and Lean Six Sigma was born. You can read the full history e.g. here. Jeff Liker comments on Lean vs. Six Sigma like this:
If a company that does not have a history of process improvement has started with lean and used lean tools to work on flow issues, setting up cells and supermarkets, will this be enough? The answer is a resounding no. All they have done is set the stage for identifying problems. They need a problem solving method. Some companies are using six sigma in this way and that is fine. The serious ones learn that the green belt training given to the work groups will solve the large majority of problems. But then you still need to evolve over time into other aspects of the system–team leader role, andon, standardized work, visual management, hoshin kanri, etc. I say evolve because it is a long-term process in which you try something, make mistakes, reflect, adjust, and then as you get good at that you add more to begin to build a system. The underlying framework was also taught by Deming and it is PDCA.
Lean Six Sigma defines an improvement process, tools, and roles that should be present in a Lean Six Sigma project. I think the belt system of roles (black belt, green belt, etc.) is somewhat comical and can actually hinder Six Sigma adoption in organizations. However, the theory is that you can take any process (a sales process, production of goods, etc.), check how many defects it is currently producing, and initiate an improvement project to decrease the defects, thus increasing your company's value. Lean Six Sigma defines five phases that an improvement project should go through:
- Define: What is the problem, how should it be improved, and how will the improvement be seen?
- Measure: Measure the process using historical and current data.
- Analyze: Analyze the gathered data to find insights.
- Improve: Improve the process by finding its key factors.
- Control: Ensure that the modifications stay in use and that deviations from the target are corrected before they result in defects.
These phases form the acronym DMAIC, which simply states the order of process improvement activities. In addition to the phases, Lean Six Sigma defines a whole host of tools to be used in every phase. The tools are largely statistical, such as ANOVA and regression analysis. This is basically what you can read on Wikipedia, but it does not answer the question: "What is a Lean Six Sigma project like in practice?"
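As a small illustration of the Measure phase: Six Sigma practitioners convert a process's defect rate into the "sigma level" the method is named after. Here is a minimal sketch in Python; the defect counts are invented, and the 1.5-sigma shift is the conventional long-term adjustment used in sigma-level tables:

```python
# Convert a process's defect rate into a Six Sigma "sigma level".
# Defect counts below are invented for illustration.
from statistics import NormalDist

def sigma_level(defects, units, opportunities_per_unit=1):
    """DPMO-based sigma level with the conventional 1.5-sigma shift."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    process_yield = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(process_yield) + 1.5

# A 30% defect rate (300 bad units out of 1000) sits at roughly 2 sigma;
# "six sigma" corresponds to only 3.4 defects per million opportunities.
print(round(sigma_level(300, 1000), 2))  # about 2.02
```

The point of the number is comparability: any process, from table surfaces to sales orders, can be put on the same defects-per-million scale.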
A manufacturing Lean Six Sigma sample
Your company manufactures tables. You have a problem: 30% of the table surfaces come out curved and have to be scrapped. This costs you 300,000€ annually. Your employees all have ideas on how to improve the process, and these "improvements" have been tried out; the long-term scrap average nevertheless stays at 30%. You start digging into the problem and find that over 200 variables have some influence on the end product. You wonder whether there are a few vital variables and whether, if optimal levels for these could be found, scrap would drop dramatically. As it turns out, there usually are. Your challenge is to find them.

You start reducing the number of variables by guessing, interviewing experts, and using tools such as 5 whys and FMEA. You gather data from current production and use historical data to look for patterns. Historical data is usually close to useless here, as causal relationships are almost impossible to deduce from it. Still, by continuing this guesswork, you can reduce the number of variables to about 10. You are quite confident that some combination of these variables will produce the result you are after. How do you test all combinations of the variables? One at a time? No, you design an experiment. A factorial experiment. You make a mathematically clever test plan which allows you to change multiple variables at a time and still separate the effect of each variable using statistical software such as Minitab. You run only about 40 tests yet gather information about a very large number of combinations. The tests reveal that variable A should be set to 10.5 and variable B to 3232. You modify the production line and check the results. The defect rate drops to 5%. Great! How long did this take? Weeks to months, depending on your business.
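The effect-estimation step can be sketched with a toy two-level full factorial. Everything here is invented: the response function stands in for the real production line, and real work would use fractional designs and software like Minitab rather than eight hand-rolled runs:

```python
# Toy 2-level full factorial for 3 process variables (A, B, C).
# The response function is made up purely for illustration.
from itertools import product

def run_process(a, b, c):
    # Hypothetical scrap-% response: A and B matter, C does not.
    return 30 - 8 * a - 5 * b + 2 * a * b

# Coded levels: -1 = low, +1 = high. 2^3 = 8 runs cover all combinations.
design = list(product([-1, 1], repeat=3))
results = [run_process(*run) for run in design]

def main_effect(factor_index):
    # Main effect = mean response at the high level minus mean at the low level.
    high = [r for run, r in zip(design, results) if run[factor_index] == 1]
    low = [r for run, r in zip(design, results) if run[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

effects = {name: main_effect(i) for i, name in enumerate("ABC")}
print(effects)  # A and B show large effects, C shows none
```

Because every factor is varied in every run, each run contributes information about all three effects at once; that is what lets 40-odd runs stand in for thousands of one-at-a-time trials.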
Err... I am not manufacturing tables.
Right. Now it gets tricky. Six Sigma sees people producing software as a process, and a process can be improved. Agile people respond that they are producing a once-in-a-lifetime solution in a complex domain, which requires innovation and creativity, so you cannot use that statistical stuff here. I think both camps are partially right. Why? Please read on. Lean concepts fit better with Agile: drive down the waste and you are improving. But a Six Sigma approach to improving e.g. a Scrum team's performance would go something like this:
- Come up with potential process variables that could reduce the number of defects, e.g. unit test coverage, ATDD test coverage, and use of pair programming.
- Design an experiment where different features get done using different combinations of values for the selected variables. A small sample of such a test plan: unit test coverage 100%, ATDD test coverage 0%, no pair programming; unit test coverage 0%, ATDD test coverage 100%, pair programming; ...
- Execute the experiment and find out that defects could be reduced by 80% through pair programming and keeping unit test coverage at 100%, while ATDD test coverage has little effect on the end result. (Note that I do not know whether this is the case; it is just an example!)
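The "different combinations" step above can be sketched as a half-fraction design, where the third factor's column is generated as the product of the first two (C = A*B), halving the number of runs. The factor names follow the example; the construction itself is standard Design of Experiments, nothing Scrum-specific:

```python
# Half-fraction 2^(3-1) test plan: 4 runs instead of 8, with the third
# factor's levels aliased with the interaction of the first two (C = A*B).
from itertools import product

factors = ["unit test coverage", "ATDD coverage", "pair programming"]
plan = [(a, b, a * b) for a, b in product([-1, 1], repeat=2)]

for run in plan:
    levels = {f: ("high" if lvl == 1 else "low") for f, lvl in zip(factors, run)}
    print(levels)
```

The price of the shortcut is confounding: with this plan you cannot tell the effect of pair programming apart from the interaction of the two coverage variables, which is exactly the kind of trade-off the "mathematically clever test plan" encodes.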
The problem is that I can already sense how a Scrum team would react to this suggestion. Are you [insert your favorite word] kidding? "Trust me, I am a Lean Six Sigma Black Belt" will not help get the experiment started either. Lean Six Sigma works with a Taylorist mindset and tries to find a causal relationship between an action and an output. That is a neat way of working when causal relationships can be detected, i.e. in the complicated domain, but it falls short when working with complexity. There is nothing wrong with experimentation per se, but if the approach is completely outside the group's norms, there is a slim chance of getting the experiment done. The statistics in Six Sigma are quite hard and take time to learn, and if the team cannot fully understand the statistics behind the method, the experimentation approach might never be accepted.
Not everyone agrees that software development is complex through and through. I ran into an article about this. Joseph Pelrine says:
Unfortunately, the typical Agilist perception of complexity is not quite aligned with any of the main scientific definitions of the term. Agile literature abounds with romanticised, subjective interpretations of terms such as complexity, self-organisation, emergence, which can only be understood by remembering that ‘a little knowledge is often a dangerous thing’.
He continues by analysing software projects and states that 38.5% of the work is in the complex domain. Many people say that "software projects are complex", but the Cognitive Edge blog claims:
The activities tend to be weighted more to the complicated and complex domains, with activities related to the coding aspect of software development landing in the complicated (or sometimes simple) domain, and activities associated with project management landing in the complex (sometimes chaotic) domain.
Software development can benefit by being treated as a complex domain (and not an ordered one), and taking advantage of the toolbox of social complexity, namely the Cynefin method. The field (as well as many other fields of human endeavour) would benefit even more from a multi-ontological approach, understanding that there are multiple domains involved, taking the best techniques for the various domains, and combining them in an appropriate and flexible manner. This multi-ontological approach would also go a long way towards resolving the infighting and bickering now taking place between the "Agile" and "Lean" communities.
Of course he is promoting the Cynefin method, but I am fine with that. I am also aware of the "sick stigma" pet name used by Dave Snowden, which is actually pretty funny. Anyway, I see no point in using pure Six Sigma when dealing with the complex domain. For some complicated problems, however, the method contains useful stuff.
Understanding variation, measurement and multivariate experiments
For me, these three were the most important things I learned in Lean Six Sigma Black Belt training. Firstly, knowing whether some variation is random (inherently present in all processes) or has a special cause is the starting point for improving any process. If decisions are made based on random variation, the improvement effort is headed for a dead end. Secondly, all measurement contains error, and if you do not know how much error your measurement system introduces, you are once again making wrong decisions. Take a Scrum team's velocity as an example: if velocity is increasing sprint by sprint, it does not necessarily mean the team is actually improving. The "improvement" could be caused by measurement error when the team changes how it estimates work. Thirdly, even though multivariate experiments do not fit neatly into the software development process, they can be used to gain a better understanding of the service or product the team is building. Google Website Optimizer has multivariate testing tools available, and the Lean Startup movement advocates the use of experiments.

"The scientific method", as Lean Six Sigma calls itself, has something that is usable in software products too. Our Lean Six Sigma course instructor said that Lean Six Sigma could be seen as Design of Experiments on steroids, which I completely agree with. During this week I visited the Division of Pharmaceutical Technology to get more insight into how Design of Experiments is used in pharmaceutical research. The problem in Finland seems to be that only a few organizations actually teach Design of Experiments. All in all, I try to look beyond the flame wars and learn as much as I possibly can from different disciplines. It's like in MMA:
Since the late 1990s both strikers and grapplers have been successful at MMA though it is rare to see any fighter who is not schooled in both striking and grappling arts reach the highest levels of competition.
Not the best metaphor, but you get the point. I have heard multiple times that creativity needs variation; without it, creativity is diminished. I understand what people are saying and agree with it, but I do not think it is quite that simple. It is not sufficient for an organization to just add variation to the development process and collect the innovative fruits. The organization must learn to see the patterns emerging in the complex domain and have the skills to productize them. Unfortunately, in many software projects the only variation the customer sees comes in the form of bugs and low quality, and that has nothing to do with new innovations.
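The earlier point about separating random variation from special causes (and about velocity "improvements" that may be measurement artifacts) can be made concrete with an individuals (XmR) control chart, the standard tool for exactly this question. The velocity numbers below are invented; 2.66 is the standard XmR chart constant for deriving limits from the average moving range:

```python
# Sketch: XmR (individuals) control chart limits for a Scrum team's
# velocity history. The data points are invented for illustration.
velocities = [20, 22, 19, 21, 23, 20, 22, 35]  # the last sprint looks odd

mean = sum(velocities) / len(velocities)
moving_ranges = [abs(b - a) for a, b in zip(velocities, velocities[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR constant: limits = mean +/- 2.66 * average moving range.
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

# Points outside the limits signal a special cause worth investigating;
# points inside are routine variation and should not trigger "fixes".
special_causes = [v for v in velocities if v > ucl or v < lcl]
print(f"limits: [{lcl:.1f}, {ucl:.1f}], special causes: {special_causes}")
```

A point outside the limits (here, the sprint of 35) deserves investigation; it might be real improvement, or just a change in how stories were estimated. Reacting to points inside the limits is reacting to noise.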
Lean Six Sigma criticism
Even though the method has some interesting tools, its approach to management feels somewhat outdated. The improvement process is driven by black belts who (at least in theory) solve the organization's problems one by one, thus increasing the organization's value. Of course, the stupid workers are only in the way of the genius black belts. The problem is that after an improvement has been "installed" and the black belts leave, workers can return to their old ways of working, because they never owned the new solution in the first place. Especially in knowledge work, a team must take ownership of its problems and of the appropriate solutions. I feel that the whole belt system does not work with Scrum, because there should not be specialized titles in the first place. The team should be cross-functional: it should have all the skills needed to turn a backlog into a working solution without relying on outsiders. Experiments should be owned by the whole team, not by one person. Of course, then you run into the problem that in Finland you have a very small chance of finding anyone who can actually design experiments, should you need one. One could also argue that Six Sigma did not invent Design of Experiments, and that is the most valuable part of the whole method. If its management style is not suitable for the complex domain, what is left?
Are Agile and Lean Six Sigma friends or foes? Are they compatible? They deal with different problems: Agile is suited to complex work, while Lean Six Sigma can be usable in the simple and complicated domains. An experimenting, probing mindset is useful in both. What I intend to do is bring an understanding of variation and design of experiments to Agile software teams as tools.
A Podcast on complexity by Dave Snowden