The Lean Product Development System Model

Posted on Sunday, June 13, 2010 by Alexandre Poitras

The Lean Product Development System Model used at Toyota is an interesting one. It is divided into three primary subsystems: process, people, and tools and technology. These three subsystems are further broken down into 13 principles (here are the first 5):


The Process Subsystem:

Principle 1: Establish Customer-Defined Value to Separate Value-Added Activity from Waste

Basically, defining what the customer values so we can eliminate waste. There are two kinds of waste:
  1. Waste created by poor engineering that results in low levels of product or process performance (think about crappy code or bad architecture).
  2. Waste in the product development process itself (writing useless documentation, wasting time on communication, ...).

Principle 2: Front-Load the Product Development Process While There Is Maximum Design Space to Explore Alternative Solutions Thoroughly

Exploring alternative technologies is much cheaper at the beginning of a project than at the end. Think about experimenting with different frameworks: would you do that at the beginning of the project or at the end? Most projects I've seen didn't even experiment with different options; they just picked one and hoped for the best.

Principle 3: Create a Leveled Product Development Process Flow

Flows are a great tool for exposing the waste in a process.

Principle 4: Utilize Rigorous Standardization to Reduce Variation and Create Flexibility and Predictable Outcomes

There are three kinds of standardization:
  1. Design standardization (architecture, modularity and reusable or shared components).
  2. Process standardization
  3. Engineering skill set standardization (think about emphasizing continuous training in your company).

The People Subsystem:

Principle 5: Develop a Chief Engineer System to Integrate Development from Start to Finish

A chief engineer is responsible for an entire project and can tell you its exact status. The chief engineer is not just a project manager but a leader and a technical systems integrator; it is to this individual that difficult decisions are brought for resolution.

What is software development?

Posted on Wednesday, May 19, 2010 by Alexandre Poitras

I often hear people complaining about lean flows. They say lean flows can't be applied to software development because they were invented for production activities. But is that right?

Before answering this question, we first have to look at software development and understand what kind of activity it truly is. People in the agile and software craftsmanship communities argue that it is a unique activity that can't be compared to any other applied science out there. There is a kind of "we are different" thinking in the software industry. I strongly disagree with that. I believe software development has a lot in common with engineering in general, with some specifics of course.

So what are these specifics? If you look at a classical engineering project, it is normally separated into two very distinct phases: the design phase and the construction phase. For example, let's say we want to construct a building. First, the engineering team designs the building plans to determine exactly how the building is going to be built: its shape, the heating and ventilation system, the materials to be used, and so on. This process is creative and iterative (oh, doesn't it remind you of something?). Once this is done, all these details are passed to the construction team, who actually build the edifice. The construction is done by construction workers in a repetitive and predictable way. In this kind of project, the design phase is usually the riskier one: it's hard to predict how much time it will take. The construction phase, on the other hand, is much more predictable, but it is far more expensive and takes longer, so most of the costs come from it.

Now back to software development. In software development, our main goal is to create virtual artifacts called programs. Programs are designed (written) by humans, but they are constructed from a set of instructions by a compiler. No human intervention is needed, and this process is usually short and free. So basically what we have is a big design phase to write the program (yes, programming is designing) and a practically free construction phase, since we are dealing with a virtual world and the compiler does the work for us.

So what does this mean? It means that, in the end, software development is just one big product development effort, and that we can apply engineering methods to it. I know, I know: in the software world a lot of people hate engineering methods, but I believe that mostly comes from the fact that most managers have always treated writing code as a construction phase rather than a design effort, and therefore applied the wrong process. We have to realize that writing code is not trivial; it is as hard as designing a car or a building, and we should treat it as a creative design activity, not as a stupid repetitive task that any code monkey can accomplish. If we apply design engineering principles, we might get better results. We can't keep up this hippie hacking culture. We need more rigor, more discipline, more risk assessment, and more importantly we need to stop failing so many projects :-) We are much better than that.

In my next post, I'll show how to apply lean flow to product development.

On Scrum "cross-functional" teams, lean flows, ...

Posted on Saturday, May 15, 2010 by Alexandre Poitras

One of the tenets of Scrum is that a project team should be cross-functional. I find that most Agile proponents use that term without really thinking about what it means. Most of them assume that a cross-functional team is a team on which every member knows everything about everything. But if you think about it, a team made of several specialists can be just as cross-functional as a team made of generalists. I think the important aspect here is to have a self-reliant team (a work cell in lean language).

I don't understand all the hate against specialization. In fact, you can make a good argument that specialization is one of the major contributors to modern society. Without it, the rapid technical progress of the last few centuries would have been impossible. Of course, overspecialization is a common plague in modern society, but jumping to the other extreme and throwing the baby out with the bathwater is not a better solution. On simple projects, I guess it's possible to have a team with no specialties, but on larger projects requiring deep technical skills in several fields it's just not possible. For instance, I used to work on software that had to interact with electronic devices, so we had no choice but to have electrical engineers collaborating on our team. Same thing in the gaming industry: on any project you have sound specialists, game designers, graphic designers, and so on. It's obvious you need specialists in these cases, but even on "regular" projects you always need some: think about the usability of the software, QA, deployment, ... Even among developers, specialization is necessary on larger projects, since there is no way one developer can have a clear vision of all the different modules.

So we have to accept specialization and all the problems that come with it, mainly synchronizing all the different tasks that need to be done to implement a feature. Unfortunately, Scrum just doesn't give a lot of help in that matter.


Enter the world of lean flows:

The lean movement comes from the manufacturing industry. It is a departure from the traditional mass-production paradigm that dominated the industry for the last century. Mass production consists of creating large amounts of parts without evaluating the actual needs of the enterprise and moving them along the production chain in batches. This method of production leads to many kinds of waste. In the software industry, it can be compared to the old waterfall method, in which large batches of documentation or code are produced before being moved to the next step in the development chain. Analysts produced a massive amount of requirements before handing them to the developers, who in turn turned this documentation into code, which was only tested once the application was completely written. As with mass production, this led to all kinds of problems.

In the manufacturing industry, the lean movement departed from this way of thinking by introducing the notion of single-piece flow. Single-piece flow is the state where parts are manufactured one at a time and flow through the manufacturing and supply chain as single units, transferred as customers order them. Of course, this is an ideal state. In any real situation there are places between steps where small buffers of parts need to be kept, but any lean expert will always do their best to reduce the number of these cases. In the software world, this would mean that a feature is analyzed, designed, implemented, and tested by different people and delivered in a single flow as the customer orders it.
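Just to give an idea of the impact, here is a quick back-of-the-envelope calculation (a little Python sketch with made-up numbers: three steps, one hour of work per feature per step, ten features) comparing batch transfer with single-piece flow:

    # Made-up numbers for illustration only.
    STEPS = 3            # analysis, implementation, testing, for example
    HOURS_PER_STEP = 1   # one hour of work per feature at each step
    FEATURES = 10        # size of the batch

    # Batch transfer: a step hands its work to the next step only once the
    # whole batch is finished, so nothing reaches the customer before every
    # feature has gone through every step.
    batch_first_delivery = STEPS * FEATURES * HOURS_PER_STEP        # 30 hours

    # Single-piece flow: each feature moves on as soon as it is done, so the
    # first feature is delivered after only STEPS hours.
    single_piece_first_delivery = STEPS * HOURS_PER_STEP            # 3 hours

    print(batch_first_delivery, single_piece_first_delivery)

With batch transfer nothing reaches the customer before 30 hours of work; with single-piece flow the first feature is out after 3, and feedback starts coming in right away.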

But wait, isn't that what we're already doing in Scrum? Yes and no. The introduction of iterations is a step toward achieving real flows. In Scrum, however, a flow is defined in a very simplistic way using three steps: "to do", "in progress", and "completed". What has to be done in these phases, and in which order, is usually left to the developers. Most Scrum teams try to solve this by writing two definitions, "ready to sprint" and "done", which usually take the form of checklists stating when a feature is ready to be switched to the "in progress" or "completed" state. Even though these artifacts provide a little bit of guidance to the team, they don't go far enough. The different members of the team are still left in the cold to figure out which activities have to take place and in which order. For instance, when a story is marked as done, most of the time it is not completely done: most of the teams I worked with didn't have a definition of done that included all the steps necessary to go to production. These were left out of the process, and the necessary tasks were done in an ad hoc manner at release time.
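To make the contrast concrete, here is a minimal sketch (hypothetical Python, with invented stage names) of a board where the flow is explicit instead of being reduced to three states; a feature has to move through every stage, including the release steps that "done" definitions usually leave out:

    # Hypothetical stages: an explicit flow instead of "to do" / "in progress" / "completed".
    STAGES = ["analysis", "design", "implementation", "testing", "deployment", "delivered"]

    class Feature:
        def __init__(self, name):
            self.name = name
            self.stage = STAGES[0]

        def advance(self):
            """Move the feature to the next stage of the flow, one step at a time."""
            index = STAGES.index(self.stage)
            if index < len(STAGES) - 1:
                self.stage = STAGES[index + 1]

    # With explicit stages, "where exactly is this feature stuck?" has a precise answer.
    feature = Feature("export to PDF")
    feature.advance()                  # analysis -> design
    print(feature.name, feature.stage)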

Even more importantly, the goal of a flow is to reveal inefficiencies (waste) and bottlenecks.

In Scrum, there is no mechanism to show the bottlenecks of a process. You can only see that a task is taking a long time or that it is blocked. You have no idea why or where it is stuck unless you ask the team members. The burndown chart doesn't help in that matter: when there is a bottleneck you can see some bumps on the graph, but the chart doesn't capture any information about where the delay is coming from. Compare that to a real flow, where you can see exactly where a task is stuck and which allows the use of a much more refined chart, the Cumulative Flow Diagram:
[image: Cumulative Flow Diagram]

(OK, the picture is not the best example, but you see my point.)
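For the curious, the data behind a Cumulative Flow Diagram is nothing fancy: just a daily count of how many items have reached each stage of the flow. A rough sketch (hypothetical Python, same invented stage names as in the earlier sketch) could look like this:

    from collections import Counter

    STAGES = ["analysis", "design", "implementation", "testing", "deployment", "delivered"]

    def cumulative_flow(snapshots):
        """For each daily snapshot (a list of stage names, one per work item),
        count how many items sit in or past each stage. Plotting these counts
        day by day gives the Cumulative Flow Diagram; a band that keeps
        widening points straight at the bottleneck."""
        rows = []
        for day, stages_today in enumerate(snapshots, start=1):
            counts = Counter(stages_today)
            reached = {}
            for i, stage in enumerate(STAGES):
                # An item in a later stage has already passed through this one.
                reached[stage] = sum(counts[s] for s in STAGES[i:])
            rows.append((day, reached))
        return rows

    # Hypothetical three-day history of a five-feature board.
    history = [
        ["analysis", "analysis", "design", "design", "implementation"],
        ["design", "design", "implementation", "testing", "testing"],
        ["design", "implementation", "testing", "deployment", "delivered"],
    ]
    for day, row in cumulative_flow(history):
        print(day, row)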

Another very important aspect of the lean philosophy: flows are driven by the pull principle, which means no part is made unless there is a need for it. This is very important, since the pull principle allows us to drop the iteration concept altogether, but that is the subject of another post.
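The mechanics are simple enough to sketch (again hypothetical Python, with a made-up WIP limit): a stage pulls a new item from the upstream queue only when it has free capacity, so work is triggered by downstream demand instead of being pushed along in batches:

    from collections import deque

    class Stage:
        """A work cell that pulls from upstream only when it has free capacity."""
        def __init__(self, name, wip_limit):
            self.name = name
            self.wip_limit = wip_limit   # made-up limit for the sketch
            self.in_progress = []

        def pull(self, upstream):
            # Pull-based flow: nothing moves unless this stage asks for it.
            while len(self.in_progress) < self.wip_limit and upstream:
                self.in_progress.append(upstream.popleft())

    backlog = deque(["feature A", "feature B", "feature C", "feature D"])
    development = Stage("development", wip_limit=2)
    development.pull(backlog)
    print(development.in_progress)   # ['feature A', 'feature B']
    print(list(backlog))             # the rest stays untouched until it is pulled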

Of course, simple Scrum workflows can work, but they rely mostly on informal communication. Informal communication is not bad, and it is an essential part of any project, but you can't rely only on this form of communication in large-scale projects requiring much more complex synchronization.

The Kanban movement, inspired by the lean philosophy, is much closer to putting real flows in place, but it still lacks some important features that are necessary to support single-piece flow, namely a leveled schedule. I'll leave this for another post.