This article is the second in a three-part series exploring how you can keep a better handle on your software development project to ensure it doesn’t spin out of control or face cancellation before the product’s release. One major reason software projects are difficult to manage is the intractable presence of uncertainty in developing software, which renders traditional methods of project management ineffective. Recent developments in software project management theories and practices are addressing this uncertainty.

In the first article of this series, I identified internal management within software projects as a key to the success or failure of developing useable, effective, and used software. Central to such failures is the level of uncertainty in these kinds of projects, which defies the more common assumptions, tools, and methods of traditional project management.

But, if we can’t use traditional project management, what can we do?

It’s a question that has many current software thinkers, researchers, and practitioners proposing ideas, methods, and practices that range from “just make a few modifications” to a wholesale “you have to change the very nature of the business!”

Here are three examples (among many) where really good, experienced, and successful software developers are proposing changing the way software project managers manage projects. Among the differences to watch for, notice how each handles the “unknownness” of the project, whether it’s as tangible as project resource allocation or as intangible as knowing how the “real product” of the project should be defined.


Steve McConnell is a familiar face to some software teams at the Lab. Most recently, he’s been teaming with two other software professionals to present a software development course and do some consulting with individual code teams. Steve is a remarkably skilled and experienced software professional, as well as an accomplished author. The Software Project Survival Guide is his third book.[1]

In giving context to McConnell’s guidelines, consider “making things” from a process point of view. If traditional project management is concerned with making “something”, then the underlying principle is that you can get better at it if you improve all of the processes used to make it. The basic idea is that you figure out how to make “something” by very formal, measurable, and recordable processes, analyze what went right or wrong when you’re finished, improve the processes, and then do it again. After making the next “something”, improve the processes again, and so on, until you’ve optimized the processes to produce the “something” in a very reliable, repeatable, estimable, and controllable way. This is why early bridges failed and later bridges did not. We learned something in between and built the new knowledge into the processes.

The Software Engineering Institute’s (SEI’s) Capability Maturity Model (CMM)[2] is the analogous approach for software projects. It is a process-improvement-based model for producing software “somethings”.

From this perspective, the project management techniques recommended by McConnell are not radically different. He’s very serious about traditionally recognized software development practices such as technical reviews, staged development, change control, planning and plans, risk management, measured quality, predictive schedules, and estimated costs. Where he does differ from the CMM, which wants to drive out “mistakes” by process improvement, is in recognizing the inability of the project team to know everything about a project before it begins. He describes this inability with the term “cone of uncertainty”, which applies to many aspects of a project. In estimating a project, for example, the effect is definite:

“The cone of uncertainty has strong implications for software project estimation. It implies that it is not only difficult to estimate a project accurately in the early stages, it is theoretically impossible.” (Ref. 1, p. 32)
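As a rough illustration of how the cone narrows, here is a small Python sketch. The phase labels and multiplier values are approximations chosen for demonstration (McConnell gives his own figures in the book); they are not numbers taken from this article.

```python
# Illustrative multipliers for the "cone of uncertainty": the earlier
# in the project you estimate, the wider the plausible range.
# These values are approximate, for demonstration only.
CONE = {
    "initial concept":       (0.25, 4.00),
    "approved definition":   (0.50, 2.00),
    "requirements complete": (0.67, 1.50),
    "design complete":       (0.80, 1.25),
}

def estimate_range(nominal_effort, phase):
    """Return the (low, high) effort range the cone implies at a phase."""
    low, high = CONE[phase]
    return nominal_effort * low, nominal_effort * high

for phase in CONE:
    lo, hi = estimate_range(12, phase)  # 12 person-months nominal
    print(f"{phase:22s}: {lo:5.1f} to {hi:5.1f} person-months")
```

Note how a nominal 12 person-month estimate spans 3 to 48 person-months at the concept stage; this is the sense in which accurate early estimates are “theoretically impossible.”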

Because of that uncertainty, process improvement will never be able to proactively, or iteratively, eliminate mistakes. This means that

“Success in software development depends on making a carefully planned series of small mistakes in order to avoid making unplanned large mistakes.” (Ref. 1, p. 36)

With the assumption that you can’t prevent mistakes, his recommended practices put a lot of effort into the beginning of the project.

Have high visibility: a constant involvement of the customer, stakeholders, and management; everybody knows everything that’s going on, all of the time.
Relentlessly hold design reviews, quality reviews, code reviews, etc.: find the mistakes when they’re cheap, instead of later on, when they can cost from 50 to 200 times the earlier cost.
Give something to users in staged development: consistently and frequently release what you have so the user can be involved as soon as possible in seeing the product. “Staged releases force development teams to ‘converge’ the software – in other words, bring it to a releasable state – multiple times over the course of a project. Converging the software reduces the risks of low quality, lack of status visibility, and schedule overruns, . . .” (Ref. 1, p. 238)
Work with 80% rather than 100%: “Try to complete 80% of the requirements before beginning architecture and 80% of the architecture before beginning detailed design. Eighty percent is not a magic number, but it is a good rule of thumb: it allows the team to do most of the requirements work at the beginning of the project while implicitly acknowledging that it is not possible to do all of it.” (Ref. 1, p. 58)
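The arithmetic behind “find the mistakes when they’re cheap” is stark. A hypothetical sketch, in which the per-defect review-fix cost is an invented figure and only the 50-to-200-times multiplier comes from the practice above:

```python
# If a defect costs `review_cost` to fix during a review, the same
# defect found late can cost 50 to 200 times as much.
review_cost = 200                      # hypothetical dollars per defect
late_cost = (review_cost * 50, review_cost * 200)
print(late_cost)                       # (10000, 40000)
```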
With no apologies for a disciplined and structured approach, and yet appreciating the fact that uncertainty prevents you from doing all of the planning that you might like, Steve believes very strongly that

“A successful project should be one that meets its cost, schedule, and quality goals within engineering tolerances and without padding its schedule or budget. After detailed plans have been made, the current state of the art supports meeting project goals within plus or minus ten percent or better. This level of performance is currently within the reach of the average software project manager.” (Ref. 1, p. 4)
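The “plus or minus ten percent” target is easy to state as a check. A minimal sketch, with invented project numbers:

```python
def within_tolerance(planned, actual, tolerance=0.10):
    """True if `actual` lands within +/- tolerance of `planned`."""
    return abs(actual - planned) <= tolerance * planned

# Hypothetical project: planned at $500k and 12 months.
print(within_tolerance(500_000, 540_000))   # cost overrun of 8%: inside
print(within_tolerance(12, 13.5))           # schedule slip of 12.5%: outside
```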


It was in the late 1980s that “process improvement” reached a fever pitch. The SEI CMM, for example, was announced in 1987. Five years later, several program managers in the U.S. Department of the Navy had grown disillusioned with waiting on the promises of organizational “process improvement” in software development. They formed the Software Program Managers Network (SPMN) to help develop more practical, applicable, and timely methods of project management.

In 1994, the SPMN sponsored a Best Practices Initiative that took a three-pronged approach to developing what they needed. The first effort, the Airlie Council, was a group of a dozen or so nationally recognized experts in software engineering and management. The Council included such dignitaries as Roger Pressman, Ed Yourdon, Howard Rubin, and Tom DeMarco. The product of the Council was a list of “Nine Best Software Practices.”[3]

The second effort was a number of focus groups that made further recommendations of “43 Best Supporting Software Practices” that connected to and supported the first list. The third effort was the formation of an oversight committee to watch over the first two efforts.

The movement to “best practices” was a specific and major departure from the “process improvement” orientation of both the traditional project management methodology (via the Project Management Institute[4]) and the CMM (via the SEI). It was a recognition that the uniqueness of software projects, the inherent uncertainty in both the product and process of management, and the volatile nature of project change, resisted the stability and repeatability needed for process-control-based management.


Imagine a scale of how much project uncertainty a method accommodates, with traditional project management on the far left: Steve McConnell sits close to the left, the Airlie Council near the center, and project management based on complex adaptive systems at the far right of the scale. The newest and hottest method for software development comes by way of Jim Highsmith, a consultant and author, and is a by-product of some of the work done at the Santa Fe Institute. It takes the accounting of project uncertainty to a new level.

“Why would we think high-change, high-speed, and high-uncertainty projects should also be predictable and controllable? They are boundable and manageable, but not predictable and controllable.” [5]

In this case, then, the idea of “process improvement” is inappropriate: no projects will be substantially the same. A whole new mindset is needed, and it is centered on the need to develop a “learning project environment”.

Jim’s project would look like this:

Project planning is very important. Develop overall plans, but don’t drive for a low level of specificity. Instead, leave the specific planning for the first “timebox” of the project.

A “timebox” is a definite length of time, chosen by the project team and usually 3 to 4 months, within which a “mini-project” of sorts is conducted. Project deliverables are set for the end of the timebox (if not fully functional code, then a well-defined piece of it), metrics are identified and measured, estimates are recorded, and anything else that is determined to be important is captured.

The project is started with a fully dedicated, small, senior team.

Towards the end of the timebox, all work is wrapped up for delivery, including metrics, reports, documentation, a working version of the software, etc.

A full “post-mortem” exercise is held with all team members, customers, users, stakeholders, and managers. All parts of the project are reviewed and discussed.

The major question to be discussed and answered is “WHAT DO WE KNOW NOW THAT WE DIDN’T KNOW THEN?”
With that information, literally anything about the project can change. This might even include changing the very purpose of the project! (A traditional project manager would have a heart attack.)

All estimates, metrics, directions, activities, milestones, and priorities are now reset for the next timebox and the project continues.

Iteratively, openly, and deliberately, the project cascades through a series of timeboxes until the project deliverables are reached.

It is usually true that the product of the project will be anything except what was initially expected!
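The timebox cycle above can be sketched as a loop. Everything here is invented for illustration, including the names, the structure, and the stop condition; it is one way to express the cycle, not Highsmith’s own formulation.

```python
from dataclasses import dataclass, field

@dataclass
class Timebox:
    number: int                       # which timebox in the cascade
    deliverables: list                # goals set for this timebox
    lessons: list = field(default_factory=list)

def run_project(deliverables, build, postmortem, replan, done, max_boxes=8):
    """Cascade through timeboxes, resetting the plan after each postmortem."""
    history = []
    for n in range(1, max_boxes + 1):
        box = Timebox(n, deliverables)
        build(box)                     # the 3-to-4-month mini-project
        box.lessons = postmortem(box)  # "what do we know now?"
        history.append(box)
        if done(box):
            break
        # literally anything can change here, even the project's purpose
        deliverables = replan(deliverables, box.lessons)
    return history
```

Each pass through the loop is one timebox; the `replan` step is where estimates, metrics, milestones, and priorities are reset using what the postmortem taught the team.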

A few of the assumptions behind this methodology include:

Every project is a learning situation. Don’t focus on individual performances, but how the team functioned and the quality of “learning” that took place.

Every project is unique. Instead of going for “process improvement”, base your progress on “people improvement”. A major project goal will be to develop agile, adaptive, and emotionally intelligent teams that can flourish in high-change, high-speed, and high-uncertainty projects. “Learning about oneself – whether personally, at a project team level, or at an organizational level – is key to agility and the ability to adapt to changing conditions.”

Every project MUST continually include the customer, user, stakeholder, manager, or anyone else that has vested interest. They must “learn” just like the team, they must suffer through the hard decision-making, and they must adapt their expectations as the project unfolds.

Every project depends on good postmortem reviews. “Postmortems tell more about an organization’s ability to learn than nearly any other practice. Good postmortems force us to learn about ourselves and how we work.” (Ref. 5)

This is entirely different from traditional process-oriented techniques. Specifically, Highsmith says:

“Speed, change, and uncertainty don’t succumb to optimizing practices. It goes beyond a shift from a linear to an iterative lifecycle – it goes to the heart of the difference between a fundamental belief in adaptation rather than optimization.” (Ref. 5)


Some software project teams know exactly what should be done and how it should be done and when it should be done.

Some software project teams don’t know, and can’t know, any of the what, the how, and the when. Those projects may rely on information that is invented or discovered during the actual activities of the project.

Most software project teams are probably somewhere in between.

In all cases, trying to manage the projects without recognizing the existence and level of uncertainty, however much it might be, will be a prescription for a doomed project. Fortunately, some people are getting smarter and are learning how to mitigate the circumstances so that having successful projects can come closer to being routine.

In the next installment, in “GrassRoots Software Management: Simple Things To Do,” I’ll have a list of practices from a review of recent project management methodologies and best practices.


1. Steve McConnell, Software Project Survival Guide (Microsoft Press, Redmond, WA, 1998).

2. Carnegie Mellon University, Software Engineering Institute, “Software, Systems Engineering, and Product Development CMMs®.”

3. Software Program Managers Network, Airlie Software Council, “Nine Principal Best Practices.”

4. For more information on project management, see the Project Management Institute Web site.

5. Jim Highsmith, “Managing Complexity,” in “Application Development Strategies Newsletter, August 1998,” Cutter Information Corp.