Agile development and the Central Florida Software Symposium
The No Fluff Just Stuff conference came to Orlando two weeks ago, and as always it did not disappoint. Although the turnout this year was smaller than last year, the quality of the conference was top notch. Many of the presentations for this conference addressed the Agile software development process.
A major aspect of software development that I enjoy is witnessing satisfied customers using my software. In spite of heavyweight processes, endless meetings, feature creep, blown deadlines, and other disruptions that do nothing to help create software, finally delivering the product is an immensely rewarding experience.
This is why Agile appeals to me. As a software developer, the only thing that matters at the end of the day is whether you have software to deliver, and that is one of the major principles of Agile. Many other aspects of Agile were covered in depth by Bruce Tate, Neal Ford, and Venkat Subramaniam. I will attempt to give a brief summary of the major points.
As previously mentioned, working software is the highest priority in the Agile process. To that end, progress is measured from the customer’s perspective based on the number of user stories completed. Note that user stories are not the same as requirements. A requirement usually reads something like this: “The system shall <insert function here>.” Stories are more in-depth and speak in terms of the needs of the end user as opposed to a bullet list of functions — for example, “As a support rep, I need to pull up a customer’s recent orders so I can answer billing questions,” rather than “The system shall display order history.” This provides a deeper understanding of the expectations that users have for the system.
So what is the definition of a completed story? The answer, quite succinctly, is a completed screen. End users don’t care about your DAOs or your automated builds or passing unit tests (more on that later). They want software that they can actually use. This provides an incentive to build software in vertical slices instead of horizontal layers. For example, if writing a distributed application with a Swing client, build the DAOs, services, and UI screens required for a piece of functionality instead of sitting down and trying to write all of the DAOs up front (or even worse, the entire UI). This has two benefits: first, it allows the customer to see progress after an iteration (as opposed to just having a bunch of DAOs). Second, it helps build confidence that the architecture and design of the system will actually work.
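To make the vertical-slice idea concrete, here is a minimal sketch in Java of one hypothetical “show a customer’s orders” story built top to bottom: just the one DAO method, the one service method, and an in-memory stand-in for the database. All names (`OrderDao`, `OrderService`, `summaryFor`) are invented for illustration; the Swing screen would simply render the string the service returns.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical DAO: only the single query this story's screen needs.
interface OrderDao {
    List<String> findOrderIdsByCustomer(String customerId);
}

// In-memory stand-in for this iteration; a later slice could swap in
// a JDBC- or Hibernate-backed implementation behind the same interface.
class InMemoryOrderDao implements OrderDao {
    private final Map<String, List<String>> ordersByCustomer;

    InMemoryOrderDao(Map<String, List<String>> ordersByCustomer) {
        this.ordersByCustomer = ordersByCustomer;
    }

    public List<String> findOrderIdsByCustomer(String customerId) {
        return ordersByCustomer.getOrDefault(customerId, new ArrayList<>());
    }
}

// Service layer for the same story -- again, only what the screen needs.
class OrderService {
    private final OrderDao dao;

    OrderService(OrderDao dao) {
        this.dao = dao;
    }

    // The Swing screen for this story would display exactly this summary.
    String summaryFor(String customerId) {
        List<String> ids = dao.findOrderIdsByCustomer(customerId);
        return ids.isEmpty()
                ? "No orders"
                : ids.size() + " order(s): " + String.join(", ", ids);
    }
}
```

The point is the shape, not the code: at the end of the iteration the customer sees a working screen, and the team has proven that one thin path through every layer of the architecture actually hangs together.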
This brings up another point. Those who architect a system should also be helping to code it. For one, if you are coding all the time then you are painfully aware of leaky abstractions. Decisions made at the architectural level have an impact all the way down to the code, and the only way to understand that impact is to have the skill and desire to write code. Again, the most important artifact is working code, and you can’t code in PowerPoint! We see examples of this all the time. Projects like Spring, Hibernate, and Ruby on Rails all evolved from actual working production code; they were essentially refactored to be generic enough to be usable by the masses. The unpopular EJB 1.x and 2.x specs, by contrast, were designed by committee and only really put to the test after release.
Test driven development is a major factor in the success of an Agile project. As time passes and iterations are completed, aspects of the project will change. Perhaps the customer will request a modification to a story. Or some new stories have similar functionality to older stories, so you have duplicate algorithms (or copy/paste code). In these instances, TDD will help when it comes to refactoring code to address these issues. A test that expresses the expectation of a story greatly increases your confidence in the code that provides the functionality. When removing duplicate code or splitting a method into two or three, having a test that exercises that code will verify that the refactoring didn’t change the behavior of the system.
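Here is a small Java sketch of that safety net in action, using a made-up pricing rule (“orders over $100 get 10% off”). The rule was hypothetically copy/pasted into two services and has now been extracted into one method; because the test below pins the expected behavior, the extraction can be done without fear. Plain `assert` statements stand in for a JUnit test to keep the sketch self-contained (run with `java -ea` so assertions are enabled).

```java
// Hypothetical pricing rule, now living in exactly one place after
// duplicate copies in two services were refactored into this method.
// Amounts are in cents to keep the arithmetic exact.
class Pricing {
    static long totalCents(long subtotalCents) {
        return subtotalCents > 10_000 ? subtotalCents * 9 / 10 : subtotalCents;
    }
}

// The "test first" half: written before the refactoring and kept after it.
// If the extraction had changed behavior, one of these would fail.
class PricingTest {
    static void run() {
        assert Pricing.totalCents(20_000) == 18_000 : "10% off over $100";
        assert Pricing.totalCents(5_000) == 5_000 : "no discount at or under $100";
        assert Pricing.totalCents(10_000) == 10_000 : "exactly $100 is not 'over'";
    }
}
```

The test doesn’t know or care whether the rule lives in one method or three; it only expresses the story’s expectation, which is exactly what makes refactoring underneath it safe.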
Tools such as unit tests, code coverage, and dependency analysis are good measures of the overall health of a project. Note that these metrics are strictly for use by the development team. Use of these metrics by management would be misguided, as they do not provide a relevant measurement of progress. It should be emphasized that completed stories are the metric that truly matters, not achieving 100% code coverage.