Agile is great. Agile is wonderful. Agile is the savior of all things related to Software Engineering.
On paper.
In practice, classic Agile is difficult, confusing, frustrating, and just downright hard to implement. Knowing that many people would argue with that statement, some context is in order. A clarified statement reads:
At a small company with fewer than 10 software developers and a handful of hardware engineers, where the projects change constantly and the number of people working on any single project can change from week to week, classic Agile is difficult, confusing, frustrating, and just downright hard to implement.
Some aspects of Agile are awesome for small teams at small companies. This is doubly true when the current projects closely follow the needs and opportunities for the business. Smaller companies do not have the luxury of buffers between the Engineers and the opportunities. Often new opportunities require new features or tweaks to existing ones to land the deal. This places the development group immediately in the middle of the uncertain and ever-changing sales cycle.
When this happens, the development team feels like a rudderless ship in a storm with the winds of change blowing them around onto different projects whenever a big gust comes through.
This manifests in constantly shifting development groups and continually changing projects. On the surface, an agile method seems ideal for attacking these changes.
However, two of the big issues with using Agile in this environment involve calculating velocity and estimating projects. We could consider these tightly coupled aspects as two sides of the same coin. The problem, as stated above, is that when projects fluctuate often and the people working on them vary, it is nearly impossible to calculate group velocity, and therefore no metric data exists to support a validated estimation technique.
Basically, without a consistent means to measure velocity, estimation stays rooted in the realm of the WAG (Wild Ass Guess). In classic Agile, removing a key metric like velocity initiates a breakdown of the whole process. Instead of moseying along in this manner while the process crumbles around us, why not be proactive and deconstruct the whole method first and then rebuild it with solutions for the gaps and issues so that the overall process is stronger and geared towards the people, team, and company using the system?
The following explains how one team built a successful custom agile process and created one solution to the challenges around estimating use cases.
One of the key challenges for agile in this environment is how to present an apples-to-apples comparison and prioritization of the overall effort required to complete business targets and features. This covers the prioritization needs of the development team while also considering the customer needs of the organization.
The show Rick and Morty actually provides an amazing and simple answer to this conundrum and in doing so, for a small team and that team’s process, solves a critical Agile dilemma.
The Problems with Agile
A “point” to one person is not the same “point” to another person, both in terms of how estimates are understood and in terms of actual output. In this scenario, static development teams do not exist, so the classic agile team model does not hold. That model predicates velocity as a metric on a LOT of assumptions that the articles, instructional books, and videos never really get into. Classic agile teams require removing a lot of variables:
- Static Teams—The prototypical team is 3-5 people, and it does not change. That allows the team to estimate effort together and for measured velocity to be a predictor for future effort.
- Individual Contributions Are Minimized—The classic team approach minimizes discrepancies in what points mean to individuals and applies that logic to a team. That is great when there are static teams. It is not great when people move around and the whole calculation starts over each time a change occurs.
- Team Effort Maximizes Output Against Skill Level—When planning and estimating future projects, having a team velocity allows for reliable, higher-confidence estimation. When applying individuals to future projects instead, the wide range of skills (beginner, intermediate, expert) and pacing (slow vs fast) introduces variables that make this planning difficult, if not outright impossible, without assigning people to future projects upfront.
- Past Metrics Drive Future Planning—The velocity calculation is key to much of the classic agile model. It only becomes possible after some period of work, once velocity measurements average out to a reasonable confidence level. At that point, velocity can predict future projects. When teams and priorities keep changing, this cycle starts all over again and the data never gets to drive the process forward, as the sketch below illustrates.
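To make that last point concrete, here is a minimal Python sketch, using purely hypothetical numbers, of how a points-per-sprint velocity average falls apart the moment the roster changes:

```python
# Hypothetical illustration: velocity only predicts the future while the
# team that produced the history stays intact.
from statistics import mean

# Completed story points per two-week sprint for a stable 4-person team.
stable_team_history = [21, 24, 19, 23, 22]
velocity = mean(stable_team_history)  # ~21.8 points per sprint
print(f"Predicted next sprint: ~{velocity:.0f} points")

# Two people rotate onto another project mid-quarter. The old average no
# longer describes the group doing the work, so the history must be
# discarded and rebuilt -- and by then the roster has often changed again.
reshuffled_team_history = [11, 14]  # too few samples to trust
print(f"Samples since the reshuffle: {len(reshuffled_team_history)} (not enough to plan on)")
```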
So how do we get around these issues and provide reasonable estimations for future projects? How do we standardize what a “point” means in a way that we can apply it in a general sense for planning? How do we remove as many of the variables as possible?
As teased above, Rick and Morty provides an amazing and simple answer to this conundrum and, in doing so, solves a critical Agile dilemma for a small team and that team’s process.
Meet Mr. Meeseeks
What is an average Software Engineer? They seem to always initially have a friendly and helpful demeanor, willing to assist anyone who asks. They like to solve problems and complete the task in front of them. If that task is outside their abilities to solve, they will stay on it like a dog on the hunt. As time goes by and the task remains unfinished, their attitude and mental state begin to worsen dramatically. If the task goes long enough, they are even prone to violent behavior and outright insanity. Software Engineers asked to pair program are known to distrust and attack one another as their sanity decays, although they will continue to work on finding any possible solution to the task at hand, including collaborating with other Software Engineers. The psychological and physical symptoms of this painful existence can manifest in less than 24 hours.
The funny thing is, the description above is the description of Mr. Meeseeks, a character from Rick and Morty. I simply replaced “Mr. Meeseeks” with “Software Engineer” and voilà! The reflection of the fictional character against an average Software Engineer is frighteningly similar.
In the show Rick and Morty, a Mr. Meeseeks appears when someone presses the button on a Meeseeks box. When this happens, Mr. Meeseeks springs into existence and lives only long enough to fulfill a singular purpose. After serving that purpose, they expire and vanish into thin air. Borrowing from the show, their motivation to help others comes from the fact that existence is painful for a Meeseeks, and the only way to remove the pain of existence is to complete the provided task. Physical violence cannot harm them. The longer a Meeseeks stays alive, the more sanity it loses. In the show, the main character Rick warns the Smith family to keep their tasks simple.
Unfortunately, life does not always follow along with the recommendations of uber-smart characters on the small screen. Just as the Smith family in Rick and Morty gives increasingly complex tasks to their growing collection of Meeseeks, we define ever more complex problems for Software Engineers every single day.
Using Meeseeks for Project Estimation
We use the concept of Mr. Meeseeks to minimize the variables as much as possible, and the descriptions and setup above serve an important purpose. Every press of the button on the Meeseeks box produces a clone of identical ability. Every time the group grows beyond one or two individuals, additional complexity appears, which highlights the fact that adding a second person to a project does not linearly scale output to 2x. These diminishing returns become even more apparent and influential the longer a project goes on.
Therefore, the concept of a Meeseeks provides a simple framework for project estimation. The framework reduces the open variables and normalizes the metrics used for estimation. We accomplish this through the following (a rough sketch follows the list):
- We defined a “resume”, skill set, and ability expectations for Mr. Meeseeks. The goal is to define a generic and average member of the team.
- We use this definition of a Mr. Meeseeks to estimate projects and feature additions to existing products.
- We estimate in a vacuum. This means that project estimation disregards current or future projects and personnel. It does not consider people’s movement between projects, interruptions, or any other external influences.
- While the Meeseeks approach already removes many variables, we minimize the rest through assumptions. We assume that all Meeseeks will work on the project from start to finish. We assume that no support issues, vacations, or changing business needs will interrupt the project.
- Project estimation targets an “ideal project”. This means that, using the Meeseeks assumptions, we estimate the most efficient way to finish a project. This does not mean the fastest finish or the fewest Meeseeks; it targets the ideal, most efficient group to get the job done.
- We create a “fudge factor” metric for each project to add buffer time to the estimated project timeline. This metric attempts to define risk and potential unknown roadblocks for the project.
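As a rough illustration of the setup above, here is a minimal Python sketch of how a team might write down the Meeseeks definition and the estimation assumptions. The field names and the sample “resume” are hypothetical placeholders, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MeeseeksProfile:
    """The generic, average team member that everyone estimates against."""
    years_experience: int
    core_skills: tuple[str, ...]
    pace: str  # neither the team's fastest nor slowest developer

# Placeholder resume -- each team writes its own definition once and reuses it.
MEESEEKS = MeeseeksProfile(
    years_experience=3,
    core_skills=("Python", "REST APIs", "SQL", "unit testing"),
    pace="average",
)

# Assumptions baked into every estimate ("work in a vacuum"):
ESTIMATION_ASSUMPTIONS = (
    "Every Meeseeks works the project from start to finish",
    "No support issues, vacations, or changing business needs interrupt the work",
    "The estimate targets the ideal, most efficient group, not the fastest finish",
    "A per-project fudge factor adds buffer for risk and unknowns",
)
```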
Implementing The Estimation Solution
We are NOT attempting to provide an accurate representation of how long a project will take. We ARE attempting to provide a consistent measure of the effort needed for this project against that needed for another project. When estimating, we don’t know how many people will be available to work on the target project or the skill level of those people. So we eliminate those variables while striving to provide a consistent measure of one project against another.
We should recognize that these look like timelines. We are trying to be agile, and timelines and agile often appear to be at odds with each other. There are also concerns that perceived timelines become measuring sticks held against the development team. Noted. Please move on.
A process has to start somewhere, and the requirement at hand is evaluating the scope of projects or targets. This process attempts to meet that goal. It also requires revisiting the collected data against the variables known at project kickoff, which provides a chance to revisit the numbers and adjust them as needed.
Therefore, we are shooting for the following goal:
Estimation Planning generates the Meeseeks-based “ideal project” metrics. It also allows a review of the requirements and use cases for the intended work. The primary goal here is to provide consistent data from project to project and target to target so that prioritization provides more value for roadmap projects and general release planning.
For implementation, we have defined four metrics for estimation to provide a sense of scope for the target being estimated. These are also used to compare different projects or features against each other.
Remember – the Meeseeks concept defines a generically average member of the team!
Measure #1 – Technical Complexity
This is a measure of how technically complex the project is. The metric is independent of time. There can be a technically complex project that takes a day or two and a project with very low complexity that takes weeks because of the sheer volume of work involved.
We rate technical complexity on a 1-to-5-star scale:
- * (1-star) Simple & Known—The technology is simple and familiar. A Meeseeks understands this technology. This requires no learning curve.
- ** (2-stars) Simple & New—The technology is simple, but nobody on the team, including a Meeseeks, has used it. This requires a short learning curve.
- *** (3-stars) Complex & Understood—The technology has a higher level of complexity. At least one team member has experience with this technology, but not a Meeseeks. This requires a learning curve.
- **** (4-stars) Complex & New—The technology is highly complex and no team member has experience with it. This requires a substantial learning curve.
- ***** (5-stars) Rocket Science—The technology is very complex and requires a large effort in research, understanding, and/or architecture, along with a steep learning curve.
Measure #2 – Ideal # of Meeseeks
This is a count of the ideal number of Meeseeks to work on this project through to completion. Don’t forget that the estimation is for work in a vacuum, independent of other interruptions. This should target a number from 1 to 5 Meeseeks. If the number needed is larger, then the scope is too large and requires breaking down into more manageable chunks.
Measure #3 – Ideal # of Development Sprints
This is a measure of the estimated number of sprints to complete this effort. This is an idealized, general effort and assumes a standard two-week sprint. This is not crunch time and should not reflect any padding. Values should be in the following set: 0.25 (1-3 days), 0.5 (1 week), 1 (1 sprint, 2 weeks), 2 (2 sprints, 4 weeks), 3 (3 sprints, 6 weeks). If the number needed is larger, then the scope is too large and requires breaking down into more manageable chunks.
Measure #4 – Fudge Factor
This is a measure of the fudge factor (or buffer) for this project or target. While complexity factors into it, this metric represents more: it should account for the complexity, the number of people, the potential risk, and the ideal number of sprints.
The numbers here represent the padding added to any project timelines. Valid values are: 1 (10%), 2 (20%), 3 (30%), 4 (40%), and 5 (50%). If the fudge factor does not fit into the values, then the project or target needs review and possibly a breakdown into smaller chunks.
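Pulling the four measures together, here is a minimal Python sketch of how the metrics and the fudge-factor padding might be combined to compare two targets. The record layout, the validation rule, and the two sample projects are hypothetical, and the padding formula is just one reasonable reading of “padding added to the project timeline”:

```python
from dataclasses import dataclass

# Allowed values taken straight from the measures above.
VALID_SPRINTS = {0.25, 0.5, 1, 2, 3}  # fractions of / whole two-week sprints
VALID_FUDGE = {1, 2, 3, 4, 5}         # 1 = 10% buffer ... 5 = 50% buffer

@dataclass
class Estimate:
    name: str
    complexity: int   # Measure #1: 1-5 stars
    meeseeks: int     # Measure #2: ideal head count, 1-5
    sprints: float    # Measure #3: ideal number of two-week sprints
    fudge: int        # Measure #4: fudge factor, 1-5

    def validate(self) -> None:
        # Anything outside these ranges means the target is too big: break it down.
        if not (1 <= self.complexity <= 5 and 1 <= self.meeseeks <= 5
                and self.sprints in VALID_SPRINTS and self.fudge in VALID_FUDGE):
            raise ValueError(f"{self.name}: scope too large, split into smaller chunks")

    def padded_weeks(self) -> float:
        # Ideal sprint count, converted to weeks, plus the fudge-factor buffer.
        return self.sprints * 2 * (1 + self.fudge / 10)

# Hypothetical targets, used only to show the apples-to-apples comparison.
reporting = Estimate("Reporting rewrite", complexity=2, meeseeks=2, sprints=2, fudge=2)
pipeline = Estimate("New data pipeline", complexity=5, meeseeks=3, sprints=3, fudge=5)

for est in (reporting, pipeline):
    est.validate()
    print(f"{est.name}: ~{est.padded_weeks():.1f} weeks with {est.meeseeks} Meeseeks")
# Reporting rewrite: ~4.8 weeks with 2 Meeseeks
# New data pipeline: ~9.0 weeks with 3 Meeseeks
```

Neither number is a promise of when the work will ship; the value is that the two rows become directly comparable when prioritizing.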
This method produces results that provide the organization with an ability to complete direct comparisons between development efforts. This allows for prioritization that benefits both the development team and the business.
Overall, this is a way to get everyone on the same page and heading in the same direction. Remember, the best process is the one that works for you and your team. At the end of the day, you are not judged on how closely your process matches that of someone else’s definition of a method. Your output and results dictate your level of success, so practice continuous improvement and change your processes to focus on those results!
Thanks for reading!