The big IT project is running late… a decision needs to be made that will affect next year’s budget, and there seems to be no option that will appease everyone. Will budget be thrown away by continuing on the current path? Should the project be killed? Is the project this close to achieving a breakthrough? All of the status reports in the past few months have stated “we’ve been making good progress,” but suddenly the project has blown past its promised deadlines and is over budget.
One common complaint from upper management is that the provided data is so filtered and optimistically skewed that it is hard to obtain a realistic status report on the project. It is even tougher to make the big decisions when every metric is slanted toward a predetermined path. It is very tempting to peel away that layer of sunshine and look at the raw bug reports. You don’t want to find out at the last minute that the project has been derailed. Can bug tracking reports provide you with an objective and unfiltered look at a project? They could. However, before moving down that path, here are five considerations:
#1 – One-Sidedly Pessimistic
Bug reports list only what doesn’t work. Quality Assurance is there to report on both what works and what doesn’t. If you rely solely on the bug database and not on any sort of feature pass/fail checklist, you are getting only one (negative) side of the story. A raw bug count can be deceiving: many bugs may be duplicates, while severe bugs may still lurk undiscovered in the code (and go uncounted). There can also be bugs that aren’t really material problems with the software, but are more about employees signaling, “Hi, I’m here contributing to the project” (especially if they know higher-ups are reviewing). A high bug count doesn’t necessarily equate to bad software.
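To make the point concrete, here is a minimal sketch (with a hypothetical bug-record schema invented for illustration) of how a raw bug count can overstate the real defect load once duplicates and non-material entries are filtered out:

```python
# Hypothetical bug records: "duplicate_of" links a report to the original
# bug it duplicates, and "material" flags whether it is a real product
# problem. Neither field name comes from any particular tracker.
bugs = [
    {"id": 1, "summary": "Crash on save",       "duplicate_of": None, "material": True},
    {"id": 2, "summary": "Crash on save",       "duplicate_of": 1,    "material": True},
    {"id": 3, "summary": "Typo in dev tooltip", "duplicate_of": None, "material": False},
    {"id": 4, "summary": "Data loss on import", "duplicate_of": None, "material": True},
]

# The number management sees at a glance...
raw_count = len(bugs)

# ...versus unique, material defects only.
material_count = sum(
    1 for b in bugs if b["duplicate_of"] is None and b["material"]
)

print(f"raw: {raw_count}, material: {material_count}")  # raw: 4, material: 2
```

Half of this (tiny, contrived) database is noise; any reading of the raw count alone would double the apparent defect load.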
#2 – False Conclusions
Normally there are more bugs found at testing time than there are developers to fix them. Due to this imbalance, bugs get warehoused in a developer’s queue. Bugs are naturally assigned to the people who are most likely to fix them, which typically results in experienced senior developers getting buried in bug assignments. A casual observer may see Programmer “A’s” name listed next to 100 bugs and think: “Wow, ‘A’s’ code must really stink.” In actuality, “A” is the best person for the job, and the bug count illustrates the team’s confidence in his abilities. Likewise, Programmer “Z” might have only a few bugs assigned to him – because he can currently handle fixing only minor bugs.
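A short sketch of this trap, using made-up assignment data: the raw per-developer count points one way, while a severity breakdown tells the opposite story.

```python
from collections import Counter

# Hypothetical (developer, severity) assignment pairs: senior dev "A"
# is trusted with the critical work; junior dev "Z" gets a minor fix.
assignments = [
    ("A", "critical"), ("A", "critical"), ("A", "major"), ("A", "major"),
    ("Z", "minor"),
]

# Raw counts alone make "A" look like the problem...
raw_counts = Counter(dev for dev, _ in assignments)
print(raw_counts)  # Counter({'A': 4, 'Z': 1})

# ...but the severity breakdown shows "A" simply owns the hard bugs.
by_severity = Counter(assignments)
print(by_severity[("A", "critical")])  # 2
```

The takeaway: a bug count per name is an assignment decision made visible, not a measure of code quality.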
#3 – Past Performance Doesn’t Guarantee Future Results
The disclaimer “past performance doesn’t guarantee future results,” popularly used in financial advertisements, applies to defect tracking too. The rate at which bugs are submitted and fixed can be volatile and depends on a range of outside factors.
Testers and developers are constantly weighing questions such as:
- Can I easily repeat this bug?
- Does the bug report give me enough information to find the problem?
- Can I access that block of code to test it?
- Will the change I’m planning to make break something else?
- Is it ready to test, or am I wasting my time testing incomplete code?
- Is there a bug more important for me to fix than this one?
Day-to-day events may not indicate a trend. A cavalcade of bugs may be submitted in a short timespan because a new area has been opened for testing. Likewise, there may be an avalanche of fixes completed because a developer has fixed an underlying problem or plowed through a number of small easy-to-fix bugs. There may be dry spells where all of the easy-to-find bugs have already been discovered. A developer might be working on fixing a particularly thorny bug that takes a lot of time and nothing else gets fixed. Longer-term weekly/monthly reports given to upper management smooth out the short-term gyrations that are found in daily bug tracking. These reports may better illuminate developing trends than the raw data.
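The smoothing effect of those longer-term reports can be sketched with a simple trailing moving average (plain Python, no particular reporting tool assumed); the daily numbers below are invented to mimic a testing push that spikes submissions for two days:

```python
def rolling_mean(series, window):
    """Trailing moving average; None until the window has filled."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

# Hypothetical daily bug-submission counts: days 4-5 spike because a new
# area was opened for testing, then things settle back down.
daily = [3, 2, 4, 30, 28, 3, 2, 4, 3, 2, 3, 4]
weekly = rolling_mean(daily, 7)
print(weekly)
```

The daily series swings from 2 to 30, while the 7-day view drifts gently up and back down: the trend, if any, is far easier to read in the smoothed numbers.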
#4 – Observer Effect
A bug tracking system should foremost be used to track bugs. If it is used for multiple roles, however, there is a very real possibility that “tracking bugs” will be bumped to a secondary priority. If high-level management is scrutinizing bugs submitted, and the team is feeling the heat, it may create some of the following dysfunctional behaviors:
- Tough bugs will be reassigned back and forth to different personnel in a game of “hot potato”.
- Submitting new bugs might be squelched (even though new problems have been found).
- Important but hard-to-fix bugs might be de-prioritized and buried with lower priority ratings.
- Bugs might be marked as “fixed” (even though they are not), moved to re-test, re-tested, re-found, and re-submitted, creating a lot of unnecessary “re’s”.
With the Observer Effect, simply measuring something can change what is being observed. For example, someone taking a satisfaction survey might tweak their answers to be more agreeable to the person giving the survey. When looking at the bug repository, be careful you aren’t measuring only bug counts and bug priorities, because those metrics will start to skew under (possibly unintentional) management pressure.
#5 – Huge Volume
Another reason it may not be worthwhile to comb through all of the bugs is that it will simply take too much time. A bug tracking system that holds the work of dozens of people (or more) is very difficult to synthesize – even given the proper time. One phrase that is overused but completely applicable here is “trying to wrap your arms around the problem.” Trying to do so can lead to micromanaging the project and getting sidetracked by a myriad of relatively unimportant details.
A bug tracking system can be valuable, but only if used appropriately and with all ramifications considered. In my opinion, a bug tracking system is a treasure chest of skeletons. If the right management decisions are made, those skeletons will be buried and never encountered by customers. If improperly used, it can lead to misleading conclusions, micromanagement, and decisions built on biased data.
What are your thoughts on defect tracking systems? How much is your code development driven by high-priority bug reports? If you have any additional thoughts on how to best implement a defect tracking system, please share below.
Chris Hasbrouck is a Consultant II at SDLC Partners, a leading provider of business and technology solutions. Please feel free to contact Chris at firstname.lastname@example.org with any questions on this blog post or to further discuss software testing.