You don’t need a bug tracking system


I see them all the time. Defect tracking systems with hundreds or thousands of defects. With a rising trend. Non-blocking defects end up in the defect tracker. “We’ll fix that later”, the team says. It is delayed work.

Every defect contributes to technical debt. Unpaid technical debt behaves like a loan accruing interest: it only grows, and over time it becomes harder to add new features.

Pay off debt immediately. Fix defects immediately before implementing new features.
This practice will make you go fast and keep you going fast.

In your defect tracking systems you may discover:

  • Items nobody understands anymore.
  • Items with high severity levels that are aged.
  • Items that reflect cosmetic preferences.
  • Items that serve as reminders for developers.
  • Items that describe code that can be improved.
  • etc.

Clean the clutter:

  • Remove all items that nobody understands.
  • Remove all items that don’t have any real impact.
  • Remove all ‘critical/severe’ items that are aged or update their severity level.
  • Remove all items related to unsupported versions.
  • Remove reminders. If it is really important, it will come back again.
  • Remove all code improvement suggestions. These are mostly subjective anyway.
  • Plan for fixing real defects that have an impact and put them in your product backlog.

A software defect is an error, flaw, failure or fault that causes your software to produce an incorrect or unexpected result or causes it to behave in unintended ways. If an entry doesn’t fit this definition, it does not deserve space in your defect tracker.

But maybe you don’t need a defect tracker at all. Features are not considered DONE when there are known defects. Fix known defects before releasing. Defects found after a release can be added to the product backlog and prioritized just like any other feature.

This approach also leads to delayed work, but it allows a more entrepreneurial attitude towards defects. When a defect is regularly de-prioritized, it can be removed from the product backlog. And the product backlog does not end up serving as a reminder list.

Sprint Demos can be a Scam


The idea is great. With proper User Stories and Acceptance Tests there should be no surprises during the Sprint Demo. The Product can be released to the User after a successful Sprint Demo.

Reality can be that after a successful Sprint Demo there are still several issues found in production. These issues can be related to usability, but also to real defects in the Product.

Sprint Demos can give a false sense of progress and only show that part of the System is working (the subset of features planned for the Sprint).

How is that possible when all the Sprint Demos went perfectly well? What are we missing here?

Do your Users find more issues in production than anyone expected?

There are two main issues I have seen with Sprint Demos:

  1. It is easy to show a demo that gives a false sense of progress when using mocking and stubbing. What we are demonstrating is that bits and pieces seem to be working, but we are not really demonstrating an integrated system.
  2. Acceptance Tests have a different purpose than System Integration Tests. Developers sometimes confuse the two. Acceptance Tests cannot be used to prove proper System Integration.

The purpose of Acceptance Tests is proving to the User that a feature is present and working as expected. The purpose of System Integration Tests is proving to the Programmer that the System Components work together properly.

Joe Rainsberger has written about how Integrated Tests are a scam. These are not the same as Integration Tests. I recommend reading his blog posts on the topic (see below).

Integrated Tests rely on the correct implementation of more than a single piece of nontrivial behavior of the System.

Integration Tests focus on checking integration points between subsystems, systems, or any other nontrivial client/supplier relationship.

Unless the Team writes proper micro-tests, it will be hard to pinpoint the root cause of a failing Integrated Test. A combination of micro-tests and System Integration Tests can pinpoint the root causes of system issues.
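As a sketch of the split between collaboration tests and contract tests that Rainsberger describes, consider this minimal Python example. All class and method names here are hypothetical, invented for illustration; the point is the pattern: the collaboration test checks that a client talks to its collaborator correctly using a mock, and the contract test checks that every real implementation honours the behaviour the mock assumed.

```python
import unittest
from unittest.mock import Mock

# Hypothetical client: a ReportService that depends on a rate provider.
class ReportService:
    def __init__(self, rates):
        self.rates = rates

    def total_in_eur(self, amount_usd):
        return amount_usd * self.rates.rate("USD", "EUR")

# Collaboration test (a micro-test): does ReportService ask its
# collaborator the right question? The collaborator is mocked.
class ReportServiceCollaborationTest(unittest.TestCase):
    def test_asks_provider_for_usd_eur_rate(self):
        rates = Mock()
        rates.rate.return_value = 0.5
        self.assertEqual(ReportService(rates).total_in_eur(10), 5.0)
        rates.rate.assert_called_once_with("USD", "EUR")

# Contract test: behaviour every real rate provider must honour.
# Subclass once per implementation.
class RateProviderContract:
    def make_provider(self):
        raise NotImplementedError

    def test_returns_positive_rate(self):
        self.assertGreater(self.make_provider().rate("USD", "EUR"), 0)

# A trivial implementation, plugged into the contract.
class InMemoryRateProvider:
    def rate(self, src, dst):
        return {("USD", "EUR"): 0.9}[(src, dst)]

class InMemoryRateProviderContractTest(RateProviderContract, unittest.TestCase):
    def make_provider(self):
        return InMemoryRateProvider()
```

Together these two small, fast tests cover what one slow Integrated Test (real service plus real provider wired together) would cover, and when one fails it points directly at the broken piece.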

Here is some recommended reading. After reading I suggest thinking about what this means for how you write tests.

My recommendations:

  • Learn to write proper micro-tests
  • Learn to write contract tests and collaboration tests
  • Learn to write integration tests at integration points in your system
  • Unlearn writing integrated tests

Blindly following the mechanics of the Scrum framework does not guarantee that a Team develops a quality system.

Technical development and testing practices are needed to reap the benefits of Scrum and eliminate a false sense of progress and quality.

Refactoring = waste


When Kent Beck published his book on XP, his message was clear: refactor mercilessly.

Refactoring is a form of rework and in lean that is considered waste. So was Kent mistaken about this?

Is there any proof that refactoring and Evolutionary Design (ED) are better than Big Upfront Design (BUD)? Agilists seem to hold a strong belief that BUD leads to more waste than ED. The cost of refactoring is considered lower than the cost of building the wrong system.

However, is there any proof of this, or is it just beliefs and gut feelings? I have certainly seen absolutely poor design come from ED, but I have seen the same with BUD. The difference? BUD seems to deliver structured crap. ED seems to deliver unstructured spaghetti crap.

“Now hold on”, I hear you saying. That must mean they did not refactor! Maybe… The success of refactoring seems to depend on several things. Skill, experience, motivation, courage, discipline. Discipline is only easy when it has become a habit. As long as we need to think about it, it is doomed to fail. If we are still dependent on willpower, then there is a fair chance we won’t refactor when we know it is needed.

Don’t get me wrong. I much prefer the ED way of working, but I am just not sure it can be considered superior to BUD.

ED allows earlier feedback. This makes it easier to adjust when you find out you are working in the wrong direction. ED allows responding to change, but this also has its limits. If the entire goal of the software system changes, most of the software may have to change with it.

Refactor mercilessly. Is that even a good idea? I have seen teams get stranded and take forever to deliver the next feature because they needed to refactor more. When you encounter such a situation, stop them. Refactoring is only allowed on code that is impacted by adding the next feature.
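The rule “refactor only code impacted by the next feature” can be sketched as follows. This is a hypothetical example with invented names: before adding a discount feature, first extract only the pricing code the feature must touch, with behaviour unchanged, then add the feature on top of the cleaner seam.

```python
# Before: pricing logic buried inline in an order routine (hypothetical).
def order_total(items, vat=0.21):
    total = 0.0
    for price, qty in items:
        total += price * qty
    return total * (1 + vat)

# Step 1: refactor only the code the new feature touches -
# extract the subtotal calculation. Behaviour is unchanged.
def subtotal(items):
    return sum(price * qty for price, qty in items)

# Step 2: the discount feature now slots in cleanly, without
# a speculative redesign of anything else in the system.
def order_total_with_discount(items, discount=0.0, vat=0.21):
    return subtotal(items) * (1 - discount) * (1 + vat)
```

Any refactoring beyond that seam would be work not justified by the feature at hand.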

One question to find out if your stand-up meeting is useful


The original objective of the scrum stand-up meeting is to ensure that all team members make a commitment towards their peers. That means you will do as you say.

The idea is that by saying what you will do in front of your peers, you will feel more social pressure to actually do it.

The idea of standing up is that the meeting stays within 15 minutes. The assumption is that we don’t like to stand for much longer.

The scrum stand-up meeting is not a status update meeting.

There are three questions everyone is supposed to answer:

  1. What did I accomplish yesterday?
  2. What will I do today?
  3. What obstacles are impeding my progress?

This will create more transparency with what’s happening in the team, right? This is not necessarily true.

Add the following question after everyone has had their turn and find out if your stand-up meeting was a waste of time. Ask several team members.

What were the answers of team member X or Y to the three questions?

When several people can’t answer this question your stand-up meeting was a waste of time.

What can you do about this? Here are some suggestions to make it more effective:

  1. Focus on collaboration.
    1. Who needs to work or talk to who? Make sure they both agree to do so.
    2. When using a scrum or kanban board, make sure that team members actually discuss what is on the board. Stay sharp so people don’t wander off doing other stuff. Does your board represent current reality? If not, inspect and adapt.
  2. Adapt the meeting structure (and length) to support what the team needs.
    1. Allow asking different questions or having some discussion.
    2. Allow for a different timing when needed to support alignment, collaboration and information sharing.
  3. Focus on agreements and decisions.
    1. Make sure the team aligns on goals and priorities and make agreements.
    2. Make sure the team calls on missed agreements.
    3. Make sure existing agreements can be re-negotiated.
  4. When a team is strong on collaboration, you may even make this meeting optional.
    1. Facilitate the meeting every day anyway.
    2. Make it everyone’s personal responsibility to decide to join or skip.
    3. All decisions made during the meeting are binding unless re-negotiated and agreed upon with those present.
  5. If you want to stick to the original scrum questions find out why people can’t answer the suggested additional question. You may have a team culture issue here.
    1. Did they not pay attention (this time)?
    2. Is the information provided to them not interesting enough? If so, why?
    3. Is there too much or too little information to make it useful?

Happy next stand-up meeting!

The simplest and most effective Definition of Done


The Definition of Done describes the criteria for when a user story is finished. It is regularly an extensive list of criteria. An example could be:
  1. All code is checked in;
  2. All code has written tests;
  3. All tests are passing;
  4. Code review conducted and passed;
  5. Functional documentation updated, reviewed and approved;
  6. Design documentation updated, reviewed and approved;
  7. User documentation updated, reviewed and approved;
  8. QA check passed;
  9. etc.

I have not seen this work very well. Teams claim they are done, but the story is “done” somewhere in a staging environment. Some work is left to deploy it to production.

This type of list starts small and simple and grows over time. When the team misses something a new line is added.

When that line is not applicable for all user stories the team starts cheating. They need to make exceptions in order to ever finish a user story.

The effort needed to move user stories to production may vary and becomes part of the next iteration(s). Consequently, less time is available to build new features and the velocity becomes unpredictable.

Here is a simple and very effective Definition of Done that works much better. Are you ready for it?

The user story is deployed to production and can be used by the end-user.

Rockstar developers consider things like “all code is checked in” and “all tests are passing” part of their de facto standard. Neither a discussion nor a special list is needed for this. Specific attention points can be added to the acceptance criteria.

Use the above definition and you will be fine.

What is the difference between agile, scrum, xp and lean?


Agile development has been around since the mid-late 90s. It is a bit confusing for those new to it though. Agile, scrum, lean, xp … what else? How do they relate to each other? Are they the same or different things?

According to some, agile refers to a way of thinking. According to others, it is a collective noun for scrum, xp, lean and other practices, frameworks and methodologies that are considered agile.

The confusion is not so strange. All the agile methodologies involve a different way of thinking about software development than most of us are familiar with.

Here is a short overview of the most well-known agile methodologies / frameworks.

  • XP – a collection of both technical and iterative planning practices, like Test Driven Development, refactoring and continuous integration, but also user stories, backlogs and iterations.
  • Scrum – product management framework. Defines roles, artifacts and activities for product planning and delivery.
  • Lean – methods focused on maximizing value for the customer and eliminating waste in the end-to-end delivery process.
  • Kanban – a single practice from Lean. A way to visualize the delivery pipeline, its flow and its bottlenecks.
  • DSDM – Dynamic Systems Development Method. An agile project delivery framework.
  • FDD – feature driven development. We don’t hear much about FDD these days.
  • Crystal – Alistair Cockburn’s version of agile development.

Scrum and Lean are currently the most popular and both are often complemented with XP. On their own, they lack the technical practices that properly support iterative and incremental development. It is strongly advisable to address both the non-technical and the technical practices.

Additional practices that have come along the last few years are:

  • Continuous Deployment – end to end continuous delivery of new features, bug fixes and changes. Push button releases.
  • DevOps – the combination of technical practices to support continuous delivery with focus on intensified communication and integration of development and operations.

Do you really need that feature?


Do you really need that feature? Yes? Oh, because your competitor has it?
Ok, I get it. I have some questions for you to answer.

  • How many users of your competitor’s product are using that feature?
  • How often?
  • Do they like it?
  • How many bought the product because of that feature?
  • How many would not miss that feature?
  • How much money is your competitor making on that feature?
  • How many users of your competitor would switch to your product because of that feature?
  • What would happen if you did build that feature?

Think twice before you jump and demand yet another feature. Copying features can be a waste of money and hardly ever makes your product stand out in the market.

Just because your competitor has it does not mean it’s a good idea to have it too. You’ll enter a price competition and your competitor has a head start. You may never earn back your investment.

Red Flag – There is no customer


In agile development we value customer collaboration. That requires a customer to collaborate with, agree? The first principle of the agile manifesto is:

Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.

Let’s focus on the first sentence of this principle. This clearly states that we want to satisfy “the customer”.

Who is “the customer”? Very often teams only deal with a project leader, whose job is to play the role of the customer. To me that is a clear smell. Yet, it is reality for many teams.

Everyone who’s ever been to a decent agile introduction training has been told that we work in close collaboration with the customer.

Now why do we consider real user access important? Well, we want to make sure we actually build software that our users love and are happy about. The answer is in the last part of the above principle: “delivery of valuable software”. By talking to a real user you avoid building stuff they don’t actually need. You are not wasting your or someone else’s money and time, and you avoid damaging your reputation.

So what are your options if you don’t have access to a real customer at the moment?

  • Keep things small. Small teams have a better chance to get closer to a real customer.
  • Use other ways to get access to real user feedback. Dropbox allows users to post feature requests and have other users vote on features they want. The advantage of such a strategy is that you even get a prioritized backlog.
  • Collect usage data from your application. Monitor which parts of your application are used most. Try to monitor how users use the software and when. You can perhaps ask for in-app feedback. Make sure your users know about this and have an option to participate, though. People may dislike finding out that you were collecting data without their knowledge.
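The usage-data option above can be as simple as counting which features users trigger. Here is a minimal sketch; the `UsageTracker` class and its methods are hypothetical, and a real product would ship events to an analytics backend and, as noted above, only for users who opted in.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical in-app usage tracking: record feature events,
# then report which features are used most.
class UsageTracker:
    def __init__(self):
        self.events = []

    def record(self, user_id, feature):
        # Store who used which feature, with a UTC timestamp.
        self.events.append({
            "user": user_id,
            "feature": feature,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def most_used(self, n=3):
        # Rank features by how often they were triggered.
        counts = Counter(e["feature"] for e in self.events)
        return counts.most_common(n)

tracker = UsageTracker()
tracker.record("u1", "export")
tracker.record("u2", "export")
tracker.record("u1", "search")
```

Even a crude count like this gives you a data-backed starting point for prioritizing the backlog instead of guessing.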

Even these options may be hard to implement. Eventually, it boils down to whether your organization is able to understand the value of real customer feedback. That requires a humble attitude. An attitude where we don’t assume that we know what the user wants. We often confuse what we want ourselves with what the users of our software product might want.


Agile Dysfunctions


Agile development has been around for close to 15 years now. Tim Ottinger and I started talking about “Take Back Agile”. My incentive for this was frustration and shame about what we have achieved in all those years.

The success stories about agile development often seem exaggerated. Most implementations of agile do not come close to what the stories tell us.

Joshua Kerievsky moved to Anzen. With Anzen we look at software development from a safety perspective. Safety for customers, users, investors, developers, basically anyone who deals with software.

The advantage of being in the agile development profession for so many years is that it is easy to see patterns of success and failure. Unfortunately the latter still prospers despite agile development.

This post is the first in a series that I am about to write.

I will start posting about red flags, problems, failure modes and defenses we have come to know over the years. My intention is to share this knowledge so you can use it to practice safer software development.