Who is responsible for quality? The facile answer is QA, but that doesn't work very well in reality. Quality is the responsibility of the team as a whole, developers included. You can't test quality into a product: if everyone isn't concerned with it, it isn't going to happen. Without quality, your product is merely okay at best, close to the goal but not quite there. In an agile world, quality is largely determined by acceptance, and stories can't get accepted until they are both coded and tested. A common question is,
"If QA needs to test what dev creates within the same iteration, what do the developers do at the end of the sprint?" Well, how about... testing?
Unit tests can always be improved, even when practicing test-driven development. And just because QA creates a test case doesn't mean a developer can't execute or automate it. Something our team does periodically is a multi-user test, a "pokeathon" if you will. We get the extended team (support, sales, etc. are invited as well as the engineers) together for an hour, create a conference bridge so we can all talk to one another, and start banging away on a single install of our webapp. This unscientific and undisciplined method of testing has uncovered a wealth of issues for us, improved our overall quality, and turned out to be a bit of fun at times, too. I highly recommend trying it out.
Post title is a lyric from Horseshoes & Hand Grenades by Trent Summar and the New Road Mob.
Nice post. I assume you want a certain level of quality in the product before attempting the group test to avoid overlap with known problems.
Posted by: Mike Lunt | March 11, 2009 at 03:12 AM
While you have to have enough implemented to make it worth people's time to participate, we've actually been pretty liberal about the state of the builds we use for pokeathon testing. We've had the app crash during the test (once about five minutes after starting!), and then started a discussion about what information we'd like to persist about the crash and techniques for figuring out what happened as a group. Pretty interesting.
Another benefit of the rough sessions was that they were (naturally) earlier in the release cycle, so the participants in later sessions were able to both see the progress and verify for themselves that they couldn't cause the same trouble after a fix. I believe the openness helped build credibility for us as a group that listens to and acts on feedback.
Posted by: Klobetime | March 12, 2009 at 10:05 PM