March 11, 2009

Comments

Mike Lunt

Nice post. I assume you want a certain level of quality in the product before attempting the group test to avoid overlap with known problems.

Klobetime

While you have to have enough implemented to make it worth people's time to participate, we've actually been pretty liberal about what we use for the pokeathon testing. We've had the app crash during a session (once about five minutes after starting!), which led to a group discussion about what information we'd want to persist about the crash and techniques for figuring out what happened. Pretty interesting.

Another benefit of the rough sessions was that they (naturally) came earlier in the release cycle, so participants in later sessions were able both to see the progress and to verify for themselves that they couldn't cause the same trouble after a fix. I believe that openness helped build credibility for us as a group that listens to and acts on feedback.
