We all want to create a quality product. Every team, from Executives to Sales, Marketing, Support, and Development, talks about the quality of the product. But the truth is, no matter how much teams speak about quality, chances are your company doesn’t have a clear or unified definition of what quality is.
Even worse, companies usually don’t have a clear approach to measure and continuously improve the quality of their product. If you ask ten people in the organization to define quality, you’ll end up with ten different answers. And that’s one reason why quality is so elusive and hard to implement.
For example, the Development team might define quality relative to the number of bugs or stability. Sales might describe it as how easy the product is to sell. And the Executive team might define it as how well the product supports company objectives.
There’s also the view of the customer/user. For them, a quality product might be simply something that meets their needs. All different definitions, all very valid, and as a business-savvy Product Manager, you need to make sense of them all.
As a Product Manager, it is your responsibility to create a quality product. The first step is to agree on a company-wide definition and then agree on the metrics to enforce that quality.
The key here is to focus on creating metrics. Having metrics and a baseline to measure against gives you the confidence to factually say you have a quality product based on your company’s definition. And having clear definitions and targets makes everything easier for you because now you know how to measure success.
Also, keep in mind that it is more efficient to define and agree on these metrics early on during the innovation journey.
Now you can focus on writing detailed requirements that incorporate quality as a critical element of every new feature. Detailed doesn’t mean you should go back to writing long Product Requirement Documents (PRDs). It means that your requirements (or stories, or epics, or whatever format you use) should have clear acceptance criteria and should include metrics you can evaluate before launch.
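As a purely illustrative sketch (not a prescribed format), here is what a story with measurable acceptance criteria could look like if captured as structured data; the field names, metrics, and thresholds below are hypothetical:

```python
# Hypothetical example: a story whose acceptance criteria carry measurable
# targets that can be evaluated before launch. Names and numbers are made up.
story = {
    "title": "Export report as PDF",
    "acceptance_criteria": [
        "User can export any report from the report detail page",
        "Exported PDF matches the approved visual design",
    ],
    # Each metric has a bound ("max" or "min") and, once evaluated, a measured value.
    "quality_metrics": [
        {"name": "export time (seconds)", "max": 5.0, "measured": None},
        {"name": "task success rate (%)", "min": 90.0, "measured": None},
    ],
}

def metric_passes(metric: dict) -> bool:
    """A metric passes only if it has been measured and meets its bound."""
    value = metric["measured"]
    if value is None:
        return False
    if "max" in metric and value > metric["max"]:
        return False
    if "min" in metric and value < metric["min"]:
        return False
    return True

def ready_to_launch(story: dict) -> bool:
    """The story is launch-ready only when every quality metric passes."""
    return all(metric_passes(m) for m in story["quality_metrics"])
```

The exact format matters much less than the habit: every story states up front how its quality will be measured, and nothing ships until those measurements exist.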
Related post: Internet of Things: A Primer for Product Managers.
To measure the quality of your product, you need to answer these questions first.
Failure to meet any of these areas would imply that your product does not have the right level of quality (as defined by you and the company), and therefore, it is not ready to be launched to market. This evaluation is not a one-time thing, though. It should be incorporated into your sprints and releases. It needs to become the way you approach building software.
How will you measure the quality of the proposed solution?
Here you are trying to answer the question: will this solution meet users’ definition of quality and your company’s UX definitions of quality? Before the development team builds the new features, these validations should be done using mockups, other research, or “Lean” techniques.
- You have run user testing to validate that the proposed features solve the user’s problem(s).
- User testing shows that your proposed solution is intuitive and easy to use.
- The proposed user design adheres to the company’s interaction design and visual design patterns.
- Onboarding functionality has been designed for all new features.
How will you measure the quality of implementation?
These items answer the question: did the development team build what was defined in the requirements and design specs? Metrics revolve around functional tests and manual demonstrations showing that every story/task was implemented according to specification.
- The functionality of all features matches the requirements (front-end and back-end).
- The implementation of the UX design in all form factors (web, tablet, phone, etc.) matches what was defined by the design team and approved by the Product Manager.
How will you measure performance and stability?
These items answer the typical questions that the development team considers “QA.”
- No severity 1 bugs are present (or whatever categorization you use).
- The product performs according to the established metrics (page load time, number of concurrent sessions, etc.); a minimal automated check is sketched after this list.
- The product is stable and doesn’t crash or hang.
- New features didn’t break any of the existing functionality.
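To show how the performance bullet above can become something a team runs before every release, here is a minimal Python sketch; it measures server response time as a rough stand-in for page load time, and the URLs and thresholds are placeholders, not real endpoints:

```python
# Hypothetical pre-release check: compare response times for key pages
# against the thresholds the team agreed on. URLs and limits are placeholders.
import time
import urllib.request

THRESHOLDS_SECONDS = {
    "https://example.com/": 1.0,
    "https://example.com/dashboard": 2.0,
}

def response_time(url: str) -> float:
    """Time one request/response round trip, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def check_performance() -> bool:
    """Print a PASS/FAIL line per page and return True only if all pass."""
    all_ok = True
    for url, limit in THRESHOLDS_SECONDS.items():
        elapsed = response_time(url)
        ok = elapsed <= limit
        print(f"{'PASS' if ok else 'FAIL'} {url}: {elapsed:.2f}s (limit {limit:.2f}s)")
        all_ok = all_ok and ok
    return all_ok

if __name__ == "__main__":
    raise SystemExit(0 if check_performance() else 1)
```

A real pipeline would use a proper load-testing or browser-timing tool, but even a crude check like this makes the agreed thresholds part of every release instead of an afterthought.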
Related post: How to Protect Your IoT Product from Hackers.
How will you measure the quality of your whole product offering?
- Pre-sales material and collateral correctly describe the problem your product solves and the functionality defined by the Product Manager.
- The new features are included in a demo for the Sales team to use.
- The new features include the proper hooks/functionality required by the support team.
This partial list gives you an idea of the quality areas you need to consider, but you and your team will need to determine which of these areas make sense for your company. You’ll need to agree on:
- A definition of quality
- The metrics you will use
- The process to evaluate quality before launching every new release
The Bottom Line
This conversation should revolve around outcomes and the consequences of missing the metrics, not tactical details and tools. It is easy to jump into the minutiae and focus on bugs or unit testing. As you start this process, make sure you focus on the big picture first. Start from the business goals, and then implement the metrics that make sense for your company in its current state. No more and no less.
Most companies, especially smaller ones, probably haven’t thought about their definition of quality in that much detail. So if you bring this discussion to the table, you’ll probably be ahead of many companies out there.
So, how do you measure the quality of your product? Leave a comment below to keep the conversation going.
Comments
Thank you for the explanation. One question: what should the measure of quality be for an electronics product?
Author: There is no single way to measure it. I recommend looking at the criteria included in the post and discussing with your team how you plan to measure them. Set a baseline with measurable KPIs and then start tracking continuous improvement against that baseline. I hope that helps.
Good post, thanks for putting it up. One of the dimensions I see companies wrestle with is getting the balance right between shipping software that is ‘ready enough’ as an active or late beta, released as a Minimum Viable Product to start getting traction, and running the risk of user abandonment or sluggish uptake if it’s not quite ‘ready enough’ to pass user experience criteria. I’ve seen it happen for small and large scale systems, usually as a user. It begs the questions of when the MVP is viable in the customer’s experience, and what sort of quality criteria and Quality of Service measures are used when defining the product’s QC.
It’s easy as a Product Manager or startup to make the jump to release early when under cash-flow burn pressure, but with significant risk if it’s done too early. In the case of regulated systems where there is single-supplier dependency and no competition (e.g., toll road payments), these often form the worst UX journeys, as there is little pressure to provide a decent UX beyond minimal service.
I do find that having this sort of stuff defined up front, using things like Product Descriptions in PRINCE2 PMM, forces you to think in clear terms about what the QC needs to be, which can then inform a CX or UX evaluation when joining all the product elements together into a customer’s journey. I would be interested to hear stories of how people visualise and conceptualise the user journey through their interaction with the product and its value chain. I have my own ways of doing it, with a sharp focus on process cycle times, process outputs, and humane error handling for the exceptions, but it would be good to see other responses on the topic.
Wow! You have hit on an extremely important topic. Discussions could go on for days and we would only be scratching the surface. That said, I would like to share some thoughts on the topic.
Measuring Quality
I would like to introduce one very simple yet powerful measure of quality; it works well when there is a large number of discrete products. This is nothing new, so I cannot take credit for it; the method is commonly known as “Stick Rate.” The Stick Rate is defined here as (the number of product returns or RMAs) / (the number of product shipments). This holistic approach provides an overall measure of product quality. The beauty of this method is that it is easy to track and aggregate large volumes of data. The disadvantage is that it does not give much insight into the specific reasons why a product was returned, and there can be considerable lag in the data collection process. However, as the product continues to move and find its way to end users, the trends become valid, and you can rest assured that if there are problems, the customers will make note of it. Regardless, the Stick Rate is a general indication of product quality. Careful diagnosis will be required to determine the actual product issues and failures. Alas, we are not looking to determine failures but for a measure of quality. Naturally this leads one to the question: what is quality?
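As a quick, hypothetical illustration of that arithmetic (the numbers below are made up: 120 returns against 10,000 shipments):

```python
# Stick Rate as defined above: (product returns or RMAs) / (product shipments).
def stick_rate(returns: int, shipments: int) -> float:
    return returns / shipments

# Hypothetical quarter: 10,000 units shipped, 120 units returned.
print(f"Stick Rate: {stick_rate(120, 10_000):.2%}")  # prints "Stick Rate: 1.20%"
```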
Defining Quality
Sure, we understand the essence of quality, but practically speaking, what is it? I submit that what we as product managers truly strive to understand is the perception our customers have of our products. Ultimately, it’s the customer who decides the quality of a product. For example, let’s assume we have a product which has met all of its design and market criteria and passes every unit quality test. Now, a customer orders this product and expects to receive it in 3 business days, but it shows up one week late. The upset customer may very well place a quality stigma on this product when indeed the product functions perfectly and as intended. Or, what if the customer ordered blue and the delivered product was orange? Nothing is wrong with the product quality per se, but the quality of our order fulfillment process reflects negatively on the product; the customer is displeased and this time sends it back (RMA), presumably for a blue one. Or maybe there is absolutely nothing wrong with the product, but it does not live up to the expectations of the customer. Maybe the product documentation is insufficient, or maybe the end user is just ill-informed.
The physical product is just one link in the product chain spanning the distance from the designer(s) through the factory to the customer. This chain consists of all the systems, operations, and deliverables that touch or support the product: design, materials, fit and finish, documentation, applications support, sales support, shipping, product performance, warranty, price, user interface, out-of-box experience, marketing/brand image, ease of use, functionality… all of these collectively come into play when judgments or perceptions regarding quality are made.
In the end, the entire product support infrastructure must be of sufficient quality to maintain a quality product, and the marks given will only be as good as the weakest link.
Thank you for such an elaborate and comprehensive comment, Joseph! I agree with you that the overall product support infrastructure should be considered as part of the quality process. Otherwise, everything else falls apart.
Thanks for reading,
Daniel