
How it works

Product Feedback Loop Audit

The following summarizes the core characteristics of the product feedback loop that we assess during an audit to determine the biggest improvement opportunities.
1
Plan
Define the One Metric That Matters (OMTM) that will drive meaningful progress for the organization. Identify and capture Risky Assumptions: the untested, good-faith beliefs about problems, solutions and implementation details around improving this OMTM. Create a sprint plan matching these hypotheses with tactics to drive meaningful improvements.
One Metric That Matters (OMTM)
  • North Star metric alignment
  • Business goals translated into actionable, focused hypotheses and related tactics
  • Institutional knowledge & risky assumptions in goal setting
Go in depth from the beginning
  • Use of past work velocity / story points in realistic bandwidth scoping
  • Clarity of measurement and learning integration with build
  • Input representation for product stakeholders
  • Handoff and review processes between product strategy and design, as well as design and engineering
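As an illustration only (not part of the audit deliverables, and with entirely hypothetical names and values), a plan's OMTM, risky assumptions, and matching tactics can be captured in a simple structure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RiskyAssumption:
    """An untested, good-faith belief about a problem, solution, or implementation detail."""
    belief: str
    tactic: str                      # sprint tactic that will test this belief
    accepted: Optional[bool] = None  # stays None until the Learn phase

@dataclass
class SprintPlan:
    omtm: str                        # the One Metric That Matters
    assumptions: List[RiskyAssumption] = field(default_factory=list)

    def untested(self) -> List[RiskyAssumption]:
        """Assumptions the sprint still needs to gather data on."""
        return [a for a in self.assumptions if a.accepted is None]

# Hypothetical example:
plan = SprintPlan(
    omtm="weekly activated signups",
    assumptions=[RiskyAssumption(
        belief="Users drop off because onboarding takes too long",
        tactic="Cut onboarding from five steps to two",
    )],
)
print(len(plan.untested()))  # → 1
```

Pairing each assumption with exactly one tactic keeps the sprint scoped to what can actually be tested.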
2
Build
Execute a one- to four-week sprint that creates the ability to gather actionable data for testing the risky assumptions about how to improve the plan's OMTM. Simple is best: the less code, the better. Where there is code, it is readable, modular, and well-tested.
Effectiveness & Architecture Assessment
  1. Architecture Assessment
  2. Code Quality Assessment
  3. Developer Experience Assessment
Configuration and Deployment:
Setting up development, testing, and production application environments.
Usage and Effectiveness of QA and UAT:
Assessing quality assurance and user acceptance testing.
3
Measure
Confirm that instrumentation for data collection is correct and that the data is stored for efficient analysis. Monitor the pace of measurement to ensure enough data is collected over an appropriate period of time to provide real insight into the risky assumptions that drove the plan into action. Collect just enough data: too much invites time-related external factors (exogenous variables) and analysis paralysis.
Data
Measurement Monitoring Process
  • Data Quality Monitoring: Ensuring high-quality data collection.
  • Collection Pace Tracking: Monitoring data collection pace.
  • Transparency for Stakeholders: Ensuring accessibility of measurement systems for product stakeholders.
Application Monitoring & Instrumentation
Instrumentation of Data Capture: Ensuring efficient data collection.
Pipelines and Transformations: Monitoring data flow and transformations.
Depth of Logging Systems: Assessing the depth of aggregate and individual user events logging.
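To make "just enough data" concrete, here is a sketch (a generic textbook approximation, not Lightstrike tooling; all input numbers are hypothetical) of sizing a measurement window with the standard two-proportion sample-size formula at 95% confidence and 80% power:

```python
import math

def sample_size_per_group(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation sample size per group for detecting a lift
    from p_base to p_target (95% confidence, 80% power by default)."""
    p_bar = (p_base + p_target) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_target - p_base) ** 2)

# Hypothetical inputs: 10% baseline conversion, hoping to detect a lift
# to 12%, with roughly 500 eligible users per day.
n = sample_size_per_group(0.10, 0.12)
days = math.ceil(2 * n / 500)
print(n, days)  # per-group sample size, then days of collection needed
```

If the implied window stretches past a few weeks, the effect being chased is likely too small for the sprint, which is exactly the exogenous-variable risk noted above.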
4
Learn
Analyze and discuss how the OMTM changed, using the measurements to accept or reject the plan's Risky Assumptions. Run a retrospective to identify team patterns to start, stop, and continue. Assess the continued importance of the OMTM: whether it should be re-elected for the next plan or new candidates should be considered.
Analysis & Reporting
Mapping Measurements to Risky Assumptions:
Assessing effectiveness in mapping specific measurements to risky assumptions.
Qualitative and Quantitative Analysis:
Evaluating the application of qualitative and statistically-significant quantitative analysis.
Continuous Improvement & Knowledge Management
Capturing Start, Stop, Continue Patterns:
Evaluating how well the loop retrospective process captures start, stop, and continue patterns.
Recording and Disseminating Institutional Knowledge:
Assessing the recording, dissemination, and application of new institutional knowledge gained from loop learnings.
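The statistically-significant quantitative step can be illustrated with a minimal two-proportion z-test (a generic textbook formula, not Lightstrike's actual method; the conversion counts are hypothetical):

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Hypothetical sprint results: control converted 48/1000, variant 70/1000.
p = two_proportion_p_value(48, 1000, 70, 1000)
accept_assumption = p < 0.05  # accept the risky assumption only if significant
print(round(p, 3), accept_assumption)
```

A pre-agreed threshold like this keeps the accept/reject discussion grounded in the measurements rather than in post-hoc storytelling.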

Let's Build Together

Have questions about the next steps in your growth? Lightstrike has the answers.
© 2024 Lightstrike, LLC. All Rights Reserved.