One of our core strategy offerings was helping CxOs wrap their heads around what is required to transition toward "data-driven growth." As most of our clients fit the category of "mature but agile" software leaders, they had already arrived at two conclusions: 1) they were collecting valuable customer interaction data, and 2) they needed a dedicated enterprise data function to interpret that data and stay competitive. Where these leaders struggled was in translating the insights generated by their data science teams into actionable direction for their customer-facing teams (marketing, sales, success, support, etc.).
The inevitable first solution proposed by the client was to simply expose their capable teams to the generated insights and then compare their effectiveness against the status quo. However, we cautioned against jumping into this process without further analysis. While a simple solution was enticing, as it would shorten the lead time to potential growth, there are a number of considerations that, left unaddressed, could undermine the utility of these insights. Some of the more significant challenges include:
- Team members may not have the skills or resources needed to act on an insight, leaving it unable to actually drive growth.
- Team members may not understand the parameters and constraints of an insight well enough to judge how far its coverage extends; this becomes especially acute when insights are not refreshed on a daily cadence. A related risk is the false assumption that an insight establishes causation rather than correlation.
- Even if an insight is deemed "significant," its predictive strength may be unclear to the teams using it, potentially resulting in over- or under-reliance when they fit their behavior to it.
- No hierarchy may be established among generated insights, or between insights and other signals such as a team member's own assessment or real-time customer response (the sketch following this list illustrates one way to make this precedence explicit).
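One lightweight mitigation for the last three challenges is to ship every insight with explicit metadata rather than as a bare recommendation. Below is a minimal Python sketch of that idea; the `Insight` class, its fields, and the seven-day freshness window are hypothetical illustrations, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Insight:
    """Hypothetical wrapper carrying context alongside a recommendation."""
    recommendation: str          # the action suggested to a customer-facing team
    predictive_strength: float   # e.g., observed lift over baseline, 0.0 to 1.0
    is_causal: bool              # True only if backed by a controlled experiment
    last_refreshed: date         # guards against acting on stale insights
    priority: int                # rank relative to other insights and signals

    def is_actionable(self, max_age_days: int = 7) -> bool:
        """Flag insights too stale for a team that acts on a weekly cadence."""
        return (date.today() - self.last_refreshed).days <= max_age_days
```

Even this small amount of structure forces the questions of strength, causality, freshness, and precedence to be answered before an insight reaches the field.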
To rectify these issues, the safest option is to develop a pilot system, where a clear experiment is defined, tested, and then critically evaluated before being put into practice. But before we recommend doing so, a clear set of principles must be established to help answer potential logistical questions, such as:
- How can we isolate the effect of targeted insights when functions are inherently interrelated (e.g., success <-> support)?
- Should we develop and track a new operational metric that is aligned to the direct effect of the insight, or should we judge all pilots by core metrics such as net retention rate?
- Have we established a clean baseline or control group to evaluate pilot results? Similarly, should we compare insight-driven actions to the status quo or the complete absence of any action?
- Should the completion of an insight-driven action affect future insight generation (i.e., a feedback loop)?
- Should complexity or cost be a factor when evaluating pilot efficacy?
- And perhaps most importantly, how do we protect our current operations while testing new insights, especially when large sample sizes are required to reach significance? (A sketch of one such evaluation follows this list.)
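To make the baseline and sample-size questions concrete, here is a minimal Python sketch of one way a pilot might be evaluated: a two-proportion test of retention between a control group running the status quo and a treatment group acting on the insight, followed by a power calculation for the required group size. The retention rates, group sizes, and thresholds are hypothetical; scipy and statsmodels supply the statistics.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

rng = np.random.default_rng(seed=7)

# Hypothetical pilot data: 1 = account retained, 0 = churned.
control = rng.binomial(1, p=0.80, size=400)    # status-quo playbook
treatment = rng.binomial(1, p=0.84, size=400)  # insight-driven playbook

# Contingency table: rows are groups, columns are retained / churned.
table = np.array([
    [treatment.sum(), len(treatment) - treatment.sum()],
    [control.sum(), len(control) - control.sum()],
])
chi2, p_value, _, _ = stats.chi2_contingency(table)
print(f"observed lift: {treatment.mean() - control.mean():+.3f}, p = {p_value:.3f}")

# How many accounts per group would we need to detect a 4-point lift
# (80% -> 84% retention) with 80% power at alpha = 0.05?
effect = proportion_effectsize(0.84, 0.80)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"required sample size per group: {int(np.ceil(n_per_group))}")
```

The power calculation makes the last question tangible: under these assumptions, detecting even a four-point retention lift requires over 700 accounts per group, which is precisely why protecting current operations during a long-running pilot matters.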
A panacea for the gap between data-driven insights and customer-facing action remains elusive, but guiding our clients through these decision points has made the prospect of building a repeatable, reliable pilot system achievable.