Searching for answers in the telecommunication industry
In the first part of this series, we asked what success actually is (an individually defined and deliberately pursued result) and how it can be made measurable in the software world (via causal goal specifications). In conclusion, we noted that events that did not happen cannot be measured – which is fundamentally logical, but still frustrating. In today’s article, I would like to focus on the productive aspect: how can meaningful key indicators be found that deliver good and consistent data for measuring success?
Good and bad data
As already illustrated in the first part of the series, it is important to formulate “simple” goals and to find unambiguous (i.e., not misinterpretable) key performance indicators (KPIs). Good – meaning countable – KPIs fulfill the following criteria:
- low complexity (e.g., clicks instead of the result of a long chain of events)
- no dependencies or situational distinctions
- little room for interpretation (e.g., numeric values instead of subjective categories: “2” instead of “good”)
- no unnecessary ballast (see “Product lifecycle”)
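As a minimal sketch of these criteria, consider a hypothetical rating KPI (the values below are illustrative, not from a real product): storing numbers instead of subjective categories leaves no room for interpretation when the data is aggregated later.

```python
# Numeric values ("2") instead of subjective categories ("good"):
# a single, low-complexity aggregate with no situational dependencies.
ratings = [2, 1, 3, 2, 2]  # e.g. "2" instead of "good"

average_rating = sum(ratings) / len(ratings)
print(average_rating)  # 2.0
```

With free-text categories like “good” or “okay”, even computing this average would already require an interpretation step.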
Software is alive, and depending on its individual characteristics and release cycles, this can mean frequent adjustments to the scope and presentation of content. Features are extended, improved, or removed; user interfaces (UIs) are modernised, supplemented, and made individually configurable. And the KPIs? Ideally, they are aligned with these changes and deliver comparable numbers over the course of the lifecycle. This has the advantage that the progress of the KPIs can be compared across several releases and that, as a result, statements about reaching our goal (i.e., our success) can be made.
However, keep in mind that such statements are only valid as long as the measured area has been adapted within a certain scope. After radical changes, such a comparison is no longer legitimate; the old KPI simply ceases to exist and is replaced by a new one. How large the adapted scope can be while still keeping the KPIs alive depends entirely on the definition of the particular value: the more precisely a data point is defined, the harder it is to carry it forward when changes occur. This creates a certain paradox around the idea of defining values as unambiguously as possible: a more “open” definition can, of course, be adapted to a new situation more easily than a very restrictive one.
Here, too, the solution lies in the goals: if the goal does not change during the product lifecycle, KPIs that are closely tied to the goal can be used continuously. For this reason, good KPIs are those that do not measure a (potentially volatile) UI element, but a state that is relevant to the goal (e.g., a triggered order).
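The difference between a UI-bound and a goal-bound KPI can be sketched with a hypothetical event log (the event names are illustrative, not from a real product):

```python
from collections import Counter

# Hypothetical event log of user interactions and state changes.
events = [
    {"type": "button_click", "target": "checkout_v2"},  # UI-bound: breaks on redesign
    {"type": "order_completed", "channel": "web"},      # goal-bound: survives UI changes
    {"type": "order_completed", "channel": "app"},
]

# Counting the goal-relevant state ("a triggered order") instead of clicks on a
# particular button keeps the KPI comparable even after the UI changes radically.
orders = Counter(e["type"] for e in events)["order_completed"]
print(orders)  # 2
```

If the checkout button is moved, renamed, or removed in the next release, the click-based KPI dies with it, while the order count remains valid.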
Design follows function?
Sadly, the high significance of sales figures leads, again and again, to whole releases being oriented towards how a certain KPI can be (artificially) boosted. This, however, only produces short-term success and often comes at the expense of long-term, stable results. As an (admittedly extreme, but striking) example, it would be easy to increase the number of button clicks simply by displaying the button more often. By chance alone, the number of clicks would rise. The quality of the software, however, would not increase to the same extent. On the contrary: actual users are likely annoyed by the repeated display of the button and will eventually stop using the software. After an initial peak, the click rate will decline.
Examples from practice
In the following, I would like to introduce a few generic KPIs that have proven stable in practice and can therefore be used for defining a goal:
- Number of downloads of the software
This is a relatively “banal” KPI that can be measured rather well (apart from slight fuzziness, e.g., due to cancelled downloads). It says nothing about whether the user actually uses the software, but it at least indicates that the user was interested enough to download it. Exactly this interest of the end consumer is what many of our ISP clients view as success, because the first hurdle (how does the user even obtain the software?) is thereby already cleared. Of course, this KPI is also influenced by how well the download is promoted.
- Use of the software
This KPI may seem like one of the most important at first glance, but in practice it is hard to pin down. What does “use” even mean here? The expectations of different ISPs diverge significantly on this question, which is why we at mquadr.at have opted for a flexible system: we differentiate between the active “presence” of the software on the one hand (e.g., it runs in the background on the customer’s PC and communicates with the user automatically in case of a problem), and actual invocations of functions and tools on the other. For the latter it is irrelevant whether the user started them (e.g., by clicking) or the software became active automatically (e.g., due to a recognised error or another defined trigger); the result is the same: the software has been used for a specific purpose and thus constitutes a success.
Another option for measuring success is to define goals via milestones. To do so, certain “points” within the software are defined that the user is supposed to reach. This principle is common for web applications, where specifically defined URLs are evaluated as “milestones” (e.g., a URL that is shown after completing an order). In our software, these can instead be certain screens that the customer is supposed to reach.
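The milestone idea can be sketched in a few lines; the URLs below are hypothetical placeholders for whatever pages or screens a concrete goal definition would name:

```python
# Hypothetical milestone definition: URLs (or screen IDs) the user should reach.
MILESTONES = {"/cart", "/checkout", "/order-confirmed"}

def milestones_reached(visited):
    """Return which of the defined milestones a session actually hit."""
    return MILESTONES & set(visited)

session = ["/home", "/product/42", "/cart", "/checkout", "/order-confirmed"]
print(sorted(milestones_reached(session)))  # ['/cart', '/checkout', '/order-confirmed']
```

Because the milestone is a reached state rather than a specific click, the same definition keeps working when the path leading to it changes.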
- Exclusion principle
The aforementioned KPIs all assume that the user handles the software just as its creator intended, following a predefined flow, for example. Practice, however, teaches us that this is often not the case: users sometimes behave unpredictably, are distracted or interrupted by environmental factors, and are thus influenced by things the software cannot account for (even though good product design takes common scenarios into account). Here, the exclusion principle can help: success is when the customer has used the software and a certain scenario has not occurred, e.g., error messages, references to the support hotline, etc. The first article of this blog series pointed out that events that did not happen cannot be measured; in this case, however, it is about events that did not happen within the software – and those can very well be measured.
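A minimal sketch of the exclusion principle, assuming hypothetical event names for the failure scenarios:

```python
# Hypothetical "failure" events whose absence defines success.
FAILURE_EVENTS = frozenset({"error_shown", "support_hotline_referenced"})

def session_successful(session_events):
    """Exclusion principle: the software was used AND no failure event occurred."""
    used = len(session_events) > 0
    failed = any(e in FAILURE_EVENTS for e in session_events)
    return used and not failed

print(session_successful(["tool_started", "scan_finished"]))  # True
print(session_successful(["tool_started", "error_shown"]))    # False
print(session_successful([]))                                 # False: never used at all
```

Note the first condition: a session with no events at all is not a success, because the principle counts the *absence of failures during use*, not the absence of use.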
Especially this last approach, the exclusion principle, has proven reliable for defining goals in the software world, which is why we and our ISP customers like to use it. How the individual KPIs are employed, however, remains a matter of the respective goal definition.
Unfortunately, nowadays it is not enough to have good ideas; one also has to prove that they are good. It is important not to be seduced by short-term (artificial) data points, but to find KPIs that are consistent in the long term and to keep a realistic outlook on the definition of the goal. Different approaches can be used for this, each with its own advantages and disadvantages in different scenarios. When defining one's own goal, the KPIs often have to be weighed against each other: some indicators can be measured precisely but are not meaningful, while others may be highly significant but can only be measured inaccurately.