In the mid-1990s, organizations began building software designed to track different metrics for their employees. About an hour after any one of those systems was implemented, at least one employee had figured out a way to game it.

I was not one of those employees. I was, by contrast, a vehement rule-follower who was aghast that a fellow employee (we’ll call him Jeremy) would return and re-sell warranties at the end of a shift. Jeremy had noticed that the system counted every new sale toward our quota and didn’t subtract returns from that number, so he had found a way to meet our excessive sales metrics without putting in the work (or, in some cases, by putting in even more).

This particular example required some human interference, but it nevertheless illustrates a truth about the way we collect data, one that matters even more if you want to use that data to benchmark outcomes-based service: you need to plan against bias when you’re developing your technical criteria. It’s very easy to tilt your reporting to make it more favorable toward a specific service-level agreement, for instance, but doing so has major consequences.

Let’s think about the repercussions of the example above. Sure, Jeremy hit his number, but the system now misrepresents the total number of warranties sold. Research has shown that increased warranty usage ties directly to customer loyalty, which means any forecasting built on that figure is now inaccurate. And, of course, the percentage of warranties sold on that date is now inflated, so when the next year comes around and the team can’t beat last year’s number, everyone looks worse than they actually are.

This is something that Mike Gosling from Cubic mentioned in this week’s excellent episode of the podcast (rate, comment, and subscribe). In it, he talks about setting up systems because, politically, they’re favorable for your company. While that can often be a good consideration, especially when developing SLAs, he privately warned against setting up data collection systems engineered to produce a politically favorable result. In the example above, an employee figured out how to rig the system. In many cases, though, the bug is actually a feature of the software.

Here’s a simple example of how this could happen in practice even without a Jeremy: Imagine you have two facilities that you manage, one that requires 50 repairs a year, and one that requires 10. The larger facility has an 80% SLA compliance rate. The smaller one has a 50% rate. Organizationally, you’re shooting for 75%. How will you calculate the performance of these two locations?

If you take the numbers in aggregate (45 of 60 jobs met SLA requirements), then hooray, you’ve hit your quota. Enjoy your bonus. Measuring that way, though favorable, papers over the serious regional, logistical, or workforce challenges that may exist at one specific jobsite. You’re not getting an accurate picture of your business, and you’re doing both your business and your customer a disservice.
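The two-site math above is worth making concrete. Here’s a minimal sketch (the site names and data structure are hypothetical, just to illustrate the arithmetic): the aggregate number clears the 75% bar, while the per-site view surfaces the problem the aggregate hides.

```python
# Figures from the example above; site names are made up for illustration.
sites = {
    "large_facility": {"jobs": 50, "met_sla": 40},  # 80% compliance
    "small_facility": {"jobs": 10, "met_sla": 5},   # 50% compliance
}

TARGET = 0.75  # organizational goal: 75%

# Aggregate view: pool every job across sites before dividing.
total_jobs = sum(s["jobs"] for s in sites.values())
total_met = sum(s["met_sla"] for s in sites.values())
aggregate_rate = total_met / total_jobs  # 45 / 60 = 0.75 — looks fine
print(f"Aggregate: {aggregate_rate:.0%} (target {TARGET:.0%})")

# Per-site view: compute each site's rate separately and flag laggards.
for name, s in sites.items():
    rate = s["met_sla"] / s["jobs"]
    status = "OK" if rate >= TARGET else "BELOW TARGET"
    print(f"{name}: {rate:.0%} {status}")
```

Same underlying data, two very different stories: the aggregate says bonus, the breakdown says one of your two sites is failing half of its jobs.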

This oversight is easy to spot with fewer than a hundred jobs and only two job sites, but for an actual business, which has to layer in additional complexity and handle hundreds, if not thousands, of jobs a year, it’s easy to see how tempting it becomes to let biased thinking bend the numbers to fit your narrative.

So what do you do? Here are a few things that we’ve seen businesses find success with.

From many data sources, build one source of truth. I rarely stop talking about the importance of having a single source of truth for your business, one that runs through service, sales, operations, and so on. That truth, though, is only as powerful as the data powering it. “Garbage in, garbage out” is actually an oversimplification here: the data needs to come from good sources, yes, but it also needs to come from diverse ones. And if some of your customers aren’t as sophisticated as others, your business may have a blind spot.

Audit your processes on a rolling basis. Take any one technician, account, site, or day, and see how it anecdotally matches up to your numbers. Is it way off? Are technicians logging their appointments properly? I spoke to a guy not too long ago who discovered that an entire branch was logging appointments at the end of the day, rather than starting and ending them in real-time. Any time you turn over a rock it’s incredible to see what sort of worms wriggle out.
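That end-of-day logging pattern is actually cheap to spot in an audit. Here’s one hedged sketch (the record format, field names, and thresholds are all assumptions for illustration): if a technician logs several appointments within a few minutes of each other, they’re probably back-filling the day rather than logging in real time.

```python
from datetime import datetime

# Hypothetical appointment log: (technician, logged-at timestamp).
records = [
    ("tech_a", datetime(2024, 5, 6, 9, 14)),
    ("tech_a", datetime(2024, 5, 6, 13, 2)),
    ("tech_b", datetime(2024, 5, 6, 16, 58)),  # three entries in a
    ("tech_b", datetime(2024, 5, 6, 16, 59)),  # two-minute burst at
    ("tech_b", datetime(2024, 5, 6, 17, 0)),   # end of day: suspicious
]

def batch_logging_suspects(records, window_minutes=5, threshold=3):
    """Flag technicians who logged `threshold` or more appointments
    inside one `window_minutes` window — a hint they're back-filling
    entries in a batch instead of logging them as they happen."""
    by_tech = {}
    for tech, ts in records:
        by_tech.setdefault(tech, []).append(ts)
    suspects = set()
    for tech, stamps in by_tech.items():
        stamps.sort()
        for i in range(len(stamps)):
            # Count how many entries fall within the window starting here.
            j = i
            while (j < len(stamps)
                   and (stamps[j] - stamps[i]).total_seconds() <= window_minutes * 60):
                j += 1
            if j - i >= threshold:
                suspects.add(tech)
    return suspects

print(batch_logging_suspects(records))  # flags tech_b only
```

A spot check like this won’t prove anything on its own, but it tells you which rocks to turn over first.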

Rethink your workflow. Sometimes it’s hard to see that we’re doing things simply because that’s the way that we’ve always done them. Oftentimes, it takes a head-cracker to come in and shake things up. Don’t be afraid to look outside your division—or your business—for the talent to think about the data you’re collecting in new and different ways.

Even if you do all of this, there’s still the potential for drift, and there may always be a Jeremy skulking about, ready to step on the scales. (The company did eventually patch out his exploit, but it took nearly a year.) Because of them, improvement is never done. We are all tasked with making our businesses a little better today than they were yesterday, and that will never change, but it’s a lot easier to do if you start out on the right foot.

Tom Paquin
Author

Contributor, Future of Field Service