In 2012 I was introduced to the CEO of the mortgage division of one of the ten largest banks in the US. For this story, I’ll call him Tom. Eighteen months prior, the division had launched what was considered a strategically vital data warehouse and reporting initiative under the auspices of a team of PhD economists. Their plan was to use an agile development framework to create a series of reports to help them better manage the business. To date the project had cost just over two million dollars. However, due to a variety of issues, the team had failed to produce a single deliverable. Internal concerns had been growing for some time, but after spending two million dollars with nothing to show for it, Tom got involved to understand the problem and, hopefully, rehabilitate the project. Against this backdrop we sat down to discuss the situation. What follows is a summary of our conversation.
After introductions and Tom’s recap of the project’s history, I began asking questions to get a better understanding of what had and had not been done. I started at the top. “Did you figure out the business questions you were trying to answer?” Somewhat puzzled, he replied, “No. Nobody has ever asked me that before. Why’s that important?” “Well,” I said, “the only reason to ever write a report is to answer a business question. If you don’t have a question to answer, there’s no need to write a report. Laying out your questions—in plain English that everyone can understand and agree to—is the first thing you should do.”
Still perplexed, Tom said, “Our PhDs said the first thing we needed to do was to get all the data from the entire organization cleaned up and pulled together in one place.” To which I replied, “No, you need to get all of the data you need to answer your questions cleaned up and pulled together in one place. Why waste time collecting and cleansing a bunch of data that you’re never going to use? Now, if you’re working on a master data management or governance initiative and have unlimited budget and time to scour and verify all the data your organization has ever produced because you think that one day you might need it and just want to be ready, then maybe collecting everything makes sense. But I thought you said this initiative was time sensitive and you needed results quickly. Did I miss something?” “No,” Tom said, “you’re right—what you’re saying makes sense.”
Undaunted, I tried another angle. “Did you identify the metrics you were trying to analyze?” Again the answer came back, “No, we didn’t. Why do you ask?” I explained, “Business is about execution. Whether you’re processing a new mortgage application or handling a customer inquiry in a call center, those teams are navigating some process to move a ‘ball’ from Point A to Point B as fast and efficiently as they can to hopefully generate a profit—or at least minimize expense. You can think of it like sending boxes down a conveyor belt from beginning to end. Although there are thousands of variations on these themes, there are really only three types of metrics that apply to any business process: volume, velocity, and efficiency. How much do you move down the line? How fast does it transit the system? And how efficiently does it transit the system? If you know the metrics that you’re trying to analyze, you can figure out the data you need to calculate those metrics—which is another way of narrowing down the data you need to collect and cleanse. But equally important, once you know your metrics, you can define your criteria for evaluating those metrics.”
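To make that concrete, here is a minimal sketch of those three metric types in Python. Every name and field below is an illustrative assumption on my part, not an artifact of Tom’s actual project; the point is simply that each record you bother to collect should feed one of these three calculations.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProcessRecord:
    """One 'box' that moved down the conveyor belt, e.g. a mortgage application."""
    entered: datetime    # when the item entered the process (Point A)
    completed: datetime  # when it exited the process (Point B)
    cost: float          # total handling cost incurred in transit

def volume(records: list[ProcessRecord]) -> int:
    """Volume: how much moved down the line."""
    return len(records)

def velocity(records: list[ProcessRecord]) -> float:
    """Velocity: average days in transit from Point A to Point B."""
    days = [(r.completed - r.entered).days for r in records]
    return sum(days) / len(days)

def efficiency(records: list[ProcessRecord]) -> float:
    """Efficiency: average cost to move one item through the system."""
    return sum(r.cost for r in records) / len(records)
```

Everything else—conversion rates, cycle times, cost per loan—is one of the thousands of variations on these three functions.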
Tom stopped me at that point. “What do you mean by defining criteria for evaluating metrics?” I said, “Look, there are only three basic questions that management ever needs to ask about any business performance measure. The first is—for that measure—how and where do you draw the line between what you consider satisfactory and unsatisfactory performance? Think of it as organizational triage—you figure out who’s in the worst shape so you can either stabilize or save them, or, being extremely cold-blooded, you write them off as dead. Once you know where you’re going to draw that line and what you consider to be adequate versus inadequate performance, then you go about the process of assigning all of your process participants to one of those two groups. Essentially, you’re deciding who or what is ok, and who or what’s not ok. Following the triage analogy, if someone is perfectly healthy, they don’t need your attention. You need to focus on the sick. The last criterion you need to establish is how you’re going to calculate either the cost or the opportunity of a given participant’s performance. If someone missed a production target but the cost of that miss is only a dollar, then who cares? On the other hand, if another participant missed the target by one million dollars, then that’s where you need to focus. You’ll get a lot more bang for your buck by fixing 10% of a million-dollar problem than 100% of a one-dollar issue. Those three questions are what I mean by defining your evaluation criteria.”
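Those three criteria translate almost directly into code. Below is a hedged sketch—the branch data, threshold, and field names are hypothetical examples invented to show the mechanics, not anything from the actual engagement.

```python
def triage(participants, metric, threshold, dollar_impact):
    """Apply the three evaluation criteria: draw the line between
    satisfactory and unsatisfactory, split participants into ok / not-ok,
    and rank the not-ok group by the dollar cost of their shortfall."""
    ok, not_ok = [], []
    for p in participants:
        (ok if metric(p) >= threshold else not_ok).append(p)
    not_ok.sort(key=dollar_impact, reverse=True)  # biggest problem first
    return ok, not_ok

# Hypothetical example: two branches measured on loan close rate.
branches = [
    {"name": "East", "close_rate": 0.79, "shortfall_usd": 1.0},
    {"name": "West", "close_rate": 0.60, "shortfall_usd": 1_000_000.0},
]
ok, needs_attention = triage(
    branches,
    metric=lambda b: b["close_rate"],
    threshold=0.80,  # the line between adequate and inadequate
    dollar_impact=lambda b: b["shortfall_usd"],
)
# needs_attention[0] is West: fixing 10% of its million-dollar problem
# beats fixing 100% of East's one-dollar issue.
```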
Tom was beginning to look a little dejected as he said with a sigh, “We didn’t do any of that.” I actually felt bad asking my next questions—sensing the answers but feeling the need to confirm my suspicions regardless. “Did you guys ever map the horizontal workflows of the processes you wanted to track, or lay out the organizational reporting rollups that you were going to use to manage the business? If you did, that would at least give you a sense of the significant steps along the way and the data necessary to track that activity.” Again the answer came back, “No.”
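For readers unfamiliar with those two artifacts, here is a minimal sketch of what they amount to as data structures. The stage names and org units are hypothetical stand-ins assuming a generic mortgage pipeline; the takeaway is that each workflow step is a measurement point and each rollup edge is an aggregation path.

```python
# A horizontal workflow is just the ordered steps an item passes through;
# each step is a point where you can capture a timestamp for your metrics.
MORTGAGE_WORKFLOW = ["application", "underwriting", "appraisal", "closing"]

# An organizational rollup maps each reporting unit to its parent, so
# branch-level metrics can be aggregated up to region and division.
ROLLUP = {
    "branch_101": "region_east",
    "branch_102": "region_east",
    "region_east": "division_mortgage",
    "division_mortgage": None,  # top of the hierarchy
}

def rollup_path(unit: str) -> list[str]:
    """All units a metric for `unit` rolls up into, bottom to top."""
    path = []
    while unit is not None:
        path.append(unit)
        unit = ROLLUP.get(unit)
    return path

# rollup_path("branch_101") -> ["branch_101", "region_east", "division_mortgage"]
```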
The preceding vignettes are a significant condensation of a conversation that lasted over an hour and a half. I’ve omitted many details to protect the company’s and Tom’s privacy and to highlight the issues that led to the project’s failure.
At the end of what I feared had long since become—for Tom at least—a very dispiriting conversation, I attempted a light-hearted conclusion to highlight the irony of where I felt the project now stood. “So, let me get this straight. You fired a two-million-dollar cruise missile. No one ever defined the target, and now you’re wondering why you didn’t hit it?” Tom looked at me, paused for a moment in obvious distress, and said, “Well, when you put it that way, that’s exactly what we did.” Then in the next breath he said, “When can you come out and walk the team through what you just went through with me?”
Once on site, I fairly quickly discovered that “agile” had been used as a pretext to do whatever the economists wanted to do whenever they wanted to do it. Indeed, the team did not even have a shared understanding of what an agile development process entailed. Within a week we had replaced the original team with software professionals who understood both agile and the execution framework I described above. Roughly one month later the project yielded its first fully functioning deliverables. Under the new regimen it continued to produce results for the next couple of years.
Eighteen months and two million dollars had been wasted on what could have been accomplished in five weeks for $50,000 had the organization taken a few days to ask up front, “What are we trying to do, and what do we need to accomplish to achieve that objective?” The need to define a clear target has been understood since antiquity: “Where there is no vision, the people perish.” The same holds true for big data projects, but few heed that advice. Recent research has shown that 85% of all big data and artificial intelligence projects fail for one basic reason—the data scientists don’t know what they’re looking for. Said differently, they don’t know where the target is—and that makes it really hard to hit.