Earlier I described a few approaches for handling a customer crisis. I’d like to share a few incidents I happened to be involved in. As always, there’s the “theory” and then there’s “reality”. So while some of these incidents ended on a positive note, others became what I would politely call “valuable lessons learned”. For obvious reasons, I cannot name the customers or the company I worked for at the time. But I assure you that the events described below did happen, at least according to my own personal memory…
The first incident helped me understand how important it is to empathize with a customer and understand their pain. I was fairly new to my position when I was asked to attend a call with a customer who supposedly had a problem involving a product I was responsible for. It turned out the customer was a medium-sized retail chain that owned several hundred stores. The product I was responsible for was used in conjunction with other products from my company as part of an in-store IT solution. The customer’s IT manager, who was on the call, reported that there seemed to be some sort of interoperability problem that caused the system to “freeze” several times a day. When the problem occurred, it disrupted some in-store operations, including the cash registers. The IT team had to “restart” the failed system, and it resumed normal operations, that is, until the next freeze happened.
Having to “restart” a system a few times a day didn’t sound like a “crisis” to me. After all, I was used to restarting my own PC a few times a day. I didn’t fully understand why I was summoned so urgently to attend the call. But as our support manager asked the customer to reiterate the consequences to us, the picture became much clearer, as did the urgency. Having to manage a few hundred stores with limited IT personnel meant that restarting the system at any given store could take well over an hour. A retail store without functioning cash registers is a dead horse; a very expensive dead horse, for that matter. When the customer’s IT manager described the estimated financial losses caused throughout the chain by this “small interoperability problem”, I was shocked. Solving the problem was definitely urgent; very urgent.
A cross-functional team that involved engineers from all relevant product teams was immediately assembled. After a day or so of intensive troubleshooting, the culprit was identified, and a software patch was provided to the customer. The situation was monitored for another few days, until our support manager was reassured by the customer that the problem hadn’t occurred again and all store operations were fine.
During the follow-up call, the customer’s IT manager sounded extremely relieved. He couldn’t thank us enough for reacting quickly to the situation and allocating all the resources needed to solve the problem ASAP. We re-earned the trust of a fairly large customer, and most likely saved our company from a multi-million-dollar lawsuit.
The lesson learned: What may seem like a trivial problem to you may actually be a huge problem for the customer. It is important to quickly understand the significance and consequences of the problem from the customer’s point of view and assign priority accordingly.
The next incident involved a large European financial services company. They had been using a product I was responsible for, and were generally happy with its functionality. That is, until they purchased a product from another company that was supposed to interact with our product. In theory, everything should have worked just fine. Our product literature stated that we supported an industry standard protocol that facilitates communication with such 3rd party products. The other company stated that they supported the same industry standard protocol. So everything should have been just fine, right? Well… not quite.
When the customer deployed the two products together, they failed to interoperate and disrupted some of the customer’s IT operations. Since our product had been installed at the customer’s facility for quite some time, they assumed the problem was with the 3rd party product they had recently added. The other company did some troubleshooting and claimed that the source of the problem was some protocol functions my company hadn’t fully implemented. While we had tested protocol interoperability with a few other 3rd party products, we hadn’t tested it with that particular one.
Needless to say, the customer wasn’t very happy when they turned to us. They claimed that since our product literature stated that we supported the standard protocol, we had to resolve the problem ASAP. We tried our best to explain that the standard practice is to test product interoperability in a lab before deploying products into a production environment, and that if the customer had consulted us before purchasing, let alone deploying, the 3rd party product, we would have advised them to conduct such testing before making a decision. Our explanations fell on deaf ears. The customer had already spent time and money on a “combined solution” which didn’t quite work. It was clear to them that it was our responsibility to solve the problem.
It was a large customer, and they were furious. So we promised we would solve the problem in a matter of days. But when our engineers started digging deeper, it turned out that some significant architectural changes had to be implemented before that particular 3rd party product could be supported. We just hadn’t planned in advance to support the type of functionality required. At that point, we could either get back to the customer, deliver the “bad news” and discuss possible courses of action, or keep on trying to find a simpler way to make the solution work. Nobody likes to deliver bad news, so we opted to keep on trying. Days stretched into weeks, and weeks stretched into months. We simply couldn’t come up with a simpler solution that would make the two products work together. After a few months of back and forth with the customer, we finally mustered the courage to tell them that we did not have a practical solution for the interoperability problem, and that we could not afford to allocate the resources required to re-architect our product in order to make it work.
This was not a friendly chat, to say the least. The customer felt they had been misled by us, and that we had failed to meet our commitments to them. We tried to explain that we had done our best, and that had the customer consulted with us about the interoperability before they purchased and deployed the 3rd party product, all this aggravation could have been avoided. But to no avail. The customer remained upset with our team and refused to do any future business with us.
The lesson learned: it is very important to set expectations properly. And when there is a gap between expectations and reality, it is best to deliver “bad news” quickly and discuss ways to move forward.
The final crisis story I will share is about a major European manufacturer. They purchased a product I was responsible for and decided to deploy it, as a pilot project, in some of their sites. Initially, things looked very promising. The product was deployed, it worked as advertised, and the customer saw real benefits in using it. We started conversations about having them purchase additional units and deploy them throughout all their sites. A multi-million-dollar deal was in the making. And then all hell broke loose.
We got a call from the customer, who reported that some of their servers had crashed, and they believed the crash was somehow caused by our product. They had to shut down a few servers and suffered a disruption to their operations. It didn’t take much to realize this was a major crisis.
We quickly assembled a task force, sending a couple of our best engineers on site and having a fully staffed technical team standing by at headquarters to help the troops on the ground. There wasn’t much that could be done besides working the problem around the clock. Then came a call from our sales team, which was quite concerned about the prospects of the large follow-on deal they were negotiating with the customer. “We need executive support, and we need it now!” was the message. I got on the next flight to Europe.
When I got to the customer site, I was taken into a conference room to meet their director of IT operations. I don’t know if you have ever been yelled at, to your face, by a furious person. If you have, you can picture the scene. The IT operations director kept firing questions, such as: “do you even bother testing your product before you ship it?” and “do you even know what quality means?”. I kept my calm as much as I could while trying to answer his questions. But everything I said seemed to just fuel his rage. Eventually he stared at me and angrily shouted: “when will I receive a bug-free product from you guys??” I paused for a second, and responded: “in ten years, and perhaps not even then”. He fell silent, confused by my answer.
It was a bit of a gamble, I admit. I knew he was expecting an answer along the lines of “in the next 24hrs”, but I also knew I couldn’t promise such a thing. We simply didn’t know how long it would take to find the problem, or how soon we could fix it. And I certainly couldn’t guarantee a “bug-free” version of the software…
Seizing the opportunity while the IT operations director was still stunned by my answer, I added: “I am sure you have used software products for many years, so you know that a bug-free version is non-existent. What is really important is how your vendor reacts to a situation when there is a problem.” I noticed he was actually listening, so I continued: “we sent our best engineers on site to troubleshoot the problem, and assembled the best minds back at headquarters to help them. We are doing everything we possibly can to resolve the situation. And that’s what you should expect from your vendors”. The whole atmosphere shifted at that moment. We knew that we were in it together.
Luckily, we uncovered the problem a few hours later and had a fix ready the next day. The customer’s operations were restored, and the product continued to deliver value in their environment. To our sales team’s delight, the customer subsequently went ahead and invested in deploying the product across their entire organization. They became one of our best references.
The lesson learned: being yelled at is part of the job. Don’t take it personally and don’t get dragged into a shouting match. Stay focused on making sure the customer knows you’re in the same boat and will do whatever it takes to help them.
Don’t get me wrong, handling a customer crisis isn’t fun. But it is part of the job, and if handled properly, it can turn into an opportunity. So if you haven’t practiced or learned any “crisis management” skills, it might be a good idea to do so.