
Concept explainer: Why YOU need AIOps

Watch the following video to learn the many reasons that you need AIOps.

Why you need AIOps: What are the typical areas of improvement that Ops teams are concerned about?

One is cost.

Are you reducing the number of actionable tickets? Are you reducing configuration errors?

And how about service quality?

Are you reducing the number of incidents that cause business impact? Are you reducing the duration of those incidents?

And change velocity.

Is your IT system agile enough to change with the market demands?


In most cases, the answers to these questions aren’t positive. Why is that?

To keep things relatively simple, let’s say your org has two applications. Application A is run by a DevOps team, with one data center somewhere running a Kubernetes cluster, with virtualized compute, virtualized storage, and maybe some kind of virtualized networking.


Application B is run the same way, but by a different team, in a different location, with their own data center or in a cloud somewhere.


The two applications communicate via APIs and otherwise know nothing about each other.
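To make that loose coupling concrete, here is a minimal, hypothetical sketch (not from the video) of Application B calling Application A over HTTP. The endpoint URL, timeout, and function name are assumptions for illustration only.

    import requests

    # Hypothetical API call from Application B to Application A.
    # Application B only sees the HTTP response (or a timeout); it has no
    # visibility into Application A's load balancer, cluster, or network path.
    APP_A_ORDERS_URL = "https://app-a.example.internal/api/v1/orders"  # assumed endpoint

    def fetch_order(order_id: str) -> dict:
        try:
            response = requests.get(f"{APP_A_ORDERS_URL}/{order_id}", timeout=5)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            # All Application B can report is "the call failed", not why.
            raise RuntimeError(f"Application A did not respond for order {order_id}") from exc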


Then there’s the SD-WAN that connects this business, and it, too, is running in somebody’s data center somewhere.


The end-user side is equally complex. Let’s say end users connect over fiber, via transit Internet, back to our own data center.


You are an Ops staff member here, responsible for keeping Application A running. You just made a configuration change to a load balancer and accidentally introduced an error. That error may have been detected by a monitoring system, but with so much data coming in from the network, the alert was buried in the noise and you didn’t see it. Or it is entirely possible that the symptoms of the error never crossed the bounds of outdated health rules.
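To illustrate how an outdated, static health rule can hide a real symptom, here is a minimal sketch. The metric, threshold, and latency numbers are invented for the example, not taken from the video.

    # Hypothetical static health rule: alert only when average response time
    # exceeds a fixed threshold that was tuned for last year's traffic.
    RESPONSE_TIME_THRESHOLD_MS = 2000  # assumed, outdated bound

    def evaluate_health(samples_ms: list[float]) -> str:
        avg = sum(samples_ms) / len(samples_ms)
        return "ALERT" if avg > RESPONSE_TIME_THRESHOLD_MS else "OK"

    # After the bad load balancer change, requests are retried around a dead
    # backend and latency creeps from ~300 ms up to ~1500 ms. The rule still
    # reports OK, so nothing surfaces above the noise.
    print(evaluate_health([1400, 1550, 1480, 1520]))  # -> OK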


A few days later, the end users of both applications are calling our support lines to report the issue. The problem is, they aren’t saying, “Hey, the load balancer for Application A is not load-balancing!”


Or, “Application B is not getting a response from Application A when it makes an API call!”


The issue manifesting in the end user’s world is something like, “my order management system is not responding!”


So users create a ticket with the app team that supports Application A, and a separate ticket for Application B.


The problems are acknowledged, but the Application A and Application B teams don’t know about the ticket the other team has received.


When the issue stays unaddressed and becomes a Sev1 incident, the two teams finally talk to each other in a war room.


Eventually, they identify that the issue is outside of the application, and they bring in the SRE team and network team. By then, the issue has violated the SLA. But now all the teams are in the war room. So that’s progress, right?


Well, the next problem is, none of these teams use the same tools.

The application support teams are using tools such as ThousandEyes, AppDynamics, Dynatrace, Oracle, New Relic, or SAP.

But the datacenter Ops teams are using tools like Splunk, Nagios, Elasticsearch, SolarWinds, VMware, ScienceLogic, syslog, or SCOM.

The DevOps team would be using yet another set of tools like Datadog and Prometheus.

And no one is familiar with the other teams’ tools, so correlating the insights from all of them is not easy.


Sound familiar? What’s happening here?

These are the typical obstacles that block today’s Ops teams from achieving operational efficiency.

Number one, every team is flooded with data yet lacks information. In our example, if the Ops team had caught the error triggered by the load balancer config change, the whole cascading effect would have been avoided. But we are all overwhelmed by the noise.


Two is complexity. Platform complexity continuously increases. Networks used to be simple, so identifying root cause and impact was easy. This is no longer true. Applications and services are heavily API-driven, and everything is becoming cloud-based. As in our example, the root cause is often not in the application where you observe the issue. Each affected party raises a ticket with whoever they think is responsible for the cause, so the ticket count increases. Identifying all the relevant parties takes time. By the time you have everything you need in the war room, the issue has already reached the Sev1 state.


Three is the difficulty of predicting customer experience. Again, as in our example, the end users are often the incident detection system. This not only affects the mean time to resolution (MTTR) but is also a huge customer satisfaction (CSAT) issue.


Four, business change only increases the Ops team’s total cost of ownership. If you don’t respond to the changes demanded by the market, you will lose to the competition. Yet constant changes in business practices cost the Ops teams who must support them. Rules must be rewritten often to keep detecting problems accurately.


Five, ITSM silos delay response. The teams don’t talk to each other until things become critical. Also, the toolsets they use don’t match. In our example, they all see the problem, but they don’t know how to read what the other team’s tools are saying. The human factor is an added obstacle.


The fact is, despite your best efforts to set up the most accurate thresholds and policies to trigger alarms, you still get too much noise. So you must filter these alerts and identify the ones that truly require attention. And in most cases, doing this accurately requires human eyes. From the data we gathered, it typically takes 1.2 minutes to assess one alarm.


And operators typically go through 16-20 alerts before finding something that requires a ticket. At 1.2 minutes per alert, that’s already roughly 20 minutes.
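As a quick back-of-the-envelope check of that figure, using only the numbers above:

    # Triage time per actionable ticket, using the figures from the video:
    # ~1.2 minutes per alert, 16-20 alerts reviewed before one needs a ticket.
    minutes_per_alert = 1.2
    for alerts_reviewed in (16, 20):
        print(f"{alerts_reviewed} alerts -> {alerts_reviewed * minutes_per_alert:.0f} minutes")
    # 16 alerts -> 19 minutes
    # 20 alerts -> 24 minutes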


And, of course, there’s always a possibility of multiple operators identifying the same issue and creating duplicate tickets.


So, it’s possible that before the true investigation starts, you are already out of the SLA time frame. Then it takes a few minutes to identify the assignee.


Then, suppose you are a network operator. You’ll go to your monitoring tool and check response time, latency, capacity utilization, and errors to first assess whether the issue is truly a problem.


If you are an app ops engineer, you’ll look at application response time, database wait time, garbage collection time, maybe a handful of Node.js metrics, and errors.


You must look at all of that across every app server in the Kubernetes cluster, in the Dynatrace tool, and in the AppDynamics tool. This part typically takes 50 minutes.
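To show why this step eats so much time, here is an illustrative sketch of the chart-by-chart review. The metric list, server count, and per-chart time are assumptions for illustration, not measured values from the video.

    # Illustrative only: why the manual review adds up. The metric list,
    # server count, and per-check time are assumptions, not measured values.
    metrics = ["response_time", "db_wait_time", "gc_time", "nodejs_event_loop_lag", "error_rate"]
    tools = ["Dynatrace", "AppDynamics"]
    app_servers = 12           # assumed cluster size
    minutes_per_check = 0.35   # assumed time to pull up and read one chart

    total_checks = len(metrics) * len(tools) * app_servers
    print(f"{total_checks} charts, about {total_checks * minutes_per_check:.0f} minutes")
    # 120 charts, about 42 minutes -- in the same ballpark as the 50 minutes above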


And let’s say you identify that it’s a real problem. If you are the causal stakeholder, you try to remediate the issue.


If you are the impact stakeholder, you need to notify the causal stakeholder of what you know and work through them to implement a solution. How much time did it take to get to this point? How much money are we spending to support this process?


How automatic is your automation? You may say, “we create tickets automatically, so this is not applicable to us.”

But let's verify we are talking about the same thing. You might have a rule in your settings to create a ticket automatically when a certain alert comes in.
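For reference, such a rule usually amounts to something like the following minimal sketch. The alert fields and the create_ticket helper are hypothetical, standing in for your ITSM tool’s API.

    # Hypothetical "automation": one rule that opens a ticket when a matching
    # alert arrives. Everything after that -- triage, deduplication, finding
    # the right assignee, cross-team correlation -- is still manual.

    def create_ticket(summary: str, severity: str) -> None:
        # Stand-in for a call to your ITSM tool's API (ServiceNow, Jira, ...).
        print(f"Ticket created [{severity}]: {summary}")

    def on_alert(alert: dict) -> None:
        if alert.get("source") == "load-balancer" and alert.get("severity") in ("critical", "major"):
            create_ticket(summary=f"[auto] {alert.get('message', 'unknown alert')}",
                          severity=alert["severity"])

    on_alert({"source": "load-balancer", "severity": "critical",
              "message": "Health check failures on pool app-a-web"})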


But if you’ve followed along this far, you realize that such a rule only automates a small part of the process; the rest remains manual.


Also, these manually set rules are unable to catch newly arising issues.


Lastly, the difference in tools and processes is another obstacle to operational efficiency. Some of your teams may use ServiceNow, others may use Jira and MS Teams... If a solution is going to unite the siloed teams to tackle complex incidents, it must not disrupt the existing processes and ecosystem.


So to summarize, modern operations teams face these challenges, and all of them are quite costly.


So what can you do? This is where AIOps products step in to address these challenges. Learn more about AIOps Incident Management and see how. Thanks for watching!