Running Wild with Operations Analytics and Gary Brandt


In this blog post, we trek across the rugged ITOA countryside with Hewlett Packard Enterprise Functional Architect Gary Brandt as he reveals candid details. In a 1:1 interview, Gary chats about what it means to have a complete solution, the impact Operations Analytics has had, and—like all IT professionals—managing costs and expectations.

And now, without further ado, we present:

Question 1: What kind of “rugged terrain” did HPE IT have to endure in the quest for a more automated analytics solution?

Answer: The sheer size and complexity of our HPE IT data centers was the "mountain" we set out to climb. Attempting to automate the ingestion of data from over 60,000 servers, 70,000 network devices, and 2,000+ databases, plus event data across multiple traditional and private cloud data centers, was a daunting task. There were very few "trail maps" we could rely on to lead us down a path to collecting this data as close to real time as possible for troubleshooting.

Question 2: Before embarking on the automation journey, what tools were considered to be the most important to pack?

A: For this trek we needed to pack tools that gave us a variety of capabilities. We needed tools that could collect metrics, events, and logs from a variety of data sources. We needed tools that could make sense of and deliver value from the vast amount of data collected. We needed tools that could present meaningful results in an intuitive fashion and in a timely manner. And as a large IT enterprise, we needed tools that offered the greatest scalability with minimal required administration.

We found HPE Operations Analytics to be the "Swiss army knife" we needed, offering a range of capabilities to address our tool list. Since our team was relatively small, we also needed tools that could make our solution as turn-key as possible. With its virtual appliance form factor, Operations Analytics was a good choice for rapid deployments into our virtual environment. It also ships with a wide variety of data collectors that integrated with our existing monitoring products and custom data sources.

Since our "hikers" are not expert Sherpas—that is, they are not data scientists—we needed tools in our pack that could automatically provide analytic capabilities tailored for troubleshooting IT Operations environments. Operations Analytics came with a rich set of machine learning algorithms and analytic functions designed for IT Operations use cases, eliminating the need to have a data scientist on staff. This allowed us to start using the product immediately. We also had neither the time nor the budget to build dashboards and visualize the data from scratch. Operations Analytics came with several rich dashboards, an intuitive search language, and guided troubleshooting. We could also easily extend and tailor these dashboards, or use them as templates for configuring custom dashboards.

Built on a virtual platform and with its Vertica DB backend, Operations Analytics had the scalability we needed to tackle our mountain.

Question 3: What are some reasons many organizations have failed to finish their trek across the IT landscape (preparation, planning, tools, etc.)?

A: Big data analytics is not for the faint of heart. I think the expertise required, as well as the complexity and cost aspects of big data solutions, are typically underestimated. Big data solutions are data hungry, and it is not trivial to feed them vast amounts of data from disparate sources across a large enterprise and then apply meaningful analytics and search techniques. Many vendor tools require a data scientist to create models, train those models, and build algorithms to mine your big data platform. After that, you need to create the proper dashboards to display your results. The costs of these collective efforts can easily outweigh the realized benefits. Many tools and vendors in the market advertise great features and capabilities, but downplay the heavy lifting an IT organization has to do to get results, or misjudge the time needed to achieve them.

Unmanaged expectations are also a factor. There is a ton of hype in the market about how big data and analytics can transform your business. Customers need to understand their problem space and be realistic about their goals.

Question 4: What details can you share with us about the impact of Operations Analytics on HPE IT and the overall ITOA landscape?

A: One of our HPE IT application teams supporting our HPE Storage business was able to get a holistic view of the performance of their complex application ecosystem, which enabled them for the first time to examine, with actual data, the impact one part of their environment has on another. This alone allowed them to troubleshoot problems much faster and with fewer SMEs. Additionally, they could now study the behavior of their applications over time through historical baselines and predictive views, from which they can take proactive action to avoid problems.

Our HPE IT database team has seen a dramatic reduction in MTTR for Oracle DB problems, while also cutting labor costs by reducing the number of SMEs involved in troubleshooting such issues. By using log analytics and machine learning that automatically surfaces the most significant entries in millions of lines of DB log data for the support teams, they can find problems without having to convene multiple experts for manual troubleshooting efforts.

Question 5: Looking at where HPE IT started the trek and where it is now, what are the top three takeaways?

A: I would not say that we have reached the top of our mountain yet, but we are a long way past base camp. Along the quest we have learned quite a bit:

  1. Data has a lot to say. Unlike traditional monitoring tools and techniques used in our data center, where we instrument a known point and observe for predefined conditions, with big data analytics we have gained new insights into the data that describes the behavior of our environment—sometimes in surprising ways. These insights help explain the symptomatic events detected by traditional monitoring.
  2. Fewer SMEs required. While the tangible benefits are sometimes hard to quantify, one aspect we have consistently observed is a reduction in the number of SMEs required to troubleshoot problems when using Operations Analytics.
  3. Exponential possibilities. With every new data source we bring into the tool, and with every new team that uses it, we get feedback and suggestions for all sorts of possibilities and new use cases to try.

Thanks again to Gary for taking the time to share his advice and story with the ITOA community! If you're interested in learning more about HPE Operations Analytics or seeing a demo, please visit:

  • operational intelligence