IT Operations Management (ITOM)

Meet the OMi Management Pack DevKit


Guest post by Norbert Andres, Software Architect, Operations Bridge


HP recently released a development kit for creating Management Packs for Operations Manager i (OMi) operations management software.

In this blog post, I will demonstrate how the DevKit can be used to create a management pack that reads metrics from Apache access log files and submits them to the agent store. This blog post assumes basic knowledge of OMi and Monitoring Automation.


First, download the DevKit: Downloads => Operations Manager i => Tools & Scripts => OMi Management Pack Development Kit.

Download the DevKit as well as the OMi Monitor Framework. The Monitor Framework is a Content Pack that needs to be uploaded using the OMi Content Manager.


To begin, extract the OMi Management Pack Development Kit into an empty directory; no installation is required. The only prerequisite is a recent version of Perl.

Once Perl is installed, you can start.


Now, copy the files Collector_Template.pm and Collector_Template.yml from the example directory to the main directory of the DevKit and rename them to ApacheAccessLogCollector.pm and ApacheAccessLogCollector.yml.

The yml file is the main entry point for the Monitor Framework; it defines the global properties.

Change the yml file to match this:


application: ApacheLogReader
domain: WebServer
description: Reads metrics from apache log files
generate_availability: false
generate_discovery: false
logfile: /var/log/apache2/access.log
# logfile, string, Path to access.log, Enter the complete path to the apache access.log file.


The application value is used for events, logging, and so on.


The domain influences the directory in Monitoring Automation where you will find the policies and aspects, as well as the path of a metric in coda, the agent's metric store. The “Script” entry specifies the name of the Perl file. Since we only want to read a log file, we don't need availability monitoring or discovery.


The next section contains the parameters that will be exposed in Monitoring Automation during deployment. In this case we want the log file name and path to be configurable. You can set a default value or leave it empty. For usernames and passwords you should not specify a default value, but for well-known port numbers or paths a default makes the Management Pack a bit easier to use.

The last line is a comment describing the parameter; this information will appear in Monitoring Automation during deployment and is also used to validate the input the user provides.


The syntax is: first the key, then the type (e.g. string, password, int, or an enumeration you define), then the label and the description. You can also flag a parameter as optional by appending [optional] at the end of the line; in this case Monitoring Automation will allow you to leave the value empty during the assignment of the aspect.
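To make the syntax concrete, here are two parameter comments following that pattern. The first is taken from our yml file above; the second, a port parameter, is a hypothetical addition shown only to illustrate the int type and the [optional] flag:

```yaml
# logfile, string, Path to access.log, Enter the complete path to the apache access.log file.
# port, int, Apache port, Enter the port the Apache server listens on. [optional]
```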


Now let's switch over to our Perl module file.

We only need this stub, so you can remove all the comments:


package ApacheAccessLogCollector;
use base 'Collector';

use strict;
use Config::Tiny;

sub run {
  my $self = shift;

  return 1;  # success
}

1;


In the Perl module you implement the run method for event submission and metric collection, and/or the topology method to submit topology. Since we only want to collect metrics, we can remove the topology method.


In the run method we now implement the logic to process the access.log file.

We assume the standard common log format, e.g. 192.0.2.1 - - [03/Feb/2015:11:23:35 +0200] "GET /index.php?img=gifLogo HTTP/1.1" 200 4549 (the leading client IP shown here is a placeholder).
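As a quick sanity check (a standalone sketch, not part of the Management Pack itself), this is how the regex we will use later splits such a line into tokens; a quoted request string counts as a single token, so the status code lands at index 6 and the byte count at index 7:

```perl
use strict;
use warnings;

# A sample line in Apache common log format (192.0.2.1 is a documentation IP)
my $line = '192.0.2.1 - - [03/Feb/2015:11:23:35 +0200] "GET /index.php?img=gifLogo HTTP/1.1" 200 4549';

# Quoted strings are kept as one token, everything else splits on whitespace
my @items = ($line =~ /(".*?"|\S+)/g);

print "$items[6]\n";  # status code: 200
print "$items[7]\n";  # bytes sent:  4549
```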

This is the code in the run method:
  my $config  = $self->getConfig();
  my $logFile = $config->[1]->{logfile};

  my $position = $self->getPosition($logFile);

  $self->log("$logFile, position: $position");
  $position = $self->readFile($logFile, $position);
  $self->log("Log: $logFile, next position: $position");

  $self->savePosition($logFile, $position);

  return 1;  # success


First we get the configuration object and the name of the log file. We try to retrieve the position where we stopped last time, read the log file, save the new position, and return the success value “1”.


The code for getting and storing the position where we finished processing last time is straightforward, so I will not comment on it further:


sub savePosition {
  my ($self, $logFile, $position) = @_;

  my $fileConfig = Config::Tiny->read('logreader.cfg', 'utf8');
  if (!$fileConfig) {
    $fileConfig = Config::Tiny->new;
  }

  $fileConfig->{_}->{$logFile} = $position;

  $fileConfig->write('logreader.cfg', 'utf8');
}

sub getPosition {
  my ($self, $logFile) = @_;

  my $fileConfig = Config::Tiny->read('logreader.cfg', 'utf8');

  my $position = $fileConfig ? $fileConfig->{_}->{$logFile} : undef;
  if (!$position) {
    $position = 0;
  }

  return $position;
}


Basically, we store the position for each log file in a small configuration file (logreader.cfg) using the Config::Tiny module.
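For illustration, here is a standalone sketch of what Config::Tiny produces on disk (the file name logreader-demo.cfg and the offset 4711 are just example values): each log file path becomes a key in the root section, mapped to its stored offset.

```perl
use strict;
use warnings;
use Config::Tiny;

my $cfg = Config::Tiny->new;

# Properties in the root ("_") section are written without a [section] header
$cfg->{_}->{'/var/log/apache2/access.log'} = 4711;
$cfg->write('logreader-demo.cfg', 'utf8');

# The file now contains a single line:
#   /var/log/apache2/access.log=4711

# Reading it back returns the stored offset
my $read = Config::Tiny->read('logreader-demo.cfg', 'utf8');
print $read->{_}->{'/var/log/apache2/access.log'}, "\n";  # 4711

unlink 'logreader-demo.cfg';  # clean up the demo file
```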

The interesting part is the readFile method:


sub readFile {
  my ($self, $logFile, $position) = @_;

  open(FILE, '<', $logFile) or return $position;

  my $size = -s $logFile;
  if ($size < $position) {
    $position = 0;
  }

  seek(FILE, $position, 0);
  while (my $line = <FILE>) {
    my @items = ($line =~ /(".*?"|\S+)/g);

    my %metric = ('metric' => 'return_code', 'value' => $items[6], 'data_class' => "APACHE", 'domain' => "APACHE");
    %metric    = ('metric' => 'response_bytes', 'value' => $items[7], 'data_class' => "APACHE", 'domain' => "APACHE");

    $position = tell(FILE);
  }
  close(FILE);

  return $position;
}


We check the position: if the number is bigger than the size of the file, the log file got re-created, so we start at the beginning.

Then we parse the log file line by line, starting at $position.

Here we submit the metrics to the agent metric store:

my %metric = ('metric' => 'return_code', 'value' => $items[6], 'data_class' => "APACHE", 'domain' => "APACHE");




The attributes metric and value are required; the other two influence how the metric is stored in the agent.

This is all we need to do; the only step left is to convert this into a management pack we can upload to OMi.



Included with the DevKit is the ContentCreator tool, which generates a content pack out of the two files we just worked on.

The syntax is very easy: run the command

ContentCreator[.bat|.sh] -yml ApacheAccessLogCollector.yml


The result is a zip file you can upload using the content manager of OMi.

One thing is missing: we use a Perl module (Config::Tiny) which is not shipped with the agent.

To add this module, create a par file (Perl archive): either use the Perl packager, or simply use a zip utility to create a zip file with the directory structure lib\Config\ (containing Tiny.pm), rename the zip file to ApacheAccessLogCollector.par, and place it in the same directory as ApacheAccessLogCollector.yml.

The ContentCreator tool will pick it up automatically if this naming convention is followed.
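The zip-utility route can also be scripted. The sketch below is one possible way to do it in Perl, using the core IO::Compress::Zip module; it assumes Config::Tiny is installed locally (so it can be found in @INC) and stages it under lib/Config/ before zipping:

```perl
use strict;
use warnings;
use File::Path qw(make_path);
use File::Copy qw(copy);
use IO::Compress::Zip qw(zip $ZipError);

# Find the installed Config::Tiny and stage it under lib/Config/
# (assumes the module is installed on this machine)
my ($source) = grep { -f $_ } map { "$_/Config/Tiny.pm" } @INC;
die "Config::Tiny not found in \@INC" unless $source;

make_path('lib/Config');
copy($source, 'lib/Config/Tiny.pm') or die "copy failed: $!";

# A .par file is simply a zip archive with the lib/ layout inside
zip 'lib/Config/Tiny.pm' => 'ApacheAccessLogCollector.par',
    Name => 'lib/Config/Tiny.pm'
  or die "zip failed: $ZipError";
```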


The result will be in a subdirectory “output”. In the management pack you will find the instrumentation (which contains the Perl module), a ConfigFile policy (which was generated directly from the yml file), and a discovery policy which is not used (since we decided not to generate topology). In addition, you get an aspect “ApacheLogReader Collector”, which you would deploy onto the Apache node.


In a second subdirectory called “Config” you will find a file called ApacheAccessLogCollector.cfg. This file stores all IDs that were used in the management pack. It is very important to keep this file in case you want to generate this management pack again (the IDs must stay the same) or if you want to generate a second version using the command:


ContentCreator[.bat|.sh] -yml ApacheAccessLogCollector.yml -major 1 -minor 1


With this command, the aspect and policy IDs stay the same; only the version numbers change.

If the file gets lost you can still re-create it manually, but of course it is better to avoid this extra work.


As you can see, very little code is required to get metrics from a log file into the agent store. Instead of dealing with OMi policies, you can now concentrate on writing the code necessary to retrieve the metrics.


In a future blog post we will discuss how to create availability and performance monitoring, as well as how to create graph mappings using the DevKit. We will also show how to send events, for example when you discover that the application is not functioning correctly or, for this example, that the log file doesn't exist.


More information is available in the development guide, which can be found here.


For more information about HP Operations Manager i visit the homepage here.

Get more information on HP Operations Bridge solutions and the OMi product here.


This is the first blog in a series; search on the tag OMIContent to see more blogs concerning OpsBridge integrations, management packs, and BSM Connectors content.


News:
The OM2Opsbridge program, including license exchange details, will go live March 12th, 2015.
Search for the tag OM2OpsB to find blogs discussing this program and the evolution to OpsBridge.

We are pleased to announce the HP BSM Integration for BMC Impact Manager by Comtrade, version 1.1.

The HP BSM Integration for BMC Impact Manager by Comtrade enables you to establish a link between BMC Impact Manager and HP Operations Manager i 10 (OMi).
The key features of this release are:

  • Support of Operations Manager i 10 and BSM Connector 10

The installation package and the integration guide are available here.

Read more about the integration.


