
IBM Operations Analytics Log Analysis

Version 1.3.8

User's Guide

IBM
Note
Before using this information and the product it supports, read the information in Appendix A,
“Notices,” on page 75.

Edition notice
This edition applies to IBM® Operations Analytics Log Analysis and to all subsequent releases and modifications until
otherwise indicated in new editions.
References in content to IBM products, software, programs, services or associated technologies do not imply that they
will be available in all countries in which IBM operates. Content, including any plans contained in content, may change
at any time at IBM's sole discretion, based on market opportunities or other factors, and is not intended to be a
commitment to future content, including product or feature availability, in any way. Statements regarding IBM's future
direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. Please
refer to the developerWorks terms of use for more information.
© Copyright International Business Machines Corporation 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents

Chapter 1. About this publication........................................................................... 1


Audience.......................................................................................................................................................1
Publications..................................................................................................................................................1
Accessing terminology online................................................................................................................ 1
Accessibility................................................................................................................................................. 1
Tivoli technical training................................................................................................................................ 1
Providing feedback.......................................................................................................................................1
Conventions used in this publication ..........................................................................................................2
Typeface conventions ............................................................................................................................ 2

Chapter 2. Searching and visualizing data.............................................................. 3


Search UI overview...................................................................................................................................... 3
Side bar icons............................................................................................................................................... 4
Search UI reference..................................................................................................................................... 4
Changing the search time zone....................................................................................................................8
Searching data............................................................................................................................................. 8
Query syntax.........................................................................................................................................10
Search results timeline........................................................................................................................ 18
List and Grid views............................................................................................................................... 18
Refining search results.........................................................................................................................19
Saving a search.....................................................................................................................................20
Saved searches.......................................................................................................................................... 20
Visualizing data.......................................................................................................................................... 21
Creating charts and graphs.................................................................................................................. 21
Percentile statistical functions.............................................................................................................22
Dashboards...........................................................................................................................................22
Aggregated Search.....................................................................................................................................24
Configuring Aggregated Search........................................................................................................... 25
Datacenter Topology UI....................................................................................................................... 33
Searching aggregated data.................................................................................................................. 33
Manage Aggregations UI reference..................................................................................................... 33
Managing aggregation templates.........................................................................................................35
Turning Aggregated Search on and off.................................................................................................38
Search dashboards.................................................................................................................................... 38
Alerts dashboard.................................................................................................................................. 39
Custom Search Dashboards...................................................................................................................... 39
Creating a Custom Search Dashboards.....................................................................................................40
Defining a Custom Search Dashboard................................................................................................. 40
Steps to create a Custom Search Dashboard...................................................................................... 41
Defining a search filter app.................................................................................................................. 67
Adding a shortcut to a Custom Search Dashboard to the Table view toolbar.................................... 73

Appendix A. Notices............................................................................................ 75
Trademarks................................................................................................................................................ 76
Terms and conditions for product documentation................................................................................... 76
IBM Online Privacy Statement.................................................................................................................. 77
Trademarks................................................................................................................................................ 78

Chapter 1. About this publication
This guide contains information about how to use IBM Operations Analytics Log Analysis.

Audience
This publication is for users of the IBM Operations Analytics Log Analysis product.

Publications
This section provides information about the IBM Operations Analytics Log Analysis publications. It
describes how to access and order publications.

Accessing terminology online


The IBM Terminology Web site consolidates the terminology from IBM product libraries in one convenient
location. You can access the Terminology Web site at the following Web address:
http://www.ibm.com/software/globalization/terminology.

Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision,
to use software products successfully. In this release, the IBM Operations Analytics Log Analysis user
interface does not meet all accessibility requirements.

Accessibility features
This information center, and its related publications, are accessibility-enabled. To meet this requirement, the user documentation in this information center is provided in HTML and PDF format, and descriptive text is provided for all documentation images.

Related accessibility information


You can view the publications for IBM Operations Analytics Log Analysis in Adobe Portable Document
Format (PDF) using the Adobe Reader.

IBM and accessibility


For more information about the commitment that IBM has to accessibility, see the IBM Human Ability
and Accessibility Center. The IBM Human Ability and Accessibility Center is at the following web address:
http://www.ibm.com/able (opens in a new browser window or tab)

Tivoli technical training


For Tivoli® technical training information, see the IBM Tivoli Education Web site at http://www.ibm.com/software/tivoli/education.

Providing feedback
We appreciate your comments and ask you to submit your feedback to the IBM Operations Analytics Log
Analysis community.



Conventions used in this publication
This publication uses several conventions for special terms and actions, operating system-dependent
commands and paths, and margin graphics.

Typeface conventions
This publication uses the following typeface conventions:
Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from
surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons,
list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs,
property sheets), labels (such as Tip:, and Operating system considerations:)
• Keywords and parameters in text
Italic
• Citations (examples: titles of publications, diskettes, and CDs)
• Words defined in text (example: a nonswitched line is called a point-to-point line)
• Emphasis of words and letters (words as words example: "Use the word that to introduce a
restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
• New terms in text (except in a definition list): a view is a frame in a workspace that contains data.
• Variables and values you must provide: ... where myname represents....
Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from
surrounding text
• Message text and prompts addressed to the user
• Text that the user must type
• Values for arguments or command options



Chapter 2. Searching and visualizing data
This section outlines how to use the IBM Operations Analytics Log Analysis Search workspace to search
your indexed data and to display this data in charts and dashboards.
To find the root cause of a problem experienced by users, such as slowness or a failure, you can search through data such as log files, traces, configuration information, and utilization data. This type of search is iterative because the results of one search might lead to a set of other searches. For example, finding a connection timeout in the error logs might lead you to examine the connection pool utilization details.

Search UI overview
Use this topic to help you to get started with the Search UI.
The Search UI includes the following elements:

1. Sidebar
Use the icons on the side bar to open the Getting Started UI, a saved search, a search dashboard, or the Administrative Settings UI.
2. Search Patterns pane
The Search Patterns pane lists the fields that are found in your search. To filter the search for a field,
click on the field and click Search.
3. Discovered Patterns pane
The Discovered Patterns pane lists fields and values. To display discovered patterns, you need to
configure the source types in the data source.
4. Timeline pane
The Timeline pane displays the current search results filtered by time. To drill down to a specific time,
click on a bar in the graph.
5. Timeline slider
Use the timeline slider icon to narrow or broaden the time period.
6. Search box
Enter search queries in the Search field. When you click on a field in the Search or Discovered
Patterns pane, the query is displayed in this field.
7. Time filter list
Use the time filter list to filter search results for a specified time range.



8. Data source filter
Use the data source filter icon to filter search results for a specific data source.
9. List view / Grid view
Use the List View / Grid View icon to switch between the two views. Use the List view to identify search results quickly. Use the Grid view to display search results in a tabular format.

Side bar icons


Use the side bar to quickly navigate the user interface.
The following table explains the available icons.

Table 1. Side bar icons


Icon Name Description
Getting Started icon Use guided demonstrations and
find links to useful information.

Saved Searches icon Run saved and example searches.

Search Dashboards icon Run custom and sample search dashboards to view charts based on search results.
Administrative Settings icon Create and administrate the data
model, users, and roles. Display
server statistics.
Manage Alerts icon Create and administrate alerts. This feature is only available in the Standard Edition.

Datacenter Topology icon View the hierarchy of multiple instances of Log Analysis that represent the topology of your data centers. The instance that you are logged into is highlighted in red.

Search UI reference
Use this reference topic to help you to navigate the Search user interface (UI).

Buttons and fields on the Search UI


Table 2. Buttons and fields on the Search UI
Button or Field Name Description
Search button and field Search for a keyword or enter a
search query.

Data Sources icon Filter the search for specific data sources or groups of data sources.

Time filter icon Specify a relative or custom time
filter.
Save Quick Search icon Save the search query and
results for later use. To view the
saved searches, click the Saved
Searches icon on the side bar.
Columns to be displayed icon Filter the columns that are
displayed in the Grid and Table
views.
Plot chart icon Select the columns that you want
to graph and click the Plot Chart
icon.
List View icon To open the List View, click this
icon.

Grid View icon To open the Grid View, click this icon.

Launch icon To open the Search UI for the selected aggregated record. This feature is only available to Standard Edition users.

Time line graph


Table 3. Time line graph
Field, icon or button Description
Time line slider icon Filter the time that is displayed in
the timeline graph.
Y-axis Shows the number of log events
for each bar.

Bar Shows the number of log events
for the specific time. To view
more details, click a bar to drill
down to more details about that
time range.

Time zone button To change the time zone, click the time zone button.
Log Events Granularity Displays the granularity of the log
records that are displayed. The
level depends on the time that
you have filtered for. You can
drill down from years to months
to days to hours to minutes and
seconds.
Time Range Describes the time range that is
displayed in the timeline.

Plot Chart editor


Table 4. Buttons and fields on the Plot Chart editor
Button or Field Name Description
Generate Count check box To generate a count of the
selected columns, ensure that
this check box is selected.
Selected Columns A list of the columns that you
selected in the Grid view.

Plot Chart (Current Page Data) To create a chart of the data on the current page, click the Plot Chart (Current Page Data) button.
Plot Chart (All Data) To create a chart of all the data
in the results, click the Plot Chart
(All Data) button.



Render Chart editor
Table 5. Render chart editor
Button or Field Name Description
Clear All To clear the graph and close the
window, click Clear All.

Create New Dashboard To create a new dashboard based on the data in the graph, click the Create New Dashboard icon.
Add Charts to Existing To add the chart data to a
Dashboard dashboard, click the Add Charts
to Existing Dashboard icon.
Hide Portlet icon To hide the graph, click the Hide
Portlet icon.
Settings icon To change the type of chart or the
values that are displayed on the
axes, click the Settings icon.
Close Portlet icon To close the graph and delete
the chart, click the Close Portlet
icon.

Chart Settings editor


Table 6. Chart settings editor
Button or Field Name Description
Render To create a chart, click Render.

Visualization tab
Title Enter a name for the chart.

Chart Type Select the kind of chart that you want to use.
Parameters: x-axis Select the value that you want to
display on the x-axis.
Parameters: y-axis Select the value that you want to
display on the y-axis.
Query tab
Query String The query string that is used by
the search that generated the
results.
Time Filters Select a time filter for the chart.
Datasource Filters Filter the chart data for specific
data sources.

Selected Columns The columns that the graph is based on.

Generate Count Indicates whether a count was
generated when the chart was
plotted.

Change Time Zone dialog box


Table 7. Change Time Zone dialog box
Button or field Name Description
Time zone Select a time zone from the list,
or type an entry in the field.
Default time zone selection check box Select this check box to use the specified time zone for all future searches.

Changing the search time zone


By default, Log Analysis converts all times to the browser time zone.
The browser time may not match the time displayed in Log Analysis if there are issues with Daylight
Savings Time.
For more information about this issue, see the Search time zone does not match browser topic in the
Troubleshooting guide.
If you do not want to use the default time zone, you can change it. To change the time zone:
1. Click the time zone.
2. Select the city or region that represents the time zone that you want to use. For example, if you are in Ireland and you want to set the time zone to Greenwich Mean Time (GMT), you can select Europe/Dublin (Greenwich Mean Time).
3. If you want to use this time zone in subsequent searches, select the Use this time zone as the default for all searches check box. This step is optional.
When you select a default time zone, choose the time zone that is most commonly used by your Log Analysis users, because this helps Log Analysis to process log records more quickly and efficiently.
4. To save your changes, click OK.

Searching data
You can search ingested data such as log files for keywords. Search results are displayed in a timeline and
a table format.

Before you begin


Before you can search, you must first define a data source and ensure that the log file is configured for
consumption or you can load the sample data.

Procedure
1. From the Search workspace, click the New Search or Add Search tab to open a new search tab.
Enter the search query.



2. Optional: You can filter data sources by name, description, host name, log path, or tags, or enter * to do a wildcard search. To limit the extent of the search to an individual data source and any descendant data sources, select a leaf node from the Data Sources tree.
3. In the Time Filter pane, click the Time Filter list and select the time period for which
you want to search. Select Custom to specify a start time and date, and an end time and date for your
search.
4. In the Search field, type the string for which you want to search in the log files. To view distribution
information for all logs, in the Search field, type the wildcard character (*).
To search for a partial string, type an asterisk (*) at the start and end of your search string. For
example, to search for strings that contain the phrase hostname, type *hostname*.
To narrow your search based on a service topology, type the service topology component on which
you want to base your search, followed by a colon (:), followed by your search string. For example,
service:DayTrader.
5. Click Search.
The first search you perform after the IBM Operations Analytics Log Analysis processes have been
restarted might take longer to complete than subsequent searches.
The user interface refreshes every 10 seconds. The updated results are displayed in the progress bar.
Maximum search results: The search returns a maximum of 1000 records by default. This limit applies only to raw searches and not facet queries. The limit can be configured with the MAX_SEARCH_RESULTS property in the unitysetup.properties file. Do not use a high value for the MAX_SEARCH_RESULTS parameter; when a large number of results are returned, search performance degrades.
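For reference, the relevant line in the unitysetup.properties file looks like the following, shown here with the default value:

MAX_SEARCH_RESULTS=1000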

Results
A graph displaying the distribution of matching events in the log is displayed. Log records containing a
match for your search term are also displayed in Table view.
When you search for a specific term, the term is highlighted within the individual log records to facilitate
faster analysis. If you search for a partial term, each term that contains the search phrase is highlighted.
Fields that contain only tagged values, in other words values that are contained within angled brackets
(<>), are not highlighted. If a field contains values that are tagged and values that are not tagged, the
tagged terms are removed and the remaining terms are highlighted as appropriate.
If your search spans data that is stored in the archive, IBM Operations Analytics Log Analysis displays the
initial results while it retrieves the rest of the data. You can interact with the initial search results while
IBM Operations Analytics Log Analysis generates the search results. The progress bar displays the search
progress.
To display the latest results during the search, click We have more results for you. To stop the search,
close the tab. To start another search while you are waiting for the first search to complete, click the Add
Search tab.

What to do next
If you want to load data that contains tags and want to keep the tagging, you can disable highlighting. To
disable highlighting:
1. Open the unitysetup.properties file.
2. Locate the ENABLE_KEYWORD_HIGHIGHTING property and set it to false.
3. Save the file.
4. To restart IBM Operations Analytics Log Analysis, enter the following command:

<HOME>/IBM/LogAnalysis/utilities/unity.sh -restart



Related concepts
“Query syntax” on page 10
This section describes the combination of words, keywords, and symbols that you can use when
searching for phrases using IBM Operations Analytics Log Analysis.

Query syntax
This section describes the combination of words, keywords, and symbols that you can use when
searching for phrases using IBM Operations Analytics Log Analysis.
The query syntax is based on the Indexing Engine query syntax. For more information, see:
https://wiki.apache.org/solr/SolrQuerySyntax
Indexing Engines use a number of different query parser plug-ins. Log Analysis supports the Lucene query
parser plug-in. For more information about the Lucene query syntax, see:
http://lucene.apache.org/core/5_1_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html

Standard keywords and operators


This topic lists the keywords and operators that you can use when searching in IBM Operations Analytics
Log Analysis.
Note: The operators such as AND and OR, which are part of this query syntax, are case sensitive. You
need to use capitals for these operators.
OR
This is the default operator. Either term or expression must be matched in the results. A variation
to this keyword is or. For example, to search for a specific severity or message classifier, enter
severity:M OR msgclassifier:"WLTC0032W".
+
To get AND like functions, use the + operator. You must add + as a prefix to these queries.
For example, to search for a specific severity and message classifier, enter +severity:W
+msgclassifier:"WLTC0032W".
AND
As an alternative to the + operator, you can use the AND operator. For example, to search for a specific
severity and message classifier, enter severity:W AND msgclassifier:"WLTC0032W".
""
Enables you to group individual terms into phrases that are searched for as a unit. For example,
"document clustering".
()
Enables you to group expressions to guarantee precedence. For example, document AND (cluster OR
clustering).
*
Wildcard operator that can be replaced in the returned value with a number of characters. This can be
either passed as an operator to the sources or expanded when the meta.wildcard-expand option
is turned on. For example, test* might return test, tests or tester. You can also use the wildcard in
the middle of the search term. For example, t*est.
Note: You cannot use this wildcard as the first character in a search. For example, you cannot use
*test.
?
Wild character operator that can be replaced in the returned value with a single character. This can be
either passed as an operator to the sources or expanded when the meta.wildcard-expand option
is turned on. For example, te?t might return text or test.
Note: You cannot use this wildcard as the first character in a search. For example, you cannot use ?
test.



+
Must operator. Forces the use of a keyword. For example, WAS +and DB2 searches for strings that
contain the keyword and.
field:
Enables you to restrict your query to a specific field. For example, ID:123A or
msgclassifier:"WLTC0032W". These operators are activated for every field defined in your syntax.
By default, the search engine supports the title field. When you are creating a search collection,
you can extract any number of contents, for each document, and relate these contents to searchable
fields. This is specified in the form of the source associated with each collection.
NOT
The specified term or expression must not be matched in the search results. Variations to this
keyword are ! and -. For example, to search for log records that contain WAS ID but that do not
contain DB2 ID, enter "WAS ID" NOT "DB2 ID".
Note: You cannot use this operator for a single term.
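These operators can be combined in a single query. For example, using the fields from the earlier examples, the following query matches warning records for a specific message classifier while excluding records that contain the phrase DB2 ID:

+severity:W +msgclassifier:"WLTC0032W" NOT "DB2 ID"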

Additional keywords and operators


This topic lists additional keywords that are more specific to the search and indexing operations
performed by the search engine.
Range searches
To search for records in a range, use a range query. Range queries can include the terms in the range
or they can exclude them. To include the query range terms, use brackets, for example:

[<search term> TO <search term>]

To exclude the query range terms, use braces, for example:

{<search term> TO <search term>}

Results are returned in lexicographical order.


For example, to search for all the log records modified on or between two dates, enter:

mod_date:[20020101 TO 20030101]

The search returns all the log records that were modified in 2002, that is, all the log records where the mod_date field is within the specified range.
You can also use range queries to search for fields that are not dates. For example, to search for all
the log records that contain an ID between A to D but that do not include A or D, enter:

title:{A TO D}

DateMath queries
To help you to implement more efficient filter queries for dates, you can use DateMath queries.
For example, here are 4 possible DateMath queries:
• timestamp:[* TO NOW]
• timestamp:[1976-03-06T23:59:59.999Z TO *]
• timestamp:[1995-12-31T23:59:59.999Z TO 2007-03-06T00:00:00Z]
• timestamp:[NOW-1YEAR/DAY TO NOW/DAY+1DAY]
For more information, see the DateMathParser topic in the Lucene documentation at:
http://lucene.apache.org/solr/5_1_0/solr-core/org/apache/solr/util/DateMathParser.html
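In these queries, NOW represents the current time, arithmetic such as NOW-1YEAR shifts the time by the specified amount, and the slash (/) rounds down to the unit that follows it. For example, NOW/DAY is the start of the current day, so the last query above matches records from the start of the day one year ago up to the end of the current day.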



Escaping special characters
Regular expression or regex queries are supported by the query syntax.
Log Analysis supports escaping for the following special characters:

+ - && || ! ( ) { } [ ] ^ " ~ * ? : \ /

To escape a special character, use a back slash (\) before the special characters. For example, to search
for the query (1+1):2, enter:

\(1\+1\)\:2

To match one of several alternative characters, use square brackets. For example, to search for moat or boat, enter:
/[mb]oat/

Example queries
View example queries that use search patterns and regular expressions to search for entries.

Search patterns/regular expressions


The search patterns and regular expressions that you can use on a field depend on the Data Type of the fieldName. Check the IndexConfiguration to determine the Data Type of the fieldName that you want to query. The search patterns that can be used for each Data Type are as follows.

TEXT, Searchable and (sortable and/or filterable); internal representation: string
A string field stores a word or sentence as an exact string, without performing tokenization. It is commonly used for storing exact matches, for example, for faceting.
Search patterns: <fieldName>:/<Apache Solr regEx>/ or <fieldName>:<search string>. See Example 1 below.

TEXT, Searchable; internal representation: text_general
A text field typically performs tokenization and secondary processing (such as lower-casing). It is used in all scenarios where you need to match part of a sentence.
Search pattern: <fieldName>:<search string>. See Example 2 below.

DOUBLE, Searchable; internal representation: tdouble
Square brackets indicate an inclusive range query, where matching values include the upper and lower bounds. Curly brackets indicate an exclusive range query, where matching values lie between the upper and lower bounds but exclude the bounds themselves. You can mix square and curly brackets so that one end of the range is inclusive and the other end is exclusive.
Search patterns: <fieldName>:[* TO *], <fieldName>:{* TO *}, or <fieldName>:{* TO *]. See Example 3 below.

LONG, Searchable; internal representation: tlong
As described for the DOUBLE data type.
Search patterns: <fieldName>:[* TO *], <fieldName>:{* TO *}, or <fieldName>:{* TO *].

DATE, Searchable; internal representation: tdate
As described for the DOUBLE data type.
Search patterns: <fieldName>:[* TO *] or <fieldName>:[* TO *}. Example:
OrderDate:[2015-11-23T00:00:00Z TO 2016-11-24T00:00:00Z}
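The data type and the sortable and filterable settings for each field come from the index configuration of the source type. As an illustrative sketch only (verify the exact attribute names against your own index configuration), a field definition might look like the following:

"SUMMARY": {
    "dataType": "TEXT",
    "retrievable": true,
    "searchable": true,
    "sortable": false,
    "filterable": false
}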

Specifying a fieldName
The logRecord field contains the value of the actual log record index. It cannot be used in Apache Solr RegEx expressions. logRecord is always defined as text_general, and its values are tokenized. If a fieldName is not specified in the query, then logRecord is used as the field name by default. For example, the query

+"Transaction id 1234" +"error code 456"

is equivalent to the query

+logRecord:"Transaction id 1234" +logRecord:"error code 456"

It is therefore important that you specify a fieldName in your query.

Example 1 (TEXT DataType, sortable and/or filterable)


Case 1: Using the Solr RegEx {fieldName}:/{Apache Solr RegEx Expr}/.



In this example, you want to find instances of the error code 6789 in the SUMMARY field that have a
response time of more than 5 seconds, as in this entry:

fieldName:SUMMARY
fieldContents: "Transaction 12345 has failed with response time of 10 seconds and error code of
6789."

The syntax for querying using regular expressions is: {fieldName}:/{Apache Solr RegEx Expr}/
The query for this example is:

SUMMARY:/.* ([6-9]|[1-9][0-9]) seconds.*6789\./

This query specifies that the fieldName SUMMARY is to be searched, and the Solr RegEx that must be matched in this field. The Solr RegEx matches a single-digit integer in the range 6-9 OR a two-digit number in the range 10-99, immediately followed by the character sequence ' seconds', and then a series of characters (.*) that ends with the character sequence '6789.' (the final dot is escaped with a backslash).
Alternatively, the numerical range feature operator (<>) can be used:

SUMMARY:/.* <6-99> seconds.*6789\./

Case 2: Using a regular Solr search pattern {fieldName}:{search string}.


In this example, you want to find instances where:
• the SUMMARY field contains error code 6789.
• the HOSTNAME field contains myhost.ibm.com.
• the USER field does not contain sysadmin.
as in this entry:

fieldName:SUMMARY
fieldContents: "Transaction 12345 has failed with response time of 10 seconds and error code
6789."

fieldName:HOSTNAME
fieldContents: myhost.ibm.com
fieldContents: myhost.ibm.com
fieldContents: remotehost.ibm.com

fieldName:USER
fieldContents: sysadmin
fieldContents: user1
fieldContents: user2

The query for this example is:

+SUMMARY: "*error code 6789*" +HOSTNAME:myhost.ibm.com -USER:sysadmin

or

+SUMMARY: "*error code 6789*" AND HOSTNAME:myhost.ibm.com NOT USER:sysadmin

Example 2 (TEXT DataType, without sortable and/or filterable)


In this example, you want to find instances where:
• the SUMMARY field contains 6789.
• the SUMMARY field contains Transaction 12345.
• the USER field does not contain sysadmin.



such as in this entry:

fieldName:SUMMARY
fieldContents: "Transaction 12345 has failed with response time of 10 seconds and error code of
6789."

fieldName:User
fieldContents: sysadmin
fieldContents: user1
fieldContents: user2
...

The query for this example is:

+SUMMARY:"error code 6789" +SUMMARY:"Transaction 12345" -User:sysadmin

Because the text is tokenized, multiple AND clauses must be used for the SUMMARY field.
Note: Similar queries can be performed on fields with a TEXT DataType that are sortable and/or filterable.

Example 3 (Numeric DataTypes)


This is an example of range searches for a numeric (double or long) data type on the following entry.

fieldName:SerialNum
fieldContents:1
fieldContents:2
fieldContents:3
...
fieldContents:11
fieldContents:12

The query +SerialNum:[3 TO 10] returns records with a value from 3-10.
The query +SerialNum:{3 TO 10] returns records with a value from 4-10.
The query +SerialNum:{3 TO 10} returns records with a value from 4-9.

About Apache Solr regular expressions


Allowed characters
Any Unicode character can be used in Solr RegEx, but certain characters are reserved. The reserved characters are: . ? + * | { } [ ] ( ) " \.
If you enable the optional features (see below), then these characters may also be reserved: # @ & < > ~.
Any reserved character must be escaped with a backslash (\), including the backslash itself (\\).
Characters are interpreted literally when they are surrounded by double quotes (except the double quote character itself). For example: loganalysis"@developer.com".
The following regular expression constructs are not supported by Solr RegEx:
• \w word
• \b word
• \d digit
• \s whitespace
• ^ start of string
• $ end of string
• \t tab
• \n newline
• \r carriage return
The following table provides information on the operators that can be used in Solr RegEx.



Operators for Solr RegEx

Match any character.
Use a period (.) to represent any character.
For the string "loganalysis":
• logana..... # matches
• .og...l.sis # matches

Match one or more.
Use a plus sign (+) to match the preceding shortest pattern one or more times.
For the string "sssooolllrrr":
• s+o+l+r+ # match
• ss+oo+ll+rr+ # match
• s+.+ # match
• ss+oooo+ # no match

Match zero or more.
Use an asterisk (*) to match the preceding shortest pattern zero or more times.
For the string "mmmnnn":
• m*n* # match
• m*n*o* # match
• .*nnn.* # match
• mmm*nnn* # match

Match zero or one.
Use a question mark (?) to match the preceding pattern zero or one time.
For the string "yyyzzz":
• yyy?zzz? # match
• yyy?zzzz? # match
• .....?.? # match
• yy?zz? # no match

Specify min to max.
Use curly brackets ({}) to specify a minimum and (optionally) a maximum number of times that the preceding shortest pattern can repeat. The allowed forms are:
1. {4} # repeat exactly 4 times
2. {3,6} # repeat at least three times, and at most 6 times
3. {2,} # repeat at least twice
For the string "aaabbb":
• a{3}b{3} # match
• a{2,4}b{2,4} # match
• a{2,}b{2,} # match
• .{3}.{3} # match
• a{4}b{4} # no match
• a{4,6}b{4,6} # no match
• a{4,}b{4,} # no match

Grouping.
Use parentheses (()) to form sub-patterns. The quantity operators ({}) listed above operate on the shortest previous pattern, which can be a group.
For the string "ababab":
• (ab)+ # match
• ab(ab)+ # match
• (..)+ # match
• (...)+ # no match
• (ab)* # match
• abab(ab)? # match
• ab(ab)? # no match
• (ab){3} # match
• (ab){1,2} # no match

Alternation.
Use the pipe symbol (|) as an OR operator. The match succeeds if the pattern on either the left-hand side OR the right-hand side matches. The alternation applies to the longest pattern, not the shortest.
For the string "aabb":
• aabb|bbaa # match
• aacc|bb # no match
• aa(cc|bb) # match
• a+|b+ # no match
• a+b+|b+a+ # match
• a+(b|c)+ # match

Character classes.
Ranges of potential characters can be represented as character classes by enclosing them in square brackets ([]). A leading ^ negates the character class. The allowed forms are:
• [abc] # 'a' or 'b' or 'c'
• [a-c] # 'a' or 'b' or 'c'
• [-abc] # '-' or 'a' or 'b' or 'c'
• [abc\-] # '-' or 'a' or 'b' or 'c'
• [^abc] # any character except 'a' or 'b' or 'c'
• [^a-c] # any character except 'a' or 'b' or 'c'
• [^-abc] # any character except '-' or 'a' or 'b' or 'c'
• [^abc\-] # any character except 'a' or 'b' or 'c' or '-'
For the string "abcd":
• ab[cd]+ # match
• [a-d]+ # match
• [^a-d]+ # no match

Optional features for Solr RegEx

For complement: Use a tilde (~) to negate the shortest pattern next to it. For instance, "ab~cd" means: starts with a, followed by b, followed by a string of any length that is anything but c, and ends with d.
For the string "abcdef":
• ab~df # match
• ab~cf # match
• ab~cdef # no match
• a~(cb)def # match
• a~(bc)def # no match

For interval: Use angle brackets (<>) to specify a numeric range.
For the string "solr90":
• solr<1-100> # match
• solr<01-100> # match
• solr<001-100> # no match

For any string: Use @ to match any string in its entirety. This can be combined with the intersection and complement operators above to express "everything except".
For example: @&~(solr.+) # anything except strings beginning with "solr"



Apache Solr is an open source product, and the following notes are for guidance only. For the full set of
RegEx features that are compatible with Apache Solr, and for additional help and support, please see:
• the Apache Solr documentation: https://lucene.apache.org/solr/guide/7_5/index.html
• the Apache Solr forum: https://lucene.apache.org/solr/community.html

Search results timeline


You can use the timeline slider to view the search results for a specific duration. You can zoom in and out to change the range of the data displayed. If there are a large number of dates in the log file, the timeline might display them as ### rather than displaying the dates. Use the timeline scroller to zoom in and display the appropriate date information.
The Timeline does not display data at the seconds level for data ingested using IBM Operations Analytics
Log Analysis Version 1.1. A message is displayed indicating that the data was indexed using a previous
version of IBM Operations Analytics Log Analysis. For this data, the Timeline cannot display information
for multiple events that occur in time periods of less than one minute.

List and Grid views


Log records are displayed in both a grid view and a list view. The default view is the List view. Log records that are displayed in the grid view can be ordered by column for easy analysis. This view can be customized and used to display information in a range of ways:
Sorting in Grid view
You can sort the information in the table columns by clicking the column header. Not all
columns are sortable. The Index configuration determines the fields that can be sorted.
The _datasource field is an internal field and cannot be sorted or used for charting purposes. If you
want to sort your data by data source or to create a chart, create a field in the Index configuration for
this purpose. This field can be used for sorting and in charts.
The order in which the fields are displayed is governed by the associated index configuration and
source type. You can use the Index Configuration editor to add new fields or adjust the order of
existing fields.
For more information about editing index configuration, see the Editing an index configuration topic in
the Administrating and Installing Guide.
Toggle views
Click the List View or Grid View button to toggle between views. In each of these views, the button
displayed allows you to toggle to the alternative view.
Customizing the columns displayed
To configure Grid view to display only the columns that you require, click the Select Columns icon on
the Grid view toolbar, remove the columns that you do not want to display, and click OK.
Display a chart of your results
You can display the distribution of values for a number of the columns as a chart. To display the chart,
select the column and click the Plot Column icon on the Grid view toolbar. The distinct values used to
plot the chart can be viewed as hover help for each chart sector. The Plot feature is available for some
values only. Where available, the Plot Column button is active when the columns are selected.
If the chart title contains a loading icon, the chart is loading data from the archive. The chart is
automatically updated when all the searches are complete. If you log out before the search is
completed, the search stops.
Running a Custom Search Dashboard from the Grid view toolbar
If you have configured a shortcut to a Custom Search Dashboard, click the icon on the toolbar to
launch the Custom Search Dashboard.



Using a Custom Search Dashboard to display selected data from a column or cells
If you have created the required Custom Search Dashboard, you can select and display the contents
of a column or individual cells. To display the data, select a column or individual cell in Grid view and
then launch the application. If you select a column, only data from the currently displayed page is
displayed by the application.
Related concepts
“Custom Search Dashboards” on page 39
Custom Search Dashboards allow you to create and execute custom scripts and display the output of
those scripts in a dashboard.

Refining search results


You can refine the search results.
You can narrow your search by adding extra criteria in the search field. For example, the string
severity : E returns log lines that contain errors. Alternatively, you can perform a free text search
for a value in a column. All of the log lines that contain that text are returned. If more than 100 log lines
are returned, click the arrows to view more log lines.
Note: If a host file contains the character sequence ::1 next to the host name, ::1 might be displayed as
the value in the sourceip column.
You can also refine your search in these ways:

Search Patterns
To refine your search, use the values in the Search Patterns pane. For each new search, the list of fields
with which you can filter your search is updated and listed in the Search Patterns pane. The number of
occurrences of each value that is found is displayed with each unique keyword added as a child node.
Click a keyword to add it to the Search field.
The keyword is added in the format field:"value". You can add multiple keywords to refine your
search. If you want to run an OR query, type the word OR between each added keyword search string.
When you add all of the search criteria, click Search to return log lines that contain the values that you
specified.

Discovered Patterns
When you search a data source that has been configured with a Source Type that uses the Generic
annotator, the results of the search are listed in the Discovered Patterns pane.
For each new search, the list of fields with which you can filter your search is updated and listed. The
counts in the Discovered Patterns pane indicate the number of records that contain a specific key or
key-value pair. A key-value pair might occur multiple times in a record, but the total reflects the number of
records in which the key-value pair occurs. The count of the value of nodes in a key-value pair tree might
exceed the key count when multiple values occur for the same key in a single record.
Click a keyword to add it to the Search field. The keyword is added in the format field:"value".
You can add multiple keywords to refine your search. If you want to run an OR query, type the word OR
between each added keyword search string. When you add all of the search criteria, click Search to return
log lines that contain the values that you specified.
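For example, clicking the E and W values of a severity field and typing OR between the generated keywords produces a query such as:

severity:"E" OR severity:"W"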

Data Source filtering


Refine your search by selecting a Data Sources leaf node. When you select a leaf node in the Data
Sources tree, your search is refined to search only that data source and any descendant data sources. The
Data Sources tree is defined by selecting a service topology node when you configure your data source.
For more information, see Editing groups in the service topology JSON file.



Time Filters
Use the Time Filters list to refine your search based on a selected time period. Select a value from the list
to limit the search period. The time period chosen limits the search time period based on the log entries.
For example, choosing Last Hour limits the search to the final 60 minutes of log file entries.

Selecting a timeline value


Click a value in timeline to refine your search based on that value. Log events can be visualized up to
second-level granularity.

Selecting a time zone


To change the time zone that is used in one or all of your searches, click the Browser time link in the
timeline chart. For more information about changing the time zone, see Changing the search time zone in
the Searching and visualizing data guide.

Saving a search
After you search for a keyword or series of keywords, you can save your search so that you can run it again
at a later time. The searches you save are added to the Quick Searches pane.

About this task


Any directories that you created to organize your Quick Searches cannot be deleted. The directory
structure is maintained.

Procedure
To save a search:
1. In the Search workspace, click the Save Quick Search icon.
The Save Quick Search dialog box is displayed.
2. Enter a value in the Name and Tag fields. Adding a tag allows you to group similar searches within a folder.
3. (Optional) Specify a time range as an absolute or relative time. The default option is relative time.
4. Click OK.
The search is saved to the Quick Searches pane.

What to do next
To use a saved search pattern, browse to the saved search in the Quick Searches pane and double-click
the search pattern that you want to launch. You can also edit and delete the search from the right-click
menu.

Saved searches
To display a list of saved searches, click the Saved Searches icon.
The following saved searches are available by default after you install the sample data:
sample WAS System Out
Example search that displays results from WebSphere® Application Server.
sample DB2 db2diag
Example search that displays results from DB2®.
sample MQ amqerr
Example search that displays results from IBM MQ.
sample Oracle alert
Example search based on alerts for the Oracle sample data.



sample App transaction log
Example search based on the sample application's transaction log.
sample Omnibus events
Example search based on the sample Omnibus events.
sample Windows OS events
Example search based on the sample Windows OS events.

Visualizing data
You can create charts and graphs that help users to process information quickly and efficiently.

Creating charts and graphs


After you select one or more columns in the grid view, you can use the Plot Column button to create
charts to display the results.

About this task


To create a chart that is based on a specific field, you can create a search query and plot a chart based
on the results. If your query returns blank fields, the chart might not display. To ensure that this does not
happen, use queries that return data for the specific field. For example, to match records where the severity field has a value, enter severity:[* TO *].

Procedure
1. In the Search workspace, select Grid view.
2. Select one or more columns.
3. Click the Plot Column icon in the Grid view toolbar.
The Plot Chart UI is displayed.
4. To display counts for the selected columns, select the Generate Counts check box.
5. If the columns that you selected contain dates or numeric values, you can use the Granularity field to
specify the granularity.
You can use this setting only for columns that contain dates or numeric values and can be filtered.
6. If the columns that you selected contain numeric values, you can also apply statistical functions on the
values.

• If you are using the Entry Edition, you can use the sum, min, max, avg, and count functions.
• If you use the Standard Edition, you can use the missing, sumOfSquares, stddev, and percentile functions.
7. To plot the chart on 100 or fewer of the records that you selected, click Plot Chart (Current Page Data).
8. To plot the chart on all the indexed data, click Plot Chart (All Data). If one or more of the fields in
the selected columns is not filterable, the charts are only plotted if the total number of records is less
than 1000. To change this setting, you must modify the MAX_DATA_FACETS_IN_CHART property in
the unitysetup.properties file.
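For example, to raise the limit to 5000 records, edit the property in the unitysetup.properties file as follows (the value of 1000 described above is assumed to be the default):

MAX_DATA_FACETS_IN_CHART=5000

A restart of Log Analysis is typically required for changes to this file to take effect.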

Results
The graph is rendered. To change the graph type, use the Edit icon.
If the chart title contains a loading icon, the chart is loading data from the archive. The chart is
automatically updated when all the searches are complete. If you log out before the search is completed,
the search stops.



Percentile statistical functions
You can use the percentile statistical function to return values for specified percentiles.
You can use the Search REST API or the user interface to create percentile statistical functions.
For example, to query the maximum and minimum results for the 50th, 95th, and 99th percentiles with
the Search REST API, you enter "stats": ["min", "max", "percentile,50,95,99"]. The facet
response is:

{
  "min": 10,
  "max": 1000,
  "percentile":
  {
    "50": 100,
    "95": 200,
    "99": 225
  }
}

Percentile queries are not calculated incrementally, unlike the other queries that are used in Log Analysis. This means that the query needs to run over the entire time range before it can return any results. Log Analysis limits the number of asynchronous windows that can run simultaneously for this function. This limit is set in the MAX_NON_INCREMENTAL_WINDOWS property in the unitysetup.properties file. The default value is 2.
For example, if you specify a percentile query based on a time range from August 1 2015 to August 10
2015 and MAX_NON_INCREMENTAL_WINDOWS=5 and COLLECTION_ASYNC_WINDOW=1d, only the most
recent 5 days of data that is returned by the query are considered for percentile evaluation.
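The following Python sketch shows what a percentile query through the Search REST API might look like. The endpoint path, credentials, data source name, field name, and payload framing are assumptions for illustration only; only the "stats" list is taken from the example above. Check the Log Analysis REST API documentation for the exact request format.

import json
import requests

# Hypothetical endpoint; substitute your own host and port.
SEARCH_URL = "https://loganalysis.example.com:9987/Unity/Search"

# The "stats" entry mirrors the percentile example above. The
# surrounding facet structure is an assumption for illustration.
payload = {
    "logsources": [{"type": "logSource", "name": "/SampleWebApp"}],
    "query": "*",
    "facets": {
        "responseTimeStats": {
            "numeric_facet": {
                "field": "responseTime",  # hypothetical numeric field
                "stats": ["min", "max", "percentile,50,95,99"],
            }
        }
    },
}

response = requests.post(
    SEARCH_URL,
    auth=("unityadmin", "password"),  # replace with real credentials
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
    verify=False,  # many test installations use self-signed certificates
)
print(json.dumps(response.json(), indent=2))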

Dashboards
You can use dashboards to collate multiple charts, which are created during problem diagnosis, on a
single user interface (UI).
For example, imagine an organization uses IBM Operations Analytics Log Analysis to monitor all the server
logs that it generates. The system administrator wants to be able to view the most critical errors, the
highest severity errors, and the total number of errors on a single UI. To facilitate this scenario, you create
a dashboard that is called System Admin and you add the charts that show the required information to it.
The data that is displayed on the dashboards is based on charts. For more information, see the Charts
topic under Custom Search Dashboard > Steps to create a Custom Search Dashboard > Application
files in the Extending IBM Operations Analytics Log Analysis section.

Sample dashboards
Sample dashboards are included as part of the sample content for the following Custom Search
Dashboard samples:
• Sample_EventInsightpack_v1.0.0.0
• Sample_AppTransInsightpack_v1.0.0.0
• Sample_weblogInsightpack_v1.0.0.0
• WASInsightPack_v1.1.0.3



Creating dashboards
You can create a dashboard to visualize data from multiple sources on a single UI.

About this task


This procedure describes how to create a dashboard and chart. You can also add a chart to an existing
dashboard. To add a chart to an existing dashboard, click Add Charts to an Existing Dashboard. Select a
dashboard from the list.
Use the drill-down feature to search the data for records that correspond to the area of the chart that you
select. When you drill down on a field in a chart, a new search is created that is based on the field that you
selected.
The drill-down feature is only supported for dashboards that are created in the UI. The drill-down feature
is not supported in some cases where the underlying chart query does not support it. For example, if the
query searches two date fields.

Procedure
1. Open an existing search UI or click Add New Search to create a new search UI.
2. Switch to the Grid View and plot the charts that you would like to include in the dashboard.
You cannot use more than eight charts in a single dashboard.
3. To plot the charts that you would like to include in the dashboard, select the columns that you are
interested in and click the Plot column icon. Click Plot Chart (All Data).
If you want to use the drill-down feature, you must click Plot Chart (All Data). You cannot use the Plot
Chart (Current Page Data) button. The drill-down function is not supported for this option.
You can also run a number of statistical operations on the selected columns. Select one of the
following functions from the Summary Function drop-down list.
min
The minimum values in a field.
max
The maximum values in a field.
sum
The sum of the values in a field.
avg
The average of the values in a field.
count
The count of the values in a field.
missing
The number of records for which the value for a field is missing.
sumOfSquares
Sum of the squares of the values in a field.
stddev
The standard deviation of the values in a field.
4. To create the dashboard, click the Create New Dashboard button. Enter a name and a tag. The tags
are used to define groupings.
5. Save the dashboard.

What to do next
After the dashboard is created, a Custom Search Dashboard is automatically generated that represents
the dashboard. You can view the Custom Search Dashboard and dashboard on the Search UI under
Search Dashboard > Dashboards.



To view the visualization and query settings for the charts that you added to a custom dashboard, click
Search Dashboard > Dashboards > Dashboard name. Select the required chart. Click the Settings icon.
There are two tabs, the Query and the Visualization tabs.
Information about the query that provides the information that is visualized in the chart is displayed on
the Query tab. This query is saved when the chart is created. To change the time filter from the default
relative setting to match the absolute time that is used by the Search UI, open the Query tab and select
the setting from the list. To view the effect of changing this setting without changing the dashboard, click
Render.
The chart type and parameters are detailed on the Visualization tab. These settings are the settings that
you made when you created the chart. For more information about these settings, see the Application files
and Charts topics in the Custom Search Dashboards section of the Extending IBM Operations Analytics Log
Analysis guide.
To ensure that your dashboard displays current information, you can use the auto-refresh feature to
regularly refresh a dashboard at scheduled intervals. For more information on configuring automatic
dashboard refreshes, see the Administering IBM Operations Analytics Log Analysis guide.

Deleting search dashboards


If you no longer need a dashboard, you can delete it.

About this task


Two types of dashboard are displayed in the Search Dashboards list on the UI: dynamic dashboards
and Custom Search Dashboards. Dynamic dashboards do not contain any custom logic, and the type
value that is defined in the associated JSON file is DynamicDashboard. Custom Search Dashboards are
customized dashboards that are installed as part of an Insight® Pack; they do contain custom logic.
The delete function behaves differently for each type. When you delete a dynamic dashboard, the
dashboard and all related data are deleted. When you delete a Custom Search Dashboard, the Custom
Search Dashboard file extension is changed to .DELETED for deletion by the IBM Operations Analytics Log
Analysis administrator.

Procedure
1. Open the Search UI.
2. Open the Search Dashboards list and select the dashboard that you want to delete.
3. Right-click the dashboard and click Delete. Confirm that you want to delete it when prompted.

Results
If the dashboard is a dynamic dashboard, the dashboard and associated data are deleted. If the dashboard
is a Custom Search Dashboard, the Custom Search Dashboard file extension is changed to .DELETED. You
can contact the IBM Operations Analytics Log Analysis administrator and ask them to delete the Custom
Search Dashboard if appropriate.

Aggregated Search
Use Aggregated Search to search aggregated data from multiple instances of Log Analysis.
You can install multiple instances of Log Analysis. For example, your environment might consist of data
centers in different regions. You install an instance of Log Analysis in each region. The Aggregated Search
feature helps you to view, search, and drill down into the aggregated data from these instances of Log
Analysis.
Before you can use Aggregated Search, you must configure it, including how you want to aggregate data
that is sent from the children to the parent node. For more information, see “Configuring Aggregated
Search” on page 25.



After you configure Aggregated Search, you can use it to help you to get an operational overview of your
data centers. You can view and search aggregated data in your parent node. You can also drill down from
this node to the child nodes to view the data there. For more information, see “Searching aggregated
data” on page 33.
Only Standard Edition users can use this feature.

Configuring Aggregated Search
To configure Aggregated Search, complete the following steps.
1. To specify the topology of your cluster, create the required JSON file on each Log Analysis server in
your cluster. For more information, see “Defining the topology for Aggregated Search” on page 25.
2. Configure Aggregated Search automatically or manually. You can also configure Lightweight Directory
Access Protocol (LDAP) for user authentication as part of this step.
For more information about how to configure it automatically, see “Configuring Aggregated Search
automatically” on page 29.
For more information about how to configure it manually, see “Configuring Aggregated Search
manually” on page 30.
3. Create a data aggregation template to specify how data is aggregated. For more information, see
“Creating aggregation templates” on page 35.
4. Review the information about how you need to set up your users, roles, and access permissions
to ensure that records are visible. For more information, see “Role-based Access Management for
Aggregated Search” on page 31.

Defining the topology for Aggregated Search
Before you use the Aggregated Search feature, you must define the topology.

About this task


Create the JSON file on each Log Analysis server in your cluster.

Procedure
1. Create a JSON file and specify the topology of the data center in it. Specify the parameters described
in the Topology parameters table.

Table 8. Topology parameters

OS_user
The operating system user name. Specify the name of the user who installed Log Analysis on the server.
OS_password
The password for the operating system user. This parameter is optional. If you do not specify it, the script prompts you for it when you configure Aggregated Search automatically.
LA_home
The directory where Log Analysis is installed on the node.
LA_host
The fully qualified host name for the Log Analysis server. To facilitate single sign-on (SSO), use a fully qualified host name.
LA_port
The port number that is used by Log Analysis. This value must be an integer. You cannot use a string.
LA_user
The Log Analysis user name.
LA_password
The Log Analysis user password. If you are automatically configuring Aggregated Search, you can specify a plain text password or you can leave this parameter blank. The utility encrypts the password when it copies the topology. If you do not specify a password, the configuration script prompts you for it when you configure Aggregated Search automatically. If you are manually configuring it, you can use plain text or encrypted passwords. You can use the unity_securityUtility.sh utility to encrypt the password on each server.
children
The names of the immediate descendants of the topology node.
connected
The names of the nodes that can be accessed from the node that is defined in this section of the JSON. If you do not specify any values or specify a blank array, all nodes can be accessed from this instance of Log Analysis.
sshPort
The operating system port that is used for Secure Shell (SSH) authentication. If you do not specify a value, the default value of 22 is used.
2. Save your changes.

Example
The following example displays the structure of the input JSON object.

{
"AP": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server1.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": ["India", "Australia"],
"connected":[]
},
"Europe": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server2.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": ["UK", "France"]
},
"India": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server3.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": ["Pune", "Bangalore"],
"connected": ["AP", "Pune", "Bangalore"]
},
"Australia": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server4.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": ["Sydney", "Perth"]
},
"UK": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server5.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": ["London"]
},
"France": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server6.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": ["Paris"]
},
"Pune": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server7.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": []
},
"Bangalore": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server8.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": []
},
"Sydney": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server9.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": []
},
"Perth": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server10.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": []
},
"London": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server11.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": []
},
"Global": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server12.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": ["AP", "Europe"],
"connected":[]
},
"Paris": {
"OS_user": "unity",
"sshPort": "22",
"LA_home": "/home/unity/IBM/LogAnalysis/",
"LA_host": "server13.example.com",
"LA_port": 9987,
"LA_user": "unityadmin",
"LA_password": "unityadmin",
"children": []
}
}

The example defines the following topology: Global is the root node; AP and Europe are the regional nodes; India, Australia, UK, and France are the country nodes; and Pune, Bangalore, Sydney, Perth, London, and Paris are the city nodes.

When you export an aggregation template, the level is specified in the JSON file. The top or root node is
the Global node and it is designated level 0 in an exported aggregation template. The regions are AP and
Europe and are on the second level and are designated level 1 in the template. The countries are defined
on the third level and are assigned level 2. The bottom level contains the cities and is designated as level
3. You can only import templates from the same level. You cannot import templates from another level.
For example, if you export a template from the Pune node on the bottom level, you cannot import it into
the India node.

What to do next
After you create the topology, you need to enable Aggregated Search.
If you are automatically configuring Aggregated Search, run the utility. For more information, see
“Configuring Aggregated Search automatically” on page 29.



If you are configuring Aggregated Search manually, copy the topology file to each Log Analysis server.
Note the directories where you save these files as you must specify it in the unitysetup.properties
file when you enable the Aggregated Search feature. For more information, see “Configuring Aggregated
Search manually” on page 30.

Configuring Aggregated Search automatically
To automatically configure Aggregated Search, complete the following steps.

Before you begin


Create a topology before you start. To specify the topology, create a JSON file with the required
parameters.
Use plain text passwords in the topology. The aggregationConfigUtil.sh script encrypts these
passwords when it copies the topology to the nodes. The password is optional. If you do not specify
it, users are required to enter the password when they run the script.

About this task


You use the <HOME>/IBM/LogAnalysis/utilities/aggregation/aggregationConfigUtil.sh
script to configure Aggregated Search.

Procedure
1. Go to the <HOME>/IBM/LogAnalysis/utilities/aggregation directory.
2. To copy the topology file and enable Aggregated Search in the unitysetup.properties file, enter
the following command:

./aggregationConfigUtil.sh -tf ~/<topology_file_name> -n all -o setuptopology

where <topology_file_name> is the name of the topology file. For example, example.json.
3. To set up the Java™ certificates, enter the following command:

./aggregationConfigUtil.sh -tf ~/<topology_file_name> -n all -o setupjavakeystores

4. You can also configure Lightweight Directory Access Protocol (LDAP) for user authentication. This
step is optional and is only required if you want to use LDAP. To configure LDAP, enter the following
command:

./aggregationConfigUtil.sh -tf ~/<topology_file_name> -n all -o configldap -lp ~/ldapRegistryHelper.properties

5. You can also configure single sign-on (SSO) to facilitate the drill-down feature. To configure SSO, you
must set up LDAP as described in step 4. This step is optional and is only required if you want to use
SSO when you drill down. To configure SSO and update the server.xml file with the domain name, enter
the following command, specifying the path to the keys file:

./aggregationConfigUtil.sh -tf ~/<topology_file_name> -n all -o configsso -kf ~/<path_ltpa_file>

where <path_ltpa_file> is the path to the key file that is copied to all nodes in the cluster. For example,
/home/la/ltpa.keys.
6. Restart each instance of Log Analysis. To restart Log Analysis, log in to each server and enter the
following command:

<HOME>/IBM/LogAnalysis/utilities/unity.sh -restart



What to do next
To verify that SSO is configured correctly, enter the following command:

./aggregationConfigUtil.sh -tf ~/<topology_file_name> -n all -o checksso

If any certificate errors occur when you use the checksso option, enter the following command:

./aggregationConfigUtil.sh -tf ~/<topology_file_name> -n all -o prepsso

You can also configure Aggregated Search with a single command. You can omit any options that you do
not want to use such as configsso. For example:

./aggregationConfigUtil.sh -o setuptopology,setupjavakeystores,configldap,configsso
-tf ~/<topology_file_name> -n all -lp ~/ldapRegistryHelper.properties
-kf ~/<path_ltpa_file>

After you configure Aggregated Search, you can create an aggregation template to specify how the data is
aggregated and is sent from the children nodes to the parent nodes. For more information, see “Creating
aggregation templates” on page 35.

Configuring Aggregated Search manually


To manually configure the Aggregated Search feature, complete these steps.

Before you begin


Create the topology file and copy the file to a directory on each Log Analysis server in your cluster. For
more information, see “Defining the topology for Aggregated Search” on page 25.
The Log Analysis password that you specify can be in plain text or it can be encrypted. You can use the
unity_securityUtility.sh utility to encrypt passwords on each node. To encrypt the password, use
the utility on the server that you are copying the topology file to.
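For example, an invocation typically looks like the following sketch. The encode action is an assumption based on similar security utilities; check the help output of unity_securityUtility.sh for the exact syntax in your release.

<HOME>/IBM/LogAnalysis/utilities/unity_securityUtility.sh encode <password>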

About this task


Repeat the following step for each Log Analysis server that is specified in your topology file.

Procedure
1. To stop the Log Analysis server, enter the following command:

<HOME>/IBM/LogAnalysis/utilities/unity.sh -stop

2. Open the <HOME>/IBM/LogAnalysis/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/unitysetup.properties file.
3. To enable the Aggregated Search feature, change the following property value to true. The default
value is false. For example:

ENABLE_AGGREGATION = true

4. Specify the path to the JSON file that contains the topology information.
For example:

DC_TOPOLOGY_PATH=<path_topology_json>

where <path_topology_json> is the path to the file that contains the topology information. For
example, /home/la/datacenter_topology.json.
5. Specify the name of the local node. This name must correspond to the name specified in the topology
JSON file.
For example,



DC_TOPOLOGY_NODE_NAME=<node_name>

Where <node_name> is the name of the node.
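Taken together, the manual settings for the India node from the earlier topology example would look like the following excerpt (the file path is the example value that is used in step 4):

ENABLE_AGGREGATION = true
DC_TOPOLOGY_PATH=/home/la/datacenter_topology.json
DC_TOPOLOGY_NODE_NAME=India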


6. Save your changes.
7. To import the Java client certificates, complete the following steps:
a. Copy the <HOME>/IBM/LogAnalysis/wlp/usr/servers/Unity/resources/security/
client.crt file from the current node to a temporary directory. If your current node has one
parent node and two child nodes that are specified in the topology, copy 3 client.crt files, one
from each node, to the temporary directory.
b. After you copy the file, enter the following command to import it:

<HOME>/IBM/LogAnalysis/ibm-java/bin/keytool -import
-file <crt_file_path>
-keystore <HOME>/IBM/LogAnalysis/wlp/usr/servers/Unity/
resources/security/key.jks
-alias <alias_name> -storepass loganalytics

where <crt_file_path> is the full path to the client.crt file in the temporary directory where you
saved it. For example, /tmp/AP_cert/client.crt. <alias_name> is the name of each certificate
that you want to import for the parent and child nodes. Use a unique alias for each certificate
that you import.
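For example, to import the certificate that was copied from the AP node, the command might look like the following; the alias AP_node is an illustrative choice:

<HOME>/IBM/LogAnalysis/ibm-java/bin/keytool -import
-file /tmp/AP_cert/client.crt
-keystore <HOME>/IBM/LogAnalysis/wlp/usr/servers/Unity/resources/security/key.jks
-alias AP_node -storepass loganalytics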
8. If you want to use LDAP for authentication, configure it. This step is optional and is only required for
authentication.
9. You can also configure single sign-on (SSO). This step is optional. It is required to allow users to drill
down without having to sign in to each node.
10. To start the Log Analysis server, enter the following command:

<HOME>/IBM/LogAnalysis/utilities/unity.sh -restart

What to do next
After you configure Aggregated Search, you can create an aggregation template to specify how data that is
sent from the children nodes is aggregated. For more information, see “Creating aggregation templates”
on page 35.

Role-based Access Management for Aggregated Search
To view aggregated data, the user must have read access to the data sources whose data was aggregated.
Aggregated data flows upwards from child nodes to the parent node. The drill-down feature allows users
in the parent node to view data that is generated in the child nodes. Users in the parent node can search
all aggregated data but they can drill down only to the records that they have access to. Access is defined
in the data source that collected the data.
For example, consider the following situation. Assume that this example has three instances of Log
Analysis:
• Asia Pacific (AP). It represents the region and is the global node that is at the top of the hierarchy.
• India. It represents the country and is a child node that is in the middle of the hierarchy.
• Bangalore. It represents the city and is a child node that is at the bottom of the hierarchy.
The Bangalore node contains two data sources. Different users have access to these data sources, as
listed in the following table.

Table 9. Bangalore data sources and users

Data source   Users with read access
DS1           user1 and user3
DS2           user2 and user3

DS1 contains 10 records and DS2 contains 5. user1 can view records from DS1. user2 can view records
from DS2 only. user3 can view all 15 records.
You create two aggregation templates in the Bangalore node. The templates are called Aggregation1
and Aggregation2. The data that is aggregated by Aggregation1 is collected by DS1. The data that is
aggregated by Aggregation2 is collected by DS2. Aggregation1 uses the COUNT function to count the
number of records that are collected by DS1, in this case 10. Aggregation2 uses the COUNT function
to count the number of records that are collected by DS2, in this case 5. Both templates send the
aggregated data to the India node. It is sent from here to the _aggregations data source. Aggregation1
sends one aggregation record with an aggregated value of 10 to the India node. Aggregation2 sends one
aggregation record with an aggregated value of 5 to the India node. Both aggregation records are loaded
by the _aggregations data source.
In the India node, you create an aggregation template, Aggregation3, which uses the SUM function
to add the aggregated values in the _aggregations data source. Aggregation3 creates one aggregation
record with an aggregated value of 15 and sends it to the AP node where it is loaded by the _aggregations
data source.
When users log in to the AP node and search the _aggregations data source, one aggregation record
with an aggregated value of 15 is displayed. When they drill down to the India node, 1 or 2 aggregation
records with aggregated values of 10 and 5 are displayed. The number depends on each user's access
as assigned in the data sources in the child nodes, in this case DS1 and DS2.
When user1 logs in to the AP node and searches the _aggregations data source, one aggregation record
with an aggregated value of 15 is displayed. When user1 drills down, one aggregation record with a value
of 10 is displayed. This aggregation record is generated from the Aggregation1 template that is specified
in the Bangalore node. user1 cannot view the aggregation record with a value of 5 that is generated by
the Aggregation2 template. This situation occurs because user1 does not have access to the view records
from DS2. When user1 or user3 and user 2 or user 3 logs in to the Bangalore node, they can view 10 and
5 records that are loaded by DS1 and DS2. These records are the basis for the aggregation records in the
other nodes.
When user2 logs in to the AP node, one aggregation record is displayed with an aggregated value of 15.
When user2 drills down to the India node, one aggregation record is displayed with an aggregated value of
5. They can also drill down to view five records in the Bangalore node. These records are the records that
are associated with DS2. user2 cannot view the records that are associated with DS1.
When user3 logs in to the AP node, one aggregation record with a value of 15 is displayed. When user3
drills down to the India node, two aggregation records are displayed with aggregated values of 10 and 5.
They can also drill down to view 15 records that are associated with DS1 and DS2 in the Bangalore node.
This example is summarized in the following table:

Table 10. RBAM for Aggregated Search example

Node        user1                        user2                        user3
AP          1 aggregation record with    1 aggregation record with    1 aggregation record with
            an aggregated value of 15    an aggregated value of 15    an aggregated value of 15
India       1 aggregation record with    1 aggregation record with    2 aggregation records with
            an aggregated value of 10    an aggregated value of 5     aggregated values of 10 and 5
Bangalore   10 records from DS1          5 records from DS2           15 log file records: 10 from
                                                                      DS1 and 5 from DS2

Datacenter Topology UI
Use the Datacenter Topology UI to view the hierarchy of the Log Analysis servers in your data center and
to navigate from one Log Analysis server to another connected Log Analysis server search page.

To open the Datacenter Topology UI, click the Datacenter Topology icon.
The Datacenter Topology window displays all of the datacenter nodes. The current node is highlighted in
red.
You cannot access Log Analysis server nodes that are highlighted in grey. The nodes that can be
accessed are those which are specified in the connected parameter in the topology JSON file. For more
information, see “Defining the topology for Aggregated Search” on page 25.

Searching aggregated data


After you configure Aggregated Search, you can search the aggregated data.

About this task


To search aggregated data, you filter for the _aggregation data source as described in this task. You can
also use the standard search features. For more information, see “Searching data” on page 8.

Procedure
1. Click the New Search or Add Search tab to open a new search table.

2. Select the _aggregation data source from the Data Sources tree.
3. Click Search.
4. To view the aggregated search details, click Grid.
5. Select the log record for which you want to drill down to a specific aggregation.
6. Click the Launch icon. A new window displays the next level Log Analysis server UI with the
corresponding log file records.

Manage Aggregations UI reference
Use the Manage Aggregations UI to create, edit, delete, import, and export aggregation templates.
The following table lists the icons on the Manage Aggregations UI:

Table 11. Manage Aggregations UI

Name              Description
Add template      Create a template.
Delete template   Delete the selected templates.
Import template   Import the selected templates.
Export template   Export the selected templates.

The following table lists the fields on the Add Aggregation Template UI:

Table 12. Fields on the Manage Aggregations UI

Name
The name of the aggregation template.
Label
Enter any further descriptions or labels that you want to add.
Description
The aggregation template description.
Associate with
Select the source type with which the template is associated.
Query
Specify the query that you want to use to filter log records for aggregation. You can use the same query syntax that you use for any search. For more information, see “Query syntax” on page 10.
Query Builder icon
Click this icon to open the Query Builder UI. Use it to create a query based on the regions, aggregation templates, and source types that you want to search. You can use this UI only on regional nodes. It is not available on the top or bottom nodes in your topology.
Aggregation Function
Select the aggregation function from the drop-down list. Possible values are COUNT, MIN, MAX, and SUM. If you use the COUNT value, you do not need to use the Aggregation Attribute field.
Aggregation Attribute
Select the attribute that the aggregation function is run against.
Aggregation Window
Select the aggregation time interval. You can select 1, 2, 3, 4, 6, 8, 12, or 24 hours. An aggregation record is created after half this time elapses. For example, if you select 2 hours, a new aggregation record is created every 1 hour that queries for data that is created in the last 2 hours.
Threshold Operator
Select the threshold operator from the drop-down list. You can use less than, greater than, equal to, less than or equal to, and greater than or equal to.
Threshold
Specify the value that triggers the threshold operator.
Datasource Type
Select the datasource type from the drop-down list. The type can be Physical or Logical.

34 IBM Operations Analytics Log Analysis: User's Guide


Managing aggregation templates
You can import, export, create, edit, and delete aggregation templates.

Creating aggregation templates
Before you can use the Aggregated Search feature, create an aggregation template. Aggregation
templates define the data that is aggregated and how it is aggregated before it is sent to the parent
node.

Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.

3. To create a new template, click the Add Template icon.


4. Specify the required parameters as outlined in the following table.

Table 13. Fields on the Manage Aggregations UI

Name
The name of the aggregation template.
Label
Enter any further descriptions or labels that you want to add.
Description
The aggregation template description.
Associate with
Select the source type with which the template is associated.
Query
Specify the query that you want to use to filter log records for aggregation. You can use the same query syntax that you use for any search. For more information, see “Query syntax” on page 10.
Query Builder icon
Click this icon to open the Query Builder UI. Use it to create a query based on the regions, aggregation templates, and source types that you want to search. You can use this UI only on regional nodes. It is not available on the top or bottom nodes in your topology.
Aggregation Function
Select the aggregation function from the drop-down list. Possible values are COUNT, MIN, MAX, and SUM. If you use the COUNT value, you do not need to use the Aggregation Attribute field.
Aggregation Attribute
Select the attribute that the aggregation function is run against.
Aggregation Window
Select the aggregation time interval. You can select 1, 2, 3, 4, 6, 8, 12, or 24 hours. An aggregation record is created after half this time elapses. For example, if you select 2 hours, a new aggregation record is created every 1 hour that queries for data that is created in the last 2 hours.
Threshold Operator
Select the threshold operator from the drop-down list. You can use less than, greater than, equal to, less than or equal to, and greater than or equal to.
Threshold
Specify the value that triggers the threshold operator.
Datasource Type
Select the datasource type from the drop-down list. The type can be Physical or Logical.

5. To save the template, click OK.


6. To confirm that you want to save the template, click OK.

What to do next
You can import and export your templates. You can also edit and delete them.

Exporting aggregation templates
You can export aggregation templates to back up your data, and to import it into another Log Analysis
server.

Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.

3. Select one or more templates that you want to export, and click the Export Templates icon.
4. Select the path to the directory where you want to export the template to in the File Selector field, and
click OK.

What to do next
The aggregation templates that you selected are exported in the JSON format. You can import these
templates into other instances of Log Analysis.
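As a purely illustrative sketch, an exported file might contain entries like the following. The field names are assumptions that are derived from the Add Aggregation Template fields and from the level attribute that is described in the topology topic; the actual export schema may differ.

[
{
"name": "Aggregation1",
"description": "Counts records collected by DS1",
"sourceType": "WASSystemOut",
"query": "*",
"function": "COUNT",
"window": 2,
"level": 3
}
]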

Importing aggregation templates
To import aggregation templates, complete these steps.

Before you begin


Ensure that the file that you are importing is in the JSON format and was exported from Log Analysis.
When you import an aggregation template, ensure that the required source types exist in the server that
you are importing the files into. The source type is specified in the data source configuration. For example,



suppose that you want to import aggregated data from the DS1 data source, which uses the WASSystemOut source
type and DS2, which uses the WASSystemOutExtended source type into another Log Analysis server.
However, only the WASSystemOut source type exists on the server. When you try to import the template,
only the template that is associated with the WASSystemOut source type is imported. To avoid this issue,
create the missing source type.
You must import an aggregation template into a node that is on the same level in the hierarchy. The level
is defined by the children that are specified in the topology file. For more information, see “Defining the
topology for Aggregated Search” on page 25.

Procedure
1. Export the Aggregated Search aggregation template.
For more information, see “Exporting aggregation templates” on page 36.
2. Click Administrative Settings.
3. Click the Manage Aggregations tab.

4. Click the Import Templates icon. Select the file that you want to import, and import it.

Results
The aggregation templates are imported.

Editing aggregation templates


To edit an aggregation template, complete these steps.

Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.
3. Select the template that you want to edit.
4. To edit the template, click the Edit icon.
5. Edit the template values as required. For more information about the template values, see “Manage
Aggregations UI reference” on page 33.
6. To save the changes to the template, select OK.

Deleting aggregation templates
To delete an aggregation template, complete these steps.

Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.
3. Select one or more templates that you want to delete.
4. To delete the template, click the Delete Template icon.
5. To confirm that you want to delete the template, select OK.

Results
The selected Aggregated Search aggregation template is deleted.



Turning Aggregated Search on and off
To turn the Aggregated Search feature on or off, complete the following steps.

About this task


If you choose to turn off the feature, do so for each instance of Log Analysis. If you turn off the feature for
some nodes and not others, it can disrupt the flow of data in your specified topology.

Procedure
1. To stop the Log Analysis server, enter the following command:

<HOME>/IBM/LogAnalysis/utilities/unity.sh -stop

2. To turn the feature on or off, edit the <HOME>/IBM/LogAnalysis/wlp/usr/servers/Unity/apps/Unity.war/WEB-INF/unitysetup.properties file.
To turn the Aggregated Search feature on, change the following property value to true:

ENABLE_AGGREGATION = true

To turn the Aggregated Search feature off, change the following property value to false:

ENABLE_AGGREGATION = false

3. To start the Log Analysis server, enter the following command:

<HOME>/IBM/LogAnalysis/utilities/unity.sh -start

Search dashboards
To display a list of search dashboards, click the Search Dashboards icon on the side bar.
The Dashboards group contains the following search dashboards:
sample-events-hotspots
Displays example hot spot reports for the sample events.
WAS Errors and Warnings Dashboard
Displays example reports for errors and warnings that are generated by the sample WebSphere
Application Server application.
Sample-Web-App
Displays example reports for errors and warnings that are generated by the sample web application.
The DB2AppInsightPack group contains the following search dashboards:
DB2 Information Links
Displays useful information links for more information about DB2.
DB2 Troubleshooting
Displays example reports for DB2.
The ExpertAdvice group contains the following search dashboard:
IBMSupportPortal-ExpertAdvice
Displays search results based on your searches.
The WASAppInsightPack group contains the following search dashboards:
WAS Information Links
Displays useful information links for more information about WebSphere Application Server.
WAS Errors and Warnings
Displays example reports based on errors and warnings in WebSphere Application Server.



The WindowsOSEventsInsightPack group contains the following search dashboard:
Windows Events Log Dashboard
Displays example reports that are based on event data from the Windows operating system.
The alertsInsightPack group contains the following search dashboard:
Alerts Dashboard
Displays alert data based on the alerts that are created on the Manager Alerts UI. The charts display
the alert information grouped by type and data source and over time.

Alerts dashboard
Use the Alerts dashboard to view information based on the alerts that you create in Log Analysis.
This feature is not available in the Entry Edition.
The data that is displayed on the dashboard is based on the last time data from one of the data sources
used for alerts was loaded into Log Analysis. This means that there is a delay between changes to your
alerts and the information that is displayed on the dashboard. In this case, you need to wait until data is
loaded into Log Analysis again to see the latest data.
If the dashboard does not display any data, ensure that the time that you specified matches a period
when alerts were created.

Prerequisites
You must install IBM Operations Analytics Log Analysis Fix Pack 1.
You must create alerts in Log Analysis.

Using the dashboard


The dashboard displays information on four graphs:
TOTAL ALERTS BY TYPE
Displays the total number of alerts grouped by type.
TOTAL ALERTS BY TYPE OVER TIME
Displays the total number of alerts grouped by type for the chosen time range.
TOTAL ALERTS BY DATASOURCE
Displays the total number of alerts grouped by data source.
TOTAL ALERTS BY DATASOURCE OVER TIME
Displays the total number of alerts grouped by data source for the chosen time range.

Custom Search Dashboards


Custom Search Dashboards allow you to create and execute custom scripts and display the output of
those scripts in a dashboard.
To generate a Custom Search Dashboard, you create a JSON application file in the <HOME>/IBM/
LogAnalysis/AppFramework/Apps directory. You can create sub directories in this directory to
organize your applications.
After you have created your Custom Search Dashboard, the Custom Search Dashboards pane, in the
Search workspace, displays the list of Custom Search Dashboards in the folder structure that you have
specified.
To run a Custom Search Dashboard, locate it in the Custom Search Dashboards pane, right-click it, and
click Execute.



Expert advice
Expert advice is a Custom Search Dashboard that provides links to contextually relevant information
to allow you to quickly resolve problems. Using the Expert advice Custom Search Dashboard, you can
select any column or cells in Grid view and launch a search of the IBM support portal (IBMSupportPortal-
ExpertAdvice.app). The Custom Search Dashboard searches for matches to unique terms contained in the
column that you have selected. This Custom Search Dashboard can be launched from the Custom Search
Dashboards panel in the left navigation pane of the Search workspace.
To increase the likelihood of success, the Custom Search Dashboard removes data that is specific to your
environment for each search term. For example, a log message that contained the search string unable
to access jarfile /myMachine/foo/bar/foobar.jar is not likely to return a specific match as
the server path is likely to be specific to a user. This is abbreviated to unable to access jarfile to
ensure better search results. The criteria used to exclude data can be configured.
To launch the Expert advice Custom Search Dashboard for the data returned by a search, select
a column or cell of data in Grid view, right-click and click Execute to launch the IBMSupportPortal-
ExpertAdvice.app Custom Search Dashboard.

Creating a Custom Search Dashboard


You can use Custom Search Dashboards to extract data from IBM Operations Analytics Log Analysis or
from any external source and present that data in a useful format such as a chart.
A Custom Search Dashboard must include a script to generate data, any parameters required by the
script, and a specification to define how that data is displayed. There are two types of output that are
generated by Custom Search Dashboards:
Dashboard data
Create an IBM Operations Analytics Log Analysis dashboard to display charts based on the data
generated by a Custom Search Dashboard. You can also use a dashboard to display HTML content.
Search filters
Generate keywords and populate the Configured Patterns widget to drill down through a search.
When using the Insight Pack tooling in Eclipse, there is a folder called src-files/unity_apps in
the Insight Pack project, specifically for Custom Search Dashboard files. There are sub folders for
apps, chartspecs, and templates to contain application definitions, chart specifications, and template
definitions, respectively.

Defining a Custom Search Dashboard


To create a Custom Search Dashboard you must create an application file and also create a script that
extracts data to be displayed in your Custom Search Dashboard. Where appropriate, you can also create
a template file that contains an array of JSON objects. Templates define the structure for Custom Search
Dashboards and also allow you to share common components across Custom Search Dashboards.

Prerequisite knowledge
Creating a Custom Search Dashboard requires that you have a knowledge of these areas:
• Coding in one of the accepted formats: Python, Shell scripting, or Java programming
• JSON development

Custom Search Dashboard components


These are the components that are required for a Custom Search Dashboard:
Application
The application file is a JSON file that contains the configuration for your Custom Search Dashboard.
The JSON application file is saved with a .app file extension and must be located in the <HOME>/



AppFramework/Apps/ directory or in a directory within this directory. Any directory structure you
create is reflected in the Custom Search Dashboards pane in the Search workspace.
The application file contains:
1. References to the script that defines your Custom Search Dashboard
2. Parameters that must be passed to the script to generate data, such as user credentials and host
name.
3. Specifications that define the layout of your Custom Search Dashboard and the charts that are
displayed, including the chart type, and/or the HTML to be displayed.
Script
The script that you run to define your Custom Search Dashboard can be a shell script, a python script
or a Java program. If you are running a Java program, you must bundle it as an executable JAR file.
The script performs the actions:
1. Generates an HTTP POST request to an IBM Operations Analytics Log Analysis URL.
2. Extracts the JSON that contains the data generated by the search in the HTTP POST request.
3. Generates JSON string representing HTML and/or data that the Charts can process.
Scripts can also be written to extract data from an external source instead of indexed data.
The script must be saved to the same folder as the application file and is called by the application file.
Note: For IBM Operations Analytics Log Analysis 1.2.0.3 and later, the IBM Operations Analytics Log
Analysis query syntax is based on the Indexing Engine query syntax rather than the IBM Data Explorer
query syntax, which was used in releases before 1.2.0.3. If your Custom Search Dashboard uses
query syntax from a version of IBM Operations Analytics Log Analysis that is older than 1.2.0.3, you
must update the query syntax used in your script to match the Indexing Engine based query syntax.
For more information, see Appendix A: Query syntax in the Searching Guide.
Templates
Templates contain JSON objects that are similar to the JSON used for the application file. The
template JSON objects contain an additional key named template with the value that is set to true.
Each template can reference one or more custom scripts.
The templates are saved with a file extension of .tmpl to the <HOME>/AppFramework/Templates/
directory or a directory within this directory. You must store the script file(s) referenced by the
template in the same folder as the template.
If an application file specifies a template, the script located in the same folder as the template is
executed. Templates can be used to create a common structure for a group of applications and also to
share common configuration elements across applications.
Script parameters can be marked as mandatory by adding the parameter required and setting the
value to true. You can also set a default value for any parameter.

Steps to create a Custom Search Dashboard


Complete these steps to create a Custom Search Dashboard.

Procedure
1. The source log file you want to use must be loaded and processed by IBM Operations Analytics Log
Analysis.
2. Using python, shell scripting, or Java, create a script. The script and its corresponding application
or template file must reside in the same directory: <HOME>/AppFramework/Apps. If no value is
specified for the type parameter at the top of the application file, the application file runs the script
from the same folder as the application file.



Insight Packs and applications: If you want to include the application in an Insight Pack project, the
script and the associated application file must be in the same folder within the project: src-files/
unity_apps/apps under the project folder.
In the script, you need to specify the actions:
a. Connect to IBM Operations Analytics Log Analysis .
b. Use an HTTP POST to run a search request to an IBM Operations Analytics URL for JSON data. The
query uses the same method as the queries you can run in the Search Workspace. For the query,
use the JSON format for Search requests as described in the Search runtime API. If you need to
narrow down the returned data further, you can parse the data within the script.
c. Format the returned JSON data into a structure that can be consumed by the charts.
3. Create a JSON application file and save it to the <HOME>/AppFramework/Apps/ directory or a
sub-directory within this directory.
If you want to include the application in an Insight Pack project, create the JSON application file in the
src-files/unity_apps/apps folder. The application file references your script and specifies the
chart type and parameters for the dashboard display.
Note: If you are using the Insight Pack Tooling, build the Insight Pack containing the application, and
use the pkg_mgmt utility to install the Insight Pack on your IBM Operations Analytics system.
4. From the Custom Apps pane in the Search workspace, run your application and determine if it is
displaying your data as you intended. Amend the script and application file as required.
Refresh Custom Apps: If you do not see your application listed, refresh the Custom Apps pane in the
Search workspace in order to list the newly installed Custom app.
Note: If you close a chart portlet, you must run the Custom Search Dashboard again to reopen the
chart.

5. (Optional) If you want to create a template for your application, create a template in the directory:
<HOME>/AppFramework/Templates. If a template name has been specified in the type field,
the application file references that template and executes the script in the same directory as that
template. That is, the application file and script must be located in the same directory as the template
file.
Insight Packs and templates: If you want to include the application in an Insight Pack project, the
script, the application file, and the template file must reside in the same folder within the project:
src-files/unity_apps/templates under the project folder.

Scripts
The script that is executed can be written as a Python script, shell script, or a Java application packaged
as an executable JAR file. When you execute the application, the parameters are written to a temporary
file in JSON format, and the file name is passed to the script. The script reads the parameters from
the file and generates output in the standard JSON format that the dashboard specification requires.
Within an Insight Pack project, scripts must reside in the same folder as the application or template that
references it, either in src-files/unity_apps/apps or src-files/unity_apps/templates.
In the script, use an HTTP POST request to query an IBM Operations Analytics URL for JSON data. The
query uses the same method as the queries you can run in the Search Workspace. For the query, use
the JSON format for Search requests as described in Search REST API. If you need to narrow down the
returned data further, you can parse the data within the script.

Setting environment and connection in Python scripts


If you are using Python, you can import convenience methods in unity.py for interacting with the IBM
Operations Analytics server. unity.py is shipped with IBM Operations Analytics and can be found in



PYTHONPATH. It contains methods for connecting to the Operations Analytics servers, as well as making
HTTP requests that include the required Cross Site Request Forgery (CSRF) tokens.
To use unity.py, simply include it in an import statement. For example:

from unity import UnityConnection, get_response_content

try:
    import json
except ImportError:
    import simplejson as json
import sys

To connect to the Operations Analytics server, create an instance of UnityConnection() and call the
UnityConnection.login() method. The UnityConnection() object takes the parameters:
• url - base URL to the Operations Analytics server; if you changed the default port number during install,
you should also change it here
• username - username used for authentication
• password - password used for authentication
For example:

# Create a new UnityConnection to the server and login
unity_connection = UnityConnection('https://Myhost:9987/Unity/',
                                   'unityadmin', 'password')
unity_connection.login()

The parameters to the Python script are written to a file in JSON format, and the filename is passed to the
script. For example, the parameters can be passed in a file containing the following:

{
"parameters": [
{
"name": "search",
"type": "SearchQuery",
"value": {
"filter": {
"range": {
"timestamp":{
"from":"01/01/2013 00:00:00.000 EST",
"to":"01/01/2014 00:00:00.000 EST",
"dateFormat":"MM/dd/yyyy HH:mm:ss.SSS Z"
}
}
},
"logsources": [
{
"type": "logSource",
"name": "SystemOut"
}
]
}
}
]
}

Here is an example of how the parameters can be retrieved and parsed:

# Get the parameters
if len(sys.argv) > 1:
    filename = str(sys.argv[1])
    fk = open(filename, "r")
    data = json.load(fk)

    parameters = data['parameters']

    for i in parameters:
        if i['name'] == 'search':
            search = i['value']
            for key in search.keys():
                if key == 'filter':
                    filter = search['filter']
                elif key == 'logsources':
                    logsource = search['logsources']

To make an HTTP request that uses the Search runtime API, use the post() and
get_response_content() methods on the connection. For example:

request = {
    "logsources": logsource,  # this is the value from the parameters
    "query": "*",
    "filter": filter  # this is the value from the parameters
}

response = unity_connection.post('/Search', json.dumps(request),
                                 content_type='application/json; charset=UTF-8')
content = get_response_content(response)

# parse the results found in content
...

Lastly, close the connection at the end of the Python script:

# close the connection
unity_connection.logout()

Example search request


The search request retrieves JSON data and forms the body of the HTTP POST request.
The following is an example of a JSON search string that is used as the body of an HTTP POST request.

{
  "start": 0,
  "results": 1,
  "filter": {
    "range": {
      "timestamp": {
        "from": "5/31/2012 5:37:26.682 -0400",
        "to": "5/31/2013 5:37:26.682 -0400",
        "dateFormat": "MM/dd/yyyy HH:mm:ss.SSS Z"
      }
    }
  },
  "query": "severity:(W OR E)",
  "sortKey": [
    "-timestamp"
  ],
  "getAttributes": [
    "timestamp",
    "severity"
  ],
  "facets": {
    "dateFacet": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "hour",
        "outputDateFormat": "MM-dd HH:mm",
        "nested_facet": {
          "severityFacet": {
            "terms": {
              "field": "severity",
              "size": 10
            }
          }
        }
      }
    }
  },
  "logsources": [
    {
      "type": "logSource",
      "name": "/SystemOut"
    }
  ]
}



The request specifies:
• The number of records returned, in this case "results":1. It is helpful to set the number of records
to a low number when you are building and testing your script.
• The date/time range is from May 31, 2012 to May 31, 2013:

{"timestamp":{"from":"5/31/2012 5:37:26.682 -0400","to":"5/31/2013 5:37:26.682


-0400","dateFormat":"MM/dd/yyyy HH:mm:ss.SSS Z"}}}

• The query searches for Warnings or Errors and the results are sorted by timestamp. Only the timestamp
and severity attributes are returned in the results.

"query":"severity:(W OR E)",
"sortKey":["-timestamp"],"getAttributes":["timestamp","severity"],

• The logsource is set as the WebSphere Application Server SystemOut log.

"logsources":[{"type":"logSource","name":"/SystemOut"}]

• (Optional) Facet requests that you can use to create sums, time intervals, or other statistical data
for a given field/value pair. In this example, there is a date_histogram facet with a nested terms
facet. Within each time interval returned by the date_histogram facet, the terms facet, called
severityFacet, counts the number of each type of severity.

"facets":{"dateFacet":{"date_histogram":{"field":
"timestamp","interval":"hour","outputDateFormat":"MM-dd HH:mm",
"nested_facet":{"severityFacet":{"terms":{"field":"severity","size":10}}}}}},

1,000 or more results and Custom Search Dashboards: When a query in a Custom Search Dashboard
returns more than 1,000 records, you get only 1,000 results back. The search result returned includes
a totalResults field, which shows the total number of matching results. Another field, numResults,
gives the number of records returned. You can check these values in the Custom Search Dashboard script
and handle the results accordingly.
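For example, a minimal sketch of this check in a Python script, assuming response_data is the parsed
JSON from a /Search request:

# A sketch: detect truncated results and decide how to handle them.
total = response_data.get('totalResults', 0)
returned = response_data.get('numResults', 0)
if total > returned:
    # Only 'returned' of 'total' matching records came back. Either page
    # through the rest by re-issuing the request with a larger "start"
    # offset, or aggregate with facets instead of fetching raw records.
    pass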

Example JSON output from search request


Run your script and review the JSON output. Adjust the script to get the output you need.

Search request output


When you post the HTTP Search request, it returns a JSON string in raw format. It is helpful to open the
raw output in a JSON editor. These samples show the output from the example search in raw format and
in the tabbed format displayed in JSON editors.
This sample is a segment of the returned JSON in raw format.

{"searchRequest":{"start":0,"results":1,"filter":{"and":
[{"range":{"timestamp":{"from":"5\/31\/2012 5:37:26.682 -0400",
"to":"5\/31\/2013 5:37:26.682 -0400",
"dateFormat":"MM\/dd\/yyyy HH:mm:ss.SSS Z"}}},{"or":[{"phrase":
{"logsource":"SystemOut"}}]},
{"range":{"_writetime":{"dateFormat":"yyyy-MM-dd'T'HH:mm:ss.SSSZ","from":
"2013-05-25T00:00:00.000-0400","to":"2013-05-31T05:48:18.415-0400"}}}]},
"query":"severity:(W OR E)",
"sortKey":["-timestamp"],"getAttributes":["timestamp","severity"],
"facets":{"dateFacet":{"date_histogram":{"field":"timestamp","interval":"hour",
"outputDateFormat":"MM-dd HH:mm","nested_facet":{"severityFacet":
{"terms":{"field":"severity","size":10}}},"__usr_fast_date_query":"UTC"}}},
"logsources":[{"type":"logSource","name":"\/SystemOut"}],"collections":
["WASSystemOut-Collection1"]},"totalResults":805,"numResults":1,"executionInfo":
{"processingTime":33,"searchTime":54},"searchResults":[
{"resultIndex":0,"attributes":
{"msgclassifier":"TCPC0003E","threadID":"00000000","message":
"TCP Channel TCP_2 initialization failed. The socket bind failed for host
SystemOut.log
SystemOut.logOneLiners SystemOut.logOneLinersNoTS unity_populate_was_log.sh
unity_search_pattern_insert.sh
was_search_pattern.txt x and port 9080. The port may already be in use._LOG_2_",
"_writetime":"05\/27\/13 12:39:03:254 +0000","logsourceHostname":

Chapter 2. Searching and visualizing data 45


"fit-vm12-230","logRecord":"[10\/21\/12 19:57:02:313 GMT+05:30] 00000000
TCPPort E TCPC0003E:
TCP Channel TCP_2 initialization failed. The socket bind failed
for host SystemOut.log SystemOut.logOneLiners SystemOut.logOneLinersNoTS
unity_populate_was_log.sh unity_search_pattern_insert.sh was_search_pattern.txt x
and port 9080.
The port may already be in use._LOG_2_",
"timestamp":"10\/21\/12 14:27:02:313 +0000","severity":"E","logsource"
:"SystemOut"}}],
"facetResults":{"dateFacet":[{"label":"13-295-2012 UTC","low":"10-21 13:00",
"high":"10-21 14:00","count":425,"nested_facet":{"severityFacet":{"counts":
[{"term":"W","count":221},{"term":"E","count":204}],"total":2}}},{"label":
"14-295-2012 UTC","low":"10-21 14:00","high":"10-21 15:00","count":380,
"nested_facet":{"severityFacet":{"counts":[{"term":"E","count":197},
{"term":"W","count":183}],"total":2}}}]},
"metadata":{"msgclassifier":{"filterable":true,"dataType":"TEXT","sortable":false},
...
""exceptionMethodName":{"filterable":false,"dataType":"TEXT","sortable":false},
"severity":{"filterable":true,"dataType":"TEXT"

This sample shows the same results formatted in a JSON editor.

{
"searchRequest": {
"start": 0,
"results": 1,
"filter": {
"and": [
{
"range": {
"timestamp": {
"from": "5/31/2012 5:37:26.682 -0400",
"to": "5/31/2013 5:37:26.682 -0400",
"dateFormat": "MM/dd/yyyy HH:mm:ss.SSS Z"
}
}
},
{
"or": [
{
"phrase": {
"logsource": "SystemOut"
}
}
]
},
{
"range": {
"_writetime": {
"dateFormat": "yyyy-MM-dd'T'HH:mm:ss.SSSZ",
"from": "2013-05-25T00:00:00.000-0400",
"to": "2013-05-31T05:48:18.415-0400"
}
}
}
]
},
"query": "severity:(W OR E)",
"sortKey": [
"-timestamp"
],
"getAttributes": [
"timestamp",
"severity"
],
"facets": {
"dateFacet": {
"date_histogram": {
"field": "timestamp",
"interval": "hour",
"outputDateFormat": "MM-dd HH:mm",
"nested_facet": {
"severityFacet": {
"terms": {
"field": "severity",
"size": 10
}
}
},
"__usr_fast_date_query": "UTC"



}
}
},
"logsources": [
{
"type": "logSource",
"name": "/SystemOut"
}
],
"collections": [
"WASSystemOut-Collection1"
]
},
"totalResults": 805,
"numResults": 1,
"executionInfo": {
"processingTime": 33,
"searchTime": 54
},
"searchResults": [
{
"resultIndex": 0,
"attributes": {
"msgclassifier": "TCPC0003E",
"threadID": "00000000",
"message": "TCP Channel TCP_2 initialization failed.
The socket bind failed for host SystemOut.log SystemOut.logOneLiners
SystemOut.logOneLinersNoTS
unity_populate_was_log.sh unity_search_pattern_insert.sh
was_search_pattern.txt x and port 9080.
The port may already be in use._LOG_2_",
"_writetime": "05/27/13 12:39:03:254 +0000",
"logsourceHostname": "fit-vm12-230",
"logRecord": "[10/21/12 19:57:02:313 GMT+05:30]
00000000 TCPPort E TCPC0003E:
TCP Channel TCP_2 initialization failed.
The socket bind failed for host SystemOut.log
SystemOut.logOneLiners SystemOut.logOneLinersNoTS unity_populate_was_log.sh
unity_search_pattern_insert.sh was_search_pattern.txt x and port 9080.
The port may already be in use._LOG_2_",
"timestamp": "10/21/12 14:27:02:313 +0000",
"severity": "E",
"logsource": "SystemOut"
}
}
],
"facetResults": {
"dateFacet": [
{
"label": "13-295-2012 UTC",
"low": "10-21 13:00",
"high": "10-21 14:00",
"count": 425,
"nested_facet": {
"severityFacet": {
"counts": [
{
"term": "W",
"count": 221
},
{
"term": "E",
"count": 204
}
],
"total": 2
}
}
},
...
},
"logsource": {
"filterable": true,
"dataType": "TEXT",
"sortable": true
}
}
}



Note: The totalResults value shows the total number of matching results. The numResults
value shows the number of returned results. The search request specified the return of one result
("results":1).

JSON output from a script


You can specify a Custom Search Dashboard with multiple charts, each chart pointing to a different
data element. Data elements can be in the form of data or HTML. The script returns this JSON structure,
which contains the data that is used to populate the charts, or HTML to be displayed.
You can also configure Custom Search Dashboards to return error messages in the following format:

{
"error_msg": "<error_message_string>"
}

where <error_message_string> is the message that you want to display.
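For example, a script can print the error structure and exit instead of returning chart data. This is a
minimal sketch, assuming json and sys are imported as in the earlier examples; the message text is
illustrative:

# A sketch: report a failure to the Custom Search Dashboard.
print json.dumps({"error_msg": "No valid data sources were specified."})
sys.exit(0)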


The following example shows the output parameters for a Custom Search Dashboard. The first data set
contains the correct data that is used to render the chart. The second data set contains an error message.

{
"data": [{
"rows": [{
"userId": "116",
"count": 9
}],
"fields": [{
"label": "userId",
"type": "TEXT",
"id": "userId"
},
{
"label": "count",
"type": "LONG",
"id": "count"
}],
"id": "DynamicDashboardSearch_0_1386061434000"
},
{
"error_msg": "CTGLA2107E : Custom applications need to be configured for
every use case with details of data sources. No valid data sources were specified
in the search request. Verify that the chosen data sources exist and that any
specified tags contain data source descendants. For further information on how to
configure the custom applications, refer to the documentation.",
"id": "DynamicDashboardSearch_1_1386061434000"
}]
}

Example Script
The full script example shown here contains all the required elements.
In this example script, the values for some elements are defined as variables inside the script. For
example, the elements of the search request, such as the logsource and query, are defined as variables.
Custom scripts should return date field data only in the supported formats for charts to render correctly.
Supported formats include yyyy-MM-ddTHH:mm:ssZ and MM/dd/yy HH:mm:ss:SSS Z.
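For example, the following sketch (not part of the script below) formats a timestamp in the
MM/dd/yy HH:mm:ss:SSS Z form, assuming the time is UTC:

from datetime import datetime

now = datetime.utcnow()
# Produces, for example, "05/27/13 12:39:03:254 +0000"
stamp = now.strftime('%m/%d/%y %H:%M:%S') + ':%03d +0000' % (now.microsecond // 1000)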

from unity import UnityConnection, get_response_content
from urllib import urlencode
import CommonAppMod

try:
    import json
except ImportError:
    import simplejson as json

import sys
import re
from datetime import datetime, date, time, timedelta
import UnityAppMod

# Create a new Unity connection to the server and login
unity_connection = UnityConnection('https://myhost:9987/Unity/', 'unityadmin',
                                   'unityadmin')
unity_connection.login()



##########################################################
# Calculates the current time, relative time, and forms timestamps
##########################################################
currentTimestamp = datetime.now()

# Currently chosen relative timestamp
newTimestamp = currentTimestamp - UnityAppMod.getLastYear()

# Format the timestamp
currentFormatted = UnityAppMod.formatTimestamp(currentTimestamp)
newFormatted = UnityAppMod.formatTimestamp(newTimestamp)

# Define the logsource for the search
logsource = '"logsources":[{"type":"logSource","name":"/SystemOut"}]'

# Define the output data
chartdata = []
timeUnit = 'hour'
timeUnitFormat = 'MM-dd HH:mm:ss.SSS Z'

timestamp = '{"timestamp":{"from":"' + newFormatted + '","to":"' + \
    currentFormatted + '","dateFormat":"MM/dd/yy HH:mm:ss.SSS Z"}}'

# Parameter defaults
start = 0
results = 1

##########################################################
def getSearchData(query, facet, sortKey, attributes):

    # Search query to be used as content for POST
    body = '{"start":' + str(start) + ',"results":' + str(results) + \
        ',"filter":{"range":' + timestamp + '},' + \
        query + ',' + \
        sortKey + ',' + \
        attributes + ',' + \
        facet
    #print body

    if logsource:
        body = body + ',' + logsource

    body = body + ' }'
    #print body

    # Post the search request
    response = unity_connection.post(
        '/Search', body, content_type='application/json; charset=UTF-8')
    content = get_response_content(response)

    # Convert the response data to JSON
    data = {}
    try:
        data = json.loads(content)
        #print json.dumps(data, sort_keys=False, indent=4,
        #                 separators=(',', ': '))
    except:
        pass
    if 'result' in data:
        result = data['result']
        if 'status' in result and result['status'] == 'failure':
            msg = result['message']
            print >> sys.stderr, msg
            sys.exit(1)

    return data
##########################################################

##########################################################
def getErrorsWarningsVsTime(chartdata):
    # Define the query for the search
    query = '"query":"severity:(W OR E)"'

    # Define the facet to be used for the search
    facet = '"facets":{"dateFacet":{"date_histogram":{"field":"timestamp",' + \
        '"interval":"' + timeUnit + '","outputDateFormat":"' + \
        timeUnitFormat + '","nested_facet":' + \
        '{"severityFacet":{"terms":{"field":"severity","size":10}}}}}}'

    # Define the sortKey to be used for the search results
    sortKey = '"sortKey":["-timestamp"]'

    # get the facetFields
    facetFields = [
        { "id":"date", "label":"Timestamp", "type":"DATE" },
        { "id":"severity", "label":"severity", "type":"TEXT" },
        { "id":"count", "label":"Count", "type":"LONG" }
    ]
    facetRows = []

    # Define the attributes
    attributes = '"getAttributes":["timestamp","severity"]'

    # do the query
    data = getSearchData(query, facet, sortKey, attributes)

    if 'facetResults' in data:

        # get the facet results
        facetResults = data['facetResults']
        #print json.dumps(facetResults, sort_keys=False, indent=4,
        #                 separators=(',', ': '))

        if 'dateFacet' in facetResults:
            # get the facetRows
            dateFacet = facetResults['dateFacet']
            CommonAppMod.dateSort(dateFacet)
            for dateRow in dateFacet:
                for severityRow in dateRow['nested_facet']['severityFacet']['counts']:
                    facetRows.append({"date": dateRow['low'],
                                      "severity": severityRow['term'],
                                      "count": severityRow['count']})
            #print facetRows

    chartdata.append({'id': 'ErrorsWarningsVsTime', 'fields': facetFields,
                      'rows': facetRows})
    return chartdata

##########################################################

# Define the timestamp for the search
#timestamp = '{"timestamp":{"from":"' + newFormatted + '","to":"' + \
#    currentFormatted + '","dateFormat":"MM/dd/yy HH:mm:ss.SSS Z"}}'

# Define the logsource for the search
#logsource = '"logsources":[{"type":"logSource","name":"/SystemOut"}]'

getErrorsWarningsVsTime(chartdata)
unity_connection.logout()

#------------------------------------------------------
# Build the final output data JSON
#------------------------------------------------------

# build the chart data
dt = {'data': chartdata}

# Print the JSON to system out
print json.dumps(dt, sort_keys=False, indent=4, separators=(',', ': '))

Application files
The application file JSON can be created by implementing the structure provided in the sample JSON
outlined in this topic.
If you are using the Insight Pack Tooling, the application file JSON can be created in the Insight Pack
project in the subfolder src-files/unity_apps/apps.



The parameters defined for application files are:
name
Specify the name of the application file. This is the name of the .app file that displays in the Custom
Search Dashboards pane in the Search workspace.
type
(Optional) Where applicable, specify the name of the template used. The name you specify in this field
is the name of the template file excluding the file extension.
description
Specify the purpose of the application.
customLogic
Specify the details for the script including any parameters and outline the details for the output that is
to be displayed. These parameters are defined for customLogic:
script
Specify the name of the script that you want to execute. The script can be a Python or shell script,
or a Java application packaged as an executable JAR file. If you are going to use a template, the
script you execute must be in the same directory as the template. If you are not using a template,
save the script to the same directory as the application file.
description
Provide a description for the script.
timeout
(Optional) Specify in seconds the time after which the search queries issued by the Custom Search
Dashboard are stopped. This per-app parameter has no default value.
parameters
Complete a JSON array of parameters that are passed to the script. This array can be empty. The
parameters you pass depend on the requirements of the script that you are executing. In the first
set of parameters in the sample, the parameters for a search query are passed.
output
This section outlines the nature of the output that you want to display for your Custom Search
Dashboard.
type
Specify Data as the value for this parameter.
visualization
Add dashboard or searchFilters as a sub-parameter for this parameter. This determines
what is displayed when you execute a Custom Search Dashboard.
The dashboard parameter requires that you specify the layout for the dashboard that you are
creating. Under the dashboard parameter, add a columns sub-parameter and specify a numeric value
to indicate the number of columns that you require.
Add a chart as an additional sub-parameter for the dashboard parameter and add an array of
JSON objects to indicate what charts you want to include in your dashboard. Include only valid
chart types and also include the required parameters for each chart. For more information, see the
Charts section of this document.
The searchFilters parameter is for the search filter app and requires a relationalQuery
as the input parameter. The relational query is a JSON object that consists of JSON keys. For more
information, see Search filter app.

Example Dashboard visualizations


This example references the ExecuteSearchWithParameters.py script and specifies a bubble chart
based on data from the chartdata01 data set. The chartdata01 data set id is defined in the
ExecuteSearchWithParameters.py script and referenced in the chart specification.

"name": "Search Results from python script- Parameters",


"type": "SearchApps",

Chapter 2. Searching and visualizing data 51


"description": "Captures long running queries in maximo logs",
"customLogic": {
"script": "ExecuteSearchWithParameters.py",
"description": "View chart on search results",
"timeout":
"parameters": [
{
"name": "search",
"type": "SearchQuery",
"value": {
"start": 0,
"results": 10,
"filter": {"range": {"timestamp": {"from":
"12/03/2012 22:21:58.000 India Standard
Time",
"to": "12/03/2013 22:21:58.000 India Standard Time",
"dateFormat": "dd/MM/yyyy HH:mm:ss.SSS
Z" }}},
"logsources": [{"type": "tag","name": "*"}],
"query": "*"
}
},
{
"name": "url",
"type": "String",
"value": "http: //9.118.41.71: 9988/Unity/"
}
],
"output": {
"type": "Data",
"visualization": {
"dashboard": {
"columns": 1,
"charts": [
{
"type": "Bubble Chart",
"title": "Severity against Time",
"data": {"$ref": "chartdata01"},
"parameters": {"xaxis": "timestamp","yaxis": "severity"}
}
]
}
} } } }

Charts
The chart types specified here are supported by IBM Operations Analytics Log Analysis. The chart
specifications are contained in the <HOME>/AppFramework/chartspecs directory.
Displaying a chart: To display a chart, you execute your Custom Search Dashboard from the Search
workspace. If you close a chart portlet, you must run the Custom Search Dashboard again to reopen the
chart.
These parameters are defined for all charts:
type
Specify the type of chart required. The value must match the ID of a chart specification that is
contained in the <HOME>/AppFramework/chartspecs directory. The supported chart types are
outlined in this section.
title
Specify the chart title.
data
Specify the ID for the data element that is represented in the chart. The ID specified must match
the ID provided in the dashboard specifications.
parameters
Fields to be displayed in the chart.

Line chart
The line chart is defined with these limitations:
• Chart name: Line Chart



• Parameters: xaxis and yaxis
• Chart Specification:

{
"type": "Line Chart",
"title": "Line Chart ( 2 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "throughput"
}
}

• Aggregation: To aggregate the throughput parameter, define the following code:

{
"type": "Line Chart",
"title": "Line Chart ( 2 parameters )",
"data": {
"$ref": "searchResults01",
"summarizeData": {
"column": "throughput",
"function": "sum"
}
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "throughput"
}
}

The summarizeData key determines whether aggregation is performed. column is the name of the
numeric column, of the LONG or DOUBLE data type, on which the aggregation is performed. sum is the
aggregation function to be applied. Supported functions are sum, min, and max.

Bar chart
The bar chart is defined with these limitations:
• Chart name: Bar Chart
• Parameters: xaxis, yaxis, and categories
• Limitations: Only integer values are supported for the yaxis parameter.
• Chart Specification:

{
"type": "Bar Chart",
"title": "Bar Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "CPU",
"categories": "hostname"
}
}

Point chart
The point chart is defined with these limitations:
• Chart name: Point Chart
• Parameters: xaxis and yaxis
• Chart Specification:

{
"type": "Point Chart",
"title": "Point Chart ( 2 parameters )",



"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "errorCode"
}
}

Pie chart
The pie chart is defined with these limitations:
• Chart name: Pie Chart
• Parameters: xaxis and yaxis
• Chart Specification:

{
"type": "Pie Chart",
"title": "Pie Chart ( 2 parameters )",
"data": {
"$ref": "searchResults03"
},
"parameters": {
"xaxis": "count",
"yaxis": "severity"
}
}

Cluster bar chart


The cluster bar chart is defined with these limitations:
• Chart name: Cluster Bar
• Parameters: xaxis, yaxis, and sub-xaxis
• Chart Specification:

{
"type": "Cluster Bar",
"title": "Cluster Bar ( 3 parameters )",
"data": {
"$ref": "searchResults02"
},
"parameters": {
"xaxis": "hostname",
"yaxis": "errorCount",
"sub-xaxis": "msgClassifier"
}
}

Bubble chart
The bubble chart is defined with these limitations:
• Chart name: Bubble Chart
• Parameters: xaxis, yaxis, and categories
• Chart Specification:

{
"type": "Bubble Chart",
"title": "Bubble Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "CPU",
"categories": "errorCode"



}
}

The size of the bubble on the graph depends on the number of items in the parameter that is being
represented. In some cases, for example if you have a large bubble and a small bubble, the large bubble
may cover the smaller one.

Tree map chart


The tree map chart is defined with these limitations:
• Chart name: Tree Map
• Parameters: level1, level2, level3, and value
• Chart Specification:

{
"type": "Tree Map",
"title": "Tree Map ( 4 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"level1": "hostname",
"level2": "errorCode",
"level3": "severity",
"value":"CPU"
}
}

Two-series line chart


The two-series line chart is defined with these limitations:
• Chart name: Two Series Line Chart
• Parameters: xaxis, yaxis1, and yaxis2
• Chart Specification:

{
"type": "Two Series Line Chart",
"title": "Two Series Line Chart ( 3 parameters)",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis1": "throughput",
"yaxis2": "ResponseTime"
}
}

Stacked bar chart


The stacked bar chart is defined with these limitations:
• Chart name: Stacked Bar Chart
• Parameters: xaxis, yaxis, and categories
• Chart Specification:

{
"type": "Stacked Bar Chart",
"title": "Stacked Bar Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "hostname",
"yaxis": "CPU",



"categories": "severity"
}
}

Stacked line chart


The stacked line chart is defined with these limitations:
• Chart name: Stacked Line Chart
• Parameters: xaxis, yaxis, and categories
• Chart Specification:

{
"type": "Stacked Line Chart",
"title": "Stacked Line Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "threadID",
"yaxis": "timestamp",
"categories": "MBO"
}
}

Heat map
The heat map chart is defined with these limitations:
• Chart name: Heat Map
• Parameters: xaxis, yaxis, and count
• Chart Specification:

{
"type": "Heat Map",
"title": "Heat Map ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "messageClassifier",
"yaxis": "hostname",
"count": "throughput"
}
}

Example application file


This example shows an application file that references the 3Params_was_systemout.py script
example and specifies how multiple charts display the output JSON from the script.

Example application file and chart dashboard


This example shows the application file and the dashboard of charts it creates when you run
the application. The ErrorsWarningsVsTime and ExceptionByHost data sets are defined in the
3Params_was_systemout.py script and referenced in the chart specifications in this application file.

{
"name": "WAS Errors and Warnings Dashboard",
"description": "Display a dashboard of charts that show WAS errors and warnings",
"customLogic": {
"script": "3Params_was_systemout.py",
"description": "View charts based on search results",
"parameters": [
{
"name": "search",
"type": "SearchQuery",
"value": {



"logsources": [
{
"type": "logSource",
"name": "SystemOut"
}
]
}
},
{
"name": "relativeTimeInterval",
"type": "string",
"value":"LastDay"
},
{
"name": "timeFormat",
"type": "data",
"value": {
"timeUnit": "hour",
"timeUnitFormat": "MM-dd HH:mm"
}
},
{
"name": "hostnameField",
"type": "string",
"value": "logsource_hostname"
}
],
"output": {
"type": "Data",
"visualization": {
"dashboard": {
"columns": 2,
"charts": [
{
"title": "Message Counts - Top 5 over Last Day",
"parameters": {
"xaxis": "date",
"yaxis": "count",
"sub-xaxis": "severity"
},
"data": {
"$ref": "ErrorsWarningsVsTime"
},
"type": "Cluster Bar"
},
{
"type": "Heat Map",
"title": "Message Counts - Top 5 over Last Day",
"data": {
"$ref": "ErrorsWarningsVsTime"
},
"parameters": {
"xaxis": "date",
"yaxis": "count",
"category": "severity"
}
},
{
"type": "Bubble Chart",
"title": "Java Exception Counts - Top 5 over Last Day",
"data": {
"$ref": "ErrorsWarningsVsTime"
},
"parameters": {
"xaxis": "date",
"yaxis": "count",
"size": "severity"
}

},
{
"type": "Two Series Line Chart",
"title": "Error and Warning Total by Hostname
- Top 5 over Last Day",
"data": {
"$ref": "ErrorsWarningsVsTime"
},
"parameters": {
"xaxis": "date",
"yaxis1": "count",
"yaxis2": "severity"
}



},
{
"type": "Tree Map",
"title": "Messages by Hostname - Top 5 over Last Day",
"data": {
"$ref": "ErrorsWarningsVsTime"
},
"parameters": {
"level1": "date",
"level2": "count",
"level3": "severity",
"value": "count"
}

},
{
"type": "Stacked Bar Chart",
"title": "Java Exception by Hostname
- Top 5 over Last Day",
"data": {
"$ref": "ExceptionByHost"
},
"parameters": {
"xaxis": "date",
"yaxis": "count",
"categories": "hostname"
}

]
}
}
}

This sample shows a chart created from the example script and application files.



Using a Custom Search Dashboard to display a search query or selected data
You can create a Custom Search Dashboard that displays, in a new tab, the full syntax for the current
search and also displays the contents of a selected column or individual cell.
Create an application and include the _query and _data parameters to ensure that your data is
displayed. The purpose of these parameters is:
_query
Displays a JSON string containing the currently active search. Specify _query in the name field for the
parameter. The type field must contain the value additionalParameterFromUI.
_data
Displays the data from any column or cell that you select in Grid view. To display the data, select a
column or cell in Grid view and then launch the application. If you select a column, only data from
the currently displayed page is displayed by the application. Specify _data in the name field for the
parameter. The type field must contain the value additionalParameterFromUI.

_rows
Displays the entire row data for a cell that you select in Grid view. To display the data, select
a cell in Grid view and then launch the application. The type field must contain the value
additionalParameterFromUI.

The application does not necessarily require all three parameters. Depending on your requirements, you
can create an application to display the search query JSON or the contents of a column, cell, or row.
The following example is an application that when executed displays both the search query and any
selected column or cell:

{
"name": "Sample HTML App",
"description": "Sample App to demonstrate use of _query and _data params",
"customLogic": {
"script": "SampleHTMLApp.sh",
"description": "Read data from a file and return",
"parameters": [
{
"name": "_query",
"type": "additionalParameterFromUI",
"value": []
},
{
"name": "_data",
"type": "additionalParameterFromUI",
"value": []
},
{
"name": "_rows",
"type": "additionalParameterFromUI",
"value": []
}
],
"output": {
"type": "Data",
"visualization": {
"dashboard": {
"columns": 2,
"charts": [
{
"type": "html",
"title": "Sample HTML",
"data": {
"$ref": "htmlData"
},
"parameters": {
"html": "text"
}
}
]
}
}
}
}
}
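On the script side, these parameters arrive through the same temporary JSON parameters file described
earlier. The following is a minimal sketch of how a Python script might read them; the variable names are
illustrative:

# 'data' is the JSON loaded from the parameters file passed in sys.argv[1].
for p in data['parameters']:
    if p['name'] == '_query':
        active_query = p['value']     # the currently active search, as JSON
    elif p['name'] == '_data':
        selected_data = p['value']    # contents of the selected column or cell
    elif p['name'] == '_rows':
        selected_rows = p['value']    # full row data for the selected cell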



Related concepts
Launching IBM Operations Analytics Log Analysis with context parameters
You can launch IBM Operations Analytics Log Analysis with search parameters or Custom Search
Dashboard in the URL that you specify. These URLs can be specified and launched from within
applications that are integrated with IBM Operations Analytics Log Analysis.

Launching IBM Operations Analytics Log Analysis with context parameters


You can launch IBM Operations Analytics Log Analysis with search parameters or Custom Search
Dashboard in the URL that you specify. These URLs can be specified and launched from within
applications that are integrated with IBM Operations Analytics Log Analysis.

Launching with search parameters in the URL


The URL format that you specify must be in the format:

https://<ip_address>:9987/Unity/SearchUI?queryString=
<q>&timefilters=<t>&dataSources=<ds>

where <ip_address> is the IP address of the server on which IBM Operations Analytics Log Analysis is
installed and the parameters that you specify are:
queryString
(Required) Replace the <q> parameter with a valid velocity query.
timefilters
(Optional) Specify a time range as an absolute or relative time range in JSON format. If this parameter
is not specified, the default option Last 15 minutes is applied. For an absolute value, specify JSON
using this example:

{ "type":"absolute",
  "startTime":"24/06/2013 05:30:00",
  "endTime":"25/06/2013 05:30:00"
}

The time zone for the startTime and endTime is the user's time zone. If the user's default time zone
is the browser time zone, the absolute time is interpreted in the browser time zone; otherwise, the time
zone that is set as the default for all sessions is used for querying.
For a relative time range, specify JSON using this example:

{ "type":"relative",
  "lastnum":"7",
  "granularity":"Day"
}

dataSources
(Optional) For this parameter, you specify a Data Source or a group of Data Sources in a JSON array
format. Each element in the array can be of type group or datasource. Specify group if you want to
specify a group of Data Sources. If a value is not specified for this parameter, all of the available Data
Sources are selected. The JSON format is indicated in this example:

[
  { "type":"datasource", "name":"<datasource_name>" },
  { "type":"group", "name":"<group_name>" },
  ...
]

This example URL launches IBM Operations Analytics Log Analysis with the required parameters to
search for critical IBM Tivoli Netcool®/OMNIbus events in IBM Operations Analytics Log Analysis:

https://9.118.41.69:9987/Unity/SearchUI?queryString=*&timefilters=
{"type":"relative","lastnum":1,"granularity":"year"}&dataSources=
[{"type":"datasource","name":"/Omnibus-events"}]



Note: Any spaces that are required in your URL must be escaped. If you want to include a % character in
your URL, it must be added as %25.
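For example, a Python 2 sketch that builds a launch URL with properly escaped JSON parameters; the
host placeholder and data source name follow the example above:

import json
import urllib

timefilters = {"type": "relative", "lastnum": 1, "granularity": "year"}
datasources = [{"type": "datasource", "name": "/Omnibus-events"}]
url = ('https://<ip_address>:9987/Unity/SearchUI?queryString=' +
       urllib.quote('*') +
       '&timefilters=' + urllib.quote(json.dumps(timefilters)) +
       '&dataSources=' + urllib.quote(json.dumps(datasources)))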

Launching with a Custom Search Dashboard name in a URL


The URL format that you specify must be in the format:

https://<ip_address>:9987/Unity/CustomAppsUI?name=
<name>&<appParameters>=<params>

where <ip_address> is the IP address of the server on which IBM Operations Analytics Log Analysis is
installed and the parameters that you specify are:
name
(Required) Replace the <name> parameter with a valid Custom Search Dashboard name.
appParameters
If the Custom Search Dashboard that you want to launch requires that parameters are specified,
specify the required parameters in a JSON array in {key:value} format.
This example URL launches the IBM Operations Analytics Log Analysis Day Trader Custom Search
Dashboard:

https://9.120.98.21:9987/Unity/CustomAppsUI?name=
Day%20Trader%20App&appParameters=[]

Note: Any spaces that are required in your URL must be escaped. If you want to include a % character in
your URL, it must be added as %25.
Related concepts
Using a Custom Search Dashboard to display a search query or selected data
You can create a Custom Search Dashboard that displays, in a new tab, the full syntax for the current
search and also displays the contents of a selected column or individual cell.

Templates
A template file defines a set of custom scripts. For each custom script, it specifies the parameters with
the type, name, and the default value.
If you are using the Insight Pack Tooling, the template file JSON can be created in an Insight Pack project
in the subfolder src-files/unity_apps/templates. Any script files referenced by the template must
also reside in this folder.
In addition to the fields included for applications files, these additional parameters are included in the
template:
template
This is a boolean value that specifies that the file is a template. Specify true as the value for this
parameter.
parameter
Although this parameter is specified in the application file, additional values are required in a template
file for this parameter.
required
If this is set to true, the parameter is required. Custom Search Dashboards that use the template
must include this parameter.
default
Specify the default parameter value. For parameters that are not required, this value is used
where the application does not specify a value.

Example



{
"name": "SearchApps",
"type": "SearchApps",
"template": true,
"description": "SearchApps Template",
"customLogic": [
{
"script": "ExecuteSearch.py",
"description": "View chart on search results",
"parameters": [],
"output": {
"type": "Data",
"visualization": {
"dashboard": {
"charts": [{"type": "Pie Chart","title":
"Errors/Warnings for last 7 days","yaxis": "msgclassifier"},
{"type": "Bar Chart","title":
"Errors/Warnings for each host",
"xaxis": "hostname","yaxis": "count"}]}
} } },
{
"script": "ExecuteSearchWithParameters.py",
"description": "View chart on search results",
"parameters": [
{
"name": "search",
"type": "SearchQuery",
"default": {"start": 0,"results": 10,"filter": {
"range": {"timestamp": {"from": "12/03/2012 22:21:58.000
India Standard Time",
"to": "12/03/2013 22:21:58.000 India Standard Time",
"dateFormat": "dd/MM/yyyy HH:mm:ss.SSS Z"}}},
"logsources": [{"type": "tag","name": "*"}],
"query": "*"},
"required": false
},
{
"name": "url",
"type": "String",
"default": "http: //9.118.41.71: 9988/Unity/",
"required": false
}
],
"output": {
"type": "Data",
"visualization": {
"dashboard": {
"charts": [
{"type": "Pie Chart","title": "Errors/Warnings for
last 7 days","yaxis": "msgclassifier"},
{"type": "Bar Chart","title": "Errors/Warnings for
each host","xaxis": "hostname","yaxis":
"count"}]}
}
}
}
]
}

Building and Installing an Insight Pack with Custom Search Dashboards


The Insight Pack Tooling creates an archive file for the Insight Pack using the Build Insight Pack option
from the project's context menu.

Before you begin


The script and application definition files must be created in the Insight Pack project subfolder src-
files/unity_apps/apps. If you are using a template, the script, application definition, and template
files must be created in the Insight Pack project subfolder src-files/unity_apps/templates.

Procedure
• Any Custom Search Dashboard definitions, scripts, and template files found in the src-files/
unity_apps folder are included in the archive file.



• When the Insight Pack archive file is installed using the pkg_mgmt utility, the application definition,
scripts, and template files are installed to the appropriate directories under <HOME>/AppFramework.
You might need to refresh the Custom Search Dashboards pane in the Search workspace to display
newly installed Custom Search Dashboards.
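For example, a typical installation command looks like the following; the archive file name is a
placeholder, and the exact path to the pkg_mgmt.sh utility depends on your installation:

<HOME>/utilities/pkg_mgmt.sh -install <path>/MyInsightPack_v1.0.0.zip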

Example of full Custom Search Dashboard


This example shows a complete script, application file, and the resulting display for a Custom Search
Dashboard named Performance_Msgs.

Performance_Msgs.py script
This example shows the script Performance_Msgs.py, which collects data on performance messages.
For details, read the coding notes within the script.

######################################################### {COPYRIGHT-TOP} ###


# Licensed Materials - Property of IBM
# "Restricted Materials of IBM"
# 5725-K26
#
# (C) Copyright IBM Corp. 2013 All Rights Reserved.
#
# US Government Users Restricted Rights - Use, duplication, or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
######################################################### {COPYRIGHT-END} ###
#Install simplejson before trying the script

#Installing simplejson
# Download the rpm from
# http://pkgs.org/centos-5-rhel-5/centos-rhel-x86_64/python-simplejson-2.0.9-8.el5.x86_64.rpm/download/
# Run the command: rpm -i python-simplejson-2.0.9-8.el5.x86_64.rpm

from datetime import datetime, date, time, timedelta
import sys
from unity import UnityConnection, get_response_content
try:
    import json
except ImportError:
    import simplejson as json

#------------------------------------------------------
# getSearchData()
#------------------------------------------------------
def getSearchData(logsource, filter):

    # Build the request parameters using the Search Request APIs.
    # The logsource and filter values used in the query were passed
    # in as parameters to the script.
    request = {
        "start": 0,
        "results": 1,
        "filter": filter,
        "logsources": logsource,
        "query": "*",
        "sortKey": ["-timestamp"],
        "getAttributes": ["timestamp", "perfMsgId"],
        "facets": {
            "dateFacet": {
                "date_histogram": {
                    "field": "timestamp",
                    "interval": "hour",
                    "outputDateFormat": "MM-dd HH:mm",
                    "nested_facet": {
                        "dlFacet": {
                            "terms": {
                                "field": "perfMsgId",
                                "size": 20
                            }
                        }
                    }
                }
            }
        }
    }

    # Post the search request
    response = connection.post('/Search', json.dumps(request),
                               content_type='application/json; charset=UTF-8')
    content = get_response_content(response)

    # convert the response data to JSON
    data = {}
    try:
        data = json.loads(content)
    except:
        pass
    if 'result' in data:
        result = data['result']
        if 'status' in result and result['status'] == 'failure':
            msg = result['message']
            print >> sys.stderr, msg
            sys.exit(1)

    return data

#------------------------------------------------------
# dateSort()
#------------------------------------------------------
def dateSort(dateFacet):
    # This function parses the UTC label found in the dateFacet in the
    # format "mm-hh-DDD-yyyy UTC"
    # and returns an array in the form [yyyy, DDD, hh, mm]
    def parseDate(dateLabel):
        aDate = map(int, dateLabel.split(" ")[0].split("-"))
        aDate.reverse()
        return aDate

    # call an in-place List sort, using an anonymous function (lambda)
    # as the sort function
    dateFacet.sort(lambda facet1, facet2:
                   cmp(parseDate(facet1['label']), parseDate(facet2['label'])))
    return dateFacet

#--------------------------------------------------------------------------
# Main script starts here
#--------------------------------------------------------------------------

# define the URLs used for the http request
baseurl = 'https://localhost:9987/Unity'

connection = UnityConnection(baseurl, 'unityadmin', 'unityadmin')
connection.login()

# initialize variables
filter = {}
logsource = {}
chartdata = []

# Get the script parameters which were passed in via a temporary
# file in JSON format. The name of the temporary file is in argv[1]
if len(sys.argv) > 1:
    filename = str(sys.argv[1])
    fk = open(filename, "r")
    data = json.load(fk)

    parameters = data['parameters']
    for i in parameters:
        if i['name'] == 'search':
            search = i['value']
            for key in search.keys():
                if key == 'filter':
                    filter = search['filter']
                elif key == 'logsources':
                    logsource = search['logsources']

#------------------------------------------------------
# get the data to be returned
#------------------------------------------------------

# define the fields to be returned in the chart data
# each row will contain the date, perfMsgId, and count
fields = [
    { "id":"date", "label":"Timestamp", "type":"TEXT" },
    { "id":"perfMsgId", "label":"perfMsgId", "type":"TEXT" },
    { "id":"count", "label":"count", "type":"LONG" }
]

rows = []

# call getSearchData() to post the search request and retrieve the data
data = getSearchData(logsource, filter)

if 'facetResults' in data:

    # get the facet results
    facetResults = data['facetResults']
    #print json.dumps(facetResults, sort_keys=False, indent=4,
    #                 separators=(',', ': '))

    if 'dateFacet' in facetResults:
        # get the dateFacet rows
        dateFacet = facetResults['dateFacet']

        # the results of the dateFacet are not sorted, so call dateSort()
        dateSort(dateFacet)

        # iterate through each row in the dateFacet
        for dateRow in dateFacet:
            for msgRow in dateRow['nested_facet']['dlFacet']['counts']:
                # create a row which includes the date, perfMsgId, and count
                rows.append({"date": dateRow['low'],
                             "perfMsgId": msgRow['term'],
                             "count": msgRow['count']})
        #print rows

# create chart data with the id perfMsgIdCountsOverTime
chartdata.append({'id':'perfMsgIdCountsOverTime', 'fields':fields, 'rows':rows})

# close the connection
connection.logout()

#------------------------------------------------------
# Create the HTML data to be returned
#------------------------------------------------------
html = "<!DOCTYPE html><html><body><h1>Hello World!</h1></body></html>"
chartdata.append({"id": "htmlData", "htmltext": html})

#------------------------------------------------------
# Build the final output data JSON
#------------------------------------------------------

# build the JSON structure containing the chart data which will be the
# output of this script
appData = {'data': chartdata}

# Print the JSON to system out
print json.dumps(appData, sort_keys=False, indent=4, separators=(',', ': '))

Performance_Msgs.app file
This sample shows the Performance_Msgs.app application file that references the
Performance_Msgs.py script and specifies the chart to display for the Custom Search Dashboard.

{
"name": "Performance Messages",
"description": "Displays charts showing performance messages over time",
"customLogic": {
"script": "Performance_Msgs.py",
"description": "View chart on search results",
"parameters": [
{
"name": "search",
"type": "SearchQuery",
"value": {
"filter": {
"range": {
"timestamp":{
"from":"01/01/2013 00:00:00.000 EST",
"to":"01/01/2014 00:00:00.000 EST",



"dateFormat":"MM/dd/yyyy HH:mm:ss.SSS Z"
}
}
},
"logsources": [
{
"type": "logSource",
"name": "MyTest"
}
],
}
},
],
"output": {
"type": "Data",
"visualization": {
"dashboard": {
"columns": 2,
"charts": [
{
"type": "Stacked Bar Chart",
"title": "Performance Message ID Counts - Last Year",
"data": {
"$ref": "perfMsgIdCountsOverTime"
},
"parameters": {
"xaxis": "date",
"yaxis": "count",
"categories": "perfMsgId"
}
},
{
"type": "html",
"title": "My Test - Expert Advice",
"data": {
"$ref": "htmlData"
},
"parameters": {
"html": "text"
}
}

]
}
}
}
}
}

Performance_Msgs chart
The chart shows the counts of each message for each day.



Defining a search filter app
You can use the search filter app to create a customized search query in IBM Operations Analytics Log
Analysis.
A search filter app is a custom search request that:
• searches databases for strings and timestamps
• creates Configured Patterns from the found strings
• uses the minimum and maximum timestamps found in the database to frame the search time range.
Before you create the search filter application, define the database connections. For more information,
see “Database Connections” on page 67.
IBM Operations Analytics Log Analysis includes two files in the <HOME>/IBM/LogAnalysis/
AppFramework/Templates/SearchFilters directory that you can use to implement a customized
search filter app:
• searchFilter.jar is a self-executable JAR file that contains Java classes. The classes support the
core functions of the app.
• searchFilter.sh is a wrapper script that starts the searchFilter.jar file after the appropriate
class path is set.
You must copy these files to the directory in <HOME>/IBM/LogAnalysis/AppFramework/Apps that
contains your application (.app) file.

Supported databases
The following databases are supported by default:
• Derby 10.10
• DB2 9.7
To add databases that are not supported by default, you must locate the JAR file that contains the type
4 JDBC driver for the database and change the location of the drivers in the classpath section of the
searchFilter.sh file.
1. Download the appropriate type 4 JDBC driver file for your database.
2. Copy the driver file to the <HOME>/IBM/LogAnalysis/AppFramework/Apps/<directory>
directory where <directory> is the directory that contains your search filter application.
3. Add the location of the new driver to the classpath parameter in the searchFilter.sh file.
At time of publication, Oracle Database 11g Release 11.1.0.0.0 is the only additional database tested.
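For example, after you copy the driver, the classpath line in searchFilter.sh might be extended as
follows. The variable name and the ojdbc6.jar file name are illustrative; check the script for the exact
classpath definition:

CLASSPATH=${CLASSPATH}:<HOME>/IBM/LogAnalysis/AppFramework/Apps/<directory>/ojdbc6.jar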

Database Connections
Create a database connection to help your search filter application uniquely identify sources of
structured data.
You can associate semi-structured data, such as a log file, with structured data, such as the transaction
status in the application database. To make this association, you must define a Database Connection,
which specifies the details required by IBM Operations Analytics Log Analysis to access the database.
After you have defined a Database Connection, you can use it when you configure your search filter
application.

Adding a Database Connection


This topic outlines the steps that you must follow to configure an events database as a Database
Connection.

Procedure
To add a Database Connection:



1. In the Data Sources workspace, click Add > Database Connections. The Add Database Connections
tab is displayed.
2. You are prompted to supply the following information:
Name
The name of the Database Connection.
Schema Name
The name of the table containing the event, for example, MISC.
JDBC URL
The IP address and path for the events database, for example,
jdbc:derby://ip_address:1627//opt/unity/database/UnityDB.
JDBC Driver
The JDBC driver, typically org.apache.derby.jdbc.ClientDriver.
Username
Type the username for the Database Connection.
Password
Type the password for the Database Connection.
3. Click OK.
A new entry is displayed in the Database Connections list.

Editing a Database Connection


You can edit the details of an existing Database Connection.

Procedure
To edit a Database Connection:
1. In the Data Sources workspace, expand the Database Connections list.
2. Select the Database Connection that you want to edit and click Edit. The Database Connection is
opened in a new tab.
3. Edit the data source as required.
4. To save the changes, click OK.

Deleting a Database Connection


You can delete an existing Database Connection.

Procedure
To delete a Database Connection:
1. In the Data Sources workspace, expand the Database Connections list.
2. Select the Database Connection that you want to delete and click Delete.
3. Click OK.

Creating the search filter app


You can create a search filter app to define a customized search query in IBM Operations Analytics Log
Analysis.

About this task


To create a search filter app:

Procedure
1. SearchFilter definition: Use this app definition to define your own search filter:



{
"name": "Example app for search filter",
"description": "Example app for search filter",
"customLogic": {
"script": "searchFilter.sh",
"description": "Example app for search filter",
"parameters": [
{
"name": "relationalQuery",
"type": "relationalQuery",
"value": {
"dataSource": {DATASOURCE_DETAILS},
"SQL": "SQL_QUERY"
}
}
],
"output": {
"type": "searchFilters",
"visualization": {
"searchFilters": {}
}
}
}
}

2. Update the dataSource field with your datasource details.


For example:

"value": {
"dataSource": {
"schema": "ITMUSER",
"jdbcdriver": "com.ibm.db2.jcc.DB2Driver",
"jdbcurl": "jdbc:db2://9.12.34.56:50000/WALE",
"datasource_pk": 1,
"username": "itmuser",
"password": "tbsm4120",
"name": "datasource"
},

3. Update the SQL field with your query.


For example:

"SQL": "select * from ITMUSER.test_OLSC"

Results
After you run the app, the customized search is displayed in the configured pattern section of the user
interface and the data source and time filters are set. The keywords, if any, and the count information are
also displayed.
Note:
1. If the app output contains data sources, the values that match the search criteria are returned on the
user interface. If no data sources are returned, the existing selections are retained and used.
2. If the app output contains time filters, the values that match the search criteria are returned on the UI.
If no time filters are returned, the existing selections are retained and used.
3. If the app output contains keywords, IBM Operations Analytics searches the data sources and time
filters that are returned and displays the keywords and the search hit count in the configured patterns
widget UI.

Example
The following code example demonstrates the logic that is used for the search filter app:

{
"name": "Sample Search filter app",
"description": "App for getting search filter",
"customLogic": {



"script": "searchFilter.sh",
"description": "App for getting search filter",
"parameters": [
{
"name": "relationalQuery",
"type": "relationalQuery",
"value": {
"dataSource": {
"schema": "ITMUSER",
"jdbcdriver": "com.ibm.db2.jcc.DB2Driver",
"jdbcurl": "jdbc:db2://9.12.34.56u:50000/WALE",
"datasource_pk": 1,
"username": "itmuser",
"password": "tbsm4120",
"name": "datasource"
},
"SQL": "select * from ITMUSER.test_OLSC"
}
}
],
"output": {
"type": "searchFilters",
"visualization": {
"searchFilters": {}
}
}
}
}

Note: The output > visualization sub parameter must be set to searchFilters.
The search filter app uses relationalQuery as the input parameters. The relational query is a JSON
object that consists of the following JSON keys:
• dataSource is the value of the data source key that defines database connection attributes such as
the JDBC connection details, schema name, and user credentials.
• SQL is the value of the SQL key that defines the query that you want to run against the database.
Note: If you are running a search filter against a data source registered with IBM Operations Analytics,
you can provide the dataSource name instead of the dataSource details. All details of the corresponding
dataSource are fetched from IBM Operations Analytics. The dataSource is created with the Database
Connections option in the Data Sources workspace.

{
"name": "relationalQuery",
"type": "relationalQuery",
"value": {
"dataSource": "myDataSourceName",
"SQL": "select * from MYDB.MYTABLE"
}
}

where myDataSourceName is the name of a data source that is defined in IBM Operations Analytics Log
Analysis.

Data elements
There are three parameters required for data elements:
id
Specify an identifier for the data element. The Custom Search Dashboard uses this value to determine
the data that is displayed for a particular chart.
fields
Specify an array containing field descriptions. Each field that you specify in the array has an ID, a
label, and a type.
rows
Use the rows array to specify a value for each of the fields that you have specified.



HTML elements
There are two parameters required for HTML elements:
id
Specify an identifier for the data element. The Custom Search Dashboard uses this value to determine
the data that is displayed. In the application file, the chart specifications use this id to specify the data
element used for the chart.
htmltext
Specify the HTML that you want to display. An HTML portlet is used to display the HTML that you
specify.

Example
This is a sample JSON output that contains both types of output: chart data and HTML:

{
"data":[
{
"id":"ErrorsWarningsVsTime",
"fields":[
{
"id":"timestamp",
"label":"Date",
"type":"TEXT"
},
{
"id":"severity",
"label":"severity",
"type":"TEXT"
},
{
"id":"count",
"label":"count",
"type":"LONG"
}
],
"rows":[
{
"timestamp":"10-21 13:00",
"severity":"W",
"count": 221
},
{
"timestamp":"10-21 13:00",
"severity":"E",
"count": 204
}
]
},
{
"id":"htmlData",
"htmltext":",<!DOCTYPE html><html><body><div><h1>Sample HTML</h1>
</div></body></html>"
}
]
}

In this example, the data id ErrorsWarningsVsTime is defined in the script.

chartdata.append({
'id':'ErrorsWarningsVsTime',
'fields':facetFields,
'rows':facetRows})

In the application file chart specification, the id ErrorsWarningsVsTime is referenced to specify the
data set used in the chart.

{
"type": "Tree Map",
"title": "Messages by Hostname - Top 5 over Last Day",
"data": {



"$ref": "ErrorsWarningsVsTime"
},
"parameters": {
"level1": "date",
"level2": "count",
"level3": "severity",
"value": "count"
}

Launching a new context tab in an HTML Custom Search Dashboard


You can provide a link within an HTML Custom Search Dashboard that launches a URL in a new tab within
the Custom Search Dashboard. The search or Custom Search Dashboard is executed with the context
parameters that you specify in the URL.
This example contains the URLs to launch searches with search parameters within a Custom Search
Dashboard:

{
"id": "htmlData",
"htmltext": "<!DOCTYPE html><html><body><br><div>
<a href=http://www.espncricinfo.com//>Normal URL - - ESPN Cricinfo</a></div>
<br><div><a href=https://192.168.56.101:9987/Unity/CustomAppsUI?name=
All%20Supported%20Visualizations&appParameters=[]>Custom App LIC
- - All Supported Visualizations</a></div>
<br><div><a href=https://192.168.56.101:9987/Unity/SearchUI?queryString=
diagnosticLevel:==Warning&timefilters={\"type\":\"relative\",\"lastnum\":\"1\",
\"granularity\":\"year\"}&dataSources=[{\"type\":\"datasource\",
\"name\":\"/DB2Diag1\"}]>Search UI LIC - - DB Log Events with Diagnostic
Level - Warning</a></div>
<br><div><a href=https://192.168.56.101:9987/Unity/SearchUI?queryString=
diagnosticLevel:==Severe&timefilters={\"type\":\"relative\",\"lastnum\":\"1\",
\"granularity\":\"year\"}&dataSources=[{\"type\":\"datasource\",
\"name\":\"/DB2Diag1\"}]>Search UI LIC - - DB Log Events with Diagnostic
Level - Severe</a></div></body></html>" }

Note: Any spaces that are required in your URL must be escaped. If you want to include a % character in
your URL, it must be added as %25.

Template-based search filter Custom Search Dashboard


A template has been provided for search filter Custom Search Dashboards. This template is located in the
<HOME>/IBM/LogAnalysis/AppFramework/Templates/SearchFilters directory.
To use this template, you must specify the template name in the type field:

{
"name": "Sample Search filter app",
"type":"SearchFiltersTemplate",
"description": "App for getting search filter",
"customLogic": {
"script": "searchFilter.sh",
"description": "App for getting search filter",
"parameters": [
{
"name": "relationalQuery",
"type": "relationalQuery",
"value": {
"dataSource": {
"schema": "ITMUSER",
"jdbcdriver": "com.ibm.db2.jcc.DB2Driver",
"jdbcurl": "jdbc:db2://9.12.34.56u:50000/WALE",
"datasource_pk": 1,
"username": "itmuser",
"password": "tbsm4120",
"name": "datasource"
},
"SQL": "select * from ITMUSER.test_OLSC"
}
}
],
"output": {
"type": "searchFilters",
"visualization": {
"searchFilters": {}
}
}
}
}

If you use this template, the application directory requires only the application file. You
do not need to copy the .sh script and JAR files into the application folder.

Modifying the search filter app core logic


The core logic of the search filter app is described here so that you can extend the default implementation.

Procedure
1. Create a connection to the data source or database that you specified in the input file by using the
Data Sources workspace.
2. Run the SQL query that is part of the input file for the app.
3. Tokenize the output of the SQL query and remove common words such as verbs.
4. Identify the timestamp column and set startTime to the minimum timestamp and endTime to the
maximum timestamp.
5. If you already know the data source against which you want to use this search filter, you can use the
logsource name in the output parameters. If you do not know this, use an asterisk (*).
6. Construct output in the JSON format. For example:

{
"keywords": [
"keyword1",
"keyword2"
],
"timefilters": {
"type": "absolute",
"startTime":"2012-06-24 05:30:00",
"endTime":"2013-06-24 05:30:00",
"lastnum": 7,
"granularity": "day"
},
"logsources": [
{
"type": "tag",
"name": "*"
}
]
}

Note: The source code of the default implementation (in Java) of the search filter app is
included with IBM Operations Analytics Log Analysis in the <HOME>/AppFramework/Templates/
SearchFilters directory. You can update the default implementation based on your requirements.
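
The steps above can also be illustrated with a short sketch. The following Python version is not the shipped implementation (which is in Java); it uses the standard sqlite3 module as a stand-in for the JDBC data source in the input file, a tiny stop-word list in place of real linguistic filtering, and hypothetical table and column names.

import json
import sqlite3

STOP_WORDS = {"is", "was", "the", "a", "an", "on", "of"}  # simplistic stand-in

# Steps 1 and 2: connect to the data source and run the SQL query from the
# input file. sqlite3 and the sample table stand in for the JDBC connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_OLSC (ts TEXT, message TEXT)")
conn.executemany("INSERT INTO test_OLSC VALUES (?, ?)",
                 [("2012-06-24 05:30:00", "disk error on node1"),
                  ("2013-06-24 05:30:00", "memory warning on node2")])
rows = conn.execute("SELECT ts, message FROM test_OLSC").fetchall()

# Step 3: tokenize the query output and drop common words.
keywords = sorted({word for _, message in rows
                   for word in message.split()
                   if word.lower() not in STOP_WORDS})

# Step 4: derive startTime and endTime from the timestamp column.
timestamps = [ts for ts, _ in rows]

# Steps 5 and 6: construct the JSON output; "*" matches any data source.
search_filter = {
    "keywords": keywords,
    "timefilters": {
        "type": "absolute",
        "startTime": min(timestamps),
        "endTime": max(timestamps)
    },
    "logsources": [{"type": "tag", "name": "*"}]
}
print(json.dumps(search_filter, indent=2))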

Adding a shortcut to a Custom Search Dashboard to the Table view toolbar


You can add a Custom Search Dashboard to the Table view toolbar so that you can launch it
quickly.

Procedure
1. Open the CustomAppsConfigFile.json Custom Search Dashboard configuration file located in the
<HOME>/wlp/usr/servers/Unity/apps/Unity.war/configs directory.
2. Add a link, an icon image, and a tooltip for each Custom Search Dashboard for which you want to
create a shortcut. The syntax for the shortcut is:

[{
"url": "<url>",
"icon": "<icon>",
"tooltip": "<tooltip>"
}]

where:
url
(Required)
The URL that you specify must be in the following format:

https://<ip_address>:9987/Unity/CustomAppsUI?name=<name>&appParameters=<params>

where <ip_address> is the IP address of the server on which IBM Operations Analytics Log
Analysis is installed and the parameters that you specify are:
name
(Required) Replace the <name> parameter with a valid Custom Search Dashboard name.
appParameters
If the Custom Search Dashboard that you want to launch requires that parameters are
specified, specify the required parameters in a JSON array in {key:value} format.
icon
Specify the path to the graphic that you want to use for your icon. If no icon is specified, the default
Custom Search Dashboard icon is used.
tooltip
Specify the text for the tooltip displayed when the mouse is placed on the icon. If no tooltip is
specified, the name of the Custom Search Dashboard is used as the tooltip text.
This example provides a link to a Custom Search Dashboard named AnomalyApp, with a shortcut icon
located in the Unity/images directory and the tooltip text Detect Anomalies:

[{
"url": "https://192.168.56.101:9987/Unity/CustomAppsUI?name=AnomalyApp&appParameters=[]",
"icon": "https://192.168.56.101:9987/Unity/images/context.gif",
"tooltip": "Detect Anomalies"
}]

Note: Any spaces that are required in your URL must be escaped. If you want to include a % character
in your URL, it must be added as %25.
3. Save the file.

Results
Your shortcut is added to the Table view toolbar. Double-click the icon to launch your Custom Search
Dashboard.
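
A syntax error in CustomAppsConfigFile.json can prevent the shortcuts from loading, so it can be worth validating the file after you edit it. The following Python check is a suggestion rather than part of the product; the path is a placeholder for your installation.

import json

# Replace <HOME> with your installation directory before running this check.
path = "<HOME>/wlp/usr/servers/Unity/apps/Unity.war/configs/CustomAppsConfigFile.json"

with open(path) as f:
    shortcuts = json.load(f)  # raises an error if the JSON syntax is invalid

for entry in shortcuts:
    if "url" not in entry:
        raise ValueError("Shortcut entry is missing the required 'url' field: %r" % entry)

print("%d shortcut(s) parsed successfully" % len(shortcuts))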

Appendix A. Notices

This information was developed for products and services that are offered in the USA.
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently available in
your area. Any reference to an IBM product, program, or service is not intended to state or imply that
only that IBM product, program, or service may be used. Any functionally equivalent product, program, or
service that does not infringe any IBM intellectual property right may be used instead. However, it is the
user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can
send license inquiries, in writing, to:

IBM Director of Licensing


IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
United States of America
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual
Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing


Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by
IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any
equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems and there is no guarantee that these measurements will be
the same on generally available systems. Furthermore, some measurements may have been estimated
through extrapolation. Actual results may vary. Users of this document should verify the applicable data
for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without
notice, and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without
notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform
for which the sample programs are written. These examples have not been thoroughly tested under
all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
© Copyright IBM Corp. 2015. All rights reserved.


Terms and conditions for product documentation


Permissions for the use of these publications are granted subject to the following terms and conditions.

Applicability
These terms and conditions are in addition to any terms of use for the IBM website.

Personal use
You may reproduce these publications for your personal, noncommercial use provided that all proprietary
notices are preserved. You may not distribute, display or make derivative work of these publications, or
any portion thereof, without the express consent of IBM.

Commercial use
You may reproduce, distribute and display these publications solely within your enterprise provided
that all proprietary notices are preserved. You may not make derivative works of these publications, or
reproduce, distribute or display these publications or any portion thereof outside your enterprise, without
the express consent of IBM.

Rights
Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either
express or implied, to the publications or any information, data, software or other intellectual property
contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use
of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not
being properly followed.
You may not download, export or re-export this information except in full compliance with all applicable
laws and regulations, including all United States export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS
ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT,
AND FITNESS FOR A PARTICULAR PURPOSE.

IBM Online Privacy Statement


Privacy Policy Considerations
IBM Software products, including software as a service solutions, ("Software Offerings") may use cookies
or other technologies to collect product usage information, to help improve the end user experience,
to tailor interactions with the end user, or for other purposes. In many cases no personally identifiable
information is collected by the Software Offerings. Some of our Software Offerings can help enable you
to collect personally identifiable information. If this Software Offering uses cookies to collect personally
identifiable information, specific information about this offering's use of cookies is set forth below.
Depending upon the configurations deployed, this Software Offering may use session and persistent
cookies that collect each user's user name and password for purposes of session management,
authentication, enhanced user usability, and single sign-on configuration. These cookies cannot be
disabled.
If the configurations deployed for this Software Offering provide you as customer the ability to collect
personally identifiable information from end users via cookies and other technologies, you should seek
your own legal advice about any laws applicable to such data collection, including any requirements for
notice and consent.
For more information about the use of various technologies, including cookies, for these purposes,
see IBM's Privacy Policy at http://www.ibm.com/privacy and IBM's Online Privacy Statement at http://
www.ibm.com/privacy/details in the section entitled "Cookies, Web Beacons and Other Technologies"
and the "IBM Software Products and Software-as-a-Service Privacy Statement" at http://www.ibm.com/
software/info/product-privacy.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
“Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and service names might be trademarks of IBM or other companies.
