IBM Operations Analytics Log Analysis
Version 1.3
User's Guide
IBM
Note
Before using this information and the product it supports, read the information in Appendix A,
“Notices,” on page 75.
Edition notice
This edition applies to IBM® Operations Analytics Log Analysis and to all subsequent releases and modifications until
otherwise indicated in new editions.
References in content to IBM products, software, programs, services or associated technologies do not imply that they
will be available in all countries in which IBM operates. Content, including any plans contained in content, may change
at any time at IBM's sole discretion, based on market opportunities or other factors, and is not intended to be a
commitment to future content, including product or feature availability, in any way. Statements regarding IBM's future
direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. Please
refer to the developerWorks terms of use for more information.
© Copyright International Business Machines Corporation 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Appendix A. Notices............................................................................................ 75
Trademarks................................................................................................................................................ 76
Terms and conditions for product documentation................................................................................... 76
IBM Online Privacy Statement.................................................................................................................. 77
Chapter 1. About this publication
This guide contains information about how to use IBM Operations Analytics Log Analysis.
Audience
This publication is for users of the IBM Operations Analytics Log Analysis product.
Publications
This section provides information about the IBM Operations Analytics Log Analysis publications. It
describes how to access and order publications.
Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision,
to use software products successfully. In this release, the IBM Operations Analytics Log Analysis user
interface does not meet all accessibility requirements.
Accessibility features
This information center, and its related publications, are accessibility-enabled. To meet this
requirement, the user documentation in this information center is provided in HTML and PDF formats,
and descriptive text is provided for all documentation images.
Providing feedback
We appreciate your comments and ask you to submit your feedback to the IBM Operations Analytics Log
Analysis community.
Typeface conventions
This publication uses the following typeface conventions:
Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from
surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons,
list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs,
property sheets), labels (such as Tip:, and Operating system considerations:)
• Keywords and parameters in text
Italic
• Citations (examples: titles of publications, diskettes, and CDs)
• Words defined in text (example: a nonswitched line is called a point-to-point line)
• Emphasis of words and letters (words as words example: "Use the word that to introduce a
restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
• New terms in text (except in a definition list): a view is a frame in a workspace that contains data.
• Variables and values you must provide: ... where myname represents....
Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from
surrounding text
• Message text and prompts addressed to the user
• Text that the user must type
• Values for arguments or command options
Search UI overview
Use this topic to help you to get started with the Search UI.
The Search UI contains the following elements:
1. Sidebar
Use the UI icons on the sidebar to open the Getting Started UI, a saved search, a search dashboard, or
the Administrative Settings UI.
2. Search Patterns pane
The Search Patterns pane lists the fields that are found in your search. To filter the search for a field,
click on the field and click Search.
3. Discovered Patterns pane
The Discovered Patterns pane lists fields and values. To display discovered patterns, you need to
configure the source types in the data source.
4. Timeline pane
The Timeline pane displays the current search results filtered by time. To drill down to a specific time,
click on a bar in the graph.
5. Timeline slider
Use the time line slider icon to narrow and broaden the time period.
6. Search box
Enter search queries in the Search field. When you click on a field in the Search or Discovered
Patterns pane, the query is displayed in this field.
7. Time filter list
Use the time filter list to filter search results for a specified time range.
Search UI reference
Use this reference topic to help you to navigate the Search user interface (UI).
Launch icon
To open the Search UI for the selected aggregated record. This feature is only available to Standard
Edition users.
Visualization tab
Title
Enter a name for the chart.
Searching data
You can search ingested data such as log files for keywords. Search results are displayed in a timeline and
a table format.
Procedure
1. From the Search workspace, click the New Search or Add Search tab to open a new search tab.
2. To search one or more data sources, select a leaf node from the Data Sources tree.
3. In the Time Filter pane, click the Time Filter list and select the time period for which you want to
search. Select Custom to specify a start time and date, and an end time and date for your search.
4. In the Search field, type the string for which you want to search in the log files. To view distribution
information for all logs, in the Search field, type the wildcard character (*).
To search for a partial string, type an asterisk (*) at the start and end of your search string. For
example, to search for strings that contain the phrase hostname, type *hostname*.
To narrow your search based on a service topology, type the service topology component on which
you want to base your search, followed by a colon (:), followed by your search string. For example,
service:DayTrader.
5. Click Search.
The first search you perform after the IBM Operations Analytics Log Analysis processes have been
restarted might take longer to complete than subsequent searches.
The user interface refreshes every 10 seconds. The updated results are displayed in the progress bar.
Maximum search results: The search returns a maximum of 1000 records by default. This
limit applies only to raw searches and not facet queries. You can configure this limit with the
MAX_SEARCH_RESULTS property in the unitysetup.properties file, for example MAX_SEARCH_RESULTS=1000.
Do not use a high value for the MAX_SEARCH_RESULTS parameter. When a large number of results are
returned, search performance degrades.
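For example, to allow more results to be returned (the value 2000 is illustrative), set the property in
the unitysetup.properties file and then restart Log Analysis:
MAX_SEARCH_RESULTS=2000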
Results
A graph displaying the distribution of matching events in the log is displayed. Log records containing a
match for your search term are also displayed in Table view.
When you search for a specific term, the term is highlighted within the individual log records to facilitate
faster analysis. If you search for a partial term, each term that contains the search phrase is highlighted.
Fields that contain only tagged values, in other words values that are contained within angled brackets
(<>), are not highlighted. If a field contains values that are tagged and values that are not tagged, the
tagged terms are removed and the remaining terms are highlighted as appropriate.
If your search spans data that is stored in the archive, IBM Operations Analytics Log Analysis displays the
initial results while it retrieves the rest of the data. You can interact with the initial search results while
IBM Operations Analytics Log Analysis generates the search results. The progress bar displays the search
progress.
To display the latest results during the search, click We have more results for you. To stop the search,
close the tab. To start another search while you are waiting for the first search to complete, click the Add
Search tab.
What to do next
If you want to load data that contains tags and want to keep the tagging, you can disable highlighting. To
disable highlighting:
1. Open the unitysetup.properties file.
2. Locate the ENABLE_KEYWORD_HIGHLIGHTING property and set it to false.
3. Save the file.
4. To restart IBM Operations Analytics Log Analysis, enter the following command:
<HOME>/IBM/LogAnalysis/utilities/unity.sh -restart
Query syntax
This section describes the combination of words, keywords, and symbols that you can use when
searching for phrases using IBM Operations Analytics Log Analysis.
The query syntax is based on the Indexing Engine query syntax. For more information, see:
https://wiki.apache.org/solr/SolrQuerySyntax
Indexing Engines use a number of different query parser plug-ins. Log Analysis supports the Lucene query
parser plug-in. For more information about the Lucene query syntax, see:
http://lucene.apache.org/core/5_1_0/queryparser/org/apache/lucene/queryparser/classic/package-
summary.html
Range queries
You can use range queries to search for field values that fall within a specified range. For example:
mod_date:[20020101 TO 20030101]
The search returns all the log records where the mod_date field is within the specified range, that is,
from 20020101 to 20030101 inclusive.
You can also use range queries to search for fields that are not dates. For example, to search for all
the log records that contain an ID between A and D, excluding A and D themselves, enter:
title:{A TO D}
DateMath queries
To help you to implement more efficient filter queries for dates, you can use DateMath queries.
For example, here are 4 possible DateMath queries:
• timestamp:[* TO NOW]
• timestamp:[1976-03-06T23:59:59.999Z TO *]
• timestamp:[1995-12-31T23:59:59.999Z TO 2007-03-06T00:00:00Z]
• timestamp:[NOW-1YEAR/DAY TO NOW/DAY+1DAY]
For more information, see the DateMathParser topic in the Lucene documentation at:
http://lucene.apache.org/solr/5_1_0/solr-core/org/apache/solr/util/DateMathParser.html
The following characters are special characters in the query syntax:
+ - && || ! ( ) { } [ ] ^ " ~ * ? : \ /
To escape a special character, use a backslash (\) before the character. For example, to search for the
query (1+1):2, enter:
\(1\+1\)\:2
To match multiple terms with a regular expression, use square brackets. For example, to search for moat
or boat, enter:
/[mb]oat/
Example queries
View example queries that use search patterns and regular expressions to search for entries.
Specifying a fieldName
The logRecord field is the value of the actual log record index. It cannot be used in Apache Solr RegEx
expressions. logRecord is always defined as 'text-general', and its values are tokenized. If fieldName
is not specified in the query, logRecord is used as the field name by default. For example, consider a
record with the following field:
fieldName:SUMMARY
fieldContents: "Transaction 12345 has failed with response time of 10 seconds and error code of
6789."
The syntax for querying using regular expressions is: {fieldName}:/{Apache Solr RegEx Expr}/
The query for this example is:
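A query of the following form matches the description below; treat it as an illustration rather than
the exact original expression:
SUMMARY:/.*([6-9]|[1-9][0-9]) seconds.*6789\./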
This query specifies that the fieldName SUMMARY is to be searched, and the Solr RegEx that must be
matched in this field. The Solr RegEx specifies that it must find a single-digit integer in the range
6-9 OR two digits in the range 10-99, immediately followed by the character sequence ' seconds', and
then a series of characters (.*) that ends with the character sequence '6789' (with the dot escaped
as \.).
Alternatively, the numerical range feature operator (<>) can be used:
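A query of the following form uses the interval operator; again, this is an illustration rather than
the exact original expression:
SUMMARY:/.*<6-99> seconds.*6789\./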
fieldName:SUMMARY
fieldContents: "Transaction 12345 has failed with response time of 10 seconds and error code
6789."
fieldName: USER
fieldContents: myhost.ibm.com
fieldContents: myhost.ibm.com
fieldContents: remotehost.ibm.com
fieldName: USER
fieldContents: sysadmin
fieldContents: user1
fieldContents: user2
…
or
fieldName: SUMMARY
fieldContents: "Transaction 12345 has failed with response time of 10 seconds and error code of
6789."
fieldName: USER
fieldContents: sysadmin
fieldContents: user1
fieldContents: user2
...
Because the text is tokenized, multiple AND clauses should be used for the SUMMARY field.
Note: Similar queries can be performed on fields with a TEXT DataType that are sortable and/or filterable.
fieldName:SerialNum
fieldContents:1
fieldContents:2
fieldContents:3
...
fieldContents:11
fieldContents:12
The query + SerialNum:[3 TO 10] returns records with a value from 3-10.
The query + SerialNum:{3 TO 10] returns records with a value from 4-10.
The query + SerialNum:{3 TO 10} returns records with a value from 4-9.
Match zero or more
Use an asterisk (*) to match the preceding shortest pattern zero or more times. For the string
"mmmnnn":
• m*n* # match
• m*n*o* # match
• .*nnn.* # match
• mmm*nnn* # match
Complement
Use a tilde (~) to negate the shortest pattern that follows it. For instance, "ab~cd" means: starts
with a, followed by b, followed by a string of any length that is anything but c, ends with d. For the
string "abcdef":
• ab~df # match
• ab~cf # match
• ab~cdef # no match
• a~(cb)def # match
• a~(bc)def # no match
Interval
Use angle brackets (<>) to specify a numeric range. For the string "solr90":
• solr<1-100> # match
• solr<01-100> # match
• solr<001-100> # no match
Search Patterns
To refine your search, use the values in the Search Patterns pane. For each new search, the list of fields
with which you can filter your search is updated and listed in the Search Patterns pane. The number of
occurrences of each value that is found is displayed with each unique keyword added as a child node.
Click a keyword to add it to the Search field.
The keyword is added in the format field:"value". You can add multiple keywords to refine your
search. If you want to run an OR query, type the word OR between each added keyword search string.
When you add all of the search criteria, click Search to return log lines that contain the values that you
specified.
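For example, a query that combines two added keywords with an OR looks like the following; the field
name and values are illustrative:
severity:"E" OR severity:"W"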
Discovered Patterns
When you search a data source that has been configured with a Source Type that uses the Generic
annotator, the results of the search are listed in the Discovered Patterns pane.
For each new search, the list of fields with which you can filter your search is updated and listed. The
counts in the Discovered Patterns pane indicate the number of records that contain a specific key or
key-value pair. A key-value pair might occur multiple times in a record, but the total reflects the number of
records in which the key-value pair occurs. The count of the value of nodes in a key-value pair tree might
exceed the key count when multiple values occur for the same key in a single record.
Click a keyword to add it to the Search field. The keyword is added in the format field:"value".
You can add multiple keywords to refine your search. If you want to run an OR query, type the word OR
between each added keyword search string. When you add all of the search criteria, click Search to return
log lines that contain the values that you specified.
Saving a search
After you search for a keyword or series of keywords, you can save your search so that you can run it again
at a later time. The searches you save are added to the Quick Searches pane.
Procedure
To save a search:
1. In the Search workspace, click the Save Quick Search icon.
The Save Quick Search dialog box is displayed.
2. Enter a value in the Name and Tag fields. Adding a tag allows you to group similar searches within a
folder.
3. (Optional) Specify a time range as an absolute or relative time. The default option is relative time.
4. Click OK.
The search is saved to the Quick Searches pane.
What to do next
To use a saved search pattern, browse to the saved search in the Quick Searches pane and double-click
the search pattern that you want to launch. You can also edit and delete the search from the right-click
menu.
Saved searches
To display a list of saved searches, click the Saved Searches icon.
The following saved searches are available by default after you install the sample data:
sample WAS System Out
Example search that displays results from WebSphere® Application Server.
sample DB2 db2diag
Example search that displays results from DB2®.
sample MQ amqerr
Example search that displays results from IBM MQ.
sample Oracle alert
Example search based on alerts for the Oracle sample data.
Visualizing data
You can create charts and graphs that help users to process information quickly and efficiently.
Procedure
1. In the Search workspace, select Grid view.
2. Select one or more columns.
3. Click the Plot Column icon in the Grid view toolbar.
The Plot Chart UI is displayed.
4. To display counts for the selected columns, select the Generate Counts check box.
5. If the columns that you selected contain dates or numeric values, you can use the Granularity field to
specify the granularity.
You can use this setting only for columns that contain dates or numeric values and can be filtered.
6. If the columns that you selected contain numeric values, you can also apply statistical functions on the
values.
• If you are using the Entry Edition, you can use the sum, min, max, avg, and count functions.
• If you use the Standard Edition, you can use the missing, sumOfSquares, stddev, and percentile
functions.
7. To plot the chart on 100 or fewer of the records that you selected, click Plot Chart (Current Page
Data).
8. To plot the chart on all the indexed data, click Plot Chart (All Data). If one or more of the fields in
the selected columns is not filterable, the charts are only plotted if the total number of records is less
than 1000. To change this setting, you must modify the MAX_DATA_FACETS_IN_CHART property in
the unitysetup.properties file.
Results
The graph is rendered. To change the graph type, use the Edit icon.
If the chart title contains a loading icon, the chart is loading data from the archive. The chart is
automatically updated when all the searches are complete. If you log out before the search is completed,
the search stops.
For example, the percentile function returns results in the following format:
{
 "min": 10,
 "max": 1000,
 "percentile":
 {
  "50": 100,
  "95": 200,
  "99": 225
 }
}
Percentile queries are not calculated incrementally, unlike the other queries that are used in Log
Analysis. This means that the query needs to run over the entire time range before it can return any
results. Log Analysis limits the number of asynchronous windows that can run simultaneously for this
function. This limit is set in the MAX_NON_INCREMENTAL_WINDOWS property in the unitysetup.properties
file. The default value is 2.
For example, if you specify a percentile query based on a time range from August 1 2015 to August 10
2015 and MAX_NON_INCREMENTAL_WINDOWS=5 and COLLECTION_ASYNC_WINDOW=1d, only the most
recent 5 days of data that is returned by the query are considered for percentile evaluation.
Dashboards
You can use dashboards to collate multiple charts, which are created during problem diagnosis, on a
single user interface (UI).
For example, imagine an organization uses IBM Operations Analytics Log Analysis to monitor all the server
logs that it generates. The system administrator wants to be able to view the most critical errors, the
highest severity errors, and the total number of errors on a single UI. To facilitate this scenario, you create
a dashboard that is called System Admin and you add the charts that show the required information to it.
The data that is displayed on the dashboards is based on charts. For more information, see the Charts
topic under Custom Search Dashboard > Steps to create a Custom Search Dashboard > Application
files in the Extending IBM Operations Analytics Log Analysis section.
Sample dashboards
Sample dashboards are included as part of the sample content for the following Custom Search
Dashboard samples:
• Sample_EventInsightpack_v1.0.0.0
• Sample_AppTransInsightpack_v1.0.0.0
• Sample_weblogInsightpack_v1.0.0.0
• WASInsightPack_v1.1.0.3
Procedure
1. Open an existing search UI or click Add New Search to create a new search UI.
2. Switch to the Grid View and plot the charts that you would like to include in the dashboard.
You cannot use more than eight charts in a single dashboard.
3. To plot the charts that you would like to include in the dashboard, select the columns that you are
interested in and click the Plot column icon. Click Plot Chart (All Data).
If you want to use the drill-down feature, you must click Plot Chart (All Data). You cannot use the Plot
Chart (Current Page Data) button. The drill-down function is not supported for this option.
You can also run a number of statistical operations on the selected columns. Select one of the
following functions from the Summary Function drop-down list.
min
The minimum value in a field.
max
The maximum value in a field.
sum
The sum of the values in a field.
avg
The average of the values in a field.
count
The count of the values in a field.
missing
The number of records for which the value for a field is missing.
sumOfSquares
Sum of the squares of the values in a field.
stddev
The standard deviation of the values in a field.
4. To create the dashboard, click the Create New Dashboard button. Enter a name and a tag. The tags
are used to define groupings.
5. Save the dashboard.
What to do next
After the dashboard is created, a Custom Search Dashboard is automatically generated that represents
the dashboard. You can view the Custom Search Dashboard and dashboard on the Search UI under
Search Dashboard > Dashboards.
Deleting dashboards
To delete a dashboard, complete these steps.
Procedure
1. Open the Search UI.
2. Open the Search Dashboards list and select the dashboard that you want to delete.
3. Right-click the dashboard and click Delete. Confirm that you want to delete it when prompted.
Results
If the dashboard is a dynamic dashboard, the dashboard and associated data is deleted. If the dashboard
is a Custom Search Dashboard, the Custom Search Dashboard file extension is changed to .DELETED. You
can contact the IBM Operations Analytics Log Analysis administrator and ask them to delete the Custom
Search Dashboard if appropriate.
Aggregated Search
Use Aggregated Search to search aggregated data from multiple instances of Log Analysis.
You can install multiple instances of Log Analysis. For example, your environment might consist of data
centers in different regions. You install an instance of Log Analysis in each region. The Aggregated Search
feature helps you to view, search, and drill down into the aggregated data from these instances of Log
Analysis.
Before you can use Aggregated Search, you must configure it, including how you want to aggregate data
that is sent from the children to the parent node. For more information, see “Configuring Aggregated
Search” on page 25.
Configuring Aggregated Search
To configure Aggregated Search, complete the following steps.
1. To specify the topology of your cluster, create the required JSON file on each Log Analysis server in
your cluster. For more information, see “Defining the topology for Aggregated Search” on page 25.
2. Configure Aggregated Search automatically or manually. You can also configure Lightweight Directory
Access Protocol (LDAP) for user authentication as part of this step.
For more information about how to configure it automatically, see “Configuring Aggregated Search
automatically” on page 29.
For more information about how to configure it manually, see “Configuring Aggregated Search
manually” on page 30.
3. Create a data aggregation template to specify how data is aggregated. For more information, see
“Creating aggregation templates” on page 35.
4. Review the information about how you need to set up your users, roles, and access permissions
to ensure that records are visible. For more information, see “Role-based Access Management for
Aggregated Search” on page 31.
Defining the topology for Aggregated Search
Before you use the Aggregated Search feature, you must define the topology.
Procedure
1. Create a JSON file and specify the topology of the data center in it. Specify the parameters described
in the Topology parameters table.
Example
The following example displays the structure of the input JSON object.
{
  "AP": {
    "OS_user": "unity",
    "sshPort": "22",
    "LA_home": "/home/unity/IBM/LogAnalysis/",
    "LA_host": "server1.example.com",
    "LA_port": 9987,
    "LA_user": "unityadmin",
    "LA_password": "unityadmin",
    "children": ["India", "Australia"],
    "connected": []
  },
  "Europe": {
    "OS_user": "unity",
    "sshPort": "22",
    "LA_home": "/home/unity/IBM/LogAnalysis/",
    "LA_host": "server2.example.com",
    "LA_port": 9987,
    "LA_user": "unityadmin",
    "LA_password": "unityadmin",
    "children": ["UK", "France"]
  },
  "India": {
    "OS_user": "unity",
    "sshPort": "22",
    ...
When you export an aggregation template, the level is specified in the JSON file. The top or root node
is the Global node, and it is designated level 0 in an exported aggregation template. The regions, AP
and Europe, are on the second level and are designated level 1 in the template. The countries are
defined on the third level and are assigned level 2. The bottom level contains the cities and is
designated level 3. You can only import templates from the same level; you cannot import templates from
another level. For example, if you export a template from the Pune node on the bottom level, you cannot
import it into the India node.
What to do next
After you create the topology, you need to enable Aggregated Search.
If you are automatically configuring Aggregated Search, run the utility. For more information, see
“Configuring Aggregated Search automatically” on page 29.
Configuring Aggregated Search automatically
To automatically configure Aggregated Search, complete the following steps.
Procedure
1. Go to the <HOME>/IBM/LogAnalysis/utilities/aggregation directory.
2. To copy the topology file and enable Aggregated Search in the unitysetup.properties file, enter
the following command:
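Based on the combined example at the end of this topic, the command is similar to the following sketch;
verify the options against your installation:
./aggregationConfigUtil.sh -o setuptopology -tf ~/<topology_file_name> -n all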
where <topology_file_name> is the name of the topology file. For example, example.json.
3. To set up the Java™ certificates, enter the following command:
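Based on the combined example at the end of this topic, the command is similar to the following sketch:
./aggregationConfigUtil.sh -o setupjavakeystores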
4. You can also configure Lightweight Directory Access Protocol (LDAP) for user authentication. This
step is optional and is only required if you want to use LDAP. To configure LDAP, enter the following
command:
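Based on the combined example at the end of this topic, the command is similar to the following sketch:
./aggregationConfigUtil.sh -o configldap -lp ~/ldapRegistryHelper.properties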
5. You can also configure single sign-on (SSO) to facilitate the drill-down feature. To configure SSO,
you must set up LDAP as described in step 4. This step is optional and is only required if you want to
use
SSO when you drill down. To configure SSO and update the server.xml file with the domain name, enter
the following command, specifying the path to the keys file:
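Based on the combined example at the end of this topic, the command is similar to the following sketch:
./aggregationConfigUtil.sh -o configsso -kf ~/<path_ltpa_file>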
where <path_ltpa_file> is the path to the key file that is copied to all nodes in the cluster. For example,
home/la/ltpa.keys.
6. Restart each instance of Log Analysis. To restart Log Analysis, log in to each server and enter the
following command:
<HOME>/IBM/LogAnalysis/utilities/unity.sh -restart
If any certificate errors occur when you use the checksso option, enter the following command:
You can also configure Aggregated Search with a single command. You can omit any options that you do
not want to use such as configsso. For example:
./aggregationConfigUtil.sh -o setuptopology,setupjavakeystores,configldap,configsso
-tf ~/<topology_file_name> -n all -lp ~/ldapRegistryHelper.properties
-kf ~/<path_ltpa_file>
After you configure Aggregated Search, you can create an aggregation template to specify how the data is
aggregated and is sent from the children nodes to the parent nodes. For more information, see “Creating
aggregation templates” on page 35.
Configuring Aggregated Search manually
To manually configure Aggregated Search, complete the following steps.
Procedure
1. To stop the Log Analysis server, enter the following command:
<HOME>/IBM/LogAnalysis/utilities/unity.sh -stop
2. Open the unitysetup.properties file.
3. To enable the Aggregated Search feature, set the following property:
ENABLE_AGGREGATION = true
4. Specify the path to the JSON file that contains the topology information.
For example:
DC_TOPOLOGY_PATH=<path_topology_json>
where <path_topology_json> is the path to the file that contains the topology information. For
example, /home/la/datacenter_topology.json.
5. Specify the name of the local node. This name must correspond to the name specified in the topology
JSON file.
For example, if this server is the India data center, specify India.
To import the certificates for the parent and children nodes into the Log Analysis keystore, enter the
following command:
<HOME>/IBM/LogAnalysis/ibm-java/bin/keytool -import
 -file <crt_file_path>
 -keystore <HOME>/IBM/LogAnalysis/wlp/usr/servers/Unity/resources/security/key.jks
 -alias <alias_name> -storepass loganalytics
where <crt_file_path> is the full path to the temporary directory you saved the client.crt
file. For example, /tmp/AP_cert/client.crt. <alias_name> is the name of each certificate
that you want to import for the parent and children nodes. Use a unique alias for each certificate
that you import.
8. If you want to use LDAP for authentication, configure it. This step is optional and is only required for
authentication.
9. You can also configure single sign-on (SSO). This step is optional. It is required to allow users to drill
down without having to sign in to each node.
10. To start the Log Analysis server, enter the following command:
<HOME>/IBM/LogAnalysis/utilities/unity.sh -restart
What to do next
After you configure Aggregated Search, you can create an aggregation template to specify how data that is
sent from the children nodes is aggregated. For more information, see “Creating aggregation templates”
on page 35.
Role-based Access Management for Aggregated Search
To view aggregated data, the user must have read access to the data sources whose data was aggregated.
Aggregated data flows upwards from child nodes to the parent node. The drill-down feature allows users
in the parent node to view data that is generated in the child nodes. Users in the parent node can search
all aggregated data but they can drill down only to the records that they have access to. Access is defined
in the data source that collected the data.
For example, consider the following situation. Assume that this example has three instances of Log
Analysis:
• Asia Pacific (AP). It represents the region and is the global node that is at the top of the hierarchy.
• India. It represents the country and is a child node that is in the middle of the hierarchy.
• Bangalore. It represents the city and is a child node that is at the bottom of the hierarchy.
The Bangalore node contains two data sources, DS1 and DS2. Different users have access to these data
sources. DS1 contains 10 records and DS2 contains 5. user1 can view records from DS1. user2 can view
records from DS2 only. user3 can view all 15 records.
You create two aggregation templates in the Bangalore node. The templates are called Aggregation1
and Aggregation2. The data that is aggregated by Aggregation1 is collected by DS1. The data that is
aggregated by Aggregation2 is collected by DS2. Aggregation1 uses the COUNT function to count the
number of records that are collected by DS1, in this case 10. Aggregation2 uses the COUNT function
to count the number of records that are collected by DS2, in this case 5. Both templates send the
aggregated data to the India node. It is sent from here to the _aggregations data source. Aggregation1
sends one aggregation record with an aggregated value of 10 to the India node. Aggregation2 sends one
aggregation record with an aggregated value of 5 to the India node. Both aggregation records are loaded
by the _aggregations data source.
In the India node, you create an aggregation template, Aggregation3, which uses the SUM functions
to add the aggregated values in the _aggregations data source. Aggregation3 creates one aggregation
record with an aggregated value of 15 and sends it to the AP node where it is loaded by the _aggregations
data source.
When users log in to the AP node and search the _aggregations data source, one aggregation record
with an aggregated value of 15 is displayed. When they drill down to the India node, one or two
aggregation records with aggregated values of 10 and 5 are displayed. The number depends on each user's
access as assigned in the data sources in the child nodes, in this case DS1 and DS2.
When user1 logs in to the AP node and searches the _aggregations data source, one aggregation record
with an aggregated value of 15 is displayed. When user1 drills down, one aggregation record with a value
of 10 is displayed. This aggregation record is generated from the Aggregation1 template that is specified
in the Bangalore node. user1 cannot view the aggregation record with a value of 5 that is generated by
the Aggregation2 template. This situation occurs because user1 does not have access to view records
from DS2. When user1 or user3 logs in to the Bangalore node, they can view the 10 records that are
loaded by DS1; when user2 or user3 logs in, they can view the 5 records that are loaded by DS2. These
records are the basis for the aggregation records in the other nodes.
When user2 logs in to the AP node, one aggregation record is displayed with an aggregated value of 15.
When user2 drills down to the India node, one aggregation record is displayed with an aggregated value of
5. They can also drill down to view five records in the Bangalore node. These records are the records that
are associated with DS2. user2 cannot view the records that are associated with DS1.
When user3 logs in to the AP node, one aggregation record with a value of 15 is displayed. When user3
drills down to the India node, two aggregation records are displayed with aggregated values of 10 and 5.
They can also drill down to view 15 records that are associated with DS1 and DS2 in the Bangalore node.
This example is summarized in the following table:
User    AP node                           India node                                 Bangalore node
user1   1 aggregation record (value 15)   1 aggregation record (value 10)            10 records (DS1)
user2   1 aggregation record (value 15)   1 aggregation record (value 5)             5 records (DS2)
user3   1 aggregation record (value 15)   2 aggregation records (values 10 and 5)    15 records (DS1 and DS2)
Datacenter Topology UI
Use the Datacenter Topology UI to view the hierarchy of the Log Analysis servers in your data center and
to navigate from one Log Analysis server to another connected Log Analysis server search page.
To open the Datacenter Topology UI, click the Datacenter Topology icon.
The Datacenter Topology window displays all of the datacenter nodes. The current node is highlighted in
red.
You cannot access Log Analysis server nodes that are highlighted in grey. The nodes that can be
accessed are those which are specified in the connected parameter in the topology JSON file. For more
information, see “Defining the topology for Aggregated Search” on page 25.
Drilling down into aggregated data
To search aggregated data and drill down to the records on a child node, complete these steps.
Procedure
1. Click the New Search or Add Search tab to open a new search tab.
2. Select the _aggregations data source from the Data Sources tree.
3. Click Search.
4. To view the aggregated search details, click Grid.
5. Select the log record for which you want to drill down to a specific aggregation.
6. Click the Launch icon. A new window displays the next level Log Analysis server UI with the
corresponding log file records.
Manage Aggregations UI reference
Use the Manage Aggregations UI to create, edit, delete, import, and export aggregation templates.
The following table lists the icons on the Manage Aggregations UI:
The following table lists the fields on the Add Aggregation Template UI:
Creating aggregation templates
Before you can use the Aggregated Search feature, create an aggregation template. Aggregation
templates define the data that is aggregated and how it is aggregated before it is sent to the parent
node.
Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.
When you create or edit a template, the following icon and fields are available:
Query Builder icon
Click this icon to open the Query Builder UI. Use it to create a query based on the regions,
aggregation templates, and source types that you want to search. You can use this UI only on regional
nodes; it is not available on the top or bottom nodes in your topology.
Aggregation Function
Select the aggregation function from the drop-down list. Possible values are COUNT, MIN, MAX, and SUM.
If you use the COUNT value, you do not need to use the Aggregation Attribute field.
Aggregation Attribute
Select the attribute that the aggregation function is run against.
What to do next
You can import and export your templates. You can also edit and delete them.
Exporting aggregation templates
You can export aggregation templates to back up your data, and to import it into another Log Analysis
server.
Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.
3. Select one or more templates that you want to export, and click the Export Templates icon.
4. Select the path to the directory where you want to export the template to in the File Selector field, and
click OK.
What to do next
The aggregation templates that you selected are exported in the JSON format. You can import these
templates into other instances of Log Analysis.
Importing aggregation templates
To import aggregation templates, complete these steps.
Procedure
1. Export the Aggregated Search aggregation template.
For more information, see “Exporting aggregation templates” on page 36.
2. Click Administrative Settings.
3. Click the Manage Aggregations tab.
4. Click the Import Templates icon, and select the file that you want to import.
Results
The aggregation templates are imported.
Editing aggregation templates
To edit an aggregation template, complete these steps.
Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.
3. Select the template that you want to edit.
4. To edit the template, click the Edit icon.
5. Edit the template values as required. For more information about the template values, see “Manage
Aggregations UI reference” on page 33.
6. To save the changes to the template, select OK.
Deleting aggregation templates
To delete an aggregation template, complete these steps.
Procedure
1. Click Administrative Settings.
2. Click the Manage Aggregations tab.
3. Select one or more templates that you want to delete.
4. To delete the template, click the Delete Template icon.
5. To confirm that you want to delete the template, select OK.
Results
The selected Aggregated Search aggregation template is deleted.
Enabling and disabling Aggregated Search
To turn the Aggregated Search feature on or off, complete these steps.
Procedure
1. To stop the Log Analysis server, enter the following command:
<HOME>/IBM/LogAnalysis/utilities/unity.sh -stop
2. Open the unitysetup.properties file. The feature is on when the following property is set:
ENABLE_AGGREGATION = true
To turn the Aggregated Search feature off, change the property value to false:
ENABLE_AGGREGATION = false
3. To start the Log Analysis server, enter the following command:
<HOME>/IBM/LogAnalysis/utilities/unity.sh -start
Search dashboards
To display a list of search dashboards, click the Search Dashboards icon on the side bar.
The Dashboards group contains the following search dashboards:
sample-events-hotspots
Displays example hot spot reports for the sample events.
WAS Errors and Warnings Dashboard
Displays example reports for errors and warnings that are generated by the sample WebSphere
Application Server application.
Sample-Web-App
Displays example reports for errors and warnings that are generated by the sample web application.
The DB2AppInsightPack group contains the following search dashboards:
DB2 Information Links
Displays useful information links for more information about DB2.
DB2 Troubleshooting
Displays example reports for DB2.
The ExpertAdvice group contains the following search dashboard:
IBMSupportPortal-ExpertAdvice
Displays search results based on your searches.
The WASAppInsightPack group contains the following search dashboards:
WAS Information Links
Displays useful information links for more information about WebSphere Application Server.
WAS Errors and Warnings
Displays example reports based on errors and warnings in WebSphere Application Server.
Alerts dashboard
Use the Alerts dashboard to view information based on the alerts that you create in Log Analysis.
This feature is not available in the Entry Edition.
The data that is displayed on the dashboard is based on the last time data from one of the data sources
used for alerts was loaded into Log Analysis. This means that there is a delay between changes to your
alerts and the information that is displayed on the dashboard. In this case, you need to wait until data is
loaded into Log Analysis again to see the latest data.
If the dashboard does not display any data, ensure that the time that you specified matches a period
when alerts were created.
Prerequisites
You must install IBM Operations Analytics Log Analysis Fix Pack 1.
You must create alerts in Log Analysis. For more information, see the documentation about creating
alerts.
Prerequisite knowledge
Creating a Custom Search Dashboard requires that you have a knowledge of these areas:
• Coding in one of the accepted formats: Python, Shell scripting, or Java programming
• JSON development
Procedure
1. The source log file you want to use must be loaded and processed by IBM Operations Analytics Log
Analysis.
2. Using Python, shell scripting, or Java, create a script. The script and its corresponding application
or template file must reside in the same directory: <HOME>/AppFramework/Apps. If no value is
specified for the type parameter at the top of the application file, the application file runs the script
from the same folder as the application file.
5. (Optional) If you want to create a template for your application, create a template in the directory:
<HOME>/AppFramework/Templates. If a template name has been specified in the type field,
the application file references that template and executes the script in the same directory as that
template. That is, the application file and script must be located in the same directory as the template
file.
Insight Packs and templates: If you want to include the application in an Insight Pack project, the
script, the application file, and the template file must reside in the same folder within the project:
src-files/unity_apps/templates.
Scripts
The script that is executed can be written as a Python script, shell script, or a Java application packaged
as an executable JAR file. When you execute the application, the parameters are written to a temporary
file in JSON format, and the file name is passed to the script. The script reads the parameters from
the file. And the script generates the output in the standard JSON format required for the dashboard
specification.
Within an Insight Pack project, scripts must reside in the same folder as the application or template that
references it, either in src-files/unity_apps/apps or src-files/unity_apps/templates.
In the script, use an HTTP POST request to query an IBM Operations Analytics URL for JSON data. The
query uses the same method as the queries you can run in the Search Workspace. For the query, use
the JSON format for Search requests as described in Search REST API. If you need to narrow down the
returned data further, you can parse the data within the script.
To connect to the Operations Analytics server, create an instance of UnityConnection() and call the
UnityConnection.login() method. The UnityConnection() object takes the parameters:
• url - base URL to the Operations Analytics server; if you changed the default port number during install,
you should also change it here
• username - username used for authentication
• password - password used for authentication
For example:
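The following is a minimal sketch; it assumes that UnityConnection is provided by the UnityAppMod
module that is imported in the example script later in this topic, and the URL and credentials are
placeholders:

import UnityAppMod

# Placeholders: use your server's base URL and real credentials.
# The default port is 9987 unless you changed it during installation.
unity_connection = UnityAppMod.UnityConnection('https://localhost:9987/Unity',
                                               'unityadmin', 'unityadmin')
unity_connection.login()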
The parameters to the Python script are written to a file in JSON format, and the filename is passed to the
script. For example, the parameters can be passed in a file containing the following:
{
  "parameters": [
    {
      "name": "search",
      "type": "SearchQuery",
      "value": {
        "filter": {
          "range": {
            "timestamp": {
              "from": "01/01/2013 00:00:00.000 EST",
              "to": "01/01/2014 00:00:00.000 EST",
              "dateFormat": "MM/dd/yyyy HH:mm:ss.SSS Z"
            }
          }
        },
        "logsources": [
          {
            "type": "logSource",
            "name": "SystemOut"
          }
        ]
      }
    }
  ]
}
with open(sys.argv[1]) as f:  # the parameter file name is passed as the first argument
    data = json.load(f)

parameters = data['parameters']
for i in parameters:
    if i['name'] == 'search':
        search = i['value']
        for key in search.keys():
            if key == 'filter':
                filter = search['filter']
To make an HTTP request that uses the Search runtime API, use the connection.post() and
get_response_content() methods. For example, build the request body from the parameters:

request = {
    "logsources": logsource,  # this is the value from the parameters
    "query": "*",
    "filter": filter  # this is the value from the parameters
}
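The call itself might then look like the following sketch; the /Unity/Search path and the exact method
signatures are assumptions based on the description above:

# Illustrative only: post() and get_response_content() are provided by the
# connection module; check UnityAppMod for the exact signatures.
response = unity_connection.post('/Unity/Search', json.dumps(request))
data = json.loads(response.get_response_content())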
The following example shows a complete search request:
{
"start":0,
"results":1,
"filter":{
"range":{
"timestamp":{
"from":"5/31/2012 5:37:26.682 -0400",
"to":"5/31/2013 5:37:26.682 -0400",
"dateFormat":"MM/dd/yyyy HH:mm:ss.SSS Z"
}
}
},
"query":"severity:(W OR E)",
"sortKey":[
"-timestamp"
],
"getAttributes":[
"timestamp",
"severity"
],
"facets":{
"dateFacet":{
"date_histogram":{
"field":"timestamp",
"interval":"hour",
"outputDateFormat":"MM-dd HH:mm",
"nested_facet":{
"severityFacet":{
"terms":{
"field":"severity",
"size":10
}
}
}
}
}
},
"logsources":[
{
"type":"logSource",
"name":"/SystemOut"
}
]
}
• The query searches for Warnings or Errors and the results are sorted by timestamp. Only the timestamp
and severity attributes are returned in the results.
"query":"severity:(W OR E)",
"sortKey":["-timestamp"],"getAttributes":["timestamp","severity"],
"logsources":[{"type":"logSource","name":"/SystemOut"}]
• (Optional) Facet requests that you can use to create sums, time intervals, or other statistical data
for a given field/value pair. In this example there is a date_histogram facet with a nested terms
facet. Within each time interval returned by the date_histogram facet, the term facet, called
severityFacet, counts the number of each type of severity.
"facets":{"dateFacet":{"date_histogram":{"field":
"timestamp","interval":"hour","outputDateFormat":"MM-dd HH:mm",
"nested_facet":{"severityFacet":{"terms":{"field":"severity","size":10}}}}}},
1,000 or more results and Custom Search Dashboards: When a query in a Custom Search Dashboard
returns more than 1000 records, you get only 1000 results back. The search result includes a
totalResults field, which shows the total number of matching results, and a numResults field, which
gives the number of records that were returned. You can check these values in the Custom Search
Dashboard script and handle the results accordingly.
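For example, a script might guard against truncated results like this (a sketch, assuming the response
has been parsed into a dictionary named data):

total = data.get('totalResults', 0)
returned = data.get('numResults', 0)
if returned < total:
    # Only a subset of the matching records was returned; narrow the
    # query or page through the results with the start parameter.
    print('Returned %d of %d matching records' % (returned, total))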
{"searchRequest":{"start":0,"results":1,"filter":{"and":
[{"range":{"timestamp":{"from":"5\/31\/2012 5:37:26.682 -0400",
"to":"5\/31\/2013 5:37:26.682 -0400",
"dateFormat":"MM\/dd\/yyyy HH:mm:ss.SSS Z"}}},{"or":[{"phrase":
{"logsource":"SystemOut"}}]},
{"range":{"_writetime":{"dateFormat":"yyyy-MM-dd'T'HH:mm:ss.SSSZ","from":
"2013-05-25T00:00:00.000-0400","to":"2013-05-31T05:48:18.415-0400"}}}]},
"query":"severity:(W OR E)",
"sortKey":["-timestamp"],"getAttributes":["timestamp","severity"],
"facets":{"dateFacet":{"date_histogram":{"field":"timestamp","interval":"hour",
"outputDateFormat":"MM-dd HH:mm","nested_facet":{"severityFacet":
{"terms":{"field":"severity","size":10}}},"__usr_fast_date_query":"UTC"}}},
"logsources":[{"type":"logSource","name":"\/SystemOut"}],"collections":
["WASSystemOut-Collection1"]},"totalResults":805,"numResults":1,"executionInfo":
{"processingTime":33,"searchTime":54},"searchResults":[
{"resultIndex":0,"attributes":
{"msgclassifier":"TCPC0003E","threadID":"00000000","message":
"TCP Channel TCP_2 initialization failed. The socket bind failed for host
SystemOut.log
SystemOut.logOneLiners SystemOut.logOneLinersNoTS unity_populate_was_log.sh
unity_search_pattern_insert.sh
was_search_pattern.txt x and port 9080. The port may already be in use._LOG_2_",
"_writetime":"05\/27\/13 12:39:03:254 +0000","logsourceHostname":
The same search request, formatted for readability:
{
  "searchRequest": {
    "start": 0,
    "results": 1,
    "filter": {
      "and": [
        {
          "range": {
            "timestamp": {
              "from": "5/31/2012 5:37:26.682 -0400",
              "to": "5/31/2013 5:37:26.682 -0400",
              "dateFormat": "MM/dd/yyyy HH:mm:ss.SSS Z"
            }
          }
        },
        {
          "or": [
            {
              "phrase": {
                "logsource": "SystemOut"
              }
            }
          ]
        },
        {
          "range": {
            "_writetime": {
              "dateFormat": "yyyy-MM-dd'T'HH:mm:ss.SSSZ",
              "from": "2013-05-25T00:00:00.000-0400",
              "to": "2013-05-31T05:48:18.415-0400"
            }
          }
        }
      ]
    },
    "query": "severity:(W OR E)",
    "sortKey": [
      "-timestamp"
    ],
    "getAttributes": [
      "timestamp",
      "severity"
    ],
    "facets": {
      "dateFacet": {
        "date_histogram": {
          "field": "timestamp",
          "interval": "hour",
          "outputDateFormat": "MM-dd HH:mm",
          "nested_facet": {
            "severityFacet": {
              "terms": {
                "field": "severity",
                "size": 10
              }
            }
          },
          "__usr_fast_date_query": "UTC"
        }
      }
    },
    "logsources": [
      {
        "type": "logSource",
        "name": "/SystemOut"
      }
    ],
    "collections": ["WASSystemOut-Collection1"]
  }
}
If an error occurs, return an error message in the following format:
{
"error_msg": "<error_message_string>"
}
Chart data is returned in the following format. Each element in the data array contains the rows and
fields for one chart. If a search fails, an error_msg element is returned for that chart instead:
{
"data": [{
"rows": [{
"userId": "116",
"count": 9
}],
"fields": [{
"label": "userId",
"type": "TEXT",
"id": "userId"
},
{
"label": "count",
"type": "LONG",
"id": "count"
}],
"id": "DynamicDashboardSearch_0_1386061434000"
},
{
"error_msg": "CTGLA2107E : Custom applications need to be configured for
every use case with details of data sources. No valid data sources were specified
in the search request. Verify that the chosen data sources exist and that any
specified tags contain data source descendants. For further information on how to
configure the custom applications, refer to the documentation.",
"id": "DynamicDashboardSearch_1_1386061434000"
}]
}
Example Script
The full script example shown here contains all the required elements.
In this example script, the value for some elements are defined as variables inside the script. For
example, the elements of the search request such as the logsource and query are defined as variables.
Custom scripts should return date field data only in the supported formats for charts to render correctly.
Supported formats include yyyy-MM-ddTHH:mm:ssZ and MM/dd/yy HH:mm:ss:SSS Z.
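For example, a script might format a timestamp like this (a sketch; the fixed -0400 offset is a
placeholder for your server's UTC offset):

from datetime import datetime

# Emit a timestamp as MM/dd/yy HH:mm:ss:SSS Z, one of the supported formats.
now = datetime.now()
print(now.strftime('%m/%d/%y %H:%M:%S') + ':%03d -0400' % (now.microsecond // 1000))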
try:
    import json
except ImportError:
    import simplejson as json
import sys
import re
from datetime import datetime, date, time, timedelta
import UnityAppMod
import CommonAppMod  # provides dateSort(), which is used below

# Parameter defaults
start = 0
results = 1

##########################################################
def getSearchData(query, facet, sortKey, attributes):
    # ... build the request body from the query, facet, sortKey, and
    # attributes arguments (elided in this excerpt) ...
    if logsource:
        body = body + ',' + logsource
    # ... post the request and parse the JSON response into data ...
    return data

##########################################################
def getErrorsWarningsVsTime(chartdata):
    # facet, sortKey, attributes, facetFields, and facetRows are defined
    # in the elided portion of the script.
    # Define the query for the search
    query = '"query":"severity:(W OR E)"'
    # do the query
    data = getSearchData(query, facet, sortKey, attributes)
    if 'facetResults' in data:
        facetResults = data['facetResults']
        if 'dateFacet' in facetResults:
            # get the facetRows
            dateFacet = facetResults['dateFacet']
            CommonAppMod.dateSort(dateFacet)
            for dateRow in dateFacet:
                for severityRow in dateRow['nested_facet']['severityFacet']['counts']:
                    facetRows.append({"date": dateRow['low'],
                                      "severity": severityRow['term'],
                                      "count": severityRow['count']})
            #print facetRows
            chartdata.append({'id': 'ErrorsWarningsVsTime',
                              'fields': facetFields,
                              'rows': facetRows})
    return chartdata

##########################################################
getErrorsWarningsVsTime(chartdata)
unity_connection.logout()

#------------------------------------------------------
# Build the final output data JSON
#------------------------------------------------------
print json.dumps({'data': chartdata})
Application files
The application file JSON can be created by implementing the structure provided in the sample JSON
outlined in this topic.
If you are using the Insight Pack Tooling, the application file JSON can be created in the Insight Pack
project in the subfolder src-files/unity_apps/apps.
Charts
The chart types specified here are supported by IBM Operations Analytics Log Analysis. The chart
specifications are contained in the <HOME>/AppFramework/chartspecs directory.
Displaying a chart: To display a chart, you execute your Custom Search Dashboard from the Search
workspace. If you close a chart portlet, you must run the Custom Search Dashboard again to reopen the
chart.
These parameters are defined for all charts:
type
Specify the type of chart required. The value must match the ID of a chart specification that is
contained in the <HOME>/AppFramework/chartspecs directory. The supported chart types are
outlined in this section.
title
Specify the chart title.
data
Specify the ID for the data element that is represented in the chart. The ID specified must match
the ID provided in the dashboard specifications.
parameters
Specify the fields to be displayed in the chart.
Line chart
The line chart is defined with these limitations:
• Chart name: Line Chart
• Parameters: xaxis and yaxis
• Chart Specification:
{
"type": "Line Chart",
"title": "Line Chart ( 2 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "throughput"
}
}
{
"type": "Line Chart",
"title": "Line Chart ( 2 parameters )",
"data": {
"$ref": "searchResults01",
"summarizeData": {
"column": "throughput",
"function": "sum"
}
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "throughput"
}
}
The summarizeData key determines whether aggregation is performed. column is the name of the
numeric column, of LONG or DOUBLE data type, on which the aggregation is performed. sum is the
aggregation function to be applied. Supported functions are sum, min, and max.
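As an illustration of the sum function, the following Python sketch (illustrative only; the actual
aggregation is performed by the charting framework) collapses duplicate x-axis values the way a
"function": "sum" specification would:
# Rows as they might arrive from a search, before aggregation.
rows = [
    {"timeStamp": "10-21 13:00", "throughput": 10},
    {"timeStamp": "10-21 13:00", "throughput": 15},
    {"timeStamp": "10-21 14:00", "throughput": 7},
]

# Sum the throughput column per x-axis value.
summed = {}
for row in rows:
    key = row["timeStamp"]
    summed[key] = summed.get(key, 0) + row["throughput"]

print(summed)  # {'10-21 13:00': 25, '10-21 14:00': 7}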
Bar chart
The bar chart is defined with these limitations:
• Chart name: Bar Chart
• Parameters: xaxis, yaxis, and categories
• Limitations: Only integer values are supported for the yaxis parameter.
• Chart Specification:
{
"type": "Bar Chart",
"title": "Bar Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "CPU",
"categories": "hostname"
}
}
Point chart
The point chart is defined with these limitations:
• Chart name: Point Chart
• Parameters: xaxis and yaxis
• Chart Specification:
{
"type": "Point Chart",
"title": "Point Chart ( 2 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "CPU"
}
}
Pie chart
The pie chart is defined with these limitations:
• Chart name: Pie Chart
• Parameters: xaxis and yaxis
• Chart Specification:
{
"type": "Pie Chart",
"title": "Pie Chart ( 2 parameters )",
"data": {
"$ref": "searchResults03"
},
"parameters": {
"xaxis": "count",
"yaxis": "severity"
}
}
Cluster bar chart
The cluster bar chart is defined with these limitations:
• Chart name: Cluster Bar
• Parameters: xaxis, yaxis, and sub-xaxis
• Chart Specification:
{
"type": "Cluster Bar",
"title": "Cluster Bar ( 3 parameters )",
"data": {
"$ref": "searchResults02"
},
"parameters": {
"xaxis": "hostname",
"yaxis": "errorCount",
"sub-xaxis": "msgClassifier"
}
}
Bubble chart
The bubble chart is defined with these limitations:
• Chart name: Bubble Chart
• Parameters: xaxis, yaxis, and categories
• Chart Specification:
{
"type": "Bubble Chart",
"title": "Bubble Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis": "CPU",
"categories": "errorCode"
The size of the bubble on the graph depends on the number of items in the parameter that is being
represented. In some cases, for example if you have a large bubble and a small bubble, the large bubble
may cover the smaller one.
Tree map
The tree map chart is defined with these limitations:
• Chart name: Tree Map
• Parameters: level1, level2, level3, and value
• Chart Specification:
{
"type": "Tree Map",
"title": "Tree Map ( 4 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"level1": "hostname",
"level2": "errorCode",
"level3": "severity",
"value":"CPU"
}
}
Two series line chart
The two series line chart is defined with these limitations:
• Chart name: Two Series Line Chart
• Parameters: xaxis, yaxis1, and yaxis2
• Chart Specification:
{
"type": "Two Series Line Chart",
"title": "Two Series Line Chart ( 3 parameters)",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "timeStamp",
"yaxis1": "throughput",
"yaxis2": "ResponseTime"
}
}
Stacked bar chart
The stacked bar chart is defined with these limitations:
• Chart name: Stacked Bar Chart
• Parameters: xaxis, yaxis, and categories
• Chart Specification:
{
"type": "Stacked Bar Chart",
"title": "Stacked Bar Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "hostname",
"yaxis": "CPU",
{
"type": "Stacked Line Chart",
"title": "Stacked Line Chart ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "threadID",
"yaxis": "timestamp",
"categories": "MBO"
}
}
Heat map
The heat map chart is defined with these limitations:
• Chart name: Heat map
• Parameters: xaxis, yaxis, and count
• Chart Specification:
{
"type": "Heat Map",
"title": "Heat Map ( 3 parameters )",
"data": {
"$ref": "searchResults01"
},
"parameters": {
"xaxis": "messageClassifier",
"yaxis": "hostname",
"count": "throughput"
}
}
{
"name": "WAS Errors and Warnings Dashboard",
"description": "Display a dashboard of charts that show WAS errors and warnings",
"customLogic": {
"script": "3Params_was_systemout.py",
"description": "View charts based on search results",
"parameters": [
{
"name": "search",
"type": "SearchQuery",
"value": {
},
{
"type": "Two Series Line Chart",
"title": "Error and Warning Total by Hostname
- Top 5 over Last Day",
"data": {
"$ref": "ErrorsWarningsVsTime"
},
"parameters": {
"xaxis": "date",
"yaxis1": "count",
"yaxis2": "severity"
}
},
{
"type": "Stacked Bar Chart",
"title": "Java Exception by Hostname
- Top 5 over Last Day",
"data": {
"$ref": "ExceptionByHost"
},
"parameters": {
"xaxis": "date",
"yaxis": "count",
"categories": "hostname"
}
}
]
}
}
}
This sample shows a chart created from the example script and application files.
_rows
Displays the entire row data for a cell that you select in Grid view. To display the data, select
a cell in Grid view and then launch the application. The type field must contain the value
additionalParameterFromUI.
The application does not necessarily require all three parameters. Depending on your requirements, you
can create an application to display the search query JSON or the contents of a column, cell, or row.
The following example is an application that, when executed, displays both the search query and any
selected column or cell:
{
"name": "Sample HTML App",
"description": "Sample App to demonstrate use of _query and _data params",
"customLogic": {
"script": "SampleHTMLApp.sh",
"description": "Read data from a file and return",
"parameters": [
{
"name": "_query",
"type": "additionalParameterFromUI",
"value": []
},
{
"name": "_data",
"type": "additionalParameterFromUI",
"value": []
},
{
"name": "_rows",
"type": "additionalParameterFromUI",
"value": []
}
],
"output": {
"type": "Data",
"visualization": {
"dashboard": {
"columns": 2,
"charts": [
{
"type": "html",
"title": "Sample HTML",
"data": {
"$ref": "htmlData"
},
"parameters": {
"html": "text"
}
}
]
}
}
}
}
}
https://<ip_address>:9987/Unity/SearchUI?queryString=
<q>&timefilters=<t>&dataSources=<ds>
where <ip_address> is the IP address of the server on which IBM Operations Analytics Log Analysis is
installed and the parameters that you specify are:
queryString
(Required) Replace the <q> parameter with a valid velocity query.
timefilters
(Optional) Specify a time range as an absolute or relative time range in JSON format. If this parameter
is not specified, the default option Last 15 minutes is applied. For an absolute value, specify JSON
using this example:
{ "type":"absolute",
"startTime":"24/06/2013 05:30:00",
"endTime":"25/06/2013 05:30:00"
}
The time zone for the startTime and endTime values is the user's time zone: either the browser
time zone or the time zone that is set as the default for all sessions is used for querying.
For a relative time range, specify JSON using this example:
{ "type":"relative",
"lastnum":"7",
"granularity":"Day"
}
dataSources
(Optional) For this parameter, you specify a Data Source or a group of Data Sources in a JSON array
format. Each element in the array can be of type group or datasource. Specify group if you want to
specify a group of Data Sources. If a value is not specified for this parameter, all of the available Data
Sources are selected. The JSON format is indicated in this example:
[
{ "type":"datasource", "name":"<datasource_name>" },
{ "type":"group", "name":"<group_name>" }, …
]
This example URL launches IBM Operations Analytics Log Analysis with the required parameters to
search for critical IBM Tivoli Netcool®/OMNIbus events in IBM Operations Analytics Log Analysis:
https://9.118.41.69:9987/Unity/SearchUI?queryString=*&timefilters=
{"type":"relative","lastnum":1,"granularity":"year"}&dataSources=
[{"type":"datasource","name":"/Omnibus-events"}]
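A script can assemble these parameters with the json module rather than writing them by hand. The
following minimal sketch reuses the host and data source name from the example above:
import json

# Relative time filter and data source list from the example URL.
# Compact separators avoid spaces, which would need escaping in a URL.
timefilters = {"type": "relative", "lastnum": 1, "granularity": "year"}
datasources = [{"type": "datasource", "name": "/Omnibus-events"}]

url = ("https://9.118.41.69:9987/Unity/SearchUI?queryString=*"
       + "&timefilters=" + json.dumps(timefilters, separators=(',', ':'))
       + "&dataSources=" + json.dumps(datasources, separators=(',', ':')))
print(url)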
https://<ip_address>:9987/Unity/CustomAppsUI?name=
<name>&appParameters=<params>
where <ip_address> is the IP address of the server on which IBM Operations Analytics Log Analysis is
installed and the parameters that you specify are:
name
(Required) Replace the <name> parameter with a valid Custom Search Dashboard name.
appParameters
If the Custom Search Dashboard that you want to launch requires that parameters are specified,
specify the required parameters in a JSON array in {key:value} format.
This example URL launches the IBM Operations Analytics Log Analysis Day Trader Custom Search
Dashboard:
https://9.120.98.21:9987/Unity/CustomAppsUI?name=
Day%20Trader%20App&appParameters=[]
Note: Any spaces that are required in your URL must be escaped. If you want to include a % character in
your URL, it must be added as %25.
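The Python standard library can apply this escaping when a launch URL is built in a script. This sketch
takes the dashboard name from the example above (Python 2, matching the scripts in this guide; in
Python 3, use urllib.parse.quote):
import urllib

name = "Day Trader App"
# urllib.quote converts the spaces in the name to %20.
url = ("https://9.120.98.21:9987/Unity/CustomAppsUI?name="
       + urllib.quote(name) + "&appParameters=[]")
print(url)
# https://9.120.98.21:9987/Unity/CustomAppsUI?name=Day%20Trader%20App&appParameters=[]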
Related concepts
Using a Custom Search Dashboard to display a search query or selected data
You can create a Custom Search Dashboard that displays, in a new tab, the full syntax for the current
search and also displays the contents of a selected column or individual cell.
Templates
A template file defines a set of custom scripts. For each custom script, it specifies the parameters with
the type, name, and the default value.
If you are using the Insight Pack Tooling, the template file JSON can be created in an Insight Pack project
in the subfolder src-files/unity_apps/templates. Any script files referenced by the template must
also reside in this folder.
In addition to the fields included for applications files, these additional parameters are included in the
template:
template
This is a boolean value that specifies that the file is a template. Specify true as the value for this
parameter.
parameter
Although this parameter is specified in the application file, additional values are required in a template
file for this parameter.
required
If this is set to true, the parameter is required. Custom Search Dashboards that use the template
must include this parameter.
default
Specify the default parameter value. For parameters that are not required, this value is used
where the application does not specify a value.
Example
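The shipped template files are not reproduced in this section. As an illustration only, the following
sketch builds a hypothetical template as a Python dict and prints it as JSON, showing how the
template, required, and default fields described above fit together:
import json

# Hypothetical template fragment (names and values are illustrative).
template = {
    "name": "Sample Template",
    "template": True,                  # marks this file as a template
    "customLogic": {
        "script": "sample_script.py",  # hypothetical script name
        "parameters": [
            # Required: dashboards that use the template must supply it.
            {"name": "search", "type": "SearchQuery", "required": True},
            # Optional: the default applies when the application omits a value.
            {"name": "interval", "type": "String",
             "required": False, "default": "Day"}
        ]
    }
}
print(json.dumps(template, indent=2))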
Procedure
• Any Custom Search Dashboard definitions, scripts, and template files found in the src-files/
unity_apps folder are included in the archive file.
Performance_Msgs.py script
This example shows the script Performance_Msgs.py, which collects data on performance messages.
For details, read the coding notes within the script.
# Installing simplejson
# Download the rpm from
# http://pkgs.org/centos-5-rhel-5/centos-rhel-x86_64/python-simplejson-2.0.9-8.el5.x86_64.rpm/download/
# Run the command: rpm -i python-simplejson-2.0.9-8.el5.x86_64.rpm
#------------------------------------------------------
# getSearchData()
#------------------------------------------------------
def getSearchData(logsource, filter):
    return data
#------------------------------------------------------
# dateSort()
#------------------------------------------------------
def dateSort(dateFacet):
    # This function parses the UTC label found in the dateFacet in the
    # format "mm-hh-DDD-yyyy UTC"
    # and returns an array in the form [yyyy, DDD, hh, mm]
    def parseDate(dateLabel):
        aDate = map(int, dateLabel.split(" ")[0].split("-"))
        aDate.reverse()
        return aDate
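    # (Sketch, not the shipped code: the rest of dateSort could order the
    # facet rows by their parsed [yyyy, DDD, hh, mm] date arrays;
    # the 'label' key name is a hypothetical assumption.)
    dateFacet.sort(key=lambda row: parseDate(row['label']))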
#--------------------------------------------------------------------------
# Main script starts here
#--------------------------------------------------------------------------
# initialize variables
filter = {}
logsource = {}
chartdata = []
parameters = data['parameters']
for i in parameters:
    if i['name'] == 'search':
        search = i['value']
        for key in search.keys():
            if key == 'filter':
                filter = search['filter']
            elif key == 'logsources':
                logsource = search['logsources']
#------------------------------------------------------
# get the data to be returned
#------------------------------------------------------
rows = []
# call getSearchData() to post the search request and retrieve the data
data = getSearchData(logsource, filter)
if 'facetResults' in data:
    facetResults = data['facetResults']
    if 'dateFacet' in facetResults:
        # get the dateFacet rows
        dateFacet = facetResults['dateFacet']
#------------------------------------------------------
# Create the HTML data to be returned
#------------------------------------------------------
html = "<!DOCTYPE html><html><body><h1>Hello World!</h1></body></html>"</p><p>
chartdata.append({"id": "htmlData", "htmltext": html })
#------------------------------------------------------
# Build the final output data JSON
#------------------------------------------------------
# build the JSON structure containing the chart data which will be the
# output of this script
appData = {'data':chartdata}
Performance_Msgs.app file
This sample shows the Performance_Msgs.app application file that references the
Performance_Msgs.py script and specifies the chart to display for the Custom Search Dashboard.
{
"name": "Performance Messages",
"description": "Displays charts showing performance messages over time",
"customLogic": {
"script": "Performance_Msgs.py",
"description": "View chart on search results",
"parameters": [
{
"name": "search",
"type": "SearchQuery",
"value": {
"filter": {
"range": {
"timestamp":{
"from":"01/01/2013 00:00:00.000 EST",
"to":"01/01/2014 00:00:00.000 EST",
]
}
}
}
}
}
Performance_Msgs chart
The chart shows the counts of each message for each day.
Supported databases
The following databases are supported by default:
• Derby 10.10
• DB2 9.7
To add databases that are not supported by default, you must locate the JAR file that contains the type
4 JDBC driver for the database and change the location of the drivers in the classpath section of the
searchFilter.sh file.
1. Download the appropriate type 4 JDBC driver file for your database.
2. Copy the driver file to the <HOME>/IBM/LogAnalysis/AppFramework/Apps/<directory>
directory where <directory> is the directory that contains your search filter application.
3. Add the location of the new driver to the classpath parameter in the searchFilter.sh file.
At time of publication, Oracle Database 11g Release 11.1.0.0.0 is the only additional database tested.
Database Connections
Create a database connection to help your search filter application uniquely identify sources of
structured data.
You can associate semi-structured data, such as a log file, with structured data, such as the transaction
status in the application database. To make this association, you must define a Database Connection,
which specifies the details required by IBM Operations Analytics Log Analysis to access the database.
After you have defined a Database Connection, you can use it when you configure your search filter
application.
Procedure
To add a Database Connection:
Procedure
To edit a Database Connection:
1. In the Data Sources workspace, expand the Database Connections list.
2. Select the Database Connection that you want to edit and click Edit. The Database Connection is
opened in a new tab.
3. Edit the data source as required.
4. To save the changes, click OK.
Procedure
To delete a Database Connection:
1. In the Data Sources workspace, expand the Database Connections list.
2. Select the Database Connection that you want to delete and click Delete.
3. Click OK.
Procedure
1. SearchFilter definition: Use this app definition to define your own search filter:
"value": {
"dataSource": {
"schema": "ITMUSER",
"jdbcdriver": "com.ibm.db2.jcc.DB2Driver",
"jdbcurl": "jdbc:db2://9.12.34.56:50000/WALE",
"datasource_pk": 1,
"username": "itmuser",
"password": "tbsm4120",
"name": "datasource"
},
Results
After you run the app, the customized search is displayed in the configured pattern section of the user
interface and the data source and time filters are set. The keywords, if any, and the count information are
also displayed.
Note:
1. If the app output contains data sources, the values that match the search criteria are returned on the
user interface. If no data sources are returned, the existing selections are retained and used.
2. If the app output contains time filters, the values that match the search criteria are returned on the UI.
If no time filters are returned, the existing selections are retained and used.
3. If the app output contains keywords, IBM Operations Analytics searches the data sources and time
filters that are returned and displays the keywords and the search hit count in the configured patterns
widget UI.
Example
The following code example demonstrates the logic that is used for the search filter app:
{
"name": "Sample Search filter app",
"description": "App for getting search filter",
"customLogic": {
Note: The output > visualization subparameter must be set to searchFilters.
The search filter app uses relationalQuery as the input parameter. The relational query is a JSON
object that consists of the following JSON keys:
• dataSource is the value of the data source key that defines database connection attributes such as
JDBC connection details, schema name, and user credentials.
• SQL is the value of the SQL key that defines the query that you want to run against the database.
Note: If you want to run the search filter against a data source that is registered with IBM Operations
Analytics, you can provide the dataSource name instead of the dataSource details. All details of the
corresponding dataSource are fetched from IBM Operations Analytics. The dataSource is created with
the Database Connections option in the Data Sources workspace.
{
"name": "relationalQuery",
"type": "relationalQuery",
"value": {
"dataSource": "myDataSourceName",
"SQL": "select * from MYDB.MYTABLE"
}
}
where myDataSourceName is the name of a data source that is defined in IBM Operations Analytics Log
Analysis.
Data elements
There are three parameters required for data elements:
id
Specify an identifier for the data element. The Custom Search Dashboard uses this value to determine
the data that is displayed for a particular chart.
fields
Specify an array containing field descriptions. Each field that you specify in the array has an ID, a
label, and a type.
rows
Use the rows array to specify a value for each of the fields that you have specified.
Example
This is a sample JSON output that contains both types of output: chart data and HTML output:
{
"data":[
{
"id":"ErrorsWarningsVsTime",
"fields":[
{
"id":"timestamp",
"label":"Date",
"type":"TEXT"
},
{
"id":"severity",
"label":"severity",
"type":"TEXT"
},
{
"id":"count",
"label":"count",
"type":"LONG"
}
],
"rows":[
{
"timestamp":"10-21 13:00",
"severity":"W",
"count": 221
},
{
"timestamp":"10-21 13:00",
"severity":"E",
"count": 204
}
]
},
{
"id":"htmlData",
"htmltext":",<!DOCTYPE html><html><body><div><h1>Sample HTML</h1>
</div></body></html>"
}
]
}
In the example script, this data element is appended to the output chart data as follows:
chartdata.append({
'id':'ErrorsWarningsVsTime',
'fields':facetFields,
'rows':facetRows})
In the application file chart specification, the id ErrorsWarningsVsTime is referenced to specify the
data set used in the chart.
{
"type": "Tree Map",
"title": "Messages by Hostname - Top 5 over Last Day",
"data": {
"$ref": "ErrorsWarningsVsTime"
}
}
{
"id": "htmlData",
"htmltext": "<!DOCTYPE html><html><body><br><div>
<a href=http://www.espncricinfo.com//>Normal URL - - ESPN Cricinfo</a></div>
<br><div><a href=https://192.168.56.101:9987/Unity/CustomAppsUI?name=All%20Supported%20Visualizations&appParameters=[]>
Custom App LIC - - All Supported Visualizations</a></div>
<br><div><a href=https://192.168.56.101:9987/Unity/SearchUI?queryString=diagnosticLevel:==Warning&timefilters={\"type\":\"relative\",\"lastnum\":\"1\",\"granularity\":\"year\"}&dataSources=[{\"type\":\"datasource\",\"name\":\"/DB2Diag1\"}]>
Search UI LIC - - DB Log Events with Diagnostic Level - Warning</a></div>
<br><div><a href=https://192.168.56.101:9987/Unity/SearchUI?queryString=diagnosticLevel:==Severe&timefilters={\"type\":\"relative\",\"lastnum\":\"1\",\"granularity\":\"year\"}&dataSources=[{\"type\":\"datasource\",\"name\":\"/DB2Diag1\"}]>
Search UI LIC - - DB Log Events with Diagnostic Level - Severe</a></div></body></html>"
}
Note: Any spaces that are required in your URL must be escaped. If you want to include a % character in
your URL, it must be added as %25.
{
"name": "Sample Search filter app",
"type":"SearchFiltersTemplate",
"description": "App for getting search filter",
"customLogic": {
"script": "searchFilter.sh",
"description": "App for getting search filter",
"parameters": [
{
"name": "relationalQuery",
"type": "relationalQuery",
"value": {
"dataSource": {
"schema": "ITMUSER",
"jdbcdriver": "com.ibm.db2.jcc.DB2Driver",
"jdbcurl": "jdbc:db2://9.12.34.56u:50000/WALE",
"datasource_pk": 1,
"username": "itmuser",
"password": "tbsm4120",
"name": "datasource"
},
"SQL": "select * from ITMUSER.test_OLSC"
}
}
],
"output": {
"type": "searchFilters",
"visualization": {
"searchFilters": {}
If you use this template, the application directory for this application requires the application file only. You
do not need to copy the .sh and JAR script files into the application folder.
Procedure
1. Create a connection to the datasource or database that you specified in the input file using the Data
Sources workspace.
2. Run the SQL query that is part of the input file for the app.
3. Tokenize the output of the SQL query and remove common words such as verbs.
4. Pick out the timestamp column and use the minimum timestamp as startTime and the maximum
timestamp as endTime.
5. If you already know the data source against which you want to use this search filter, you can use the
logsource name in the output parameters. If you do not know this, use an asterisk (*).
6. Construct the output in JSON format, for example (a Python sketch follows the note below):
{
"keywords": [
"keyword1",
"keyword2",
],
"timefilters": {
"type": "absolute",
"startTime":"2012-06-24 05:30:00",
"endTime":"2013-06-24 05:30:00",
"lastnum": 7,
"granularity": "day"
},
"logsources": [
{
"type": "tag",
"name": "*"
}
]
}
Note: The source code of the default implementation (in Java) of the search filter app is included
with IBM Operations Analytics Log Analysis in the <HOME>/AppFramework/Templates/
SearchFilters directory. You can update the default implementation based on your requirements.
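For illustration only, the following Python sketch (not the shipped Java implementation) shows how
steps 2 to 6 could be assembled; the function and the sample rows passed to it are hypothetical:
import json

def build_search_filter_output(rows):
    # rows: list of dicts returned by the SQL query in step 2;
    # each row is assumed to carry a 'timestamp' column (step 4).
    words = set()
    for row in rows:
        for value in row.values():
            words.update(str(value).split())   # step 3: naive tokenization
    times = [row['timestamp'] for row in rows]
    return {
        "keywords": sorted(words),
        "timefilters": {
            "type": "absolute",
            "startTime": min(times),           # step 4: earliest timestamp
            "endTime": max(times),             # step 4: latest timestamp
        },
        "logsources": [{"type": "tag", "name": "*"}],  # step 5: "*" if unknown
    }

print(json.dumps(build_search_filter_output(
    [{"timestamp": "2012-06-24 05:30:00", "status": "FAILED"},
     {"timestamp": "2013-06-24 05:30:00", "status": "OK"}])))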
Procedure
1. Open the CustomAppsConfigFile.json Custom Search Dashboard configuration file located in the
<HOME>/wlp/usr/servers/Unity/apps/Unity.war/configs directory.
2. Add a link, an icon image, and a tooltip for each Custom Search Dashboard for which you want to
create a shortcut. The syntax for the shortcut is:
[{
"url": "<url>",
"icon": "<icon>",
"tooltip": "<tooltip>"
}]
https://<ip_address>:9987/Unity/CustomAppsUI?name=
<name>&appParameters=<params>
where <ip_address> is the IP address of the server on which IBM Operations Analytics Log
Analysis is installed and the parameters that you specify are:
name
(Required) Replace the <name> parameter with a valid Custom Search Dashboard name.
appParameters
If the Custom Search Dashboard that you want to launch requires that parameters are
specified, specify the required parameters in a JSON array in {key:value} format.
icon
Specify the path to the graphic that you want to use for your icon. If no icon is specified, the default
Custom Search Dashboard icon is used.
tooltip
Specify the text for the tooltip displayed when the mouse is placed on the icon. If no tooltip is
specified, the name of the Custom Search Dashboard is used as the tooltip text.
This example provides a link to a Custom Search Dashboard named AnomalyApp with a path to a
shortcut icon located in the Unity/images directory and a tooltip text Detect Anomalies:
[{
"url": "https://192.168.56.101:9987/Unity/CustomAppsUI?name=Anomaly
App&appParameters=[]",
"icon": "https://192.168.56.101:9987/Unity/images/context.gif",
"tooltip": "Detect Anomalies"
}]
Note: Any spaces that are required in your URL must be escaped. If you want to include a % character
in your URL, it must be added as %25.
3. Save the file.
Results
Your shortcut is added to the Table view toolbar. Double-click the icon to launch your Custom Search
Dashboard.
This information was developed for products and services that are offered in the USA.
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently available in
your area. Any reference to an IBM product, program, or service is not intended to state or imply that
only that IBM product, program, or service may be used. Any functionally equivalent product, program, or
service that does not infringe any IBM intellectual property right may be used instead. However, it is the
user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not grant you any license to these patents. You can
send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at
www.ibm.com/legal/copytrade.shtml.
Applicability
These terms and conditions are in addition to any terms of use for the IBM website.
Commercial use
You may reproduce, distribute and display these publications solely within your enterprise provided
that all proprietary notices are preserved. You may not make derivative works of these publications, or
reproduce, distribute or display these publications or any portion thereof outside your enterprise, without
the express consent of IBM.
Rights
Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either
express or implied, to the publications or any information, data, software or other intellectual property
contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use
of the publications is detrimental to its interest or, as determined by IBM, the above instructions are not
being properly followed.
You may not download, export or re-export this information except in full compliance with all applicable
laws and regulations, including all United States export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS
ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT,
AND FITNESS FOR A PARTICULAR PURPOSE.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
“Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and service names might be trademarks of IBM or other companies.
Product Number: