Informatica PowerCenter
Workflow Basics Guide
Version 10.1.1
December 2016
© Copyright Informatica LLC 2001, 2016
This software and documentation are provided only under a separate license agreement containing restrictions on use and disclosure. No part of this document may be
reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC.
Informatica, the Informatica logo, PowerCenter, and PowerExchange are trademarks or registered trademarks of Informatica LLC in the United States and many
jurisdictions throughout the world. A current list of Informatica trademarks is available on the web at https://www.informatica.com/trademarks.html. Other company and
product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights
reserved. Copyright © Sun Microsystems. All rights reserved. Copyright © RSA Security Inc. All Rights Reserved. Copyright © Ordinal Technology Corp. All rights
reserved. Copyright © Aandacht c.v. All rights reserved. Copyright Genivia, Inc. All rights reserved. Copyright Isomorphic Software. All rights reserved. Copyright © Meta
Integration Technology, Inc. All rights reserved. Copyright © Intalio. All rights reserved. Copyright © Oracle. All rights reserved. Copyright © Adobe Systems
Incorporated. All rights reserved. Copyright © DataArt, Inc. All rights reserved. Copyright © ComponentSource. All rights reserved. Copyright © Microsoft Corporation. All
rights reserved. Copyright © Rogue Wave Software, Inc. All rights reserved. Copyright © Teradata Corporation. All rights reserved. Copyright © Yahoo! Inc. All rights
reserved. Copyright © Glyph & Cog, LLC. All rights reserved. Copyright © Thinkmap, Inc. All rights reserved. Copyright © Clearpace Software Limited. All rights
reserved. Copyright © Information Builders, Inc. All rights reserved. Copyright © OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved.
Copyright Cleo Communications, Inc. All rights reserved. Copyright © International Organization for Standardization 1986. All rights reserved. Copyright © ej-
technologies GmbH. All rights reserved. Copyright © Jaspersoft Corporation. All rights reserved. Copyright © International Business Machines Corporation. All rights
reserved. Copyright © yWorks GmbH. All rights reserved. Copyright © Lucent Technologies. All rights reserved. Copyright © University of Toronto. All rights reserved.
Copyright © Daniel Veillard. All rights reserved. Copyright © Unicode, Inc. Copyright IBM Corp. All rights reserved. Copyright © MicroQuill Software Publishing, Inc. All
rights reserved. Copyright © PassMark Software Pty Ltd. All rights reserved. Copyright © LogiXML, Inc. All rights reserved. Copyright © 2003-2010 Lorenzi Davide, All
rights reserved. Copyright © Red Hat, Inc. All rights reserved. Copyright © The Board of Trustees of the Leland Stanford Junior University. All rights reserved. Copyright
© EMC Corporation. All rights reserved. Copyright © Flexera Software. All rights reserved. Copyright © Jinfonet Software. All rights reserved. Copyright © Apple Inc. All
rights reserved. Copyright © Telerik Inc. All rights reserved. Copyright © BEA Systems. All rights reserved. Copyright © PDFlib GmbH. All rights reserved. Copyright ©
Orientation in Objects GmbH. All rights reserved. Copyright © Tanuki Software, Ltd. All rights reserved. Copyright © Ricebridge. All rights reserved. Copyright © Sencha,
Inc. All rights reserved. Copyright © Scalable Systems, Inc. All rights reserved. Copyright © jQWidgets. All rights reserved. Copyright © Tableau Software, Inc. All rights
reserved. Copyright © MaxMind, Inc. All Rights Reserved. Copyright © TMate Software s.r.o. All rights reserved. Copyright © MapR Technologies Inc. All rights reserved.
Copyright © Amazon Corporate LLC. All rights reserved. Copyright © Highsoft. All rights reserved. Copyright © Python Software Foundation. All rights reserved.
Copyright © BeOpen.com. All rights reserved. Copyright © CNRI. All rights reserved.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/), and/or other software which is licensed under various versions
of the Apache License (the "License"). You may obtain a copy of these Licenses at http://www.apache.org/licenses/. Unless required by applicable law or agreed to in
writing, software distributed under these Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.
This product includes software which was developed by Mozilla (http://www.mozilla.org/), software copyright The JBoss Group, LLC, all rights reserved; software
copyright © 1999-2006 by Bruno Lowagie and Paulo Soares and other software which is licensed under various versions of the GNU Lesser General Public License
Agreement, which may be found at http://www.gnu.org/licenses/lgpl.html. The materials are provided free of charge by Informatica, "as-is", without warranty of any
kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.
The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California,
Irvine, and Vanderbilt University, Copyright (©) 1993-2006, all rights reserved.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved) and
redistribution of this software is subject to terms available at http://www.openssl.org and http://www.openssl.org/source/license.html.
This product includes Curl software which is Copyright 1996-2013, Daniel Stenberg, <daniel@haxx.se>. All Rights Reserved. Permissions and limitations regarding this
software are subject to terms available at http://curl.haxx.se/docs/copyright.html. Permission to use, copy, modify, and distribute this software for any purpose with or
without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
The product includes software copyright 2001-2005 (©) MetaStuff, Ltd. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at http://www.dom4j.org/license.html.
This product includes software copyright © 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at
http://www.gnu.org/software/kawa/Software-License.html.
This product includes OSSP UUID software which is Copyright © 2002 Ralf S. Engelschall, Copyright © 2002 The OSSP Project Copyright © 2002 Cable & Wireless
Deutschland. Permissions and limitations regarding this software are subject to terms available at http://www.opensource.org/licenses/mit-license.php.
This product includes software developed by Boost (http://www.boost.org/) or under the Boost software license. Permissions and limitations regarding this software are
subject to terms available at http://www.boost.org/LICENSE_1_0.txt.
This product includes software copyright © 1997-2007 University of Cambridge. Permissions and limitations regarding this software are subject to terms available at
http://www.pcre.org/license.txt.
This product includes software copyright © 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at http://www.eclipse.org/org/documents/epl-v10.php and at http://www.eclipse.org/org/documents/edl-v10.php.
This product includes software licensed under the terms at http://www.tcl.tk/software/tcltk/license.html, http://www.bosrup.com/web/overlib/?License, http://www.stlport.org/doc/license.html, http://asm.ow2.org/license.html, http://www.cryptix.org/LICENSE.TXT, http://hsqldb.org/web/hsqlLicense.html, http://httpunit.sourceforge.net/doc/license.html, http://jung.sourceforge.net/license.txt, http://www.gzip.org/zlib/zlib_license.html, http://www.openldap.org/software/release/license.html, http://www.libssh2.org, http://slf4j.org/license.html, http://www.sente.ch/software/OpenSourceLicense.html, http://fusesource.com/downloads/license-agreements/fuse-message-broker-v-5-3-license-agreement; http://antlr.org/license.html; http://aopalliance.sourceforge.net/; http://www.bouncycastle.org/licence.html; http://www.jgraph.com/jgraphdownload.html; http://www.jcraft.com/jsch/LICENSE.txt; http://jotm.objectweb.org/bsd_license.html; http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231; http://www.slf4j.org/license.html; http://nanoxml.sourceforge.net/orig/copyright.html; http://www.json.org/license.html; http://forge.ow2.org/projects/javaservice/, http://www.postgresql.org/about/licence.html, http://www.sqlite.org/copyright.html, http://www.tcl.tk/software/tcltk/license.html, http://www.jaxen.org/faq.html, http://www.jdom.org/docs/faq.html, http://www.slf4j.org/license.html; http://www.iodbc.org/dataspace/iodbc/wiki/iODBC/License; http://www.keplerproject.org/md5/license.html; http://www.toedter.com/en/jcalendar/license.html; http://www.edankert.com/bounce/index.html; http://www.net-snmp.org/about/license.html; http://www.openmdx.org/#FAQ; http://www.php.net/license/3_01.txt; http://srp.stanford.edu/license.txt; http://www.schneier.com/blowfish.html; http://www.jmock.org/license.html; http://xsom.java.net; http://benalman.com/about/license/; https://github.com/CreateJS/EaselJS/blob/master/src/easeljs/display/Bitmap.js; http://www.h2database.com/html/license.html#summary; http://jsoncpp.sourceforge.net/LICENSE; http://jdbc.postgresql.org/license.html; http://protobuf.googlecode.com/svn/trunk/src/google/protobuf/descriptor.proto; https://github.com/rantav/hector/blob/master/LICENSE; http://web.mit.edu/Kerberos/krb5-current/doc/mitK5license.html; http://jibx.sourceforge.net/jibx-license.html; https://github.com/lyokato/libgeohash/blob/master/LICENSE; https://github.com/hjiang/jsonxx/blob/master/LICENSE; https://code.google.com/p/lz4/; https://github.com/jedisct1/libsodium/blob/master/LICENSE; http://one-jar.sourceforge.net/index.php?page=documents&file=license; https://github.com/EsotericSoftware/kryo/blob/master/license.txt; http://www.scala-lang.org/license.html; https://github.com/tinkerpop/blueprints/blob/master/LICENSE.txt; http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html; https://aws.amazon.com/asl/; https://github.com/twbs/bootstrap/blob/master/LICENSE; https://sourceforge.net/p/xmlunit/code/HEAD/tree/trunk/LICENSE.txt; https://github.com/documentcloud/underscore-contrib/blob/master/LICENSE, and https://github.com/apache/hbase/blob/master/LICENSE.txt.
This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php), the Common Development and Distribution License (http://www.opensource.org/licenses/cddl1.php), the Common Public License (http://www.opensource.org/licenses/cpl1.0.php), the Sun Binary Code License Agreement Supplemental License Terms, the BSD License (http://www.opensource.org/licenses/bsd-license.php), the new BSD License (http://opensource.org/licenses/BSD-3-Clause), the MIT License (http://www.opensource.org/licenses/mit-license.php), the Artistic License (http://www.opensource.org/licenses/artistic-license-1.0) and the Initial Developer’s Public License Version 1.0 (http://www.firebirdsql.org/en/initial-developer-s-public-license-version-1-0/).
This product includes software copyright © 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this
software are subject to terms available at http://xstream.codehaus.org/license.html. This product includes software developed by the Indiana University Extreme! Lab.
For further information please visit http://www.extreme.indiana.edu/.
This product includes software Copyright (c) 2013 Frank Balluffi and Markus Moeller. All rights reserved. Permissions and limitations regarding this software are subject
to terms of the MIT license.
See patents at https://www.informatica.com/legal/patents.html.
DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied
warranties of noninfringement, merchantability, or use for a particular purpose. Informatica LLC does not warrant that this software or documentation is error free. The
information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation is
subject to change at any time without notice.
NOTICES
This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software
Corporation ("DataDirect") which are subject to the following terms and conditions:
1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT
INFORMED OF THE POSSIBILITIES OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT
LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.
The information in this documentation is subject to change without notice. If you find any problems in this documentation, please report them to us in writing at
Informatica LLC 2100 Seaport Blvd. Redwood City, CA 94063.
INFORMATICA LLC PROVIDES THE INFORMATION IN THIS DOCUMENT "AS IS" WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-
INFRINGEMENT.
Table of Contents
Copying Workflow Segments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Comparing Repository Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Comparing Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Metadata Extensions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Creating a Metadata Extension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Editing a Metadata Extension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Deleting a Metadata Extension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Expression Editor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Adding Comments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Validating Expressions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Expression Editor Display. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Keyboard Shortcuts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Deleting Links in a Workflow or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Chapter 3: Sessions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Sessions Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Session Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Creating a Session Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Editing a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Applying Attributes to All Instances. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Performance Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Configuring Performance Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Pre- and Post-Session Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Pre- and Post-Session SQL Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Using Pre- and Post-Session Shell Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Chapter 5: Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Tasks Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Creating a Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Creating a Task in the Task Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Creating a Task in the Workflow or Worklet Designer. . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Configuring Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Reusable Workflow Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
AND or OR Input Links. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Disabling Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Failing Parent Workflow or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Working with the Assignment Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Command Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Using Parameters and Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Assigning Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Creating a Command Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Executing Commands in the Command Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Log Files and Command Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Control Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Creating a Control Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Working with the Decision Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Working with the Event Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Example of User-Defined Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Event-Raise Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Event-Wait Tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Timer Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Creating a Timer Task. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Chapter 6: Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Sources Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Globalization Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Source Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Allocating Buffer Memory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Partitioning Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Configuring Sources in a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Configuring Readers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Configuring Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Configuring Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Working with Relational Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Selecting the Source Database Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Defining the Treat Source Rows As Property. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
SQL Query Override. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Configuring the Table Owner Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Overriding the Source Table Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Working with File Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Configuring Source Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Configuring Commands for File Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Configuring Fixed-Width File Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Configuring Delimited File Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Configuring Line Sequential Buffer Length. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Integration Service Handling for File Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Character Set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Multibyte Character Error Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Null Character Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Row Length Handling for Fixed-Width Flat Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Numeric Data Handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Working with XML Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Server Handling for XML Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Using a File List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Creating the File List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Configuring a Session to Use a File List. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Chapter 7: Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Targets Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Globalization Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Target Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Partitioning Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Targets in a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Writers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Configuring Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Performing a Test Load. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Configuring a Test Load. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Working with Relational Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Target Database Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Target Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Target Table Truncation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Truncating a Target Table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Deadlock Retry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Dropping and Recreating Indexes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Constraint-Based Loading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Bulk Loading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Table Name Prefix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Target Table Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Reserved Words. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Teradata Array Insert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Working with Target Connection Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Working with Active Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Working with File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Configuring Target Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Configuring Commands for File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Configuring Fixed-Width Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Configuring Delimited Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Integration Service Handling for File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Writing to Fixed-Width Flat Files with Relational Target Definitions. . . . . . . . . . . . . . . . . . 110
Writing to Fixed-Width Files with Flat File Target Definitions. . . . . . . . . . . . . . . . . . . . . . 110
Generating Flat File Targets By Transaction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Writing Empty Fields for Unconnected Ports in Fixed-Width File Definitions. . . . . . . . . . . . 112
Writing Multibyte Data to Fixed-Width Flat Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Null Characters in Fixed-Width Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Character Set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Writing Metadata to Flat File Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Working with XML Targets in a Session. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Integration Service Handling for XML Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Character Set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Special Characters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Null and Empty Strings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Handling Duplicate Group Rows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
DTD and Schema Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Flushing XML on Commits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
XML Caching Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Session Logs for XML Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Multiple XML Document Output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Working with Heterogeneous Targets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Reject Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Locating Reject Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Reading Reject Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
JNDI Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
JMS Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
PowerExchange for MSMQ Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
PowerExchange for Netezza Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
PowerExchange for PeopleSoft Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
PowerExchange for Salesforce Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
PowerExchange for SAP NetWeaver Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
SAP R/3 Application Connection for ABAP Integration. . . . . . . . . . . . . . . . . . . . . . . . . . 150
Application Connections for ALE Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Application Connection for BAPI/RFC Integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
PowerExchange for SAP NetWeaver BI Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
SAP BW OHS Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
SAP BW Application Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
PowerExchange for TIBCO Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Connection Properties for TIB/Rendezvous Application Connections. . . . . . . . . . . . . . . . . 154
Connection Properties for TIB/Adapter SDK Connections. . . . . . . . . . . . . . . . . . . . . . . . 156
PowerExchange for Web Services Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
PowerExchange for webMethods Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
webMethods Broker Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
webMethods Integration Server Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
PowerExchange for WebSphere MQ Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Testing a Queue Connection on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Testing a Queue Connection on UNIX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Connection Object Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Creating a Connection Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Editing a Connection Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Deleting a Connection Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Repeat Options for Schedulers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Scheduled States. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Restored State and Schedule Frequencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Scheduling a Workflow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Creating a Reusable Scheduler. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Unscheduling a Workflow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Disabling a Workflow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Manual Workflow Runs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Running an Entire Workflow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Running a Workflow with Advanced Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Running Part of a Workflow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Running a Task in the Workflow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Filtering Tasks and Integration Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Opening and Closing Folders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Viewing Statistics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Viewing Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Customizing Workflow Monitor Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Configuring General Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Configuring Gantt Chart View Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Configuring Task View Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Configuring Advanced Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Using Workflow Monitor Toolbars. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Working with Tasks and Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Opening Previous Workflow Runs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Displaying Previous Workflow Runs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Running a Task, Workflow, or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Recovering a Workflow or Worklet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Restarting a Task or Workflow Without Recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Stopping or Aborting Tasks and Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Scheduling Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Unscheduling Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Session and Workflow Logs in the Workflow Monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Viewing History Names. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Workflow and Task Status. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Using the Gantt Chart View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Listing Tasks and Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Navigating the Time Window in Gantt Chart View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Zooming the Gantt Chart View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Performing a Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Opening All Folders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Using the Task View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Filtering in Task View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Opening All Folders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Tips for Monitoring Workflows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Session Statistics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Worklet Run Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Worklet Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Command Task Run Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Session Task Run Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Failure Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Session Task Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Source and Target Statistics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Partition Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Performance Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Viewing Performance Details in the Workflow Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Understanding Performance Counters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
General Options Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Performance Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Mapping Tab (Transformations View). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Sources Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Targets Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Transformations Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Mapping Tab (Partitions View). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Components Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Metadata Extensions Tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Preface
The PowerCenter® Workflow Basics Guide is written for developers and administrators who are responsible for creating workflows and sessions and for running workflows. This guide assumes that you have knowledge of your operating systems, relational database concepts, and the database engines, flat files, or mainframe systems in your environment. It also assumes that you are familiar with the interface requirements of your supporting applications.
Informatica Resources
Informatica Network
Informatica Network hosts Informatica Global Customer Support, the Informatica Knowledge Base, and other
product resources. To access Informatica Network, visit https://network.informatica.com.
To access the Knowledge Base, visit https://kb.informatica.com. If you have questions, comments, or ideas
about the Knowledge Base, contact the Informatica Knowledge Base team at
KB_Feedback@informatica.com.
Informatica Documentation
To get the latest documentation for your product, browse the Informatica Knowledge Base at
https://kb.informatica.com/_layouts/ProductDocumentation/Page/ProductDocumentSearch.aspx.
If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation
team through email at infa_documentation@informatica.com.
Informatica Product Availability Matrixes
Product Availability Matrixes (PAMs) indicate the versions of operating systems, databases, and other types
of data sources and targets that a product release supports. If you are an Informatica Network member, you
can access PAMs at
https://network.informatica.com/community/informatica-network/product-availability-matrices.
Informatica Velocity
Informatica Velocity is a collection of tips and best practices developed by Informatica Professional Services.
Developed from the real-world experience of hundreds of data management projects, Informatica Velocity
represents the collective knowledge of our consultants who have worked with organizations from around the
world to plan, develop, deploy, and maintain successful data management solutions.
If you are an Informatica Network member, you can access Informatica Velocity resources at
http://velocity.informatica.com.
If you have questions, comments, or ideas about Informatica Velocity, contact Informatica Professional
Services at ips@informatica.com.
Informatica Marketplace
The Informatica Marketplace is a forum where you can find solutions that augment, extend, or enhance your
Informatica implementations. By leveraging any of the hundreds of solutions from Informatica developers and
partners, you can improve your productivity and speed up time to implementation on your projects. You can
access Informatica Marketplace at https://marketplace.informatica.com.
Informatica Global Customer Support
To find your local Informatica Global Customer Support telephone number, visit the Informatica website at the following link: http://www.informatica.com/us/services-and-training/support-services/global-support-centers.
If you are an Informatica Network member, you can use Online Support at http://network.informatica.com.
CHAPTER 1
Workflow Manager
You can also create a worklet in the Workflow Manager. A worklet is an object that groups a set of tasks; it is similar to a workflow, but it does not contain scheduling information. You can run a batch of worklets inside a workflow.
After you create a workflow, you run the workflow in the Workflow Manager and monitor it in the Workflow
Monitor.
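For reference, you can also start a workflow outside the Workflow Manager with the pmcmd command line program that ships with PowerCenter. The following is a minimal sketch; the Integration Service, domain, user, password, folder, and workflow names are placeholders, and the full syntax is documented in the Command Reference:

    # Start the workflow wf_LoadSales from the SalesFolder folder on the
    # int_svc Integration Service in the dom_main domain.
    # -sv names the Integration Service, -d the domain, -u and -p the
    # repository user and password, and -f the folder; the final argument
    # is the workflow name.
    pmcmd startworkflow -sv int_svc -d dom_main -u Administrator -p mypassword -f SalesFolder wf_LoadSales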
Workflow Manager Tools
To create a workflow, you first create tasks such as a session, which contains the mapping you build in the
Designer. You then connect tasks with conditional links to specify the order of execution for the tasks you
created. The Workflow Manager consists of three tools to help you develop a workflow:
• Task Developer. Use the Task Developer to create tasks you want to run in the workflow.
• Workflow Designer. Use the Workflow Designer to create a workflow by connecting tasks with links. You
can also create tasks in the Workflow Designer as you develop the workflow.
• Worklet Designer. Use the Worklet Designer to create a worklet.
Workflow Manager Windows
The Workflow Manager displays the following windows to help you create and organize workflows:
• Navigator. You can connect to and work in multiple repositories and folders. In the Navigator, the
Workflow Manager displays a red icon over invalid objects.
• Workspace. You can create, edit, and view tasks, workflows, and worklets.
• Output. Contains tabs to display different types of output messages. The Output window contains the
following tabs:
- Save. Displays messages when you save a workflow, worklet, or task. The Save tab displays a
validation summary when you save a workflow or a worklet.
- Fetch Log. Displays messages when the Workflow Manager fetches objects from the repository.
The Workflow Manager also displays a status bar that shows the status of the operation you perform.
Note: For the Timer task and schedule settings, the Workflow Manager displays the date in the short date format and the time in 24-hour format (HH:mm). For example, 2:30 P.M. appears as 14:30.
To configure Workflow Manager options, click Tools > Options. You can configure the following options:
• General. You can configure workspace options, display options, and other general options on the General
tab.
• Format. You can configure font, color, and other format options on the Format tab.
• Miscellaneous. You can configure Copy Wizard and Versioning options on the Miscellaneous tab.
• Advanced. You can configure enhanced security for connection objects in the Advanced tab.
General Options
General options control tool behavior, such as whether or not a tool retains its view when you close it, how
the Overview window behaves, and where the Workflow Manager stores workspace files.
You can configure the following general options in the Workflow Manager:
• Reload Tasks/Workflows When Opening a Folder. Reloads the last view of a tool when you open it. For example, if you have a workflow open when you disconnect from a repository, select this option so that the same workflow appears the next time you open the folder and Workflow Designer. Default is enabled.
• Ask Whether to Reload the Tasks/Workflows. Appears when you select Reload Tasks/Workflows When Opening a Folder. Select this option if you want the Workflow Manager to prompt you to reload tasks, workflows, and worklets each time you open a folder. Default is disabled.
• Delay Overview Window Pans. By default, when you drag the focus of the Overview window, the focus of the workbook moves concurrently. When you select this option, the focus of the workspace does not change until you release the mouse button. Default is disabled.
• Allow Invoking In-Place Editing Using the Mouse. By default, you can press F2 to edit objects directly in the workspace instead of opening the Edit Task dialog box. Select this option so you can also click the object name in the workspace to edit the object. Default is disabled.
• Open Editor When a Task Is Created. Opens the Edit Task dialog box when you create a task. By default, the Workflow Manager creates the task in the workspace. If you do not enable this option, double-click the task to open the Edit Task dialog box. Default is disabled.
• Workspace File Directory. Directory for workspace files created by the Workflow Manager. Workspace files maintain the last task or workflow you saved. This directory should be local to the PowerCenter Client to prevent file corruption or overwrites by multiple users. By default, the Workflow Manager creates files in the PowerCenter Client installation directory.
• Display Tool Names on Views. Displays the name of the tool in the upper left corner of the workspace or workbook. Default is enabled.
• Always Show the Full Name of Tasks. Shows the full name of a task when you select it. By default, the Workflow Manager abbreviates the task name in the workspace. Default is disabled.
• Show the Expression on a Link. Shows the link condition in the workspace. If you do not enable this option, the Workflow Manager abbreviates the link condition in the workspace. Default is enabled.
• Show Background in Partition Editor and Pushdown Optimization. Displays background color for objects in iconic view. Disable this option to remove background color from objects in iconic view. Default is disabled.
• Launch Workflow Monitor when Workflow Is Started. Launches the Workflow Monitor when you start a workflow or a task. Default is enabled.
• Receive Notifications from Repository Service. You can receive notification messages in the Workflow Manager and view them in the Output window. Notification messages include information about objects that another user creates, modifies, or deletes. You receive notifications about sessions, tasks, workflows, and worklets. The Repository Service notifies you of the changes so you know that objects you are working with may be out of date. For the Workflow Manager to receive a notification, the folder containing the object must be open in the Navigator, and the object must be open in the workspace. You also receive user-created notifications posted by the user who manages the Repository Service. Default is enabled.
Format Options
Format options control workspace colors and fonts. You can configure format options for each Workflow
Manager tool.
The Workflow Manager provides the following format options:
• Current Theme. Currently selected color theme for the Workflow Manager tools. This field is display-only.
• Tools. Workflow Manager tool that you want to configure. When you select a tool, the configurable workspace elements appear in the list below the Tools menu.
• Orthogonal Links. Link lines run horizontally and vertically, but not diagonally, in the workspace.
• Solid Lines for Links. Links appear as solid lines. By default, the Workflow Manager displays orthogonal links as dotted lines.
• Change. Changes the display font and language script for the selected category.
• Current Font. Font of the Workflow Manager component that is currently selected in the Categories menu. This field is display-only.
After you select a color theme for the Workflow Manager tools, you can modify the color of individual
workspace elements.
Miscellaneous Options
Miscellaneous options control the display settings and available functions of the Copy Wizard, versioning,
and target load options. Target options control how the Integration Service loads targets. To configure the
Copy Wizard, Versioning, and Target Load Type options, click Tools > Options and select the Miscellaneous
tab.
You can configure the following options on the Miscellaneous tab:
• Generate Unique Name When Resolved to “Rename”. Generates unique names for copied objects if you select the Rename option. For example, if the workflow wf_Sales has the same name as a workflow in the destination folder, the Rename option generates the unique name wf_Sales1. Default is enabled.
• Get Default Object When Resolved to “Choose”. Uses the object with the same name in the destination folder if you select the Choose option. Default is disabled.
• Show Check Out Image in Navigator. Displays the Check Out icon when an object has been checked out. Default is enabled.
• Allow Delete Without Checkout. You can delete versioned repository objects without first checking them out. You cannot, however, delete an object that another user has checked out. When you select this option, the Repository Service checks out an object to you when you delete it. Default is disabled.
• Check In Deleted Objects Automatically After They Are Saved. Checks in deleted objects after you save the changes to the repository. When you clear this option, the deleted object remains checked out and you must check it in from the results view. Default is disabled.
• Target Load Type. Sets the default load type for sessions. You can choose normal or bulk loading. Any change you make takes effect after you restart the Workflow Manager. You can override this setting in the session properties. Default is Bulk.
Advanced Options
Advanced options control enhanced security for connection objects. When you disable enhanced security, the Workflow Manager assigns read, write, and execute permissions to all users that would otherwise receive the permissions of the default group. If you delete the owner from the repository, the Workflow Manager assigns ownership of the object to the administrator.
The following page setup options control the printout of the workspace:
• Header and Footer. Displays the window title, page number, number of pages, current date, and current time in the printout of the workspace. You can also indicate the alignment of the header and footer.
• Options. Adds a frame or corner to the page and shows the full name of the tasks and options. You can also choose to print in color or black and white.
You can perform the following operations to navigate the Workflow Manager workspace:
• Customize windows.
• Customize toolbars.
• Search for tasks, links, events, and variables.
• Arrange objects in the workspace.
• Zoom and pan the workspace.
You can perform the following operations with windows:
• Display a window. From the menu, select View. Then select the window you want to open.
• Close a window. Click the small x in the upper right corner of the window.
• Dock or undock a window. Double-click the title bar or drag the title bar toward or away from the workspace.
The Workflow Manager provides the following toolbars:
• Standard. Contains buttons to connect to and disconnect from repositories and folders, toggle windows, zoom in and out, pan the workspace, and find objects.
• Connections. Contains buttons to create and edit connections, and assign Integration Services.
• Repository. Contains buttons to connect to and disconnect from repositories and folders, export and
import objects, save changes, and print the workspace.
• View. Contains buttons to customize toolbars, toggle the status bar and windows, toggle full-screen view,
create a new workbook, and view the properties of objects.
• Layout. Contains buttons to arrange and restore objects in the workspace, find objects, zoom in and out,
and pan the workspace.
• Tasks. Contains buttons to create tasks.
• Workflow. Contains buttons to edit workflow properties.
• Run. Contains buttons to schedule the workflow, start the workflow, or start a task.
• Versioning. Contains buttons to check in objects, undo checkouts, compare versions, list checked-out
objects, and list repository queries.
• Tools. Contains buttons to connect to the other PowerCenter Client applications. When you use a Tools
button to open another PowerCenter Client application, PowerCenter uses the same repository connection
to connect to the repository and opens the same folders.
You can search for items in the workspace in the following ways:
• Find in Workspace.
• Find Next.
To find items with Find in Workspace:
1. In any Workflow Manager tool, click the Find in Workspace toolbar button or click Edit > Find in Workspace.
The Find in Workspace dialog box appears.
2. Choose whether to search for tasks, links, variables, or events.
3. Enter a search string, or select a string from the list.
The Workflow Manager saves the last 10 search strings in the list.
To find items with Find Next:
1. To search for a task, link, event, or variable, open the appropriate Workflow Manager tool and click a task, link, or event. To search for text in the Output window, click the appropriate tab in the Output window.
2. Enter a search string in the Find field on the standard toolbar.
The search is not case sensitive.
3. Click Edit > Find Next, click the Find Next button on the toolbar, or press Enter or F3 to search for the
string.
The Workflow Manager highlights the first task name, link condition, event name, or variable name that
contains the search string, or the first string in the Output window that matches the search string.
4. To search for the next item, press Enter or F3 again.
The Workflow Manager alerts you when you have searched through all items in the workspace or Output
window before it highlights the same objects a second time.
To pan the workspace, click Layout > Pan or click the Pan button on the toolbar. Drag the focus of the
workspace window and release the mouse button when it is in the appropriate position. Double-click the
workspace to stop panning.
You can view properties of a folder, task, worklet, or workflow. For folders, the Workflow Manager displays
folder name and whether the folder is shared. Object properties are read-only.
Checking In Objects
You commit changes to the repository by checking in objects. When you check in an object, the repository
creates a new version of the object and assigns it a version number. The repository increments the version
number by one each time it creates a new version.
If you want to check out or check in scheduler objects in the Workflow Manager, you can run an object query
to search for them. You can also check out a scheduler object in the Scheduler Browser window when you
edit the object. However, you must run an object query to check in the object.
If you want to check out or check in session configuration objects in the Workflow Manager, you can run an
object query to search for them. You can also check out objects from the Session Config Browser window
when you edit them.
You also can check out and check in session configuration and scheduler objects from the Repository
Manager.
Use the following rules and guidelines when you view older versions of objects in the workspace:
• You cannot simultaneously view multiple versions of composite objects, such as workflows and worklets.
• Older versions of a composite object might not include the child objects that were used when the
composite object was checked in. If you open a composite object that includes a child object version that
is purged from the repository, the preceding version of the child object appears in the workspace as part
of the composite object. For example, you might want to view version 5 of a workflow that originally
included version 3 of a session, but version 3 of the session is purged from the repository. When you view
version 5 of the workflow, version 2 of the session appears as part of the workflow.
• You cannot view older versions of sessions if they reference deleted or invalid mappings, or if they do not
have a session configuration.
1. In the workspace or Navigator, select the object and click Versioning > View History.
2. Select the version you want to view in the workspace and click Tools > Open in Workspace.
1. In the workspace or Navigator, select an object and click Versioning > View History.
2. Select the versions you want to compare and click Compare > Selected Versions.
-or-
Select a version and click Compare > Previous Version to compare a version of the object with the
previous version.
The Diff Tool appears.
Use object queries to accomplish the following tasks:
• Track repository objects during development. You can add Label, User, Last saved, or Comments
parameters to queries to track objects during development.
• Associate a query with a deployment group. When you create a dynamic deployment group, you can
associate an object query with it.
To create an object query, click Tools > Queries to open the Query Browser.
From the Query Browser, you can create, edit, and delete queries. You can also configure permissions for
each query from the Query Browser. You can run any queries for which you have read permissions from the
Query Browser.
Copying Repository Objects
Use the Copy Wizard in the Workflow Manager to copy objects. When you copy a workflow or a worklet, the
Copy Wizard copies all of the worklets, sessions, and tasks in the workflow. You must resolve all conflicts
that occur. Conflicts occur when the Copy Wizard finds a workflow or worklet with the same name in the
target folder or when the connection object does not exist in the target repository. If a connection object does
not exist, you can skip the conflict and choose a connection object after you copy the workflow. You cannot
copy connection objects. Conflicts may also occur when you copy Session tasks.
You can configure display settings and functions of the Copy Wizard by choosing Tools > Options.
Note: Use the Import Wizard in the Workflow Manager to import objects from an XML file. The Import Wizard
provides the same options to resolve conflicts as the Copy Wizard.
Copying Sessions
When you copy a Session task, the Copy Wizard looks for the database connection and associated mapping
in the destination folder. If the mapping or connection does not exist in the destination folder, you can select
a new mapping or connection. If the destination folder does not contain any mapping, you must first copy a
mapping to the destination folder in the Designer before you can copy the session.
When you copy a session that has mapping variable values saved in the repository, the Workflow Manager
either copies or retains the saved variable values.
Comparing Repository Objects
You can compare objects across folders and repositories. You must open both folders to compare the
objects. You can compare a reusable object with a non-reusable object. You can also compare two versions
of the same object. You can compare the following types of objects:
• Tasks
• Sessions
• Worklets
• Workflows
You can also compare instances of the same type. For example, if the workflows you compare contain
worklet instances with the same name, you can compare the instances to see if they differ. Use the Workflow
Manager to compare the following instances and attributes:
• Instances of sessions and tasks in a workflow or worklet comparison. For example, when you
compare workflows, you can compare task instances that have the same name.
• Instances of mappings and transformations in a session comparison. For example, when you
compare sessions, you can compare mapping instances.
• The attributes of instances of the same type within a mapping comparison. For example, when you
compare flat file sources, you can compare attributes, such as file type (delimited or fixed), delimiters,
escape characters, and optional quotes.
You can compare schedulers and session configuration objects in the Repository Manager. You cannot
compare objects of different types. For example, you cannot compare an Email task with a Session task.
When you compare objects, the Workflow Manager displays the results in the Diff Tool window. The Diff Tool
output contains different nodes for different types of objects.
When you import Workflow Manager objects, you can compare object conflicts.
1. Open the folders that contain the objects you want to compare.
2. Open the appropriate Workflow Manager tool.
3. Click Tasks > Compare.
-or-
Click Worklets > Compare.
-or-
Click Workflow > Compare.
4. In the dialog box that appears, select the objects that you want to compare.
5. Click Compare.
Tip: You can also compare objects from the Navigator or workspace. In the Navigator, select the objects,
right-click and select Compare Objects. In the workspace, select the objects, right-click and select
Compare Objects.
6. To view more differences between object properties, click the Compare Further icon or right-click the
differences.
7. If you want to save the comparison as a text or HTML file, click File > Save to File.
Metadata Extensions
You can extend the metadata stored in the repository by associating information with individual repository
objects. For example, you may want to store your name with the worklets you create. If you create a session,
you can store your telephone extension with that session. You associate information with repository objects
using metadata extensions. You can create and promote metadata extensions on the Metadata Extensions
tab.
The following table describes the configuration options for the Metadata Extensions tab:
Extension Name Name of the metadata extension. Metadata extension names must be unique for each type of object
in a domain. Metadata extension names cannot contain any special characters except underscores
and cannot begin with numbers.
Reusable Makes the metadata extension reusable or non-reusable. Check to apply the metadata extension to
all objects of this type (reusable). Clear to make the metadata extension apply to this object only
(non-reusable).
Note: If you make a metadata extension reusable, you cannot change it back to non-reusable. The
Workflow Manager makes the extension reusable as soon as you confirm the action.
UnOverride This column appears only if the value of one of the metadata extensions was changed. To restore the
default value, click Revert.
Tip: To create multiple reusable metadata extensions, use the Repository Manager.
What you can edit depends on whether the metadata extension is reusable or non-reusable. You can
promote a non-reusable metadata extension to reusable, but you cannot change a reusable metadata
extension to non-reusable.
To edit the value of a reusable metadata extension, click the Metadata Extensions tab and modify the Value
field. To restore the default value for a metadata extension, click Revert in the UnOverride column.
To edit a non-reusable metadata extension, click the Metadata Extensions tab. You can update the Datatype,
Value, Precision, and Description fields.
To make the metadata extension reusable, select Reusable. If you make a metadata extension reusable, you
cannot change it back to non-reusable. The Workflow Manager makes the extension reusable as soon as you
confirm the action.
To restore the default value for a metadata extension, click Revert in the UnOverride column.
Expression Editor
The Workflow Manager provides an Expression Editor for any expression in the workflow. You can enter
expressions using the Expression Editor for Link conditions, Decision tasks, and Assignment tasks.
The Expression Editor displays built-in variables, user-defined workflow variables, and predefined workflow
variables such as $Session.status.
Adding Comments
You can add comments using -- or // comment indicators with the Expression Editor. Use comments to give
descriptive information about the expression, or you can specify a valid URL to access business
documentation about the expression.
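For example, you might comment a link condition as follows. The session name is a placeholder:
// Run the next task only when the load session succeeds
$s_LoadOrders.Status = SUCCEEDED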
Validating Expressions
Use the Validate button to validate an expression. If you do not validate an expression, the Workflow
Manager validates it when you close the Expression Editor. You cannot run a workflow with invalid
expressions.
Expressions in link conditions and Decision task conditions must evaluate to a numeric value. Workflow
variables used in expressions must exist in the workflow.
You can resize the Expression Editor. Expand the dialog box by dragging from the borders. The Workflow
Manager saves the new size for the dialog box as a client setting.
Keyboard Shortcuts
When you edit a repository object or navigate the Workflow Manager, use the following keyboard
shortcuts to complete different operations quickly.
The following table lists the Workflow Manager keyboard shortcuts for editing a repository object:
Task Shortcut
Find all combination and list boxes. Type the first letter on the list.
Paste copied or cut text from the clipboard into a cell. Ctrl+V
The following table lists the Workflow Manager keyboard shortcuts for navigating in the workspace:
Task Shortcut
Create links. Ctrl+F2. Press Ctrl+F2 to select first task you want to link.
Press Tab to select the rest of the tasks you want to link.
Press Ctrl+F2 again to link all the tasks you selected.
Expand selected node and all its children. Shift + * (use the asterisk on the numeric keypad)
CHAPTER 2
Workflows
This chapter includes the following topics:
• Workflows Overview, 35
• Creating a Workflow, 36
• Using the Workflow Wizard, 37
• Assigning an Integration Service, 39
• Workflow Reports (Deprecated), 40
• Working with Worklets, 41
• Workflow Links, 43
Workflows Overview
A workflow is a set of instructions that tells the Integration Service how to run tasks such as sessions, email
notifications, and shell commands. After you create tasks in the Task Developer and Workflow Designer, you
connect the tasks with links to create a workflow.
In the Workflow Designer, you can specify conditional links and use workflow variables to create branches in
the workflow. The Workflow Manager also provides Event-Wait and Event-Raise tasks to control the
sequence of task execution in the workflow. You can also create worklets and nest them inside the workflow.
Every workflow contains a Start task, which represents the beginning of the workflow.
When you create a workflow, select an Integration Service to run the workflow. You can start the workflow
using the Workflow Manager, Workflow Monitor, or pmcmd.
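For example, you might start a workflow from the command line with a pmcmd command similar to the
following. The service, domain, user, folder, and workflow names are placeholders:
pmcmd startworkflow -sv IS_Dev -d Domain_Dev -u Administrator -p password -f SalesFolder wf_LoadSales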
Use the Workflow Monitor to see the progress of a workflow during its run. The Workflow Monitor can also
show the history of a workflow.
To build a workflow, complete the following steps:
1. Create a workflow. Create a workflow in the Workflow Designer or by using the Workflow Generation
Wizard in the PowerCenter Designer.
2. Add tasks to the workflow. You might have already created tasks in the Task Developer. Or, you can
add tasks to the workflow as you develop the workflow in the Workflow Designer.
3. Connect tasks with links. After you add tasks to the workflow, connect them with links to specify the
order of execution in the workflow.
4. Specify conditions for each link. You can specify conditions on the links to create branches and
dependencies.
5. Validate workflow. Validate the workflow in the Workflow Designer to identify errors.
6. Save workflow. When you save the workflow, the Workflow Manager validates the workflow and
updates the repository.
7. Run workflow. In the workflow properties, select an Integration Service to run the workflow. Run the
workflow from the Workflow Manager, Workflow Monitor, or pmcmd. You can monitor the workflow in the
Workflow Monitor.
Related Topics:
• “Manual Workflow Runs” on page 174
• “Workflow Monitor” on page 187
• “Workflow Properties Reference” on page 249
Creating a Workflow
A workflow must contain a Start task. The Start task represents the beginning of a workflow. When you create
a workflow, the Workflow Designer creates a Start task and adds it to the workflow. You cannot delete the
Start task.
After you create a workflow, you can add tasks to the workflow. The Workflow Manager includes tasks such
as the Session, Command, and Email tasks.
Finally, you connect workflow tasks with links to specify the order of execution in the workflow. You can add
conditions to links.
When you edit a workflow, the Repository Service updates the workflow information when you save the
workflow. If a workflow is running when you make edits, the Integration Service uses the updated information
the next time you run the workflow.
You can also create a workflow through the Workflow Wizard in the Workflow Manager or the Workflow
Generation Wizard in the PowerCenter Designer.
If you have already created tasks in the Task Developer, add them to the workflow by dragging the tasks from
the Navigator to the Workflow Designer workspace.
To create and add tasks as you develop the workflow, click Tasks > Create in the Workflow Designer. Or, use
the Tasks toolbar to create and add tasks to the workflow. Click the button on the Tasks toolbar for the task
you want to create. Click again in the Workflow Designer workspace to create and add the task.
Tasks you create in the Workflow Designer are non-reusable. Tasks you create in the Task Developer are
reusable.
Deleting a Workflow
You may decide to delete a workflow that you no longer use. When you delete a workflow, you delete all non-
reusable tasks and reusable task instances associated with the workflow. Reusable tasks used in the
workflow remain in the folder when you delete the workflow.
If you delete a workflow that is running, the Integration Service aborts the workflow. If you delete a workflow
that is scheduled to run, the Integration Service removes the workflow from the schedule.
You can delete a workflow in the Navigator window, or you can delete the workflow currently displayed in the
Workflow Designer workspace:
• To delete a workflow from the Navigator window, open the folder, select the workflow and press the
Delete key.
• To delete a workflow currently displayed in the Workflow Designer workspace, click Workflows > Delete.
Using the Workflow Wizard
Before you create a workflow, verify that the folder contains a valid mapping for the Session task.
Complete the following steps to build a workflow using the Workflow Wizard:
1. In the Workflow Manager, open the folder containing the mapping you want to use in the workflow.
2. Open the Workflow Designer.
3. Click Workflows > Wizard.
The Workflow Wizard appears.
4. Enter a name for the workflow.
The convention for naming workflows is wf_WorkflowName.
5. Enter a description for the workflow.
6. Select the Integration Service to run the workflow and click Next.
1. In the second step of the Workflow Wizard, select a valid mapping and click the right arrow button.
The Workflow Wizard creates a Session task in the right pane using the selected mapping and names it
s_MappingName by default.
2. You can select additional mappings to create more Session tasks in the workflow.
When you add multiple mappings to the list, the Workflow Wizard creates sequential sessions in the
order you add them.
3. Use the arrow buttons to change the session order.
4. Specify whether the session should be reusable.
When you create a reusable session, you can use the session in other workflows.
5. Specify how you want the Integration Service to run the workflow.
You can specify that the Integration Service runs sessions only if previous sessions complete, or you can
specify that the Integration Service always runs each session. When you select this option, it applies to
all sessions you create using the Workflow Wizard.
When you configure a task, you can configure the workflow to fail if the task fails. If you configure the
workflow to fail when a task fails, the Integration Service removes the workflow from the schedule, and you
must reschedule it. You can reschedule the workflow through the Workflow Manager or through pmcmd. If
you do not configure the workflow to fail when a task fails, the Integration Service reschedules the workflow.
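For example, you might reschedule a workflow from the command line with a pmcmd command similar to
the following. The service, domain, user, folder, and workflow names are placeholders:
pmcmd scheduleworkflow -sv IS_Dev -d Domain_Dev -u Administrator -p password -f SalesFolder wf_LoadSales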
1. In the third step of the Workflow Wizard, configure the scheduling and run options.
2. Click Next.
The Workflow Wizard displays the settings for the workflow.
3. Verify the workflow settings, then click Finish. To edit settings, click Back.
The completed workflow opens in the Workflow Designer workspace. From the workspace, you can add
tasks, create concurrent sessions, add conditions to links, or change properties.
Note: Effective in version 10.1, Informatica deprecated the Reporting and Dashboards Service. If you install
version 10.1, you cannot view the reports in the Workflow Manager. If you try to view the reports, an error
indicates that the Reporting and Dashboards Service does not exist. If you upgrade to 10.1, you can configure
the Reporting and Dashboards Service and view the reports either in the Workflow Manager or in the Designer.
An administrator uses the Administrator tool to create a Reporting and Dashboards Service and adds a
reporting source for the service. The reporting source must be the PowerCenter repository that contains the
workflows that you want to report on.
The Workflow Composite Report includes information about workflow components such as tasks, events,
and variables.
Deprecated Behavior
Effective in version 10.1, Informatica deprecated the Reporting and Dashboards Service. Informatica will drop
support for the Reporting and Dashboards Service and JasperReports Server in a future release.
If you upgrade to version 10.1, you can continue to use the Reporting and Dashboards Service. Informatica
recommends that you begin using a third-party reporting tool before Informatica drops support. You can use
the recommended SQL queries for building all the reports shipped with earlier versions of PowerCenter.
If you install version 10.1, you cannot create a Reporting and Dashboards Service. You must use a third-
party reporting tool to run PowerCenter and Metadata Manager reports.
For information about the PowerCenter Reports, see the Informatica PowerCenter Using PowerCenter
Reports Guide. For information about the PowerCenter repository views, see the Informatica PowerCenter
Repository Guide.
Working with Worklets
To run a worklet, include the worklet in a workflow. The workflow that contains the worklet is called the parent
workflow. When the Integration Service runs a worklet, it expands the worklet to run tasks and evaluate links
within the worklet. It writes information about worklet execution in the workflow log.
Suspending Worklets
When you choose Suspend on Error for the parent workflow, the Integration Service also suspends the
worklet if a task in the worklet fails. When a task in the worklet fails, the Integration Service stops executing
the failed task and other tasks in its path. If no other task is running in the worklet, the worklet status is
“Suspended.” If one or more tasks are still running in the worklet, the worklet status is “Suspending.” The
Integration Service suspends the parent workflow when the status of the worklet is “Suspended” or
“Suspending.”
Developing a Worklet
To develop a worklet, you must first create a worklet. After you create a worklet, configure worklet properties
and add tasks to the worklet. You can create reusable worklets in the Worklet Designer. You can also create
non-reusable worklets in the Workflow Designer as you develop the workflow.
Note: You can promote a non-reusable worklet to a reusable worklet by selecting the Make Reusable option in
the worklet properties in a non-versioned repository. In a versioned repository, the Make Reusable option is
unavailable. To rename a non-reusable worklet, open the worklet properties in the Workflow Designer.
In addition to general task settings, you can configure the following worklet properties:
• Worklet variables. Use worklet variables to reference values and record information. You use worklet
variables the same way you use workflow variables. You can assign a workflow variable to a worklet
variable to override its initial value.
• Events. To use the Event-Wait and Event-Raise tasks in the worklet, you must first declare an event in
the worklet properties.
• Metadata extension. Extend the metadata stored in the repository by associating information with
repository objects.
Related Topics:
• “Metadata Extensions” on page 30
• “Working with the Event Task” on page 70
Nesting Worklets
You might choose to nest worklets to load data to fact and dimension tables. Create a nested worklet to load
fact and dimension data into a staging area. Then, create a nested worklet to load the fact and dimension
data from the staging area to the data warehouse.
You might choose to nest worklets to simplify the design of a complex workflow. Nest worklets that can be
grouped together within one worklet. To nest an existing reusable worklet, click Tasks > Insert Worklet. To
create a non-reusable nested worklet, click Tasks > Create, and select Worklet.
Workflow Links
Use links to connect each task in a workflow or worklet. You can specify conditions with links to create
branches. The Workflow Manager does not allow you to use links to create loops. Each link in the workflow or
worklet can run only once.
After you create links between tasks, you can create conditions for each link to determine the order of
operation in the workflow. If you do not specify conditions for each link, the Integration Service runs the next
task in the workflow or worklet by default.
Use predefined or user-defined workflow and worklet variables in the link condition. If the link condition
evaluates to True, the Integration Service runs the next task in the workflow or worklet. If the link condition
evaluates to False, the Integration Service does not run the next task.
You can view results of link evaluation during workflow runs in the workflow log file.
Linking Tasks Sequentially
Link tasks sequentially when you want to link tasks in order between one task and each subsequent task you
add.
Specifying Link Conditions
1. In the Workflow Designer or Worklet Designer workspace, double-click the link you want to specify.
The Expression Editor appears.
2. In the Expression Editor, enter the link condition.
The Expression Editor provides predefined workflow and worklet variables, user-defined workflow and
worklet variables, variable functions, and boolean and arithmetic operators.
3. Validate the expression using the Validate button.
The Workflow Manager displays validation results in the Output window.
Tip: Drag the end point of a link to move it from one task to another without losing the link condition.
For example, a workflow contains two sessions, s_STORES_CA and s_STORES_AZ. You can set the following
link condition between the sessions so that s_STORES_AZ runs only if the number of failed target rows for
s_STORES_CA is zero:
$s_STORES_CA.TgtFailedRows = 0
After you specify the link condition in the Expression Editor, the Workflow Manager validates the link
condition and displays it next to the link in the workflow or worklet.
1. In the Workflow Designer or Worklet Designer workspace, right-click a task and choose Highlight Path.
2. Select Forward Path, Backward Path, or Both.
The Workflow Manager highlights all links in the branch you select.
1. In the Workflow Designer or Worklet Designer workspace, select all links you want to delete.
Tip: Use the mouse to drag the selection, or you can Ctrl-click the tasks and links.
CHAPTER 3
Sessions
This chapter includes the following topics:
• Sessions Overview, 46
• Session Task, 46
• Editing a Session, 47
• Performance Details, 49
• Pre- and Post-Session Commands, 50
Sessions Overview
A session is a set of instructions that tells the Integration Service how and when to move data from sources
to targets. A session is a type of task, similar to other tasks available in the Workflow Manager. In the
Workflow Manager, you configure a session by creating a Session task. To run a session, you must first
create a workflow to contain the Session task.
When you create a Session task, enter general information such as the session name, session schedule, and
the Integration Service to run the session. You can select options to run pre-session shell commands, send
On-Success or On-Failure email, and use FTP to transfer source and target files.
Configure the session to override parameters established in the mapping, such as source and target location,
source and target type, error tracing levels, and transformation attributes. You can also configure the session
to collect performance details for the session and store them in the PowerCenter repository. You might view
performance details for a session to tune the session.
You can run as many sessions in a workflow as you need. You can run the Session tasks sequentially or
concurrently, depending on the requirement.
The Integration Service creates several files and in-memory caches depending on the transformations and
options used in the session.
Session Task
You create a Session task for each mapping that you want the Integration Service to run. The Integration
Service uses the instructions configured in the session to move data from sources to targets.
You can create a reusable Session task in the Task Developer. You can also create non-reusable Session
tasks in the Workflow Designer as you develop the workflow. After you create the session, you can edit the
session properties at any time.
Note: Before you create a Session task, you must configure the Workflow Manager to communicate with
databases and the Integration Service. You must assign appropriate permissions for any database, FTP, or
external loader connections you configure.
Editing a Session
After you create a session, you can edit it. For example, you might need to adjust the buffer and cache sizes,
modify the update strategy, or clear a variable value saved in the repository.
Double-click the Session task to open the session properties. The session has the following tabs, and each of
those tabs has multiple settings:
• General tab. Enter session name, mapping name, and description for the Session task, assign resources,
and configure additional task options.
• Properties tab. Enter session log information, test load settings, and performance configuration.
• Config Object tab. Enter advanced settings, log options, and error handling configuration.
• Mapping tab. Enter source and target information, override transformation properties, and configure the
session for partitioning.
• Components tab. Configure pre- or post-session shell commands and emails.
• Metadata Extensions tab. Configure metadata extension options.
You can edit session properties at any time. The repository updates the session properties immediately.
If the session is running when you edit the session, the repository updates the session when the session
completes. If the mapping changes, the Workflow Manager might issue a warning that the session is invalid.
The Workflow Manager then lets you continue editing the session properties. After you edit the session
properties, the Integration Service validates the session and reschedules the session.
Related Topics:
• “Session Validation” on page 166
• “Session Properties Reference” on page 229
Applying Attributes to All Instances
When you edit the session properties, you can apply source, target, and transformation settings to all
instances of the same type in the session. You can also apply settings to all partitions in a pipeline. You can
apply reader or writer settings, connection settings, and properties settings.
For example, you might need to change a relational connection from a test to a production database for all
the target instances in a session. On the Mapping tab, you can change the connection value for one target in
a session and apply the connection to the other relational target objects.
The following table shows the options you can use to apply attributes to objects in a session. You can apply
different options depending on whether the setting is a reader or writer, connection, or an object property.
Reader, Writer - Apply Type to All Instances: Applies a reader or writer type to all instances of the same
object type in the session. For example, you can apply a relational reader type to all the other readers in
the session.
Reader, Writer - Apply Type to All Partitions: Applies a reader or writer type to all the partitions in a
pipeline. For example, if you have four partitions, you can change the writer type in one partition for a
target instance. Use this option to apply the change to the other three partitions.
Connections - Apply Connection Type: Applies the same type of connection to all instances. Connection
types are relational, FTP, queue, application, or external loader.
Connections - Apply Connection Value: Applies a connection value to all instances or partitions. The
connection value defines a specific connection that you can view in the connection browser. You can apply
a connection value that is valid for the existing connection type.
Connections - Apply Connection Attributes: Applies only the connection attribute values to all instances or
partitions. Each type of connection has different attributes. You can apply connection attributes separately
from connection values.
Connections - Apply Connection Data: Applies the connection value and its connection attributes to all the
other instances that have the same connection type. This option combines the connection value option and
the connection attribute option.
Connections - Apply All Connection Information: Applies the connection value and its attributes to all the
other instances even if they do not have the same connection type. This option is similar to Apply
Connection Data, but it lets you change the connection type.
Properties - Apply Attribute to all Instances: Applies an attribute value to all instances of the same object
type in the session. For example, if you have a relational target, you can choose to truncate a table before
you load data. You can apply the attribute value to all the relational targets in the session.
Properties - Apply Attribute to all Partitions: Applies an attribute value to all partitions in a pipeline. For
example, you can change the name of the reject file in one partition for a target instance, then apply the
file name change to the other partitions.
Applying Connection Settings
When you apply connection settings you can apply the connection type, connection value, and connection
attributes. You can only apply a connection value that is valid for a connection type unless you choose the
Apply All Connection Information option. For example, if a target instance uses an FTP connection, you can
only choose an FTP connection value to apply to it. The Apply All Connection Information option lets you
apply a new connection type, connection value, and connection attributes.
Performance Details
You can configure a session to collect performance details and store them in the PowerCenter repository.
Collect performance data for a session to view performance details while the session runs. Write
performance data for a session in the PowerCenter repository to store and view performance details for
previous session runs.
To collect performance details and write them to the repository, complete the following steps:
1. In the Workflow Manager, open the session properties and select the Properties tab.
2. Select Collect performance data to view performance details while the session runs.
3. Select Write Performance Data to Repository to store and view performance details for previous session
runs.
You must also configure the Integration Service to store the run-time information at the verbose level.
4. Click OK.
Pre- and Post-Session SQL Commands
The Integration Service runs pre-session SQL commands before it reads the source. It runs post-session
SQL commands after it writes to the target.
You can use parameters and variables in SQL executed against the source and target. Use any parameter or
variable type that you can define in the parameter file. You can enter a parameter or variable within the SQL
statement, or you can use a parameter or variable as the command. For example, you can use a session
parameter, $ParamMyPreSQL, as the source pre-session SQL command, and set $ParamMyPreSQL to the
SQL statement in the parameter file.
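For example, a parameter file entry similar to the following might assign the pre-session SQL statement to
$ParamMyPreSQL. The folder, workflow, session, and table names are placeholders:
[SalesFolder.WF:wf_LoadSales.ST:s_m_LoadOrders]
$ParamMyPreSQL=TRUNCATE TABLE STG_ORDERS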
Use the following guidelines when you enter pre- or post-session SQL commands:
• Use any command that is valid for the database type. However, the Integration Service does not allow
nested comments, even though the database might.
• Use a semicolon (;) to separate multiple statements. The Integration Service issues a commit after each
statement.
• The Integration Service ignores semicolons within /* ... */ comments.
• If you need to use a semicolon outside of comments, you can escape it with a backslash (\).
• The Workflow Manager does not validate the SQL.
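For example, the following Oracle-style pre-session SQL command follows these guidelines. It contains two
statements separated by a semicolon, and the semicolon inside the comment is ignored. The table names are
placeholders:
DELETE FROM STG_ORDERS WHERE LOAD_DT < SYSDATE - 7; /* purge staging; keep one week */
INSERT INTO ETL_AUDIT (EVENT_DESC) VALUES ('pre-load start')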
Error Handling
You can configure error handling on the Config Object tab. You can choose to stop or continue the session if
the Integration Service encounters an error issuing the pre- or post- session SQL command.
Using Pre- and Post-Session Shell Commands
The Workflow Manager provides the following types of shell commands for each Session task:
• Pre-session command. The Integration Service performs pre-session shell commands at the beginning
of a session. You can configure a session to stop or continue if a pre-session shell command fails.
• Post-session success command. The Integration Service performs post-session success commands
only if the session completed successfully.
• Post-session failure command. The Integration Service performs post-session failure commands only if
the session failed to complete.
Use the following guidelines to call a shell command:
• Use any valid UNIX command or shell script for UNIX nodes, or any valid DOS or batch file for Windows
nodes.
• Configure the session to run the pre- or post-session shell commands.
The Workflow Manager provides a task called the Command task that lets you configure shell commands
anywhere in the workflow. You can choose a reusable Command task for the pre- or post-session shell
command. Or, you can create non-reusable shell commands for the pre- or post-session shell commands.
If you create a non-reusable pre- or post-session shell command, you can make it into a reusable Command
task.
The Workflow Manager lets you choose from the following options when you configure shell commands:
• Create non-reusable shell commands. Create a non-reusable set of shell commands for the session.
Other sessions in the folder cannot use this set of shell commands.
• Use an existing reusable Command task. Select an existing Command task to run as the pre- or post-
session shell command.
Configure pre- and post-session shell commands in the Components tab of the session properties.
1. In the Components tab of the session properties, select Non-reusable for pre- or post-session shell
command.
2. Click the Edit button in the Value field to open the Edit Pre- or Post-Session Command dialog box.
3. Enter a name for the command in the General tab.
4. If you want the Integration Service to perform the next command only if the previous command
completed successfully, select Fail Task if Any Command Fails in the Properties tab.
5. In the Commands tab, click the Add button to add shell commands.
To create a Command Task from non-reusable pre- or post-session shell commands, click the Edit button to
open the Edit dialog box for the shell commands. In the General tab, select the Make Reusable check box.
After you select the Make Reusable check box and click OK, a new Command task appears in the Tasks
folder in the Navigator window. Use this Command task in other workflows, just as you do with any other
reusable workflow tasks.
1. In the Components tab of the session properties, click Reusable for the pre- or post-session shell
command.
2. Click the Edit button in the Value field to open the Task Browser dialog box.
3. Select the Command task you want to run as the pre- or post-session shell command.
4. Click the Override button in the Task Browser dialog box if you want to change the order of the
commands, or if you want to specify whether to run the next command when the previous command fails.
Changes you make to the Command task from the session properties only apply to the session. In the
session properties, you cannot edit the commands in the Command task.
5. Click OK to select the Command task for the pre- or post-session shell command.
The name of the Command task you select appears in the Value field for the shell command.
Configure the session to stop or continue if a pre-session shell command fails in the Error Handling settings
on the Config Object tab.
CHAPTER 4
Session Configuration Object
When you create a session, the Workflow Manager applies the default configuration object settings to the
Config Object tab of the session. You can also choose a configuration object to use for the session.
When you edit a session configuration object, each session that uses the session configuration object inherits
the changes. When you override the configuration object settings in the Session task, the session
configuration object does not inherit changes.
You can configure the following settings in a session configuration object or on the Config Object tab:
• Advanced. Advanced settings allow you to configure constraint-based loading, lookup caches, and buffer
sizes.
• Log options. Log options allow you to configure how you want to save the session log. By default, the
Log Manager saves only the current session log.
• Error handling. Error Handling settings allow you to determine if the session fails or continues when it
encounters pre-session command errors, stored procedure errors, or a specified number of session
errors.
• Partitioning options. Partitioning options allow the Integration Service to determine the number of
partitions to create at run time.
• Session on grid. When Session on Grid is enabled, the Integration Service distributes session threads to
the nodes in a grid to increase performance and scalability.
Advanced Settings
Advanced settings allow you to configure constraint-based loading, lookup caches, and buffer sizes.
The following table describes the Advanced settings of the Config Object tab:
Constraint Based Load Ordering: The Integration Service loads targets based on primary key-foreign key
constraints where possible.
Cache Lookup() Function If selected, the Integration Service caches PowerMart 3.5 LOOKUP functions in the
mapping, overriding mapping-level LOOKUP configurations.
If not selected, the Integration Service performs lookups on a row-by-row basis,
unless otherwise specified in the mapping.
Default Buffer Block Size Size of buffer blocks used to move data from sources to targets. By default, this value
is set to auto.
You can specify auto or a numeric value. The default unit is bytes. Append KB, MB, or
GB to the value to specify other units. For example, 1048576 or 1024KB or 1MB.
Line Sequential Buffer Length Number of bytes that the PowerCenter Integration Service reads for each line.
Increase this setting from the default of 1024 bytes if source flat file records are larger
than 1024 bytes.
Maximum Partial Session Log Files The maximum number of partial log files to save. Configure this option with Session
Log File Max Size or Session Log File Max Time Period. Default is one.
Maximum Memory Allowed for Auto Memory Attributes: Maximum memory allocated for automatic cache when
you configure the Integration Service to determine session cache size at run time.
You enable automatic memory settings by configuring a value for this attribute. The default unit is bytes.
Append KB, MB, or GB to the value to specify other units. For example, 1048576 or 1024KB or 1MB.
Maximum Percentage of Total Memory Allowed for Auto Memory Attributes: Maximum percentage of memory
allocated for automatic cache when you configure the Integration Service to determine session cache size at
run time.
Additional Concurrent Pipelines for Lookup Cache Creation: Restricts the number of pipelines that the
Integration Service can create concurrently to pre-build lookup caches. Configure this property when the
Pre-build Lookup Cache property is enabled for a session or transformation.
When the Pre-build Lookup Cache property is enabled, the Integration Service creates a lookup cache before
the Lookup transformation receives the data. If the session has multiple Lookup transformations, the
Integration Service creates an additional pipeline for each lookup cache that it builds.
To configure the number of pipelines that the Integration Service can create concurrently, select Auto or
enter a numeric value:
- Auto. The Integration Service determines the number of pipelines it can create at run time.
- Numeric value. The Integration Service can create the specified number of pipelines to create lookup
caches.
Custom Properties Configure custom properties of the Integration Service for the session. You can
override custom properties that the Integration Service uses after the DTM process
has started. The Integration Service also writes the override value of the property to
the session log.
Pre-build Lookup Cache Allows the Integration Service to build the lookup cache before the Lookup
transformation receives the data. The Integration Service can build multiple lookup
cache files at the same time to improve performance.
You can configure this option in the mapping or the session. The Integration Service
uses the session-level setting if you configure the Lookup transformation option as
Auto.
Configure one of the following options:
- Auto. The Integration Service uses the value configured in the session.
- Always allowed. The Integration Service can build the lookup cache before the
Lookup transformation receives the first source row. The Integration Service
creates an additional pipeline to build the cache.
- Always disallowed. The Integration Service cannot build the lookup cache before
the Lookup transformation receives the first row.
You must configure the number of pipelines that the Integration Service can build
concurrently. Configure the Additional Concurrent Pipelines for Lookup Cache
Creation session property. The Integration Service can pre-build lookup cache if this
property is greater than zero.
DateTime Format String Date time format defined in the session configuration object. Default format specifies
microseconds: MM/DD/YYYY HH24:MI:SS.US.
You can specify seconds, milliseconds, or nanoseconds.
MM/DD/YYYY HH24:MI:SS, specifies seconds.
MM/DD/YYYY HH24:MI:SS.MS, specifies milliseconds.
MM/DD/YYYY HH24:MI:SS.US, specifies microseconds.
MM/DD/YYYY HH24:MI:SS.NS, specifies nanoseconds.
Pre 85 Timestamp Compatibility Trims subseconds to maintain compatibility with versions prior to 8.5. The Integration
Service converts the Oracle Timestamp datatype to the Oracle Date datatype. The
Integration Service trims subsecond data for the following sources, targets, and
transformations:
- Relational sources and targets
- XML sources and targets
- SQL transformation
- XML Generator transformation
- XML Parser transformation
Default is disabled.
The following table shows the Log Options settings of the Config Object tab:
Save Session Log By Configure this option to save session log files.
If you select Save Session Log by Timestamp, the Log Manager saves all session logs,
appending a time stamp to each log.
If you select Save Session Log by Runs, the Log Manager saves a designated number of
session logs. Configure the number of sessions in the Save Session Log for These Runs
option.
You can also use the $PMSessionLogCount service variable to save the configured number of
session logs for the Integration Service.
Save Session Log for These Runs: Number of historical session logs you want the Log Manager to save.
The Log Manager saves the number of historical logs you specify, plus the most recent session log. When you
configure five runs, the Log Manager saves the most recent session log, plus historical logs 0-4.
You can configure up to 2,147,483,647 historical logs. If you configure zero logs, the Log Manager saves the
most recent session log.
Session Log File Max Size: Maximum number of megabytes for a session log file. Configure a maximum size to
enable log file rollover. When the log file reaches the maximum size, the Integration Service creates
another log file. If you set the size to zero, the session log file size has no limit.
Configure this option for real-time sessions that generate large session logs. The Integration Service
writes the session logs to multiple files. Each file is a partial log file. Default is zero.
Session Log File Max Time Period: Maximum number of hours that the Integration Service writes to a session
log file. Configure the maximum period to enable log file rollover by time. When the period is over, the
Integration Service creates another log file.
Configure this option for real-time sessions that might generate large session logs. The Integration
Service writes the session logs to multiple files. Each file is a partial log file. Default is zero.
Maximum Partial Session Log Files: Maximum number of session log files to save. The Integration Service
overwrites the oldest partial log file if the number of log files has reached the limit.
Configure this option in conjunction with the maximum time period or maximum file size option. You must
configure one of these options to enable session log rollover.
If you set the maximum number to 0, the number of session log files is unlimited. Default is 1.
Writer Commit Statistics Log Frequency: Frequency at which the Integration Service writes commit statistics
in the session log. The Integration Service writes commit statistics to the session log after the specified
number of commits occurs. Default is 1, which writes commit statistics after each commit.
Writer Commit Statistics Log Interval: Time interval, in minutes, to write commit statistics to the session
log. The Integration Service writes commit statistics to the session log after each time interval.
Related Topics:
• “Session Logs” on page 225
The following table describes the Error Handling settings of the Config Object tab:
Stop On Errors: Indicates how many non-fatal errors the Integration Service can encounter before it stops
the session. Non-fatal errors include reader, writer, and DTM errors. Enter the number of non-fatal errors
you want to allow before stopping the session. The Integration Service maintains an independent error count
for each source, target, and transformation. If you specify 0, non-fatal errors do not cause the session to
stop.
Optionally use the $PMSessionErrorThreshold service variable to stop on the configured number
of errors for the Integration Service.
Override Tracing: Overrides tracing levels set at the transformation level. Selecting this option enables a
menu from which you choose a tracing level: None, Terse, Normal, Verbose Initialization, or Verbose Data.
On Stored Procedure Error: Required if the session uses pre- or post-session stored procedures.
If you select Stop Session, the Integration Service stops the session on errors executing a pre-session or
post-session stored procedure.
If you select Continue Session, the Integration Service continues the session regardless of errors
executing pre-session or post-session stored procedures.
By default, the Integration Service stops the session on Stored Procedure error and marks the session
failed.
On Pre-Post SQL Error: Required if the session uses pre- or post-session SQL.
If you select Stop Session, the Integration Service stops the session on errors executing pre-session or
post-session SQL.
If you select Continue, the Integration Service continues the session regardless of errors executing
pre-session or post-session SQL.
By default, the Integration Service stops the session upon pre- or post-session SQL error and marks the
session failed.
Error Log Type Specifies the type of error log to create. You can specify relational, file, or no log. Default is none.
Note: You cannot log row errors from XML file sources. You can view the XML source errors in the
session log.
Error Log DB Connection Specifies the database connection for a relational error log.
Error Log Table Name Prefix: Specifies the table name prefix for a relational error log. Oracle and Sybase
have a 30 character limit for table names. If a table name exceeds 30 characters, the session fails.
Error Log File Directory Specifies the directory where errors are logged. By default, the error log file directory is
$PMBadFilesDir\.
Error Log File Name Specifies error log file name. By default, the error log file name is PMError.log.
Log Row Data Specifies whether or not to log transformation row data. When you enable error logging, the
Integration Service logs transformation row data by default. If you disable this property, n/a or -1
appears in transformation row data fields.
Log Source Row Data Specifies whether or not to log source row data. By default, the check box is clear and source row
data is not logged.
Data Column Delimiter Delimiter for string type source row data and transformation group row data. By default, the
Integration Service uses a pipe ( | ) delimiter. Verify that you do not use the same delimiter for the
row data as the error logging columns. If you use the same delimiter, you may find it difficult to
read the error log file.
The following table describes the Partitioning Options settings on the Config Object tab:
Dynamic Partitioning Configure dynamic partitioning using one of the following methods:
- Disabled. Do not use dynamic partitioning. Define the number of partitions on the Mapping tab.
- Based on number of partitions. Sets the partitions to a number that you define in the Number of
Partitions attribute. Use the $DynamicPartitionCount session parameter, or enter a number
greater than 1.
- Based on number of nodes in grid. Sets the partitions to the number of nodes in the grid running
the session. If you configure this option for sessions that do not run on a grid, the session runs in
one partition and logs a message in the session log.
- Based on source partitioning. Determines the number of partitions using database partition
information. The number of partitions is the maximum of the number of partitions at the source.
- Based on number of CPUs. Sets the number of partitions equal to the number of CPUs on the
node that prepares the session. If the session is configured to run on a grid, dynamic partitioning
sets the number of partitions equal to the number of CPUs on the node that prepares the session
multiplied by the number of nodes in the grid.
Default is disabled.
Number of Partitions Determines the number of partitions that the Integration Service creates when you configure
dynamic partitioning based on the number of partitions. Enter a value greater than 1 or use the
$DynamicPartitionCount session parameter.
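For example, to set the partition count through the session parameter, you might add an entry similar to
the following to the parameter file. The folder, workflow, and session names are placeholders:
[SalesFolder.WF:wf_LoadSales.ST:s_m_LoadOrders]
$DynamicPartitionCount=4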
1. In the Workflow Manager, open a folder and click Tasks > Session Configuration.
The Session Configuration Browser appears.
2. Click New to create a new session configuration object.
3. Enter a name for the session configuration object.
4. On the Properties tab, configure the settings.
5. Click OK.
1. In the Workflow Manager, open the session properties and click the Config Object tab.
2. Click the Open button in the Config Name field.
A list of session configuration objects appears.
3. Select the configuration object you want to use and click OK.
The settings associated with the configuration object appear on the Config Object tab.
4. Click OK.
CHAPTER 5
Tasks
This chapter includes the following topics:
• Tasks Overview, 61
• Creating a Task, 62
• Configuring Tasks, 63
• Working with the Assignment Task, 65
• Command Task, 66
• Control Task, 67
• Working with the Event Task, 70
• Timer Task, 73
Tasks Overview
The Workflow Manager contains many types of tasks to help you build workflows and worklets. You can
create reusable tasks in the Task Developer. Or, create and add tasks in the Workflow or Worklet Designer
as you develop the workflow.
The Workflow Manager provides the following types of tasks:
Command (Task Developer, Workflow Designer, Worklet Designer; reusable): Specifies shell commands to run
during the workflow. You can choose to run the Command task if the previous task in the workflow completes.
Decision (Workflow Designer, Worklet Designer; non-reusable): Specifies a condition to evaluate in the
workflow. Use the Decision task to create branches in a workflow.
Event-Raise (Workflow Designer, Worklet Designer; non-reusable): Represents the location of a user-defined
event. The Event-Raise task triggers the user-defined event when the Integration Service runs the
Event-Raise task.
Event-Wait (Workflow Designer, Worklet Designer; non-reusable): Waits for a user-defined or a predefined
event to occur. Once the event occurs, the Integration Service completes the rest of the workflow.
Timer (Workflow Designer, Worklet Designer; non-reusable): Waits for a specified period of time to run the
next task.
The Workflow Manager validates task attributes and links. If a task is invalid, the workflow becomes invalid.
Workflows containing invalid sessions may still be valid.
Creating a Task
You can create tasks in the Task Developer, or you can create them in the Workflow Designer or the Worklet
Designer as you develop the workflow or worklet. Tasks you create in the Task Developer are reusable.
Tasks you create in the Workflow Designer and Worklet Designer are non-reusable by default.
Edit the General tab of the task properties to promote a non-reusable task to a reusable task.
Configuring Tasks
After you create the task, you can configure general task options on the General tab. For each task instance
in the workflow, you can configure how the Integration Service runs the task and the other objects associated
with the selected task. You can also disable the task so that you can run the rest of the workflow without
the selected task.
When you use a task in the workflow, you can edit the task in the Workflow Designer and configure the
following task options in the General tab:
• Fail parent if this task fails. Choose to fail the workflow or worklet containing the task if the task fails.
• Fail parent if this task does not run. Choose to fail the workflow or worklet containing the task if the task
does not run.
• Disable this task. Choose to disable the task so you can run the rest of the workflow without the task.
• Treat input link as AND or OR. Choose to have the Integration Service run the task when all or one of
the input link conditions evaluates to True.
You can create any task as non-reusable or reusable. Tasks you create in the Task Developer are reusable.
Tasks you create in the Workflow Designer are non-reusable by default. However, you can edit the general
properties of a task to promote it to a reusable task.
The Workflow Manager stores each reusable task separate from the workflows that use the task. You can
view a list of reusable tasks in the Tasks node in the Navigator window. You can see a list of all reusable
Session tasks in the Sessions node in the Navigator window.
To promote a non-reusable workflow task:
1. In the Workflow Designer, double-click the task you want to make reusable.
2. In the General tab of the Edit Task dialog box, select the Make Reusable option.
3. When prompted whether you are sure you want to promote the task, click Yes.
4. Click OK.
The newly promoted task appears in the list of reusable tasks in the Tasks node in the Navigator
window.
You can edit the task instance in the Workflow Designer. Changes you make in the task instance exist only in
the workflow. The task definition remains unchanged in the Task Developer.
When you make changes to a reusable task definition in the Task Developer, the changes reflect in the
instance of the task in the workflow if you have not edited the instance.
To set the type of input links, double-click the task to open the Edit Tasks dialog box. Select AND or OR for
the input link type.
Disabling Tasks
In the Workflow Designer, you can disable a workflow task so that the Integration Service runs the workflow
without the disabled task. The status of a disabled task is DISABLED. Disable a task in the workflow by
selecting the Disable This Task option in the Edit Tasks dialog box.
Failing Parent Workflow or Worklet
To fail the parent workflow or worklet if the task fails, double-click the task and select the Fail Parent If This
Task Fails option in the General tab. When you select this option and a task fails, it does not prevent the
other tasks in the workflow or worklet from running. Instead, the Integration Service marks the status of the
workflow or worklet as failed. If you have a session nested within multiple worklets, you must select the Fail
Parent If This Task Fails option for each worklet instance to see the failure at the workflow level.
To fail the parent workflow or worklet if the task does not run, double-click the task and select the Fail Parent
If This Task Does Not Run option in the General tab. When you choose this option, the Integration Service
fails the parent workflow if a task did not run.
Note: The Integration Service does not fail the parent workflow if you disable a task.
Command Task
Use the Command task in the following ways:
• Standalone Command task. Use a Command task anywhere in the workflow or worklet to run shell
commands.
• Pre- and post-session shell command. You can call a Command task as the pre- or post-session shell
command for a Session task.
Use any valid UNIX command or shell script for UNIX servers, or any valid DOS or batch file for Windows
servers. For example, you might use a shell command to copy a file from one directory to another. For a
Windows server, you would use the following shell command to copy the SALES_ADJ file from the source
directory, L, to the target, H:
copy L:\sales\sales_adj H:\marketing\
For a UNIX server, you would use the following command to perform a similar operation:
cp sales/sales_adj marketing/
Each shell command runs in the same environment as the Integration Service. Environment settings in one
shell command script do not carry over to other scripts. To run all shell commands in the same environment,
call a single shell script that invokes other scripts.
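For example, a single wrapper script can export shared settings and source the individual scripts so that all of them run in one environment. A minimal UNIX sketch, using hypothetical script names:
#!/bin/sh
# Shared environment settings for all steps
export DATA_DIR=/data/staging
# Source each script so it runs in this same shell environment
. /scripts/extract_feed.sh
. /scripts/archive_feed.sh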
The parameters and variables you can use depend on how you use the Command task:
• Standalone Command tasks. You can use service, service process, workflow, and worklet variables in
standalone Command tasks. You cannot use session parameters, mapping parameters, or mapping
variables in standalone Command tasks. The Integration Service does not expand these types of
parameters and variables in standalone Command tasks.
• Pre- and post-session shell commands. You can use any parameter or variable type that you can
define in the parameter file.
Assigning Resources
You can assign resources to Command task instances in the Worklet or Workflow Designer. You might want
to assign resources to a Command task if you assign the workflow to an Integration Service associated with a
grid. When you assign a resource to a Command task and the Integration Service is configured to check
resources, the Load Balancer dispatches the task to a node that has the resource available. A task fails if the
Load Balancer cannot find a node where the required resource is available.
To create a Command task:
1. In the Workflow Designer or the Task Developer, click Tasks > Create.
2. Select Command Task for the task type.
3. Enter a name for the Command task. Click Create. Then click Done.
4. Double-click the Command task in the workspace to open the Edit Tasks dialog box.
5. In the Commands tab, click the Add button to add a command.
6. In the Name field, enter a name for the new command.
7. In the Command field, click the Edit button to open the Command Editor.
8. Enter the command you want to run. Enter one command in the Command Editor. You can use service,
service process, workflow, and worklet variables in the command.
9. Click OK to close the Command Editor.
10. Repeat steps 4 to 9 to add more commands in the task.
11. Optionally, click the General tab in the Edit Tasks dialog to assign resources to the Command task.
12. Click OK.
If you specify non-reusable shell commands for a session, you can promote the non-reusable shell
commands to a reusable Command task.
You can choose to run a command only if the previous command completed successfully. Or, you can
choose to run all commands in the Command task, regardless of the result of the previous command. If you
configure multiple commands in a Command task to run on UNIX, each command runs in a separate shell.
If you choose to run a command only if the previous command completes successfully, the Integration
Service stops running the rest of the commands and fails the task when one of the commands in the
Command task fails. If you do not choose this option, the Integration Service runs all the commands in the
Command task and treats the task as completed, even if a command fails. If you want the Integration Service
to perform the next command only if the previous command completes successfully, select Fail Task if Any
Command Fails in the Properties tab of the Command task.
You can choose a recovery strategy for the task. The recovery strategy determines how the Integration
Service recovers the task when you configure workflow recovery and the task fails. You can configure the
task to restart or you can configure the task to fail and continue running the workflow.
Control Task
Use the Control task to stop, abort, or fail the top-level workflow or the parent workflow based on an input link
condition. A parent workflow or worklet is the workflow or worklet that contains the Control task.
You can configure the following options in the Control task:
• Fail Me. Marks the Control task as "Failed." The Integration Service fails the Control task if you choose this option. If you choose Fail Me in the Properties tab and choose Fail Parent If This Task Fails in the General tab, the Integration Service fails the parent workflow.
• Fail Parent. Marks the status of the workflow or worklet that contains the Control task as failed after the workflow or worklet completes.
• Stop Parent. Stops the workflow or worklet that contains the Control task.
• Abort Parent. Aborts the workflow or worklet that contains the Control task.
Decision Task
You can specify one decision condition per Decision task. After the Integration Service evaluates the
Decision task, use the predefined condition variable in other expressions in the workflow to help you develop
the workflow.
Depending on the workflow, you might use link conditions instead of a Decision task. However, the Decision
task simplifies the workflow. If you do not specify a condition in the Decision task, the Integration Service
evaluates the Decision task to True.
Example
For example, you have a Command task that depends on the status of the three sessions in the workflow.
You want the Integration Service to run the Command task when any of the three sessions fails. To
accomplish this, use a Decision task with the following decision condition:
$Q1_session.status = FAILED OR $Q2_session.status = FAILED OR $Q3_session.status = FAILED
You can then use the predefined condition variable in the input link condition of the Command task.
Configure the input link with the following link condition:
$Decision.condition = True
You can configure the same logic in the workflow without the Decision task. Without the Decision task, you
need to use three link conditions and treat the input links to the Command task as OR links.
You can further expand the workflow. The Integration Service runs the Command task if any of the three
Session tasks fails. Suppose now you want the Integration Service to also run an Email task if all three
Session tasks succeed. To do this, add an Email task and use the decision condition variable in the link
condition.
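For example, the link condition to the Email task might test the negation of the decision condition; this sketch assumes the Decision task from the example above is named Decision:
$Decision.condition = False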
Creating a Decision Task
1. In the Workflow Designer, click Tasks > Create.
2. Select Decision Task for the task type.
3. Enter a name for the Decision task. Click Create. Then click Done.
4. Double-click the Decision task to open it.
5. Click the Open button in the Value field to open the Expression Editor.
6. In the Expression Editor, enter the condition you want the Integration Service to evaluate.
Validate the expression before you close the Expression Editor.
7. Click OK.
Working with the Event Task
Use the following tasks to define events in the workflow:
• Event-Raise task. The Event-Raise task represents a user-defined event. When the Integration Service runs
the Event-Raise task, the Event-Raise task triggers the event. Use the Event-Raise task with the Event-
Wait task to define events.
• Event-Wait task. The Event-Wait task waits for an event to occur. Once the event triggers, the Integration
Service continues executing the rest of the workflow.
To coordinate the execution of the workflow, you may specify the following types of events for the Event-Wait
and Event-Raise tasks:
• Predefined event. A predefined event is a file-watch event. For predefined events, use an Event-Wait
task to instruct the Integration Service to wait for the specified indicator file to appear before continuing
with the rest of the workflow. When the Integration Service locates the indicator file, it starts the next task
in the workflow.
• User-defined event. A user-defined event is a sequence of tasks in the workflow. Use an Event-Raise
task to specify the location of the user-defined event in the workflow. A user-defined event is a sequence of
tasks in the branch from the Start task leading to the Event-Raise task.
When all the tasks in the branch from the Start task to the Event-Raise task complete, the Event-Raise
task triggers the event. The Event-Wait task waits for the Event-Raise task to trigger the event before
continuing with the rest of the tasks in its branch.
Related Topics:
• “Configuring Worklet Properties” on page 42
• “Metadata Extensions” on page 30
Event-Raise Tasks
The Event-Raise task represents the location of a user-defined event. A user-defined event is the sequence
of tasks in the branch from the Start task to the Event-Raise task. When the Integration Service runs the
Event-Raise task, the Event-Raise task triggers the user-defined event.
To use an Event-Raise task, you must first declare the user-defined event. Then, create an Event-Raise task
in the workflow to represent the location of the user-defined event you just declared. In the Event-Raise task
properties, specify the name of a user-defined event.
To create an Event-Raise task:
1. In the Workflow Designer workspace, create an Event-Raise task and place it in the workflow to
represent the user-defined event you want to trigger.
A user-defined event is the sequence of tasks in the branch from the Start task to the Event-Raise task.
2. Double-click the Event-Raise task to open it.
3. On the Properties tab, click the Open button in the Value field to open the Events Browser for user-
defined events.
4. Choose an event in the Events Browser.
5. Click OK twice.
Event-Wait Tasks
The Event-Wait task waits for a predefined event or a user-defined event. A predefined event is a file-watch
event. When you use the Event-Wait task to wait for a predefined event, you specify an indicator file for the
Integration Service to watch. The Integration Service waits for the indicator file to appear. Once the indicator
file appears, the Integration Service continues running tasks after the Event-Wait task.
You can assign resources to Event-Wait tasks that wait for predefined events. You may want to assign a
resource to a predefined Event-Wait task if you are running on a grid and the indicator file appears on a
specific node or in a specific directory. When you assign a resource to a predefined Event-Wait task and the
Integration Service is configured to check resources, the Load Balancer distributes the task to a node where
the required resource is available.
Note: If you use the Event-Raise task to trigger the event when you wait for a predefined event, you may not
be able to successfully recover the workflow.
You can also use the Event-Wait task to wait for a user-defined event. To use the Event-Wait task for a user-
defined event, specify the name of the user-defined event in the Event-Wait task properties. The Integration
Service waits for the Event-Raise task to trigger the user-defined event. Once the user-defined event is
triggered, the Integration Service continues running tasks after the Event-Wait task.
To wait for a user-defined event:
1. In the workflow, create an Event-Wait task and double-click the Event-Wait task to open it.
2. In the Events tab of the task, select User-Defined.
3. Click the Event button to open the Events Browser dialog box.
4. Select a user-defined event for the Integration Service to wait for.
5. Click OK twice.
Waiting for Predefined Events
To use a predefined event, you need a shell command, script, or batch file to create an indicator file. The file
must be created or sent to a directory that the Integration Service can access. The file can be any format
recognized by the Integration Service operating system. You can choose to have the Integration Service
delete the indicator file after it detects the file, or you can manually delete the indicator file. The Integration
Service marks the status of the Event-Wait task as failed if it cannot delete the indicator file.
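For example, the process that produces the source data might create the indicator file as its final step. A minimal UNIX sketch, using a hypothetical directory that the Integration Service can access:
touch /data/indicators/orders_feed.done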
When you specify the indicator file in the Event-Wait task, enter the directory in which the file appears and
the name of the indicator file. You must provide the absolute path for the file. If you specify the file name and
not the directory, the Integration Service looks for the indicator file in the following directory:
• On Windows, the Integration Service looks for the file in the system directory. For example, on Windows
2000, the system directory is c:\winnt\system32.
• On UNIX, the Integration Service looks for the indicator file in the current working directory for the
Integration Service process. On UNIX this directory is /server/bin.
You can enter the actual name of the file or use process variables to specify the location of the file. You can
also use user-defined workflow and worklet variables to specify the file name and location. For example,
create a workflow variable, $$MyFileWatchFile, for the indicator file name and location, and set
$$MyFileWatchFile to the file name and location in the parameter file.
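A minimal parameter file sketch for this example, using hypothetical folder, workflow, and directory names:
[MyFolder.WF:wf_nightly_load]
$$MyFileWatchFile=/data/indicators/orders_feed.done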
The Integration Service writes the time the file appears in the workflow log.
Note: Do not use a source or target file name as the indicator file name because you may accidentally delete
a source or target file. Or, the Integration Service may try to delete the file before the session finishes writing
to the target.
When you select Enable Past Events, the Integration Service continues executing the next tasks if the event
already occurred.
Select the Enable Past Events option in the Properties tab of the Event-Wait task.
Timer Task
You can specify the period of time to wait before the Integration Service runs the next task in the workflow
with the Timer task. You can choose to start the next task in the workflow at a specified time and date. You
can also choose to wait a period of time after the start time of another task, workflow, or worklet before
starting the next task.
The Timer task has the following types of settings:
• Absolute time. You specify the time that the Integration Service starts running the next task in the
workflow. You may specify the date and time, or you can choose a user-defined workflow variable to
specify the time.
• Relative time. You instruct the Integration Service to wait for a specified period of time after the Timer
task, the parent workflow, or the top-level workflow starts.
For example, a workflow contains two sessions. You want the Integration Service to wait 10 minutes after the
first session completes before it runs the second session. Use a Timer task after the first session. In the
Relative Time setting of the Timer task, specify ten minutes from the start time of the Timer task. Use a Timer
task anywhere in the workflow after the Start task.
You can configure the following attributes in the Timer task:
• Absolute Time: Specify the exact time to start. The Integration Service starts the next task in the workflow at the date and time you specify.
• Absolute Time: Use this workflow date-time variable to calculate the wait. Specify a user-defined date-time workflow variable. The Integration Service starts the next task in the workflow at the time you choose. The Workflow Manager verifies that the variable you specify has the Date/Time datatype. If the variable precision includes subseconds, the Integration Service ignores the subsecond portion of the time value. The Timer task fails if the date-time workflow variable evaluates to NULL.
• Relative time: Start after. Specify the period of time the Integration Service waits to start executing the next task in the workflow.
• Relative time: from the start time of this task. Select this option to wait a specified period of time after the start time of the Timer task to run the next task.
• Relative time: from the start time of the parent workflow/worklet. Select this option to wait a specified period of time after the start time of the parent workflow or worklet to run the next task.
• Relative time: from the start time of the top-level workflow. Choose this option to wait a specified period of time after the start time of the top-level workflow to run the next task.
CHAPTER 6
Sources
This chapter includes the following topics:
• Sources Overview
• Configuring Sources in a Session
• Working with Relational Sources
• Working with File Sources
• Integration Service Handling for File Sources
• Working with XML Sources
• Using a File List
Sources Overview
In the Workflow Manager, you can create sessions with the following sources:
• Relational. You can extract data from any relational database that the Integration Service can connect to.
When extracting data from relational sources and Application sources, you must configure the database
connection to the data source prior to configuring the session.
• File. You can create a session to extract data from a flat file, COBOL, or XML source. Use an operating
system command to generate source data for a flat file or COBOL source or generate a file list.
If you use a flat file or XML source, the Integration Service can extract data from any local directory or
FTP connection for the source file. If the file source requires an FTP connection, you need to configure
the FTP connection to the host machine before you create the session.
• Heterogeneous. You can extract data from multiple sources in the same session. You can extract from
multiple relational sources, such as Oracle and Microsoft SQL Server. Or, you can extract from multiple
source types, such as relational and flat file. When you configure a session with heterogeneous sources,
configure each source instance separately.
Globalization Features
You can choose a code page that you want the Integration Service to use for relational sources and flat files.
You specify code pages for relational sources when you configure database connections in the Workflow
Manager. You can set the code page for file sources in the session properties.
Source Connections
Before you can extract data from a source, you must configure the connection properties the Integration
Service uses to connect to the source file or database. You can configure source database and FTP
connections in the Workflow Manager.
Partitioning Sources
You can create multiple partitions for relational, Application, and file sources. For relational or Application
sources, the Integration Service creates a separate connection to the source database for each partition you
set in the session properties. For file sources, you can configure the session to read the source with one
thread or multiple threads.
Configuring Sources in a Session
The Sources node lists the sources used in the session and displays their settings. To view and configure
settings for a source, select the source from the list. You can configure the following settings for a source:
• Readers
• Connections
• Properties
Configuring Readers
You can click the Readers settings on the Sources node to view the reader the Integration Service uses with
each source instance. The Workflow Manager specifies the necessary reader for each source instance in the
Readers settings on the Sources node.
Configuring Connections
Click the Connections settings on the Sources node to define source connection information. For relational
sources, choose a configured database connection in the Value column for each relational source instance.
By default, the Workflow Manager displays the source type for relational sources.
For flat file and XML sources, choose one of the following source connection types in the Type column for
each source instance:
• FTP. To read data from a flat file or XML source using FTP, you must specify an FTP connection when
you configure source options. You must define the FTP connection in the Workflow Manager prior to
configuring the session.
• None. Choose None to read from a local flat file or XML file.
Configuring Properties
Click the Properties settings in the Sources node to define source property information. The Workflow
Manager displays properties, such as source file name and location for flat file, COBOL, and XML source file
types. You do not need to define any properties on the Properties settings for relational sources.
Working with Relational Sources
When you configure a session to read from a relational source, you can configure the following properties:
• Source database connection. Select the database connection for each relational source.
• Treat source rows as. Define how the Integration Service treats each source row as it reads it from the
source table.
• Override SQL query. You can override the default SQL query to extract source data.
• Table owner name. Define the table owner name for each relational source.
• Source table name. You can override the source table name for each relational source.
On the Connections settings in the Sources node, choose the database connection. You can select a
connection object, use a connection variable, or use a session parameter to define the connection value in a
parameter file.
The Treat Source Rows As property has the following options:
• Insert. The Integration Service marks all rows to insert into the target.
• Delete. The Integration Service marks all rows to delete from the target.
• Update. The Integration Service marks all rows to update the target. You can further define the update operation in the target options.
• Data Driven. The Integration Service uses the Update Strategy transformations in the mapping to determine the operation on a row-by-row basis. You define the update operation in the target options. If the mapping contains an Update Strategy transformation, this option defaults to Data Driven. You can also use this option when the mapping contains Custom transformations configured to set the update strategy.
After you determine how to treat all rows in the session, you also need to set update strategy options for
individual targets.
The Workflow Manager does not validate the SQL override. Errors in the override, such as incompatible
datatypes or incorrect SQL syntax, can cause data errors and session failure.
Configuring the Table Owner Name
Specify the table owner name in the Owner Name field in the Properties settings on the Mapping tab.
You can use a parameter or variable as the table owner name. Use any parameter or variable type that you
can define in the parameter file. For example, you can use a session parameter, $ParamMyTableOwner, as
the table owner name, and set $ParamMyTableOwner to the table owner name in the parameter file. Use a
mapping parameter to include the owner name with the table name in the following types of overrides: source
filter, user-defined join, query override, or pre- or post-SQL.
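A minimal parameter file sketch for this example, using hypothetical folder, workflow, session, and owner names:
[MyFolder.WF:wf_orders.ST:s_extract_orders]
$ParamMyTableOwner=SALES_ADMIN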
Note: If you override the source table name on the Properties tab of the source instance, and you override
the source table name using an SQL query, the Integration Service uses the source table name defined in the
SQL query.
Working with File Sources
When you configure a session to read from a file source, you can configure the following information in the session properties:
• Source properties. You can define source properties on the Properties settings in the Sources node,
such as source file options.
• Flat file properties. You can edit fixed-width and delimited source file properties.
• Line sequential buffer length. You can change the buffer length for flat files on the Advanced settings on
the Config Object tab.
• Treat source rows as. You can define how the Integration Service treats each source row as it reads it
from the source.
You can define the following properties for flat file source definitions:
• Input Type. Type of source input. You can choose the following types of source input:
- File. For flat file, COBOL, or XML sources.
- Command. For source data or a file list generated by a command. You cannot use a command to generate XML source data.
• Source File Directory. Directory name of the flat file source. By default, the Integration Service looks in the service process variable directory, $PMSourceFileDir, for file sources. If you specify both the directory and file name in the Source Filename field, clear this field. The Integration Service concatenates this field with the Source Filename field when it runs the session. You can also use the $InputFileName session parameter to specify the file location.
• Source File Name. File name, or file name and path, of the flat file source. Optionally, use the $InputFileName session parameter for the file name. The Integration Service concatenates this field with the Source File Directory field when it runs the session. For example, if you have "C:\data\" in the Source File Directory field, enter "filename.dat" in the Source Filename field. When the Integration Service begins the session, it looks for "C:\data\filename.dat". By default, the Workflow Manager enters the file name configured in the source definition.
• Source File Type. Indicates whether the source file contains the source data, or whether it contains a list of files with the same file properties. You can choose the following source file types:
- Direct. For source files that contain the source data.
- Indirect. For source files that contain a list of files. When you select Indirect, the Integration Service finds the file list and reads each listed file when it runs the session.
• Command Type. Type of source data the command generates. You can choose the following command types:
- Command generating data. For commands that generate source data input rows.
- Command generating file list. For commands that generate a file list.
• Set File Properties link. Overrides source file properties. By default, the Workflow Manager displays file properties as configured in the source definition.
• Truncate string null. Strips the first null character and all characters after the first null character from string values. Enable this option for delimited flat files that contain null characters in strings. If you do not enable this option, the PowerCenter Integration Service generates a row error for any row that contains null characters in a string. Default is disabled.
For example, to uncompress a data file and use the uncompressed data as the source data input rows, use
the following command:
uncompress -c $PMSourceFileDir/myCompressedFile.Z
The command uncompresses the file and sends the standard output of the command to the flat file reader.
The flat file reader reads the standard output of the command as the flat file source data.
Generating a File List
Use a command to generate a list of source files. The flat file reader reads each file in the list when the
session runs. Use a command to generate a file list when the list of source files changes often or you want to
generate a file list based on specific conditions. You might want to use a command to generate a file list
based on a directory listing.
For example, to use a directory listing as a file list, use the following command:
cd $PMSourceFileDir; ls -1 sales-records-Sep-*-2005.dat
The command generates a file list from the source file directory listing. When the session runs, the flat file
reader reads each file as it reads the file names from the command.
To use the output of a command as a file list, select Command as the Input Type, Command generating file
list as the Command Type, and enter a command for the Command property.
Click Set File Properties to open the Flat Files dialog box. To edit the fixed-width properties, select Fixed
Width and click Advanced. The Fixed Width Properties dialog box appears. By default, the Workflow Manager
displays file properties as configured in the mapping. Edit these settings to override those configured in the
source definition.
You can define the following options in the Fixed Width Properties dialog box for file sources:
• Null Character (Text/Binary). Indicates the character representing a null value in the file. This can be any valid character in the file code page, or any binary value from 0 to 255.
• Repeat Null Character. If selected, the Integration Service reads repeat null characters in a single field as a single null value. If you do not select this option, the Integration Service reads a single null character at the beginning of a field as a null field. Important: For multibyte code pages, specify a single-byte null character if you use repeating non-binary null characters. This ensures that repeating null characters fit into the column.
• Code Page. Code page of the fixed-width file. Select a code page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter $ParamName, and define the code page in the parameter file. Use the code page name.
Default is the PowerCenter Client code page.
• Number of Initial Rows to Skip. The Integration Service skips the specified number of rows before reading the file. Use this to skip header rows. One row may contain multiple records. If you select the Line Sequential File Format option, the Integration Service ignores this option.
• Number of Bytes to Skip Between Records. The Integration Service skips the specified number of bytes between records. For example, you have an ASCII file on Windows with one record on each line, and a carriage return and line feed appear at the end of each line. If you want the Integration Service to skip these two single-byte characters, enter 2. If you have an ASCII file on UNIX with one record for each line, ending in a carriage return, skip the single character by entering 1.
• Strip Trailing Blanks. If selected, the Integration Service strips trailing blanks from string values.
• Line Sequential File Format. Select this option if the file uses a carriage return at the end of each record, shortening the final column.
To edit the delimited properties, select Delimited and click Advanced. The Delimited File Properties dialog
box appears. By default, the Workflow Manager displays file properties as configured in the mapping. Edit
these settings to override those configured in the source definition.
You can define the following options in the Delimited File Properties dialog box for file sources:
• Column Delimiters. One or more characters used to separate columns of data. Delimiters can be either printable or single-byte unprintable characters and must be different from the escape character and the quote character. You can enter a single-byte unprintable character by browsing the delimiter list in the Delimiters dialog box. You cannot select unprintable multibyte characters as delimiters, and you cannot select the NULL character as the column delimiter for a flat file source. Maximum number of delimiters is 80.
• Treat Consecutive Delimiters as One. By default, the Integration Service treats multiple delimiters separately. If selected, the Integration Service reads any number of consecutive delimiter characters as one. For example, a source file uses a comma as the delimiter character and contains the following record: 56, , , Jane Doe. By default, the Integration Service reads that record as four columns separated by three delimiters: 56, NULL, NULL, Jane Doe. If you select this option, the Integration Service reads the record as two columns separated by one delimiter: 56, Jane Doe.
• Treat Multiple Delimiters as AND. If selected, the Integration Service treats a specified set of delimiters as one. For example, a source file contains the following record: abc~def|ghi~|~|jkl|~mno. By default, the Integration Service reads the record as nine columns separated by eight delimiters: abc, def, ghi, NULL, NULL, NULL, jkl, NULL, mno. If you select this option and specify the delimiter as ( ~ | ), the Integration Service reads the record as three columns separated by two delimiters: abc~def|ghi, NULL, jkl|~mno.
• Optional Quotes. Select No Quotes, Single Quote, or Double Quotes. If you select a quote character, the Integration Service ignores delimiter characters within the quote characters; that is, the Integration Service uses quote characters to escape the delimiter. For example, a source file uses a comma as a delimiter and contains the following row: 342-3849, 'Smith, Jenna', 'Rockville, MD', 6. If you select the optional single quote character, the Integration Service ignores the commas within the quotes and reads the row as four fields. If you do not select the optional single quote, the Integration Service reads six separate fields. When the Integration Service reads two optional quote characters within a quoted string, it treats them as one quote character. For example, the Integration Service reads the following quoted string as I'm going tomorrow: 2353, 'I''m going tomorrow', MD. Additionally, if you select an optional quote character, the Integration Service reads a string as a quoted string if the quote character is the first character of the field. Note: You can improve session performance if the source file does not contain quotes or escape characters.
• Code Page. Code page of the delimited file. Select a code page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter $ParamName, and define the code page in the parameter file. Use the code page name.
Default is the PowerCenter Client code page.
• Row Delimiter. Specify a line break character. Select from the list or enter a character. Preface an octal code with a backslash (\). To use a single character, enter the character. The Integration Service uses only the first character when the entry is not preceded by a backslash. The character must be a single-byte character, and no other character in the code page can contain that byte. Default is line feed, \012 LF (\n).
• Escape Character. Character immediately preceding a delimiter character embedded in an unquoted string, or immediately preceding the quote character in a quoted string. When you specify an escape character, the Integration Service reads the delimiter character as a regular character (called escaping the delimiter or quote character). Note: You can improve session performance for mappings containing Sequence Generator transformations if the source file does not contain quotes or escape characters.
• Remove Escape Character From Data. This option is selected by default. Clear this option to include the escape character in the output string.
• Number of Initial Rows to Skip. The Integration Service skips the specified number of rows before reading the file. Use this to skip title or header rows in the file.
Integration Service Handling for File Sources
When you configure a session with file sources, the Integration Service performs the following types of handling:
• Character set
• Multibyte character error handling
• Null character handling
• Row length handling for fixed-width flat files
• Numeric data handling
• Tab handling
Character Set
You can configure the Integration Service to run sessions in either ASCII or Unicode data movement mode.
Source file formats supported by each data movement mode in PowerCenter include the following:
• EBCDIC-based SBCS. Unicode mode: supported. ASCII mode: not supported; the Integration Service terminates the session.
• EBCDIC-based MBCS. Unicode mode: supported. ASCII mode: not supported; the Integration Service terminates the session.
If you configure a session to run in ASCII data movement mode, delimiters, escape characters, and null
characters must be valid in the ISO Western European Latin 1 code page. Any 8-bit characters you specified
in previous versions of PowerCenter are still valid. In Unicode data movement mode, delimiters, escape
characters, and null characters must be valid in the specified code page of the flat file.
When you import a fixed-width flat file, you can create, move, or delete column breaks using the Flat File
Wizard. Incorrect positioning of column breaks can create alignment errors when you run a session
containing multibyte characters.
The Integration Service handles alignment errors in fixed-width flat files according to the following guidelines:
• Non-line sequential file. The Integration Service skips rows containing misaligned data and resumes
reading the next row. The skipped row appears in the session log with a corresponding error message. If
an alignment error occurs at the end of a row, the Integration Service skips both the current row and the
next row, and writes them to the session log.
• Line sequential file. The Integration Service skips rows containing misaligned data and resumes reading
the next row. The skipped row appears in the session log with a corresponding error message.
• Reader error threshold. You can configure a session to stop after a specified number of non-fatal errors.
A row containing an alignment error increases the error count by 1. The session stops if the number of
rows containing errors reaches the threshold set in the session properties. Errors and corresponding error
messages appear in the session log file.
Fixed-width COBOL sources are always byte-oriented and can be line sequential. The Integration Service
handles COBOL files according to the following guidelines:
• Line sequential files. The Integration Service skips rows containing misaligned data and writes the
skipped rows to the session log. The session stops if the number of error rows reaches the error
threshold.
• Non-line sequential files. The session stops at the first row containing misaligned data.
The Integration Service uses the Null Character and Repeat Null Character properties to determine if a column is null, as follows:
• Binary null character, Repeat Null Character disabled. A column is null if the first byte in the column is the binary null character. The Integration Service reads the rest of the column as text data to determine the column alignment and track the shift state for shift-sensitive code pages. If data in the column is misaligned, the Integration Service skips the row and writes the skipped row and a corresponding error message to the session log.
• Non-binary null character, Repeat Null Character disabled. A column is null if the first character in the column is the null character. The Integration Service reads the rest of the column to determine the column alignment and track the shift state for shift-sensitive code pages. If data in the column is misaligned, the Integration Service skips the row and writes the skipped row and a corresponding error message to the session log.
• Binary null character, Repeat Null Character enabled. A column is null if it contains the specified binary null character. The next column inherits the initial shift state of the code page.
• Non-binary null character, Repeat Null Character enabled. A column is null if the repeating null character fits into the column with no bytes left over. For example, a five-byte column is not null if you specify a two-byte repeating null character. In shift-sensitive code pages, shift bytes do not affect the null value of a column. A column is still null if it contains a shift byte at the beginning or end of the column. Specify a single-byte null character if you use repeating non-binary null characters. This ensures that repeating null characters fit into a column.
For fixed-width flat files, a row may end sooner than expected in the following cases:
• The file is fixed-width line-sequential with a carriage return or line feed that appears sooner than
expected.
• The file is fixed-width non-line sequential, and the last line in the file is shorter than expected.
In these cases, the Integration Service reads the data but does not append any blanks to fill the remaining
bytes. The Integration Service reads subsequent fields as NULL. Fields containing repeating null characters
that do not fill the entire field length are not considered NULL.
You can override the following properties for XML readers in a session:
• Treat Empty Content as Null. Treat empty XML components as null. By default, the Integration Service does not output element tags for null values. The Integration Service outputs tags for empty content.
• Source File Directory. Location of the source XML file. By default, the Integration Service looks in the service process variable directory, $PMSourceFileDir. You can enter the full path and file name. If you specify both the directory and file name in the Source Filename field, clear the Source File Directory field. The Integration Service concatenates this field with the Source Filename field. You can also use the $InputFileName session parameter to specify the file directory.
• Source Filename. Enter the file name or file name and path. Optionally, use the $InputFileName session parameter for the file name. If you specify both the directory and file name in the Source File Directory field, clear this field. The Integration Service concatenates this field with the Source File Directory field when it runs the session. For example, if you have "C:\XMLdata\" in the Source File Directory field, enter "filename.xml" in the Source Filename field. When the Integration Service begins the session, it looks for "C:\XMLdata\filename.xml".
• Source Filetype. Use to configure multiple file sources with a file list. Choose Direct or Indirect. The option indicates whether the source file contains the source data, or whether the source file contains a list of files with the same file properties. Choose Direct if the source file contains the source data. Choose Indirect if the source file contains a list of files. When you select Indirect, the Integration Service finds the file list and reads each listed file when it runs the session.
You can override the following properties for an XML Source Qualifier in a session:
• Validate XML Source. Provides flexibility for validating an XML source against a schema or DTD file. Select Do Not Validate to skip validation, even if the instance document has an associated DTD or schema reference. Select Validate Only if DTD is Present to validate when the XML source has a corresponding DTD or schema file; the session fails if the instance document specifies a DTD or schema and one is not present. Select Always Validate to always validate the XML file; the session fails if the DTD or schema does not exist or the data is invalid.
• Partitionable. You can create multiple partitions for the source pipeline.
You can choose to omit fixed elements from the XML source definition. If the DTD or XML schema specifies a
fixed or default value for an element, the value appears in the XML source definition.
You can define attributes as required, optional, or prohibited in an element tag. You can also specify fixed or
default values for attributes. When a DTD or XML schema contains an attribute with a fixed or default value,
the Integration Service passes the value into the pipeline even if the element tag in the instance document
does not contain the attribute. If the attribute does not have a fixed or default value, the Integration Service
passes a null value for the attribute. A parser error occurs when a required attribute is not present in an
element or a prohibited attribute appears in the element tag. The Integration Service writes this error to the
session log.
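For example, a DTD can declare an attribute with a fixed value that the Integration Service passes into the pipeline even when the instance document omits it. A minimal sketch, using hypothetical element and attribute names:
<!ATTLIST price currency CDATA #FIXED "USD">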
Using a File List
To use multiple source files, you create a file containing the names and directories of each source file you
want the Integration Service to use. This file is referred to as a file list.
When you configure the session properties, enter the file name of the file list in the Source Filename field and
enter the location of the file list in the Source File Directory field. When the session starts, the Integration
Service reads the file list, then locates and reads the first file source in the list. After the Integration Service
reads the first file, it locates and reads the next file in the list.
The Integration Service writes the path and name of the file list to the session log. If the Integration Service
encounters an error while accessing a source file, it logs the error in the session log and stops the session.
Note: When you use a file list and the session performs incremental aggregation, the Integration Service
performs incremental aggregation across all listed source files.
The Integration Service interprets the file list using the Integration Service code page. Map the drives on an
Integration Service on Windows or mount the drives on an Integration Service on UNIX. The Integration
Service skips blank lines and ignores leading blank spaces. Any characters indicating a new line, such as \n
in ASCII files, must be valid in the code page of the Integration Service.
Use the following rules and guidelines when you create the file list:
• Each file in the list must use the user-defined code page configured in the source definition.
• Each file in the file list must share the same file properties as configured in the source definition or as
entered for the source instance in the session property sheet.
• Enter one file name or one path and file name on a line. If you do not specify a path for a file, the
Integration Service assumes the file is in the same directory as the file list.
• Each path must be local to the Integration Service node.
The following example shows a valid file list created for an Integration Service on Windows. Each of the
drives listed is mapped on the Integration Service node. The western_trans.dat file is located in the same
directory as the file list.
western_trans.dat
d:\data\eastern_trans.dat
e:\data\midwest_trans.dat
f:\data\canada_trans.dat
After you create the file list, place it in a directory local to the Integration Service.
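An equivalent file list for an Integration Service on UNIX uses mounted paths instead of mapped drives. A sketch with hypothetical mount points:
western_trans.dat
/mnt/data/eastern_trans.dat
/mnt/data/midwest_trans.dat
/mnt/data/canada_trans.dat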
CHAPTER 7
Targets
This chapter includes the following topics:
• Targets Overview
• Configuring Targets in a Session
• Performing a Test Load
• Working with Relational Targets
• Working with Target Connection Groups
• Working with Active Sources
• Working with File Targets
• Integration Service Handling for File Targets
• Working with XML Targets in a Session
• Integration Service Handling for XML Targets
• Working with Heterogeneous Targets
• Reject Files
Targets Overview
In the Workflow Manager, you can create sessions with the following targets:
• Relational. You can load data to any relational database that the Integration Service can connect to.
When loading data to relational targets, you must configure the database connection to the target before
you configure the session.
• File. You can load data to a flat file or XML target or write data to an operating system command. For flat
file or XML targets, the Integration Service can load data to any local directory or FTP connection for the
target file. If the file target requires an FTP connection, you need to configure the FTP connection to the
host machine before you create the session.
• Heterogeneous. You can output data to multiple targets in the same session. You can output to multiple
relational targets, such as Oracle and Microsoft SQL Server. Or, you can output to multiple target types,
such as relational and flat file.
Globalization Features
You can configure the Integration Service to run sessions in either ASCII or Unicode data movement mode.
Target character sets supported by each data movement mode in PowerCenter include the following:
• ASCII-based MBCS. Unicode mode: supported. ASCII mode: the Integration Service generates a warning message, but does not terminate the session.
• UTF-8. Unicode mode: supported (targets only). ASCII mode: the Integration Service generates a warning message, but does not terminate the session.
• EBCDIC-based SBCS. Unicode mode: supported. ASCII mode: not supported; the Integration Service terminates the session.
• EBCDIC-based MBCS. Unicode mode: supported. ASCII mode: not supported; the Integration Service terminates the session.
You can work with targets that use multibyte character sets with PowerCenter. You can choose a code page
that you want the Integration Service to use for relational objects and flat files. You specify code pages for
relational objects when you configure database connections in the Workflow Manager. The code page for a
database connection used as a target must be a superset of the source code page.
When you change the database connection code page to one that is not two-way compatible with the old
code page, the Workflow Manager generates a warning and invalidates all sessions that use that database
connection.
Code pages you select for a file represent the code page of the data contained in these files. If you are
working with flat files, you can also specify delimiters and null characters supported by the code page you
have specified for the file.
However, if you configure the Integration Service and Client for code page relaxation, you can select any
code page supported by PowerCenter for the target database connection. When using code page relaxation,
select compatible code pages for the source and target data to prevent data inconsistencies.
If the target contains multibyte character data, configure the Integration Service to run in Unicode mode.
When the Integration Service runs a session in Unicode mode, it uses the database code page to translate
data.
If the target contains only single-byte characters, configure the Integration Service to run in ASCII mode.
When the Integration Service runs a session in ASCII mode, it does not validate code pages.
Target Connections
Before you can load data to a target, you must configure the connection properties the Integration Service
uses to connect to the target file or database. You can configure target database and FTP connections in the
Workflow Manager.
Related Topics:
• “Relational Database Connections” on page 134
• “FTP Connections” on page 138
Partitioning Targets
When you create multiple partitions in a session with a relational target, the Integration Service creates
multiple connections to the target database to write target data concurrently. When you create multiple
partitions in a session with a file target, the Integration Service creates one target file for each partition. You
can configure the session properties to merge these target files.
Configuring Targets in a Session
The Targets node contains the following settings where you define properties:
• Writers
• Connections
• Properties
Configuring Writers
Click the Writers settings in the Transformations view to define the writer to use with each target instance.
When the mapping target is a flat file, an XML file, an SAP NetWeaver BI target, or a WebSphere MQ target,
the Workflow Manager specifies the necessary writer in the session properties. However, when the target is
relational, you can change the writer type to File Writer if you plan to use an external loader.
Note: You can change the writer type for non-reusable sessions in the Workflow Designer and for reusable
sessions in the Task Developer. You cannot change the writer type for instances of reusable sessions in the
Workflow Designer.
When you override a relational target to use the file writer, the Workflow Manager changes the properties for
that target instance on the Properties settings. It also changes the connection options you can define in the
Connections settings.
If the target contains a column with datetime values, the Integration Service compares the date formats
defined for the target column and the session. When the date formats do not match, the Integration Service
uses the date format with the lesser precision. For example, a session writes to a Microsoft SQL Server
target that includes a Datetime column with precision to the millisecond. The date format for the session is
MM/DD/YYYY HH24:MI:SS.NS. If you override the Microsoft SQL Server target with a flat file writer, the
Integration Service writes datetime values to the flat file with precision to the millisecond. If the date format
for the session is MM/DD/YYYY HH24:MI:SS, the Integration Service writes datetime values to the flat file
with precision to the second.
After you override a relational target to use a file writer, define the file properties for the target. Click Set File
Properties and choose the target to define.
Configuring Connections
View the Connections settings on the Mapping tab to define target connection information. For relational
targets, the Workflow Manager displays Relational as the target type by default. In the Value column, choose
a configured database connection for each relational target instance.
For other target types, choose one of the following connection types in the Type column for each target instance:
• FTP. If you want to load data to a flat file or XML target using FTP, you must specify an FTP connection
when you configure target options. FTP connections must be defined in the Workflow Manager prior to
configuring sessions.
• Loader. Use the external loader option to improve the load speed to Oracle, DB2, Sybase IQ, or Teradata
target databases.
To use this option, you must use a mapping with a relational target definition and choose File as the writer
type on the Writers settings for the relational target instance. The Integration Service uses an external
loader to load target files to the Oracle, DB2, Sybase IQ, or Teradata database. You cannot choose
external loader if the target is defined in the mapping as a flat file, XML, MQ, or SAP BW target.
• Queue. Choose Queue when you want to output to a WebSphere MQ or MSMQ message queue.
• None. Choose None when you want to write to a local flat file or XML file.
Configuring Properties
View the Properties settings on the Mapping tab to define target property information. The Workflow Manager
displays different properties for the different target types: relational, flat file, and XML.
Performing a Test Load
You can configure the Integration Service to perform a test load. With a test load, the Integration Service reads and transforms data without writing to targets. The Integration Service writes data to relational targets, but rolls back the data when the session completes.
For all other target types, such as flat file and SAP BW, the Integration Service does not write data to the
targets.
Use the following rules and guidelines when performing a test load:
Working with Relational Targets
When you configure a session to load data to a relational target, you define most properties in the
Transformations view on the Mapping tab. You also define some properties on the Properties tab and the
Config Object tab.
You can define the following properties for relational targets:
• Table name prefix. You can specify the target owner name or prefix in the session properties to override
the table name prefix in the mapping.
• Pre-session SQL. You can create SQL commands and execute them in the target database before
loading data to the target. For example, you might want to drop the index for the target table before
loading data into it.
• Post-session SQL. You can create SQL commands and execute them in the target database after loading data to the target. For example, you might want to recreate the index for the target table after loading data into it; see the sketch after this list.
• Target table name. You can override the target table name for each relational target.
If any target table or column name contains a database reserved word, you can create and maintain a
reserved words file containing database reserved words. When the Integration Service executes SQL against
the database, it places quotes around the reserved words.
When the Integration Service runs a session with at least one relational target, it performs database
transactions per target connection group. For example, it commits all data to targets in a target connection
group at the same time.
On the Connections settings in the Targets node, choose the database connection. You can select a
connection object, use a connection variable, or use a session parameter to define the connection value in a
parameter file.
The following list describes the properties available in the Properties settings on the Mapping tab of the
session properties:
• Update (as Update). The Integration Service updates all rows flagged for update. Default is enabled.
• Update (as Insert). The Integration Service inserts all rows flagged for update. Default is disabled.
• Update (else Insert). The Integration Service updates rows flagged for update if they exist in the target,
then inserts any remaining rows marked for insert. Default is disabled.
• Reject File Directory. Name of the reject file directory. By default, the Integration Service writes all reject
files to the service process variable directory, $PMBadFileDir. If you specify both the directory and file
name in the Reject Filename field, clear this field. The Integration Service concatenates this field with the
Reject Filename field when it runs the session. You can also use the $BadFileName session parameter to
specify the file directory.
• Reject Filename. File name, or file name and path, of the reject file. By default, the Integration Service
names the reject file after the target instance name: target_name.bad. Optionally, use the $BadFileName
session parameter for the file name. The Integration Service concatenates this field with the Reject File
Directory field when it runs the session. For example, if you have “C:\reject_file\” in the Reject File
Directory field and enter “filename.bad” in the Reject Filename field, the Integration Service writes rejected
rows to C:\reject_file\filename.bad.
Using Session-Level Target Properties with Source Properties
You can set session-level target properties to specify how the Integration Service inserts, updates, and
deletes rows. However, you can also set session-level properties for sources.
At the source level, you can specify whether the Integration Service inserts, updates, or deletes source rows
or whether it treats rows as data driven. If you treat source rows as data driven, you must use an Update
Strategy transformation to indicate how the Integration Service handles rows.
This section explains how the Integration Service writes data based on the source and target row properties.
PowerCenter uses the source and target row options to provide an extra check on the session-level
properties. In addition, when you use both the source and target row options, you can control inserts,
updates, and deletes for the entire session or, if you use an Update Strategy transformation, based on the
data.
When you set the row-handling property for a source, you can treat source rows as inserts, deletes, updates,
or data driven according to the following guidelines:
• Inserts. If you treat source rows as inserts, select Insert for the target option. When you enable the Insert
target row option, the Integration Service ignores the other target row options and treats all rows as
inserts. If you disable the Insert target row option, the Integration Service rejects all rows.
• Deletes. If you treat source rows as deletes, select Delete for the target option. When you enable the
Delete target option, the Integration Service ignores the other target-level row options and treats all rows
as deletes. If you disable the Delete target option, the Integration Service rejects all rows.
• Updates. If you treat source rows as updates, the behavior of the Integration Service depends on the
target options you select.
The following list describes how the Integration Service loads the target when you configure the session
to treat source rows as updates:
• Insert. If enabled, the Integration Service uses the target update option (Update as Update, Update as
Insert, or Update else Insert) to update rows. If disabled, the Integration Service rejects all rows when you
select Update as Insert or Update else Insert as the target-level update option.
• Update as Insert. The Integration Service updates all rows as inserts. You must also select the Insert
target option.
• Update else Insert. The Integration Service updates existing rows and inserts other rows as if marked for
insert. You must also select the Insert target option.
• Delete. The Integration Service ignores this setting and uses the selected target update option.
The Integration Service rejects all rows if you do not select one of the target update options.
• Data Driven. If you treat source rows as data driven, you use an Update Strategy transformation to
specify how the Integration Service handles rows. However, the behavior of the Integration Service also
depends on the target options you select, as the following list describes:
- Insert. If enabled, the Integration Service inserts all rows flagged for insert. Enabled by default. If
disabled, the Integration Service rejects rows flagged for insert, and rows flagged for update if you
enable Update as Insert or Update else Insert.
- Update as Update. The Integration Service updates all rows flagged for update. Enabled by default.
- Update as Insert. The Integration Service inserts all rows flagged for update. Disabled by default.
- Update else Insert. The Integration Service updates rows flagged for update and inserts remaining
rows as if marked for insert.
- Delete. If enabled, the Integration Service deletes all rows flagged for delete. If disabled, the
Integration Service rejects all rows flagged for delete.
The Integration Service rejects rows flagged for update if you do not select one of the target update
options.
Truncating Target Tables
The Integration Service issues a delete or truncate command based on the target database and the primary
key-foreign key relationships in the session target. To optimize performance, use the truncate table
command. The delete from command may impact performance.
The following list describes the commands that the Integration Service issues for each database:
• DB2. truncate table <table_name>, whether or not the table contains a primary key referenced by a
foreign key. If you use a DB2 database on AS/400, the Integration Service issues a clrpfm command in
both cases.
• Microsoft SQL Server. delete from <table_name> if the table contains a primary key referenced by a
foreign key, and truncate table <table_name> if it does not. If you use the Microsoft SQL Server ODBC
driver, the Integration Service issues a delete statement.
If the Integration Service issues a truncate target table command and the target table instance specifies a
table name prefix, the Integration Service verifies the database user privileges for the target table by issuing
a truncate command. If the database user is not specified as the target owner name or does not have the
database privilege to truncate the target table, the Integration Service issues a delete command instead.
If the Integration Service issues a delete command and the database has logging enabled, the database
saves all deleted records to the log for rollback. If you do not want to save deleted records for rollback, you
can disable logging to improve the speed of the delete.
For all databases, if the Integration Service fails to truncate or delete any selected table because the user
lacks the necessary privileges, the session fails.
If you enable truncate target tables with the following sessions, the Integration Service does not truncate
target tables:
• Incremental aggregation. When you enable both truncate target tables and incremental aggregation in
the session properties, the Workflow Manager issues a warning that you cannot enable truncate target
tables and incremental aggregation in the same session.
• Test load. When you enable both truncate target tables and test load, the Integration Service disables the
truncate table function, runs a test load session, and writes a message to the session log indicating that
the truncate target tables option is turned off for the test load session.
• Real-time. The Integration Service does not truncate target tables when you restart a JMS or WebSphere
MQ real-time session that has recovery data.
Deadlock Retry
Select the Session Retry on Deadlock option in the session properties if you want the Integration Service to
retry writes to a target database or recovery table on a deadlock. A deadlock occurs when two transactions,
such as two sessions writing to the same target table, block each other while each waits for a lock that the
other holds on a database row.
You can retry sessions on deadlock for targets configured for normal load. If you select this option and
configure a target for bulk mode, the Integration Service does not retry target writes on a deadlock for that
target. You can also configure the Integration Service to set the number of deadlock retries and the deadlock
sleep time period.
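The retry count and the sleep interval between retries are set through Integration Service custom properties. A minimal sketch follows, assuming the property names NumOfDeadlockRetries and DeadlockSleep; treat both names and the values as assumptions and verify them against your PowerCenter version before use:
NumOfDeadlockRetries=12
DeadlockSleep=30
With these hypothetical values, the Integration Service would retry a deadlocked write up to 12 times, waiting 30 seconds between attempts.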
Dropping and Recreating Indexes
After you insert a significant amount of data into a target, you normally need to drop and recreate indexes on
that table to optimize query speed. You can drop and recreate indexes in one of the following ways:
• Using pre- and post-session SQL. The preferred method for dropping and re-creating indexes is to
define an SQL statement in the Pre SQL property that drops indexes before loading data to the target.
Use the Post SQL property to recreate the indexes after loading data to the target. Define the Pre SQL
and Post SQL properties for relational targets in the Transformations view on the Mapping tab in the
session properties. (See the example after this list.)
• Using the Designer. The same dialog box you use to generate and execute DDL code for table creation
can drop and recreate indexes. However, this process is not automatic. Every time you run a session that
modifies the target table, you need to launch the Designer and use this feature.
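The following is a minimal sketch of pre- and post-session SQL for index handling. It assumes an Oracle target table named T_SALES with an index named IDX_SALES_DATE on the SALE_DATE column; all three names are hypothetical.
Pre SQL:
DROP INDEX IDX_SALES_DATE;
Post SQL:
CREATE INDEX IDX_SALES_DATE ON T_SALES (SALE_DATE);
With this configuration, the Integration Service drops the index before it loads data to T_SALES and rebuilds the index after the load completes.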
Constraint-Based Loading
In the Workflow Manager, you can specify constraint-based loading for a session. When you select this
option, the Integration Service orders the target load on a row-by-row basis. For every row generated by an
active source, the Integration Service loads the corresponding transformed row first to the primary key table,
then to any foreign key tables. Constraint-based loading depends on the following requirements:
• Active source. Related target tables must have the same active source.
• Key relationships. Target tables must have key relationships.
• Target connection groups. Targets must be in one target connection group.
• Treat rows as insert. Use this option when you insert into the target. You cannot use updates with
constraint-based loading.
Active Source
When target tables receive rows from different active sources, the Integration Service reverts to normal
loading for those tables, but loads all other targets in the session using constraint-based loading when
possible. For example, a mapping contains three distinct pipelines. The first two contain a source, source
qualifier, and target. Since these two targets receive data from different active sources, the Integration
Service reverts to normal loading for both targets. The third pipeline contains a source, Normalizer, and two
targets. Since these two targets share a single active source (the Normalizer), the Integration Service
performs constraint-based loading: loading the primary key table first, then the foreign key table.
Key Relationships
When target tables have no key relationships, the Integration Service does not perform constraint-based
loading. Similarly, when target tables have circular key relationships, the Integration Service reverts to a
normal load. For example, you have one target containing a primary key and a foreign key related to the
primary key in a second target. The second target also contains a foreign key that references the primary key
in the first target. The Integration Service cannot enforce constraint-based loading for these tables. It reverts
to a normal load.
Target Connection Groups
The Integration Service enforces constraint-based loading for targets in the same target connection group. If
you want to specify constraint-based loading for multiple targets that receive data from the same active
source, you must verify the tables are in the same target connection group. If the tables with the primary key-
foreign key relationship are in different target connection groups, the Integration Service cannot enforce
constraint-based loading when you run the workflow.
To verify that all targets are in the same target connection group, complete the following tasks:
• Verify all targets are in the same target load order group and receive data from the same active source.
• Use the default partition properties and do not add partitions or partition points.
• Define the same target type for all targets in the session properties.
• Define the same database connection name for all targets in the session properties.
• Choose normal mode for the target load type for all targets in the session properties.
When the mapping contains Update Strategy transformations and you need to load data to a primary key
table first, split the mapping using one of the following options:
• Load primary key table in one mapping and dependent tables in another mapping. Use constraint-based
loading to load the primary table.
• Perform inserts in one mapping and updates in another mapping.
Constraint-based loading does not affect the target load ordering of the mapping. Target load ordering
defines the order the Integration Service reads the sources in each target load order group in the mapping. A
target load order group is a collection of source qualifiers, transformations, and targets linked together in a
mapping. Constraint-based loading establishes the order in which the Integration Service loads individual
targets within a set of targets receiving data from a single source qualifier.
In the first pipeline, target T_1 has a primary key, and T_2 and T_3 contain foreign keys that reference the
T_1 primary key. T_3 also has a primary key that T_4 references as a foreign key.
Since these tables receive records from a single active source, SQ_A, the Integration Service loads rows to
the target in the following order:
1. T_1
2. T_2 and T_3 (in no particular order)
3. T_4
The Integration Service loads T_1 first because it has no foreign key dependencies and contains a primary
key referenced by T_2 and T_3. The Integration Service then loads T_2 and T_3, but since T_2 and T_3
have no dependencies, they are not loaded in any particular order. The Integration Service loads T_4 last,
because it has a foreign key that references a primary key in T_3.
After loading the first set of targets, the Integration Service begins reading source B. If there are no key
relationships between T_5 and T_6, the Integration Service reverts to a normal load for both targets.
If T_6 has a foreign key that references a primary key in T_5, since T_5 and T_6 receive data from a single
active source, the Aggregator AGGTRANS, the Integration Service loads rows to the tables in the following
order:
• T_5
• T_6
T_1, T_2, T_3, and T_4 are in one target connection group if you use the same database connection for each
target, and you use the default partition properties. T_5 and T_6 are in another target connection group
together if you use the same database connection for each target and you use the default partition properties.
The Integration Service includes T_5 and T_6 in a different target connection group because they are in a
different target load order group from the first four targets.
To enable constraint-based loading, complete the following steps:
1. In the General Options settings of the Properties tab, choose Insert for the Treat Source Rows As
property.
2. Click the Config Object tab. In the Advanced settings, select Constraint Based Load Ordering.
3. Click OK.
Bulk Loading
You can enable bulk loading when you load to DB2, Sybase, Oracle, or Microsoft SQL Server.
If you enable bulk loading for other database types, the Integration Service reverts to a normal load. Bulk
loading improves the performance of a session that inserts a large amount of data to the target database.
Configure bulk loading on the Mapping tab.
When bulk loading, the Integration Service invokes the database bulk utility and bypasses the database log,
which speeds performance. Without writing to the database log, however, the target database cannot perform
rollback. As a result, you may not be able to perform recovery. Therefore, you must weigh the importance of
improved session performance against the ability to recover an incomplete session.
Note: When loading to DB2, Microsoft SQL Server, and Oracle targets, you must specify a normal load for
data driven sessions. When you specify bulk mode and data driven, the Integration Service reverts to normal
load.
Committing Data
When bulk loading to Sybase and DB2 targets, the Integration Service ignores the commit interval you define
in the session properties and commits data when the writer block is full.
When bulk loading to Microsoft SQL Server and Oracle targets, the Integration Service commits data at each
commit interval. Also, Microsoft SQL Server and Oracle start a new bulk load transaction after each commit.
Tip: When bulk loading to Microsoft SQL Server or Oracle targets, define a large commit interval to reduce
the number of bulk load transactions and increase performance.
Oracle Guidelines
When you enable bulk load to Oracle, the Integration Service invokes the standard Oracle client interface
with the bulk routines for direct path loads.
DB2 Guidelines
Use the following guidelines when bulk loading to DB2:
• You must drop indexes and constraints in the target tables before running a bulk load session. After the
session completes, you can rebuild them. If you use bulk loading with the session on a regular basis, use
pre- and post-session SQL to drop and rebuild indexes and key constraints.
• You cannot use source-based or user-defined commit when you run bulk load sessions on DB2.
• If you create multiple partitions for a DB2 bulk load session, you must use database partitioning for the
target partition type. If you choose any other partition type, the Integration Service reverts to normal load.
• When you bulk load to DB2, the DB2 database writes non-fatal errors and warnings to a message log file
in the session log directory. The message log file name is
<session_log_name>.<target_instance_name>.<partition_index>.log. You can check both the message
log file and the session log when you troubleshoot a DB2 bulk load session.
• If you want to bulk load flat files to DB2 for z/OS, use PowerExchange®.
For more information, see the DB2 documentation.
Table Name Prefix
You can specify the table owner name in the target instance or on the Mapping tab of the session properties.
When you specify the table owner name in the session properties, you override the table owner name in the
transformation properties.
You can use a parameter or variable as the target table name prefix. Use any parameter or variable type that
you can define in the parameter file. For example, you can use a session parameter, $ParamMyPrefix, as the
table name prefix, and set $ParamMyPrefix to the table name prefix in the parameter file.
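The following is a minimal parameter file sketch for this example. The folder, workflow, and session names in the heading are hypothetical, as is the SALES_OWNER value:
[MyFolder.WF:wf_load_sales.ST:s_load_sales]
$ParamMyPrefix=SALES_OWNER
At run time, the Integration Service would resolve $ParamMyPrefix to SALES_OWNER and use it as the table name prefix.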
Note: When you specify the table owner name and you set the sqlid for a DB2 database in the connection
environment SQL, the Integration Service uses the table owner name in the target instance. To use the table
owner name specified in the SET sqlid statement, do not enter a name in the target name prefix.
Configure the target table name on the Transformation view of the Mapping tab.
Reserved Words
Use the following rules and guidelines when working with reserved words:
• The Integration Service searches the reserved words file when it generates SQL to connect to source,
target, and lookup databases.
• If you override the SQL for a source, target, or lookup, you must enclose any reserved word in quotes.
• You may need to enable some databases, such as Microsoft SQL Server and Sybase, to use SQL-92
standards regarding quoted identifiers. Use connection environment SQL to issue the command. For
example, use the following command with Microsoft SQL Server:
SET QUOTED_IDENTIFIER ON
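The following is a sketch of a reserved words file. It assumes the file is named reswords.txt, resides in the Integration Service installation directory, and groups reserved words under bracketed database-type headings; verify the file name, location, and format for your PowerCenter version:
[Oracle]
OPTION
START
[SQL Server]
MONTH
[All]
USER
With a file like this, the Integration Service would place quotes around OPTION and START in SQL generated for Oracle, around MONTH for Microsoft SQL Server, and around USER for every database type.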
To insert arrays of data into a Teradata target by using ODBC, configure the OptimizeTeradataWrite custom
property at the session level or at the PowerCenter Integration Service level. Set the value of the
OptimizeTeradataWrite custom property to 1 to insert arrays of data into the target.
Note that the OptimizeTeradataWrite custom property is applicable only for inserting data into the target, and
not for updating data in the target, deleting data from the target, or reading data from the source.
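Conceptually, the setting is a name-value pair. In the session Config Object tab custom properties, or in the Integration Service custom properties in the Administrator tool, the entry would look like the following; the attribute=value rendering here is illustrative:
OptimizeTeradataWrite=1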
The Integration Service performs the following database transactions per target connection group:
• Deadlock retry. If the Integration Service encounters a deadlock when it writes to a target, the deadlock
affects targets in the same target connection group. The Integration Service still writes to targets in other
target connection groups.
• Constraint-based loading. The Integration Service enforces constraint-based loading for targets in a
target connection group. If you want to specify constraint-based loading, you must verify the primary table
and foreign table are in the same target connection group.
Targets in the same target connection group meet the criteria described earlier in this chapter: they are in the
same target load order group, have the same target type and target load type, use the same database
connection name, and the session uses the default partition properties.
For example, suppose you create a session based on a mapping with two Oracle relational targets. In the
Workflow Manager, you do not create multiple partitions. However, you use one Oracle database connection
name for one target, and you use a different Oracle database connection name for the other target. You
specify normal mode for the target load type for both target tables. The targets in the session belong to
different target connection groups.
Note: When you define the target database connections for multiple targets in a session using session
parameters, the targets may or may not belong to the same target connection group. The targets belong to
the same target connection group if all session parameters resolve to the same target connection name. For
example, you create a session with two targets and specify the session parameter $DBConnection1 for one
target, and $DBConnection2 for the other target. In the parameter file, you define $DBConnection1 as Sales1
and you define $DBConnection2 as Sales1 and run the workflow. Both targets in the session belong to the
same target connection group.
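A minimal parameter file sketch for this example; the folder, workflow, and session names in the heading are hypothetical:
[MyFolder.WF:wf_load_sales.ST:s_load_sales]
$DBConnection1=Sales1
$DBConnection2=Sales1
Because both session parameters resolve to the connection Sales1, both targets belong to the same target connection group.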
The following transformations can be active sources in a mapping:
• Aggregator
• Application Source Qualifier
Note: The Filter, Router, Transaction Control, and Update Strategy transformations are active
transformations in that they can change the number of rows that pass through. However, they are not active
sources in the mapping because they do not generate rows. Only transformations that can generate rows are
active sources.
Active sources affect how the Integration Service processes a session when you use any of the following
transformations or session properties:
• XML targets. The Integration Service can load data from different active sources to an XML target when
each input group receives data from one active source.
• Transaction generators. Transaction generators, such as Transaction Control transformations, become
ineffective for downstream transformations or targets if you put a transaction control point after it.
Transaction control points are transaction generators and active sources that generate commits.
• Mapplets. An Input transformation must receive data from a single active source.
• Source-based commit. Some active sources generate commits. When you run a source-based commit
session, the Integration Service generates a commit from these active sources at every commit interval.
• Constraint-based loading. To use constraint-based loading, you must connect all related targets to the
same active source. The Integration Service orders the target load on a row-by-row basis based on rows
generated by an active source.
• Row error logging. If an error occurs downstream from an active source that is not a source qualifier, the
Integration Service cannot identify the source row information for the logged error row.
You can write to a flat file target in one of the following ways:
• Use a flat file target definition. Create a mapping with a flat file target definition. Create a session using
the flat file target definition. When the Integration Service runs the session, it creates the target flat file or
generates the target data based on the connected ports in the mapping and on the flat file target
definition. The Integration Service does not write data in unconnected ports to a fixed-width flat file target.
• Use a relational target definition. Use a relational definition to write to a flat file when you want to use
an external loader to load the target. Create a mapping with a relational target definition. Create a session
using the relational target definition. Configure the session to output to a flat file by specifying the File
Writer in the Writers settings on the Mapping tab.
You can configure the following types of properties for flat file targets:
• Target properties. You can define target properties such as partitioning options, merge options, output
file options, reject options, and command options.
• Flat file properties. You can choose to create delimited or fixed-width files, and define their properties.
The following list describes the properties you define on the Mapping tab for flat file target definitions:
• Merge Type. Type of merge the Integration Service performs on the data for partitioned targets.
• Merge File Directory. Name of the merge file directory. By default, the Integration Service writes the
merge file in the service process variable directory, $PMTargetFileDir. If you enter a full directory and file
name in the Merge File Name field, clear this field.
• Merge File Name. Name of the merge file. Default is target_name.out. This property is required if you
select a merge type.
• Append if Exists. Appends the output data to the target files and reject files for each partition. Appends
output data to the merge file if you merge the target files. You cannot use this option for target files that
are non-disk files, such as FTP target files. If you do not select this option, the Integration Service
truncates each target file before writing the output data to the target file. If the file does not exist, the
Integration Service creates it.
• Create Directory if Not Exists. Creates the target directory if it does not exist.
• Header Options. Create a header row in the file target. You can choose the following options:
- No Header. Do not create a header row in the flat file target.
- Output Field Names. Create a header row in the file target with the output port names.
- Use header command output. Use the command in the Header Command field to generate a
header row. For example, you can use a command to add the date to a header row for the file target.
Default is No Header.
• Header Command. Command used to generate the header row in the file target.
• Footer Command. Command used to generate a footer row in the file target.
• Output Type. Type of target for the session. Select File to write the target data to a file target. Select
Command to output data to a command. You cannot select Command for FTP or Queue target
connections.
• Merge Command. Command used to process the output data from all partitioned targets.
• Output File Directory. Name of the output directory for a flat file target. By default, the Integration
Service writes output files in the service process variable directory, $PMTargetFileDir. If you specify both
the directory and file name in the Output Filename field, clear this field. The Integration Service
concatenates this field with the Output Filename field when it runs the session. You can also use the
$OutputFileName session parameter to specify the file directory.
• Output File Name. File name, or file name and path, of the flat file target. Optionally, use the
$OutputFileName session parameter for the file name. By default, the Workflow Manager names the target
file based on the target definition used in the mapping: target_name.out. The Integration Service
concatenates this field with the Output File Directory field when it runs the session. If the target definition
contains a slash character, the Workflow Manager replaces the slash character with an underscore. When
you use an external loader to load to an Oracle database, you must specify a file extension. If you do not
specify a file extension, the Oracle loader cannot find the flat file and the Integration Service fails the
session. Note: If you specify an absolute path file name when using FTP, the Integration Service ignores
the Default Remote Directory specified in the FTP connection. When you specify an absolute path file
name, do not use single or double quotes.
• Reject File Directory. Name of the directory for the reject file. By default, the Integration Service writes
all reject files to the service process variable directory, $PMBadFileDir. If you specify both the directory
and file name in the Reject File Name field, clear this field. The Integration Service concatenates this field
with the Reject File Name field when it runs the session. You can also use the $BadFileName session
parameter to specify the file directory.
• Reject File Name. File name, or file name and path, of the reject file. By default, the Integration Service
names the reject file after the target instance name: target_name.bad. Optionally use the $BadFileName
session parameter for the file name. The Integration Service concatenates this field with the Reject File
Directory field when it runs the session. For example, if you have “C:\reject_file\” in the Reject File
Directory field and enter “filename.bad” in the Reject File Name field, the Integration Service writes
rejected rows to C:\reject_file\filename.bad.
Use a command to perform additional processing of flat file target data. For example, use a command to sort
target data or compress target data. You can increase session performance by pushing transformation tasks
to the command instead of the Integration Service.
To send the target data to a command, select Command for the output type and enter a command for the
Command property.
For example, to generate a compressed file from the target data, use the following command:
compress -c - > $PMTargetFileDir/myCompressedFile.Z
The Integration Service sends the output data to the command, and the command generates a compressed
file that contains the target data.
Note: You can also use service process variables, such as $PMTargetFileDir, in the command.
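Similarly, you can use the Header Command property to generate the header row from a command. A hypothetical example that adds the extract date to the header, assuming a UNIX shell on the Integration Service machine:
echo Extract date: `date +%m/%d/%Y`
The command output, for example “Extract date: 12/01/2016”, becomes the header row of the target file.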
In the Transformations view on the Mapping tab, click the Targets node and then click Set File Properties
to open the Flat Files dialog box.
To edit the fixed-width properties, select Fixed Width and click Advanced.
The following list describes the options you define in the Fixed Width Properties dialog box:
• Null Character. Optional. Character that the PowerCenter Integration Service substitutes for null values
when it reads null values from a database or a flat file. You can enter any valid character in the file code
page.
• Repeat Null Character. Optional. Fills null value fields with the character specified in the Null Character
option. If you do not select this option, the PowerCenter Integration Service substitutes each null value
with one null character.
• Code Page. Optional. Code page of the fixed-width file. Select a code page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter
$ParamName, and define the code page in the parameter file. Use the code page name.
Default is the PowerCenter Client code page.
In the Transformations view on the Mapping tab, click the Targets node and then click Set File Properties to
open the Flat Files dialog box. To edit the delimited properties, select Delimited and click Advanced.
The following list describes the options you define in the Delimited File Properties dialog box:
• Delimiters. Character used to separate columns of data. Delimiters can be either printable or single-byte
unprintable characters, and must be different from the escape character and the quote character (if
selected). To enter a single-byte unprintable character, click the Browse button to the right of this field. In
the Delimiters dialog box, select an unprintable character from the Insert Delimiter list and click Add. You
cannot select unprintable multibyte characters as delimiters.
• Optional Quotes. Select None, Single, or Double. If you select a quote character, the Integration Service
does not treat delimiter characters within the quote characters as a delimiter. For example, suppose an
output file uses a comma as a delimiter and the Integration Service receives the following row: 342-3849,
‘Smith, Jenna’, ‘Rockville, MD’, 6. If you select the optional single quote character, the Integration Service
ignores the commas within the quotes and writes the row as four fields. If you do not select the optional
single quote, the Integration Service writes six separate fields.
• Code Page. Code page of the delimited file. Select a code page or a variable:
- Code page. Select the code page.
- Use Variable. Enter a user-defined workflow or worklet variable or the session parameter
$ParamName, and define the code page in the parameter file. Use the code page name.
Default is the PowerCenter Client code page.
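If you use the Use Variable option, you define the code page in the parameter file. A hypothetical sketch, assuming a session parameter named $ParamTgtCodePage and the code page name MS1252; verify the code page name against the Informatica code page list:
[MyFolder.WF:wf_load_sales.ST:s_load_sales]
$ParamTgtCodePage=MS1252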
Consider the following topics when the Integration Service writes to file targets:
• Write to fixed-width flat files from relational target definitions. The Integration Service adds spaces to
target columns based on transformation datatype.
• Write to fixed-width flat files from flat file target definitions. You must configure the precision and field
width for flat file target definitions to accommodate the total length of the target field.
• Generate flat file targets by transaction. You can configure the file target to generate a separate output
file for each transaction.
• Write empty fields for unconnected ports in fixed-width file definitions. You can configure the
mapping so that the Integration Service writes empty fields for unconnected ports in a fixed-width flat file
target definition.
• Write multibyte data to fixed-width files. You must configure the precision of string columns to
accommodate character data. When writing shift-sensitive data to a fixed-width flat file target, the
Integration Service adds shift characters and spaces to meet file requirements.
• Null characters in fixed-width files. The Integration Service writes repeating or non-repeating null
characters to fixed-width target file columns differently depending on whether the characters are single-
byte or multibyte.
• Character set. You can write ASCII or Unicode data to a flat file target.
• Write metadata to flat file targets. You can configure the Integration Service to write the column header
information when you write to flat file targets.
When the Integration Service writes to a fixed-width flat file based on a relational target definition in the
mapping, it adds spaces to columns based on the transformation datatype connected to the target. This
allows the Integration Service to write optional symbols necessary for the datatype, such as a negative sign
or decimal point, without sending the row to the reject file.
For example, you connect a transformation Integer(10) port to a Number(10) column in a relational target
definition. In the session properties, you override the relational target definition to use the file writer and
specify a fixed-width flat file as the output. In the target flat file, the Integration Service appends an additional
byte to the Number(10) column to allow for the negative sign that might be associated with integer data.
Note: When the Integration Service writes a row to the reject file, it writes a message in the session log.
Fixed-width files are byte-oriented, which means the total length of a field is measured in bytes.
For a string field in a fixed-width flat file target definition, the Integration Service measures the total field
length by the field precision.
When you configure the precision or field width for a flat file target definition, you must also accommodate
other characters in the data. For a Datetime field, accommodate the date and time separators, such as
slashes (/), dashes (-), and colons (:). For example, the format MM/DD/YYYY HH24:MI:SS.US has a total
length of 26 bytes.
When you edit the flat file target definition in the mapping, define the precision or field width large enough to
accommodate both the target data and these additional characters.
For example, suppose you have a mapping with a fixed-width flat file target definition. The target definition
contains a number column with a precision of 10 and a scale of 2. You use a comma as the decimal
separator and a period as the thousands separator. You know some rows of data might have a negative
value. Based on this information, you know the longest possible number is formatted with the following
format:
-NN.NNN.NNN,NN
Open the flat file target definition in the mapping and define the field width for this number column as a
minimum of 14 bytes: 1 byte for the negative sign, 10 for the digits, 2 for the thousands separators, and 1 for
the decimal separator.
For example, a fixed-width flat file target definition contains the following ports:
• EmployeeID
• EmployeeName
• Street
• City
• State
In the mapping, you connect only the EmployeeID and EmployeeName ports in the flat file target definition.
You configure the flat file target definition to create a header row with the output port names. The Integration
Service generates an output file with the following rows:
EmployeeID EmployeeName
If you want the Integration Service to write empty fields for the unconnected ports, create output ports in an
upstream transformation that do not contain data. Then connect these ports containing null values to the
fixed-width flat file target definition. For example, you connect the ports containing null values to the Street,
City, and State ports in the flat file target definition. The Integration Service generates an output file with a
header row that contains all five port names:
EmployeeID EmployeeName Street City State
For string columns, the Integration Service truncates the data if the precision is not large enough to
accommodate the multibyte data.
• Non shift-sensitive multibyte data. The file contains all multibyte data. Configure the precision in the
target definition to allow for the additional bytes.
For example, you know that the target data contains four double-byte characters, so you define the target
definition with a precision of 8 bytes.
If you configure the target definition with a precision of 4, the Integration Service truncates the data before
writing to the target.
Note: Delimited files are character-oriented, and you do not need to allow for additional precision for
multibyte data.
The Integration Service writes shift characters and spaces in the following ways:
• If a column begins or ends with a double-byte character, the Integration Service adds shift characters so
the column begins and ends with a single-byte shift character.
• If the data is shorter than the column width, the Integration Service pads the rest of the column with
spaces.
• If the data is longer than the column width, the Integration Service truncates the data so the column ends
with a single-byte shift character.
To illustrate how the Integration Service handles a fixed-width file containing shift-sensitive data, say you
want to output the following data to the target:
SourceCol1 SourceCol2
AAAA aaaa
The first target column contains eight bytes and the second target column contains four bytes.
The Integration Service must add shift characters to handle shift-sensitive data. Since the first target column
can handle eight bytes, the Integration Service truncates the data before it can add the shift characters.
TargetCol1 TargetCol2
-oAAA-i aaaa
The following notation describes the characters in this example:
• A. Double-byte character.
• a. Single-byte character.
• -o. Shift-out character.
• -i. Shift-in character.
For the first target column, the Integration Service writes three of the double-byte characters to the target. It
cannot write any additional double-byte characters to the output column because the column must end with a
single-byte shift-in character.
For the second target column, the Integration Service writes all four single-byte characters to the target. It
does not add shift characters to the column because the column begins and ends with single-byte
characters.
The null character can be repeating or non-repeating. If the null character is repeating, the Integration
Service writes as many null characters as possible into a target column. If you specify a multibyte null
character and there are extra bytes left after writing null characters, the Integration Service pads the column
with single-byte spaces. If a column is smaller than the multibyte character specified as the null character,
the session fails at initialization.
Character Set
You can configure the Integration Service to run sessions with flat file targets in either ASCII or Unicode data
movement mode.
If you configure a session with a flat file target to run in Unicode data movement mode, the target file code
page must be a superset of the source code page. Delimiters, escape, and null characters must be valid in
the specified code page of the flat file.
If you configure a session to run in ASCII data movement mode, delimiters, escape, and null characters must
be valid in the ISO Western European Latin1 code page. Any 8‑bit character you specified in previous
versions of PowerCenter is still valid.
When writing to fixed-width files, the Integration Service truncates the target definition port name if it is longer
than the column width.
For example, you have a flat file target definition with the following structure:
ITEM_ID number
ITEM_NAME string
PRICE number
The column width for ITEM_ID is six. When you enable the Output Metadata For Flat File Target option, the
Integration Service writes the following text to a flat file:
#ITEM_ITEM_NAME PRICE
100001Screwdriver 9.50
100002Hammer 12.90
100003Small nails 3.00
The following list describes the properties you define in the XML Writer:
• Output File Directory. Enter the directory name in this field. By default, the Integration Service writes
output files in the service process variable directory, $PMTargetFileDir. You can enter the full path and file
name. If you specify both the directory and file name in the Output Filename field, clear this field. The
Integration Service concatenates this field with the Output Filename field when it runs the session. You
can also use the $OutputFileName session parameter to specify the file directory.
• Output Filename. Enter the file name, or file name and path. Optionally, use the $OutputFileName
session parameter for the file name. By default, the Workflow Manager names the target file based on the
target definition used in the mapping: target_name.xml. If the target definition contains a slash character,
the Workflow Manager replaces the slash character with an underscore. If you specify both the directory
and file name in the Output File Directory field, clear this field. The Integration Service concatenates this
field with the Output File Directory field when it runs the session. If you specify an absolute path file name
when using FTP, the Integration Service ignores the Default Remote Directory specified in the FTP
connection. When you specify an absolute path file name, do not use single or double quotes.
• Validate Target. Validates simple data types. The Integration Service does not validate the target XML
structure against a schema.
• Format Output. Formats the XML target file so the XML elements and attributes indent. If you do not
select Format Output, each line of the XML file starts in the same position.
• XML Datetime Format. Select local time, local time with time zone, or UTC. Local time with time zone is
the difference in hours between the server time zone and Greenwich Mean Time. UTC is Greenwich Mean
Time.
• Null Content Representation. Choose how to represent null content in the target. Default is No Tag.
• Empty String Content Representation. Choose how to represent empty string content in the target.
Default is Tag with Empty Content.
• Empty String Attribute Representation. Choose how to represent empty string attributes in the target.
Default is Attribute Name with Empty String.
You can configure the following options when you write to an XML target:
• Character set. Configure the Integration Service to run sessions with XML targets in either ASCII or
Unicode data movement mode.
• Null and empty string. Choose how the Integration Service handles null data or empty strings when it
writes data to an XML target.
• Handling duplicate group rows. Choose how the Integration Service handles duplicate rows of data.
• DTD and schema reference. Define a DTD or schema file name for the target XML file.
• Flushing XML on commits. Configure the Integration Service to periodically flush data to the target.
• XML caching properties. Define a cache directory for an XML target.
• Session logs for XML targets. View session logs for an XML session.
• Multiple XML output. Configure the Integration Service to output a new XML document when the data in
the root changes.
• Partitioning the XML Generator. When you generate XML in multiple partitions, you always generate
separate documents for each partition.
• Generating XML files with no data. Configure the WriteNullXMLFile custom property to skip creating an
XML file when the XML Generator transformation receives no data.
Character Set
You can configure the Integration Service to run sessions with XML targets in either ASCII or Unicode data
movement mode. XML files contain an encoding declaration that indicates the code page used in the file. The
most commonly used code pages are UTF-8 and UTF-16. PowerCenter supports UTF-8 code pages for XML
targets only. Use the same set of code pages for XML files as for relational databases and other files.
For XML targets, PowerCenter uses the code page declared in the XML file. When you run the Integration
Service in Unicode data movement mode, the XML target code page must be a superset of the Integration
Service code page and the source code page.
Special Characters
The Integration Service adds escape characters to the following special characters in XML targets:
< & > "
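These are the standard XML character escapes, shown here for reference:
<  becomes  &lt;
&  becomes  &amp;
>  becomes  &gt;
"  becomes  &quot;
For example, a value such as AT&T is written to the target as AT&amp;T.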
To change these defaults, you can change the Null Content Representation and Empty String Content
Representation XML target properties. For attributes, change the Null Attribute Representation and Empty
String Attribute Representation properties. The following options are available:
• Null Content or Empty String Content. Choose No Tag (the Integration Service does not output a tag) or
Tag with Empty Content (the Integration Service outputs the XML tag with no content).
• Null Attribute or Empty String Attribute. Choose No Attribute (the Integration Service does not output
the attribute) or Attribute Name with Empty String (the Integration Service outputs the attribute name with
no content).
You can specify fixed or default values for elements and attributes. When an element in an XML schema or a
DTD has a default value, the Integration Service inserts the value instead of writing empty content. When an
element has a fixed value in the schema, the value is always inserted in the XML file. If the XML schema or
DTD does not specify a value for an attribute and the attribute has a null value, the Integration Service omits
the attribute.
If a required attribute does not have a fixed value, the attribute must be a projected field. The Integration
Service does not output invalid attributes to a target. An error occurs when a prohibited attribute appears in
an element tag. An error also occurs if a required attribute is not present in an element tag. The Integration
Service writes these errors to the session log or the error log when you enable row error logging.
The Integration Service does not write duplicate rows to the reject file. The Integration Service writes
duplicate rows to the session log. You can skip writing warning messages in the session log for the duplicate
rows. Disable the XMLWarnDupRows Integration Service option in the Informatica Administrator.
The Integration Service handles duplicate rows passed to the XML target root group differently than it
handles rows passed to other XML target groups:
• For the XML target root group, the Integration Service always passes the first row to the target. When the
Integration Service encounters duplicate rows, it increases the number of rejected rows in the session
load summary.
• For any XML target group other than the root group, you can configure duplicate group row handling in the
XML target definition in the Mapping Designer. You can choose one of the following options:
• First row. The Integration Service passes the first row to the target. When the Integration Service
encounters other rows with the same primary key, the Integration Service increases the number of
rejected rows in the session load summary.
• Last row. The Integration Service passes the last duplicate row to the target. You can configure the
Integration Service to write the duplicate XML rows to the session log by setting the Warn About Duplicate
XML Rows option.
For example, the Integration Service encounters five duplicate rows. If you configure the Integration
Service to write the duplicate XML rows to the session log, the Integration Service passes the fifth row to
the XML target and writes the first four duplicate rows to the session log. Otherwise, the Integration
Service passes the fifth row to the XML target but does not write anything to the session log.
• Error. The Integration Service passes the first row to the target. When the Integration Service encounters
a duplicate row, it increases the number of rejected rows in the session load summary and increments the
error count.
When the Integration Service reaches the error threshold, the session fails and the Integration Service
does not write any rows to the XML target.
The Integration Service sets an error threshold for each XML group.
DTD and Schema Reference
You can define a DTD or schema file name for the target XML file. The Integration Service does not check
that the file you specify exists or that the file is valid. The Integration Service does not validate the target
XML file against the DTD or schema file you specify.
Note: An XML instance document must refer to the full relative path of a schema if a midstream XML
transformation is processing the file. Otherwise, the full path is not required.
Flushing XML on Commits
You can configure the Integration Service to periodically flush XML data to the target. Flush data under the
following circumstances:
• Large XML files. If you are processing a large XML file of several gigabytes, the Integration Service may
have reduced performance. You can set the On Commit attribute to Append to Doc. This flushes XML
data periodically to the target document.
• Real-time processing. If you process real-time data that requires commits at specific times, use Append
to Doc.
You can set the On Commit attribute to one of the following values:
• Ignore commit. Generate and write to the XML document at end of file.
• Append to document. Write to the same XML document at the end of each commit. The XML document
closes at end of file. This option is not available for XML Generator transformations.
• Create new document. Create and write to a new document at each commit. You create multiple XML
documents.
You can flush data if all groups in the XML target are connected to the same single commit or transaction
point. The transformation at the commit point generates denormalized output. The denormalized output
contains repeating primary key values for all but the lowest level node in the XML schema. The Integration
Service extracts rows from this output for each group in the XML target.
You must have only one child group for the root group in the XML target.
Ignoring Commit
You can choose to generate the XML document after the session has read all the source records. This option
causes the Integration Service to store all of the XML data in cache during a session. Use this option when
you are not processing a lot of data.
For sessions using source-based commits, the single transaction point might be a source or nearest active
source to the XML target, such as the last active transformation before the target. For sessions with user-
defined commits, the transaction point is a transaction generating transformation.
Warning: When you create a new document on commit, you need to provide a unique file name for each
document. Otherwise, the Integration Service overwrites the document it created from the previous commit.
XML Caching Properties
You can configure the Integration Service to automatically determine the XML cache size, or you can
configure the cache size. When the memory requirements exceed the cache size, the Integration Service
pages data to index and data files in the cache directory. When the session completes, the Integration
Service releases cache memory and deletes the cache files.
You can specify the cache directory and cache size for the XML target. The default cache directory is
$PMCacheDir, which is a service process variable that represents the directory where the Integration Service
stores cache files by default.
Session Logs for XML Targets
The session log names each XML target with the target name and group name in the form
target_name::group_name. For example, the following session log entry contains target EMP_SALARY and
group DEPARTMENT:
WRITER_1_1_1> WRT_8167 Start loading table [EMP_SALARY::DEPARTMENT] at: Wed Nov 05
08:01:35 2003
Multiple XML Output
The Integration Service creates multiple XML files when the root group has more than one distinct primary
key value. If the Integration Service receives multiple rows with the same primary key value, the Integration
Service chooses the first or last row based on the way you configure duplicate row handling.
If you pass data to a column in the root group, but you do not pass data to the primary key, the Integration
Service does not generate a new XML document. The Integration Service writes a warning message to the
session log indicating that the primary key for the root group is not projected, and the Integration Service is
generating one document.
Example
The following example includes a mapping that contains a flat file source of country names, regions, and
revenue dollars per region. The target is an XML file. The root view contains the primary key, XPK_COL_0,
which is a string.
Each time the Integration Service passes a new country name to the root view the Integration Service
generates a new target file. Each target XML file contains country name, region, and revenue data for one
country.
The Integration Service passes the following rows to the XML target:
Country,Region,Revenue
USA,region1,1000
Canada,region1,100
USA,region2,200
USA,region3,300
USA,region4,400
France,region1,10
France,region2,20
France,region3,30
France,region4,40
The Integration Service builds the XML files in cache. The Integration Service creates one XML file for USA,
one file for Canada, and one file for France. The Integration Service creates a file list that contains the file
name and absolute path of each target XML file.
If you specify “revenue_file.xml” as the output file name in the session properties, the session produces three
XML files based on that name, one each for USA, Canada, and France, and a file list that contains the name
and absolute path of each XML file.
Working with Heterogeneous Targets
To create a session with heterogeneous targets, you can create a session based on a mapping with
heterogeneous targets. Or, you can create a session based on a mapping with homogeneous targets and
select different database connections. A session with heterogeneous targets has one of the following
characteristics:
• Multiple target types. You can create a session that writes to both relational and flat file targets.
• Multiple target connection types. You can create a session that writes to a target on an Oracle
database and to a target on a DB2 database. Or, you can create a session that writes to multiple targets
of the same type, but you specify different target connections for each target in the session.
All database connections you define in the Workflow Manager are unique to the Integration Service, even if
you define the same connection information. For example, you define two database connections, Sales1 and
Sales2. You define the same user name, password, connect string, code page, and attributes for both Sales1
and Sales2. Even though both Sales1 and Sales2 define the same connection information, the Integration
Service treats them as different database connections. When you create a session with two relational targets
and specify Sales1 for one target and Sales2 for the other target, you create a session with heterogeneous
targets.
You can create a session with heterogeneous targets in one of the following ways:
• Create a session based on a mapping with targets of different types or different database types. In the
session properties, keep the default target types and database types.
• Create a session based on a mapping with the same target types. However, in the session properties,
specify different target connections for the different target instances, or override the target type to a
different type.
For example, you can override a relational target type to a flat file target by choosing File as the writer type
for the relational target instance in the session properties.
Note: When the Integration Service runs a session with at least one relational target, it performs database
transactions per target connection group. For example, it orders the target load for targets in a target
connection group when you enable constraint-based loading.
Each time you run a session, the Integration Service appends rejected data to the reject file. Depending on
the source of the problem, you can correct the mapping and target database to prevent rejects in subsequent
sessions.
Note: If you enable row error logging in the session properties, the Integration Service does not create a
reject file. It writes the reject rows to the row error tables or file.
When you run a session that contains multiple partitions, the Integration Service creates a separate reject file
for each partition. The Integration Service names reject files after the target instance name. The default name
for reject files is filename_partitionnumber.bad. The reject file name for the first partition does not contain a
partition number.
For example,
/home/directory/filename.bad
/home/directory/filename2.bad
/home/directory/filename3.bad
The Workflow Manager replaces slash characters in the target instance name with underscore characters.
To find a reject file name and path, view the target properties settings on the Mapping tab of session
properties.
• Row indicator. The first column in each row of the reject file is the row indicator. The row indicator
defines whether the row was marked for insert (0), update (1), delete (2), or reject (3).
If the session is a user-defined commit session, the row indicator might indicate whether the transaction
was rolled back due to a non-fatal error, or if the committed transaction was in a failed target connection
group.
• Column indicator. Column indicators appear after every column of data. The column indicator defines
whether the column contains valid, overflow, null, or truncated data.
The following sample reject file shows the row and column indicators:
0,D,1921,D,Nelson,D,William,D,415-541-5145,D
0,D,1922,D,Page,D,Ian,D,415-541-5145,D
0,D,1923,D,Osborne,D,Lyle,D,415-541-5145,D
0,D,1928,D,De Souza,D,Leo,D,415-541-5145,D
0,D,2001123456789,O,S. MacDonald,D,Ira,D,415-541-514566,T
Column Indicators
A column indicator appears after every column of data. A column indicator defines whether the data is valid,
overflow, null, or truncated.
The column indicator “D” also appears after each row indicator.
D - Valid data. Good data. The writer passes it to the target database. The target accepts it unless a
database error occurs, such as finding a duplicate key.
O - Overflow. Numeric data exceeded the specified precision or scale for the column. Bad data, if you
configured the mapping target to reject overflow or truncated data.
N - Null. The column contains a null value. Good data. The writer passes it to the target, which rejects it if
the target database does not accept null values.
T - Truncated. String data exceeded the specified precision for the column, so the value was truncated.
Bad data, if you configured the mapping target to reject overflow or truncated data.
Null columns appear in the reject file with commas marking their column. The following example shows a null
column surrounded by good data:
0,D,5,D,,N,5,D
Either the writer or target database can reject a row. Consult the log to determine the cause for rejection.
Connection Objects
Connection Objects Overview
Before you create and run sessions, you must configure connections in the Workflow Manager. A connection
object is a global object that defines a connection in the repository. You create and modify connection objects
and assign permissions to connection objects in the Workflow Manager.
Connection Types
When you create a connection object, choose the connection type in the Connection Browser. Some
connection types also have connection subtypes. For example, a relational connection type has subtypes
such as Oracle and Microsoft SQL Server. Define the values for the connection based on the connection type
and subtype.
When you configure a session, you can choose the connection type and select a connection to use. You can
also override the connection attributes for the session or create a connection. Set the connection type on the
Mapping tab for each object.
The following table describes the connection types that you can create or choose when you configure a
session:
Connection Type - Description
Loader - Relational connection to the external loader for the target, such as IBM DB2 Autoloader or
Teradata FastLoad. When you configure a session, choose File as the writer type for the relational target
instance. Select a Loader connection in the Value column to load output files to Teradata, Oracle, DB2, or
Sybase IQ through an external loader.
Note: For information about connections to PowerExchange, see PowerExchange Interfaces for PowerCenter.
Session Parameters
You can enter session parameter $ParamName as the database user name and password, and define the
user name and password in a parameter file. For example, you can use a session parameter,
$ParamMyDBUser, as the database user name, and set $ParamMyDBUser to the user name in the
parameter file.
To use a session parameter for the database password, enable the Use Parameter in Password option and
encrypt the password by using the pmpasswd command line program. Encrypt the password by using the
CRYPT_DATA encryption type. For example, to encrypt the database password “monday,” enter the following
command:
pmpasswd monday -e CRYPT_DATA
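For illustration, assuming a hypothetical folder MyFolder, workflow wf_sales, and session s_sales, the
corresponding parameter file entries might look like the following, where the password value is the
encrypted string that pmpasswd returns:
[MyFolder.WF:wf_sales.ST:s_sales]
$ParamMyDBUser=sales_user
$ParamMyDBPwd=<encrypted string from pmpasswd>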
For databases that do not require or allow a specific user name and password, enter the following reserved
values:
• PmNullUser
• PmNullPasswd
Use the PmNullUser user name if you use one of the following authentication methods:
• Oracle OS Authentication. Oracle OS Authentication lets you log in to an Oracle database if you have a
login name and password for the operating system. You do not need to know a database user name and
password. PowerCenter uses Oracle OS Authentication when the connection user name is PmNullUser
and the connection is for an Oracle database.
• IBM DB2 client authentication. IBM DB2 client authentication lets you log in to an IBM DB2 database
without specifying a database user name or password if the IBM DB2 server is configured for external
authentication or if the IBM DB2 server is on the same machine as the Integration Service process. PowerCenter
uses IBM DB2 client authentication when the connection user name is PmNullUser and the connection is
for an IBM DB2 database.
Use the PmNullUser user name with any of the following connection types:
• Relational database connections. Use for Oracle OS Authentication, IBM DB2 client authentication, or
databases such as ISG Navigator that do not allow user names.
• External loader connections. Use for Oracle OS Authentication or IBM DB2 client authentication.
• HTTP connections. Use if the HTTP server does not require authentication.
• PowerChannel relational database connections. Use for Oracle OS Authentication, IBM DB2 client
authentication, or databases such as ISG Navigator that do not allow user names.
• Web Services connections. Use if the web service does not require a user name.
You enter the native connect string for the following connection types:
• Relational database connections. Use to connect to all databases except Microsoft SQL Server and
Sybase ASE.
• External loader connection. Use to connect to all databases.
• PowerChannel relational database connections. Use to connect with all databases except Microsoft
SQL Server and Sybase ASE.
• PeopleSoft application connections. Use to connect to the underlying database of the PeopleSoft
system for DB2, Oracle, and Informix databases.
The following table lists the native connect string syntax for each supported database when you create or
update connections:
Teradata - ODBC_data_source_name, ODBC_data_source_name@db_name, or
ODBC_data_source_name@db_user_name. For example: TeradataODBC, TeradataODBC@mydatabase, or
TeradataODBC@jsmith.
When you configure a mapping, you can use the $Source or $Target variable to specify the database
location for Lookup and Stored Procedure transformations. In the session properties, you can also configure
the $Source variable to specify the connection for relational sources and the $Target variable to specify the
connection for relational targets.
If you use $Source or $Target in a Lookup or Stored Procedure transformation, you can configure the
connection value on the Properties tab or Mapping tab of the session. When you configure $Source
Connection Value or $Target Connection Value, the Integration Service uses that connection when it runs the
session. If you do not configure $Source Connection Value or $Target Connection Value, the Integration
Service determines the database connection to use when it runs the session.
The following table describes how the Integration Service determines the value of $Source when you do not
configure $Source Connection Value in the session properties:
One source - The database connection you specify for the source.
Joiner transformation before a Lookup or Stored Procedure transformation - The database connection for
the detail source.
Lookup or Stored Procedure transformation before a Joiner transformation - The database connection for
the source connected to the transformation.
The following table describes how the Integration Service determines the value of $Target when you do not
configure $Target Connection Value in the session properties:
One target - The database connection you specify for the target.
To enter the database connection for the $Source and $Target connection variables:
1. In the session properties, select the Properties tab or the Mapping tab, Connections node.
2. Click the Open button in the $Source Connection Value or $Target Connection Value field.
The Connection Browser dialog box appears.
3. Select a connection variable or session parameter.
You can enter the $Source or $Target connection variable, or the $DBConnectionName or
$AppConnectionName session parameter. If you enter a session parameter, define the parameter in the
parameter file, as in the example after these steps. If you do not define a value for the session parameter,
the Integration Service determines which database connection to use when it runs the session.
4. Click OK.
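For example, assuming the same hypothetical folder, workflow, and session names as earlier, a parameter
file could map a $DBConnection session parameter to a connection object named Oracle_Dev (both names
are illustrative):
[MyFolder.WF:wf_sales.ST:s_sales]
$DBConnectionSource=Oracle_Dev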
You can override connection attributes in the following cases:
• You use an FTP, queue, external loader, or application connection for a non-relational source or target.
• You use an FTP, queue, or external loader connection for a relational target.
• You use an application connection for a relational source.
You configure connections in the Connections settings on the Mapping tab.
You can override connection attributes in the session or in the parameter file:
• Session. Select the connection object and override attributes in the session.
• Parameter file. Use a session parameter to define the connection and override connection attributes in
the parameter file.
To override connection attributes in the session:
1. On the Mapping tab, select the source or target instance in the Connections node.
2. Select the connection type.
3. Click the Open button in the value field to select a connection object.
4. Choose the connection object.
5. Click Override.
6. Update the attributes you want to change.
7. Click OK.
The Workflow Manager filters the list of code pages for connections to ensure that the code page for the
connection is a subset of the code page for the repository. It lists the five code pages you have most recently
selected. Then it lists all remaining code pages in alphabetical order.
If you configure the Integration Service for code page validation, the Integration Service enforces code page
compatibility at run time. The Integration Service ensures that the target database code page is a superset of
the source database code page.
When you change the code page in a connection object, you must choose one that is compatible with the
previous code page. If the code pages are incompatible, the Workflow Manager invalidates all sessions using
that connection.
If you configure the PowerCenter Client and Integration Service for relaxed code page validation, you can
select any supported code page for source and target connections. If you are familiar with the data and are
confident that it will convert safely from one code page to another, you can run sessions with incompatible
source and target data code pages. It is your responsibility to ensure your data will convert properly.
The trust certificates file (ca-bundle.crt) contains certificate files from major, trusted certificate authorities. If
the certificate bundle does not contain a certificate from a certificate authority that the session uses, you can
convert the certificate of the HTTP server or web service provider to PEM format and append it to the ca-
bundle.crt file.
You can generate the client certificate and private key files in a single file or as separate files.
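For example, with a hypothetical PKCS12 bundle named client.pfx, standard OpenSSL commands can
produce either form; the -clcerts, -nokeys, and -nocerts options are OpenSSL options, not PowerCenter
settings:
# single PEM file that contains both the certificate and the private key
openssl pkcs12 -in client.pfx -out client.pem
# separate PEM files: certificate only, then private key only
openssl pkcs12 -in client.pfx -clcerts -nokeys -out clientcert.pem
openssl pkcs12 -in client.pfx -nocerts -out clientkey.pem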
The command generates a single certificate file in the PEM format. In the Web Service Consumer application
connection, use the single certificate file while configuring both the client certificate file and the private key
file. Use the password that you provide after running the OpenSSL command to configure the Web Service
Consumer application connection.
The command generates certificate files in the PEM format. In the Web Service Consumer application
connection, specify the fully qualified path along with the client certificate and private key files. Use the
passwords that you provide after running the OpenSSL commands to configure the Web Service Consumer
application connection.
For example, to convert the DER file named server.der to PEM format, use the following command:
openssl x509 -in server.der -inform DER -out server.pem -outform PEM
If you want to convert the PKCS12 file named server.pfx to PEM format, use the following command:
openssl pkcs12 -in server.pfx -out server.pem
To convert a private key named key.der from DER to PEM format, use the following command:
openssl rsa -in key.der -inform DER -outform PEM -out keyout.pem
For more information, refer to the OpenSSL documentation. After you convert certificate files to the PEM
format, you can append them to the trust certificates file. Also, you can use PEM format private key files with
the HTTP transformation or PowerExchange for Web Services.
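For example, on UNIX you might append a converted certificate to the bundle with a command like the
following; the location of ca-bundle.crt varies by installation:
cat server.pem >> ca-bundle.crt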
The Workflow Manager assigns default permissions for connection objects to users, groups, and all others if
you enable enhanced security.
You can specify read, write, and execute permissions for each user and group. You can perform the following
types of tasks with different connection object permissions in combination with user privileges and folder
permissions:
• Read. View the connection object in the Workflow Manager and Repository Manager. When you have
read permission, you can perform tasks in which you view, copy, or edit repository objects associated with
the connection object.
• Write. Edit the connection object.
• Execute. Run sessions that use the connection object.
To assign or edit permissions on a connection object, select an object from the Connection Object Browser,
and click Permissions.
You can manage permissions for users, groups, and others on the connection object from the Permissions
dialog box.
Environment SQL
The Integration Service runs environment SQL in auto-commit mode and closes the transaction after it issues
the SQL. Use SQL commands that do not depend on a transaction being open during the entire read or write
process. For example, if a source database is set to read only mode and you create an environment SQL
statement in the source connection to set the transaction to read only, the Integration Service issues a
commit after it runs the SQL and cannot read the source in read only mode.
Use environment SQL for source, target, lookup, and stored procedure connections. If the SQL syntax is not
valid, the Integration Service does not connect to the database, and the session fails.
Note: When a connection object has “environment SQL,” the connection uses “connection environment SQL.”
For example, use the following SQL statement to set the quoted identifier parameter for the duration of the
connection:
SET QUOTED_IDENTIFIER ON
Use this statement in the following situations:
• You want to set up the connection environment so that double quotation marks are object identifiers.
• You configure the target load type to Normal and the Microsoft SQL Server target name includes spaces.
For transaction environment SQL, use SQL commands that depend on a transaction being open during the entire read or write process. For
example, you might use the following statement as transaction environment SQL to modify how the session
handles characters:
ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR
This command must be run before each transaction. The command is not appropriate for connection
environment SQL because setting the parameter once for each connection is not sufficient.
• You can enter any SQL command that is valid in the database associated with the connection object. The
Integration Service does not allow nested comments, even though the database might.
• When you enter SQL in the SQL Editor, you type the SQL statements.
• Use a semicolon (;) to separate multiple statements.
• The Integration Service ignores semicolons within /*...*/.
• If you need to use a semicolon outside of comments, you can escape it with a backslash (\). The sketch
after this list shows these separator rules together.
• You can use parameters and variables in the environment SQL. Use any parameter or variable type that
you can define in the parameter file. You can enter a parameter or variable within the SQL statement, or
you can use a parameter or variable as the environment SQL. For example, you can use a session
parameter, $ParamMyEnvSQL, as the connection or transaction environment SQL, and set
$ParamMyEnvSQL to the SQL statement in a parameter file.
• You can configure the table owner name using sqlid in the connection environment SQL for a DB2
connection. However, the table owner name in the target instance overrides the SET sqlid statement in
environment SQL. To use the table owner name specified in the SET sqlid statement, do not enter a name
in the target name prefix.
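The following sketch pulls the separator rules together; the statements and the t_audit table are illustrative
only, not required names:
SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON /* a semicolon inside this comment ; is ignored */;
INSERT INTO t_audit VALUES ('environment SQL applied\; session starting')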
Connection Resilience
Connection resilience is the ability of the Integration Service to tolerate temporary network failures when
connecting to a relational database, an application, or the PowerExchange Listener.
You configure the resilience retry period in the connection object. You can configure the retry period for
source, target, SQL transformation, and Lookup transformation connections. When a network failure occurs
or the source or target becomes unavailable, the Integration Service attempts to reconnect for the amount of
time configured for the Connection Retry Period property. If the Integration Service cannot reconnect to the
source or target within the retry period, the session fails.
PowerExchange does not support runtime connection resilience for database connections other than those
used for PowerExchange Express CDC for Oracle. Configure the workflow for automatic recovery of
terminated tasks if recovery from a dropped PowerExchange connection is required. PowerExchange also
does not support runtime resilience of connections between the Integration Service and PowerExchange
Listener after the initial connection attempt. However, you can configure resilience for the initial connection
attempt by setting the Connection Retry Period property to a value greater than 0 when you define
PowerExchange Client for PowerCenter (PWXPC) relational and application connections. The Integration
Service then retries the connection to the PowerExchange Listener after the initial connection attempt fails. If
the Integration Service cannot connect to the PowerExchange Listener within the retry period, the session
fails.
In some situations, the Integration Service does not attempt to reconnect to a source or target.
Note: For a database connection to be resilient, the source or target must be a highly available database and
you must have the high availability option or the real-time option.
The following table describes the properties that you configure for a relational database connection:
Property - Description
Name - Name you want to use for this connection. The connection name cannot contain spaces or other
special characters, except for the underscore.
Use Kerberos Authentication - Indicates that the database to connect to runs on a network that uses
Kerberos authentication. If this option is selected, you cannot set the user name and password in the
connection object. The connection uses the credentials of the user account that runs the session that
connects to the database. The user account must have a user principal on the Kerberos network where the
database runs. Informatica supports Kerberos authentication for native relational connections to the
following databases: Oracle, DB2, SQL Server, and Sybase.
User Name - Database user name with the appropriate read and write database permissions to access the
database. For Oracle connections that process BLOB, CLOB, or NCLOB data, the user must have permission
to access and create temporary tablespaces. To define the user name in the parameter file, enter session
parameter $ParamName as the user name, and define the value in the session or workflow parameter file.
The Integration Service interprets user names that start with $Param as session parameters. If you use
Oracle OS Authentication, IBM DB2 client authentication, or databases such as ISG Navigator that do not
allow user names, enter PmNullUser. For Teradata connections, this overrides the default database user
name in the ODBC entry. Not available if the Use Kerberos Authentication option is selected.
Use Parameter in Password - Indicates that the password for the database user name is a session
parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by
using the pmpasswd CRYPT_DATA option. Default is disabled.
Password - Password for the database user name. For Oracle OS Authentication, IBM DB2 client
authentication, or databases such as ISG Navigator that do not allow passwords, enter PmNullPassword.
For Teradata connections, this overrides the database password in the ODBC entry. Passwords must be in
7-bit ASCII. Not available if the Use Kerberos Authentication option is selected.
Connect String - Connect string used to communicate with the database. For syntax, see “Native Connect
Strings” on page 127. Required for all databases except Microsoft SQL Server and Sybase ASE.
Provider Type - The connection provider that you want to use to connect to the Microsoft SQL Server
database. You can select ODBC or OLEDB (deprecated). Default is ODBC. OLEDB is a deprecated provider
type; support for the OLEDB provider type will be dropped in a future release.
Use DSN - Enables the PowerCenter Integration Service to use the Data Source Name for the connection. If
you select the Use DSN option, the PowerCenter Integration Service retrieves the database and server
names from the DSN. If you do not select the Use DSN option, you must provide the database and server
names.
Code Page - Code page the Integration Service uses to read from a source database or write to a target
database or file.
Connection Environment SQL - Runs an SQL command with each database connection. Default is disabled.
Transaction Environment SQL - Runs an SQL command before the initiation of each transaction. Default is
disabled.
Enable Parallel Mode - Enables parallel processing when loading data into a table in bulk mode. Default is
enabled.
Database Name - Name of the database. For Teradata connections, this overrides the default database
name in the ODBC entry. If you do not enter a database name for a Teradata or Sybase ASE connection, the
Integration Service uses the default database name in the ODBC entry. If you do not enter a database
name, connection-related messages do not show a database name when the default database is used.
Packet Size - Use to optimize the native drivers for Sybase ASE and Microsoft SQL Server.
Domain Name - The name of the domain. Used for Microsoft SQL Server on Windows.
Use Trusted Connection - If selected, the Integration Service uses Windows authentication to access the
Microsoft SQL Server database. The user name that starts the Integration Service must be a valid Windows
user with access to the Microsoft SQL Server database.
Connection Retry Period - Number of seconds the Integration Service attempts to reconnect to the
database if the connection fails. If the Integration Service cannot connect to the database in the retry
period, the session fails. Default value is 0.
Related Topics:
• “Target Connections” on page 90
• “FTP Connections” on page 138
The Workflow Manager appends an underscore and the first three letters of the relational database type to
the name of the new database connection. For example, you have a lookup table in the same database as
your source definition. You make a copy of the Microsoft SQL Server database connection called
Dev_Source. The Workflow Manager names the new database connection Dev_Source_Mic. You can edit
the copied connection to use a different name.
When you replace database connections, the Workflow Manager replaces the relational database
connections in the following locations for all sessions using the connection:
• Source connection
• Target connection
• Connection Information property in Lookup and Stored Procedure transformations
• $Source Connection Value session property
• $Target Connection Value session property
When the repository contains both relational and application connections with the same name, the Workflow
Manager replaces the relational connections only if you specified the connection type as relational in all
locations.
The Integration Service uses the updated connection information the next time the session runs.
You must close all folders before replacing a relational database connection.
FTP Connections
Use an FTP connection object for each source or target that you want to access through FTP or SFTP.
To connect to an SFTP server, create an FTP connection and enable SFTP. SFTP uses the SSH2
authentication protocol. Configure the authentication properties to use the SFTP connection. You can
configure publickey or password authentication. The Integration Service connects to the SFTP server with the
authentication properties you configure. If the authentication does not succeed, the session fails.
The following table describes the properties that you configure for an FTP connection:
Property - Description
Name - Connection name used by the Workflow Manager. Connection name cannot contain spaces or other
special characters, except for the underscore.
User Name - User name necessary to access the host machine. Must be in 7-bit ASCII only. Required to
connect to an SFTP server with password-based authentication. To define the user name in the parameter
file, enter session parameter $ParamName as the user name, and define the value in the session or
workflow parameter file. The Integration Service interprets user names that start with $Param as session
parameters.
Use Parameter in Password - Indicates the password for the user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password - Password for the user name. Must be in 7-bit ASCII only. Required to connect to an SFTP server
with password-based authentication.
Note: When you specify PmNullPasswd, the PowerCenter Integration Service authenticates the user directly
based on the public key without performing password authentication.
Default Remote Directory - Default directory on the FTP host used by the Integration Service. Do not
enclose the directory in quotation marks. You can enter a parameter or variable for the directory. Use any
parameter or variable type that you can define in the parameter file. Depending on the FTP server you use,
you may have limited options to enter FTP directories. In the session, when you enter a file name without a
directory, the Integration Service appends the file name to this directory. This path must contain the
appropriate trailing delimiter. For example, if you enter c:\staging\ and specify data.out in the session, the
Integration Service reads the path and file name as c:\staging\data.out. For SAP, you can leave this value
blank. SAP sessions use the Source File Directory session property for the FTP remote directory. If you
enter a value, the Source File Directory session property overrides it.
Retry Period - Number of seconds the Integration Service attempts to reconnect to the FTP host if the
connection fails. If the Integration Service cannot reconnect to the FTP host in the retry period, the session
fails. Default value is 0 and indicates an infinite retry period.
Public Key File Name - Public key file path and file name. Required if the SFTP server uses publickey
authentication. Enabled for SFTP.
Private Key File Name - Private key file path and file name. Required if the SFTP server uses publickey
authentication. Enabled for SFTP.
Private Key File Password - Private key file password used to decrypt the private key file. Required if the
SFTP server uses publickey authentication and the private key is encrypted. Enabled for SFTP.
The following table describes the properties that you configure for an external loader connection:
Property Description
Name Connection name used by the Workflow Manager. Connection name cannot contain spaces or
other special characters, except for the underscore.
User Name Database user name with the appropriate read and write database permissions to access the
database. If you use Oracle OS Authentication or IBM DB2 client authentication, enter
PmNullUser. PowerCenter uses Oracle OS Authentication when the connection user name is
PmNullUser and the connection is to an Oracle database. PowerCenter uses IBM DB2 client
authentication when the connection user name is PmNullUser and the connection is to an IBM
DB2 database.
To define the user name in the parameter file, enter session parameter $ParamName as the
user name, and define the value in the session or workflow parameter file. The Integration
Service interprets user names that start with $Param as session parameters.
You can connect to a database that runs on a network that uses Kerberos authentication. To use
Kerberos authentication for the database connection, set the user name to the reserved word
PmKerberosUser. If you use Kerberos authentication, the connection uses the credentials of the
user account that runs the session that connects to the database. The user account must have
a user principal on the Kerberos network where the database runs.
Use Parameter in Password Indicates the password for the database user name is a session parameter, $ParamName.
Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password Password for the database user name. For Oracle OS Authentication or IBM DB2 client
authentication, enter PmNullPassword. For Teradata connections, you can enter PmNullPasswd
to prevent the password from appearing in the control file. Instead, the Integration Service
writes an empty string for the password in the control file.
Passwords must be in 7-bit ASCII.
If you set the user name to PmKerberosUser to use Kerberos authentication for the database
connection, set the password to the reserved word PmKerberosPassword. The connection uses
the credentials of the user account that runs the session that connects to the database.
Connect String Connect string used to communicate with the database. For syntax, see “Native Connect
Strings” on page 127.
HTTP Connections
Use an application connection object for each HTTP server that you want to connect to.
Configure connection information for an HTTP transformation in an HTTP application connection. The
Integration Service can use HTTP application connections to connect to HTTP servers. HTTP application
connections enable you to control connection attributes, including the base URL and other parameters.
If you want to connect to an HTTP proxy server, configure the HTTP proxy server settings in the Integration
Service.
The following table describes the properties that you configure for an HTTP connection:
Property Description
Name Connection name used by the Workflow Manager. Connection name cannot contain spaces or
other special characters, except for the underscore.
User Name Authenticated user name for the HTTP server. If the HTTP server does not require authentication,
enter PmNullUser.
To define the user name in the parameter file, enter session parameter $ParamName as the user
name, and define the value in the session or workflow parameter file. The Integration Service
interprets user names that start with $Param as session parameters.
Use Parameter in Password Indicates the password for the authenticated user is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password Password for the authenticated user. If the HTTP server does not require authentication, enter
PmNullPasswd.
Base URL URL of the HTTP server. This value overrides the base URL defined in the HTTP transformation.
You can use a session parameter to configure the base URL. For example, enter the session
parameter $ParamBaseURL in the Base URL field, and then define $ParamBaseURL in the
parameter file.
Timeout Number of seconds the Integration Service waits for a connection to the HTTP server before it
closes the connection.
Domain Authentication domain for the HTTP server. This is required for NTLM authentication.
Trust Certificates File File containing the bundle of trusted certificates that the client uses when authenticating the SSL
certificate of a server. You specify the trust certificates file to have the Integration Service
authenticate the HTTP server. By default, the name of the trust certificates file is ca-bundle.crt.
For information about adding certificates to the trust certificates file, see “SSL Authentication
Certificate Files” on page 130.
Certificate File Client certificate that an HTTP server uses when authenticating a client. You specify the client
certificate file if the HTTP server needs to authenticate the Integration Service.
Certificate File Password Password for the client certificate. You specify the certificate file password if the HTTP server
needs to authenticate the Integration Service.
Certificate File Type File type of the client certificate. You specify the certificate file type if the HTTP server needs to
authenticate the Integration Service. The file type can be PEM or DER. For information about
converting certificate file types to PEM or DER, see “SSL Authentication Certificate Files” on page
130. Default is PEM.
Private Key File Private key file for the client certificate. You specify the private key file if the HTTP server needs
to authenticate the Integration Service.
Key Password Password for the private key of the client certificate. You specify the key password if the web
service provider needs to authenticate the Integration Service.
Key File Type File type of the private key of the client certificate. You specify the key file type if the HTTP server
needs to authenticate the Integration Service. The HTTP transformation uses the PEM file type
for SSL authentication.
Authentication Type Select one of the following authentication types to use when the HTTP server does not return an
authentication type to the Integration Service:
- Auto. The Integration Service attempts to determine the authentication type of the HTTP
server.
- Basic. Based on a non-encrypted user name and password.
- Digest. Based on an encrypted user name and password.
- NTLM. Based on encrypted user name, password, and domain.
Default is Auto.
The following table describes the properties that you configure for a PowerChannel relational database
connection:
Property Description
Name Connection name used by the Workflow Manager. Connection name cannot contain spaces or
other special characters, except for the underscore.
User Name Database user name with the appropriate read and write database permissions to access the
database. If you use Oracle OS Authentication, IBM DB2 client authentication, or databases
such as ISG Navigator that do not allow user names, enter PmNullUser.
To define the user name in the parameter file, enter session parameter $ParamName as the
user name, and define the value in the session or workflow parameter file. The Integration
Service interprets user names that start with $Param as session parameters.
Use Parameter in Password Indicates the password for the database user name is a session parameter, $ParamName.
Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password Password for the database user name. For Oracle OS Authentication, IBM DB2 client
authentication, or databases such as ISG Navigator that do not allow passwords, enter
PmNullPassword. For Teradata connections, this overrides the database password in the ODBC
entry.
Passwords must be in 7-bit ASCII.
Connect String Connect string used to communicate with the database. For syntax, see “Native Connect
Strings” on page 127.
Required for all databases except Microsoft SQL Server.
Code Page Code page the Integration Service uses to read from a source database or write to a target
database or file.
Database Name Name of the database. If you do not enter a database name, connection-related messages do
not show a database name when the default database is used.
Environment SQL Runs an SQL command with each database connection. Default is disabled.
Packet Size Use to optimize the native drivers for Sybase ASE and Microsoft SQL Server.
Domain Name The name of the domain. Used for Microsoft SQL Server on Windows.
Use Trusted Connection If selected, the Integration Service uses Windows authentication to access the Microsoft SQL
Server database. The user name that starts the Integration Service must be a valid Windows
user with access to the Microsoft SQL Server database.
Remote PowerChannel Host Name Host name or IP address for the remote PowerChannel Server that can
access the database data.
Remote PowerChannel Port Number Port number for the remote PowerChannel Server. Make sure the
PORT attribute of the ACTIVE_LISTENERS property in the PowerChannel.properties file uses a value that
other applications on the PowerChannel Server do not use.
Use Local PowerChannel Select to use compression or encryption while extracting or loading data. When you select this
option, you need to specify the local PowerChannel Server address and port number. The
Integration Service uses the local PowerChannel Server as a client to connect to the remote
PowerChannel Server and access the remote database.
Local PowerChannel Host Name Host name or IP address for the local PowerChannel Server. Enter this
option when you select the Use Local PowerChannel option.
Local PowerChannel Port Number Port number for the local PowerChannel Server. Specify this option when
you select the Use Local PowerChannel option. Make sure the PORT attribute of the ACTIVE_LISTENERS
property in the PowerChannel.properties file uses a value that other applications on the PowerChannel
Server do not use.
Encryption Level Encryption level for the data transfer. Encryption levels range from 0 to 3. 0 indicates no
encryption and 3 is the highest encryption level. Default is 0.
Use this option only if you have selected the Use Local PowerChannel option.
Compression Level Compression level for the data transfer. Compression levels range from 0 to 9. 0 indicates no
compression and 9 is the highest compression level. Default is 2.
Use this option only if you have selected the Use Local PowerChannel option.
Certificate Account Certificate account to authenticate the local PowerChannel Server to the remote PowerChannel
Server. Use this option only if you have selected the Use Local PowerChannel option.
If you use the sample PowerChannel repository that the installation program set up, and you
want to use the default certificate account in the repository, you can enter “default” as the
certificate account.
You connect to a Hadoop cluster through an HDFS host that runs the name node service for the cluster.
The following table describes the properties that you configure for a Hadoop HDFS application connection:
Property Description
Name The connection name used by the Workflow Manager. Connection name cannot contain spaces or other
special characters, except for the underscore character.
User Name The name of the user in the Hadoop group that is used to access the HDFS host.
Host Name The name of HDFS host that runs the name node service for the Hadoop cluster.
Hive Password Not used. The password for the Hive user.
Hadoop Distribution The name of the Hadoop distribution. You can choose one of the following options:
- Cloudera CDH
- Hortonworks HDP
- IBM BigInsights
- MapR
Default is Cloudera.
When the Integration Service connects to the JNDI server, it retrieves information from JNDI about the JMS
provider during the session. When you configure a JNDI application connection, you must specify connection
properties in the Connection Object Definition dialog box.
The following table describes the properties that you configure for a JNDI application connection:
Property Description
JNDI Context Factory Name of the context factory that you specified when you defined the context factory for your
JMS provider.
JNDI Provider URL Provider URL that you specified when you defined the provider URL for your JMS provider.
When you configure a JMS application connection, you specify connection properties the Integration Service
uses to connect to JMS providers during a session. Specify the JMS application connection properties in the
Connection Object Definition dialog box.
Property Description
JMS Destination Type Select QUEUE or TOPIC for the JMS Destination Type. Select QUEUE if you want to
read source messages from a JMS provider queue or write target messages to a JMS
provider queue. Select TOPIC if you want to read source messages based on the
message topic or write target messages with a particular message topic.
JMS Connection Factory Name Name of the connection factory. The name of the connection factory must be the same
as the connection factory name you configured in JNDI. The Integration Service uses
the connection factory to create a connection with the JMS provider.
JMS Destination Name of the destination. The destination name must match the name you configured in
JNDI. Optionally, you can use the $ParamName session parameter for the destination
name.
JMS Recovery Destination Recovery queue or recovery topic name, based on what you configure for the JMS
Destination Type. Configure this option when you enable recovery for a real-time
session that reads from a JMS or WebSphere MQ source and writes to a JMS target.
Note: The session fails if the recovery destination does not match a recovery queue or
topic name in the JMS provider.
Connection Retry Period Number of seconds the Integration Service attempts to reconnect to JMS if the
connection fails. If the Integration Service cannot connect to JMS in the retry period, the
session fails. Default value is 0.
Retry Connection Error Code File Name Name of the properties file that contains error codes that identify
JMS connection errors. Default is pmjmsconnerr.properties.
The following table describes the properties that you configure for an MSMQ application connection:
Property Description
Machine Name Name of the MSMQ machine. If MSMQ is running on the same machine as the Integration Service, you can
enter a period (.).
Queue Type Select public if the MSMQ queue is a public queue. Select private if the MSMQ queue is a private queue.
Is Transactional Define whether the MSMQ queue is transactional or not. When a session writes to a remote private queue,
the Integration Service cannot determine whether the queue is transactional or not. Configure the Is
Transactional attribute to match the queue configuration.
Choose one of the following options:
- Auto. The Integration Service determines if the queue is transactional or not transactional. Choose Auto
for a local queue or a remote queue that is not private.
- Yes. The queue is transactional.
- No. The queue is not transactional.
Default is Auto. If you configure this property incorrectly, the session will not fail, but the target queue will not
persist the data.
The relational database connection defines how the Integration Service accesses the underlying database for
Netezza Performance Server. When you configure a Netezza connection, you specify the connection
attributes that the Integration Service uses to connect to Netezza.
The following table describes the properties that you configure for a Netezza connection:
Property Description
User Name Database user name with the appropriate read and write database permissions to access Netezza
Performance Server.
Use Parameter in Password Indicates the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connection Environment SQL Runs an SQL command with each database connection. Default is disabled.
Transaction Environment SQL Runs an SQL command before the initiation of each transaction. Default is
disabled.
Connection Retry Period Number of seconds the Integration Service attempts to reconnect to the database
if the connection fails. If the Integration Service cannot connect to the database in the retry period, the
session fails. Default value is 0.
The following table describes the properties that you configure for a PeopleSoft application connection:
Property Description
User Name Database user name with SELECT permission on physical database tables in the PeopleSoft source system.
To define the user name in the parameter file, enter session parameter $ParamName as the user name, and
define the value in the session or workflow parameter file. The Integration Service interprets user names that
start with $Param as session parameters.
Use Parameter in Password Indicates the password for the database user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connect String Connect string for the underlying database of the PeopleSoft system. This option appears
for DB2, Oracle, and Informix.
Code Page Code page the Integration Service uses to extract data from the source database. When using
relaxed code page validation, select compatible code pages for the source and target data to prevent data
inconsistencies.
Language Code PeopleSoft language code. Enter a language code for language-sensitive data. When you
enter a language code, the Integration Service extracts language-sensitive data from related language
tables. If no data exists for the language code, PowerCenter extracts data from the base table. When you
do not enter a language code, the Integration Service extracts all data from the base table.
Database Name Name of the underlying database of the PeopleSoft system. This option appears for
Sybase ASE and Microsoft SQL Server.
Server Name Name of the server for the underlying database of the PeopleSoft system. This option
appears for Sybase ASE and Microsoft SQL Server.
Packet Size Packet size used to transmit data. This option appears for Sybase ASE and Microsoft SQL
Server.
Use Trusted Connection If selected, the Integration Service uses Windows authentication to access the
Microsoft SQL Server database. The user name that starts the Integration Service must be a valid Windows
user with access to the Microsoft SQL Server database. This option appears for Microsoft SQL Server.
Rollback Segment Name of the rollback segment for the underlying database of the PeopleSoft system.
This option appears for Oracle.
Environment SQL SQL commands used to set the environment for the underlying database of the
PeopleSoft system.
You can configure the following types of connections to access SAP systems:
• SAP R/3 application connection. Configure application connections to access the SAP system when you
run a stream or file mode session.
• FTP connection. Configure FTP connections to access the staging file through FTP. When you run a file
mode session, you can configure the session to access the staging file on the SAP system through FTP.
• SAP_ALE_IDoc_Reader and SAP_ALE_IDoc_Writer application connection. Configure
SAP_ALE_IDoc_Reader application connections to receive IDocs and business content integration
documents using ALE. Configure SAP_ALE_IDoc_Writer application connections to send IDocs using
ALE.
• SAP RFC/BAPI interface application connection. Configure SAP RFC/BAPI Interface application
connections if you want to process data in SAP using BAPI/RFC transformations.
The following table describes the type of connection you need depending on the method of integration with
mySAP applications:
SAP R/3 application connection - ABAP integration with stream and file mode sessions.
SAP_ALE_IDoc_Reader application connection - IDoc ALE and business content integration.
SAP_ALE_IDoc_Writer application connection - IDoc ALE and business content integration.
BCI Metadata Connection - IDoc ALE and business content integration for segments in SAP longer than
1,000 characters.
• RFC File Mode. Use an RFC file mode connection when you extract data through file mode. The
connection information for RFC is stored in the sapnwrfc.ini file. You must also have authorizations on
the SAP system to read SAP tables and to run file mode sessions.
• RFC Stream Mode. Use an RFC stream mode connection when you extract data through stream mode.
The connection information for RFC is stored in the sapnwrfc.ini file. You must also have authorizations
on the SAP system to read SAP tables and to run stream mode sessions.
• CPI-C (deprecated). Use a CPI-C connection when you extract data through stream mode. The
connection information for CPI-C is stored in the sapnwrfc.ini file.
To create one connection for both modes, the SAP administrator must have created a single profile with
authorizations for both file and stream mode sessions.
The following table describes the properties that you configure for an SAP ECC connection:
Name - Connection name used by the Workflow Manager, for all modes.
User Name - For RFC file mode and RFC stream mode, an SAP user name with authorization on
S_DATASET, S_TABU_DIS, S_PROGRAM, and B_BTCH_JOB objects. For deprecated CPI-C stream mode, an
SAP user name with authorization on S_CPIC and S_TABU_DIS objects. To define the user name in the
parameter file, enter session parameter $ParamName as the user name, and define the value in the session
or workflow parameter file. The Integration Service interprets user names that start with $Param as session
parameters.
Use Parameter in Password - Indicates the password for the SAP user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Password - Password for the SAP user name. For deprecated CPI-C stream mode, SAP allows a maximum
of 8 characters.
Connect String - DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application
server or for a connection that uses SAP load balancing.
Code Page - Code page compatible with the SAP server. The code page must correspond to the Language
Code.
Language Code - Language code that corresponds to the SAP language.
The following table describes the properties that you configure for an SAP_ALE_IDoc_Reader application
connection:
Property Description
Destination Entry DEST entry defined in the sapnwrfc.ini file for a connection to an RFC server program. The
Program ID for this destination entry must be the same as the Program ID for the logical system you
defined in SAP to receive IDocs or consume business content data. For business content integration,
set to INFACONTNT.
The following table describes the properties that you configure for an SAP_ALE_IDoc_Writer or a BCI
Metadata Connection application connection:
Property Description
User Name SAP user name with authorization on S_DATASET, S_TABU_DIS, S_PROGRAM, and B_BTCH_JOB
objects.
To define the user name in the parameter file, enter session parameter $ParamName as the user
name, and define the value in the session or workflow parameter file. The Integration Service
interprets user names that start with $Param as session parameters.
Use Parameter in Password Indicates the password for the SAP user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connect String DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server.
Code Page Code page compatible with the SAP server. Must also correspond to the Language Code.
The following table describes the properties that you configure for an SAP RFC/BAPI application connection:
Property Description
User Name SAP user name with authorization on S_DATASET, S_TABU_DIS, S_PROGRAM, and B_BTCH_JOB
objects.
To define the user name in the parameter file, enter session parameter $ParamName as the user
name, and define the value in the session or workflow parameter file. The Integration Service interprets
user names that start with $Param as session parameters.
Use Parameter in Password Indicates the password for the SAP user name is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connect String DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server.
Code Page Code page compatible with the SAP server. Must also correspond to the Language Code.
The following table describes the properties that you configure for an SAP BW OHS application connection:
Property Description
Use Parameter in Password Indicates the SAP NetWeaver BI password is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connect String DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server.
The Integration Service uses the sapnwrfc.ini file to connect to the SAP NetWeaver BI system.
Code Page Code page compatible with the SAP NetWeaver BI server.
Client Code SAP NetWeaver BI client. Must match the client you use to log on to the SAP NetWeaver BI server.
The following table describes the properties that you configure for an SAP BW application connection:
Property Description
Use Parameter in Password Indicates the SAP NetWeaver BI password is a session parameter,
$ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the
pmpasswd CRYPT_DATA option. Default is disabled.
Connect String DEST entry defined in the sapnwrfc.ini file for a connection to a specific SAP application server.
The Integration Service uses the sapnwrfc.ini file to connect to the SAP NetWeaver BI system. If
you do not enter a connection string, the Integration Service obtains the connection parameters from the
SAP BW Service.
Code Page Code page compatible with the SAP NetWeaver BI server.
Client Code SAP NetWeaver BI client. Must match the client you use to log in to the SAP NetWeaver BI server.
Property Description
Code Page Code page the Integration Service uses to extract data from TIBCO. When using relaxed code
page validation, select compatible code pages for the source and target data to prevent data inconsistencies.
Subject Default subject for source and target messages. During a session, the Integration Service reads messages
with this subject from TIBCO sources. It also writes messages with this subject to TIBCO targets.
You can overwrite the default subject for TIBCO targets when you link the SendSubject port in a TIBCO
target definition in a mapping.
Service Service attribute value. Enter a value if you want to include a service name, service number, or port
number.
Network Network attribute value. Enter a value if your machine contains more than one network card.
Daemon TIBCO daemon you want to connect to during a session. If you leave this option blank, the Integration
Service connects to the local daemon during a session.
If you want to specify a remote daemon, which resides on a different host than the Integration Service, enter
the following values:
<remote hostname>:<port number>
For example, you can enter host2:7501 to specify a remote daemon.
Certified Select if you want the Integration Service to read or write certified messages.
CmName Unique CM name for the CM transport when you choose certified messaging.
Relay Agent Enter a relay agent when you choose certified messaging and the node running the Integration Service is
not constantly connected to a network. The Relay Agent name must be fewer than 127 characters.
Ledger File Enter a unique ledger file name when you want the Integration Service to read or write certified messages.
The ledger file records the status of each certified message.
Configure a file-based ledger when you want the TIBCO daemon to send unconfirmed certified messages to
TIBCO targets. You also configure a file-based ledger with Request Old when you want the Integration
Service to receive unconfirmed certified messages from TIBCO sources.
Synchronized Select if you want PowerCenter to wait until it writes the status of each certified message to the ledger file
Ledger before continuing message delivery or receipt.
Request Old Select if you want the Integration Service to receive certified messages that it did not confirm with the
source during a previous session run. When you select Request Old, you should also specify a file-based
ledger for the Ledger File attribute.
User Certificate Register the user certificate with a private key when you want to connect to a secure TIB/Rendezvous
daemon during the session. The text of the user certificate must be in PEM encoding or PKCS #12 binary
format.
Note: The adapter instances you specify in TIB/Adapter SDK connections should only contain one session.
The following table describes the connection properties you configure for a TIB/Adapter SDK application
connection:
• Code Page. Code page the Integration Service uses to extract data from TIBCO. When using relaxed code page validation, select compatible code pages for the source and target data to prevent data inconsistencies.
• Subject. Default subject for source and target messages. During a workflow, the Integration Service reads messages with this subject from TIBCO sources. It also writes messages with this subject to TIBCO targets. You can overwrite the default subject for TIBCO targets when you link the SendSubject port in a TIBCO target definition in a mapping.
• Repository URL. URL for the TIB/Repository instance you want to connect to. You can enter the server process variable $PMSourceFileDir for the Repository URL.
• Session Name. Name of the TIBCO session associated with the adapter instance.
• Validate Messages. Select Validate Messages when you want the Integration Service to read and write messages in AE format.
To connect to a web service, the Integration Service requires an endpoint URL. If you do not configure a Web
Services Consumer application connection or if you configure one without providing an endpoint URL, the
Integration Service uses the endpoint URL contained in the WSDL file on which the source, target, or Web
Services Consumer transformation is based.
• Configure a Web Services Consumer application connection with an endpoint URL if the web service you
connect to requires authentication or if you want to use an endpoint URL that differs from the one
contained in the WSDL file.
• Configure a Web Services Consumer application connection without an endpoint URL if the web service
you connect to requires authentication but you want to use the endpoint URL contained in the WSDL file.
• You do not need to configure a Web Services Consumer application connection if the web service you
connect to does not require authentication and you want to use the endpoint URL contained in the WSDL
file.
If you need to configure SSL authentication, enter values for the SSL authentication-related properties in the
Web Services Consumer application connection.
The following table describes the properties that you configure for a Web Services Consumer application
connection:
• User Name. User name that the web service requires. If the web service does not require a user name, enter PmNullUser. To define the user name in the parameter file, enter session parameter $ParamName as the user name, and define the value in the session or workflow parameter file. The Integration Service interprets user names that start with $Param as session parameters.
• Use Parameter in Password. Indicates that the web service password is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
• Password. Password that the web service requires. If the web service does not require a password, enter PmNullPasswd.
• Code Page. Connection code page. The Repository Service uses the character set encoded in the repository code page when writing data to the repository.
• End Point URL. Endpoint URL for the web service that you want to access. The WSDL file specifies this URL in the location element. You can use session parameter $ParamName, a mapping parameter, or a mapping variable as the endpoint URL. For example, you can use a session parameter, $ParamMyURL, as the endpoint URL, and set $ParamMyURL to the URL in the parameter file.
• Timeout. Number of seconds the Integration Service waits for a connection to the web service provider before it closes the connection and fails the session. Also, the number of seconds the Integration Service waits for a SOAP response after sending a SOAP request before it fails the session. Default is 60 seconds.
• Trust Certificates File. File containing the bundle of trusted certificates that the Integration Service uses when authenticating the SSL certificate of the web service provider. Default is ca-bundle.crt.
• Certificate File. Client certificate that a web service provider uses when authenticating a client. You specify the client certificate file if the web service provider needs to authenticate the Integration Service.
• Certificate File Password. Password for the client certificate. You specify the certificate file password if the web service provider needs to authenticate the Integration Service.
• Certificate File Type. File type of the client certificate. You specify the certificate file type if the web service provider needs to authenticate the Integration Service. The file type can be either PEM or DER.
• Private Key File. Private key file for the client certificate. You specify the private key file if the web service provider needs to authenticate the Integration Service.
• Key Password. Password for the private key of the client certificate. You specify the key password if the web service provider needs to authenticate the Integration Service.
• Key File Type. File type of the private key of the client certificate. You specify the key file type if the web service provider needs to authenticate the Integration Service. PowerExchange for Web Services requires the PEM file type for SSL authentication.
• Authentication Type. Select one of the following authentication types to use when the web service provider does not return an authentication type to the Integration Service:
- Auto. The Integration Service attempts to determine the authentication type of the web service provider.
- Basic. Based on a non-encrypted user name and password.
- Digest. Based on a non-encrypted user name and encrypted password.
- NTLM. Based on encrypted user name, password, and domain.
Default is Auto.
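Continuing the End Point URL description above, the parameter file entry for the session parameter might look like the following sketch; the URL shown is a placeholder:

$ParamMyURL=http://services.example.com:7333/OrderLookup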
Note: You cannot write to webMethods target documents that have special characters.
The following table describes the properties that you configure for a webMethods Broker application connection:
• Broker Host. Enter the host name of the Broker you want the PowerCenter Integration Service to connect to. If the port number for the Broker is not the default port number, also enter the port number. Default port number is 6849. Enter the host name and port number in the following format:
<host name:port>
• Broker Name. Enter the name of the Broker. If you do not enter a Broker name, the PowerCenter Integration Service uses the default Broker.
• Client ID. Enter a client ID for the PowerCenter Integration Service to use when it connects to the Broker during the session. If you do not enter a client ID, the Broker generates a random client ID. If you select Preserve Client State, enter a client ID.
• Client Group. Enter the name of the group to which the client belongs.
• Application Name. Enter the name of the application that will run the Broker Client.
• Automatic Reconnection. Select this option to enable the PowerCenter Integration Service to reconnect to the Broker if the connection to the Broker is lost.
• Preserve Client State. Select this option to maintain the client state across sessions. The client state is the information the Broker keeps about the client, such as the client ID, application name, and client group. Preserving the client state enables the webMethods Broker to retain documents it sends when a subscribing client application, such as the PowerCenter Integration Service, is not listening for documents. Preserving the client state also allows the Broker to maintain the publication ID sequence across sessions when writing documents to webMethods targets. If you select this option, configure a Client ID in the application connection. You should also configure guaranteed storage for your webMethods Broker. If you do not select this option, the PowerCenter Integration Service destroys the client state when it disconnects from the Broker.
The following table describes the properties that you configure for a webMethods Integration Server application connection:
• User Name. User name of a user with read access in the webMethods Integration Server.
• Use Parameter in Password. Enables the PowerCenter Integration Service to parameterize the password. The password for the webMethods Integration Server user name is a session parameter, $ParamName. Define the password in the workflow or session parameter file, and encrypt it by using the pmpasswd CRYPT_DATA option. Default is disabled.
• IS Host. Host name and port number of the webMethods Integration Server in the following format:
<host name:port>
• Certificate Files. Client certificate that the webMethods Integration Server uses to authenticate a client. Specify the client certificate file if the webMethods Integration Server is configured as HTTPS. Use a semicolon (;) to separate multiple certificate files.
• Certificate File Type. File type of the client certificate. You specify the certificate file type if the webMethods Integration Server needs to authenticate the Integration Service. Supported file type is DER.
• Private Key File. Private key file for the client certificate. Specify the private key file if the webMethods Integration Server is configured as HTTPS.
• Key File Type. File type of the private key of the client certificate. You specify the key file type if the webMethods Integration Server is configured as HTTPS. Supported file type is DER.
Before you use PowerExchange for WebSphere MQ to extract data from message queues or load data to
message queues, you can test the queue connections configured in the Workflow Manager.
The following table describes the properties that you configure for a Message Queue queue connection:
• Code Page. Code page that is the same as or a subset of the code page of the queue manager coded character set identifier (CCSID).
• Queue Manager. Name of the queue manager for the message queue.
• Connection Retry Period. Number of seconds the Integration Service attempts to reconnect to the WebSphere MQ queue if the connection fails. If the Integration Service cannot reconnect to the WebSphere MQ queue in the retry period, the session fails. Default is 0.
• Recovery Queue Name. Name of the recovery queue. The recovery queue enables message recovery for a session that writes to a queue target.
1. Open the Connection Browser dialog box for the connection object. For example, click Connections >
Relational to open the Connection Browser dialog box for a relational database connection.
2. Click Edit.
The Connection Object Definition dialog box appears.
3. Enter the values for the properties you want to modify.
The connection properties vary depending on the type of connection you select. For more information
about connection properties, see the section for each specific connection type in this chapter.
4. Click OK.
1. Open the Connection Browser dialog box for the connection object. For example, click Connections >
Relational to open the Connection Browser dialog box for a relational database connection.
2. Select the connection object you want to delete in the Connection Browser dialog box.
Tip: Hold the shift key to select more than one connection to delete.
3. Click Delete, and then click Yes.
Validation
This chapter includes the following topics:
Workflow Validation
Before you can run a workflow, you must validate it. When you validate the workflow, you validate all task
instances in the workflow, including nested worklets.
When you validate a workflow, you validate worklet instances, worklet objects, and all other nested worklets
in the workflow. You validate task instances and worklets, regardless of whether you have edited them.
The Workflow Manager validates the worklet object using the same validation rules for workflows. The
Workflow Manager validates the worklet instance by verifying attributes in the Parameter tab of the worklet
instance.
If the workflow contains nested worklets, you can select a worklet to validate the worklet and all other
worklets nested under it. To validate a worklet and its nested worklets, right-click the worklet and choose
Validate.
Note: The Workflow Manager validates Session tasks separately. If a session is invalid, the workflow may
still be valid.
Example
You have a workflow that contains a non-reusable worklet called Worklet_1. Worklet_1 contains a nested
worklet called Worklet_a. The workflow also contains a reusable worklet instance called Worklet_2.
Worklet_2 contains a nested worklet called Worklet_b.
The Workflow Manager validates links, conditions, and tasks in the workflow. The Workflow Manager
validates all tasks in the workflow, including tasks in Worklet_1, Worklet_2, Worklet_a, and Worklet_b.
You can validate a part of the workflow. Right-click Worklet_1 and choose Validate. The Workflow Manager
validates all tasks in Worklet_1 and Worklet_a.
Worklet Validation
The Workflow Manager validates worklets when you save the worklet in the Worklet Designer. In addition,
when you use worklets in a workflow, the Integration Service validates the workflow according to the following
validation rules at run time:
• If the parent workflow is configured to run concurrently, each worklet instance in the workflow must be
configured to run concurrently.
• Each worklet instance in the workflow can run once.
When a worklet instance is invalid, the workflow using the worklet instance remains valid.
The Workflow Manager displays a red invalid icon if the worklet object is invalid. The Workflow Manager
validates the worklet object using the same validation rules for workflows. The Workflow Manager displays a
blue invalid icon if the worklet instance in the workflow is invalid. The worklet instance may be invalid when
any of the following conditions occurs:
• The parent workflow or worklet variable you assign to the user-defined worklet variable does not have a
matching datatype.
• The user-defined worklet variable you used in the worklet properties does not exist.
• You do not specify the parent workflow or worklet variable you want to assign.
For non-reusable worklets, you may see both red and blue invalid icons displayed over the worklet icon in the
Navigator.
The Workflow Manager verifies that attributes in the tasks follow validation rules. For example, the user-
defined event you specify in an Event task must exist in the workflow. The Workflow Manager also verifies
that you linked each task properly. For example, you must link the Start task to at least one task in the
workflow.
When you delete a reusable task, the Workflow Manager removes the instance of the deleted task from each
workflow that contains the task. The Workflow Manager also marks the workflow as not valid when you delete
a reusable task that a workflow uses.
The Workflow Manager verifies that a folder does not contain duplicate task names, and it verifies that a
workflow does not contain duplicate task instances.
You can validate reusable tasks in the Task Developer. Or, you can validate task instances in the Workflow
Designer. When you validate a task, the Workflow Manager validates the task attributes and the links. For
example, the user-defined event you specify in an Event task must exist in the workflow.
• Assignment. The Workflow Manager validates the expression that you enter for the Assignment task. For example, the Workflow Manager verifies that you assigned a matching datatype value to the workflow variable in the assignment expression, as shown in the sketch after this list.
• Command. The Workflow Manager does not validate the shell command you enter for the Command task.
• Event-Wait. If you choose to wait for a predefined event, the Workflow Manager verifies that you specified
a file to watch. If you choose to use the Event-Wait task to wait for a user-defined event, the Workflow
Manager verifies that you specified an event.
• Event-Raise. The Workflow Manager verifies that you specified a user-defined event for the Event-Raise
task.
• Human Task. The Workflow Manager verifies that a Human task has a potential owner. The task must
also have a business administrator and an escalation user. The Workflow Manager verifies that a task
notification has a recipient. It also verifies that the Human task receives the results of a mapping task in
the workflow.
• Timer. The Workflow Manager verifies that the variable you specified for the Absolute Time setting has
the Date/Time datatype.
• Start. The Workflow Manager verifies that you linked the Start task to at least one task in the workflow.
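As a sketch of Assignment task validation, suppose you declare a user-defined workflow variable $$RunCount (a placeholder name) with an integer datatype and configure the Assignment task to assign it the following expression:

$$RunCount + 1

Because the expression returns an integer, the assignment validates. If $$RunCount were instead declared with the Date/Time datatype, the Workflow Manager would mark the assignment invalid because the datatypes do not match.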
When a task instance is invalid, the workflow that runs the task instance becomes invalid. When a reusable
task is invalid, it does not affect the validity of the task instance in the workflow. However, if a Session task
instance is invalid, the workflow might still be valid. The Workflow Manager validates sessions differently.
To validate a task, select the task in the workspace and click Tasks > Validate. Or, right-click the task in the
workspace and choose Validate.
The Workflow Manager marks a reusable session or session instance invalid if you perform one of the
following tasks:
• Edit the mapping in a way that might invalidate the session. You can edit the mapping used by a session
at any time. When you edit and save a mapping, the repository might invalidate sessions that already use
the mapping. The Integration Service does not run invalid sessions.
You must reconnect to the folder to see the effect of mapping changes on Session tasks.
When you edit a session based on an invalid mapping, the Workflow Manager displays a warning
message:
The mapping [mapping_name] associated with the session [session_name] is invalid.
• Delete a database, FTP, or external loader connection used by the session.
• Leave session attributes blank. For example, the session is invalid if you do not specify the source file
name.
• Change the code page of a session database connection to an incompatible code page.
If you delete objects associated with a Session task such as session configuration object, Email, or
Command task, the Workflow Manager marks a reusable session invalid. However, the Workflow Manager
does not mark a non-reusable session invalid if you delete an object associated with the session.
If you delete a shortcut to a source or target from the mapping, the Workflow Manager does not mark the
session invalid.
The Workflow Manager does not validate SQL overrides or filter conditions entered in the session properties
when you validate a session. You must validate SQL override and filter conditions in the SQL Editor.
If a reusable session task is invalid, the Workflow Manager displays an invalid icon over the session task in
the Navigator and in the Task Developer workspace. This does not affect the validity of the session instance
and the workflows using the session instance.
If a reusable or non-reusable session instance is invalid, the Workflow Manager marks it invalid in the
Navigator and in the Workflow Designer workspace. Workflows using the session instance remain valid.
To validate a session, select the session in the workspace and click Tasks > Validate. Or, right-click the
session instance in the workspace and choose Validate.
Related Topics:
• “Editing a Session” on page 47
• “Session Properties Reference” on page 229
Note: If you use the Repository Manager, you can select and validate multiple sessions from the Navigator.
Expression Validation
The Workflow Manager validates all expressions in the workflow. You can enter expressions in the
Assignment task, Decision task, and link conditions. The Workflow Manager writes any error message to the
Output window.
Expressions in link conditions and Decision task conditions must evaluate to a numerical value. Workflow
variables used in expressions must exist in the workflow.
The Workflow Manager marks the workflow invalid if a link condition is invalid.
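For example, the following link condition, which uses a placeholder session name s_LoadOrders, evaluates to a numerical TRUE or FALSE value, so it validates:

$s_LoadOrders.Status = SUCCEEDED AND $s_LoadOrders.TgtSuccessRows > 0

The predefined task variables Status and TgtSuccessRows both return values that the expression can compare numerically.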
Workflow Schedulers
Each workflow has an associated scheduler. A workflow scheduler is a repository object that contains a set of
schedule settings. It contains information about how and when to run a workflow.
You can schedule a workflow to run continuously, repeat at a specified time or interval, or you can manually
start a workflow. By default, workflows run on demand. You can create a non-reusable scheduler for an
individual workflow. Or, you can create a reusable scheduler to use the same schedule settings for all
workflows in a folder.
If you configure multiple instances of a workflow, and you schedule the workflow run time, the Integration
Service runs all instances at the scheduled time. You cannot schedule workflow instances to run at different
times.
• On Windows, the Integration Service does not run a scheduled workflow during the last hour of Daylight
Saving Time (DST). If a workflow is scheduled to run between 1:00 a.m. and 1:59 a.m. DST, the
Integration Service resumes the workflow after 1:00 a.m. Standard Time (ST). If you try to schedule a
workflow during the last hour of DST or the first hour of ST, you receive an error. Wait until 2:00 a.m. to
create a scheduler.
• The Integration Service schedules the workflow in the time zone of the Integration Service node. For
example, the PowerCenter Client is in the local time zone and the Integration Service is in a time zone two
hours later. If you schedule the workflow to start at 9:00 a.m., it starts at 9:00 a.m. in the time zone of the
Integration Service node and 7:00 a.m. local time.
Non-reusable scheduler
When you configure or edit a non-reusable scheduler, check in the workflow to allow the schedule to
take effect. You can update the schedule manually with the workflow checked out. Note that the changes
are applied to the latest checked-in version of the workflow.
Reusable scheduler
When you create a reusable scheduler for a workflow, you must check in the workflow and the scheduler
to enable the schedule to take effect.
When you edit a reusable scheduler and check it in, workflows are updated with the latest schedule.
Note that the workflow schedule is updated even for workflows that are checked out.
When you edit a reusable scheduler and do not check it in, you must manually update a workflow to
update the workflow schedule. Note that the workflow schedule is updated only for workflows that are
checked in.
You can configure the following options on the Schedule tab of the scheduler:
Run Options
Indicates how to run the workflow. You can choose one of the following options:
• Run On Integration Service Initialization. The Integration Service runs the workflow as soon as the
service is initialized. The Integration Service then starts the next run of the workflow according to
settings in Schedule Options.
• Run On Demand. The Integration Service runs the workflow when you start the workflow manually.
• Run Continuously. The Integration Service runs the workflow as soon as the service initializes. The
Integration Service then starts the next run of the workflow as soon as it finishes the previous run. If
you edit a workflow that is set to run continuously, you must stop or unschedule the workflow, save
the workflow, and then restart or reschedule the workflow.
Schedule Options
Indicates the type of schedule. Required if you select Run On Integration Service Initialization, or if
you do not choose any setting in Run Options. You can choose one of the following options:
• Run Once. The Integration Service runs the workflow once, as scheduled in the scheduler.
• Run Every. The Integration Service runs the workflow at regular intervals, as configured.
Start Options
Indicates when to start the workflow schedule. You can choose one of the following options:
• Start Date. The date that the Integration Service begins the workflow schedule.
• Start Time. The time when the Integration Service begins the workflow schedule.
End Options
Indicates when to end the workflow schedule. Required if the workflow schedule is Run Every or
Customized Repeat. You can choose one of the following options:
• End On. The Integration Service stops scheduling the workflow on the selected date.
• End After. The Integration Service stops scheduling the workflow after the configured number of
workflow runs.
• Forever. The Integration Service schedules the workflow as long as the workflow does not fail.
Weekly
Required to enter a weekly schedule. Select the day or days of the week on which you want to run the
workflow.
Monthly
Required to enter a monthly schedule. You can choose one of the following options:
• Run On Day. Select the dates on which you want the workflow scheduled on a monthly basis. The
Integration Service schedules the workflow to run on the selected dates. If you select a numeric date
exceeding the number of days within a particular month, the Integration Service schedules the
workflow for the last day of the month, including leap years. For example, if you schedule the
workflow to run on the 31st of every month, the Integration Service schedules the workflow on the 30th
of April, June, September, and November.
• Run On The. Select the week or weeks of the month, and then select the day of the week on which
you want the workflow to run. For example, if you select Second and Last, and then select
Wednesday, the Integration Service schedules the workflow to run on the second and last
Wednesday of every month.
• Run Once. The Integration Service runs the workflow one time on the selected day, at the time
entered on the Start Time setting on the Time tab.
• Run Every. The Integration Service runs the workflow on the hour and minute interval that you
configure, and then schedules the workflow at regular intervals on the selected day. The Integration
Service uses the Start Time setting for the first scheduled workflow of the day. If you choose an
interval that is greater than the start time, the workflow runs one time each day.
Scheduled States
The scheduled state of a workflow includes historical run-time information such as the last time the workflow
ran and how many times a repeating workflow has run. A workflow can get removed from the schedule based
on changes to the workflow status or the Integration Service state.
When a workflow is removed from the schedule, the Integration Service either discards or maintains the
scheduled state. If the Integration Service discards the scheduled state, it resets the state when the workflow
is rescheduled. If the Integration Service maintains the scheduled state, it restores the state when the
workflow is rescheduled.
When the Integration Service resets the scheduled state, it maintains the scheduler configuration. It does not
check for missed schedules, and it schedules the workflow as though the workflow never ran. For example,
you configure a workflow to run five times, and it stops during the second run. When you reschedule the
workflow, the Integration Service resets the schedule to run five times.
The Integration Service can restore the scheduled state of a workflow in a highly available environment when
it successfully recovers a terminated workflow or when you restart a workflow. When the Integration Service
restores the scheduled state, it reschedules the workflow based on the scheduler configuration and the
schedule frequency.
The Integration Service maintains or discards the scheduled state based on the following situations:
You disable a workflow.
When you enable a workflow, the Integration Service resets the schedule.
A workflow fails.
To re-establish the schedule, you can reschedule the workflow. In a highly available domain, if you
restart the workflow, and the workflow succeeds, the Integration Service restores the scheduled state
and determines whether a scheduled run was missed.
A workflow terminates.
The Integration Service terminates all running workflows when it shuts down unexpectedly. If the domain
is not highly available, the Integration Service resets the schedule when you reschedule the workflow. If
the domain is highly available, and the workflow is recoverable, you can recover the workflow to restore
the scheduled state. If the workflow is not recoverable, you can reset the schedule by rescheduling the
workflow. If you restart the workflow, and the workflow succeeds, the Integration Service restores the
scheduled state and determines whether a scheduled run was missed.
Important: If you manually start a failed, terminated, stopped, or aborted workflow in a highly available
domain, Informatica recommends that you unschedule it first. If you do not unschedule the workflow, and the
Integration Service detects that the scheduled run time was missed, it immediately runs the workflow again.
This can result in errors such as key violations and invalid data. When you unschedule the workflow first and
reschedule it after the manual run completes, the Integration Service does not run the workflow based on the
missed schedule.
The following scheduler configurations determine how the Integration Service restores the scheduled state:
If you restart the Integration Service or choose a different Integration Service for a workflow, you must
reschedule workflows that are not scheduled to run continuously. The Integration Service reschedules
workflows that are scheduled to run continuously. The Integration Service also reschedules workflows in a
folder if you copy the folder.
Scheduling a Workflow
You can schedule a workflow to run continuously, repeat at a given time or interval, or you can manually start
a workflow.
Note: When you delete a reusable scheduler, all workflows that use the deleted scheduler become invalid.
To make the workflows valid, you must edit them and replace the missing scheduler.
To permanently remove a workflow from a schedule, configure the workflow schedule to run on demand.
Note: When the Integration Service restarts, it reschedules all unscheduled workflows that are scheduled to
run continuously.
Disabling a Workflow
You might want to disable the workflow while you edit it. When you disable a workflow, the Integration
Service does not run the workflow until you enable it.
To disable a workflow, select Disable Workflows on the General tab of the workflow properties.
Before you can run a workflow, you must select an Integration Service to run the workflow. You can select an
Integration Service when you edit a workflow or from the Assign Integration Service dialog box. If you
select an Integration Service from the Assign Integration Service dialog box, the Workflow Manager
overwrites the Integration Service assigned in the workflow properties.
You can also use advanced options to override the Integration Service or operating system profile assigned
to the workflow and select concurrent workflow run instances.
• Integration Service. Overrides the Integration Service configured for the workflow.
• Operating System Profile. Overrides the operating system profile assigned to the folder.
• Workflow Run Instances. The workflow instances you want to run. Appears if the workflow is configured for
concurrent execution.
5. Click OK.
To run a task using the Workflow Manager, select the task in the Workflow Designer workspace. Right-click
the task and choose Start Task.
You can also use menu commands in the Workflow Manager to start a task. In the Navigator, drill down the
Workflow node to locate the task. Right-click the task you want to start and choose Start Task.
Sending Email
This chapter includes the following topics:
To send email when the Integration Service runs a workflow, perform the following steps:
• Configure the Integration Service to send email. Before creating Email tasks, configure the Integration
Service to send email.
If you use a grid or high availability in a Windows environment, you must use the same Microsoft Outlook
profile on each node to ensure the Email task can succeed.
• Create Email tasks. Before you can configure a session or workflow to send email, you need to create an
Email task.
• Configure sessions to send post-session email. You can configure the session to send an email when
the session completes or fails. You create an Email task and use it for post-session email.
When you configure the subject and body of post-session email, use email variables to include information
about the session run, such as session name, status, and the total number of rows loaded. You can also
use email variables to attach the session log or other files to email messages.
• Configure workflows to send suspension email. You can configure the workflow to send an email when
the workflow suspends. You create an Email task and use it for suspension email.
The Integration Service sends the email based on the locale set for the Integration Service process running
the session.
You can use parameters and variables in the email user name, subject, and text. For Email tasks and
suspension email, you can use service, service process, workflow, and worklet variables. For post-session
email, you can use any parameter or variable type that you can define in the parameter file. For example, you
can use the $PMSuccessEmailUser or $PMFailureEmailUser service variable to specify the email recipient
for post-session email.
If you want to send email to more than one person, separate the email address entries with a comma. Do not
put spaces between addresses.
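For example, a recipient list with two hypothetical addresses might look like the following:

admin@example.com,oncall@example.com

You can also enter a service variable such as $PMFailureEmailUser in place of a literal address, provided the variable is defined for the Integration Service that runs the session.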
1. Log in to the UNIX system as the PowerCenter user who starts the Informatica services.
2. Type the following lines at the prompt and press Enter:
rmail <your fully qualified email address>,<second fully qualified email address>
From <your_user_name>
3. To indicate the end of the message, type ^D.
You should receive a blank email from the email account of the user you specify in the From line. If not,
locate the directory where rmail resides and add that directory to the path.
1. Log in to the Linux machine as the PowerCenter user who starts the Informatica services.
2. Add /usr/sbin to the $PATH environment variable to send emails.
3. Type the following line at the prompt and press Enter:
sendmail <your fully qualified email address>,<second fully qualified email address>
4. To indicate the end of the message, enter a period (.) on a separate line and press Enter. Or, type ^D.
You should receive a blank email from the email account of the PowerCenter user. If not, find the
directory where sendmail resides and add that directory to the path.
To send email using MAPI on Windows, you must meet the following requirements:
• Install the Microsoft Outlook mail client on each node configured to run the Integration Service.
• Run Microsoft Outlook on a Microsoft Exchange Server.
Complete the following steps to configure the Integration Service on Windows to send email:
Note: If you have high availability or if you use a grid, use the same profile for each node configured to run a
service process.
1. Open the Control Panel on the node running the Integration Service process.
2. Double-click the Mail icon.
3. In the Mail Setup - Outlook dialog box, click Show Profiles.
The Mail dialog box displays the list of profiles configured for the computer.
4. Click Add.
5. In the New Profile dialog box, enter a profile name. Click OK.
The E-mail Accounts wizard appears.
6. Select Add a new e-mail account. Click Next.
7. Select Microsoft Exchange Server for the server type. Click Next.
8. Enter the Microsoft Exchange Server name and the mailbox name. Click Next.
9. Click Finish.
10. In the Mail dialog box, select the profile you added and click Properties.
11. In the Mail Setup dialog box, click E-mail Accounts.
The E-mail Accounts wizard appears.
12. Select Add a new directory or address book. Click Next.
13. Select Additional Address Books. Click Next.
14. Select Personal Address Book. Click Next.
15. Enter the path to a personal address book. Click OK.
For more information about working with a Personal Address Book, refer to Microsoft Outlook documentation.
1. From the Administrator tool, click the Properties tab for the Integration Service.
2. In the Configuration Properties tab, select Edit.
3. In the MSExchangeProfile field, verify that the name of the Microsoft Exchange profile matches the
Microsoft Outlook profile you created.
The following table describes the custom properties that you configure for the Integration Service to send email by using SMTP:
• SMTPServerAddress. The server address for the SMTP outbound mail server, for example, powercenter.mycompany.com.
• SMTPPortNumber. The port number for the SMTP outbound mail server, for example, 25.
• SMTPFromAddress. Email address the Service Manager uses to send email, for example, PowerCenter@MyCompany.com.
• SMTPServerTimeout. Amount of time in seconds the Integration Service waits to connect to the SMTP server before it times out. Default is 20.
For more information about setting custom properties for the Integration Service, see the
PowerCenter Administrator Guide.
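For illustration, the four custom properties might be set to values such as the following; all values shown are placeholders:

SMTPServerAddress=smtp.example.com
SMTPPortNumber=25
SMTPFromAddress=powercenter@example.com
SMTPServerTimeout=20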
• Session properties. You can configure the session to send email when the session completes or fails.
• Workflow properties. You can configure the workflow to send email when the workflow is interrupted.
• Workflows or worklets. You can include an Email task anywhere in the workflow or worklet to send email
based on a condition you define.
For example, you may have a Session task in the workflow and you want the Integration Service to send an
email if more than 20 rows are dropped. To do this, you create a non-reusable Email task and add a
condition to the link that follows the Session task. The workflow sends the email when more than 20 rows
are dropped, as shown in the sketch that follows.
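A sketch of such a link condition, assuming the dropped rows surface as target failed rows and using the placeholder session name s_LoadCustomers:

$s_LoadCustomers.TgtFailedRows > 20

When the condition evaluates to TRUE, the Integration Service follows the link and runs the Email task.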
The Integration Service sends post-session email at the end of a session, after executing post-session shell
commands or stored procedures. When the Integration Service encounters an error sending the email, it
writes a message to the Log Service. It does not fail the session.
You cannot specify a non-reusable Email task you create in the Workflow or Worklet Designer for post-
session email.
You can use parameters and variables in the email user name, subject, and text. Use any parameter or
variable type that you can define in the parameter file. For example, you can use the service variable
$PMSuccessEmailUser or $PMFailureEmailUser for the email recipient. Ensure that you specify the values of
the service variables for the Integration Service that runs the session. You can also enter a parameter or
variable within the email subject or text, and define it in the parameter file.
Note: The Integration Service does not limit the type or size of attached files. However, since large
attachments can cause problems with the email system, avoid attaching excessively large files, such as
session logs generated using verbose tracing. The Integration Service generates an error message in the
email if an error occurs attaching the file.
The following table describes the email variables that you can use in a post-session email:
• %a<filename>. Attach the named file. The file must be local to the Integration Service. The following file names are valid: %a<c:\data\sales.txt> or %a</users/john/data/sales.txt>. The email does not display the full path for the file. Only the attachment file name appears in the email. Note: The file name cannot include the greater than character (>) or a line break.
• %e. Session status.
• %g. Attach the session log to the message. The Integration Service attaches a session log if you configure the session to create a log file. If you do not configure the session to create a log file or if you run a session on a grid, the Integration Service creates a temporary file in the PowerCenter Services installation directory and attaches the file. If the Integration Service does not use operating system profiles, verify that the user that starts Informatica Services has permissions on the PowerCenter Services installation directory to create a temporary log file. If the Integration Service uses operating system profiles, verify that the operating system user of the operating system profile has permissions on the PowerCenter Services installation directory to create a temporary log file.
• %s. Session name.
• %t. Source and target table details, including read throughput in bytes per second and write throughput in rows per second. The Integration Service includes all information displayed in the session detail dialog box.
• %w. Workflow name.
Note: The Integration Service ignores %a, %g, and %t when you include them in the email subject. Include these variables in
the email message only.
The following table lists the format tags you can use in an Email task:
• tab: \t
• new line: \n
Post-Session Email
You can configure post-session email to use a reusable or non-reusable Email task.
Sample Email
The following example shows user-entered text from a sample post-session email configuration that uses
variables:
Session complete.
Session name: %s
Integration Service name: %v
%l
%r
%e
%b
%c
%i
%g
Suspension Email
You can configure a workflow to send email when the Integration Service suspends the workflow. For
example, when a task fails, the Integration Service suspends the workflow and sends the suspension email.
You can fix the error and recover the workflow.
If another task fails while the Integration Service is suspending the workflow, you do not get the suspension
email again. However, the Integration Service sends another suspension email if another task fails after you
recover the workflow.
Configure suspension email on the General tab of the workflow properties. You can use service, service
process, workflow, and worklet variables in the email user name, subject, and text. For example, you can use
the service variable $PMSuccessEmailUser or $PMFailureEmailUser for the email recipient. Ensure that you
specify the values of the service variables for the Integration Service that runs the session. You can also
enter a parameter or variable within the email subject or text, and define it in the parameter file.
• $PMSuccessEmailUser. Defines the email address of the user to receive email when a session
completes successfully. Use this variable with post-session email. You can also use it to address email in
standalone Email tasks or suspension email.
• $PMFailureEmailUser. Defines the email address of the user to receive email when a session completes
with failure or when the Integration Service suspends a workflow. Use this variable with post-session or
suspension email. You can also use it to address email in standalone Email tasks.
When you use one of these service variables, the Integration Service sends email to the address configured
for the service variable. $PMSuccessEmailUser and $PMFailureEmailUser are optional process variables.
Verify that you define a variable before using it to address email.
You might use this functionality when you have an administrator who troubleshoots all failed sessions.
Instead of entering the administrator email address for each session, use the email variable
$PMFailureEmailUser as the recipient for post-session email. If the administrator changes, you can correct all
sessions by editing the $PMFailureEmailUser service variable, instead of editing the email address in each
session.
You might also use this functionality when you have different administrators for different Integration Services.
If you deploy a folder from one repository to another or otherwise change the Integration Service that runs the
session, the new service sends email to users associated with the new service when you use process
variables instead of hard-coded email addresses.
Workflow Monitor
This chapter includes the following topics:
With the Workflow Monitor, you can view details about a workflow or task in Gantt Chart view or Task view.
You can also view details about the Integration Service, nodes, and grids.
The Workflow Monitor displays workflows that have run at least once. You can run, stop, abort, and resume
workflows from the Workflow Monitor. The Workflow Monitor continuously receives information from the
Integration Service and Repository Service. It also fetches information from the repository to display historic
information.
• Navigator window. Displays monitored repositories, Integration Services, and repository objects.
• Output window. Displays messages from the Integration Service and the Repository Service.
• Properties window. Displays details about services, workflows, worklets, and tasks.
• Time window. Displays progress of workflow runs.
• Gantt Chart view. Displays details about workflow runs in chronological (Gantt Chart) format.
• Task view. Displays details about workflow runs in a report format, organized by workflow run.
The Workflow Monitor displays time relative to the time configured on the Integration Service node. For
example, a folder contains two workflows. One workflow runs on an Integration Service in the local time zone,
and the other runs on an Integration Service in a time zone two hours later. If you start both workflows at 9
a.m. local time, the Workflow Monitor displays the start time as 9 a.m. for one workflow and as 11 a.m. for the
other workflow.
Toggle between Gantt Chart view and Task view by clicking the tabs on the bottom of the Workflow Monitor.
You can view and hide the Output and Properties windows in the Workflow Monitor. To view or hide the
Output window, click View > Output. To view or hide the Properties window, click View > Properties View.
You can also dock the Output and Properties windows at the bottom of the Workflow Monitor workspace. To
dock the Output or Properties window, right-click a window and select Allow Docking. If the window is
floating, drag the window to the bottom of the workspace. If you do not allow docking, the windows float in the
Workflow Monitor workspace.
You can customize the Workflow Monitor display by configuring the maximum days or workflow runs the
Workflow Monitor shows. You can also filter tasks and Integration Services in both Gantt Chart and Task
view.
1. Select Start > Programs > Informatica PowerCenter [version] > Client > Workflow Monitor from the
Windows Start menu.
-or-
Configure the Workflow Manager to open the Workflow Monitor when you run a workflow from the
Workflow Manager.
-or-
Click Tools > Workflow Monitor from the Designer, Workflow Manager, or Repository Manager.
-or-
Click the Workflow Monitor icon on the Tools toolbar. When you use a Tools button to open the Workflow
Monitor, PowerCenter uses the same repository connection to connect to the repository and opens the
same folders.
-or-
From the Workflow Manager, right-click an Integration Service or a repository, and select Run Monitor.
You can open multiple instances of the Workflow Monitor on one machine using the Windows Start menu.
After you connect to a repository, the Workflow Monitor displays a list of Integration Services available for the
repository. The Workflow Monitor can monitor multiple repositories, Integration Services, and workflows at
the same time.
Note: If you are not connected to a repository, you can remove the repository from the Navigator. Select the
repository in the Navigator and click Edit > Delete. The Workflow Monitor displays a message verifying that
you want to remove the repository from the Navigator list. Click Yes to remove the repository. You can
connect to the repository again at any time.
To connect to an Integration Service, right-click it and select Connect. When you connect to an Integration
Service, you can view all folders that you have permission for. To disconnect from an Integration Service,
right-click it and select Disconnect. When you disconnect from an Integration Service, or when the Workflow
Monitor cannot connect to an Integration Service, the Workflow Monitor displays disconnected for the
Integration Service status.
The Workflow Monitor is resilient to temporary loss of connection to the Integration Service. If the Workflow
Monitor loses the connection to the Integration Service, LMAPI tries to reestablish the connection for the
duration of the PowerCenter Client resilience time-out period.
After the connection is reestablished, the Workflow Monitor retrieves the workflow status from the repository.
Depending on your Workflow Monitor advanced settings, you may have to reopen the workflow to view the
latest status of child tasks.
You can also ping an Integration Service to verify that it is running. Right-click the Integration Service in the
Navigator and select Ping Integration Service. You can view the ping response time in the Output window.
Note: You can also open an Integration Service in the Navigator without connecting to it. When you open an
Integration Service, the Workflow Monitor gets workflow run information stored in the repository. It does not
get dynamic workflow run information from currently running workflows.
Filtering Tasks
You can view all or some workflow tasks. You can filter tasks you do not want to view. For example, if you
want to view only Session tasks, you can hide all other tasks. You can view all tasks at any time.
When you hide an Integration Service, the Workflow Monitor hides the Integration Service from the Navigator
for the Gantt Chart and Task views. You can show the Integration Service again at any time.
You can hide unconnected Integration Services. When you hide a connected Integration Service, the
Workflow Monitor asks if you want to disconnect from the Integration Service and then filter it. You must
disconnect from an Integration Service before hiding it.
1. In the Navigator, right-click a repository to which you are connected and select Filter Integration
Services.
The Filter Integration Services dialog box appears.
2. Select the Integration Services you want to view and clear the Integration Services you want to filter.
Click OK.
If you are connected to an Integration Service that you clear, the Workflow Monitor prompts you to
disconnect from the Integration Service before filtering.
3. Click Yes to disconnect from the Integration Service and filter it.
-or-
Click No to remain connected to the Integration Service.
Tip: To filter an Integration Service in the Navigator, right-click it and select Filter Integration Service.
You can open and close folders in the Gantt Chart and Task views. When you open a folder, it opens in both
views. To open a folder, right-click it in the Navigator and select Open. Or, you can double-click the folder.
Viewing Statistics
You can view statistics about the objects you monitor in the Workflow Monitor. Click View > Statistics. The
Statistics window displays the following information:
• Number of opened repositories. Number of repositories you are connected to in the Workflow Monitor.
Viewing Properties
You can view properties for the following items:
• Tasks. You can view properties, such as task name, start time, and status.
• Sessions. You can view properties about the Session task and session run, such as mapping name and
number of rows successfully loaded. You can also view load statistics about the session run. You can also
view performance details about the session run.
• Workflows. You can view properties such as start time, status, and run type.
• Links. When you double-click a link between tasks in Gantt Chart view, you can view tasks that you
filtered out.
• Integration Services. You can view properties such as Integration Service version and startup time. You
can also view the sessions and workflows running on the Integration Service.
• Grid. You can view properties such as the name, Integration Service type, and code page of a node in the
Integration Service grid. You can view these details in the Integration Service Monitor.
• Folders. You can view properties such as the number of workflow runs displayed in the Time window.
To view properties for all objects, right-click the object and select Properties. You can right-click items in the
Navigator or the Time window in either Gantt Chart view or Task view.
To view link properties, double-click the link in the Time window of Gantt Chart view. When you view link
properties, you can double-click a task in the Link Properties dialog box to view the properties for the filtered
task.
• General. Customize general options such as the maximum number of workflow runs to display and
whether to receive messages from the Workflow Manager. See “Configuring General Options” on page
192.
• Gantt Chart view. Configure Gantt Chart view options such as workspace color, status colors, and time
format. See “Configuring Gantt Chart View Options” on page 192.
• Task view. Configure which columns to display in Task view. See “Configuring Task View Options” on
page 192.
• Advanced. Configure advanced options such as the number of workflow runs the Workflow Monitor holds
in memory for each Integration Service. See “Configuring Advanced Options” on page 192.
The following table describes the options you can configure on the General tab:
• Maximum Days. Number of tasks the Workflow Monitor displays up to a maximum number of days. Default is 5.
• Maximum Workflow Runs per Folder. Maximum number of workflow runs the Workflow Monitor displays for each folder. Default is 200.
• Receive Messages from Workflow Manager. Select to receive messages from the Workflow Manager. The Workflow Manager sends messages when you start or schedule a workflow in the Workflow Manager. The Workflow Monitor displays these messages in the Output window.
• Receive Notifications from Repository Service. Select to receive notification messages in the Workflow Monitor and view them in the Output window. You must be connected to the repository to receive notifications. Notification messages include information about objects that another user creates, modifies, or deletes. You receive notifications about folders and Integration Services. The Repository Service notifies you of the changes so you know objects you are working with may be out of date. You also receive notices posted by the user who manages the Repository Service.
The following table describes the options you can configure on the Gantt Chart tab:
• Status Color. Select a status and configure the color for the status. The Workflow Monitor displays tasks with the selected status in the colors you select. You can select two colors to display a gradient.
• Recovery Color. Configure the color for recovery sessions. The Workflow Monitor uses the status color for the body of the status bar, and it uses the recovery color as a gradient in the status bar.
The following table describes the options you can configure on the Advanced tab:
• Refresh Workflow Tasks When the Connection to the Integration Service Is Re-established. Refreshes workflow tasks when you reconnect to the Integration Service.
• Expand Workflow Runs When Opening the Latest Runs. Expands workflows when you open the latest run.
• Hide Folders/Workflows That Do Not Contain Any Runs When Filtering By Running/Schedule Runs. Hides folders or workflows under the Workflow Run column in the Time window when you filter running or scheduled tasks.
• Highlight the Entire Row When an Item Is Selected. Highlights the entire row in the Time window for selected items. When you disable this option, the Workflow Monitor highlights only the item in the Workflow Run column in the Time window.
• Open Latest 20 Runs At a Time. The number of workflow runs that open when you select Open Latest 20 Runs. Default is 20.
• Minimum Number of Workflow Runs (Per Integration Service) the Workflow Monitor Will Accumulate in Memory. Specifies the minimum number of workflow runs for each Integration Service that the Workflow Monitor holds in memory before it starts releasing older runs from memory. When you connect to an Integration Service, the Workflow Monitor fetches the number of workflow runs specified on the General tab for each folder you connect to. When the number of runs is less than the number specified in this option, the Workflow Monitor stores new runs in memory until it reaches this number.
• Standard. Contains buttons to connect to and disconnect from repositories, print, view print previews, search the workspace, show or hide the Navigator in Task view, and show or hide the Output window.
• Integration Service. Contains buttons to connect to and disconnect from Integration Services, ping Integration Services, and perform workflow operations.
• View. Contains buttons to configure time increments and show properties, workflow logs, or session logs.
• Filters. Contains buttons to display the most recent runs and to filter tasks, Integration Services, and folders.
After a toolbar appears, it displays until you exit the Workflow Monitor or hide the toolbar. You can drag each toolbar to resize or reposition it.
1. In the Navigator or Workflow Run List, select the workflow with the runs you want to see.
2. Right-click the workflow and select Open Latest 20 Runs.
Up to 20 of the latest runs appear. The menu option is disabled when the latest 20 workflow runs are already open.
You can also run part of a workflow. When you run part of a workflow, the Integration Service runs the
workflow from the selected task to the end of the workflow.
The Integration Service appends log events to the existing log events when you recover the workflow. The
Integration Service creates another session log when you recover a session.
1. In the Navigator, select the task, workflow, or worklet you want to stop or abort.
2. Click Tasks > Stop.
-or-
Click Tasks > Abort.
The Workflow Monitor displays the status of the stop or abort command in the Output window.
Scheduling Workflows
You can schedule workflows in the Workflow Monitor. You can schedule any workflow that is not configured
to run on demand. When you try to schedule a workflow that runs on demand, the Workflow Monitor displays
an error message in the Output window.
When you schedule an unscheduled workflow, the workflow uses its original schedule specified in the
workflow properties. If you want to specify a different schedule for the workflow, you must edit the scheduler
in the Workflow Manager.
If you want to view past session or workflow logs, configure the session or workflow to save logs by
timestamp. When you configure the workflow to save log files, the Integration Service creates a text file in
addition to the binary file that displays in the Log Events window. You can save log files by timestamp or by
workflow or session runs, and you can configure how many workflow or session runs to save.
When you open a session or workflow log, the Log Events window sends a request to the Log Agent. The
Log Agent retrieves logs from each node that ran the session or workflow. The Log Events window displays
the logs by node.
Related Topics:
• “Session and Workflow Logs” on page 216
The following table describes the status of workflows, worklets, and tasks in the Workflow Monitor:
• Aborted (workflows and tasks). You choose to abort the workflow or task in the Workflow Monitor or through pmcmd. The Integration Service kills the DTM process and aborts the task. You can recover an aborted workflow if you enable the workflow for recovery.
• Aborting (workflows and tasks). The Integration Service is in the process of aborting the workflow or task.
• Disabled (workflows and tasks). You select the Disabled option in the workflow or task properties. The Integration Service does not run the disabled workflow or task until you clear the Disabled option.
• Failed (workflows and tasks). The Integration Service fails the workflow or task because it encountered errors. You cannot recover a failed workflow.
• Preparing to Run (workflows). The Integration Service is waiting for an execution lock for the workflow.
• Scheduled (workflows). You schedule the workflow to run at a future date. The Integration Service runs the workflow for the duration of the schedule.
• Stopped (workflows and tasks). You choose to stop the workflow or task in the Workflow Monitor or through pmcmd. The Integration Service stops processing the task and all other tasks in its path. The Integration Service continues running concurrent tasks. You can recover a stopped workflow if you enable the workflow for recovery.
• Stopping (workflows and tasks). The Integration Service is in the process of stopping the workflow or task.
• Succeeded (workflows and tasks). The Integration Service successfully completes the workflow or task.
• Suspended (workflows and worklets). The Integration Service suspends the workflow because a task failed and no other tasks are running in the workflow. This status is available when you select the Suspend on Error option. You can recover a suspended workflow.
• Suspending (workflows and worklets). A task fails in the workflow while other tasks are still running. The Integration Service stops running the failed task and continues running tasks in other paths. This status is available when you select the Suspend on Error option.
• Terminated (workflows and tasks). The Integration Service shuts down unexpectedly while running the workflow or task. You can recover a terminated workflow if you enable the workflow for recovery.
• Terminating (workflows and tasks). The Integration Service is in the process of terminating the workflow or task.
• Waiting (workflows and tasks). The Integration Service is waiting for available resources so it can run the workflow or task. For example, you may set the maximum number of running Session and Command tasks allowed for each Integration Service process on the node to 10. If the Integration Service is already running 10 concurrent sessions, all other workflows and tasks have the Waiting status until the Integration Service is free to run more tasks.
To see a list of tasks by status, view the workflow in the Task view and filter by status. Or, click Edit > List
Tasks in Gantt Chart view.
1. Open the Gantt Chart view and click Edit > List Tasks.
2. In the List What field, select the type of task status you want to list.
For example, select Failed to view a list of failed tasks and workflows.
3. Click List to view the list.
Tip: Double-click the task name in the List Tasks dialog box to highlight the task in Gantt Chart view.
To zoom the Time window in Gantt Chart view, click View > Zoom, and then select the time increment. You
can also select the time increment from the Zoom button on the toolbar.
Performing a Search
Use the search tool in the Gantt Chart view to search for tasks, workflows, and worklets in all repositories you
connect to. The Workflow Monitor searches for the word you specify in task names, workflow names, and
worklet names. You can highlight the task in Gantt Chart view by double-clicking the task after searching.
To perform a search:
1. Open the Gantt Chart view and click Edit > Find.
The Find Object dialog box appears.
2. In the Find What field, enter the keyword you want to find.
3. Click Find Now.
The Workflow Monitor displays a list of tasks, workflows, and worklets that match the keyword.
Tip: Double-click the task name in the Find Object dialog box to highlight the task in Gantt Chart view.
• Workflow run list. The list of workflow runs. The workflow run list contains folder, workflow, worklet, and
task names. The Workflow Monitor displays workflow runs chronologically with the most recent run at the
top. It displays folders and Integration Services alphabetically.
• Filter tasks. Use the Filter menu to select the tasks you want to display or hide.
• Hide and view columns. Hide or view an entire column in Task view.
• Hide and view the Navigator. You can hide the Navigator in Task view. Click View > Navigator to hide or
view the Navigator.
To view the tasks in Task view, select the Integration Service you want to monitor in the Navigator.
• By task type. You can filter out tasks you do not want to view. For example, if you want to view only
Session tasks, you can filter out all other tasks.
• By nodes in the Navigator. You can filter the workflow runs in the Time window by selecting different
nodes in the Navigator. For example, when you select a repository name in the Navigator, the Time
window displays all workflow runs that ran on the Integration Services registered to that repository. When
you select a folder name in the Navigator, the Time window displays all workflow runs in that folder.
• By the most recent runs. To display by the most recent runs, click Filters > Most Recent Runs and select
the number of runs you want to display.
• By Time window columns. You can click Filters > Auto Filter and filter by properties you specify in the
Time window columns.
• Repository Service details. View information about repositories, such as the number of connected
Integration Services.
• Integration Service properties. View information about the Integration Service, such as the Integration
Service Version. You can also view system resources that running workflows consume, such as the
system swap usage at the time of the running workflow.
• Repository folder details. View information about a repository folder, such as the folder owner.
• Workflow run properties. View information about a workflow, such as the start and end time.
• Worklet run properties. View information about a worklet, such as the execution nodes on which the
worklet is run.
• Command task run properties. View the information about Command tasks in a running workflow, such
as the start and end time.
• Session task run properties. View information about Session tasks in a running workflow, such as
details on session failures.
• Performance details. View counters that help you understand the session and mapping efficiency, such
as information on the data cache size for an Aggregator transformation.
Repository Service Details
To view details about a repository, right-click on the repository and choose Properties.
The following table describes the attributes that appear in the Repository Details area:
• Is Opened. Yes if you are connected to the repository. Otherwise, the value is No.
• User Name. Name of the user connected to the repository. Appears if you are connected to the repository.
• Number of Connected Integration Services. Number of Integration Services you are connected to in the Workflow Monitor. Appears if you are connected to the repository.
The following table describes the attributes that appear in the Integration Service Details area:
• Integration Service Version. PowerCenter version and build. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Integration Service Mode. Data movement mode of the Integration Service. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Integration Service Operating Mode. Operating mode of the Integration Service. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Startup Time. Time the Integration Service started, in the format MM/DD/YYYY HH:MM:SS AM|PM. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Last Updated Time. Time the Integration Service was last updated, in the format MM/DD/YYYY HH:MM:SS AM|PM. Appears if you are connected to the Integration Service in the Workflow Monitor.
• Grid Assigned. Grid the Integration Service is assigned to. Appears if the Integration Service is assigned to a grid and you are connected to the Integration Service in the Workflow Monitor.
• Node(s). Names of the nodes configured to run Integration Service processes. Appears if you are connected to the Integration Service in the Workflow Monitor.
To view the Integration Service Monitor, right-click an Integration Service and choose Properties. The
Integration Service Monitor area appears if you are connected to an Integration Service. You can view the
Integration Service type and code page for each node the Integration Service is running on. To view the tool
tip for the Integration Service type and code page, move the pointer over the node name.
The following table describes the attributes that appear in the Integration Service Monitor area:
• Node Name. Name of the node on which the Integration Service is running.
• Task/Partition. Name of the session and partition that is running, or the name of the Command task that is running.
• CPU %. For a node, the percent of CPU usage of processes running on the node. For a task, the percent of CPU usage by the task process.
• Memory Usage. For a node, the memory usage of processes running on the node. For a task, the memory usage of the task process.
• Swap Usage. Amount of swap space used by processes running on the node.
The following table describes the attributes that appear in the Folder Details area:
• Number of Workflow Runs Within Time Window. Number of workflows that have run in the time window during which the Workflow Monitor displays workflow statistics.
• Number of Fetched Workflow Runs. Number of workflow runs displayed during the time window.
• Workflows Fetched Between. Time period during which the Integration Service fetched the workflows. Appears as DD/MM/YYYY HH:MM:SS and DD/MM/YYYY HH:MM:SS.
When you view workflow properties, the following areas appear in the Properties window:
Workflow Details
To view workflow details in the Properties window, right-click on a workflow and choose Get Run Properties.
In the Properties window, you can click Get Workflow Log to view the Log Events window for the workflow.
The following table describes the attributes that appear in the Workflow Details area:
• Concurrent Type. -
• OS Profile. Name of the operating system profile assigned to the workflow. The value is empty if an operating system profile is not assigned to the workflow.
• Deleted. Yes if the workflow is deleted from the repository. Otherwise, the value is No.
Session Statistics
The Session Statistics area displays information about sessions, such as the session run time and the
number of rows loaded to the targets.
The following table describes the attributes that appear in the Session Statistics area:
• Source Success Rows. Number of rows the Integration Service successfully read from the source.
• Source Failed Rows. Number of rows the Integration Service failed to read from the source.
• Target Success Rows. Number of rows the Integration Service wrote to the target.
• Target Failed Rows. Number of rows the Integration Service failed to write to the target.
When you view worklet properties, the following areas appear in the Properties window:
Worklet Details
To view worklet details in the Properties window, right-click on a worklet and choose Get Run Properties.
The following table describes the attributes that appear in the Worklet Details area:
• Integration Service Name. Name of the Integration Service assigned to the workflow associated with the worklet.
The following table describes the attributes that appear in the Task Details area:
• Integration Service Name. Name of the Integration Service assigned to the workflow associated with the Command task.
When you view session task properties, the following areas display in the Properties window:
When you load data to a target with multiple groups, such as an XML target, the Integration Service provides
session details for each group.
The following table describes the attributes that appear in the Failure Information area:
The following table describes the attributes that appear in the Task Details area:
• Integration Service Name. Name of the Integration Service assigned to the workflow associated with the session.
• Source Success Rows. Number of rows the Integration Service successfully read from the source.
• Source Failed Rows. Number of rows the Integration Service failed to read from the source.
• Target Success Rows (1). Number of rows the Integration Service wrote to the target.
• Target Failed Rows. Number of rows the Integration Service failed to write to the target.
1. For a recovery session, this value lists the number of rows the Integration Service processed after recovery. To determine the number of rows processed before recovery, see the session log.
The following table describes the attributes that appear in the Source/Target Statistics area:
• Transformation Name. Name of the source qualifier instance or the target instance in the mapping. If you create multiple partitions in the source or target, the Instance Name displays the partition number. If the source or target contains multiple groups, the Instance Name displays the group name.
• Applied Rows. For sources, the number of rows the Integration Service successfully read from the source. For targets, the number of rows the Integration Service successfully applied to the target.
For example, you have a target table with one column called SALES_ID and five rows that contain the values 1, 2, 3, 2, and 2. You have a source table with one column called SALES_ID_IN and five rows that contain the values 1, 2, 3, 4, and 5. You mark rows for update where SALES_ID_IN is 2. The Integration Service applies one row, which updates three rows in the target. If you mark rows for update where SALES_ID_IN is 4, the Integration Service applies one row but does not update any rows in the target, because the target does not contain rows with SALES_ID equal to 4.
For a recovery session, this value lists the number of rows that the Integration Service affected or applied to the target after recovery. To determine the number of rows processed before recovery, see the session log.
• Affected Rows. For sources, the number of rows the Integration Service successfully read from the source. For targets, the number of rows affected by the specified operation. For example, you have a table with one column called SALES_ID and five rows that contain the values 1, 2, 3, 2, and 2. You mark rows for update where SALES_ID is 2. The Integration Service updates three rows, even though there was one update request. If you mark rows for update where SALES_ID is 4, the Integration Service updates no rows.
For a recovery session, this value lists the number of rows that the Integration Service affected or applied to the target after recovery. To determine the number of rows processed before recovery, see the session log.
• Rejected Rows. Number of rows the Integration Service dropped when reading from the source, or the number of rows the Integration Service rejected when writing to the target.
• Throughput (Rows/Sec). Rate at which the Integration Service read rows from the source or wrote rows to the target, in rows per second.
• Throughput (Bytes/Sec). Estimated rate at which the Integration Service read data from the source or wrote data to the target, in bytes per second. Throughput (Bytes/Sec) is based on Throughput (Rows/Sec) and the row size. The row size is based on the number of columns the Integration Service read from the source and wrote to the target, the data movement mode, column metadata, and whether you enabled high precision for the session. The calculation is not based on the actual data size in each row.
• Bytes. Total bytes processed in the PowerCenter Integration Service memory for the source and target.
• Last Error Code. Error message code of the most recent error message written to the session log. If you view details after the session completes, this field displays the last error code.
• Last Error Message. Most recent error message written to the session log. If you view details after the session completes, this field displays the last error message.
• Start Time. Time the Integration Service started to read from the source or write to the target. The Workflow Monitor displays time relative to the Integration Service.
• End Time. Time the Integration Service finished reading from the source or writing to the target. The Workflow Monitor displays time relative to the Integration Service.
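The difference between applied and affected rows can be easier to see in code. The following Python sketch replays the SALES_ID example above; the data and function are hypothetical illustrations, not Integration Service code.

# Replays the SALES_ID example above. Hypothetical illustration only.
target_sales_ids = [1, 2, 3, 2, 2]

def one_update_request(sales_id_in):
    """Count applied and affected rows for a single update request."""
    applied = 1  # the Integration Service applies the one marked row
    affected = sum(1 for value in target_sales_ids if value == sales_id_in)
    return applied, affected

print(one_update_request(2))  # (1, 3): one applied row updates three target rows
print(one_update_request(4))  # (1, 0): one applied row, no target rows updated

# Throughput (Bytes/Sec) is estimated analogously from the row rate:
# bytes_per_sec = rows_per_sec * estimated_row_size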
Partition Details
The Partition Details area displays information about partitions in a session. When you create multiple
partitions in a session, the Integration Service provides session details for each partition. Use these details to
determine if the data is evenly distributed among the partitions. For example, if the Integration Service moves
more rows through one target partition than another, or if the throughput is not evenly distributed, you might
want to adjust the data range for the partitions.
The following table describes the attributes that appear in the Partition Details area:
• CPU %. Percent of the CPU that the partition consumes during the current session run.
• CPU Seconds. Amount of process time in seconds that the CPU takes to process the data in the partition during the current session run.
• Memory Usage. Amount of memory the partition consumes during the current session run.
Performance Details
The performance details provide counters that help you understand the session and mapping efficiency. Each
source qualifier and target definition appears in the performance details, along with counters that display
performance information about each transformation. You can view session performance details in the
Workflow Monitor or in the performance details file.
By evaluating the final performance details, you can determine where session performance slows down. The
Workflow Monitor also provides session-specific details that can help tune the following memory settings:
1. Right-click a session in the Workflow Monitor and choose Get Run Properties.
2. Click the Performance area in the Properties window.
The following table describes the attributes that appear in the Performance area:
When you create multiple partitions, the Performance Area displays a column for each partition. The
columns display the counter values for each partition.
3. Click OK.
Source Qualifier, Normalizer, and target transformations have additional counters that indicate the efficiency
of data moving into and out of buffers. Use these counters to locate performance bottlenecks.
Some transformations have counters specific to their functionality. For example, each Lookup transformation
has a counter that indicates the number of rows stored in the lookup cache.
When you view the performance details file, the first column displays the transformation name as it appears
in the mapping, the second column contains the counter name, and the third column holds the resulting
number or efficiency percentage. If you use a Joiner transformation, the first column shows two instances of
the Joiner transformation:
• <Joiner transformation> [M]. Displays performance details about the master pipeline of the Joiner
transformation.
• <Joiner transformation> [D]. Displays performance details about the detail pipeline of the Joiner
transformation.
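As a rough sketch of that three-column layout, the following Python fragment splits each line of a performance details file into transformation name, counter, and value. The file path and whitespace handling are assumptions for illustration, not a product-defined format.

def read_performance_details(path):
    """Yield (transformation, counter, value) tuples from a details file."""
    with open(path) as details:
        for line in details:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip blank or header lines
            *name, counter, value = parts
            yield " ".join(name), counter, value

# Hypothetical usage: pick out the Joiner master-pipeline counters.
# for name, counter, value in read_performance_details("s_m_PhoneList.perf"):
#     if name.endswith("[M]"):
#         print(name, counter, value)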
When you create multiple partitions, the Integration Service generates one set of counters for each partition.
The following performance counters illustrate two partitions for an Expression transformation:
Note: When you increase the number of partitions, the number of aggregate or rank input rows may be
different from the number of output rows from the previous transformation.
The following table describes the Aggregator and Rank transformation counters that may appear in the
Session Performance Details area or in the performance details file:
• Aggregator/Rank_readfromcache. Number of times the Integration Service read from the index or data cache.
• Aggregator/Rank_writetocache. Number of times the Integration Service wrote to the index or data cache.
• Aggregator/Rank_readfromdisk. Number of times the Integration Service read from the index or data file on the local disk, instead of using cached data.
• Aggregator/Rank_writetodisk. Number of times the Integration Service wrote to the index or data file on the local disk, instead of using cached data.
The following table describes the Lookup transformation counters that may appear in the Session
Performance Details area or in the performance details file:
The following table describes the master and detail Joiner transformation counters that may appear in the
Session Performance Details area or in the performance details file:
• Joiner_inputMasterRows. Number of rows the master source passed into the transformation.
• Joiner_inputDetailRows. Number of rows the detail source passed into the transformation.
• Joiner_readfromcache. Number of times the Integration Service read from the index or data cache.
• Joiner_writetocache. Number of times the Integration Service wrote to the index or data cache.
• Joiner_readfromdisk. Number of times the Integration Service read from the index or data files on the local disk, instead of using cached data. The Integration Service generates this counter when you use sorted input for the Joiner transformation.
• Joiner_writetodisk. Number of times the Integration Service wrote to the index or data files on the local disk, instead of using cached data. The Integration Service generates this counter when you use sorted input for the Joiner transformation.
• Joiner_readBlockFromDisk. Number of times the Integration Service read from the index or data files on the local disk, instead of using cached data. The Integration Service generates this counter when you do not use sorted input for the Joiner transformation.
• Joiner_writeBlockToDisk. Number of times the Integration Service wrote to the index or data cache. The Integration Service generates this counter when you do not use sorted input for the Joiner transformation.
• Joiner_seekToBlockInDisk. Number of times the Integration Service accessed the index or data files on the local disk. The Integration Service generates this counter when you do not use sorted input for the Joiner transformation.
• Joiner_insertInDetailCache. Number of times the Integration Service wrote to the detail cache. The Integration Service generates this counter if you join data from a single source and when you use sorted input for the Joiner transformation.
• Joiner_duplicaterows. Number of duplicate rows the Integration Service found in the master relation.
• Joiner_duplicaterowsused. Number of times the Integration Service used the duplicate rows in the master relation.
The following table describes the counters for all other transformations that may appear in the Session
Performance Details area or in the performance details file:
If you have multiple source qualifiers and targets, evaluate them as a whole. For source qualifiers and
targets, a high value is considered 80-100 percent. Low is considered 0-20 percent.
Log events for workflows include information about tasks performed by the Integration Service, workflow
processing, and workflow errors. Log events for sessions include information about the tasks performed by
the Integration Service, session errors, and load summary and transformation statistics for the session.
You can view log events for workflows with the Log Events window in the Workflow Monitor. The Log Events
window displays information about log events including severity level, message code, run time, workflow
name, and session name. For session logs, you can set the tracing level to log more information. All log
events display severity regardless of tracing level.
The following steps describe how the Log Manager processes session and workflow logs:
1. The Integration Service writes binary log files on the node. It sends information about the sessions and
workflows to the Log Manager.
2. The Log Manager stores information about workflow and session logs in the domain configuration
database. The domain configuration database stores information such as the path to the log file location,
the node that contains the log, and the Integration Service that created the log.
3. When you view a session or workflow in the Log Events window, the Log Manager retrieves the
information from the domain configuration database to determine the location of the session or workflow
logs.
4. The Log Manager dispatches a Log Agent to retrieve the log events on each node to display in the Log
Events window.
To access log events for more than the last workflow run, you can configure sessions and workflows to
archive logs by time stamp. You can also configure a workflow to produce text log files. You can archive text
log files by run or by time stamp. When you configure the workflow or session to produce text log files, the
Integration Service creates the binary log and the text log file.
You can limit the size of session logs for long-running and real-time sessions. You can limit the log size by
configuring a maximum time frame or a maximum file size. When a log reaches the maximum size, the
Integration Service starts a new log.
Log Events
You can view log events in the Workflow Monitor Log Events window and you can view them as text files. The
Log Events window displays log events in a tabular format.
Log Codes
Use log events to determine the cause of workflow or session problems. To resolve problems, locate the
relevant log codes and text prefixes in the workflow and session log.
The Integration Service precedes each workflow and session log event with a thread identification, a code,
and a number. The code defines a group of messages for a process. The number defines a message. The
message can provide general information or it can be an error message.
Some log events are embedded within other log events. For example, a code CMN_1039 might contain
informational messages from Microsoft SQL Server.
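As an illustration, the sample session log line shown later in this chapter can be split into its thread identification, code, and message number with a short Python sketch; the regular expression is an assumption about the layout, not a product-defined format.

import re

line = ("DIRECTOR> TM_6703 Session [s_PromoItems] is run by "
        "32-bit Integration Service [sapphire], version [8.1.0], build [0329].")

prefix = re.match(r"(\w+)> ([A-Z0-9]+)_(\d+) (.*)", line)
if prefix:
    thread, code, number, message = prefix.groups()
    print(thread)  # DIRECTOR: thread identification
    print(code)    # TM: group of messages for a process
    print(number)  # 6703: identifies the message within the group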
Message Severity
The Log Events window categorizes workflow and session log events into severity levels. It prioritizes error
severity based on the embedded message type. The error severity level appears with log events in the Log
Events window in the Workflow Monitor. It also appears with messages in the workflow and session log files.
Note: If you cannot view all the workflow log messages when the error severity level is at warning, change
the error severity level of the workflow log. Change the log level from warning to info in the advanced
properties of the PowerCenter Integration Service process.
The following table describes the message severity levels:
• FATAL. Fatal error occurred. Fatal error messages have the highest severity level.
• ERROR. Indicates the service failed to perform an operation or respond to a request from a client application. Error messages have the second highest severity level.
• WARNING. Indicates the service is performing an operation that may cause an error, which can cause repository inconsistencies. Warning messages have the third highest severity level.
• INFO. Indicates the service is performing an operation that does not indicate errors or problems. Information messages have the third lowest severity level.
• TRACE. Indicates service operations at a more specific level than Information. Trace messages generally record message sizes. Trace messages have the second lowest severity level.
• DEBUG. Indicates service operations at the thread level. Debug messages generally record the success or failure of service operations. Debug messages have the lowest severity level.
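A minimal Python sketch of filtering parsed log events by this severity ordering follows; the event tuples and the ordering list are assumptions for illustration (the message codes shown appear elsewhere in this chapter).

SEVERITY_ORDER = ["FATAL", "ERROR", "WARNING", "INFO", "TRACE", "DEBUG"]

def at_least(events, level):
    """Keep events whose severity is at or above the given level."""
    cutoff = SEVERITY_ORDER.index(level)
    return [e for e in events if SEVERITY_ORDER.index(e[0]) <= cutoff]

events = [("INFO", "TM_6703"), ("ERROR", "CMN_1039"), ("TRACE", "TM_6703")]
print(at_least(events, "WARNING"))  # [('ERROR', 'CMN_1039')]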
Writing Logs
The Integration Service writes the workflow and session logs as binary files on the node where the service
process runs. It adds a .bin extension to the log file name you configure in the session and workflow
properties.
When you run a session on a grid, the Integration Service creates one session log for each DTM process.
The log file on the primary node has the configured log file name. The log file on a worker node has
a .w<Partition Group Id> extension:
<session or workflow name>.w<Partition Group ID>.bin
For example, if you run the session s_m_PhoneList on a grid with three nodes, the session log files use the
names, s_m_PhoneList.bin, s_m_PhoneList.w1.bin, and s_m_PhoneList.w2.bin.
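The following Python sketch generates the expected file names for the three-node example above; the helper function is hypothetical and not part of the product.

def grid_session_log_names(session_name, node_count):
    """Binary session log names for a grid run with one DTM process per node."""
    names = [f"{session_name}.bin"]  # primary node uses the configured name
    names += [f"{session_name}.w{n}.bin" for n in range(1, node_count)]
    return names

print(grid_session_log_names("s_m_PhoneList", 3))
# ['s_m_PhoneList.bin', 's_m_PhoneList.w1.bin', 's_m_PhoneList.w2.bin']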
When you rerun a session or workflow, the Integration Service overwrites the binary log file unless you
choose to save workflow logs by time stamp. When you save workflow logs by time stamp, the Integration
Service adds a time stamp to the log file name and archives the logs.
To view log files for more than one run, configure the workflow or session to archive log files.
A workflow or session continues to run even if there are errors while writing to the log file after the workflow
or session initializes. If the log file is incomplete, the Log Events window cannot display all the log events.
The Integration Service starts a new log file for each workflow and session run. When you recover a workflow
or session, the Integration Service appends a recovery.time stamp extension to the file name for the recovery
run.
For real-time sessions, the Integration Service overwrites the log file when you restart a session in cold start
mode or when you restart a JMS or WebSphere MQ session that does not have recovery data. The
Integration Service appends the log file when you restart a JMS or WebSphere MQ session that has recovery
data.
To convert the binary file to a text file, use the infacmd convertLog or the infacmd GetLog command.
The Session Log Interface lets you pass session event messages, but not workflow event messages, to an
external shared library.
You can perform the following tasks in the Log Events window:
• Save log events to file. Click Save As to save log events as a binary, text, or XML file.
• Copy log event text to a file. Click Copy to copy one or more log events and paste them into a text file.
• Sort log events. Click a column heading to sort log events.
• Search for log events. Click Find to search for text in log events.
• Refresh log events. Click Refresh to view updated log events during a workflow or session run.
Note: When you view a log larger than 2 GB, the Log Events window displays a warning that the file might be
too large for system memory. If you continue, the Log Events window might shut down unexpectedly.
By default, the Integration Service writes log files based on the Integration Service code page. If you enable
the LogInUTF8 option in the Advanced Properties for the Integration Service, the Integration Service writes to
the logs using the UTF-8 character set. If you configure the Integration Service to run in ASCII mode, it sorts
all character data using a binary sort order even if you select a different sort order in the session properties.
• Write Backward Compatible Log File. Select this option to create a text file for workflow or session logs.
If you do not select the option, the Integration Service creates the binary log only.
• Log File Directory. The directory where you want the log file created. By default, the Integration Service
writes the workflow log file in the directory specified in the service process variable, $PMWorkflowLogDir.
It writes the session log file in the directory specified in the service process variable, $PMSessionLogDir.
If you enter a directory name that the Integration Service cannot access, the workflow or session fails.
The following table shows the default location for each type of log file and the associated service process
variable:
• Workflow log. Default directory is the location specified in the service process variable $PMWorkflowLogDir.
• Session log. Default directory is the location specified in the service process variable $PMSessionLogDir.
Note: The Integration Service stores the workflow and session log names in the domain configuration
database. If you want to use Unicode characters in the workflow or session log file names, the domain
configuration database must be a Unicode database.
To create a log file for more than one workflow or session run, configure the workflow or session to archive
logs in the following ways:
• By run. Archive text log files by run. Configure a number of text logs to save.
• By time stamp. Archive binary logs and text files by time stamp. The Integration Service saves an
unlimited number of logs and labels them by time stamp. When you configure the workflow or session to
archive by time stamp, the Integration Service always archives binary logs.
Note: When you run concurrent workflows with the same instance name, the Integration Service appends a
timestamp to the log file name, even if you configure the workflow to archive logs by run.
The Integration Service uses the following naming convention to create historical logs:
<session or workflow name>.n
where n=0 for the first historical log. The variable increments by one for each workflow or session run.
If you run a session on a grid, the worker service processes use the following naming convention for a
session:
<session name>.n.w<DTM ID>
When you archive logs by time stamp, the Integration Service appends a time stamp in the format
yyyymmddhhmi to the log file name, where:
• yyyy = year
• mm = month, ranging from 01-12
• dd = day, ranging from 01-31
• hh = hour, ranging from 00-23
• mi = minute, ranging from 00-59
To prevent filling the log directory, periodically purge or back up log files when using the time stamp option.
If you run a session on a grid, the worker service processes use the following naming convention for
sessions:
<session name>.yyyymmddhhmi.w<DTM ID>
<session name>.yyyymmddhhmi.w<DTM ID>.bin
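A short Python sketch of the two archive naming conventions, by run number and by time stamp, follows; the function and example values are illustrative assumptions.

from datetime import datetime

def archived_log_name(name, run=None, when=None, dtm_id=None, binary=False):
    """Historical log name by run number (.n) or by time stamp (yyyymmddhhmi)."""
    base = f"{name}.{when.strftime('%Y%m%d%H%M')}" if when else f"{name}.{run}"
    if dtm_id is not None:
        base += f".w{dtm_id}"  # worker service process on a grid
    return base + (".bin" if binary else "")

print(archived_log_name("s_m_PhoneList", run=0))
# s_m_PhoneList.0
print(archived_log_name("s_m_PhoneList", when=datetime(2016, 12, 1, 9, 30),
                        dtm_id=2, binary=True))
# s_m_PhoneList.201612010930.w2.bin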
When you archive text log files, view the logs by navigating to the workflow or session log folder and viewing
the files in a text reader. When you archive binary log files, you can view the logs by navigating to the
workflow or session log folder and importing the files in the Log Events window. You can archive binary files
when you configure the workflow or session to archive logs by time stamp. You do not have to create text log
files to archive binary files. You might need to archive binary files to send to Informatica Global Customer
Support for review.
Configure the session log to roll over to a new file after the log file reaches a maximum size, or after a
maximum period of time. The Integration Service saves the previous log files.
You can configure the maximum number of partial log files to save for the session. The Integration Service
saves one more log file than the number of files you configure. The Integration Service does not purge the
first session log file, because the first log file contains details about the session initialization.
The Integration Service names each partial session log file with the following syntax:
<session log file>.part.n
Configure the following attributes on the Advanced settings of the Config Object tab:
• Session Log File Max Size. The maximum number of megabytes for a log file. Configure a maximum size to enable log file rollover by file size. When the log file reaches the maximum size, the Integration Service creates a new log file. Default is zero.
• Session Log File Max Time Period. The maximum number of hours that the Integration Service writes to a session log. Configure a maximum time period to enable log file rollover by time. When the period is over, the Integration Service creates another log file. Default is zero.
• Maximum Partial Session Log Files. Maximum number of session log files to save. The Integration Service overwrites the oldest partial log file if the number of log files has reached the limit. If you configure a maximum of zero, the number of session log files is unlimited. Default is one.
Note: You can configure a combination of log file maximum size and log file maximum time. You must
configure one of the properties to enable session log file rollover. If you configure only maximum partial
session log files, log file rollover is not enabled.
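The retention rule, keeping the first log plus the most recent partial logs, can be sketched as follows; the names and the simplified purge logic are assumptions for illustration.

def partial_logs_to_keep(session_log, newest_part, max_partial):
    """Keep the first log (initialization details) plus the newest partials."""
    partials = [f"{session_log}.part.{n}" for n in range(1, newest_part + 1)]
    if max_partial == 0:
        return [session_log] + partials  # zero means unlimited
    return [session_log] + partials[-max_partial:]

print(partial_logs_to_keep("s_m_PhoneList.log", newest_part=5, max_partial=2))
# ['s_m_PhoneList.log', 's_m_PhoneList.log.part.4', 's_m_PhoneList.log.part.5']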
• Write Backward Compatible Workflow Log File. Writes workflow logs to a text log file. Select this option if you want to create a log file in addition to the binary log for the Log Events window.
• Workflow Log File Name. Enter a file name or a file name and directory. You can use a service, service process, or user-defined workflow or worklet variable for the workflow log file name. The Integration Service appends this value to the value entered in the Workflow Log File Directory field. For example, if you have $PMWorkflowLogDir\ in the Workflow Log File Directory field and enter "logname.txt" in the Workflow Log File Name field, the Integration Service writes logname.txt to the $PMWorkflowLogDir\ directory.
• Workflow Log File Directory. Location for the workflow log file. By default, the Integration Service writes the log file in the process variable directory, $PMWorkflowLogDir. If you enter a full directory and file name in the Workflow Log File Name field, clear this field.
• Save Workflow Log By. You can create workflow logs according to the following options:
- By Runs. The Integration Service creates a designated number of workflow logs. Configure the number of workflow logs in the Save Workflow Log for These Runs option. The Integration Service does not archive binary logs.
- By Time Stamp. The Integration Service creates a log for all workflows, appending a time stamp to each log. When you save workflow logs by time stamp, the Integration Service archives binary logs and workflow log files.
You can also use the $PMWorkflowLogCount service variable to create the configured number of workflow logs for the Integration Service.
• Save Workflow Log for These Runs. Number of historical workflow logs you want the Integration Service to create. The Integration Service creates the number of historical logs you specify, plus the most recent workflow log.
3. Click OK.
• Write Backward Compatible Session Log File. Writes session logs to a text log file. Select this option if you want to create a log file in addition to the binary log for the Log Events window.
• Session Log File Name. By default, the Integration Service uses the session name for the log file name: s_<mapping name>.log. For a debug session, it uses DebugSession_<mapping name>.log. Enter a file name, a file name and directory, or use the $PMSessionLogFile session parameter. The Integration Service appends information in this field to the value entered in the Session Log File Directory field. For example, if you have "C:\session_logs\" in the Session Log File Directory field and enter "logname.txt" in the Session Log File Name field, the Integration Service writes logname.txt to the C:\session_logs\ directory. You can also use the $PMSessionLogFile session parameter to represent the name of the session log or the name and location of the session log.
• Session Log File Directory. Location for the session log file. By default, the Integration Service writes the log file in the process variable directory, $PMSessionLogDir. If you enter a full directory and file name in the Session Log File Name field, clear this field.
• Save Session Log By. You can create session logs according to the following options:
- Session Runs. The Integration Service creates a designated number of session log files. Configure the number of session logs in the Save Session Log for These Runs option. The Integration Service does not archive binary logs.
- Session Time Stamp. The Integration Service creates a log for all sessions, appending a time stamp to each log. When you save a session log by time stamp, the Integration Service archives the binary logs and text log files.
You can also use the $PMSessionLogCount service variable to create the configured number of session logs for the Integration Service.
• Save Session Log for These Runs. Number of historical session logs you want the Integration Service to create. The Integration Service creates the number of historical logs you specify, plus the most recent session log.
5. Click OK.
Workflow Logs
Workflow logs contain information about the workflow runs, such as the workflow name. You can view
workflow log events in the Log Events window of the Workflow Monitor. You can also create an XML, text, or
binary log file for workflow log events.
Session Logs
Session logs contain information about the tasks that the Integration Service performs during a session, plus
load summary and transformation statistics. By default, the Integration Service creates one session log for
each session it runs. If a workflow contains multiple sessions, the Integration Service creates a separate
session log for each session in the workflow. When you run a session on a grid, the Integration Service
creates one session log for each DTM process.
Related Topics:
• “Log Options Settings” on page 56
The session log file includes the Integration Service version and build number.
DIRECTOR> TM_6703 Session [s_PromoItems] is run by 32-bit Integration Service
[sapphire], version [8.1.0], build [0329].
Tracing Levels
The amount of detail that logs contain depends on the tracing level that you set. You can configure tracing
levels for each transformation or for the entire session. By default, the Integration Service uses tracing levels
configured in the mapping.
Setting a tracing level for the session overrides the tracing levels configured for each transformation in the
mapping. If you select a normal tracing level or higher, the Integration Service writes row errors into the
session log.
Set the tracing level on the Config Object tab in the session properties.
The following table describes the session log tracing levels:
• None. The Integration Service uses the tracing level set in the mapping.
• Terse. The Integration Service logs initialization information, error messages, and notification of rejected data.
• Normal. The Integration Service logs initialization and status information, errors encountered, and skipped rows due to transformation row errors. Summarizes session results, but not at the level of individual rows.
• Verbose Initialization. In addition to normal tracing, the Integration Service logs additional initialization details, names of index and data files used, and detailed transformation statistics.
• Verbose Data. In addition to verbose initialization tracing, the Integration Service logs each row that passes into the mapping. Also notes where the Integration Service truncates string data to fit the precision of a column and provides detailed transformation statistics. When you configure the tracing level to verbose data, the Integration Service writes row data for all rows in a block when it processes a transformation.
You can also enter tracing levels for individual transformations in the mapping. When you enter a tracing
level in the session properties, you override tracing levels configured for transformations in the mapping.
Log Events
The Integration Service generates log events when you run a session or workflow. You can view log events in
the following types of log files:
1. If you do not know the session or workflow log file name and location, check the Log File Name and Log
File Directory attributes on the Session or Workflow Properties tab.
If you are running the Integration Service on UNIX and the binary log file is not accessible on the
Windows machine where the PowerCenter client is running, you can transfer the binary log file to the
Windows machine using FTP.
2. In the Workflow Monitor, click Tools > Import Log.
3. Navigate to the session or workflow log file directory.
4. Select the binary log file you want to view.
5. Click Open.
1. If you do not know the session or workflow log file name and location, check the Log File Name and Log
File Directory attributes on the Session or Workflow Properties tab.
2. Navigate to the session or workflow log file directory.
The session and workflow log file directory contains the text log files and the binary log files. If you
archive log files, check the file date to find the latest log file for the session.
3. Open the log file in any text editor.
General Tab
The following table describes settings on the General tab:
• Rename. You can enter a new name for the session task with the Rename button.
• Description. You can enter a description for the session task in the Description field.
• Mapping name. Name of the mapping associated with the session task.
• Fail Parent if This Task Fails. Fails the parent worklet or workflow if this task fails. Appears only in the Workflow Designer.
• Fail Parent if This Task Does Not Run. Fails the parent worklet or workflow if this task does not run. Appears only in the Workflow Designer.
• Treat the Input Links as AND or OR. Runs the task when all or one of the input link conditions evaluates to True. Appears only in the Workflow Designer.
Properties Tab
On the Properties tab, you can configure the following settings:
• General Options. General Options settings allow you to configure session log file name, session log file
directory, parameter file name and other general session settings.
• Performance. The Performance settings allow you to increase memory size, collect performance details,
and set configuration parameters.
The following table describes the General Options settings:
• Session Log File Name. Enter a file name, a file name and directory, or use the $PMSessionLogFile session parameter. The Integration Service appends information in this field to the value entered in the Session Log File Directory field. For example, if you have "C:\session_logs\" in the Session Log File Directory field and enter "logname.txt" in the Session Log File Name field, the Integration Service writes logname.txt to the C:\session_logs\ directory.
• Session Log File Directory. Location for the session log file. By default, the Integration Service writes the log file in the service process variable directory, $PMSessionLogDir. If you enter a full directory and file name in the Session Log File Name field, clear this field.
• Parameter File Name. The name and directory for the parameter file. Use the parameter file to define session parameters and override values of mapping parameters and variables. You can enter a workflow or worklet variable as the session parameter file name if you configure a workflow to run concurrently and you want to use different parameter files for the sessions in each workflow run instance.
• Enable Test Load. You can configure the Integration Service to perform a test load. With a test load, the Integration Service reads and transforms data without writing to targets. The Integration Service generates all session files and performs all pre- and post-session functions, as if running the full session. Enter the number of source rows you want to test in the Number of Rows to Test field.
• Number of Rows to Test. Enter the number of source rows you want the Integration Service to test load.
• $Source Connection Value. The database connection you want the Integration Service to use for the $Source connection variable. You can select a relational or application connection object, or you can use the $DBConnectionName or $AppConnectionName session parameter if you want to define the connection value in a parameter file.
• $Target Connection Value. The database connection you want the Integration Service to use for the $Target connection variable. You can select a relational or application connection object, or you can use the $DBConnectionName or $AppConnectionName session parameter if you want to define the connection value in a parameter file.
• Treat Source Rows As. Indicates how the Integration Service treats all source rows. If the mapping for the session contains an Update Strategy transformation or a Custom transformation configured to set the update strategy, the default option is Data Driven. When you select Data Driven and you load to either a Microsoft SQL Server or Oracle database, you must use a normal load. If you bulk load, the Integration Service fails the session.
• Commit Type. Determines whether the Integration Service uses a source-based, target-based, or user-defined commit. You can choose a source- or target-based commit if the mapping has no Transaction Control transformation or only ineffective Transaction Control transformations. By default, the Integration Service performs a target-based commit. A user-defined commit is enabled by default if the mapping has effective Transaction Control transformations.
• Commit Interval. In conjunction with the selected commit interval type, indicates the number of rows per commit. By default, the Integration Service uses a commit interval of 10,000 rows. This option is not available for a user-defined commit.
• Commit On End of File. By default, this option is enabled and the Integration Service performs a commit at the end of the file. Clear this option if you want to roll back open transactions. This option is enabled by default for a target-based commit, and you cannot disable it.
• Rollback Transactions on Errors. The Integration Service rolls back the transaction at the next commit point when it encounters a non-fatal writer error.
• Java Classpath. If you enter a Java Classpath in this field, the Java Classpath is added to the beginning of the system classpath when the Integration Service runs the session. Use this option if you use third-party Java packages, built-in Java packages, or custom Java packages in a Java transformation. You can use service process variables to define the classpath. For example, you can use $PMRootDir to define a classpath within the $PMRootDir folder.
The following table describes the Performance settings:
• DTM Buffer Size. Amount of memory allocated to the session from the DTM process. By default, the PowerCenter Integration Service determines the DTM buffer size at run time. The Workflow Manager allocates a minimum of 12 MB for DTM buffer memory. You can specify auto or a numeric value. If you enter 2000, the PowerCenter Integration Service interprets the number as 2000 bytes. Append KB, MB, or GB to the value to specify other units. For example, you can specify 512MB. Increase the DTM buffer size in the following circumstances:
- A session contains large amounts of character data and you configure it to run in Unicode mode. Increase the DTM buffer size to 24MB.
- A session contains n partitions. Increase the DTM buffer size to at least n times the value for the session with one partition.
- A source contains a large binary object with a precision larger than the allocated DTM buffer size. Increase the DTM buffer size so that the session does not fail.
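As an illustration of how such size values might be interpreted, the following Python sketch parses a DTM-buffer-style size string; the function is a hypothetical illustration, not part of the product.

def parse_buffer_size(value):
    """Parse 'auto', a bare byte count, or a number with KB/MB/GB units."""
    units = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}
    text = value.strip().upper()
    if text == "AUTO":
        return None  # size determined at run time
    for suffix, factor in units.items():
        if text.endswith(suffix):
            return int(text[:-len(suffix)]) * factor
    return int(text)  # plain numbers are interpreted as bytes

print(parse_buffer_size("2000"))   # 2000 (bytes)
print(parse_buffer_size("512MB"))  # 536870912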
• Collect Performance Data. Collects performance details when the session runs. Use the Workflow Monitor to view performance details while the session runs.
• Write Performance Data to Repository. Writes performance details for the session to the PowerCenter repository. Write performance details to the repository to view performance details for previous session runs. Use the Workflow Monitor to view performance details for previous session runs.
• Reinitialize Aggregate Cache. Overwrites existing aggregate files for an incremental aggregation session.
• Session Retry On Deadlock. The PowerCenter Integration Service retries target writes on deadlock for a normal load. You can configure the PowerCenter Integration Service to set the number of deadlock retries and the deadlock sleep time period.
• Pushdown Optimization. The PowerCenter Integration Service analyzes the transformation logic, mapping, and session configuration to determine the transformation logic it can push to the database. Select one of the following pushdown optimization values:
- None. The PowerCenter Integration Service does not push any transformation logic to the database.
- To Source. The PowerCenter Integration Service pushes as much transformation logic as possible to the source database.
- To Target. The PowerCenter Integration Service pushes as much transformation logic as possible to the target database.
- Full. The PowerCenter Integration Service pushes as much transformation logic as possible to both the source database and the target database.
- $$PushdownConfig. The $$PushdownConfig mapping parameter allows you to run the same session with different pushdown optimization configurations at different times.
Default is None.
• Allow Temporary View for Pushdown. Allows the PowerCenter Integration Service to create temporary views in the database when it pushes the session to the database. The PowerCenter Integration Service must create a view in the database if the session contains an SQL override, a filtered lookup, or an unconnected lookup.
• Allow Temporary Sequence for Pushdown. Allows the PowerCenter Integration Service to create temporary sequence objects in the database. The PowerCenter Integration Service must create a sequence object in the database if the session contains a Sequence Generator transformation.
• Session Sort Order. Sort order for the session. The session properties display the options that you can select based on the client locale settings. You can select one of the following values for the sort order:
- 0. BINARY
- 2. SPANISH
- 3. TRADITIONAL_SPANISH
- 4. DANISH
- 5. SWEDISH
- 6. FINNISH
When the PowerCenter Integration Service runs in Unicode mode, it sorts character data in the session using the selected sort order. When the PowerCenter Integration Service runs in ASCII mode, it ignores this setting and uses a binary sort order to sort character data.
• Readers. Displays the reader that the Integration Service uses with each source instance. The Workflow
Manager specifies the necessary reader for each source instance.
• Connections. Displays the source connections. You can choose connection types and connection values.
You can also edit connection object values.
• Properties. Displays source and source qualifier properties. For relational sources, you can override
properties that you configured in the Mapping Designer.
For file sources, you can override properties that you configured in the Source Analyzer. You can also
configure the following session properties for file sources:
Source File Directory. Enter the directory name in this field. By default, the Integration Service looks in the service process variable directory, $PMSourceFileDir, for file sources.
If you specify both the directory and file name in the Source Filename field, clear this field. The
Integration Service concatenates this field with the Source Filename field when it runs the
session.
You can also use the $InputFileName session parameter to specify the file directory.
Source Filename. Enter the file name, or file name and path. Optionally, use the $InputFileName session parameter for the file name.
The Integration Service concatenates this field with the Source File Directory field when it runs
the session. For example, if you have “C:\data\” in the Source File Directory field, then enter
“filename.dat” in the Source Filename field. When the Integration Service begins the session, it
looks for “C:\data\filename.dat”.
By default, the Workflow Manager enters the file name configured in the source definition.
Source Filetype. Indicates whether the source file contains the source data or a list of files with the same file properties. Select Direct if the source file contains the source data. Select Indirect if the source file contains a list of files. You can configure multiple file sources by using a file list.
When you select Indirect, the Integration Service finds the file list, and then reads each listed file when it runs the session.
When you configure a session to extract data from a PowerExchange nonrelational source in batch mode,
you can configure the following session properties for the source:
Schema Name Override. Overrides the schema name in the source PowerExchange data map.
Map Name Override. Overrides the data map name of the source PowerExchange data map.
File Name. For the ADABAS Unload source type, specifies the file name of the unloaded Adabas database. Required for the ADABAS Unload source type.
Database Id Override. For the ADABAS and ADABAS Unload source types, overrides the Adabas database ID in the PowerExchange data map.
File Id Override. For the ADABAS and ADABAS Unload source types, overrides the Adabas file ID in the PowerExchange data map.
DB2 Sub System Id. For the DB2 Datamaps source type, overrides the DB2 subsystem ID in the PowerExchange data map.
DB2 Table Name. For the DB2 Datamaps source type, overrides the DB2 table name in the PowerExchange data map.
Unload File Name. For the DB2 Unload Datasets source type, overrides the DB2 unload file name in the PowerExchange data map.
Filter Overrides. Filters the source data that PowerExchange reads based on specific conditions that you define.
PWXPC adds the filter conditions in a WHERE clause on a SELECT SQL statement and then passes
the SQL statement to PowerExchange for processing. You can use any filter condition syntax that
PowerExchange supports for NRDB SQL.
For a single-record source, use the following syntax:
filter_condition
For example, the following filter condition selects records where a column called TYPE has a value of
A or D:
TYPE='A' or TYPE='D'
For a multiple-record source, use one of the following syntax alternatives:
filter_condition
group_name1=filter; group_name2=filter;...
The group_name syntax limits the SQL query condition to a specific record in a multi-record source
definition. If you do not use the group_name syntax, the SQL query condition applies to all records in
the multi-record source definition.
For example, to select only records that contain an ID column value of "DBA" for a multi-record
source that has USER1 and USER2 records, specify one of the following SQL query conditions:
USER1=ID='DBA'; USER2=ID='DBA'
ID=’DBA’
IMS Unload File Name. For the IMS source type, an IMS database unload file name. Required if you want to read source data from the backup file instead of from the IMS database. For a multiple-record write to an IMS unload
file, required for both the source and target.
IMS AM Override. For the IMS source type, overrides the IMS access method in the imported data map for the source
with the other available access method. The session then uses the override access method at run
time.
- If you imported a source data map that specifies the DL/1 BATCH access method, enter O to
override it with the IMS ODBA access method. For ODBA access, you must also specify the IMS
PSBNAME Override and IMS PCBNAME Override attributes.
- If you imported a source data map that specifies the IMS ODBA access method, enter D to
override it with the DL/1 BATCH access method, which provides DL/I or BMP access. You must
also specify the IMS PCBNUMBER Override attribute.
Important: Before you run the session with an access method override, ensure that you complete the
PowerExchange configuration tasks for the new access method. For example, if the override is DL/1
BATCH, you must configure LISTENER and NETPORT statements in the DBMOVER member and
configure the netport JCL. If the override is IMS ODBA, you must perform other configuration tasks.
For more information, see "IMS Data Maps" in the PowerExchange Navigator User Guide.
IMS SSID Override. For the IMS source type, if you imported an IMS ODBA data map for the source and did not override
the access method, use this attribute to override the IMS subsystem ID (SSID) from the data map for
the session. If you specified ODBA access as an override in the IMS AM Override session attribute,
you must enter this value. An SSID is required for ODBA access.
If the session has an IMS unload file source, you can use this override to point to another IMSID statement in the DBMOVER member to change from one DBD library to another. By using the override, you can switch DBD libraries without editing or adding IMSID statements and without restarting the PowerExchange Listener. For example, use this override to test changes that you made to a DBD library against an unload file.
If you use a netport job with BMP access to IMS, you can use this override with the %IMSID
substitution variable in the netport JCL to specify an IMS SSID to use for the session. This override
replaces the substitution variable. By using the override with the substitution variable, you can use
the same netport JCL to access multiple IMS environments, such as development, test, and
production environments.
Note: An IMS SSID is not required for DL/I batch access to IMS data or for access to an IMS unload
file.
IMS PSBNAME Override. For the IMS source type, if you imported an IMS ODBA data map for the source and did not override the access method, this value overrides the PSB name from the data map. If you specified ODBA
access as an override in the IMS AM Override attribute, you must enter this value. A PSB name is
required for ODBA access.
If you use DL/I batch or BMP access and specify this override, you must also specify the
PSB=%PSBNAME substitution variable in the netport JCL. The override value then replaces the
substitution variable in the JCL.
If you specify the PSB=%1 substitution variable instead of PSB=%PSBNAME in the netport JCL, the
session uses the PSB name from the NETPORT statement, if specified. In this case, you need a
separate NETPORT statement for each PSB. To avoid exceeding the limit of ten NETPORT statements in the DBMOVER member, use this override with the %PSBNAME substitution variable instead.
Note: A PSB name is not used for access to an IMS source unload file.
IMS PCBNAME Override. For the IMS source type, if you imported an IMS ODBA data map for the source and did not override the access method, this value overrides the PCB name from the data map. If you specified ODBA
access as an override in the IMS AM Override attribute, you must enter this value. A PCB name is
required for ODBA access.
A PCB name is not used for DL/I batch or BMP access or for access to an IMS unload file.
IMS PCBNUMBER Override. For the IMS source type, if you imported a DL/1 BATCH data map for the source and did not override the access method, this value overrides the PCB number from the data map. If you specified DL/I
access as an override in the IMS AM Override attribute, you must enter this value. A PCB number is
required for DL/I or BMP access.
A PCB number is not used for IMS ODBA access or for access to an IMS unload file.
File Name Override. For the VSAM Files and Sequential Files source types, overrides the data set or file name in the PowerExchange data map.
Enter the complete data set or file name.
For i5/OS, the format is: library_name/file_name.
If you select the Filelist File check box, enter the name of a filelist file in this attribute. A filelist file is
a list of files.
Filelist File. For the VSAM Files and Sequential Files source types, identifies the file that contains a list of files.
Select this attribute only if you entered a filelist file in the File Name Override field.
SQL Query Override. Overrides the SQL query sent to PowerExchange, including any filter overrides.
PWXPC replaces the default SQL query with the SQL statement that you enter and passes the SQL
statement to PowerExchange for processing. You can enter any SQL statement that PowerExchange
supports for NRDB SQL.
For example, you can select records from table USER where a column called TYPE has a value of A
or D by specifying the following SQL query override:
Select ID, NAME from USER where TYPE='A' or TYPE='D';
For a multiple-record source, use the following syntax:
group_name1=sql_query_override1; group_name2=sql_query_override2;...
For example, you can select only records with ID column values that contain DBA for a multi-record
source with two records called USER1 and USER2 by specifying the following SQL query override:
USER1=Select ID, NAME from USER1 where ID='DBA'; USER2=Select ID, NAME
from USER2 where ID='DBA';
PWX Partition Strategy. For offloaded DB2 Unload, VSAM Files, and Sequential Files source types, specifies one of the following partitioning strategies:
- Single Connection. PowerExchange creates a single connection to the data source. Any
overrides specified for the first partition are used for all partitions. With this option, if you specify
any overrides for other partitions that differ from the overrides for the first partition, the session
fails with an error message.
- Overrides Driven. If the specified overrides are the same for all partitions, PowerExchange
creates a single connection to the data source. If the overrides are not identical for all partitions,
PowerExchange creates multiple connections.
Flush After N Blocks. For multiple-record sources, specifies the maximum number of block flushes that can occur without any one block being flushed.
For bulk multiple-record sources, by default, PWXPC flushes blocks of data only when the buffers are
completely full or at end-of-file. If some record types do not have as much data as others, flushing
might not occur often. In this case, the record types might not have data on the target for a long time,
thereby blocking flushes on the writer side.
To ensure that buffers for all record types are flushed at a regular interval, define this Flush After N
Blocks session property. This property specifies the maximum number of block flushes that can
occur across all record types without any one block being flushed. A value of zero disables this
feature and causes flushing to occur only when blocks are full.
Valid values for the property are -1 to 100000.
The default value of -1 works in the following manner:
- For all multiple-record sources that do not use sequence fields, process the same as Flush After N Blocks = 0, which disables this feature and flushes only when blocks are full.
- For all multiple-record sources that use sequence fields, use Flush After N Blocks = 7 * (number of
record types in the source).
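For example, for a multiple-record source that uses sequence fields and contains four record types, the default value of -1 is equivalent to setting Flush After N Blocks to 28 (7 * 4).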
When you configure a session to extract data from a PowerExchange relational source in batch mode, you
can configure the following session properties for the source:
DB2 Sub System Id. Overrides the DB2 instance name in the PowerExchange data map.
Image Copy Dataset. For DB2 image copy sources, provides the image copy data set name. If this attribute is not specified and the table is in a non-partitioned table space, the most current image copy data set with TYPE=FULL and SHRLEVEL=REFERENCE is used. If the table is in a partitioned table space, you must specify the Image Copy Dataset attribute.
Disable Consistency Checking. If cleared for a DB2 image copy source, PowerExchange reads the catalog to verify that the DSN of the specified image copy data set is defined with SHRLEVEL=REFERENCE and TYPE=FULL and is an image copy of the specified table. If the DSN is not defined with these properties, the session fails.
If selected, PowerExchange reads the Image Copy Dataset regardless of the values of SHRLEVEL and
TYPE and without verifying that the object ID in the image copy matches the object ID in the DB2
catalog.
Filter Overrides. Filters the source data that PowerExchange reads based on specified conditions.
PWXPC adds the specified filter conditions to the WHERE clause of the SELECT SQL statement and
passes the SQL statement to PowerExchange for processing. You can use any filter condition syntax
that PowerExchange supports for NRDB SQL. For more information, see the PowerExchange
Reference Manual.
For example, you can select records where a column called TYPE has a value of A or D by specifying
the following filter condition:
TYPE='A' or TYPE='D'
SQL Query Override. Overrides the SQL query sent to PowerExchange, including any filter overrides.
Caution: For DB2 for z/OS data sources, PowerExchange automatically appends FOR FETCH ONLY to SQL SELECT statements. If you include FOR FETCH ONLY in the SQL Query Override attribute in
the Properties area, the expression is included twice in the SELECT statement, and PowerExchange
issues an error.
When you create a source definition for a CDC source by using an extraction map and then configure a
session to extract data from the source, you can configure the following session properties for the source:
Schema Name Override. Overrides the schema name in the PowerExchange extraction map.
ADABAS Password. For the Adabas source type, an Adabas password for the source file.
If the Adabas FDT for the source file is password-protected, enter the Adabas FDT password.
Database Id Override. For the Adabas source type, overrides the Adabas database ID in the PowerExchange data map.
File Id Override. For the Adabas source type, overrides the Adabas file ID in the PowerExchange data map.
Library/File Override. For the DB2i5OS Real Time source type, overrides the library and file names in the extraction map. Specify the full library name and file name in the format:
library/file
Alternatively, specify an asterisk (*) wildcard for the library name to retrieve changes for all files of the
same file name across multiple libraries.
This attribute overrides the Library/File Override attribute on the application connection.
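For example, the hypothetical override MYLIB/ORDERS retrieves changes only for the ORDERS file in library MYLIB, while */ORDERS retrieves changes for all files named ORDERS across libraries.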
Source Schema Override. For the Oracle source type, overrides the source schema name.
Filter Overrides. Filters the source data that PowerExchange reads based on specified conditions.
PWXPC adds the specified filter conditions to the WHERE clause of the SELECT SQL statement and
passes the SQL statement to PowerExchange for processing. You can use any filter condition syntax
that PowerExchange supports for NRDB SQL. For more information, see the PowerExchange
Reference Manual.
For example, you can select records where a column called TYPE has a value of A or D by specifying
the following filter condition:
TYPE='A' or TYPE='D'
To select change records where columns ID and ACCOUNT have changed, you can use the DTL__CI
columns by specifying the following filter condition:
DTL__CI_ID='Y' and DTL__CI_ACCOUNT='Y'
SQL Query Override. Overrides the SQL query sent to PowerExchange, including any filter overrides.
When you create a source definition for a CDC source by importing metadata from a relational database
and then configure a session to extract data from the source, you can configure the following session
properties for the source:
Extraction Map Name. Required. The PowerExchange extraction map name for the CDC source. You must specify the extraction map name for the relational source.
Library/File Override. Optional. For the DB2i5OS Real Time source type, overrides the library and file names in the extraction map.
Specify the full library name and file name in the format:
library/file
Alternatively, specify an asterisk (*) wildcard for the library name to retrieve changes for all files of the
same file name across multiple libraries.
This attribute overrides the Library/File Override value on the application connection.
Source Schema Override. Optional. For the Oracle Change and Real Time source types, overrides the source schema name.
Targets Node
The Targets node lists the mapping targets and displays the settings. To view and configure the settings of a
specific target, select the target from the list. You can configure the following settings:
• Writers. Displays the writer that the Integration Service uses with each target instance. For relational
targets, you can choose a relational writer or a file writer. Choose a file writer to use an external loader.
After you override a relational target to use a file writer, define the file properties for the target. Click Set
File Properties and choose the target to define.
• Connections. Displays the target connections. You can choose connection types and connection values.
You can also edit connection object values.
Insert. The Integration Service inserts all rows flagged for insert.
Update (as Update). The Integration Service updates all rows flagged for update.
Update (as Insert). The Integration Service inserts all rows flagged for update.
Update (else Insert). The Integration Service updates rows flagged for update if they exist in the target, and inserts the remaining rows.
Delete. The Integration Service deletes all rows flagged for delete.
Truncate Table. The Integration Service truncates the target before loading.
Reject File Directory. Name of the reject file directory. By default, the Integration Service writes all reject files to the service process variable directory, $PMBadFileDir.
If you specify both the directory and file name in the Reject Filename field, clear this field. The
Integration Service concatenates this field with the Reject Filename field when it runs the
session.
You can also use the $BadFileName session parameter to specify the file directory.
Reject Filename. File name, or file name and path, for the reject file. By default, the Integration Service names the reject file after the target instance name: target_name.bad. Optionally, use the $BadFileName session parameter for the file name.
The Integration Service concatenates this field with the Reject File Directory field when it runs
the session. For example, if you have “C:\reject_file\” in the Reject File Directory field, and
enter “filename.bad” in the Reject Filename field, the Integration Service writes rejected rows
to C:\reject_file\filename.bad.
Merge Partitioned Files. When selected, the Integration Service merges the partitioned target files into one file when the session completes, and then deletes the individual output files. If the Integration Service fails to
create the merged file, it does not delete the individual output files.
You cannot merge files if the session uses FTP, an external loader, or a message queue.
Merge File Directory. Enter the directory name in this field. By default, the Integration Service writes the merged file in the service process variable directory, $PMTargetFileDir.
If you enter a full directory and file name in the Merge File Name field, clear this field.
Merge File Name. Name of the merge file. Default is target_name.out. This property is required if you select Merge Partitioned Files.
Create Directory if Not Exists. Creates the target directory if it does not exist.
Output File Directory. Enter the directory name in this field. By default, the Integration Service writes output files in the service process variable directory, $PMTargetFileDir.
If you specify both the directory and file name in the Output Filename field, clear this field. The
Integration Service concatenates this field with the Output Filename field when it runs the session.
You can also use the $OutputFileName session parameter to specify the file directory.
Output Filename. Enter the file name, or file name and path. By default, the Workflow Manager names the target file based on the target definition used in the mapping: target_name.out.
If the target definition contains a slash character, the Workflow Manager replaces the slash
character with an underscore.
When you use an external loader to load to an Oracle database, you must specify a file extension.
If you do not specify a file extension, the Oracle loader cannot find the flat file and the Integration
Service fails the session.
Optionally, use the $OutputFileName session parameter for the file name.
The Integration Service concatenates this field with the Output File Directory field when it runs the
session.
Note: If you specify an absolute path file name when using FTP, the Integration Service ignores
the Default Remote Directory specified in the FTP connection. When you specify an absolute path
file name, do not use single or double quotes.
Reject File Directory. Enter the directory name in this field. By default, the Integration Service writes all reject files to
the service process variable directory, $PMBadFileDir.
If you specify both the directory and file name in the Reject Filename field, clear this field. The
Integration Service concatenates this field with the Reject Filename field when it runs the session.
You can also use the $BadFileName session parameter to specify the file directory.
Reject Filename. Enter the file name, or file name and path. By default, the Integration Service names the reject file
after the target instance name: target_name.bad. Optionally use the $BadFileName session
parameter for the file name.
The Integration Service concatenates this field with the Reject File Directory field when it runs the
session. For example, if you have “C:\reject_file\” in the Reject File Directory field, and enter “filename.bad” in the Reject Filename field, the Integration Service writes rejected rows to C:\reject_file\filename.bad.
ADABAS Password. For the ADABAS target type, the Adabas file password.
If the Adabas FDT for the target file is password-protected, enter the Adabas FDT password.
BLKSIZE. For the SEQ target type on z/OS, the z/OS data set block size.
Default is 0, which means use the best possible block size.
If you select VB for the RECFM value, the actual block size might be up to four bytes greater than the
value you specify for BLKSIZE.
DATACLAS. For the SEQ target type on z/OS, the z/OS SMS data class name.
Delete SQL Override. For the ADABAS and VSAM target types, overrides the default Delete SQL that is sent to PowerExchange.
Disp. For the SEQ target type on z/OS, the z/OS data set disposition.
Valid values:
- OLD
- SHR
- NEW
- MOD
Default is MOD if the data set exists, and NEW if it does not.
File Name Override. For the SEQ and VSAM target types, overrides the data set or file name in the PowerExchange data map. Enter the complete data set or file name.
For i5/OS, use the following format: library_name/file_name.
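For example, to write to a hypothetical file CUSTOMER in library PRODLIB on i5/OS, enter PRODLIB/CUSTOMER.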
IMS AM Override. For the IMS target type, overrides the IMS access method in the imported data map for the target with
the other allowable access method. The session then uses the override access method at run time.
- If you imported a target data map that specifies the DL/1 BATCH access method, enter O to
override it with the IMS ODBA access method. For ODBA access, you must also specify the IMS
PSBNAME Override and IMS PCBNAME Override attributes.
- If you imported a target data map that specifies the IMS ODBA access method, enter D to override
it with the DL/1 BATCH access method, which provides DL/I or BMP access. You must also specify
the IMS PCBNUMBER Override attribute.
Important: Before you run the session with an access method override, ensure that you complete the
PowerExchange configuration tasks for the new access method. For example, if the override is DL/1
BATCH, you must configure LISTENER and NETPORT statements in the DBMOVER member and
configure the netport JCL. If the override is IMS ODBA, you must perform other configuration tasks.
For more information, see "IMS Data Maps" in the PowerExchange Navigator User Guide.
IMS PCBNAME Override. For the IMS target type, if you imported an IMS ODBA data map for the target and did not override the access method, this value overrides the PCB name from the data map. If you specified ODBA access
as an override in the IMS AM Override attribute, you must enter this value. A PCB name is required
for ODBA access.
A PCB name is not used for DL/I or BMP access.
IMS PCBNUMBER Override. For the IMS target type, if you imported a DL/1 BATCH data map for the target and did not override the access method, this value overrides the PCB number from the data map. If you specified DL/I or BMP access as an override in the IMS AM Override attribute, you must enter this value. A PCB number is required for DL/I or BMP access.
A PCB number is not used for IMS ODBA access.
IMS PSBNAME Override. If you imported an IMS ODBA data map for the target and did not override the access method, this value overrides the PSB name from the data map. If you specified ODBA access as an override in the
IMS AM Override attribute, you must enter this value. A PSB name is required for ODBA access.
If you use DL/I batch or BMP access and specify this override, you must also specify the
PSB=%PSBNAME substitution variable in the netport JCL. The override value then replaces the
substitution variable in the JCL.
If you specify the PSB=%1 substitution variable instead of PSB=%PSBNAME in the netport JCL, the
session uses the PSB name in the NETPORT statement, if specified. In this case, you need a
separate NETPORT statement for each PSB. To avoid exceeding the limit of ten NETPORT statements, use this override with the %PSBNAME substitution variable instead.
IMS SSID Override. For the IMS target type, if you imported an IMS ODBA data map for the target and did not override the access method, use this value to override the IMS subsystem ID (SSID). If you specified ODBA
access as an override in the IMS AM Override attribute, you must enter this value. An SSID is
required for ODBA access.
If you use the IMS DL/1 BATCH access method and a BMP netport job, you can use this override with
the %IMSID substitution variable in the netport JCL. This override replaces the substitution variable to
specify the IMS SSID to use for the session. By using the substitution variable and override together,
you can use the same netport JCL to access multiple IMS environments, such as development,
testing, and production environments.
Note: An IMS SSID is not required for DL/I batch access to IMS data or for access to an IMS unload
file.
Initialize Target. For the VSAM target type, select this option to have PowerExchange allow both inserts and updates into empty VSAM data sets.
If this option is not selected, PowerExchange allows only inserts into empty VSAM data sets.
Insert Only. For the ADABAS and VSAM target types, processes updates and deletes as inserts.
Note: You must select this option when the target has no keys.
Insert SQL Override. For all nonrelational target types, overrides the default Insert SQL that is sent to PowerExchange.
LRECL. For the SEQ target type on z/OS, the data set logical record length.
This value is ignored if Disp is not MOD or NEW.
Default is 256.
If you select VB for the RECFM value, specify the maximum number of data bytes in a logical record
for LRECL. PowerExchange adds 4 to this value for the record descriptor word (RDW).
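For example, if you specify RECFM=VB and LRECL=100, PowerExchange writes logical records of up to 104 bytes, including the 4-byte RDW.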
Map Name Override. For all nonrelational target types, overrides the target PowerExchange data map name.
Note: PWXPC sends the file name that is specified for the source in the mapping unless this name is
overridden in the File Name Override attribute.
MGMTCLAS. For the SEQ target type on z/OS, the SMS management class name.
This value is ignored if Disp is not MOD or NEW.
MODELDSCB. For the SEQ target type on z/OS, the model DSCB for non-SMS-managed GDG data sets.
This value is ignored if Disp is not MOD or NEW.
Post SQL. For all nonrelational target types, one or more SQL statements that run with the target database connection after the session runs.
Pre SQL. For all nonrelational target types, one or more SQL statements that run with the target database connection before the session runs.
Note: In certain cases, you must specify the Pre SQL run once per Connection attribute along with
the Pre SQL attribute.
Pre SQL run once per Connection. For all nonrelational target types, runs the SQL that you specify in the Pre SQL attribute only once for a connection.
Select this attribute in either of the following cases:
- In the Pre SQL attribute for a session that uses writer partitioning, you specify a SQL statement
such as CREATEFILE that can run only once for the session. If you do not select Pre SQL run
once per Connection, the session tries to run the statement once for each partition.
- In the Pre SQL attribute for a session that performs a multiple-record write, you specify a
CREATEFILE statement that creates a new generation of a GDG or creates an empty file. If you do
not select Pre SQL run once per Connection, the session creates a generation or tries to create a
new empty file for each record that the session writes.
Primary Space. For the SEQ target type on z/OS, the primary space allocation, in the units specified in the Space attribute.
This value is ignored if Disp is not MOD or NEW.
Default is 1.
RECFM. For the SEQ target type on z/OS, the z/OS record format. Valid values are F, V, FU, FB, VU, VB, FBA, and VBA.
This value is ignored if Disp is not MOD or NEW.
Schema Name Override. For all nonrelational target types, overrides the schema name in the target PowerExchange data map.
Note: PWXPC sends the file name for the source in the mapping unless this name is overridden in the File Name Override attribute.
Secondary Space. For the SEQ target type on z/OS, the secondary space allocation, in the units specified in the Space attribute.
This value is ignored if Disp is not MOD or NEW.
Default is 1.
Space. For the SEQ target type on z/OS, the type of units for expressing primary or secondary space for z/OS data sets. Valid values are:
- CYLINDER
- TRACK
This value is ignored if Disp is not MOD or NEW.
Default is TRACK.
STORCLAS. For the SEQ target type on z/OS, the SMS storage class name.
This value is ignored if Disp is not MOD or NEW.
Truncate target option. For the VSAM target type, truncates, or deletes, table contents before loading new data.
Note: VSAM data sets must be defined with the REUSE option for this truncate option to function
correctly.
UNIT. For the SEQ target type on z/OS, the z/OS unit type.
This value is ignored if Disp is not MOD or NEW.
Default is SYSDA.
Update SQL Override. For the ADABAS and VSAM target types, overrides the default Update SQL that is sent to PowerExchange.
Upsert. For the ADABAS and VSAM target types, processes failed inserts as updates and updates as inserts.
VOLSER. For the SEQ target type on z/OS, the volume serial number.
This value is ignored if Disp is not MOD or NEW.
Transformations Node
On the Transformations node, you can override transformation properties that you configure in the Designer.
The attributes that you can configure depend on the type of transformation that you select.
Components Tab
In the Components tab, you can configure pre-session shell commands, post-session commands, email
messages if the session succeeds or fails, and variable assignments.
You can configure the following options on the Components tab:
Task. Configure pre- or post-session shell commands, success or failure email messages, and variable assignments.
Type. Select None if you do not want to configure commands and emails in the Components tab.
For pre- and post-session commands, select Reusable to call an existing reusable Command task as the
pre- or post-session shell command. Select Non-Reusable to create pre- or post-session shell commands
for this session task.
For success or failure emails, select Reusable to call an existing Email task as the success or failure email.
Select Non-Reusable to create email messages for this session task.
The following table describes the tasks available in the Components tab:
Pre-Session Command. Shell commands that the Integration Service performs at the beginning of a session.
Post-Session Success Command. Shell commands that the Integration Service performs after the session completes successfully.
Post-Session Failure Command. Shell commands that the Integration Service performs if the session fails.
On Success Email. The Integration Service sends the On Success email message if the session completes successfully.
On Failure Email. The Integration Service sends the On Failure email message if the session fails.
Pre-session variable assignment. Assign values to mapping parameters, mapping variables, and session parameters before a session runs. Read-only for reusable sessions.
Post-session on success variable assignment. Assign values to parent workflow and worklet variables after a session completes successfully. Read-only for reusable sessions.
Post-session on failure variable assignment. Assign values to parent workflow and worklet variables after a session fails. Read-only for reusable sessions.
Extension Name. Name of the metadata extension. Metadata extension names must be unique in a domain.
Reusable. Select to make the metadata extension apply to all objects of this type (reusable). Clear to make the metadata extension apply to this object only (non-reusable).
General Tab
You can change the workflow name and enter a comment for the workflow on the General tab. By default, the
General tab appears when you open the workflow properties.
Integration Service. The Integration Service that runs the workflow by default. You can also assign an Integration Service when you run the workflow.
Suspension Email. Email message that the Integration Service sends when a task fails and the Integration Service suspends the workflow.
Disabled. Disables the workflow from the schedule. The Integration Service stops running the workflow until you clear the Disabled option.
Suspend on Error. The Integration Service suspends the workflow when a task in the workflow fails.
Web Services. Creates a service workflow. Click Config Service to configure service information.
Configure Concurrent Execution. Enables the Integration Service to run more than one instance of the workflow at a time. You can run multiple instances of the same workflow name, or you can configure a different name and parameter file for each instance.
Click Configure Concurrent Execution to configure instance names.
Service Level. Determines the order in which the Load Balancer dispatches tasks from the dispatch queue when multiple tasks are waiting to be dispatched. Default is “Default.”
You create service levels in the Administrator tool.
Properties Tab
Configure parameter file name and workflow log options on the Properties tab.
Parameter File Name. Designates the name and directory for the parameter file. Use the parameter file to define workflow
variables.
Workflow Log File Name. Enter a file name, or a file name and directory. Required.
The Integration Service appends the information in this field to that entered in the Workflow Log File Directory field. For example, if you have “C:\workflow_logs\” in the Workflow Log File Directory field and enter “logname.txt” in the Workflow Log File Name field, the Integration Service writes logname.txt to the C:\workflow_logs\ directory.
Workflow Log File Directory. Designates a location for the workflow log file. By default, the Integration Service writes the log file in the service variable directory, $PMWorkflowLogDir.
If you enter a full directory and file name in the Workflow Log File Name field, clear this field.
Save Workflow Log By. If you select Save Workflow Log by Timestamp, the Integration Service saves all workflow logs,
appending a timestamp to each log.
If you select Save Workflow Log by Runs, the Integration Service saves a designated number of
workflow logs. Configure the number of workflow logs in the Save Workflow Log for These Runs
option.
You can also use the $PMWorkflowLogCount service variable to save the configured number of
workflow logs for the Integration Service.
Save Workflow Log for These Runs. Number of historical workflow logs that you want the Integration Service to save.
The Integration Service saves the number of historical logs you specify, plus the most recent
workflow log. Therefore, if you specify 5 runs, the Integration Service saves the most recent workflow
log, plus historical logs 0–4, for a total of 6 logs.
You can specify up to 2,147,483,647 historical logs. If you specify 0 logs, the Integration Service
saves only the most recent workflow log.
Enable HA Recovery. Enable workflow recovery. Not available for web service workflows.
Automatically recover terminated tasks. Recover terminated tasks without user intervention. You must have high availability, and the workflow must still be running. Not available for web service workflows.
Maximum automatic recovery attempts. When you automatically recover terminated tasks, you can choose the number of times the Integration Service attempts to recover the task. Default is 5.
Scheduler Tab
The Scheduler tab lets you schedule a workflow to run continuously, run at a given interval, or start manually.
Schedule Options: Run Once/Run Every/Customized Repeat. Required if you select Run On Integration Service Initialization in Run Options. Also required if you do not choose any setting in Run Options.
If you select Run Once, the Integration Service runs the workflow once, as scheduled in the scheduler.
If you select Run Every, the Integration Service runs the workflow at regular intervals, as configured.
If you select Customized Repeat, the Integration Service runs the workflow on the dates and times specified in the Repeat dialog box.
Edit. Required if you select Customized Repeat in Schedule Options. Opens the Repeat dialog box, allowing you to schedule specific dates and times for the workflow to run. The selected scheduler appears at the bottom of the page.
Start Date. Required if you select Run On Integration Service Initialization in Run Options.
Also required if you do not choose any setting in Run Options.
Indicates the date on which the Integration Service begins scheduling the workflow.
Start Time. Required if you select Run On Integration Service Initialization in Run Options.
Also required if you do not choose any setting in Run Options.
Indicates the time at which the Integration Service begins scheduling the workflow.
End Options: End On/End After/Forever. Required if the workflow schedule is Run Every or Customized Repeat.
If you select End On, the Integration Service stops scheduling the workflow on the selected date.
If you select End After, the Integration Service stops scheduling the workflow after the
set number of workflow runs.
If you select Forever, the Integration Service schedules the workflow as long as the
workflow does not fail.
The Repeat dialog box includes the following options:
Repeat Every. Enter the numeric interval at which you want to schedule the workflow, and then select Days, Weeks, or Months, as appropriate.
If you select Days, select the appropriate Daily Frequency settings.
If you select Weeks, select the appropriate Weekly and Daily Frequency settings.
If you select Months, select the appropriate Monthly and Daily Frequency settings.
Weekly. Required to enter a weekly schedule. Select the day or days of the week on which you want to schedule the
workflow.
Daily. Enter the number of times you want the Integration Service to run the workflow on any day the session is scheduled.
If you select Run Once, the Integration Service schedules the workflow once on the selected day, at the time entered in the Start Time setting on the Time tab.
If you select Run Every, enter Hours and Minutes to define the interval at which the Integration Service runs the workflow. The Integration Service then schedules the workflow at regular intervals on the selected day, using the Start Time setting for the first scheduled workflow of the day. If you choose an interval that is longer than the time remaining in the day, the workflow runs one time each day.
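For example, if you select Run Every with an interval of 4 hours and a start time of 8:00 a.m., the Integration Service schedules the workflow for 8:00 a.m., 12:00 p.m., 4:00 p.m., and 8:00 p.m. on each selected day.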
Variables Tab
Before using workflow variables, you must declare them on the Variables tab.
Persistent. Indicates whether the Integration Service maintains the value of the variable from the previous workflow run.
Events Tab
Before using the Event-Raise task, declare a user-defined event on the Events tab.
A B
aborted Backward Compatible Session Log
status 196 configuring 223
aborting Backward Compatible Workflow Log
Control tasks 67 configuring 222
status 196 buffer block size
tasks in Workflow Monitor 195 configuring for sessions 54
Absolute Time bulk loading
specifying 73 commit interval 101
Timer task 73 data driven session 101
active sources DB2 guidelines 102
constraint-based loading 98 Oracle guidelines 101
definition 104 relational targets 101
row error logging 104 session properties 94, 101, 240
transaction generators 104 test load 92
XML targets 104
adding
tasks to workflows 37
Additional Concurrent Pipelines C
restricting pre-built lookup cache 54 caches
advanced settings configuring concurrent lookup caches for sessions 54
session properties 54 configuring lookup in sessions 54
aggregate caches configuring maximum numeric memory limit for sessions 54
reinitializing 232 specifying maximum memory by percentage 54
AND links caching
input type 64 XML properties 119
Append if Exists certified messages
flat file target property 106 configuring TIB/Rendezvous application connections 154
append to document checking in
flushing XML 119 versioned objects 26
application connections checking out
CPI-C 150 versioned objects 26
JMS 145 COBOL sources
JNDI 145 error handling 84
PeopleSoft 148 numeric data handling 86
RFC/BAPI 152 code page compatibility
Salesforce 149 multiple file sources 88
SAP ALE IDoc Reader 151 targets 89
SAP ALE IDoc Writer 152 code pages
SAP NetWeaver 149 connection objects 129
SAP NetWeaver BI 153 database connections 89, 129
TIB/Rendezvous 154 delimited source 82
TIBCO 154 delimited target 108
Web Services 156 fixed-width source 81
webMethods 158 fixed-width target 108
arrange relaxed validation 129
workflows vertically 20 cold start
workspace objects 25 tasks and workflows in Workflow Monitor 195
assigning color themes
Integration Services 39 selecting 21
Assignment tasks colors
creating 65 setting 21
definition 65 workspace 21
description 61 command
using Expression Editor 32 file targets 107
255
command (continued) connections (continued)
generating file list 81 external loader 139
generating source data 80 FTP 138
processing target data 107 multiple targets 121
Command property overriding connection attributes 128
configuring flat file sources 79 overriding for Lookup transformations 128
configuring flat file targets 106 overriding for Stored Procedure transformations 128
Command tasks relational database 134, 142
creating 66 replacing a relational database connection 137
definition 66 resilience 133
description 61 sources 76
executing commands 67 targets 91
Fail Task if Any Command Fails 67 connectivity
making reusable 52 connect string examples 127
monitoring details in the Workflow Monitor 208 constraint-based loading
multiple UNIX commands 67 active sources 98
promoting to reusable 66 configuring 98
using parameters and variables 51 configuring for sessions 54
using variables in 66 enabling 101
Command Type key relationships 98
configuring flat file sources 79 target connection groups 99
comments Update Strategy transformations 99
adding in Expression Editor 32 Control tasks
commit definition 67
flushing XML 118 description 61
commit interval options 67
bulk loading 101 copying
commit type repository objects 28
configuring 230 counters
comparing objects overview 213
sessions 29 CPI-C application connections
tasks 29 configuring 150
workflows 29 creating
worklets 29 Assignment tasks 65
Components tab Command tasks 66
properties 246 Decision tasks 69
concurrent workflows Email tasks 181
scheduling 168 external loader connections 139
Config Object tab metadata extensions 31
overview 53 reserved words file 103
session properties 53 reusable scheduler 173
configuring sessions 46, 47
in Web Services Consumer application connections 130 tasks 62
connect string workflows 36
examples 127 custom properties
syntax 127 overriding Integration Service properties for sessions 54
connection environment SQL customization
configuring 132 of toolbars 24
connection objects of windows 23
assigning permissions 131 workspace colors 21
code pages 129 customized repeat
configuring in sessions 125 daily 170
deleting 162 editing 169
overriding connection attributes 128 monthly 170
owner 131 options 170
Connection Retry Period (property) repeat every 170
database connections 134 weekly 170
WebSphere MQ 160
connection settings
applying to all session instances 49
connection variables D
defining for Lookup transformations 128 data driven
defining for Stored Procedure transformations 128 bulk loading 101
specifying $Source and $Target 127 database connections
connections configuring 134, 142
configuring for sessions 125 configuring for PowerChannel 142
copy as 136 connection retry period 134
copying a relational database connection 136 copying a relational database connection 136
256 Index
database connections (continued) disabling (continued)
domain name 134, 142 workflows 174
packet size 134, 142 displaying
replacing a relational database connection 137 Expression Editor 33
use trusted connection 134, 142 Integration Services in Workflow Monitor 190
using IBM DB2 client authentication 126 domain name
using Oracle OS Authentication 126 database connections 134, 142
databases dropping
configuring a connection 134 indexes 98
connection requirements 134, 139, 142 DTD file
connections 134 schema reference 118
environment SQL 132 DTM Buffer Pool Size
selecting code pages 129 session property 232
datatypes duplicate group row handling
Decimal 110 XML targets 117
Double 110 dynamic partitioning
Float 110 session option 59
Integer 110
Money 110
numeric 110
padding bytes for fixed-width targets 110 E
Real 110 editing
date time metadata extensions 31
format 19 scheduling 169
dates sessions 47
configuring 19 workflows 37
formats 19 email
Daylight Savings Time attaching files 182, 185
workflow schedules 168 configuring a user on Windows 178, 185
DB2 configuring the Integration Service on UNIX 177
bulk loading guidelines 102 configuring the Integration Service on Windows 178
commit interval 101 distribution lists 179
deadlock retries format tags 182
PM_RECOVERY table 97 logon network security on Windows 179
session 232 MIME format 178
target connection groups 104 multiple recipients 179
Decision tasks on failure 181
creating 69 on success 181
decision condition variable 68 overview 176
definition 68 post-session 181
description 61 rmail 177
example 68 sending using MAPI 178
using Expression Editor 32 sending using SMTP 180
Default Remote Directory sendmail 177
for FTP connections 138 service variables 185
deleting specifying a Microsoft Outlook profile 179
connection objects 162 suspending workflows 184
workflows 37 text message 180
delimited flat files tips 185
code page, sources 82 user name 180
code page, targets 108 using other mail programs 185
escape character, sources 82 using service variables 185
numeric data handling 86 variables 182
quote character, sources 82 workflows 180
quote character, targets 108 worklets 180
row settings 82 Email tasks
session properties, sources 82 creating 181
session properties, targets 108 description 61
delimiter overview 180
session properties, sources 82 empty strings
session properties, targets 108 XML target files 116
directories enabling
run-time creation 106 enhanced security 23
workspace file 20 past events in Event-Wait task 73
disabled end options
status 196 end after 169
disabling end on 169
tasks 64 forever 169
Index 257
endpoint URL file sources
in Web Service application connections 156 Integration Service handling 84, 86
enhanced security numeric data handling 86
enabling 23 session properties 79
environment SQL file targets
configuring 132 session properties 105
guidelines for entering 133 file-based ledger
error handling TIB/Rendezvous application connections, configuring 154
COBOL sources 84 filtering
configuring 50 deleted tasks in Workflow Monitor 189
fixed-width file 84 Integration Services in Workflow Monitor 190
pre- and post-session SQL 50 tasks in Gantt Chart view 189
error handling settings tasks in Task View 200
session properties 57 Find in Workspace tool
errors overview 24
pre-session shell command 52 Find Next tool
stopping session on 57 overview 24
validating in Expression Editor 32 fixed-width files
escape characters code page, sources 81
in XML targets 116 code page, targets 108
Event-Raise tasks error handling 84
configuring 72 multibyte character handling 84
declaring user-defined event 71 null characters, sources 81
definition 70 null characters, targets 108
description 61 numeric data handling 86
in worklets 42 padded bytes in fixed-width targets 110
Event-Wait tasks source session properties 81
definition 70 target session properties 108
description 61 writing to 110
for predefined events 73 flat file definitions
for user-defined events 72 escape character, sources 82
waiting for past events 73 Integration Service handling, targets 109
working with 72 quote character, sources 82
events quote character, targets 108
in worklets 42 session properties, sources 79
predefined events 70 session properties, targets 106
user-defined events 70 flat files
ExportSessionLogLibName code page, sources 81
passing log events to an external library 218 code page, targets 108
Expression Editor creating footer 106
adding comments 32 creating headers 106
displaying 33 delimiter, sources 82
syntax colors 33 delimiter, targets 108
using 32 Footer Command property 106
validating 167 generating source data 80
validating expressions using 32 generating with command 80
expressions Header Command property 106
validating 32 Header Options property 106
external loader multibyte data 112
connections 139 null characters, sources 81
null characters, targets 108
numeric data handling 86
258 Index
format
date time 19 H
format options Hadoop HDFS application connections
color themes 21 properties 144
colors 21 header
date and time 19 creating in file targets 106
fonts 21 Header Command
orthogonal links 21 flat file targets 106
resetting 21 Header Options
schedule 19 flat file targets 106
solid lines for links 21 heterogeneous sources
Timer task 19 defined 75
FTP heterogeneous targets
connection names 138 overview 121
connection properties 138 high availability
connections for ABAP integration 149 WebSphere MQ, configuring 160
creating connections 138 high precision
defining connections 138 enabling 232
defining default remote directory 138 history names
defining host names 138 in Workflow Monitor 196
resilience 138 host names
retry period 138 for FTP connections 138
Use SFTP 138
G I
IBM DB2
Gantt Chart connect string example 127
configuring 192 connection with client authentication 126
filtering 189 IBM DB2 EE
listing tasks and workflows 198 connecting with client authentication 139
navigating 199 external loader connections 139
opening and closing folders 190 IBM DB2 EEE
organizing 199 connecting with client authentication 139
overview 187 external loader connections 139
searching 199 icons
time increments 199 Workflow Monitor 188
time window, configuring 192 worklet validation 164
using 198 ignore commit
general options flushing XML 119
arranging workflow vertically 20 in-place editing
configuring 20 enabling 20
in-place editing 20 incremental aggregation
launching Workflow Monitor 20 configuring 232
open editor 20 indexes
panning windows 20 dropping for target tables 98
reload task or workflow 20 recreating for target tables 98
repository notifications 20 indicator files
session properties 229 predefined events 72
show background in partition editor and DBMS based optimization input link type
20 selecting for task 64
show expression on a link 20 Input Type
show full name of task 20 flat file source property 79
General tab in session properties Integration Service
in Workflow Manager 229 assigning workflows 39
generating certificates connecting in Workflow Monitor 189
client certificate file 130 filtering in Workflow Monitor 190
private key file 130 handling file targets 109
globalization monitoring details in the Workflow Monitor 203
database connections 89 online and offline mode 189
overview 89 pinging in Workflow Monitor 189
targets 89 removing from the Navigator 19
grid selecting 39
enabling sessions to run 59 tracing levels 226
truncating target tables 96
using FTP 138
using SFTP 138
version in session log 226
Index 259
Integration Service handling links (continued)
file targets 109 show expression on a link 20
fixed-width targets 110, 112 solid lines 21
multibyte data to file targets 113 specifying condition 43
shift-sensitive data, targets 113 using Expression Editor 32
Integration Service Monitor working with 43
system resource usage 204 List Tasks
Is Transactional in Workflow Monitor 198
MSMQ connection property 146 log files
archiving 221
real-time sessions 218
260 Index
MIME format options (Workflow Manager) (continued)
email 178 solid lines for links 21
monitoring OR links
command tasks 208 input type 64
failed sessions 209 Oracle
folder details 205 bulk loading guidelines 101
Integration Service details 203 commit intervals 101
Repository Service details 203 connect string syntax 127
session details 208 connection with OS Authentication 126
targets 210 temporary tablespace 126
tasks details 206 Oracle external loader
worklet details 207 connecting with OS Authentication 139
MSMQ queue connections external loader connections 139
configuring 146 Output File Name property
Is Transactional 146 flat file targets 106
multibyte data output files
character handling 84 session properties 115, 240
writing to files 112 targets 106
multiple sessions Output Type property
validating 166 flat file targets 106
multiple XML output overriding
example 120 tracing levels in sessions 57
generating 119 owner
connection object 131
owner name
navigating
workspace 23
Netezza connections P
configuring 147 $PMWorkflowCount
non-reusable tasks archiving log files 222
inherited changes 64 $PMSuccessEmailUser
promoting to reusable 63 definition 185
normal loading tips 185
session properties 94, 240 $PMWorkflowLogDir
Normal tracing levels archiving workflow logs 222
definition 226 definition 220
null characters $PMSessionLogDir
file targets 108 archiving session logs 223
fixed-width targets 114 $PMSessionLogCount
Integration Service handling 85 archiving session logs 223
session properties, targets 108 $PMFailureEmailUser
null data definition 185
XML target files 116 tips 185
numeric values packet size
reading from sources 86 database connections 134, 142
page setup
configuring 23
Index 261
performance details
  in performance details file 212
  in Workflow Monitor 212
  viewing 212
performance settings
  session properties 232
permissions
  connection object 131
  connection objects 131
  database 131
  editing sessions 47
pinging
  Integration Service in Workflow Monitor 189
pipeline partitioning
  merging target files 240
  reject file 122
  session properties 246
pipelines
  active sources 104
  data flow monitoring 213
PM_RECOVERY table
  deadlock retries 97
PmNullPasswd
  reserved word 126
PmNullUser
  IBM DB2 client authentication 126
  Oracle OS Authentication 126
  reserved word 126
post-session command
  session properties 246
post-session email
  overview 181
  session properties 246
post-session shell command
  configuring non-reusable 51
  configuring reusable 52
  creating reusable Command task 52
  using 50
post-session SQL commands
  entering 50
PowerCenter Repository Reports
  viewing in Workflow Manager 40
PowerChannel
  configuring a database connection 142
PowerChannel database connections
  configuring 142
PowerExchange
  connection resilience 133
PowerExchange for Hadoop
  application connection objects 144
  sessions 144
Pre 85 Timestamp Compatibility option
  setting 54
pre- and post-session SQL
  entering 50
  guidelines 50
Pre-Build Lookup Cache
  restricting concurrent pipelines 54
pre-session shell command
  configuring non-reusable 51
  configuring reusable 52
  creating reusable Command task 52
  errors 52
  session properties 246
  using 50
pre-session SQL commands
  entering 50
precision
  flat files 112
  writing to file targets 110
predefined events
  waiting for 73
predefined variables
  in Decision tasks 68
preparing to run
  status 196
printing
  page setup 23
Private Key File Name
  SFTP 138
Private Key File Password
  SFTP 138
properties
  Hadoop HDFS application connections 144
  XML caching 119
Properties tab in session properties
  in Workflow Manager 230
Public Key File Name
  SFTP 138

Q

queue connections
  MSMQ 146
  testing WebSphere MQ 160
  WebSphere MQ 160
quoted identifiers
  reserved words 103

R

real-time sessions
  log files 218
  session logs 218
  truncating target tables 96
recovery queue name
  WebSphere MQ connections 160
recreating
  indexes 98
reject file
  changing names 122
  column indicators 123
  locating 122
  pipeline partitioning 122
  reading 122
  row indicators 123
  session properties 94, 106, 240
  viewing 122
Reject File Name
  flat file target property 106
relational connections
  Netezza 147
relational databases
  copying a relational database connection 136
  replacing a relational database connection 137
relational sources
  session properties 77
relational targets
  session properties 93, 94, 240
Relative time
  specifying 73
  Timer task 73
reload task or workflow
  configuring 20
removing
  Integration Service 19
renaming
  repository objects 26
repeat options
  customizing 170
repositories
  adding 26
  connecting in Workflow Monitor 189
  entering descriptions 26
repository folder
  monitoring details in the Workflow Monitor 205
repository notifications
  receiving 20
repository objects
  comparing 29
  configuring 26
  rename 26
Repository Service
  monitoring details in the Workflow Monitor 203
  notification in Workflow Monitor 192
  notifications 20
Request Old (property)
  TIB/Rendezvous application connections, configuring 154
reserved words
  generating SQL with 103
  reswords.txt 103
reserved words file
  creating 103
resilience
  connections 133
  FTP 138
  WebSphere MQ, configuring 160
restarting tasks
  in Workflow Monitor 194
restarting tasks and workflows without recovery
  in Workflow Monitor 195
retry period
  FTP 138
reusable tasks
  inherited changes 64
  reverting changes 64
reverting changes
  tasks 64
RFC file mode connections
  configuring 150
RFC stream mode connections
  configuring 150
RFC/BAPI application connections
  configuring 152
rmail
  configuring 177
row error logging
  active sources 104
row indicators
  reject file 123
run options
  run continuously 169
  run on demand 169
  service initialization 169
running
  status 196
  workflows 174

S

$Source connection value
  setting 127, 230
$Source
  how Integration Service determines value 127
  multiple sources 127
  session properties 230
Salesforce application connections
  accessing Sandbox 149
  configuring 149
SAP ALE IDoc Reader application connections
  configuring 151
SAP ALE IDoc Writer application connections
  configuring 152
SAP ECC
  ABAP integration 149
  ALE integration 151
SAP NetWeaver application connections
  configuring 149
SAP NetWeaver BI application connections
  configuring 153
SAP R/3 application connections
  configuring 150
  stream and file mode sessions 150
  stream mode sessions 150
scheduled
  status 196
scheduled states
  workflows 171
scheduling
  concurrent workflows 168
  configuring 169
  creating reusable scheduler 173
  disabling workflows 174
  editing 169
  end options 169
  error message 173
  run every 169
  run once 169
  run options 169
  schedule options 169
  start date 169
  start time 169
  workflows 168, 251
searching
  versioned objects in the Workflow Manager 28
  Workflow Manager 24
  Workflow Monitor 199
sendmail
  configuring 177
server handling
  XML sources 87
  XML targets 116
service process variables
  in Command tasks 51
service variables
  email 185
session command settings
  session properties 246
session configuration objects
  creating 60
  session properties 53
  understanding 53
  using in a session 60
session events
  passing to an external library 218
Session Log File Max Size
  configuring session log rollover 222
  session config object 54
Session Log File Max Time Period
  configuring session log rollover 222
  session config object 54
session log files
  archiving 221
  time stamp 221
session log rollover
  description 222
session logs
  changing locations 223
  changing name 223
  duplicate XML rows 117
  enabling and disabling 223
  generating using UTF-8 220
  Integration Service version and build 226
  location 220, 230
  naming 220
  real-time sessions 218
  sample 226
  saving 56
  tracing levels 226
  viewing in Workflow Monitor 196
  XML targets 120
session on grid settings
  session properties 59
session properties
  advanced settings 54
  buffer sizes 54
  Components tab 246
  Config Object tab overview 53
  constraint-based loading 54, 101
  delimited files, sources 82
  delimited files, targets 108
  email 181
  error handling settings 57
  fixed-width files, sources 81
  fixed-width files, targets 108
  general settings 229
  General tab 229
  log option settings 56
  lookup caches 54
  Metadata Extensions tab 248
  null character, targets 108
  on failure email 181
  on success email 181
  output files, flat file 240
  partitioning options settings 59
  Partitions View 246
  performance settings 232
  post-session email 181
  Properties tab 230
  reject file, flat file 106, 240
  reject file, relational 94, 240
  relational sources 77
  relational targets 93
  session command settings 246
  session on grid settings 59
  source connections 76
  sources 76
  table name prefix 102
  target connections 91
  target load options 94, 101, 240
  targets 91
  Transformation node 246
  transformations 246
  XML output filename 115
  XML sources 86
  XML targets 115
session statistics
  viewing in the Workflow Monitor 206
sessions
  apply attributes to all instances 48
  configuring for multiple source files 88
  creating 46, 47
  definition 46
  description 61
  editing 47
  email 176
  monitoring counters 213
  monitoring details 208
  multiple source files 87
  overriding connection attributes 128
  overriding source table name 79, 234
  overriding target table name 102
  overview 46
  properties reference 229
  task progress details 206
  test load 92
  truncating target tables 96
  validating 166
  viewing details in the Workflow Monitor 209
  viewing failure information in the Workflow Monitor 209
  viewing performance details 212
  viewing statistics in the Workflow Monitor 206
Set File Properties
  description 79, 106
SFTP
  authentication methods 138
  configuring connection 138
  defining Private Key File Name 138
  defining Private Key File Password 138
  defining Public Key File Name 138
shared library
  passing log events to an external library 218
shell commands
  executing in Command tasks 67
  make reusable 52
  post-session 50
  pre-session 50
  using Command tasks 66
  using parameters and variables 51, 66
shortcuts
  keyboard 33
SMTP
  sending email using 180
source commands
  generating file list 80
  generating source data 80
Source File Name
  description 79
Source File Type
  description 79
source filename
  XML sources option 86
source files
  configuring for multiple files 88
  session properties 79, 234
  wildcard characters 81
source filetype
  XML source option 86
source location
  session properties 79, 234
source tables
  overriding table name 79, 234
sources
  code page 82
  code page, flat file 81
  commands 80
  connections 76
  delimiters 82
  dynamic file names 81
  generating file list 81
  generating with command 80
  line sequential buffer length 83
  monitoring details in the Workflow Monitor 210
  multiple sources in a session 87
  null characters 81, 85
  overriding source table name 79, 234
  overriding SQL query, session 78
  resilience 133
  session properties 76
  wildcard characters 81
special characters
  parsing 116
SQL
  configuring environment SQL 132
  guidelines for entering environment SQL 133
  overriding query at session level 78
SQL query
  overriding at session level 78
start date and time
  scheduling 169
Start tasks
  definition 35
starting
  selecting a service 39
  start from task 175
  starting part of a workflow 175
  starting tasks 175
  Workflow Monitor 188
  workflows 174
statistics
  for Workflow Monitor 190
  viewing 190
status
  aborted 196
  aborting 196
  disabled 196
  failed 196
  in Workflow Monitor 196
  preparing to run 196
  running 196
  scheduled 196
  stopped 196
  stopping 196
  succeeded 196
  suspended 196
  suspending 196
  tasks 196
  terminated 196
  terminating 196
  unknown status 196
  unscheduled 196
  waiting 196
  workflows 196
stop on
  pre- and post-session SQL errors 50
stop on errors
  session property 57
stopped
  status 196
stopping
  in Workflow Monitor 195
  status 196
  using Control tasks 67
stream mode
  SAP R/3 application connections 150
stream mode connections
  RFC 150
subseconds
  trimming for pre-8.5 compatibility 54
succeeded
  status 196
suspended
  status 196
suspending
  email 184
  status 196
Sybase ASE
  commit interval 101
  connect string example 127
Sybase IQ external loader
  connections 139
system resource usage
  Integration Service Monitor 204

T

$Target
  how Integration Service determines value 127
  multiple targets 127
  session properties 230
$Target connection value
  setting 127, 230
table name prefix
  target owner 102
table names
  overriding source table name 79, 234
  overriding target table name 102
table owner name
  session properties 78
  targets 102
target commands
  processing target data 107
target connection groups
  constraint-based loading 99
  description 104
target directories
  creating at run time 106
target load order
  constraint-based loading 99
target owner
  table name prefix 102
target properties
  bulk mode 94
  test load 94
  update strategy 94
  using with source properties 95
target tables
  overriding table name 102
  truncating 96
  truncating, real-time sessions 96
targets
  code page 108
  code page compatibility 89
  code page, flat file 108
  commands 107
  connections 91
  database connections 89
  delimiters 108
  duplicate group row handling 117
  file writer 91
  globalization features 89
  heterogeneous 121
  load, session properties 94, 101, 240
  monitoring details in the Workflow Monitor 210
  multiple connections 121
  multiple types 121
  null characters 108
  output files 106
  overriding target table name 102
  processing with command 107
  relational settings 94, 240
  relational writer 91
  resilience 133
  session properties 91, 93
  setting DTD/schema reference 118
  truncating tables 96
  truncating tables, real-time sessions 96
  writers 91
Task Developer
  creating tasks 62
  displaying and hiding tool name 20
Task view
  configuring 192
  displaying 199
  filtering 200
  hiding 192
  opening and closing folders 190
  overview 187
  using 199
tasks
  aborted 196
  aborting 196
  adding in workflows 37
  arranging 25
  Assignment tasks 65
  cold start 195
  Command tasks 66
  configuring 63
  Control task 67
  copying 28
  creating 62
  creating in Task Developer 62
  creating in Workflow Designer 62
  Decision tasks 68
  disabled 196
  disabling 64
  email 180
  Event-Raise tasks 70
  Event-Wait tasks 70
  failed 196
  failing parent workflow 64
  in worklets 42
  inherited changes 64
  instances 64
  list of 61
  monitoring details 206
  non-reusable 37
  overview 61
  preparing to run 196
  promoting to reusable 63
  restarting in Workflow Monitor 194
  restarting without recovery in Workflow Monitor 195
  reusable 37
  reverting changes 64
  running 196
  show full name 20
  starting 175
  status 196
  stopped 196
  stopping 196
  stopping and aborting in Workflow Monitor 195
  succeeded 196
  Timer tasks 73
  using Tasks toolbar 37
  validating 165
temporary tablespace
  Oracle 126
Teradata
  connect string example 127
Teradata external loader
  connections 139
terminated
  status 196
terminating
  status 196
test load
  bulk loading 92
  enabling 230
  file targets 92
  number of rows to test 230
  relational targets 92
TIB/Adapter SDK application connections
  properties 156
TIB/Rendezvous application connections
  configuring 154
  properties 154
TIBCO application connections
  configuring 154
time
  configuring 19
  formats 19
time increments
  Workflow Monitor 199
time stamps
  session log files 221
  session logs 223
  workflow log files 221
  workflow logs 222
  Workflow Monitor 187
time window
  configuring 192
Timer tasks
  absolute time 73
  definition 73
  description 61
  relative time 73
  subseconds in variables 73
tool names
  displaying and hiding 20
toolbars
  adding tasks 37
  using 24
  Workflow Manager 24
  Workflow Monitor 193
tracing levels
  Normal 226
  overriding in the session 57
  session 226
  setting 226
  Verbose Data 226
  Verbose Initialization 226
transaction environment SQL
  configuring 132, 133
transaction generator
  active sources 104
  effective and ineffective 104
transformations
  session properties 246
Transformations node
  properties 246
Transformations view
  session properties 233
Treat Error as Interruption
  effect on worklets 41
Treat Source Rows As
  bulk loading 101
  using with target properties 95
Treat Source Rows As property
  overview 77
truncating
  Table Name Prefix 96
  target tables 96

U

UNIX systems
  email 177
unknown status
  status 196
unscheduled
  status 196
unscheduling
  workflows 174
update strategy
  target properties 94
Update Strategy transformation
  constraint-based loading 99
  using with target and source properties 95
URL
  adding through business documentation links 32
user-defined events
  declaring 71
  example 70
  waiting for 72

V

validating
  expressions 32, 167
  multiple sessions 166
  tasks 165
  validate target option 115
  workflows 163
  worklets 164
  XML source option 86
variables
  email 182
  in Command tasks 66
Verbose Data tracing level
  configuring session log 226
Verbose Initialization tracing level
  configuring session log 226
versioned objects
  Allow Delete without Checkout option 22
  checking in 26
  checking out 26
  comparing versions 27
  searching for in the Workflow Manager 28
  viewing 27
  viewing multiple versions 27
viewing
  older versions of objects 27
  reject file 122

W

waiting
  status 196
web links
  adding to expressions 32
Web Services application connections
  configuring 156
  endpoint URL 156
webMethods application connections
  configuring 158
WebSphere MQ queue connections
  configuring 160
  testing 160
wildcard characters
  configuring source files 81
windows
  customizing 23
  displaying and closing 23
  docking and undocking 23
  fonts 21
  Navigator 18
  Output 18
  overview 18
  panning 20
  reloading 20
  Workflow Manager 18
  Workflow Monitor 187
  workspace 18
Windows Start Menu
  accessing Workflow Monitor 188
Windows systems
  email 178
  logon network security 179
Workflow Composite Report
  viewing 40
Workflow Designer
  creating tasks 62
  displaying and hiding tool name 20
workflow log files
  archiving 221
  configuring 222
  time stamp 221
workflow logs
  changing locations 222
  changing name 222
  enabling and disabling 222
  locating 220
  naming 220
  viewing in Workflow Monitor 196
Workflow Manager
  adding repositories 26
  arrange 25
  checking out and in versioned objects 26
  configuring for multiple source files 88
  connections overview 125
  copying 28
  CPI-C connection 150
  customizing options 19
  database connections 142
  date and time formats 19
  display options 19
  entering object descriptions 26
  external loader connections 139
  FTP connections 138
  general options 20
  Hadoop HDFS application connections 144
  JMS connections 145
  JNDI connections 145
  messages to Workflow Monitor 192
  MSMQ queue connections 146
  Netezza connections 147
  overview 17
  PeopleSoft connections 148
  printing the workspace 23
  relational database connections 134
  RFC file mode connection 150
  RFC stream mode connection 150
  RFC/BAPI connections 152
  Salesforce connections 149
  SAP ALE IDoc Reader connections 151
  SAP ALE IDoc Writer connections 152
  SAP ECC connections 150
  SAP NetWeaver BI connections 153
  SAP NetWeaver connections 149
  searching 24
  searching for versioned objects 28
  SFTP connections 138
  TIB/Rendezvous connections 154
  TIBCO connections 154
  toolbars 24
  tools 18
  validating sessions 166
  versioned objects 26
  viewing reports 40
  Web Services connections 156
  webMethods connections 158
  WebSphere MQ connections 160
  windows 18, 23
  zooming the workspace 25
Workflow Monitor
  advanced options 192
  closing folders 190
  cold start tasks or workflows 195
  configuring 191
  connecting to Integration Service 189
  connecting to repositories 189
  customizing columns 192
  deleted Integration Services 189
  deleted tasks 189
  disconnecting from an Integration Service 189
  displaying services 190
  filtering deleted tasks 189
  filtering services 190
  filtering tasks in Task View 189, 200
  Gantt Chart view 187
  Gantt chart view options 192
  general options 192
  hiding columns 192
  hiding services 190
  icon 188
  launching 188
  launching automatically 20
  listing tasks and workflows 198
  Maximum Days 192
  Maximum Workflow Runs 192
  monitor modes 189
  navigating the Time window 199
  notification from Repository Service 192
  opening folders 190
  overview 187
  performing tasks 194
  pinging the Integration Service 189
  receive messages from Workflow Manager 192
  resilience to Integration Service 189
  restarting tasks or workflows without recovery 195
  restarting tasks, workflows, and worklets 194
  searching 199
  Start Menu 188
  starting 188
  statistics 190
  stopping or aborting tasks and workflows 195
  switching views 187
  Task view 187
  task view options 192
  time 187
  time increments 199
  toolbars 193
  viewing command task details 208
  viewing folder details 205
  viewing history names 196
  viewing Integration Service details 203
  viewing performance details 212
  viewing repository details 203
  viewing session details 208
  viewing session failure information 209
  viewing session logs 196
  viewing session statistics 206
  viewing source details 210
  viewing target details 210
  viewing task progress details 206
  viewing workflow details 205
  viewing workflow logs 196
  viewing worklet details 207
  workflow and task status 196
workflow properties
  configuring 249
  Events tab 254
  General tab 249
  Metadata Extension tab 31
  Properties tab 250
  Schedule tab 251
  suspension email 184
  Variables tab 253
workflow schedules
  Daylight Savings Time 168
  time zones 168
workflow tasks
  reusable and non-reusable 63
workflows
  aborted 196
  aborting 196
  adding tasks 37
  assigning Integration Service 39
  branches 35
  cold start workflows 195
  copying 28
  creating 36
  definition 35
  deleting 37
  developing 35, 36
  disabled 196
  disabling 174
  editing 37
  email 180
  events 35
  fail parent workflow 64
  failed 196
  guidelines 35
  links 35
  monitor 35
  override Integration Service 174, 175
  override operating system profile 174, 175
  overview 35
  preparing to run 196
  properties reference 249
  restarting in Workflow Monitor 194
  restarting without recovery in Workflow Monitor 195
  run type 205
  running 174, 196
  scheduled 196
  scheduled state 171
  scheduling 168
  scheduling concurrent instances 168
  selecting a service 35
  starting 174
  starting with advanced options 174, 175
  status 196
  stopped 196
  stopping 196
  stopping and aborting in Workflow Monitor 195
  succeeded 196
  suspended 196
  suspending 196
  suspension email 184
  terminated 196
  terminating 196
  unknown status 196
  unscheduled 196
  unscheduling 174
  using tasks 61
  validating 163
  viewing details in the Workflow Monitor 205
  viewing reports 40
  waiting 196
  Workflow Monitor maximum days 192
Worklet Designer
  displaying and hiding tool name 20
worklets
  adding tasks 42
  configuring properties 42
  create non-reusable worklets 41
  create reusable worklets 41
  declaring events 42
  developing 41
  email 180
  fail parent worklet 64
  monitoring details in the Workflow Monitor 207
  overview 41
  restarting in Workflow Monitor 194
  status 196
  suspended 196
  suspending 41, 196
  validating 164
  waiting 196
workspace
  colors 21
  colors, setting 21
  file directory 20
  fonts, setting 21
  navigating 23
  printing 23
  zooming 25
writing
  multibyte data to files 112
  to fixed-width files 110

X

XML
  duplicate row handling 117
  flushing data 118
  performance 118
  special characters 116
XML file
  creating multiple XML files 120
XML sources
  numeric data handling 86
  partitionable option 86
  server handling 87
  session properties 86
  source filename 86
  source filetype option 86
  source location 86
  validate option 86
XML targets
  active sources 104
  duplicate group row handling 117
  file list of multiple targets 120
  in sessions 115
  outputting multiple files 119
  server handling 116
  session log entry 120
  session properties 115
  setting DTD/schema reference 118
  validate option 115
XMLWarnDupRows
  writing to session log 117

Z

zooming
  Workflow Manager 25