Data Movement Utilities Guide and Reference
DB2 Version 9
for Linux, UNIX, and Windows
SC10-4227-00
Before using this information and the product it supports, be sure to read the general information under Notices.
Edition Notice
This document contains proprietary information of IBM. It is provided under a license agreement and is protected
by copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU
(426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1993, 2006. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
About This Book . . . . . . . . . . . . . . . . . v
Who Should Use this Book . . . . . . . . . . . . . v
How this Book is Structured . . . . . . . . . . . v

Chapter 1. Export . . . . . . . . . . . . . . . . 1
Export Overview . . . . . . . . . . . . . . . . . 1
  Changes to previous export behavior introduced in DB2 Version 9.1 . . . 2
Privileges, authorities and authorization required to use export . . . 3
Exporting data . . . . . . . . . . . . . . . . . 4
Exporting XML data . . . . . . . . . . . . . . . 5
LOB and XML file behavior with regard to import and export . . . 7
Using export with identity columns . . . . . . . . 9
Recreating an exported table . . . . . . . . . . . 9
Exporting large objects (LOBS) . . . . . . . . . 10
EXPORT . . . . . . . . . . . . . . . . . . . . 11
EXPORT command using the ADMIN_CMD procedure . . . 15
db2Export - Export data from a database . . . . . 19
File type modifiers for the export utility . . . . 27
Export Sessions - CLP Examples . . . . . . . . . 33

Chapter 2. Import . . . . . . . . . . . . . . . 35
Import Overview . . . . . . . . . . . . . . . . 35
  Changes to previous import behavior introduced in DB2 Version 9.1 . . . 36
Privileges, authorities, and authorization required to use import . . . 38
Importing data . . . . . . . . . . . . . . . . 38
Importing XML data . . . . . . . . . . . . . . 40
Using import in a client/server environment . . . 40
Using import with buffered inserts . . . . . . . 41
Using import with identity columns . . . . . . . 42
Using import with generated columns . . . . . . . 43
Using import to recreate an exported table . . . . 45
Importing large objects (LOBS) . . . . . . . . . 46
Importing user-defined distinct types (UDTs) . . . 47
Table locking during import . . . . . . . . . . 47
IMPORT . . . . . . . . . . . . . . . . . . . . 49
IMPORT command using the ADMIN_CMD procedure . . . 61
db2Import - Import data into a table, hierarchy, nickname or view . . . 73
File type modifiers for the import utility . . . . 87
Character set and NLS considerations . . . . . . 97
Import sessions - CLP examples . . . . . . . . . 97

Chapter 3. Load . . . . . . . . . . . . . . . . 101
Load overview . . . . . . . . . . . . . . . . . 102
  Changes to Previous Load Behavior Introduced in DB2 V9.1 . . . 105
  Changes to previous load behavior introduced in DB2 UDB Version 8 . . . 106
Parallelism and loading . . . . . . . . . . . . 108
Privileges, authorities, and authorizations required to use Load . . . 109
Loading data . . . . . . . . . . . . . . . . . 110
Read access load operations . . . . . . . . . . 113
Building indexes . . . . . . . . . . . . . . . 115
Using load with identity columns . . . . . . . . 117
Using load with generated columns . . . . . . . . 118
Checking for integrity violations following a load operation . . . 121
Refreshing dependent immediate materialized query tables . . . 123
Propagating dependent immediate staging tables . . 124
Multidimensional clustering considerations . . . . 125
Load considerations for partitioned tables . . . . 126
Restarting an interrupted load operation . . . . . 129
Restarting or Terminating an Allow Read Access Load Operation . . . 130
Recovering data with the load copy location file . . 131
LOAD . . . . . . . . . . . . . . . . . . . . . 132
LOAD command using the ADMIN_CMD procedure . . . . 145
LOAD QUERY . . . . . . . . . . . . . . . . . . 158
db2Load - Load data into a table . . . . . . . . 161
db2LoadQuery - Get the status of a load operation . 181
File type modifiers for the load utility . . . . . 188
Load exception table . . . . . . . . . . . . . . 200
Load dump file . . . . . . . . . . . . . . . . . 201
Load temporary files . . . . . . . . . . . . . . 202
Load utility log records . . . . . . . . . . . . 202
Table locking, table states and table space states . 203
Character set and national language support . . . . 206
Pending states after a load operation . . . . . . 206
Optimizing load performance . . . . . . . . . . . 207
Load - CLP examples . . . . . . . . . . . . . . . 212

Chapter 4. Loading data in a partitioned database environment . . . 217
Load in a partitioned database environment - overview . . . 217
Loading data in a partitioned database environment . . . 219
Monitoring a load operation in a partitioned database environment using the LOAD QUERY command . . . 225
Restarting or terminating a load operation in a partitioned database environment . . . 227
Load configuration options for partitioned database environments . . . 229
Examples of loading data in a partitioned database environment . . . 234
Migration and version compatibility . . . . . . . . 237
Loading data in a partitioned database environment - hints and tips . . . 237
Other vendors' products that move data in and out of databases are also available,
but are not discussed in this book.
It is assumed that you are familiar with the DB2 database system, Structured
Query Language (SQL), and with the operating system environment in which the
DB2 database is running. If you are using native XML data store, you should also
be familiar with handling XML data through SQL/XML and XQuery.
Chapter 1. Export
For information about exporting data out of typed tables, see “Moving data
between typed tables” on page 260. For information about exporting data from a
DRDA server database to a file on the DB2 Connect workstation, and the reverse,
see “Moving data with DB2 Connect” on page 245.
Export Overview
The export utility exports data from a database to an operating system file, which
can be in one of several external file formats. This operating system file can then
be used to move the table data to a different server such as DB2 Universal
Database for iSeries™.
Related concepts:
v “Examples of db2batch tests” in Performance Guide
v “Exporting large objects (LOBS)” on page 10
v “Moving data between typed tables” on page 260
v “Privileges, authorities and authorization required to use export” on page 3
v “Recreating an exported table” on page 9
v “Using export with identity columns” on page 9
Related tasks:
v “Exporting data” on page 4
Related reference:
v “Export Sessions - CLP Examples” on page 33
v “Export/Import/Load Utility File Formats” on page 293
v “EXPORT ” on page 11
Related reference:
v “db2Export - Export data from a database” on page 19
v “EXPORT ” on page 11
Exporting data
The export utility exports data from a database to one of several external file
formats. You can specify the data to be exported by supplying an SQL SELECT
statement, or by providing hierarchical information for typed tables.
Authorization:
Prerequisites:
Before invoking the export utility, you must be connected (or be able to implicitly
connect) to the database from which the data will be exported. If implicit connect
is enabled, a connection to the default database is established. Utility access to
Linux, UNIX, or Windows database servers from Linux, UNIX, or Windows clients
must be a direct connection through the engine and not through a DB2 Connect
gateway or loop back environment.
Since the utility will issue a COMMIT statement, you should complete all
transactions and release all locks by performing either a COMMIT or a
ROLLBACK before invoking the export utility. There is no requirement for other
user applications accessing the table and using separate connections to disconnect.
Restrictions:
Procedure:
The export utility can be invoked through the command line processor (CLP), the
Export Table notebook in the Control Center, or an application programming
interface (API), db2Export.
The following is an example of the EXPORT command issued through the CLP:
db2 export to staff.ixf of ixf select * from userid.staff
For complete syntax and usage information, see the EXPORT command.
Detailed information about the Export Table notebook is provided through the
Control Center online help facility.
Related concepts:
v “Export Overview” on page 1
Related reference:
v “EXPORT ” on page 11
v “Export/Import/Load Utility File Formats” on page 293
v “ROLLBACK statement” in SQL Reference, Volume 2
The destination paths and base names of the exported XML files can be specified
with the XML TO and XMLFILE options. By default, exported XML files are
written to the path of the exported data file. The default base name for exported
XML files is the name of the exported data file, with an appended 3-digit
sequence number, and the .xml extension.
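For example (the path and table names are carried over from the examples below and are illustrative), an export that specifies XML TO but no XMLFILE option writes its XML documents to a file named t1export.del.001.xml in the specified XML path, with additional files (t1export.del.002.xml, and so on) used as needed:
EXPORT TO /mypath/t1export.del OF DEL XML TO /home/user/xmlpath
SELECT * FROM USER.T1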
Examples:
For the following examples, imagine a table USER.T1 containing four columns and
two rows:
C1 INTEGER
C2 XML
C3 VARCHAR(10)
C4 XML
Table 1. USER.T1
Row 1:
  C1: 2
  C2: <?xml version="1.0" encoding="UTF-8" ?><note time="12:00:00"><to>You</to><from> Me</from><heading>note1</heading> <body>Hello World!</body></note>
  C3: 'char1'
  C4: <?xml version="1.0" encoding="UTF-8" ?><note time="13:00:00"><to>Him</to><from> Her</from><heading>note2</heading> <body>Hello World!</body></note>
Row 2:
  C1: 4
  C2: NULL
  C3: 'char2'
  C4: <?xml version="1.0" encoding="UTF-8" ?><note time="14:00:00"><to>Us</to><from> Them</from><heading>note3</heading> <body>Hello World!</body></note>
Example 1:
Example 2:
The following command exports the contents of USER.T1 in DEL format to the file
″t1export.del″. XML documents contained in columns C2 and C4 are written to the
path ″/home/user/xmlpath″. The XML files are named with the base name
″xmldocs″, with multiple exported XML documents written to the same XML file.
The XMLSAVESCHEMA option indicates that XML schema information is saved
during the export procedure.
EXPORT TO /mypath/t1export.del OF DEL XML TO /home/user/xmlpath
XMLFILE xmldocs XMLSAVESCHEMA SELECT * FROM USER.T1
Example 3:
The following command is similar to Example 2, except that each exported XML
document is written to a separate XML file.
EXPORT TO /mypath/t1export.del OF DEL XML TO /home/user/xmlpath
XMLFILE xmldocs MODIFIED BY XMLINSEPFILES XMLSAVESCHEMA
SELECT * FROM USER.T1
Example 4:
Note: The result of this particular XQuery does not produce well-formed XML
documents. Therefore, the file exported above could not be directly
imported into an XML column.
Related concepts:
v “Importing XML data” on page 40
v “XML data specifier” on page 244
v “Native XML data store overview” in XML Guide
Related reference:
v “LOB and XML file behavior with regard to import and export” on page 7
v “EXPORT ” on page 11
LOB and XML file behavior with regard to import and export
LOB and XML files have certain shared behaviors with regard to the import and
export utilities.
When exporting data, if one or more LOB paths are specified with the LOBS TO
option, the export utility will cycle between the paths to write each successive LOB
value to the appropriate LOB file. Similarly, if one or more XML paths are specified
with the XML TO option, the export utility will cycle between the paths to write
each successive QDM (XQuery Data Model) instance to the appropriate XML file.
By default, LOB values and QDM instances are written to the same path to which
the exported relational data is written. Unless the LOBSINSEPFILES or
XMLINSEPFILES file type modifier is set, both LOB files and XML files can have
multiple values concatenated to the same file.
The LOBFILE option provides a means to specify the base name of the LOB files
generated by the export utility. Similarly, the XMLFILE option provides a means to
specify the base name of the XML files generated by the export utility. The default
LOB file base name is the name of the exported data file, with the extension .lob.
The default XML file base name is the name of the exported data file, with the
extension .xml. The full name of the exported LOB file or XML file therefore
consists of the base name, followed by a number extension that is padded to three
digits, and the extension .lob or .xml.
When importing data, a LOB Location Specifier (LLS) is compatible with an XML
target column, and an XML Data Specifier (XDS) is compatible with a LOB target
column. If the LOBS FROM option is not specified, the LOB files to import are
assumed to reside in the same path as the input relational data file. Similarly, if the
XML FROM option is not specified, the XML files to import are assumed to reside
in the same path as the input relational data file.
Example 1:
All LOB values are written to the file ″/mypath/t1export.del.001.lob″, and all
QDM instances are written to the file ″/mypath/t1export.del.001.xml″.
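The command for this example is not reproduced in this excerpt; a command of roughly this shape (the path is taken from the description above, and the table MYSCHEMA.T2 is a hypothetical table containing both LOB and XML columns) produces that layout, because no LOBS TO or XML TO path is given and the files default to the data file's path and base name:
db2 export to /mypath/t1export.del of del modified by lobsinfile
    select * from myschema.t2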
Example 2:
The first LOB value will be written to the file ″/lob1/t1export.del.001.lob″, the
second will be written to the file ″/lob2/t1export.del.002.lob″, the third will be
appended to ″/lob1/t1export.del.001.lob″, the fourth will be appended to
″/lob2/t1export.del.002.lob″, and so on.
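Again, the original command is not shown here; a sketch consistent with the description (table name hypothetical) names two LOB paths so that the utility cycles between them:
db2 export to /mypath/t1export.del of del
    lobs to /lob1/, /lob2/ modified by lobsinfile
    select * from myschema.t2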
Example 3:
The first QDM instance will be written to the file ″/xml1/xmlbase.001.xml″, the
second will be written to the file ″/xml2/xmlbase.002.xml″, the third will be
written to ″/xml1/xmlbase.003.xml″, the fourth will be written to
″/xml2/xmlbase.004.xml″, and so on.
Example 4:
The import utility will try to import an XML document from the file
″/lobpath/mylobfile.001.lob″, starting at file offset 123, with its length being 456
bytes.
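In that case the delimited data file carries an XML Data Specifier (XDS) whose attributes point into the LOB file; a representative record (values taken from the description above) would contain:
<XDS FIL='mylobfile.001.lob' OFF='123' LEN='456' />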
Related concepts:
v “XML data type” in XML Guide
v “Exporting XML data” on page 5
v “Importing XML data” on page 40
Using export with identity columns
Related concepts:
v “Identity columns” in Administration Guide: Planning
Recreating an exported table
If the column names specified in the index contain either '-' or '+' characters, the
index information is not collected and warning SQL27984W is returned. The export
utility completes and the data exported is not affected. The index information is
not saved in the IXF file. If you are recreating the table by using the IMPORT
CREATE command, the indexes are not recreated. You must create the indexes
separately, using the db2look utility.
During an IMPORT CREATE command, warning SQL27984W is returned when
some information has not been saved to the PC/IXF file during the export
operation. Some information is not saved to the PC/IXF file in the following
situations:
v index column names contain hexadecimal values of 0x2B or 0x2D
v table contains XML columns
v table is multidimensional clustered
v table contains a table partitioning key
v index name that is longer than 128 bytes due to codepage conversion
v table is a protected table
v contains action strings other than SELECT * FROM <TABLE-NAME>
v method N is specified
The export operation fails if the data you are exporting exceeds the space available
on the file system on which the exported file is created. In this case, you should
limit the amount of data selected by specifying conditions on the WHERE clause,
so that the export file fits on the target file system. You can invoke the export
utility multiple times to export all of the data.
The DEL and ASC file formats do not contain descriptions of the target table, but
they do contain the record data. To recreate a table with data stored in these file
formats, create the target table, and then use the load, or import utility to populate
the table from these files. The db2look utility can be used to capture the original
table definitions, and to generate the corresponding data definition language
(DDL).
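For illustration (the database, table, and file names are hypothetical), a typical sequence is to capture the DDL with db2look, run it against the target database, and then populate the table from the DEL file:
db2look -d sample -t staff -e -o staff_ddl.sql
db2 connect to target_db
db2 -tvf staff_ddl.sql
db2 import from staff.del of del insert into staff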
Related concepts:
v “Export Overview” on page 1
v “Import Overview” on page 35
v “Using import to recreate an exported table” on page 45
Related reference:
v “db2look - DB2 statistics and DDL extraction tool command” in Command
Reference
v “Export/Import/Load Utility File Formats” on page 293
v “EXPORT ” on page 11
v “IMPORT ” on page 49
Exporting large objects (LOBS)
Note: The IXF file format does not store the LOB options of the column, such as
whether or not the LOB column is logged. This means that the import utility
cannot recreate a table containing a LOB column that is defined to be 1GB
or larger.
A LOB Location Specifier (LLS) is used to store multiple LOBs in a single file when
exporting LOB information. When exporting data using the lobsinfile modifier,
the export utility places each LOB value into one of the LOB files.
There might be multiple LOBs per LOB file and multiple LOB files in each LOB
path. The data file will contain the LLS records. Use the lobsinsepfiles file type
modifier to write each LOB to a separate file.
An LLS is a string indicating where LOB data can be found within a file. The
format of the LLS is filename.ext.nnn.mmm/, where filename.ext is the name of
the file that contains the LOB, nnn is the offset of the LOB within the file
(measured in bytes), and mmm is the length of the LOB (in bytes). For example, an
LLS of db2exp.001.123.456/ indicates that the LOB is located in the file
db2exp.001, begins at an offset of 123 bytes into the file, and is 456 bytes long. If
the indicated size in the LLS is 0, the LOB is considered to have a length of 0. If
the length is -1, the LOB is considered to be NULL and the offset and file name are
ignored.
Related reference:
v “EXPORT ” on page 11
v “Large objects (LOBs)” in SQL Reference, Volume 1
v “db2Export - Export data from a database” on page 19
EXPORT
Exports data from a database to one of several external file formats. The user
specifies the data to be exported by supplying an SQL SELECT statement, or by
providing hierarchical information for typed tables.
Authorization:
Required connection:
Command syntax:
EXPORT TO filename OF filetype
   [ LOBS TO lob-path [ ,lob-path ... ] ] [ LOBFILE filename [ ,filename ... ] ]
   [ XML TO xml-path [ ,xml-path ... ] ] [ XMLFILE filename [ ,filename ... ] ]
   [ MODIFIED BY filetype-mod ... ] [ XMLSAVESCHEMA ]
   [ METHOD N ( column-name [ ,column-name ... ] ) ]
   [ MESSAGES message-file ]
   { select-statement | XQUERY xquery-statement |
     HIERARCHY { STARTING sub-table-name | traversal-order-list } [ where-clause ] }
traversal-order-list:
   ( sub-table-name [ ,sub-table-name ... ] )
Command parameters:
HIERARCHY traversal-order-list
Export a sub-hierarchy using the specified traverse order. All sub-tables
must be listed in PRE-ORDER fashion. The first sub-table name is used as
the target table name for the SELECT statement.
HIERARCHY STARTING sub-table-name
Using the default traverse order (OUTER order for ASC, DEL, or WSF files,
or the order stored in PC/IXF data files), export a sub-hierarchy starting
from sub-table-name.
LOBFILE filename
Specifies one or more base file names for the LOB files. When name space
is exhausted for the first name, the second name is used, and so on. The
maximum number of file names that can be specified is 999. This will
implicitly activate the LOBSINFILE behavior.
When creating LOB files during an export operation, file names are
constructed by appending the current base name from this list to the
current path (from lob-path), and then appending a 3-digit sequence
number and the three character identifier lob. For example, if the current
LOB path is the directory /u/foo/lob/path/, and the current LOB file name
is bar, the LOB files created will be /u/foo/lob/path/bar.001.lob,
/u/foo/lob/path/bar.002.lob, and so on.
LOBS TO lob-path
Specifies one or more paths to directories in which the LOB files are to be
stored. There will be at least one file per LOB path, and each file will
contain at least one LOB. The maximum number of paths that can be
specified is 999. This will implicitly activate the LOBSINFILE behavior.
METHOD N column-name
Specifies one or more column names to be used in the output file. If this
parameter is not specified, the column names in the table are used. This
parameter is valid only for WSF and IXF files, but is not valid when
exporting hierarchical data.
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the export
utility.
OF filetype
Specifies the format of the data in the output file:
The schema and name portions of the SQL identifier are stored as the
″OBJECTSCHEMA″ and ″OBJECTNAME″ values in the row of the
SYSCAT.XSROBJECTS catalog table corresponding to the XML schema.
The XMLSAVESCHEMA option is not compatible with XQuery sequences that
do not produce well-formed XML documents.
Usage notes:
v Be sure to complete all table operations and release all locks before starting an
export operation. This can be done by issuing a COMMIT after closing all
cursors opened WITH HOLD, or by issuing a ROLLBACK.
v Table aliases can be used in the SELECT statement.
v The messages placed in the message file include the information returned from
the message retrieval service. Each message begins on a new line.
v The export utility produces a warning message whenever a character column
with a length greater than 254 is selected for export to DEL format files.
v PC/IXF import should be used to move data between databases. If character
data containing row separators is exported to a delimited ASCII (DEL) file and
processed by a text transfer program, fields containing the row separators will
shrink or expand.
v The file copying step is not necessary if the source and the target databases are
both accessible from the same client.
v DB2 Connect can be used to export tables from DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF export is
supported.
v The export utility will not create multiple-part PC/IXF files when invoked from
an AIX system.
v The export utility will store the NOT NULL WITH DEFAULT attribute of the
table in an IXF file if the SELECT statement provided is in the form SELECT *
FROM tablename.
v When exporting typed tables, subselect statements can only be expressed by
specifying the target table name and the WHERE clause. Fullselect and
select-statement cannot be specified when exporting a hierarchy.
v For file formats other than IXF, it is recommended that the traversal order list be
specified, because it tells DB2 how to traverse the hierarchy, and what sub-tables
to export. If this list is not specified, all tables in the hierarchy are exported, and
the default order is the OUTER order. The alternative is to use the default order,
which is the order given by the OUTER function.
v Use the same traverse order during an import operation. The load utility does
not support loading hierarchies or sub-hierarchies.
v When exporting data from a table that has protected rows, the LBAC credentials
held by the session authorization id might limit the rows that are exported.
Rows that the session authorization ID does not have read access to will not be
exported. No error or warning is given.
v If the LBAC credentials held by the session authorization id do not allow
reading from one or more protected columns included in the export then the
export fails and an error (SQLSTATE 42512) is returned.
v Export packages are bound using DATETIME ISO format; thus, all
date/time/timestamp values are converted into ISO format when cast to a string
representation. Since the CLP packages are bound using DATETIME LOC format
(locale-specific format), you might see inconsistent behavior between the CLP and
export if the CLP DATETIME format is different from ISO. For instance, a CLP
SELECT statement whose predicate compares char(col2) with the locale-formatted
date string returns the expected rows, but an export command using the same
select clause does not:
db2 export to test.del of del select col2 from test
where char(col2)=’05/10/2005’;
Number of rows exported: 0
Now, replacing the LOCALE date format with ISO format gives the expected
results:
db2 export to test.del of del select col2 from test
where char(col2)=’2005-05-10’;
Number of rows exported: 3
Related concepts:
v “Export Overview” on page 1
v “Privileges, authorities and authorization required to use export” on page 3
Related tasks:
v “Exporting data” on page 4
Related reference:
v “ADMIN_CMD procedure – Run administrative commands” in Administrative
SQL Routines and Views
v “EXPORT command using the ADMIN_CMD procedure” on page 15
v “Export Sessions - CLP Examples” on page 33
v “LOB and XML file behavior with regard to import and export” on page 7
EXPORT command using the ADMIN_CMD procedure
Authorization:
Required connection:
Command syntax:
EXPORT TO filename OF filetype
   [ LOBS TO lob-path [ ,lob-path ... ] ] [ LOBFILE filename [ ,filename ... ] ]
   [ XML TO xml-path [ ,xml-path ... ] ] [ XMLFILE filename [ ,filename ... ] ]
   [ MODIFIED BY filetype-mod ... ] [ XMLSAVESCHEMA ]
   [ METHOD N ( column-name [ ,column-name ... ] ) ]
   { select-statement | XQUERY xquery-statement |
     HIERARCHY { STARTING sub-table-name | traversal-order-list } [ where-clause ] }
traversal-order-list:
   ( sub-table-name [ ,sub-table-name ... ] )
Command parameters:
HIERARCHY traversal-order-list
Export a sub-hierarchy using the specified traverse order. All sub-tables
must be listed in PRE-ORDER fashion. The first sub-table name is used as
the target table name for the SELECT statement.
HIERARCHY STARTING sub-table-name
Using the default traverse order (OUTER order for ASC, DEL, or WSF files,
or the order stored in PC/IXF data files), export a sub-hierarchy starting
from sub-table-name.
LOBFILE filename
Specifies one or more base file names for the LOB files. When name space
is exhausted for the first name, the second name is used, and so on. The
maximum number of file names that can be specified is 999. This will
implicitly activate the LOBSINFILE behavior.
When creating LOB files during an export operation, file names are
constructed by appending the current base name from this list to the
current path (from lob-path), and then appending a 3-digit sequence
number and the three character identifier lob. For example, if the current
LOB path is the directory /u/foo/lob/path/, and the current LOB file name
is bar, the LOB files created will be /u/foo/lob/path/bar.001.lob,
/u/foo/lob/path/bar.002.lob, and so on.
LOBS TO lob-path
Specifies one or more paths to directories in which the LOB files are to be
stored. There will be at least one file per LOB path, and each file will
contain at least one LOB. The maximum number of paths that can be
specified is 999. This will implicitly activate the LOBSINFILE behavior.
METHOD N column-name
Specifies one or more column names to be used in the output file. If this
parameter is not specified, the column names in the table are used. This
parameter is valid only for WSF and IXF files, but is not valid when
exporting hierarchical data.
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the export
utility.
OF filetype
Specifies the format of the data in the output file:
v DEL (delimited ASCII format), which is used by a variety of database
manager and file manager programs.
v WSF (work sheet format), which is used by programs such as:
– Lotus 1-2-3
– Lotus Symphony
When exporting BIGINT or DECIMAL data, only values that fall within
the range of type DOUBLE can be exported accurately. Although values
that do not fall within this range are also exported, importing or loading
these values back might result in incorrect data, depending on the
operating system.
v IXF (integrated exchange format, PC version), in which most of the table
attributes, as well as any existing indexes, are saved in the IXF file,
except when columns are specified in the SELECT statement. With this
format, the table can be recreated, while with the other file formats, the
table must already exist before data can be imported into it.
select-statement
Specifies the SELECT or XQUERY statement that will return the data to be
exported. If the statement causes an error, a message is written to the
message file (or to standard output). If the error code is one of SQL0012W,
SQL0347W, SQL0360W, SQL0437W, or SQL1824W, the export operation
continues; otherwise, it stops.
TO filename
If the name of a file that already exists is specified, the export utility
overwrites the contents of the file; it does not append the information.
XMLFILE filename
Specifies one or more base file names for the XML files. When name space
is exhausted for the first name, the second name is used, and so on.
When creating XML files during an export operation, file names are
constructed by appending the current base name from this list to the
current path (from xml-path), appending a 3-digit sequence number, and
appending the three character identifier xml. For example, if the current
XML path is the directory /u/foo/xml/path/, and the current XML file
name is bar, the XML files created will be /u/foo/xml/path/bar.001.xml,
/u/foo/xml/path/bar.002.xml, and so on.
XML TO xml-path
Specifies one or more paths to directories in which the XML files are to be
stored. There will be at least one file per XML path, and each file will
contain at least one XQuery Data Model (QDM) instance. If more than one
path is specified, then QDM instances are distributed evenly among the
paths.
XMLSAVESCHEMA
Specifies that XML schema information should be saved for all XML
columns. For each exported XML document that was validated against an
XML schema when it was inserted, the fully qualified SQL identifier of that
schema will be stored as an (SCH) attribute inside the corresponding XML
Data Specifier (XDS). If the exported document was not validated against
an XML schema or the schema object no longer exists in the database, an
SCH attribute will not be included in the corresponding XDS.
The schema and name portions of the SQL identifier are stored as the
″OBJECTSCHEMA″ and ″OBJECTNAME″ values in the row of the
SYSCAT.XSROBJECTS catalog table corresponding to the XML schema.
The XMLSAVESCHEMA option is not compatible with XQuery sequences that
do not produce well-formed XML documents.
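For example (the file name, offset, length, and schema identifier are illustrative), an XDS written with XMLSAVESCHEMA might look like the following, where the SCH attribute carries the fully qualified SQL identifier of the registered XML schema:
<XDS FIL='xmldocs.001.xml' OFF='0' LEN='344' SCH='S1.SCHEMA_A' />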
Usage notes:
v Be sure to complete all table operations and release all locks before starting an
export operation. This can be done by issuing a COMMIT after closing all
cursors opened WITH HOLD, or by issuing a ROLLBACK.
v Table aliases can be used in the SELECT statement.
v The messages placed in the message file include the information returned from
the message retrieval service. Each message begins on a new line.
v The export utility produces a warning message whenever a character column
with a length greater than 254 is selected for export to DEL format files.
v PC/IXF import should be used to move data between databases. If character
data containing row separators is exported to a delimited ASCII (DEL) file and
processed by a text transfer program, fields containing the row separators will
shrink or expand.
v The file copying step is not necessary if the source and the target databases are
both accessible from the same client.
v DB2 Connect can be used to export tables from DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF export is
supported.
v The export utility will not create multiple-part PC/IXF files when invoked from
an AIX system.
v The export utility will store the NOT NULL WITH DEFAULT attribute of the
table in an IXF file if the SELECT statement provided is in the form SELECT *
FROM tablename.
v When exporting typed tables, subselect statements can only be expressed by
specifying the target table name and the WHERE clause. Fullselect and
select-statement cannot be specified when exporting a hierarchy.
v For file formats other than IXF, it is recommended that the traversal order list be
specified, because it tells DB2 how to traverse the hierarchy, and what sub-tables
to export. If this list is not specified, all tables in the hierarchy are exported, and
the default order is the OUTER order. The alternative is to use the default order,
which is the order given by the OUTER function.
v Use the same traverse order during an import operation. The load utility does
not support loading hierarchies or sub-hierarchies.
v When exporting data from a table that has protected rows, the LBAC credentials
held by the session authorization id might limit the rows that are exported.
Rows that the session authorization ID does not have read access to will not be
exported. No error or warning is given.
v Export packages are bound using DATETIME ISO format; thus, all
date/time/timestamp values are converted into ISO format when cast to a string
representation. Since the CLP packages are bound using DATETIME LOC format
(locale-specific format), you might see inconsistent behavior between the CLP and
export if the CLP DATETIME format is different from ISO. For instance, a CLP
SELECT statement whose predicate compares char(col2) with the locale-formatted
date string returns the expected rows, but an export command using the same
select clause does not:
db2 export to test.del of del select col2 from test
where char(col2)=’05/10/2005’;
Number of rows exported: 0
Now, replacing the LOCALE date format with ISO format gives the expected
results:
db2 export to test.del of del select col2 from test
where char(col2)=’2005-05-10’;
Number of rows exported: 3
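For illustration (the path and table name are carried over from the earlier examples and are assumptions), the same export can be driven from any SQL interface by passing the command text to the ADMIN_CMD procedure; because the command runs on the server, the output file and any LOB or XML paths must refer to paths on the database server, not on the client:
CALL SYSPROC.ADMIN_CMD('EXPORT TO /home/user/t1export.del OF DEL
   SELECT * FROM USER.T1')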
Related concepts:
v “Privileges, authorities and authorization required to use export” on page 3
Related reference:
v “ADMIN_CMD procedure – Run administrative commands” in Administrative
SQL Routines and Views
v “ADMIN_GET_MSGS table function – Retrieve messages generated by a data
movement utility that is executed through the ADMIN_CMD procedure” in
Administrative SQL Routines and Views
v “ADMIN_REMOVE_MSGS procedure – Clean up messages generated by a data
movement utility that is executed through the ADMIN_CMD procedure” in
Administrative SQL Routines and Views
v “db2Export - Export data from a database” on page 19
v “Miscellaneous variables” in Performance Guide
v “db2pd - Monitor and troubleshoot DB2 database command” in Command
Reference
db2Export - Export data from a database
Authorization:
v sysadm
v dbadm
Required connection:
SQL_API_RC SQL_API_FN
db2gExport (
db2Uint32 versionNumber,
void * pParmStruct,
struct sqlca * pSqlca);
SQL_METH_N
Names. Specify column names to be used in the output file.
SQL_METH_D
Default. Existing column names from the table are to be used in
the output file. In this case, the number of columns and the
column specification array are both ignored. The column names are
derived from the output of the SELECT statement specified in
pActionString.
piActionString
Input. Pointer to an sqllob structure containing a valid dynamic SQL
SELECT statement. The structure contains a 4-byte long field, followed by
the characters that make up the SELECT statement. The SELECT statement
specifies the data to be extracted from the database and written to the
external file.
The columns for the external file (from piDataDescriptor), and the database
columns from the SELECT statement, are matched according to their
respective list/structure positions. The first column of data selected from
the database is placed in the first column of the external file, and its
column name is taken from the first element of the external column array.
piFileType
Input. A string that indicates the format of the data within the external file.
Supported external file formats (defined in sqlutil header file) are:
SQL_DEL
Delimited ASCII, for exchange with dBase, BASIC, and the IBM
Personal Decision Series programs, and many other database
managers and file managers.
SQL_WSF
Worksheet formats for exchange with Lotus Symphony and 1-2-3
programs.
SQL_IXF
PC version of the Integrated Exchange Format, the preferred
method for exporting data from a table. Data exported to this file
format can later be imported or loaded into the same table or into
another database manager table.
piFileTypeMod
Input. A pointer to an sqldcol structure containing a 2-byte long field,
followed by an array of characters that specify one or more processing
options. If this pointer is NULL, or the structure pointed to has zero
characters, this action is interpreted as selection of a default specification.
Not all options can be used with all of the supported file types. See related
link below: ″File type modifiers for the export utility.″
piMsgFileName
Input. A string containing the destination for error, warning, and
informational messages returned by the utility. It can be the path and the
name of an operating system file or a standard device. If the file already
exists, the information is appended. If it does not exist, a file is created.
iCallerAction
Input. An action requested by the caller. Valid values (defined in sqlutil
header file, located in the include directory) are:
SQLU_INITIAL
Initial call. This value must be used on the first call to the API. If
the initial call or any subsequent call returns and requires the
calling application to perform some action prior to completing the
requested export operation, the caller action must be set to one of
the following:
SQLU_CONTINUE
Continue processing. This value can only be used on subsequent
calls to the API, after the initial call has returned with the utility
requesting user input (for example, to respond to an end of tape
condition). It specifies that the user action requested by the utility
has completed, and the utility can continue processing the initial
request.
SQLU_TERMINATE
Terminate processing. This value can only be used on subsequent
calls to the API, after the initial call has returned with the utility
requesting user input (for example, to respond to an end of tape
condition). It specifies that the user action requested by the utility
was not performed, and the utility is to terminate processing the
initial request.
poExportInfoOut
A pointer to the db2ExportOut structure.
piExportInfoIn
Input. Pointer to the db2ExportIn structure.
piXmlPathList
Input. Pointer to an sqlu_media_list structure with its media_type field set
to SQLU_LOCAL_MEDIA, and its sqlu_media_entry structure listing paths
on the client where the XML files are to be stored. Exported XML data will
be distributed evenly among all the paths listed in the sqlu_media_entry
structure.
piXmlFileList
Input. Pointer to an sqlu_media_list structure with its media_type field set
to SQLU_CLIENT_LOCATION, and its sqlu_location_entry structure
containing base file names.
When the name space is exhausted using the first name in this list, the API
will use the second name, and so on. When creating XML files during an
export operation, file names are constructed by appending the current base
name from this list to the current path (from piXmlFileList), and then
appending a 3-digit sequence number and the .xml extension. For example,
if the current XML path is the directory /u/foo/xml/path, the current
XML file name is bar, and the XMLINSEPFILES file type modifier is set,
then the created XML files will be /u/foo/xml/path/bar.001.xml,
/u/foo/xml/path/bar.002.xml, and so on. If the XMLINSEPFILES file type
modifier is not set, then all the XML documents will be concatenated and
put into one file /u/foo/xml/path/bar.001.xml
Usage notes:
Before starting an export operation, you must complete all table operations and
release all locks in one of two ways:
v Close all open cursors that were defined with the WITH HOLD clause, and
commit the data changes by executing the COMMIT statement.
v Roll back the data changes by executing the ROLLBACK statement.
The messages placed in the message file include the information returned from the
message retrieval service. Each message begins on a new line.
If the export utility produces warnings, the message will be written out to a
message file, or standard output if one is not specified.
If the db2uexpm.bnd module or any other shipped .bnd files are bound manually,
the format option on the binder must not be used.
DB2 Connect can be used to export tables from DRDA servers such as DB2 for
z/OS and OS/390, DB2 for VM and VSE, and DB2 for iSeries. Only PC/IXF export
is supported.
PC/IXF import should be used to move data between databases. If character data
containing row separators is exported to a delimited ASCII (DEL) file and
processed by a text transfer program, fields containing the row separators will
shrink or expand.
The export utility will not create multiple-part PC/IXF files when invoked from an
AIX system.
Index definitions for a table are included in the PC/IXF file when the contents of a
single database table are exported to a PC/IXF file with a pActionString parameter
beginning with SELECT * FROM tablename, and the piDataDescriptor parameter
specifying default names. Indexes are not saved for views, or if the SELECT clause
of the piActionString includes a join. A WHERE clause, a GROUP BY clause, or a
HAVING clause in the piActionString parameter will not prevent the saving of
indexes. In all of these cases, when exporting from typed tables, the entire
hierarchy must be exported.
The export utility will store the NOT NULL WITH DEFAULT attribute of the table
in an IXF file if the SELECT statement provided is in the form: SELECT * FROM
tablename.
For file formats other than IXF, it is recommended that the traversal order list be
specified, because it tells DB2 how to traverse the hierarchy, and what sub-tables to
export. If this list is not specified, all tables in the hierarchy are exported, and the
default order is the OUTER order. The alternative is to use the default order, which
is the order given by the OUTER function.
Note: Use the same traverse order during an import operation. The load utility
does not support loading hierarchies or sub-hierarchies.
To ensure that a consistent copy of the table and the corresponding files referenced
by the DATALINK columns are copied for export, do the following:
1. Issue the command: QUIESCE TABLESPACES FOR TABLE tablename SHARE.
This ensures that no update transactions are in progress when EXPORT is run.
2. Issue the EXPORT command.
3. Run the dlfm_export utility at each Data Links server. Input to the dlfm_export
utility is the control file name, which is generated by the export utility. This
produces a tar (or equivalent) archive of the files listed within the control file.
dlfm_export does not capture the ACLs information of the files that are
archived.
4. Issue the command: QUIESCE TABLESPACES FOR TABLE tablename RESET.
This makes the table available for updates.
EXPORT is executed as an SQL application. The rows and columns satisfying the
SELECT statement conditions are extracted from the database. For the DATALINK
columns, the SELECT statement should not specify any scalar function.
CONTINUE EXPORT
STOP EXPORT
If this parameter is NULL, or a value for dcoldata has not been specified,
the utility uses the column names from the database table.
msgfile
File, path, or device name where error and warning messages are to be
sent.
number
A host variable that will contain the number of exported rows.
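The following is a minimal C sketch of calling db2Export, not a complete sample: the structure and constant names used (db2ExportStruct, db2ExportOut, db2Version900, SQL_IXF, SQL_METH_D, SQLU_INITIAL) are assumed to match the db2ApiDf.h and sqlutil.h headers shipped with your DB2 version, and the application is assumed to be connected to the database already (for example, through embedded SQL CONNECT or CLI). Verify the header definitions before use.
/* Minimal sketch of invoking db2Export from C (assumptions noted above). */
#include <stdlib.h>
#include <string.h>
#include <sqlutil.h>
#include <db2ApiDf.h>

int export_staff(void)
{
    struct sqlca sqlca;
    struct db2ExportStruct exportParms;
    struct db2ExportOut exportOut;
    struct sqldcol dataDescriptor;
    struct sqllob *pAction;
    const char *stmt = "SELECT * FROM userid.staff";
    char dataFile[] = "staff.ixf";
    char msgFile[]  = "export.msg";
    int rc;

    memset(&exportParms, 0, sizeof exportParms);
    memset(&sqlca, 0, sizeof sqlca);
    memset(&exportOut, 0, sizeof exportOut);

    /* piActionString is an sqllob: a 4-byte length followed by the
       text of the SELECT statement. */
    pAction = (struct sqllob *) malloc(sizeof(struct sqllob) + strlen(stmt));
    pAction->length = strlen(stmt);
    memcpy(pAction->data, stmt, pAction->length);

    /* SQL_METH_D: derive column names from the SELECT statement output. */
    dataDescriptor.dcolmeth = SQL_METH_D;
    dataDescriptor.dcolnum  = 0;

    exportParms.piDataFileName   = dataFile;       /* output file            */
    exportParms.piFileType       = SQL_IXF;        /* IXF format             */
    exportParms.piDataDescriptor = &dataDescriptor;
    exportParms.piActionString   = pAction;
    exportParms.piFileTypeMod    = NULL;           /* no MODIFIED BY options */
    exportParms.piMsgFileName    = msgFile;        /* message file           */
    exportParms.iCallerAction    = SQLU_INITIAL;
    exportParms.poExportInfoOut  = &exportOut;

    db2Export(db2Version900, &exportParms, &sqlca);

    /* A negative SQLCODE indicates an error; on success,
       exportOut.oRowsExported holds the number of rows written. */
    rc = (sqlca.sqlcode < 0) ? -1 : 0;
    free(pAction);
    return rc;
}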
Related tasks:
v “Exporting data” on page 4
Related reference:
v “sqlchar data structure” in Administrative API Reference
Related samples:
v “expsamp.sqb -- Export and import tables with table data to a DRDA database
(IBM COBOL)”
v “impexp.sqb -- Export and import tables with table data (IBM COBOL)”
v “tload.sqb -- How to export and load table data (IBM COBOL)”
v “tbmove.sqc -- How to move table data (C)”
v “tbmove.sqC -- How to move table data (C++)”
Each path contains at least one file that contains at least one LOB pointed to by a
Lob Location Specifier (LLS) in the data file. The LLS is a string representation of
the location of a LOB in a file stored in the LOB file path. The format of an LLS is
filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the
LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length
of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in
the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456
bytes long.
If you specify the “lobsinfile” modifier when using EXPORT, the LOB data is
placed in the locations specified by the LOBS TO clause. Otherwise the LOB data
is sent to the data file directory. The LOBS TO clause specifies one or more paths
to directories in which the LOB files are to be stored. There will be at least one
file per LOB path, and each file will contain at least one LOB. The LOBS TO or
LOBFILE options will implicitly activate the LOBSINFILE behavior.
To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is
treated as a 0 length LOB. For null LOBs with a length of -1, the offset and the file
name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
xmlinsepfiles Each XQuery Data Model (QDM) instance is written to a separate file. By default,
multiple values are concatenated together in the same file.
lobsinsepfiles Each LOB value is written to a separate file. By default, multiple values are
concatenated together in the same file.
xmlnodeclaration QDM instances are written without an XML declaration tag. By default, QDM
instances are exported with an XML declaration tag at the beginning that includes
an encoding attribute.
Table 2. Valid file type modifiers for the export utility: All file formats (continued)
Modifier Description
xmlchar QDM instances are written in the character codepage. Note that the character
codepage is the value specified by the codepage file type modifier, or the
application codepage if it is not specified. By default, QDM instances are written
out in Unicode.
xmlgraphic If the xmlgraphic modifier is specified with the EXPORT command, the exported
XML document will be encoded in the UTF-16 code page regardless of the
application code page or the codepage file type modifier.
Table 3. Valid file type modifiers for the export utility: DEL (delimited ASCII) file format
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation
mark ("). The specified character is used in place of double quotation marks to
enclose a character string.2 If you want to explicitly specify the double quotation
mark as the character string delimiter, it should be specified as follows:
modified by chardel""
The single quotation mark (') can also be specified as a character string delimiter
as follows:
modified by chardel''
codepage=x x is an ASCII character string. The value is interpreted as the code page of the
data in the output data set. Converts character data from this code page to the
application code page during the export operation.
For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to the
range of x00 to x3F, inclusive. The codepage modifier cannot be used with the
lobsinfile modifier.
coldelx x is a single character column delimiter. The default value is a comma (,). The
specified character is used in place of a comma to signal the end of a column.2
In the following example, coldel; causes the export utility to use the semicolon
character (;) as a column delimiter for the exported data:
db2 "export to temp of del modified by coldel;
select * from staff where dept = 20"
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank
space instead of a plus sign (+). The default action is to prefix positive decimal
values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The
default value is a period (.). The specified character is used in place of a period as
a decimal point character.2
nochardel Column data will not be surrounded by character delimiters. This option should
not be specified if the data is intended to be imported or loaded using DB2. It is
provided to support vendor data files that do not have character delimiters.
Improper usage might result in data loss or corruption.
This option cannot be specified with chardelx or nodoubledel. These are mutually
exclusive options.
nodoubledel Suppresses recognition of double character delimiters.2
Table 3. Valid file type modifiers for the export utility: DEL (delimited ASCII) file format (continued)
Modifier Description
striplzeros Removes the leading zeros from all exported decimal columns.
In the first export operation, the content of the exported file data will be
+00000000000000000000000000001.10. In the second operation, which is identical
to the first except for the striplzeros modifier, the content of the exported file
data will be +1.10.
Table 3. Valid file type modifiers for the export utility: DEL (delimited ASCII) file format (continued)
Modifier Description
timestampformat=″x″ x is the format of the time stamp in the source file.4 Valid time stamp elements
are:
YYYY - Year (four digits ranging from 0000 - 9999)
M - Month (one or two digits ranging from 1 - 12)
MM - Month (two digits ranging from 01 - 12;
mutually exclusive with M and MMM)
MMM - Month (three-letter case-insensitive abbreviation for
the month name; mutually exclusive with M and MM)
D - Day (one or two digits ranging from 1 - 31)
DD - Day (two digits ranging from 1 - 31; mutually exclusive with D)
DDD - Day of the year (three digits ranging from 001 - 366;
mutually exclusive with other day or month elements)
H - Hour (one or two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24 for a 24 hour system)
HH - Hour (two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24 for a 24 hour system;
mutually exclusive with H)
M - Minute (one or two digits ranging from 0 - 59)
MM - Minute (two digits ranging from 0 - 59;
mutually exclusive with M, minute)
S - Second (one or two digits ranging from 0 - 59)
SS - Second (two digits ranging from 0 - 59;
mutually exclusive with S)
SSSSS - Second of the day after midnight (5 digits
ranging from 00000 - 86399; mutually
exclusive with other time elements)
UUUUUU - Microsecond (6 digits ranging from 000000 - 999999;
mutually exclusive with all other microsecond elements)
UUUUU - Microsecond (5 digits ranging from 00000 - 99999,
maps to range from 000000 - 999990;
mutually exclusive with all other microsecond elements)
UUUU - Microsecond (4 digits ranging from 0000 - 9999,
maps to range from 000000 - 999900;
mutually exclusive with all other microsecond elements)
UUU - Microsecond (3 digits ranging from 000 - 999,
maps to range from 000000 - 999000;
mutually exclusive with all other microsecond elements)
UU - Microsecond (2 digits ranging from 00 - 99,
maps to range from 000000 - 990000;
mutually exclusive with all other microsecond elements)
U - Microsecond (1 digit ranging from 0 - 9,
maps to range from 000000 - 900000;
mutually exclusive with all other microsecond elements)
TT - Meridian indicator (AM or PM)
TT - Meridian indicator (AM or PM)
The MMM element will produce the following values: ’Jan’, ’Feb’, ’Mar’, ’Apr’,
’May’, ’Jun’, ’Jul’, ’Aug’, ’Sep’, ’Oct’, ’Nov’, and ’Dec’. ’Jan’ is equal to month 1,
and ’Dec’ is equal to month 12.
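As an illustration (the file, table, and column names are hypothetical), an export that writes time stamps with a meridian indicator could use this modifier as follows:
db2 export to schedule.del of del
    modified by timestampformat="yyyy.mm.dd hh:mm tt"
    select * from project_schedule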
Table 4. Valid file type modifiers for the export utility: IXF file format
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the
data in the output data set. Converts character data from this code page to the
application code page during the export operation.
For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to the
range of x00 to x3F, inclusive. The codepage modifier cannot be used with the
lobsinfile modifier.
Table 5. Valid file type modifiers for the export utility: WSF file format
Modifier Description
1 Creates a WSF file that is compatible with Lotus 1-2-3 Release 1, or Lotus 1-2-3
Release 1a.5 This is the default.
2 Creates a WSF file that is compatible with Lotus Symphony Release 1.0.5
3 Creates a WSF file that is compatible with Lotus 1-2-3 Version 2, or Lotus
Symphony Release 1.1.5
4 Creates a WSF file containing DBCS characters.
Notes:
1. The export utility does not issue a warning if an attempt is made to use
unsupported file types with the MODIFIED BY option. If this is attempted, the
export operation fails, and an error code is returned.
2. Delimiter restrictions for moving data lists restrictions that apply to the
characters that can be used as delimiter overrides.
3. The export utility normally writes
v date data in YYYYMMDD format
v char(date) data in "YYYY-MM-DD" format
v time data in "HH.MM.SS" format
v time stamp data in "YYYY-MM-DD-HH.MM.SS.uuuuuu" format
Data contained in any datetime columns specified in the SELECT statement
for the export operation will also be in these formats.
4. For time stamp formats, care must be taken to avoid ambiguity between the
month and the minute descriptors, since they both use the letter M. A month
field must be adjacent to other date fields. A minute field must be adjacent to
other time fields. Following are some ambiguous time stamp formats:
"M" (could be a month, or a minute)
"M:M" (Which is which?)
"M:YYYY:M" (Both are interpreted as month.)
"S:M:YYYY" (adjacent to both a time value and a date value)
In ambiguous cases, the utility will report an error message, and the operation
will fail.
Following are some unambiguous time stamp formats:
"M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
5. These files can also be directed to a specific product by specifying an L for
Lotus 1-2-3, or an S for Symphony in the filetype-mod parameter string. Only
one value or product designator can be specified.
The LOBSINFILE file type modifier must be specified in order to have LOB
files generated.
11. The export utility appends a numeric identifier to each LOB file or XML file.
The identifier is a 3-digit, zero-padded sequence value, starting at .001. After
the 999th LOB file or XML file, the identifier is no longer padded with zeroes
(for example, the 1000th LOB file or XML file has an extension of .1000.lob or
.1000.xml). For example, a generated LOB file would have a name in the format
myfile.del.001.lob.
12. It is possible to have the export utility export QDM instances that are not
well-formed documents by specifying an XQuery. However, you will not be
able to import or load these exported documents directly into an XML column,
because XML columns can contain only well-formed XML documents.
Related reference:
v “Delimiter restrictions for moving data” on page 259
v “db2Export - Export data from a database” on page 19
v “EXPORT ” on page 11
The following example shows how to export the information about employees in
Department 20 from the STAFF table in the SAMPLE database (to which the user
must be connected) to awards.ixf, with the output in IXF format:
db2 export to awards.ixf of ixf messages msgs.txt select * from staff
where dept = 20
The following example shows how to export LOBs to a DEL file, specifying a
second directory for files that might not fit into the first directory:
db2 export to myfile.del of del
lobs to /db2exp1/, /db2exp2/ modified by lobsinfile
select * from emp_photo
The following example shows how to export data to a DEL file, using a single
quotation mark as the string delimiter, a semicolon as the column delimiter, and a
comma as the decimal point. The same convention should be used when importing
data back into the database:
db2 export to myfile.del of del
modified by chardel’’ coldel; decpt,
select * from staff
Related concepts:
v “Export Overview” on page 1
Related tasks:
v “Exporting data” on page 4
Related reference:
v “EXPORT ” on page 11
Chapter 2. Import
This chapter describes the DB2 import utility, which uses the SQL INSERT
statement to write data from an input file into a table or view. If the target table or
view already contains data, you can either replace or append to the existing data.
For information about importing data from typed tables, see “Moving data
between typed tables” on page 260. For information about importing data from a
file on the DB2 Connect workstation to a DRDA server database, and the reverse,
see “Moving data with DB2 Connect” on page 245.
Import Overview
The import utility inserts data from an input file into a table or updatable view. If
the table or view receiving the imported data already contains data, you can either
replace or append to the existing data.
Note: Specifying target table column names or a specific importing method makes
importing to a remote database slower.
Related concepts:
v “Moving data between typed tables” on page 260
Related tasks:
v “Importing data” on page 38
Related reference:
v “Export/Import/Load Utility File Formats” on page 293
v “Import sessions - CLP examples” on page 97
v “IMPORT ” on page 49
Privileges, authorities, and authorization required to use import
Privileges enable users to create or access database resources. Authority levels
provide a method of grouping privileges and higher-level database manager
maintenance and utility operations. Together, these act to control access to the
database manager and its database objects. Users can access only those objects for
which they have the appropriate authorization; that is, the required privilege or
authority.
To use the import utility to create a new table, you must have SYSADM authority,
DBADM authority, or CREATETAB privilege for the database. To replace data in
an existing table or view, you must have SYSADM authority, DBADM authority, or
CONTROL privilege for the table or view, or INSERT, SELECT, UPDATE and
DELETE privileges for the table or view. To append data to an existing table or
view, you must have SELECT and INSERT privileges for the table or view. To use
the REPLACE or REPLACE_CREATE option on a table, the session authorization
ID must have the authority to drop the table.
Notes:
v To import data into a table that has protected columns, the session authorization
ID must have LBAC credentials that allow write access to all protected columns
in the table.
v To import data into a table that has protected rows, the session authorization ID
must have been granted a security label for write access that is part of the
security policy protecting the table.
Related reference:
v “IMPORT ” on page 49
v “db2Import - Import data into a table, hierarchy, nickname or view” on page 73
Importing data
The import utility inserts data from an external file with a supported file format
into a table, hierarchy, view or nickname. The load utility is a faster alternative, but
the load utility does not support loading data at the hierarchy level.
Prerequisites:
Before invoking the import utility, you must be connected to (or be able to
implicitly connect to) the database into which the data will be imported. If implicit
connect is enabled, a connection to the default database is established. Utility
access to DB2 for Linux, UNIX, or Windows database servers from DB2 for Linux,
UNIX, or Windows clients must be a direct connection through the engine and not
through a DB2 Connect gateway or loop back environment. Since the utility will
issue a COMMIT or a ROLLBACK statement, you should complete all transactions
and release all locks by issuing a COMMIT statement or a ROLLBACK operation
before invoking import.
Restrictions:
Procedure:
The import utility can be invoked through the command line processor (CLP), the
Import Table notebook in the Control Center, or from a client application by
calling the db2Import application programming interface (API).
Detailed information about the Import Table notebook is provided through the
Control Center online help facility.
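For example, a basic import invocation from the CLP might look like the following
(the file name, message file, and table name here are only illustrative):
db2 import from myfile.ixf of ixf messages msgs.txt insert into staff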
Related concepts:
v “Import Overview” on page 35
v “Importing large objects (LOBS)” on page 46
Related reference:
v “ROLLBACK statement” in SQL Reference, Volume 2
v “Import sessions - CLP examples” on page 97
v “IMPORT ” on page 49
Importing XML data
When importing data into an XML table column, you can use the XML FROM
option to specify the paths of the input XML data file or files. For example, for an
XML file "/home/user/xmlpath/xmldocs.001.xml" that had previously been
exported, the following command could be used to import the data back into the
table:
IMPORT FROM t1export.del OF DEL XML FROM /home/user/xmlpath INSERT INTO USER.T1
You can use the XMLPARSE option to specify whether whitespace in the imported
XML documents is preserved or stripped. In the following example, all imported
XML documents are validated against XML schema information that was saved
when the XML documents were exported, and these documents are parsed with
whitespace preserved.
IMPORT FROM t1export.del OF DEL XML FROM /home/user/xmlpath XMLPARSE PRESERVE
WHITESPACE XMLVALIDATE USING XDS INSERT INTO USER.T1
Related concepts:
v “Exporting XML data” on page 5
v “Native XML data store overview” in XML Guide
Related reference:
v “LOB and XML file behavior with regard to import and export” on page 7
v “IMPORT ” on page 49
Using import in a client/server environment
When import uses a stored procedure, messages are created in the message file
using the default language installed on the server. The messages are in the
language of the application if the language at the client and the server are the
same.
If you receive an error about writing or opening data on the server, ensure that:
v The directory exists.
v There is sufficient disk space for the files.
v The instance owner has write permission in the directory.
Related concepts:
v “Import Overview” on page 35
Using import with buffered inserts
When buffered inserts are used, import sets a default WARNINGCOUNT value to
1. As a result, the utility will fail if any rows are rejected. If a record is rejected, the
utility will roll back the current transaction. The number of committed records can
be used to determine which records were successfully inserted into the database.
The number of committed records can be non zero only if the COMMITCOUNT
option was specified.
Use the DB2 bind utility to request buffered inserts. The import package,
db2uimpm.bnd, must be rebound against the database using the INSERT BUF
option. For example:
db2 connect to your_database
db2 bind db2uimpm.bnd insert buf
Related concepts:
v “Import Overview” on page 35
Using import with identity columns
The import utility can be used to import data into a table containing an identity
column. If no identity-related file type modifiers are used, the utility works
according to the following rules:
v If the identity column is GENERATED ALWAYS, an identity value is generated
for a table row whenever the corresponding row in the input file is missing a
value for the identity column, or a NULL value is explicitly given. If a
non-NULL value is specified for the identity column, the row is rejected
(SQL3550W).
v If the identity column is GENERATED BY DEFAULT, the import utility makes
use of user-supplied values, if they are provided; if the data is missing or
explicitly NULL, a value is generated.
The import utility does not perform any extra validation of user-supplied identity
values beyond what is normally done for values of the identity column’s data type
(that is, SMALLINT, INT, BIGINT, or DECIMAL). Duplicate values will not be
reported. In addition, the compound=x modifier cannot be used when importing
data into a table with an identity column.
Two file type modifiers are supported by the import utility to simplify its use with
tables that contain an identity column:
v The identitymissing modifier makes importing a table with an identity column
more convenient if the input data file does not contain any values (not even
NULLS) for the identity column. For example, consider a table defined with the
following SQL statement:
create table table1 (c1 char(30),
c2 int generated by default as identity,
c3 real,
c4 char(1))
A user might want to import data from a file (import.del) into TABLE1, and this
data might have been exported from a table that does not have an identity
column. The following is an example of such a file:
Robert, 45.2, J
Mike, 76.9, K
Leo, 23.4, I
One way to import this file would be to explicitly list the columns to be
imported through the IMPORT command as follows:
db2 import from import.del of del replace into table1 (c1, c3, c4)
For a table with many columns, however, this syntax might be cumbersome and
prone to error. An alternate method of importing the file is to use the
identitymissing file type modifier as follows:
db2 import from import.del of del modified by identitymissing
replace into table1
v The identityignore modifier is in some ways the opposite of the
identitymissing modifier: it indicates to the import utility that even though the
input data file contains data for the identity column, the data should be ignored,
and an identity value should be generated for each row. For example, a user
might want to import the following data from a file (import.del) into TABLE1,
as defined above:
Robert, 1, 45.2, J
Mike, 2, 76.9, K
Leo, 3, 23.4, I
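As in the identitymissing case, explicitly listing the columns can be avoided; a
sketch of an equivalent command using the identityignore modifier (with the same
illustrative file and table names) is:
db2 import from import.del of del modified by identityignore
    replace into table1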
Related concepts:
v “Identity columns” in Administration Guide: Planning
Using import with generated columns
If no generated column-related file type modifiers are used, the import utility
works according to the following rules:
v A value will be generated for a generated column whenever the corresponding
row in the input file is missing a value for the column, or a NULL value is
explicitly given. If a non-NULL value is supplied for a generated column, the
row is rejected (SQL3550W).
v If the server generates a NULL value for a generated column that is not nullable,
the row of data to which this field belongs is rejected (SQL0407N). This could
happen, for example, if a non-nullable generated column were defined as the
sum of two table columns that have NULL values supplied to them in the input
file.
Two file type modifiers are supported by the import utility to simplify its use with
tables that contain generated columns:
v The generatedmissing modifier makes importing data into a table with
generated columns more convenient if the input data file does not contain any
values (not even NULLS) for all generated columns present in the table. For
example, consider a table defined with the following SQL statement:
create table table1 (c1 int,
c2 int,
g1 int generated always as (c1 + c2),
g2 int generated always as (2 * c1),
c3 char(1))
A user might want to import data from a file (import.del) into TABLE1, and this
data might have been exported from a table that does not have any generated
columns. The following is an example of such a file:
1, 5, J
2, 6, K
3, 7, I
One way to import this file would be to explicitly list the columns to be
imported through the IMPORT command as follows:
db2 import from import.del of del replace into table1 (c1, c2, c3)
For a table with many columns, however, this syntax might be cumbersome and
prone to error. An alternate method of importing the file is to use the
generatedmissing file type modifier as follows:
db2 import from import.del of del modified by generatedmissing
replace into table1
v The generatedignore modifier is in some ways the opposite of the
generatedmissing modifier: it indicates to the import utility that even though the
input data file contains data for all generated columns, the data should be
ignored, and values should be generated for each row. For example, a user
might want to import the following data from a file (import.del) into TABLE1,
as defined above:
1, 5, 10, 15, J
2, 6, 11, 16, K
3, 7, 12, 17, I
The user-supplied, non-NULL values of 10, 11, and 12 (for g1), and 15, 16, and
17 (for g2) result in the row being rejected (SQL3550W). To avoid this, the user
could issue the following IMPORT command:
db2 import from import.del of del method P(1, 2, 5)
replace into table1 (c1, c2, c3)
Again, this approach might be cumbersome and prone to error if the table has
many columns. The generatedignore modifier simplifies the syntax as follows:
db2 import from import.del of del modified by generatedignore
replace into table1
v When using the INSERT_UPDATE clause, if the generated column is also a
primary key and the generatedignore modifier is specified, the IMPORT
command honours the generatedignore modifier. The IMPORT command does
not substitute the user supplied value for this column in the WHERE clause of
the UPDATE statement.
Related concepts:
v “Generated Columns” in Developing SQL and External Routines
v “Import Overview” on page 35
v “Importing large objects (LOBS)” on page 46
v “Using import in a client/server environment” on page 40
v “Using import to recreate an exported table” on page 45
v “Using import with buffered inserts” on page 41
v “Using import with identity columns” on page 42
Related tasks:
v “Importing data” on page 38
Related reference:
v “Import sessions - CLP examples” on page 97
v “IMPORT ” on page 49
Using import to recreate an exported table
The following attributes of the original table are not retained (this list is not
exhaustive, so use it with care):
v Whether the source was a normal table, a materialized query table, a view, or a
set of columns from any or all of these sources
v Unique constraints and other types of constraints or triggers (not including
Primary Key constraints).
v Table information:
– Materialized query table definition (if applicable)
– Materialized query table options (if applicable)
– Table space options; however, this information can be specified through the
IMPORT command
– Multidimensional clustering (MDC) dimensions
– Partitioned table dimensions
– Table partitioning key
– Not logged initially property
– Check constraints
– Table codepage
– Protected table properties
– Table or value compression options
v Column information:
– Any default value except constant values
– LOB options (if any)
– XML properties
– References clause of the create table statement (if any)
– Referential constraints (if any)
– Check constraints (if any)
– Generated column options (if any)
– Columns dependent on database scope Sequences
v Index information:
– Include columns (if any)
– Index name, if the index is a primary key index
– Descending order of keys, if the index is a primary key index (Ascending is
the default)
– Index column names contain hexadecimal values of 0x2B or 0x2D
– Index name contains more than 128 bytes after codepage conversion
– PCTFREE2 value
– UNIQUE constraints
Related concepts:
v “Export Overview” on page 1
v “Import Overview” on page 35
v “Recreating an exported table” on page 9
Related reference:
v “EXPORT ” on page 11
v “IMPORT ” on page 49
Importing large objects (LOBS)
The column in the main input data file contains either the import data (default), or
the name of a file where the import data is stored.
Notes:
1. When LOB data is stored in the main input data file, no more than 32KB of
data is allowed. Truncation warnings are ignored.
2. All of the LOB data must be stored in the main file, or each LOB is stored in
separate files. The main file cannot have a mixture of LOB data and file names.
LOB values are imported from separate files by using the lobsinfile modifier,
and the LOBS FROM clause.
A LOB Location Specifier (LLS) is a string indicating where LOB data can be found within a file. The
format of the LLS is filename.ext.nnn.mmm/, where filename.ext is the name of
the file that contains the LOB, nnn is the offset of the LOB within the file
(measured in bytes), and mmm is the length of the LOB (in bytes). For example, an
LLS of db2exp.001.123.456/ indicates that the LOB is located in the file
db2exp.001, begins at an offset of 123 bytes into the file, and is 456 bytes long. If
the indicated size in the LLS is 0, the LOB is considered to have a length of 0. If
the length is -1, the LOB is considered to be NULL and the offset and file name are
ignored.
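As an illustration only (the column layout, file names, and table name below are
hypothetical), a DEL input file that uses LLS values might contain rows such as:
"Robert","resume.001.0.1000/"
"Mike","resume.001.1000.350/"
and could be imported with a command along these lines:
db2 import from emp.del of del lobs from /db2lob/ modified by lobsinfile
    insert into emp_resume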
Related reference:
v “IMPORT ” on page 49
v “Data Type-Specific Rules Governing PC/IXF File Import into Databases” on
page 330
v “General Rules Governing PC/IXF File Import into Databases” on page 328
v “Large objects (LOBs)” in SQL Reference, Volume 1
Related concepts:
v “User-defined distinct types” in Developing SQL and External Routines
By default, the import utility is bound to the database with isolation level RS (read
stability).
The import utility acquires an intent exclusive (IX) lock on the target table. Holding
this lock on the table has the following implications:
v If there are other applications holding an incompatible table lock, the import
utility will not start inserting data until all of these applications commit or roll
back their changes.
v While import is running, any other application requesting an incompatible table
lock will wait until the import commits or rolls back the current transaction.
Note that import’s table lock does not persist across a transaction boundary. As a
result, online import has to request and potentially wait for a table lock after
every commit.
v If there are other applications holding an incompatible row lock, the import
utility will stop inserting data until all of these applications commit or roll back
their changes.
v While import is running, any other application requesting an incompatible row
lock will wait until the import operation commits or rolls back the current
transaction.
To preserve the online properties, and to reduce the chance of a deadlock, online
import will periodically commit the current transaction and release all row locks
before escalating to an exclusive (X) table lock. Consequently, during an online
import, commits might be performed even if the commitcount option was not
used. A commit frequency can either be explicitly specified, or the AUTOMATIC
commit mode can be used. No commits will be performed if a commitcount value
of zero is explicitly specified. Note that a deadlock will occur if the concurrent
application holding a conflicting row lock attempts to escalate to a table lock.
Import runs in the online mode if ’ALLOW WRITE ACCESS’ is specified. The
online mode is not compatible with the following:
v REPLACE, CREATE and REPLACE_CREATE import modes
v Buffered inserts
v Imports into a target view
v Imports into a hierarchy table
v Imports into a target table using table lock size
If a large number of rows is being imported into a table, the existing lock might
escalate to an exclusive lock. If another application working on the same table is
holding some row locks, a deadlock will occur if the lock escalates to an exclusive
lock. To avoid this, the import utility requests an exclusive lock on the table at the
beginning of its operation. This is the default import behavior.
Holding a lock on the table has two implications. First, if there are other
applications holding a table lock, or row locks on the import target table, the
import utility will wait until all of those applications commit or roll back their
changes. Second, while import is running, any other application requesting locks
will wait until the import operation has completed. Import runs in the offline
mode if ’ALLOW WRITE ACCESS’ is not specified.
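For example (the file and table names are only illustrative), an online import that
lets the utility choose its own commit points could be invoked as follows:
db2 import from myfile.del of del allow write access commitcount automatic
    insert into staff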
Related concepts:
v “Table locking, table states and table space states” on page 203
IMPORT
Inserts data from an external file with a supported file format into a table,
hierarchy, view or nickname. LOAD is a faster alternative, but the load utility does
not support loading data at the hierarchy level.
Authorization:
v IMPORT using the INSERT option requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT and SELECT privilege on each participating table or view
v IMPORT to an existing table using the INSERT_UPDATE option, requires one of
the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT, SELECT, UPDATE and DELETE privilege on each participating table
or view
v IMPORT to an existing table using the REPLACE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on the table or view
– INSERT, SELECT, and DELETE privilege on the table or view
v IMPORT to a new table using the CREATE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
– CREATETAB authority on the database and USE privilege on the table space,
as well as one of:
- IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
- CREATEIN privilege on the schema, if the schema name of the table refers to
an existing schema
v IMPORT to a hierarchy that does not exist using the CREATE, or the
REPLACE_CREATE option, requires one of the following:
– sysadm
– dbadm
– CREATETAB authority on the database and USE privilege on the table space
and one of:
- IMPLICIT_SCHEMA authority on the database, if the schema name of the
table does not exist
- CREATEIN privilege on the schema, if the schema of the table exists
- CONTROL privilege on every sub-table in the hierarchy, if the
REPLACE_CREATE option on the entire hierarchy is used
v IMPORT to an existing hierarchy using the REPLACE option requires one of the
following:
– sysadm
– dbadm
– CONTROL privilege on every sub-table in the hierarchy
v To import data into a table that has protected columns, the session authorization
ID must have LBAC credentials that allow write access to all protected columns
in the table. Otherwise the import fails and an error (SQLSTATE 42512) is
returned.
v To import data into a table that has protected rows, the session authorization ID
must hold LBAC credentials that meet these criteria:
– It is part of the security policy protecting the table
– It was granted to the session authorization ID for write access
The label on the row to insert, the user’s LBAC credentials, the security policy
definition, and the LBAC rules determine the label on the row.
v If the REPLACE or REPLACE_CREATE option is specified, the session
authorization ID must have the authority to drop the table.
Required connection:
Database. If implicit connect is enabled, a connection to the default database is
established.
Command syntax:
[The IMPORT command syntax (railroad) diagram is not reproduced here; the
clauses it covers are described under Command parameters below.]
Command parameters:
ALL TABLES
An implicit keyword for hierarchy only. When importing a hierarchy, the
default is to import all tables specified in the traversal order.
ALLOW NO ACCESS
Runs import in the offline mode. An exclusive (X) lock on the target table
is acquired before any rows are inserted. This prevents concurrent
applications from accessing table data. This is the default import behavior.
CREATE
Creates the table definition and row contents in the code page of the
database. If the data was exported from a DB2 table, sub-table, or
hierarchy, indexes are created. If this option operates on a hierarchy, and
data was exported from DB2, a type hierarchy will also be created. This
option can only be used with IXF files.
Note: If the data was exported from an MVS host database, and it contains
LONGVAR fields whose lengths, calculated on the page size, are less
than 254, CREATE might fail because the rows are too long. See
Using import to recreate an exported table for a list of restrictions.
In this case, the table should be created manually, and IMPORT with
INSERT should be invoked, or, alternatively, the LOAD command
should be used.
DEFAULT schema-sqlid
This option can only be used when the USING XDS parameter is specified.
The schema specified through the DEFAULT clause identifies a schema to
use for validation when the XML Data Specifier (XDS) of an imported
XML document does not contain an SCH attribute identifying an XML
Schema.
The DEFAULT clause takes precedence over the IGNORE and MAP
clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP
specifications will be ignored.
FROM filename
Specifies the name of the file that contains the data to be imported.
HIERARCHY
Specifies that hierarchical data is to be imported.
IGNORE schema-sqlid
This option can only be used when the USING XDS parameter is specified.
The IGNORE clause specifies a list of one or more schemas to ignore if
they are identified by an SCH attribute. If an SCH attribute exists in the
XML Data Specifier for an imported XML document, and the schema
identified by the SCH attribute is included in the list of schemas to
IGNORE, then no schema validation will occur for the imported XML
document.
If a schema is specified in the IGNORE clause, it cannot also be present in
the left side of a schema pair in the MAP clause.
The IGNORE clause applies only to the XDS. A schema that is mapped by
the MAP clause will not be subsequently ignored if specified by the
IGNORE clause.
IN tablespace-name
Identifies the table space in which the table will be created. The table space
must exist, and must be a REGULAR table space. If no other table space is
specified, all table parts are stored in this table space. If this clause is not
specified, the table is created in a table space created by the authorization
ID. If none is found, the table is placed into the default table space
USERSPACE1. If USERSPACE1 has been dropped, table creation fails.
INDEX IN tablespace-name
Identifies the table space in which any indexes on the table will be created.
This option is allowed only when the primary table space specified in the
IN clause is a DMS table space. The specified table space must exist, and
must be a REGULAR or LARGE DMS table space.
Note: Specifying which table space will contain an index can only be done
when the table is created.
insert-column
Specifies the name of a column in the table or the view into which data is
to be inserted.
INSERT
Adds the imported data to the table without changing the existing table
data.
INSERT_UPDATE
Adds rows of imported data to the target table, or updates existing rows
(of the target table) with matching primary keys.
INTO table-name
Specifies the database table into which the data is to be imported. This
table cannot be a system table, a declared temporary table or a summary
table.
One can use an alias for INSERT, INSERT_UPDATE, or REPLACE, except
in the case of a down-level server, when the fully qualified or the
unqualified table name should be used. A qualified table name is in the
form: schema.tablename. The schema is the user name under which the table
was created.
LOBS FROM lob-path
The names of the LOB data files are stored in the main data file (ASC,
DEL, or IXF), in the column that will be loaded into the LOB column. The
maximum number of paths that can be specified is 999. This will implicitly
activate the LOBSINFILE behaviour.
This parameter is not valid when you import to a nickname.
LONG IN tablespace-name
Identifies the table space in which the values of any long columns (LONG
VARCHAR, LONG VARGRAPHIC, LOB data types, or distinct types with
any of these as source types) will be stored. This option is allowed only if
the primary table space specified in the IN clause is a DMS table space.
The table space must exist, and must be a LARGE DMS table space.
MAP schema-sqlid
This option can only be used when the USING XDS parameter is specified.
Use the MAP clause to specify alternate schemas to use in place of those
specified by the SCH attribute of an XML Data Specifier (XDS) for each
imported XML document. The MAP clause specifies a list of one or more
schema pairs, where each pair represents a mapping of one schema to
another. The first schema in the pair represents a schema that is referred to
by an SCH attribute in an XDS. The second schema in the pair represents
the schema that should be used to perform schema validation.
If a schema is present in the left side of a schema pair in the MAP clause,
it cannot also be specified in the IGNORE clause.
Once a schema pair mapping is applied, the result is final. The mapping
operation is non-transitive, and therefore the schema chosen will not be
subsequently applied to another schema pair mapping.
A schema cannot be mapped more than once, meaning that it cannot
appear on the left side of more than one pair.
METHOD
L Specifies the start and end column numbers from which to import
data. A column number is a byte offset from the beginning of a
row of data. It is numbered starting from 1.
Note: This method can only be used with ASC files, and is the
only valid option for that file type.
N Specifies the names of the columns in the data file to be imported.
Note: This method can only be used with IXF files.
P Specifies the field numbers of the input data fields to be imported.
Note: This method can only be used with IXF or DEL files, and is
the only valid method for the DEL file type.
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the import
utility.
NOTIMEOUT
Specifies that the import utility will not time out while waiting for locks.
This option supersedes the locktimeout database configuration parameter.
Other applications are not affected.
NULL INDICATORS null-indicator-list
This option can only be used when the METHOD L parameter is specified.
That is, the input file is an ASC file. The null indicator list is a
comma-separated list of positive integers specifying the column number of
each null indicator field. The column number is the byte offset of the null
indicator field from the beginning of a row of data. There must be one
entry in the null indicator list for each data field defined in the METHOD
L parameter. A column number of zero indicates that the corresponding
data field always contains data.
A value of Y in the NULL indicator column specifies that the column data
is NULL. Any character other than Y in the NULL indicator column
specifies that the column data is not NULL, and that column data specified
by the METHOD L option will be imported.
The NULL indicator character can be changed using the MODIFIED BY
option, with the nullindchar file type modifier.
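For example (the column positions, null indicator position, file name, and table
name below are purely illustrative), an ASC file could be imported with two
fixed-position fields and a null indicator for the second field as follows:
db2 import from myfile.asc of asc method l(1 10, 12 15)
    null indicators (0, 20) insert into table1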
OF filetype
Specifies the format of the data in the input file:
v ASC (non-delimited ASCII format)
v DEL (delimited ASCII format), which is used by a variety of database
manager and file manager programs
v WSF (work sheet format), which is used by programs such as:
– Lotus 1-2-3
– Lotus Symphony
v IXF (integrated exchange format, PC version), which means it was
exported from the same or another DB2 table. An IXF file also contains
the table definition and definitions of any existing indexes, except when
columns are specified in the SELECT statement.
The WSF file type is not supported when you import to a nickname.
REPLACE
Deletes all existing data from the table by truncating the data object, and
inserts the imported data. The table definition and the index definitions are
not changed. This option can only be used if the table exists. If this option
is used when moving data between hierarchies, only the data for an entire
hierarchy, not individual subtables, can be replaced.
This parameter is not valid when you import to a nickname.
This option does not honour the CREATE TABLE statement’s NOT
LOGGED INITIALLY (NLI) clause or the ALTER TABLE statement’s
ACTIVE NOT LOGGED INITIALLY clause.
traversal-order-list
For typed tables with the INSERT, INSERT_UPDATE, or the REPLACE
option, a list of sub-table names is used to indicate the traversal order of
the importing sub-tables in the hierarchy.
UNDER sub-table-name
Specifies a parent table for creating one or more sub-tables.
WARNINGCOUNT n
Stops the import operation after n warnings. Set this parameter if no
warnings are expected, but verification that the correct file and table are
being used is desired. If the import file or the target table is specified
incorrectly, the import utility will generate a warning for each row that it
attempts to import, which will cause the import to fail. If n is zero, or this
option is not specified, the import operation will continue regardless of the
number of warnings issued.
XML FROM xml-path
Specifies one or more paths that contain the XML files.
XMLPARSE
Specifies how XML documents are parsed. If this option is not specified,
the parsing behaviour for XML documents will be determined by the value
of the CURRENT XMLPARSE OPTION special register.
STRIP WHITESPACE
Specifies to remove whitespace when the XML document is parsed.
PRESERVE WHITESPACE
Specifies not to remove whitespace when the XML document is
parsed.
XMLVALIDATE
Specifies that XML documents are validated against a schema, when
applicable.
USING XDS
XML documents are validated against the XML schema identified
by the XML Data Specifier (XDS) in the main data file. By default,
if the XMLVALIDATE option is invoked with the USING XDS
clause, the schema used to perform validation will be determined
by the SCH attribute of the XDS. If an SCH attribute is not present
in the XDS, no schema validation will occur unless a default
schema is specified by the DEFAULT clause.
The DEFAULT, IGNORE, and MAP clauses can be used to modify
the schema determination behavior. These three optional clauses
apply directly to the specifications of the XDS, and not to each
other. For example, if a schema is selected because it is specified by
the DEFAULT clause, it will not be ignored if also specified by the
IGNORE clause. Similarly, if a schema is selected because it is
specified as the first part of a pair in the MAP clause, it will not be
re-mapped if also specified in the second part of another MAP
clause pair.
USING SCHEMA schema-sqlid
XML documents are validated against the XML schema with the
specified SQL identifier. In this case, the SCH attribute of the XML
Data Specifier (XDS) will be ignored for all XML columns.
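A sketch of how these validation clauses might be combined (the schema
identifiers, path, file, and table names are hypothetical):
db2 import from t1export.del of del xml from /home/user/xmlpath
    xmlvalidate using xds default S1.SCHEMA_A ignore (S1.SCHEMA_B)
    insert into user.t1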
Usage notes:
Be sure to complete all table operations and release all locks before starting an
import operation. This can be done by issuing a COMMIT after closing all cursors
opened WITH HOLD, or by issuing a ROLLBACK.
The import utility adds rows to the target table using the SQL INSERT statement.
The utility issues one INSERT statement for each row of data in the input file. If an
INSERT statement fails, one of two actions result:
v If it is likely that subsequent INSERT statements can be successful, a warning
message is written to the message file, and processing continues.
v If it is likely that subsequent INSERT statements will fail, and there is potential
for database damage, an error message is written to the message file, and
processing halts.
The utility performs an automatic COMMIT after the old rows are deleted during a
REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the
application interrupts the database manager after the table object is truncated, all
of the old data is lost. Ensure that the old data is no longer needed before using
these options.
By default, automatic COMMITs are not performed for the INSERT or the
INSERT_UPDATE option. They are, however, performed if the COMMITCOUNT
parameter is not zero. If automatic COMMITs are not performed, a full log results
in a ROLLBACK.
Offline import does not perform automatic COMMITs if any of the following
conditions is true:
v the target is a view, not a table
v compound inserts are used
v buffered inserts are used
By default, online import performs automatic COMMITs to free both the active log
space and the lock list. Automatic COMMITs are not performed only if a
COMMITCOUNT value of zero is specified.
Whenever the import utility performs a COMMIT, two messages are written to the
message file: one indicates the number of records to be committed, and the other is
written after a successful COMMIT. When restarting the import operation after a
failure, specify the number of records to skip, as determined from the last
successful COMMIT.
The import utility accepts input data with minor incompatibility problems (for
example, character data can be imported using padding or truncation, and numeric
data can be imported with a different numeric data type), but data with major
incompatibility problems is not accepted.
If an error occurs while recreating the foreign keys, modify the data to maintain
referential integrity.
Referential constraints and foreign key definitions are not preserved when creating
tables from PC/IXF files. (Primary key definitions are preserved if the data was
previously exported using SELECT *.)
Importing to a remote database requires enough disk space on the server for a
copy of the input data file, the output message file, and potential growth in the
size of the database.
If an import operation is run against a remote database, and the output message
file is very long (more than 60KB), the message file returned to the user on the
client might be missing messages from the middle of the import operation. The
first 30KB of message information and the last 30KB of message information are
always retained.
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on
a hard drive rather than on diskettes.
The database table or hierarchy must exist before data in the ASC, DEL, or WSF
file formats can be imported; however, if the table does not already exist, IMPORT
CREATE or IMPORT REPLACE_CREATE creates the table when it imports data
from a PC/IXF file. For typed tables, IMPORT CREATE can create the type
hierarchy and the table hierarchy as well.
PC/IXF import should be used to move data (including hierarchical data) between
databases. If character data containing row separators is exported to a delimited
ASCII (DEL) file and processed by a text transfer program, fields containing the
row separators will shrink or expand. The file copying step is not necessary if the
source and the target databases are both accessible from the same client.
The data in ASC and DEL files is assumed to be in the code page of the client
application performing the import. PC/IXF files, which allow for different code
pages, are recommended when importing data in different code pages. If the
PC/IXF file and the import utility are in the same code page, processing occurs as
for a regular application. If the two differ, and the FORCEIN option is specified,
the import utility assumes that data in the PC/IXF file has the same code page as
the application performing the import. This occurs even if there is a conversion
table for the two code pages. If the two differ, the FORCEIN option is not
specified, and there is a conversion table, all data in the PC/IXF file will be
converted from the file code page to the application code page. If the two differ,
the FORCEIN option is not specified, and there is no conversion table, the import
operation will fail. This applies only to PC/IXF files on DB2 clients on the AIX
operating system.
For table objects on an 8 KB page that are close to the limit of 1012 columns,
import of PC/IXF data files might cause DB2 to return an error, because the
maximum size of an SQL statement was exceeded. This situation can occur only if
the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not
apply to import of DEL or ASC files. If PC/IXF files are being used to create a new
table, an alternative is to use db2look to dump the DDL statement that created the
table, and then to issue that statement through the CLP.
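For instance (the database, table, and file names are hypothetical), the DDL could
be extracted and replayed as follows:
db2look -d sample -e -t mytable -o mytable.ddl
db2 -tvf mytable.ddl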
DB2 Connect can be used to import data to DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import (INSERT
option) is supported. The RESTARTCOUNT parameter, but not the
COMMITCOUNT parameter, is also supported.
When using the CREATE option with typed tables, create every sub-table defined
in the PC/IXF file; sub-table definitions cannot be altered. When using options
other than CREATE with typed tables, the traversal order list enables one to
specify the traverse order; therefore, the traversal order list must match the one
used during the export operation. For the PC/IXF file format, one need only
specify the target sub-table name, and use the traverse order stored in the file.
The import utility can be used to recover a table previously exported to a PC/IXF
file. The table returns to the state it was in when exported.
Security labels in their internal format might contain newline characters. If you
import the file using the DEL file format, those newline characters can be mistaken
for delimiters. If you have this problem, use the older default priority for delimiters
by specifying the delprioritychar file type modifier in the IMPORT command.
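For example (the file and table names are illustrative), the modifier is specified as
follows:
db2 import from myfile.del of del modified by delprioritychar
    insert into protected_table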
Federated considerations:
Related concepts:
v “Import Overview” on page 35
v “Privileges, authorities, and authorization required to use import” on page 38
Related tasks:
v “Importing data” on page 38
Related reference:
v “XMLPARSE scalar function” in SQL Reference, Volume 1
v “ADMIN_CMD procedure – Run administrative commands” in Administrative
SQL Routines and Views
v “db2look - DB2 statistics and DDL extraction tool command” in Command
Reference
v “IMPORT command using the ADMIN_CMD procedure” on page 61
v “Import sessions - CLP examples” on page 97
v “LOB and XML file behavior with regard to import and export” on page 7
IMPORT command using the ADMIN_CMD procedure
Authorization:
v IMPORT using the INSERT option requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT and SELECT privilege on each participating table or view
v IMPORT to an existing table using the INSERT_UPDATE option, requires one of
the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view, or nickname
– INSERT, SELECT, UPDATE and DELETE privilege on each participating table
or view
v IMPORT to an existing table using the REPLACE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on the table or view
– INSERT, SELECT, and DELETE privilege on the table or view
v IMPORT to a new table using the CREATE or REPLACE_CREATE option,
requires one of the following:
– sysadm
– dbadm
– CREATETAB authority on the database and USE privilege on the table space,
as well as one of:
- IMPLICIT_SCHEMA authority on the database, if the implicit or explicit
schema name of the table does not exist
- CREATEIN privilege on the schema, if the schema name of the table refers to
an existing schema
Required connection:
Database. If implicit connect is enabled, a connection to the default database is
established.
Command syntax:
[The IMPORT command syntax (railroad) diagram is not reproduced here; the
clauses it covers are described under Command parameters below.]
Command parameters:
ALL TABLES
An implicit keyword for hierarchy only. When importing a hierarchy, the
default is to import all tables specified in the traversal order.
ALLOW NO ACCESS
Runs import in the offline mode. An exclusive (X) lock on the target table
is acquired before any rows are inserted. This prevents concurrent
applications from accessing table data. This is the default import behavior.
ALLOW WRITE ACCESS
Runs import in the online mode. An intent exclusive (IX) lock on the target
table is acquired when the first row is inserted. This allows concurrent
readers and writers to access table data. Online mode is not compatible
with the REPLACE, CREATE, or REPLACE_CREATE import options.
Online mode is not supported in conjunction with buffered inserts. The
import operation will periodically commit inserted data to prevent lock
escalation to a table lock and to avoid running out of active log space.
These commits will be performed even if the COMMITCOUNT option was
not used. During each commit, import will lose its IX table lock, and will
attempt to reacquire it after the commit. This parameter is required when
you import to a nickname and COMMITCOUNT must be specified with a
valid number (AUTOMATIC is not considered a valid option).
AS ROOT TABLE
Creates one or more sub-tables as a stand-alone table hierarchy.
COMMITCOUNT n/AUTOMATIC
Performs a COMMIT after every n records are imported. When a number n
is specified, import performs a COMMIT after every n records are
imported. When compound inserts are used, a user-specified commit
frequency of n is rounded up to the first integer multiple of the compound
count value. When AUTOMATIC is specified, import internally determines
when a commit needs to be performed. The utility will commit for either
one of two reasons:
v to avoid running out of active log space
v to avoid lock escalation from row level to table level
If the ALLOW WRITE ACCESS option is specified, and the
COMMITCOUNT option is not specified, the import utility will perform
commits as if COMMITCOUNT AUTOMATIC had been specified.
If the IMPORT command encounters an SQL0964C (Transaction Log Full)
while inserting or updating a record and the COMMITCOUNT n is specified,
IMPORT will attempt to resolve the issue by performing an unconditional
commit and then reattempt to insert or update the record. If this does not
help resolve the log full condition (which would be the case when the log
full is attributed to other activity on the database), then the IMPORT
command will fail as expected, however the number of rows committed
may not be a multiple of the COMMITCOUNT n value. The RESTARTCOUNT or
SKIPCOUNT option can be used to avoid processing those rows already
committed.
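For instance (the counts, file name, and table name are only illustrative), if a
previous run had committed 55000 rows before failing, the operation could be
resumed as follows:
db2 import from myfile.del of del commitcount 1000 restartcount 55000
    insert into staff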
CREATE
Creates the table definition and row contents in the code page of the
database. If the data was exported from a DB2 table, sub-table, or
hierarchy, indexes are created. If this option operates on a hierarchy, and
data was exported from DB2, a type hierarchy will also be created. This
option can only be used with IXF files.
This parameter is not valid when you import to a nickname.
Note: If the data was exported from an MVS host database, and it contains
LONGVAR fields whose lengths, calculated on the page size, are less
than 254, CREATE might fail because the rows are too long. See
Using import to recreate an exported table for a list of restrictions.
In this case, the table should be created manually, and IMPORT with
INSERT should be invoked, or, alternatively, the LOAD command
should be used.
DEFAULT schema-sqlid
This option can only be used when the USING XDS parameter is specified.
The schema specified through the DEFAULT clause identifies a schema to
use for validation when the XML Data Specifier (XDS) of an imported
XML document does not contain an SCH attribute identifying an XML
Schema.
The DEFAULT clause takes precedence over the IGNORE and MAP
clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP
specifications will be ignored.
FROM filename
Specifies the name of the file that contains the data to be imported.
HIERARCHY
Specifies that hierarchical data is to be imported.
IGNORE schema-sqlid
This option can only be used when the USING XDS parameter is specified.
The IGNORE clause specifies a list of one or more schemas to ignore if
they are identified by an SCH attribute. If an SCH attribute exists in the
XML Data Specifier for an imported XML document, and the schema
identified by the SCH attribute is included in the list of schemas to
IGNORE, then no schema validation will occur for the imported XML
document.
If a schema is specified in the IGNORE clause, it cannot also be present in
the left side of a schema pair in the MAP clause.
The IGNORE clause applies only to the XDS. A schema that is mapped by
the MAP clause will not be subsequently ignored if specified by the
IGNORE clause.
IN tablespace-name
Identifies the table space in which the table will be created. The table space
must exist, and must be a REGULAR table space. If no other table space is
specified, all table parts are stored in this table space. If this clause is not
specified, the table is created in a table space created by the authorization
ID. If none is found, the table is placed into the default table space
USERSPACE1. If USERSPACE1 has been dropped, table creation fails.
INDEX IN tablespace-name
Identifies the table space in which any indexes on the table will be created.
This option is allowed only when the primary table space specified in the
IN clause is a DMS table space. The specified table space must exist, and
must be a REGULAR or LARGE DMS table space.
Note: Specifying which table space will contain an index can only be done
when the table is created.
insert-column
Specifies the name of a column in the table or the view into which data is
to be inserted.
INSERT
Adds the imported data to the table without changing the existing table
data.
INSERT_UPDATE
Adds rows of imported data to the target table, or updates existing rows
(of the target table) with matching primary keys.
INTO table-name
Specifies the database table into which the data is to be imported. This
table cannot be a system table, a declared temporary table or a summary
table.
One can use an alias for INSERT, INSERT_UPDATE, or REPLACE, except
in the case of a down-level server, when the fully qualified or the
unqualified table name should be used. A qualified table name is in the
form: schema.tablename. The schema is the user name under which the table
was created.
LOBS FROM lob-path
The names of the LOB data files are stored in the main data file (ASC,
DEL, or IXF), in the column that will be loaded into the LOB column. The
maximum number of paths that can be specified is 999. This will implicitly
activate the LOBSINFILE behaviour.
This parameter is not valid when you import to a nickname.
LONG IN tablespace-name
Identifies the table space in which the values of any long columns (LONG
VARCHAR, LONG VARGRAPHIC, LOB data types, or distinct types with
any of these as source types) will be stored. This option is allowed only if
the primary table space specified in the IN clause is a DMS table space.
The table space must exist, and must be a LARGE DMS table space.
MAP schema-sqlid
This option can only be used when the USING XDS parameter is specified.
Use the MAP clause to specify alternate schemas to use in place of those
specified by the SCH attribute of an XML Data Specifier (XDS) for each
imported XML document. The MAP clause specifies a list of one or more
schema pairs, where each pair represents a mapping of one schema to
another. The first schema in the pair represents a schema that is referred to
by an SCH attribute in an XDS. The second schema in the pair represents
the schema that should be used to perform schema validation.
If a schema is present in the left side of a schema pair in the MAP clause,
it cannot also be specified in the IGNORE clause.
Once a schema pair mapping is applied, the result is final. The mapping
operation is non-transitive, and therefore the schema chosen will not be
subsequently applied to another schema pair mapping.
A schema cannot be mapped more than once, meaning that it cannot
appear on the left side of more than one pair.
METHOD
L Specifies the start and end column numbers from which to import
data. A column number is a byte offset from the beginning of a
row of data. It is numbered starting from 1.
Note: This method can only be used with ASC files, and is the
only valid option for that file type.
N Specifies the names of the columns in the data file to be imported.
Note: This method can only be used with IXF files.
P Specifies the field numbers of the input data fields to be imported.
Note: This method can only be used with IXF or DEL files, and is
the only valid method for the DEL file type.
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the import
utility.
NOTIMEOUT
Specifies that the import utility will not time out while waiting for locks.
This option supersedes the locktimeout database configuration parameter.
Other applications are not affected.
NULL INDICATORS null-indicator-list
This option can only be used when the METHOD L parameter is specified.
That is, the input file is an ASC file. The null indicator list is a
comma-separated list of positive integers specifying the column number of
each null indicator field. The column number is the byte offset of the null
indicator field from the beginning of a row of data. There must be one
entry in the null indicator list for each data field defined in the METHOD
L parameter. A column number of zero indicates that the corresponding
data field always contains data.
A value of Y in the NULL indicator column specifies that the column data
is NULL. Any character other than Y in the NULL indicator column
specifies that the column data is not NULL, and that column data specified
by the METHOD L option will be imported.
The NULL indicator character can be changed using the MODIFIED BY
option, with the nullindchar file type modifier.
OF filetype
Specifies the format of the data in the input file:
v ASC (non-delimited ASCII format)
v DEL (delimited ASCII format), which is used by a variety of database
manager and file manager programs
v WSF (work sheet format), which is used by programs such as:
– Lotus 1-2-3
– Lotus Symphony
v IXF (integrated exchange format, PC version), which means it was
exported from the same or another DB2 table. An IXF file also contains
the table definition and definitions of any existing indexes, except when
columns are specified in the SELECT statement.
The WSF file type is not supported when you import to a nickname.
REPLACE
Deletes all existing data from the table by truncating the data object, and
inserts the imported data. The table definition and the index definitions are
not changed. This option can only be used if the table exists. If this option
is used when moving data between hierarchies, only the data for an entire
hierarchy, not individual subtables, can be replaced.
This parameter is not valid when you import to a nickname.
This option does not honour the CREATE TABLE statement’s NOT
LOGGED INITIALLY (NLI) clause or the ALTER TABLE statement’s
ACTIVE NOT LOGGED INITIALLY clause.
If an import with the REPLACE option is performed within the same
transaction as a CREATE TABLE or ALTER TABLE statement where the
NLI clause is invoked, the import will not honor the NLI clause. All inserts
will be logged.
Workaround 1
Delete the contents of the table using the DELETE statement, then
invoke the import with the INSERT option.
Workaround 2
Drop the table and recreate it, then invoke the import with the
INSERT option.
This limitation applies to DB2 UDB Version 7 and DB2 UDB Version 8.
REPLACE_CREATE
If the table exists, deletes all existing data from the table by truncating the
data object, and inserts the imported data without changing the table
definition or the index definitions.
If the table does not exist, creates the table and index definitions, as well as
the row contents, in the code page of the database. See Using import to
recreate an exported table for a list of restrictions.
This option can only be used with IXF files. If this option is used when
moving data between hierarchies, only the data for an entire hierarchy, not
individual subtables, can be replaced.
This parameter is not valid when you import to a nickname.
RESTARTCOUNT n
Specifies that an import operation is to be started at record n + 1. The first
n records are skipped. This option is functionally equivalent to
SKIPCOUNT. RESTARTCOUNT and SKIPCOUNT are mutually exclusive.
ROWCOUNT n
Specifies the number n of physical records in the file to be imported
(inserted or updated). Allows a user to import only n rows from a file,
starting from the record determined by the SKIPCOUNT or
RESTARTCOUNT options. If the SKIPCOUNT or RESTARTCOUNT
options are not specified, the first n rows are imported. If SKIPCOUNT m
or RESTARTCOUNT m is specified, rows m+1 to m+n are imported. When
compound inserts are used, user specified rowcount n is rounded up to the
first integer multiple of the compound count value.
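For example (the counts, file name, and table name are only illustrative), rows 1001
through 1500 of a file could be imported as follows:
db2 import from myfile.del of del skipcount 1000 rowcount 500
    insert into staff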
SKIPCOUNT n
Specifies that an import operation is to be started at record n + 1. The first
n records are skipped. This option is functionally equivalent to
RESTARTCOUNT. SKIPCOUNT and RESTARTCOUNT are mutually
exclusive.
STARTING sub-table-name
A keyword for hierarchy only, requesting the default order, starting from
sub-table-name. For PC/IXF files, the default order is the order stored in the
input file. The default order is the only valid order for the PC/IXF file
format.
sub-table-list
For typed tables with the INSERT or the INSERT_UPDATE option, a list of
sub-table names is used to indicate the sub-tables into which data is to be
imported.
traversal-order-list
For typed tables with the INSERT, INSERT_UPDATE, or the REPLACE
option, a list of sub-table names is used to indicate the traversal order of
the importing sub-tables in the hierarchy.
UNDER sub-table-name
Specifies a parent table for creating one or more sub-tables.
WARNINGCOUNT n
Stops the import operation after n warnings. Set this parameter if no
warnings are expected, but verification that the correct file and table are
being used is desired. If the import file or the target table is specified
incorrectly, the import utility will generate a warning for each row that it
attempts to import, which will cause the import to fail. If n is zero, or this
option is not specified, the import operation will continue regardless of the
number of warnings issued.
XML FROM xml-path
Specifies one or more paths that contain the XML files.
XMLPARSE
Specifies how XML documents are parsed. If this option is not specified,
the parsing behaviour for XML documents will be determined by the value
of the CURRENT XMLPARSE OPTION special register.
STRIP WHITESPACE
Specifies to remove whitespace when the XML document is parsed.
PRESERVE WHITESPACE
Specifies not to remove whitespace when the XML document is
parsed.
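For example, the following hedged sketch (the path, file, and table names are
hypothetical) reads the referenced XML documents from /home/user/xmlfiles and
strips whitespace during parsing:
db2 import from mydata.del of del xml from /home/user/xmlfiles
   xmlparse strip whitespace insert into mytable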
XMLVALIDATE
Specifies that XML documents are validated against a schema, when
applicable.
USING XDS
XML documents are validated against the XML schema identified
by the XML Data Specifier (XDS) in the main data file. By default,
if the XMLVALIDATE option is invoked with the USING XDS
clause, the schema used to perform the validation is determined by
the SCH attribute of the XDS.
Usage notes:
Be sure to complete all table operations and release all locks before starting an
import operation. This can be done by issuing a COMMIT after closing all cursors
opened WITH HOLD, or by issuing a ROLLBACK.
The import utility adds rows to the target table using the SQL INSERT statement.
The utility issues one INSERT statement for each row of data in the input file. If an
INSERT statement fails, one of two actions result:
v If it is likely that subsequent INSERT statements can be successful, a warning
message is written to the message file, and processing continues.
v If it is likely that subsequent INSERT statements will fail, and there is potential
for database damage, an error message is written to the message file, and
processing halts.
The utility performs an automatic COMMIT after the old rows are deleted during a
REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the
application interrupts the database manager after the table object is truncated, all
of the old data is lost. Ensure that the old data is no longer needed before using
these options.
By default, automatic COMMITs are not performed for the INSERT or the
INSERT_UPDATE option. They are, however, performed if the COMMITCOUNT
parameter is not zero. If automatic COMMITs are not performed, a full log results
in a ROLLBACK.
Offline import does not perform automatic COMMITs if any of the following
conditions is true:
v the target is a view, not a table
v compound inserts are used
v buffered inserts are used
By default, online import performs automatic COMMITs to free both the active log
space and the lock list. Automatic COMMITs are not performed only if a
COMMITCOUNT value of zero is specified.
Whenever the import utility performs a COMMIT, two messages are written to the
message file: one indicates the number of records to be committed, and the other is
written after a successful COMMIT. When restarting the import operation after a
failure, specify the number of records to skip, as determined from the last
successful COMMIT.
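For example, if a previous run over a hypothetical file largefile.del reported its
last successful COMMIT at 30000 records, a hedged restart sketch is:
db2 import from largefile.del of del commitcount 1000 restartcount 30000
   messages msgs.txt insert into mytable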
The import utility accepts input data with minor incompatibility problems (for
example, character data can be imported using padding or truncation, and numeric
data can be imported with a different numeric data type), but data with major
incompatibility problems is not accepted.
If an error occurs while recreating the foreign keys, modify the data to maintain
referential integrity.
Referential constraints and foreign key definitions are not preserved when creating
tables from PC/IXF files. (Primary key definitions are preserved if the data was
previously exported using SELECT *.)
Importing to a remote database requires enough disk space on the server for a
copy of the input data file, the output message file, and potential growth in the
size of the database.
If an import operation is run against a remote database, and the output message
file is very long (more than 60KB), the message file returned to the user on the
client might be missing messages from the middle of the import operation. The
first 30KB of message information and the last 30KB of message information are
always retained.
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on
a hard drive rather than on diskettes.
The database table or hierarchy must exist before data in the ASC, DEL, or WSF
file formats can be imported; however, if the table does not already exist, IMPORT
CREATE or IMPORT REPLACE_CREATE creates the table when it imports data
from a PC/IXF file. For typed tables, IMPORT CREATE can create the type
hierarchy and the table hierarchy as well.
PC/IXF import should be used to move data (including hierarchical data) between
databases. If character data containing row separators is exported to a delimited
ASCII (DEL) file and processed by a text transfer program, fields containing the
row separators will shrink or expand. The file copying step is not necessary if the
source and the target databases are both accessible from the same client.
The data in ASC and DEL files is assumed to be in the code page of the client
application performing the import. PC/IXF files, which allow for different code
pages, are recommended when importing data in different code pages. If the
PC/IXF file and the import utility are in the same code page, processing occurs as
for a regular application. If the two differ, and the FORCEIN option is specified,
the import utility assumes that data in the PC/IXF file has the same code page as
the application performing the import. This occurs even if there is a conversion
table for the two code pages. If the two differ, the FORCEIN option is not
specified, and there is a conversion table, all data in the PC/IXF file will be
converted from the file code page to the application code page. If the two differ,
the FORCEIN option is not specified, and there is no conversion table, the import
operation will fail. This applies only to PC/IXF files on DB2 clients on the AIX
operating system.
For table objects on an 8 KB page that are close to the limit of 1012 columns,
import of PC/IXF data files might cause DB2 to return an error, because the
maximum size of an SQL statement was exceeded. This situation can occur only if
the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not
apply to import of DEL or ASC files. If PC/IXF files are being used to create a new
table, an alternative is to use db2look to dump the DDL statement that created the
table, and then to issue that statement through the CLP.
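For example, a hedged sketch of that alternative (the database, table, and output
file names are hypothetical) is:
db2look -d mydb -t mytable -e -o mytable_ddl.sql
db2 -tvf mytable_ddl.sql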
DB2 Connect can be used to import data to DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import (INSERT
option) is supported. The RESTARTCOUNT parameter, but not the
COMMITCOUNT parameter, is also supported.
When using the CREATE option with typed tables, create every sub-table defined
in the PC/IXF file; sub-table definitions cannot be altered. When using options
other than CREATE with typed tables, the traversal order list enables one to
specify the traverse order; therefore, the traversal order list must match the one
used during the export operation. For the PC/IXF file format, one need only
specify the target sub-table name, and use the traverse order stored in the file.
The import utility can be used to recover a table previously exported to a PC/IXF
file. The table returns to the state it was in when exported.
Security labels in their internal format might contain newline characters. If you
import the file using the DEL file format, those newline characters can be mistaken
for delimiters. If you have this problem, use the older default priority for delimiters
by specifying the delprioritychar file type modifier in the IMPORT command.
Federated considerations:
Related concepts:
v “Privileges, authorities, and authorization required to use import” on page 38
Related tasks:
v “Importing data” on page 38
Related reference:
v “ADMIN_CMD procedure – Run administrative commands” in Administrative
SQL Routines and Views
v “ADMIN_GET_MSGS table function – Retrieve messages generated by a data
movement utility that is executed through the ADMIN_CMD procedure” in
Administrative SQL Routines and Views
v “ADMIN_REMOVE_MSGS procedure – Clean up messages generated by a data
movement utility that is executed through the ADMIN_CMD procedure” in
Administrative SQL Routines and Views
v “db2Import - Import data into a table, hierarchy, nickname or view” on page 73
v “db2look - DB2 statistics and DDL extraction tool command” in Command
Reference
v “db2pd - Monitor and troubleshoot DB2 database command” in Command
Reference
Authorization:
v IMPORT using the INSERT option requires one of the following:
– sysadm
– dbadm
– CONTROL privilege on each participating table, view or nickname
– INSERT and SELECT privilege on each participating table or view
Required connection:
SQL_API_RC SQL_API_FN
db2Import (
      db2Uint32 versionNumber,
      void * pParmStruct,
      struct sqlca * pSqlca);
   db2Uint16 iUsing;
   struct db2DMUXmlValidateXds *piXdsArgs;
   struct db2DMUXmlValidateSchema *piSchemaArgs;
} db2DMUXmlValidate;

SQL_API_RC SQL_API_FN
db2gImport (
      db2Uint32 versionNumber,
      void * pParmStruct,
      struct sqlca * pSqlca);
piDataFileName
Input. A string containing the path and the name of the external input file
from which the data is to be imported.
piLobPathList
Input. Pointer to an sqlu_media_list with its media_type field set to
SQLU_LOCAL_MEDIA, and its sqlu_media_entry structure listing paths
on the client where the LOB files can be found. This parameter is not valid
when you import to a nickname.
piDataDescriptor
Input. Pointer to an sqldcol structure containing information about the
columns being selected for import from the external file. The value of the
dcolmeth field determines how the remainder of the information provided
in this parameter is interpreted by the import utility. Valid values for this
parameter are:
SQL_METH_N
Names. Selection of columns from the external input file is by
column name.
SQL_METH_P
Positions. Selection of columns from the external input file is by
column position.
SQL_METH_L
Locations. Selection of columns from the external input file is by
column location. The database manager rejects an import call with
a location pair that is invalid because of any one of the following
conditions:
v Either the beginning or the ending location is not in the range
from 1 to the largest signed 2-byte integer.
v The ending location is smaller than the beginning location.
v The input column width defined by the location pair is not
compatible with the type and the length of the target column.
A location pair with both locations equal to zero indicates that a
nullable column is to be filled with NULLs.
SQL_METH_D
Default. If piDataDescriptor is NULL, or is set to SQL_METH_D,
default selection of columns from the external input file is done. In
this case, the number of columns and the column specification
array are both ignored. For DEL, IXF, or WSF files, the first n
columns of data in the external input file are taken in their natural
order, where n is the number of database columns into which the
data is to be imported.
piActionString
Input. Pointer to an sqlchar structure containing a 2-byte long field,
followed by an array of characters identifying the columns into which data
is to be imported.
The character array is of the form:
{INSERT | INSERT_UPDATE | REPLACE | CREATE | REPLACE_CREATE}
INTO {tname[(tcolumn-list)] |
[{ALL TABLES | (tname[(tcolumn-list)][, tname[(tcolumn-list)]])}]
[IN] HIERARCHY {STARTING tname | (tname[, tname])}
[UNDER sub-table-name | AS ROOT TABLE]}
[DATALINK SPECIFICATION datalink-spec]
INSERT
Adds the imported data to the table without changing the existing
table data.
INSERT_UPDATE
Adds the imported rows if their primary key values are not in the
table, and uses them for update if their primary key values are
found. This option is only valid if the target table has a primary
key, and the specified (or implied) list of target columns being
imported includes all columns for the primary key. This option
cannot be applied to views.
REPLACE
Deletes all existing data from the table by truncating the table
object, and inserts the imported data. The table definition and the
index definitions are not changed. (Indexes are deleted and
replaced if indexixf is in FileTypeMod, and FileType is SQL_IXF.) If
the table is not already defined, an error is returned.
Note: If an error occurs after the existing data is deleted, that data
is lost.
This parameter is not valid when you import to a nickname.
CREATE
Creates the table definition and the row contents using the
information in the specified PC/IXF file, if the specified table is not
defined. If the file was previously exported by DB2, indexes are
also created. If the specified table is already defined, an error is
returned. This option is valid for the PC/IXF file format only. This
parameter is not valid when you import to a nickname.
REPLACE_CREATE
Replaces the table contents using the PC/IXF row information in
the PC/IXF file, if the specified table is defined. If the table is not
already defined, the table definition and row contents are created
using the information in the specified PC/IXF file. If the PC/IXF
file was previously exported by DB2, indexes are also created. This
option is valid for the PC/IXF file format only.
Note: If an error occurs after the existing data is deleted, that data
is lost.
This parameter is not valid when you import to a nickname.
tname The name of the table, typed table, view, or object view into which
the data is to be inserted. An alias for REPLACE,
INSERT_UPDATE, or INSERT can be specified, except in the case
of a down-level server, when a qualified or unqualified name
should be specified. If it is a view, it cannot be a read-only view.
tcolumn-list
A list of table or view column names into which the data is to be
inserted. The column names must be separated by commas. If
column names are not specified, column names as defined in the
CREATE TABLE or the ALTER TABLE statement are used. If no
column list is specified for typed tables, data is inserted into all
columns within each sub-table.
sub-table-name
Specifies a parent table when creating one or more sub-tables
under the CREATE option.
ALL TABLES
An implicit keyword for hierarchy only. When importing a
hierarchy, the default is to import all tables specified in the
traversal-order-list.
HIERARCHY
Specifies that hierarchical data is to be imported.
STARTING
Keyword for hierarchy only. Specifies that the default order,
starting from a given sub-table name, is to be used.
UNDER
Keyword for hierarchy and CREATE only. Specifies that the new
hierarchy, sub-hierarchy, or sub-table is to be created under a given
sub-table.
AS ROOT TABLE
Keyword for hierarchy and CREATE only. Specifies that the new
hierarchy, sub-hierarchy, or sub-table is to be created as a
stand-alone hierarchy.
DATALINK SPECIFICATION datalink-spec
Specifies parameters pertaining to DB2 Data Links Manager. These
parameters can be specified using the same syntax as in the
IMPORT command.
The tname and the tcolumn-list parameters correspond to the tablename
and the colname lists of SQL INSERT statements, and have the same
restrictions.
The columns in tcolumn-list and the external columns (either specified or
implied) are matched according to their position in the list or the structure
(data from the first column specified in the sqldcol structure is inserted
into the table or view field corresponding to the first element of the
tcolumn-list).
If unequal numbers of columns are specified, the number of columns
actually processed is the lesser of the two numbers. This could result in an
error (because there are no values to place in some non-nullable table
fields) or an informational message (because some external file columns are
ignored).
This parameter is not valid when you import to a nickname.
piFileType
Input. A string that indicates the format of the data within the external file.
Supported external file formats are:
SQL_ASC
Non-delimited ASCII.
SQL_DEL
Delimited ASCII, for exchange with dBase, BASIC, and the IBM
Personal Decision Series programs, and many other database
managers and file managers.
SQL_IXF
PC version of the Integrated Exchange Format, the preferred
method for exporting data from a table so that it can be imported
later into the same table or into another database manager table.
- SQLU_ALLOW_NO_ACCESS
Specifies that the import utility locks the table exclusively.
- SQLU_ALLOW_WRITE_ACCESS
Specifies that the data in the table should still be accessible to
readers and writers while the import is in progress.
An intent exclusive (IX) lock on the target table is acquired when the first
row is inserted. This allows concurrent readers and writers to access table
data. Online mode is not compatible with the REPLACE, CREATE, or
REPLACE_CREATE import options. Online mode is not supported in
conjunction with buffered inserts. The import operation will periodically
commit inserted data to prevent lock escalation to a table lock and to avoid
running out of active log space. These commits will be performed even if
the piCommitCount parameter was not used. During each commit, import
will lose its IX table lock, and will attempt to reacquire it after the commit.
This parameter is required when you import to a nickname, and the
piCommitCount parameter must be specified with a valid number
(AUTOMATIC is not considered a valid option).
piXmlParse
Input. The type of parsing that should occur for XML documents. Valid
values, found in the db2ApiDf header file in the include directory, are:
DB2DMU_XMLPARSE_PRESERVE_WS
Whitespace should be preserved.
DB2DMU_XMLPARSE_STRIP_WS
Whitespace should be stripped.
piXmlValidate
Input. Pointer to the db2DMUXmlValidate structure. Indicates that XML
schema validation should occur for XML documents.
iMapToSchema
Input. The SQL identifier of the XML schema to map to.
This parameter applies only when the iUsing parameter in the same
structure is set to DB2DMU_XMLVAL_XDS.
piSchemaArgs
Input. Pointer to a db2DMUXmlValidateSchema structure, representing
arguments that correspond to the CLP ″XMLVALIDATE USING SCHEMA″
clause.
This parameter applies only when the iUsing parameter in the same
structure is set to DB2DMU_XMLVAL_SCHEMA.
Usage notes:
Before starting an import operation, you must complete all table operations and
release all locks in one of two ways:
v Close all open cursors that were defined with the WITH HOLD clause, and
commit the data changes by executing the COMMIT statement.
v Roll back the data changes by executing the ROLLBACK statement.
The import utility adds rows to the target table using the SQL INSERT statement.
The utility issues one INSERT statement for each row of data in the input file. If an
INSERT statement fails, one of two actions result:
v If it is likely that subsequent INSERT statements can be successful, a warning
message is written to the message file, and processing continues.
v If it is likely that subsequent INSERT statements will fail, and there is potential
for database damage, an error message is written to the message file, and
processing halts.
The utility performs an automatic COMMIT after the old rows are deleted during a
REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the
application interrupts the database manager after the table object is truncated, all
of the old data is lost. Ensure that the old data is no longer needed before using
these options.
By default, automatic COMMITs are not performed for the INSERT or the
INSERT_UPDATE option. They are, however, performed if the *piCommitcount
parameter is not zero. A full log results in a ROLLBACK.
Whenever the import utility performs a COMMIT, two messages are written to the
message file: one indicates the number of records to be committed, and the other is
written after a successful COMMIT. When restarting the import operation after a
failure, specify the number of records to skip, as determined from the last
successful COMMIT.
The import utility accepts input data with minor incompatibility problems (for
example, character data can be imported using padding or truncation, and numeric
data can be imported with a different numeric data type), but data with major
incompatibility problems is not accepted.
If an error occurs while recreating the foreign keys, modify the data to maintain
referential integrity.
Referential constraints and foreign key definitions are not preserved when creating
tables from PC/IXF files. (Primary key definitions are preserved if the data was
previously exported using SELECT *.)
Importing to a remote database requires enough disk space on the server for a
copy of the input data file, the output message file, and potential growth in the
size of the database.
If an import operation is run against a remote database, and the output message
file is very long (more than 60 KB), the message file returned to the user on the
client may be missing messages from the middle of the import operation. The first
30 KB of message information and the last 30 KB of message information are
always retained.
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on
a hard drive rather than on diskettes. Non-default values for piDataDescriptor, or
specifying an explicit list of table columns in piActionString, makes importing to a
remote database slower.
The database table or hierarchy must exist before data in the ASC, DEL, or WSF
file formats can be imported; however, if the table does not already exist, IMPORT
CREATE or IMPORT REPLACE_CREATE creates the table when it imports data
from a PC/IXF file. For typed tables, IMPORT CREATE can create the type
hierarchy and the table hierarchy as well.
PC/IXF import should be used to move data (including hierarchical data) between
databases. If character data containing row separators is exported to a delimited
ASCII (DEL) file and processed by a text transfer program, fields containing the
row separators will shrink or expand.
The data in ASC and DEL files is assumed to be in the code page of the client
application performing the import. PC/IXF files, which allow for different code
pages, are recommended when importing data in different code pages. If the
PC/IXF file and the import utility are in the same code page, processing occurs as
for a regular application. If the two differ, and the FORCEIN option is specified,
the import utility assumes that data in the PC/IXF file has the same code page as
the application performing the import. This occurs even if there is a conversion
table for the two code pages. If the two differ, the FORCEIN option is not
specified, and there is a conversion table, all data in the PC/IXF file will be
converted from the file code page to the application code page. If the two differ,
the FORCEIN option is not specified, and there is no conversion table, the import
operation will fail. This applies only to PC/IXF files on DB2 for AIX clients.
For table objects on an 8KB page that are close to the limit of 1012 columns, import
of PC/IXF data files may cause DB2 to return an error, because the maximum size
of an SQL statement was exceeded. This situation can occur only if the columns
are of type CHAR, VARCHAR, or CLOB. The restriction does not apply to import
of DEL or ASC files.
DB2 Connect can be used to import data to DRDA servers such as DB2 for
OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import (INSERT
option) is supported. The restartcnt parameter, but not the commitcnt parameter, is
also supported.
When using the CREATE option with typed tables, create every sub-table defined
in the PC/IXF file; sub-table definitions cannot be altered. When using options
other than CREATE with typed tables, the traversal order list enables one to
specify the traverse order; therefore, the traversal order list must match the one
used during the export operation. For the PC/IXF file format, one need only
specify the target sub-table name, and use the traverse order stored in the file. The
import utility can be used to recover a table previously exported to a PC/IXF file.
The table returns to the state it was in when exported.
Federated considerations:
When using the db2Import API and the INSERT, UPDATE, or INSERT_UPDATE
parameters, you must ensure that you have CONTROL privilege on the
participating nickname. You must ensure that the nickname you wish to use when
doing an import operation already exists.
Related tasks:
v “Importing data” on page 38
Related reference:
v “IMPORT ” on page 49
v “SQLCA data structure” in Administrative API Reference
v “sqldcol data structure” in Administrative API Reference
v “sqlu_media_list data structure” in Administrative API Reference
v “IMPORT command using the ADMIN_CMD procedure” on page 61
Related samples:
v “expsamp.sqb -- Export and import tables with table data to a DRDA database
(IBM COBOL)”
v “impexp.sqb -- Export and import tables with table data (IBM COBOL)”
v “tbmove.sqc -- How to move table data (C)”
v “dtformat.sqc -- Load and import data format extensions (C)”
v “tbmove.sqC -- How to move table data (C++)”
If this modifier is specified, and the transaction log is not sufficiently large, the
import operation will fail. The transaction log must be large enough to
accommodate either the number of rows specified by COMMITCOUNT, or the
number of rows in the data file if COMMITCOUNT is not specified. It is therefore
recommended that the COMMITCOUNT option be specified to avoid transaction
log overflow.
Table 6. Valid file type modifiers for the import utility: All file formats (continued)
Modifier Description
lobsinfile lob-path specifies the path to the files containing LOB data.
Each path contains at least one file that contains at least one LOB pointed to by a
Lob Location Specifier (LLS) in the data file. The LLS is a string representation of
the location of a LOB in a file stored in the LOB file path. The format of an LLS is
filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the
LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length
of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in
the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456
bytes long.
The LOBS FROM clause specifies where the LOB files are located when the
“lobsinfile” modifier is used. The LOBS FROM clause will implicitly activate the
LOBSINFILE behavior. The LOBS FROM clause conveys to the IMPORT utility
the list of paths to search for the LOB files while importing the data.
To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is
treated as a 0 length LOB. For null LOBS with length of -1, the offset and the file
name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
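For example, the following hedged sketch (paths, file, and table names are
hypothetical) imports a data file whose LLS entries point to LOB files stored
under /u/user/lobdir:
db2 import from myfile.del of del lobs from /u/user/lobdir
   modified by lobsinfile insert into mytable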
no_type_id Valid only when importing into a single sub-table. Typical usage is to export data
from a regular table, and then to invoke an import operation (using this modifier)
to convert the data into a single sub-table.
nodefaults If a source column for a target table column is not explicitly specified, and the
table column is not nullable, default values are not loaded. Without this option, if
a source column for one of the target table columns is not explicitly specified, one
of the following occurs:
v If a default value can be specified for a column, the default value is loaded
v If the column is nullable, and a default value cannot be specified for that
column, a NULL is loaded
v If the column is not nullable, and a default value cannot be specified, an error
is returned, and the utility stops processing.
norowwarnings Suppresses all warnings about rejected rows.
seclabelchar Indicates that security labels in the input source file are in the string format for
security label values rather than in the default encoded numeric format. IMPORT
converts each security label into the internal format as it is loaded. If a string is
not in the proper format, the row is not loaded and a warning (SQLSTATE 01H53)
is returned. If the string does not represent a valid security label that is part of
the security policy protecting the table, the row is not loaded and a warning
(SQLSTATE 01H53, SQLCODE SQL3243W) is returned.
usedefaults If a source column for a target table column has been specified, but it contains no
data for one or more row instances, default values are loaded. Examples of
missing data are:
v For DEL files: two adjacent column delimiters (",,") or two adjacent column
delimiters separated by an arbitrary number of spaces (", ,") are specified for a
column value.
v For DEL/ASC/WSF files: A row that does not have enough columns, or is not
long enough for the original specification.
Note: For ASC files, NULL column values are not considered explicitly
missing, and a default will not be substituted for NULL column values. NULL
column values are represented by all space characters for numeric, date, time,
and timestamp columns, or by using the NULL INDICATOR for a column of
any type to indicate the column is NULL.
Without this option, if a source column contains no data for a row instance, one
of the following occurs:
v For DEL/ASC/WSF files: If the column is nullable, a NULL is loaded. If the
column is not nullable, the utility rejects the row.
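For example, the following hedged sketch (file and table names are hypothetical)
loads column defaults for missing values instead of rejecting or NULL-filling the
affected rows:
db2 import from myfile.del of del modified by usedefaults
   insert into staff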
Table 7. Valid file type modifiers for the import utility: ASCII file formats (ASC/DEL)
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the
data in the output data set. Converts character data from this code page to the
application code page during the import operation.
dateformat=″x″ x is the format of the date in the source file (see note 2). A
default value of 1 is assigned for each element that is not specified. Some
examples of date formats are:
"D-M-YYYY"
"MM.DD.YYYY"
"YYYYDDD"
implieddecimal The location of an implied decimal point is determined by the column definition;
it is no longer assumed to be at the end of the value. For example, the value
12345 is loaded into a DECIMAL(8,2) column as 123.45, not 12345.00.
timeformat=″x″ x is the format of the time in the source file (see note 2). Valid time elements are:
H - Hour (one or two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24
for a 24 hour system)
HH - Hour (two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24
for a 24 hour system; mutually exclusive
with H)
M - Minute (one or two digits ranging
from 0 - 59)
MM - Minute (two digits ranging from 0 - 59;
mutually exclusive with M)
S - Second (one or two digits ranging
from 0 - 59)
SS - Second (two digits ranging from 0 - 59;
mutually exclusive with S)
SSSSS - Second of the day after midnight (5 digits
ranging from 00000 - 86399; mutually
exclusive with other time elements)
TT - Meridian indicator (AM or PM)
A default value of 0 is assigned for each element that is not specified. Some
examples of time formats are:
"HH:MM:SS"
"HH.MM TT"
"SSSSS"
timestampformat=″x″ x is the format of the time stamp in the source file (see note 2). Valid time stamp elements
are:
YYYY - Year (four digits ranging from 0000 - 9999)
M - Month (one or two digits ranging from 1 - 12)
MM - Month (two digits ranging from 01 - 12;
mutually exclusive with M and MMM)
MMM - Month (three-letter case-insensitive abbreviation for
the month name; mutually exclusive with M and MM)
D - Day (one or two digits ranging from 1 - 31)
DD - Day (two digits ranging from 1 - 31; mutually exclusive with D)
DDD - Day of the year (three digits ranging from 001 - 366;
mutually exclusive with other day or month elements)
H - Hour (one or two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24 for a 24 hour system)
HH - Hour (two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24 for a 24 hour system;
mutually exclusive with H)
M - Minute (one or two digits ranging from 0 - 59)
MM - Minute (two digits ranging from 0 - 59;
mutually exclusive with M, minute)
S - Second (one or two digits ranging from 0 - 59)
SS - Second (two digits ranging from 0 - 59;
mutually exclusive with S)
SSSSS - Second of the day after midnight (5 digits
ranging from 00000 - 86399; mutually
exclusive with other time elements)
UUUUUU - Microsecond (6 digits ranging from 000000 - 999999;
mutually exclusive with all other microsecond elements)
UUUUU - Microsecond (5 digits ranging from 00000 - 99999,
maps to range from 000000 - 999990;
mutually exclusive with all other microsecond elements)
UUUU - Microsecond (4 digits ranging from 0000 - 9999,
maps to range from 000000 - 999900;
mutually exclusive with all other microsecond elements)
UUU - Microsecond (3 digits ranging from 000 - 999,
maps to range from 000000 - 999000;
mutually exclusive with all other microsecond elements)
UU - Microsecond (2 digits ranging from 00 - 99,
maps to range from 000000 - 990000;
mutually exclusive with all other microsecond elements)
U - Microsecond (1 digit ranging from 0 - 9,
maps to range from 000000 - 900000;
mutually exclusive with all other microsecond elements)
TT - Meridian indicator (AM or PM)
The valid values for the MMM element include: ’jan’, ’feb’, ’mar’, ’apr’, ’may’,
’jun’, ’jul’, ’aug’, ’sep’, ’oct’, ’nov’ and ’dec’. These values are case insensitive.
The following example illustrates how to import data containing user defined
date and time formats into a table called schedule:
db2 import from delfile2 of del
modified by timestampformat="yyyy.mm.dd hh:mm tt"
insert into schedule
usegraphiccodepage If usegraphiccodepage is given, the assumption is made that data being imported
into graphic or double-byte character large object (DBCLOB) data fields is in the
graphic code page. The rest of the data is assumed to be in the character code
page. The graphic code page is associated with the character code page. IMPORT
determines the character code page through either the codepage modifier, if it is
specified, or through the code page of the application if the codepage modifier is
not specified.
This modifier should be used in conjunction with the delimited data file
generated by drop table recovery only if the table being recovered has graphic
data.
Restrictions
The usegraphiccodepage modifier MUST NOT be specified with DEL files created
by the EXPORT utility, as these files contain data encoded in only one code page.
The usegraphiccodepage modifier is also ignored by the double-byte character
large objects (DBCLOBs) in files.
xmlchar Specifies that XML documents are encoded in the character code page.
This option is useful for processing XML documents that are encoded in the
specified character code page but do not contain an encoding declaration.
For each document, if a declaration tag exists and contains an encoding attribute,
the encoding must match the character code page, otherwise the row containing
the document will be rejected. Note that the character codepage is the value
specified by the codepage file type modifier, or the application codepage if it is
not specified. By default, either the documents are encoded in Unicode, or they
contain a declaration tag with an encoding attribute.
xmlgraphic Specifies that XML documents are encoded in the specified graphic code page.
This option is useful for processing XML documents that are encoded in a specific
graphic code page but do not contain an encoding declaration.
For each document, if a declaration tag exists and contains an encoding attribute,
the encoding must match the graphic code page, otherwise the row containing
the document will be rejected. Note that the graphic code page is the graphic
component of the value specified by the codepage file type modifier, or the
graphic component of the application code page if it is not specified. By default,
documents are either encoded in Unicode, or they contain a declaration tag with
an encoding attribute.
Note: If the xmlgraphic modifier is specified with the IMPORT command, the
XML document to be imported must be encoded in the UTF-16 code page.
Otherwise, the XML document may be rejected with a parsing error, or it may be
imported into the table with data corruption.
Table 8. Valid file type modifiers for the import utility: ASC (non-delimited ASCII) file format
Modifier Description
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the
source data has a column definition that exceeds the size of the target table
column. Such rows can be successfully imported if code page conversion causes
the source data to shrink; for example, 4-byte EUC data in the source could
shrink to 2-byte DBCS data in the target, and require half the space. This option
is particularly useful if it is known that the source data will fit in all cases despite
mismatched column definitions.
nullindchar=x x is a single character. Changes the character denoting a null value to x. The
default value of x is Y (see note 3).
This modifier is case sensitive for EBCDIC data files, except when the character is
an English letter. For example, if the null indicator character is specified to be the
letter N, then n is also recognized as a null indicator.
reclen=x x is an integer with a maximum value of 32 767. x characters are read for each
row, and a new-line character is not used to indicate the end of the row.
striptblanks Truncates any trailing blank spaces when loading data into a variable-length field.
If this option is not specified, blank spaces are kept.
This option cannot be specified together with striptnulls. These are mutually
exclusive options. This option replaces the obsolete t option, which is supported
for back-level compatibility only.
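For example, the following hedged sketch (the file, record length, column
positions, and table are hypothetical) reads fixed-length ASC records and removes
trailing blanks from the values placed in variable-length columns:
db2 import from myfile.asc of asc modified by striptblanks reclen=40
   method l (1 8, 9 28) insert into mytable (id, descr)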
striptnulls Truncates any trailing NULLs (0x00 characters) when loading data into a
variable-length field. If this option is not specified, NULLs are kept.
This option cannot be specified together with striptblanks. These are mutually
exclusive options. This option replaces the obsolete padwithzero option, which is
supported for back-level compatibility only.
Table 9. Valid file type modifiers for the import utility: DEL (delimited ASCII) file format
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation
mark ("). The specified character is used in place of double quotation marks to
enclose a character string (see notes 3 and 4). If you want to explicitly specify the double quotation
mark as the character string delimiter, it should be specified as follows:
modified by chardel""
The single quotation mark (') can also be specified as a character string delimiter.
In the following example, chardel'' causes the import utility to interpret any
single quotation mark (') it encounters as a character string delimiter:
db2 "import from myfile.del of del
modified by chardel''
method p (1, 4) insert into staff (id, years)"
coldelx x is a single character column delimiter. The default value is a comma (,). The
specified character is used in place of a comma to signal the end of a column (see notes 3 and 4).
In the following example, coldel; causes the import utility to interpret any
semicolon (;) it encounters as a column delimiter:
db2 import from myfile.del of del
modified by coldel;
messages msgs.txt insert into staff
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank
space instead of a plus sign (+). The default action is to prefix positive decimal
values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The
default value is a period (.). The specified character is used in place of a period as
a decimal point character (see notes 3 and 4).
In the following example, decpt; causes the import utility to interpret any
semicolon (;) it encounters as a decimal point:
db2 "import from myfile.del of del
modified by chardel''
decpt; messages msgs.txt insert into staff"
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter,
column delimiter. This modifier protects existing applications that depend on the
older priority by reverting the delimiter priorities to: character delimiter, record
delimiter, column delimiter. Syntax:
db2 import ... modified by delprioritychar ...
With the delprioritychar modifier specified, there will be only two rows in this
data file. The second <row delimiter> will be interpreted as part of the first data
column of the second row, while the first and the third <row delimiter> are
interpreted as actual record delimiters. If this modifier is not specified, there will
be three rows in this data file, each delimited by a <row delimiter>.
keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR,
LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks
that are not inside character delimiters are removed, and a NULL is inserted into
the table for all blank fields.
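For example, the following hedged sketch (file and table names are hypothetical)
preserves the blanks surrounding each delimited field:
db2 import from myfile.del of del modified by keepblanks
   insert into staff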
nochardel The import utility will assume all bytes found between the column delimiters to
be part of the column’s data. Character delimiters will be parsed as part of
column data. This option should not be specified if the data was exported using
DB2 (unless nochardel was specified at export time). It is provided to support
vendor data files that do not have character delimiters. Improper usage might
result in data loss or corruption.
Table 10. Valid file type modifiers for the import utility: IXF file format
Modifier Description
forcein Directs the utility to accept data despite code page mismatches, and to suppress
translation between code pages.
Fixed length target fields are checked to verify that they are large enough for the
data. If nochecklengths is specified, no checking is done, and an attempt is made
to import each row.
indexixf Directs the utility to drop all indexes currently defined on the existing table, and
to create new ones from the index definitions in the PC/IXF file. This option can
only be used when the contents of a table are being replaced. It cannot be used
with a view, or when an insert-column is specified.
indexschema=schema Uses the specified schema for the index name during index creation. If schema is
not specified (but the keyword indexschema is specified), uses the connection user
ID. If the keyword is not specified, uses the schema in the IXF file.
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the
source data has a column definition that exceeds the size of the target table
column. Such rows can be successfully imported if code page conversion causes
the source data to shrink; for example, 4-byte EUC data in the source could
shrink to 2-byte DBCS data in the target, and require half the space. This option
is particularly useful if it is known that the source data will fit in all cases despite
mismatched column definitions.
forcecreate Specifies that the table should be created with possible missing or limited
information after returning SQL3311N during an import operation.
Notes:
1. The import utility does not issue a warning if an attempt is made to use
unsupported file types with the MODIFIED BY option. If this is attempted, the
import operation fails, and an error code is returned.
2. Double quotation marks around the date format string are mandatory. Field
separators cannot contain any of the following: a-z, A-Z, and 0-9. The field
separator should not be the same as the character delimiter or field delimiter
in the DEL file format. A field separator is optional if the start and end
positions of an element are unambiguous. Ambiguity can exist if (depending
on the modifier) elements such as D, H, M, or S are used, because of the
variable length of the entries.
For time stamp formats, care must be taken to avoid ambiguity between the
month and the minute descriptors, since they both use the letter M. A month
field must be adjacent to other date fields. A minute field must be adjacent to
other time fields. Following are some ambiguous time stamp formats:
"M" (could be a month, or a minute)
"M:M" (Which is which?)
"M:YYYY:M" (Both are interpreted as month.)
"S:M:YYYY" (adjacent to both a time value and a date value)
In ambiguous cases, the utility will report an error message, and the operation
will fail.
Following are some unambiguous time stamp formats:
"M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and back slashes, must be
preceded by an escape character (for example, \).
3. The character must be specified in the code page of the source data.
The character code point (instead of the character symbol) can be specified
using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the
code point. For example, to specify the # character as a column delimiter, use
one of the following:
... modified by coldel# ...
... modified by coldel0x23 ...
... modified by coldelX23 ...
4. Delimiter restrictions for moving data lists restrictions that apply to the
characters that can be used as delimiter overrides.
5. The following file type modifiers are not allowed when importing into a
nickname:
v indexixf
v indexschema
v dldelfiletype
v nodefaults
v usedefaults
v no_type_idfiletype
v generatedignore
v generatedmissing
v identityignore
v identitymissing
v lobsinfile
6. The WSF file format is not supported for XML columns.
7. The CREATE mode is not supported for XML columns.
8. All XML data must reside in XML files that are separate from the main data
file. An XML Data Specifier (XDS) (or a NULL value) must exist for each XML
column in the main data file.
9. XML documents are assumed to be in Unicode format or to contain a
declaration tag that includes an encoding attribute, unless the XMLCHAR or
XMLGRAPHIC file type modifier is specified.
10. Rows containing documents that are not well-formed will be rejected.
Related reference:
v “Delimiter restrictions for moving data” on page 259
v “db2Import - Import data into a table, hierarchy, nickname or view” on page 73
v “IMPORT ” on page 49
Related concepts:
v “Character set and national language support” on page 206
Related reference:
v “IMPORT ” on page 49
The following example shows how to import information from myfile.ixf to the
STAFF table:
db2 import from myfile.ixf of ixf messages msg.txt insert into staff
SQL3150N The H record in the PC/IXF file has product "DB2 01.00", date
"19970220", and time "140848".
SQL3110N The utility has completed processing. "58" rows were read from the
input file.
SQL3149N "58" rows were processed from the input file. "58" rows were
successfully inserted into the table. "0" rows were rejected.
The following command generates identity values for rows 1 and 2, since no
identity values are supplied in DATAFILE1 for those rows. Rows 3 and 4, however,
are assigned the user-supplied identity values of 100 and 101, respectively.
db2 import from datafile1.del of del replace into table1
To import DATAFILE1 into TABLE1 so that identity values are generated for all
rows, issue one of the following commands:
db2 import from datafile1.del of del method P(1, 3, 4)
replace into table1 (c1, c3, c4)
db2 import from datafile1.del of del modified by identityignore
replace into table1
To import DATAFILE2 into TABLE1 so that identity values are generated for each
row, issue one of the following commands:
db2 import from datafile2.del of del replace into table1 (c1, c3, c4)
db2 import from datafile2.del of del modified by identitymissing
replace into table1
Data Records:
1...5....10...15...20...25...30...35...40
Test data 1 XXN 123abcdN
Test data 2 and 3 QQY wxyzN
Test data 4,5 and 6 WWN6789 Y
Related concepts:
v “Import Overview” on page 35
v “Importing large objects (LOBS)” on page 46
v “Importing user-defined distinct types (UDTs)” on page 47
v “Importing XML data” on page 40
Related tasks:
v “Importing data” on page 38
Related reference:
v Appendix B, “Differences between the import and load utility,” on page 281
The load process consists of four distinct phases (see Figure 1):
v Load, during which the data is written to the table.
During the load phase, data is loaded into the table, and index keys and table
statistics are collected, if necessary. Save points, or points of consistency, are
established at intervals specified through the SAVECOUNT parameter in the
LOAD command. Messages are generated, indicating how many input rows
were successfully loaded at the time of the save point. If a failure occurs, you
can restart the load operation; the RESTART option automatically restarts the
load operation from the last successful consistency point. The TERMINATE
option rolls back the failed load operation.
Figure 1. The Four Phases of the Load Process: Load, Build, Delete, and Index Copy. While
the load operation is taking place, the target table is in the load in progress state. If the table
has constraints, the table will also be in the set integrity pending state. If the ALLOW READ
ACCESS option was specified, the table will also be in the read access only state.
Note: Each deletion event is logged. If you have a large number of records that
violate the uniqueness condition, the log could fill up during the delete
phase.
v Index copy, during which the index data is copied from a system temporary
table space to the original table space. This will only occur if a system
temporary table space was specified for index creation during a load
operation with the READ ACCESS option specified.
Note: After you invoke the load utility, you can use the LIST UTILITIES command
to monitor the progress of the load operation. For more information, refer to
LIST UTILITIES command.
Note: This can be accomplished by using the READ ACCESS option and is not
supported when the load utility is invoked in REPLACE mode.
v Whether the load operation should wait for other utilities or applications to
finish using the table or force the other applications off before proceeding.
v An alternate system temporary table space in which to build the index.
Note: This is only supported when the READ ACCESS option is specified with
a full index rebuild.
v The paths and the names of the input files in which LOBs are stored. The
lobsinfile modifier tells the load utility that all LOB data is being loaded from
files.
v A message file name. During operations such as exporting, importing, loading,
binding, or restoring data, you can specify that message files be created to
contain the error, warning, and informational messages associated with those
operations. Specify the name of these files with the MESSAGES parameter. These
message files are standard ASCII text files. To print them, use the printing
procedure for your operating system; to view them, use any ASCII editor.
Notes:
1. You can only view the contents of a message file after the operation is
finished.
2. Each message in a message file begins on a new line and contains
information provided by the DB2 message retrieval facility.
Note: Prior to DB2 UDB Version 8, when the COPY NO option was specified on
a recoverable database, the table space was put in backup pending state
only after the load operation was committed. In DB2 UDB Version 8, the
table space will be placed in backup pending state when the load
operation begins and will remain in that state even if the load operation
fails and is rolled back. As in previous releases, when the COPY NO
option is specified and load operation completes successfully, the
rollforward utility will put dependent table spaces in restore pending
state during a rollforward operation.
v You can also specify that users have read access to the data that existed in the
table prior to the load. This means that after the load operation has completed,
you will not be able to view the new data if there are constraints on the table
and integrity checking has not been completed. You can also specify that the
index be rebuilt in a separate table space during a load operation by specifying
the READ ACCESS and INDEXING MODE REBUILD options. The index will be
copied back to the original table space during the index copy phase which
occurs after the other phases of the load operation.
v The functionality of the LOAD QUERY command has been expanded: in addition
to the status information it previously reported for a load operation in progress,
it now also returns the state of the target table into which data is being loaded.
The LOAD QUERY command can also be used to query the table state whether
or not a load operation is in progress on that table.
v Extent allocations in DMS table spaces are now logged. The LOAD command
will now write two log records for every extent it allocates in a DMS table space.
In DB2 UDB Version 8, you must issue the following command to remove the
table space from the quiesced exclusive state:
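A hedged sketch of that command, assuming the loaded table is named MYTABLE:
db2 quiesce tablespaces for table mytable reset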
Related concepts:
v “Rollforward recovery” in Data Recovery and High Availability Guide and Reference
Related tasks:
v “Loading data” on page 110
v “Loading data in a partitioned database environment” on page 219
Related reference:
v “LIST UTILITIES command” in Command Reference
v “Load configuration options for partitioned database environments” on page 229
v “RUNSTATS command” in Command Reference
Related concepts:
v “Load considerations for partitioned tables” on page 126
v “Load overview” on page 102
Related tasks:
v “Loading data in a partitioned database environment” on page 219
Related reference:
v “Load configuration options for partitioned database environments” on page 229
Since all load processes (and all DB2 server processes, in general) are owned by
the instance owner, and all of these processes use the identification of the instance
owner to access needed files, the instance owner must have read access to the
input data files, regardless of who invokes the command.
If the REPLACE option is specified, the session authorization ID must have the
authority to drop the table.
Notes:
v To load data into a table that has protected columns, the session authorization
ID must have LBAC credentials that allow write access to all protected columns
in the table.
v To load data into a table that has protected rows, the session authorization ID
must have been granted a security label for write access that is part of the
security policy protecting the table.
Related reference:
v “db2Load - Load data into a table” on page 161
v “LOAD ” on page 132
Prerequisites:
Before invoking the load utility, you must be connected to (or be able to implicitly
connect to) the database into which the data will be loaded. Since the utility will
issue a COMMIT statement, you should complete all transactions and release all
locks by issuing either a COMMIT or a ROLLBACK statement before invoking the
load utility.
Data is loaded in the sequence that appears in the input file, except when using
multi-dimensional clustering (MDC) tables, partitioned tables, or the ANYORDER
clause. If a particular sequence is desired, sort the data before attempting a load
operation.
If clustering is required, the data should be sorted on the clustering index prior to
loading. When loading data into multidimensional clustered tables (MDC), sorting
is not required prior to the load operation, and data is clustered according to the
MDC table definition.
When loading data into partitioned tables, sorting is not required prior to the load
operation, and data is partitioned according to the table definition.
Authorization:
Since all load processes (and all DB2 server processes, in general) are owned by the
instance owner, and all of these processes use the identification of the instance
owner to access needed files, the instance owner must have read access to the
input data files, regardless of who invokes the command.
Restrictions:
The following restrictions apply to the load utility when loading into a partitioned
table:
v Consistency points are not supported.
v Loading data into a subset of data partitions while keeping the remaining data
partitions fully online is not supported.
v The exception table used by a load operation or a set integrity pending
operation cannot be partitioned.
v A unique index cannot be rebuilt when the load utility is running in insert mode
or restart mode, and the load target table has any detached dependents.
Procedure:
The load utility can be invoked through the command line processor (CLP), the
Load wizard in the Control Center, or an application programming interface (API),
db2Load.
The following is an example of the LOAD command issued through the CLP:
db2 load from stafftab.ixf of ixf messages staff.msgs
insert into userid.staff copy yes use tsm data buffer 4000
In this example:
v Any warning or error messages are placed in the staff.msgs file.
v A copy of the changes made is stored in Tivoli® Storage Manager (TSM).
v Four thousand pages of buffer space are to be used during the load operation.
The following is another example of the LOAD command issued through the CLP:
db2 load from stafftab.ixf of ixf messages staff.msgs
tempfiles path /u/myuser replace into staff
In this example:
v Any warning or error messages are placed in the staff.msgs file.
v Temporary files are created in the /u/myuser directory.
v The existing data in the staff table is replaced by the loaded data.
Note: These examples use relative path names for the load input file. Relative path
names are only allowed on calls from a client on the same database partition
as the database. The use of fully qualified path names is recommended.
Detailed information about the Load wizard is provided through its online help
facility.
After you invoke the load utility, you can use the LIST UTILITIES command to
monitor the progress of the load operation. In the case of a load operation
performed in either INSERT mode, REPLACE mode, or RESTART mode, detailed
progress monitoring support is available. Issue the LIST UTILITIES command with
the SHOW DETAIL option to view detailed information about the current load
phase. Details are not available for a load operation performed in TERMINATE
mode. The LIST UTILITIES command will simply show that a load terminate
utility is currently running.
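For example, while the load operation is running you can issue:
   db2 list utilities show detail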
Related concepts:
v “Load considerations for partitioned tables” on page 126
v “Load overview” on page 102
v “Loading data in a partitioned database environment - hints and tips” on page
237
Related tasks:
v “Troubleshooting load issues” in Troubleshooting Guide
v “Loading data in a partitioned database environment” on page 219
Related reference:
v “Tivoli Storage Manager” in Data Recovery and High Availability Guide and
Reference
v “Load configuration options for partitioned database environments” on page 229
v “LIST UTILITIES command” in Command Reference
v “LOAD ” on page 132
Table data and index data that exist prior to the start of a load operation are visible
to queries while the load operation is in progress. Consider the following example:
1. Create a table with one integer column:
create table ED (ed int)
2. Load three rows:
load from File1 of del insert into ED
...
Number of rows read = 3
Number of rows skipped = 0
Number of rows loaded = 3
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 3
3. Query the table:
select * from ED
ED
-----------
1
2
3
3 record(s) selected.
4. Perform a load operation with the ALLOW READ ACCESS option specified
and load two more rows of data:
load from File2 of del insert into ED allow read access
5. At the same time, on another connection query the table while the load
operation is in progress:
select * from ED
ED
-----------
1
2
3
3 record(s) selected.
6. Wait for the load operation to finish and then query the table:
select * from ED
ED
-----------
1
2
3
4
5
5 record(s) selected.
Read access is provided throughout the load operation except for two instances at
the beginning and end of the operation.
Firstly, the load operation acquires a special Z-lock for a short duration of time
near the end of its setup phase. If an application holds an incompatible lock on the
table prior to the load operation requesting this special Z-lock, then the load
operation waits a finite amount of time for this incompatible lock to be released
before timing out and failing. The amount of time is determined by the
LOCKTIMEOUT database configuration parameter. If the LOCK WITH FORCE
option is specified then the load operation forces other applications off to avoid
timing out. The load operation acquires the special Z-lock, commits the phase,
releases the lock and then continues onto the load phase. Any application that
requests a lock on the table for reading after the start of the load operation in
ALLOW READ ACCESS mode is granted the lock, and it does not conflict with
this special Z-lock. New applications attempting to read existing data from the
target table are able to do so.
Secondly, before data is committed at the end of the load operation, the utility
acquires an exclusive lock (Z-lock) on the table. The load utility waits until all
applications that hold locks on the table release them. This can cause a delay
before the data is committed. The LOCK WITH FORCE option is used to force off
conflicting applications, and allow the load operation to proceed without having to
wait. Usually, a load operation in ALLOW READ ACCESS mode acquires an
exclusive lock for a short amount of time; however, if the USE <tablespaceName>
option is specified, the exclusive lock lasts for the entire period of the index copy
phase.
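For example, a sketch of a read-access load that forces off conflicting applications
rather than waiting (the file and table names are illustrative):
   db2 load from file2.del of del insert into ED allow read access lock with force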
Notes:
1. If a load operation is aborted, it remains at the same access level that was
specified when the load operation was issued. So, if a load operation in
ALLOW NO ACCESS mode aborts, the table data is inaccessible until a load
terminate or a load restart is issued. If a load operation in ALLOW READ
ACCESS mode aborts, the pre-loaded table data is still accessible for read
access.
2. If the ALLOW READ ACCESS option was specified for an aborted load
operation, it can also be specified for the load restart or load terminate
operation. However, if the aborted load operation specified the ALLOW NO
ACCESS option, the ALLOW READ ACCESS option cannot be specified for the
load restart or load terminate operation.
Generally, if table data is taken offline, read access is not available during a load
operation until the table data is back online.
Related concepts:
v “Building indexes” on page 115
v “Checking for integrity violations following a load operation” on page 121
v “Table locking, table states and table space states” on page 203
Building indexes
Indexes are built during the build phase of a load operation. There are four
indexing modes that can be specified in the LOAD command (a usage sketch
follows this list):
1. REBUILD. All indexes are rebuilt.
2. INCREMENTAL. Indexes are extended with new data.
3. AUTOSELECT. The load utility automatically decides between REBUILD or
INCREMENTAL mode. This is the default.
Note: You might decide to explicitly choose an indexing mode because the
behaviors of the REBUILD and INCREMENTAL modes are quite
different.
4. DEFERRED. The load utility does not attempt index creation if this mode is
specified. Indexes are marked as needing a refresh, and a rebuild might be
forced the first time they are accessed. This option is not compatible with the
ALLOW READ ACCESS option because it does not maintain the indexes and
index scanners require a valid index.
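A sketch of specifying an indexing mode explicitly (the file and table names are
illustrative):
   db2 load from staff.del of del insert into userid.staff indexing mode incremental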
Load operations that specify the ALLOW READ ACCESS option require special
consideration in terms of space usage and logging depending on the type of
indexing mode chosen. When the ALLOW READ ACCESS option is specified, the
load utility keeps indexes available for queries even while they are being rebuilt.
When a load operation in ALLOW READ ACCESS mode specifies the INDEXING
MODE INCREMENTAL option, the load utility writes some log records that
protect the integrity of the index tree. The number of log records written is a
fraction of the number of inserted keys and is a number considerably less than
would be needed by a similar SQL insert operation. A load operation in ALLOW
NO ACCESS mode with the INDEXING MODE INCREMENTAL option specified
writes only a small log record beyond the normal space allocation logs.
When a load operation in ALLOW READ ACCESS mode specifies the INDEXING
MODE REBUILD option, new indexes are built as a shadow either in the same table
space as the original index or in a system temporary table space. The original
indexes remain intact and are available during the load operation; they are
replaced only when the load operation completes.
By default, the shadow index is built in the same table space as the original index.
Since both the original index and the new index are maintained simultaneously,
there must be sufficient table space to hold both indexes at the same time. If the
load operation is aborted, the extra space used to build the new index is released.
If the load operation commits, the space used for the original index is released and
the new index becomes the current index. When the new indexes are built in the
same table space as the original indexes, replacing the original indexes takes place
almost instantaneously.
If the indexes are built in a DMS table space, you cannot see the new shadow index.
If the indexes are built within an SMS table space, you can see index files in the
table space directory with the .IN1 suffix and the .INX suffix. These suffixes do not
indicate which is the original index and which is the shadow index.
The new index can be built in a system temporary table space to avoid running
out of space in the original table space. The USE <tablespaceName> option allows
the indexes to be rebuilt in a system temporary table space when using INDEXING
MODE REBUILD and ALLOW READ ACCESS options. The system temporary
table space can be an SMS or a DMS table space, but the page size of the system
temporary table space must match the page size of the original index table space.
A load restart operation can use an alternate table space for building an index even
if the original load operation did not use an alternate table space. A load restart
operation cannot be issued in ALLOW READ ACCESS mode if the original load
operation was not issued in ALLOW READ ACCESS mode. Load terminate
operations do not rebuild indexes, so the USE <tablespaceName> option is
ignored.
During the build phase of the load operation, the indexes are built in the system
temporary table space. Then, during the index copy phase, the index is copied
from the system temporary table space to the original index table space. To make
sure that there is sufficient space in the original index table space for the new
index, space is allocated in the original table space during the build phase. So, if
the load operation runs out of index space, it will do so during the build phase. If
this happens, the original index is not lost.
The index copy phase occurs after the build and delete phases. Before the index
copy phase begins, the table is locked exclusively. That is, it is unavailable for read
access throughout the index copy phase. Since the index copy phase is a physical
copy, the table might be unavailable for a significant amount of time.
Related concepts:
v “Load overview” on page 102
v “Read access load operations” on page 113
The load utility does not perform any extra validation of user-supplied identity
values beyond what is normally done for values of the identity column’s data type
(that is, SMALLINT, INT, BIGINT, or DECIMAL). Duplicate values are not
reported.
Three (mutually exclusive) file type modifiers are supported by the load utility to
simplify its use with tables that contain an identity column:
v The identitymissing modifier makes loading a table with an identity column
more convenient if the input data file does not contain any values (not even
NULLS) for the identity column. For example, consider a table defined with the
following SQL statement:
Note: When using this modifier, it is possible to violate the uniqueness property
of GENERATED ALWAYS columns.
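A minimal sketch of the identitymissing modifier in use (the file and table names
are illustrative):
   db2 load from load.del of del modified by identitymissing replace into table1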
Related concepts:
v “Identity columns” in Administration Guide: Planning
If a table has been placed in set integrity pending state because a Version 7 or
earlier client was used to load data into a table with generated columns, the
following statement will take the table out of set integrity pending state and force
the generation of values:
SET INTEGRITY FOR tablename IMMEDIATE CHECKED FORCE GENERATED;
If no generated column-related file type modifiers are used, the load utility works
according to the following rules:
v Values are created for generated columns when the corresponding row of the
data file is missing a value for the column or a NULL value is supplied. If a
non-NULL value is supplied for a generated column, the row is rejected
(SQL3550W).
v If a NULL value is created for a generated column that is not nullable, the entire
row of data is rejected (SQL0407N). This could occur if, for example, a
non-nullable generated column is defined as the sum of two table columns that
include NULL values in the data file.
Three (mutually exclusive) file type modifiers are supported by the load utility to
simplify its use with tables that contain generated columns:
v The generatedmissing modifier makes loading a table with generated columns
more convenient if the input data file does not contain any values (not even
NULLS) for all generated columns present in the table. For example, consider a
table defined with the following SQL statement:
CREATE TABLE table1 (c1 INT,
c2 INT,
g1 INT GENERATED ALWAYS AS (c1 + c2),
g2 INT GENERATED ALWAYS AS (2 * c1),
c3 CHAR(1))
If you want to load TABLE1 with data from a file (load.del) that has been
exported from a table that does not have any generated columns, see the
following example:
1, 5, J
2, 6, K
3, 7, I
One way to load this file would be to explicitly list the columns to be loaded
through the LOAD command as follows:
DB2 LOAD FROM load.del of del REPLACE INTO table1 (c1, c2, c3)
For a table with many columns, however, this syntax might be cumbersome and
prone to error. An alternate method of loading the file is to use the
generatedmissing file type modifier as follows:
DB2 LOAD FROM load.del of del MODIFIED BY generatedmissing
REPLACE INTO table1
v The generatedignore modifier is in some ways the opposite of the
generatedmissing modifier: it indicates to the load utility that even though the
input data file contains data for all generated columns present in the target table,
the data should be ignored, and the computed values should be loaded into
each generated column. For example, if you want to load TABLE1, as defined
above, from a data file (load.del) containing the following data:
1, 5, 10, 15, J
2, 6, 11, 16, K
3, 7, 12, 17, I
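The command for this case is not shown above; paralleling the generatedmissing
example, a sketch might be:
   DB2 LOAD FROM load.del of del MODIFIED BY generatedignore
      REPLACE INTO table1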
To take the table out of set integrity pending state and force verification of the
user-supplied values, issue the following command:
SET INTEGRITY FOR table-name IMMEDIATE CHECKED.
For generated columns that are part of the distribution key, the data for the
dependent columns must appear within the first 32KB of data for each row being
loaded.
For example, consider a table created with the following SQL statement:
CREATE TABLE table1 (c1 INT, c2 INT, g1 INT GENERATED ALWAYS AS (c1 + c2))
DISTRIBUTE BY hash (g1)
In order to successfully load data into this table, all of the data for columns c1 and
c2 must be located within the first 32KB of each row being loaded. Any row that
does not satisfy this restriction is rejected.
Note: There is one case where load does NOT support generating column values:
that is when one of the generated column expressions contains a
user-defined function that is FENCED. If you attempt to load into such a
table, the load utility will fail. However, you can provide your own values
for these types of generated columns by using the generatedoverride file
type modifier of the load utility.
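A sketch of the generatedoverride modifier in use (the file and table names are
illustrative):
   db2 load from load.del of del modified by generatedoverride replace into table1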
Related concepts:
v “Generated Columns” in Developing SQL and External Routines
If the loaded table has descendent tables, the SET INTEGRITY PENDING
CASCADE parameter can be specified to indicate whether or not the set integrity
pending state of the loaded table should be immediately cascaded to the
descendent tables.
If the loaded table has constraints as well as descendent foreign key tables,
dependent materialized query tables and dependent staging tables, and if all of the
tables are in normal state prior to the load operation, the following will result
based on the load parameters specified:
INSERT, ALLOW READ ACCESS, and SET INTEGRITY PENDING CASCADE
IMMEDIATE
The loaded table, its dependent materialized query tables and dependent
staging tables are placed in set integrity pending state with read access.
INSERT, ALLOW READ ACCESS, and SET INTEGRITY PENDING CASCADE
DEFERRED
Only the loaded table is placed in set integrity pending with read access.
Descendent foreign key tables, descendent materialized query tables and
descendent staging tables remain in their original states.
INSERT, ALLOW NO ACCESS, and SET INTEGRITY PENDING CASCADE
IMMEDIATE
The loaded table, its dependent materialized query tables and dependent
staging tables are placed in set integrity pending state with no access.
INSERT or REPLACE, ALLOW NO ACCESS, and SET INTEGRITY PENDING
CASCADE DEFERRED
Only the loaded table is placed in set integrity pending state with no
access. Descendent foreign key tables, descendent immediate materialized
query tables and descendent immediate staging tables remain in their
original states.
REPLACE, ALLOW NO ACCESS, and SET INTEGRITY PENDING CASCADE
IMMEDIATE
The table and all its descendent foreign key tables, descendent immediate
materialized query tables, and descendent immediate staging tables are
placed in set integrity pending state with no access.
To remove the set integrity pending state, use the SET INTEGRITY statement. The
SET INTEGRITY statement checks a table for constraints violations, and takes the
table out of set integrity pending state. If all the load operations are performed in
INSERT mode, the SET INTEGRITY statement can be used to incrementally process
the constraints (that is, it checks only the appended portion of the table for
constraints violations). For example:
db2 load from infile1.ixf of ixf insert into table1
db2 set integrity for table1 immediate checked
You can override the No Data Movement state by specifying the FULL ACCESS
option when you issue the SET INTEGRITY statement. The table is fully accessible,
however a full re-computation of the dependent materialized query tables takes
place in subsequent REFRESH TABLE statements and the dependent staging tables
are forced into an incomplete state.
If the ALLOW READ ACCESS option is specified for a load operation, the table
remains in read access state until the SET INTEGRITY statement is used to check
for constraints violations. Applications can query the table for data that existed
prior to the load operation once it has been committed, but will not be able to
view the newly loaded data until the SET INTEGRITY statement is issued.
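A sketch of this sequence (the file and table names are illustrative):
   db2 load from newdata.del of del insert into table1 allow read access
   db2 set integrity for table1 immediate checked
Once the SET INTEGRITY statement completes, the newly loaded data becomes
visible to queries.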
Several load operations can take place on a table before checking for constraints
violations. If all of the load operations are completed in ALLOW READ ACCESS
mode, only the data that existed in the table prior to the first load operation is
available for queries.
The SET INTEGRITY statement does not activate any DELETE triggers as a result
of deleting rows that violate constraints, but once the table is removed from set
integrity pending state, triggers are active. Thus, if you correct data and insert
rows from the exception table into the loaded table, any INSERT triggers defined
on the table are activated. The implications of this should be considered. One
option is to drop the INSERT trigger, insert rows from the exception table, and
then recreate the INSERT trigger.
Related concepts:
v “Load exception table” on page 200
v “Pending states after a load operation” on page 206
v “Read access load operations” on page 113
Related reference:
v “SET INTEGRITY statement” in SQL Reference, Volume 2
v “Exception tables” in SQL Reference, Volume 1
If the materialized query table has one or more W values in the CONST_CHECKED
column of the SYSCAT.TABLES catalog, and if the NOT INCREMENTAL option is
not specified in the SET INTEGRITY statement, the table will be incrementally
refreshed and the CONST_CHECKED column of SYSCAT.TABLES will be marked
U to indicate that not all data has been verified by the system.
The following example illustrates a load insert operation into the underlying table
UT1 of the materialized query table AST1. UT1 will be checked for data integrity
and will be placed in no data movement mode. UT1 will be put back into full
access state once the incremental refresh of AST1 is complete. In this scenario, both
the integrity checking for UT1 and the refreshing of AST1 will be processed
incrementally.
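The commands for this example are not shown above; a representative sequence
(the input file name is illustrative) might be:
   db2 load from newdata.del of del insert into UT1
   db2 set integrity for UT1 immediate checked
   db2 refresh table AST1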
Related concepts:
v “Checking for integrity violations following a load operation” on page 121
The following example illustrates a load insert operation into the underlying table
UT1 of staging table G1 and its dependent deferred materialized query table AST1.
In this scenario, both the integrity checking for UT1 and the refreshing of AST1
will be processed incrementally:
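The commands for this example are not shown above either; a representative
sequence (the input file name is illustrative) might be:
   db2 load from newdata.del of del insert into UT1
   db2 set integrity for UT1, G1 immediate checked
   db2 refresh table AST1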
Related concepts:
v “Checking for integrity violations following a load operation” on page 121
When using the LOAD command with MDC, violations of unique constraints will
be handled as follows:
v If the table included a unique key prior to the load operation and duplicate
records are loaded into the table, the original record will remain and the new
records will be deleted during the delete phase.
v If the table did not include a unique key prior to the load operation and both a
unique key and duplicate records are loaded into the table, only one of the
records with the unique key will be loaded and the others will be deleted during
the delete phase.
Note: There is no explicit technique for determining which record will be loaded
and which will be deleted.
Performance Considerations
To improve the performance of the load utility when loading MDC tables, the
UTIL_HEAP_SZ database configuration parameter value should be increased. The
mdc-load algorithm will perform significantly better when more memory is
available to the utility. This will reduce disk I/O during the clustering of data that
is performed during the load phase. When the DATA BUFFER option of the LOAD
command is specified, its value should also be increased. If the LOAD command is
being used to load several MDC tables concurrently, the UTIL_HEAP_SZ
configuration parameter should be increased accordingly.
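For example, a sketch of increasing the utility heap before loading (the database
name and the value are illustrative):
   db2 update db cfg for sample using util_heap_sz 100000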
MDC load operations will always have a build phase since all MDC tables have
block indexes.
During the load phase, extra logging for the maintenance of the block map will be
performed. There are approximately two extra log records per extent allocated. To
ensure good performance, the LOGBUFSZ database configuration parameter
should be set to a value that takes this into account.
A system temporary table with an index is used to load data into MDC tables. The
size of the table is proportional to the number of distinct cells loaded. The size of
each row in the table is proportional to the size of the MDC dimension key. To
minimize disk I/O caused by the manipulation of this table during a load
operation, ensure that the buffer pool for the temporary table space is large
enough.
Related concepts:
v “Optimizing load performance” on page 207
v “Multidimensional clustering tables” in Administration Guide: Planning
The load utility inserts data records into the correct data partition. There is no
requirement to use an external utility, such as a splitter, to partition the input data
before loading.
The load utility does not access any detached or attached data partitions. Data is
inserted into visible data partitions only. Visible data partitions are neither attached
nor detached. In addition, a load replace operation does not truncate detached or
attached data partitions. Since the load utility acquires locks on the catalog system
tables, the load utility waits for any uncommitted ALTER TABLE transactions.
Such transactions acquire an exclusive lock on the relevant rows in the catalog
tables, and the exclusive lock must terminate before the load operation can
proceed. This means that there can be no uncommitted ALTER TABLE ... ATTACH,
DETACH, or ADD PARTITION transactions while a load operation is running. Any
input source records destined for an attached or detached data partition are
rejected, and can be retrieved from the exception table if one is specified. An
informational message is written to the message file to indicate that some of the
target table data partitions were in an attached or detached state. Locks on the relevant
catalog table rows corresponding to the target table prevent users from changing
the partitioning of the target table by issuing any ALTER TABLE ...ATTACH,
DETACH, or ADD PARTITION operations while the load utility is running.
When the load utility encounters a record that does not belong to any of the visible
data partitions, the record is rejected and the load utility continues processing. The
number of records rejected because of the range constraint violation is not
explicitly displayed, but is included in the overall number of rejected records.
Rejecting a record because of the range violation does not increase the number of
row warnings. A single message (SQL0327N) is written to the load utility message
file indicating that range violations are found, but no per-record messages are
logged. The load utility offers an option to insert into the exception table otherwise
valid rows that were rejected because of a range constraint violation. In
addition to all columns of the target table, the exception table includes columns
describing the type of violation that had occurred for a particular row. Rows
containing invalid data, including data that cannot be partitioned, are written to
the dump file. Because exception table inserts are expensive, you can control which
constraint violations are inserted into the exception table. If you do not specify the
exception table, or opt not to have range violating rows inserted into the exception
table, information about rows that violate the range constraint is lost.
History file
If the target table is partitioned, the corresponding history file entry does not
include a list of the table spaces spanned by the target table. A different operation
granularity identifier (’R’ instead of ’T’) indicates that a load operation ran against
a partitioned table.
Generated columns
Data availability
The current online load algorithm extends to partitioned tables. An online load
(ALLOW READ ACCESS) specified on the LOAD command allows concurrent
readers to access the whole table, including both loading and non-loading data
partitions.
After a successful load, visible data partitions might change to either or both SET
INTEGRITY PENDING or READ ONLY state, under certain conditions. Data
partitions might be placed in these states if there are constraints on the table which
the load operation cannot maintain. Such constraints might include check
constraints and detached materialized query tables. A failed load operation leaves
all visible data partitions in the LOAD PENDING state.
Error isolation
Error isolation at the data partition level is not supported. Isolating the errors
means continuing a load on data partitions that did not run into an error and
stopping on data partitions that did run into an error. Errors can be isolated
between different database partitions, but the load utility cannot commit
transactions on a subset of visible data partitions and rollback the remaining
visible data partitions.
Other considerations
v Incremental indexing is not supported if any of the indexes are marked invalid.
An index is considered invalid if it requires a rebuild or if detached dependents
require validation with the SET INTEGRITY statement.
v Loading into tables partitioned using any combination of partitioned by range,
distributed by hash or organized by dimension algorithms is also supported.
Related concepts:
v “Load in a partitioned database environment - overview” on page 217
v “Partitioned tables” in Administration Guide: Planning
v “Data partitions” in Administration Guide: Planning
v “Load overview” on page 102
Related tasks:
v “Loading data into a table using the Load wizard” in Administration Guide:
Implementation
v “Loading data” on page 110
Related reference:
v “LOAD ” on page 132
v “db2Load - Load data into a table” on page 161
v “Load - CLP examples” on page 212
v “Restrictions on native XML data store” in XML Guide
If a failure occurs while loading data, you can restart the load operation from the
last consistency point (using the RESTART option), or reload the entire table (using
the REPLACE option). Specify the same parameters as in the previous invocation,
so that the utility can find the necessary temporary files. Because the SAVECOUNT
parameter is not supported for multi-dimensional clustering (MDC) tables, a load
restart will only take place at the beginning of the load, build, or delete phase.
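For example, a sketch that restarts the earlier REPLACE example with the same
parameters:
   db2 load from stafftab.ixf of ixf messages staff.msgs
      tempfiles path /u/myuser restart into staff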
Note: A load operation that specified the ALLOW READ ACCESS option can be
restarted using either the ALLOW READ ACCESS option or the ALLOW
NO ACCESS option. Conversely, a load operation that specified the ALLOW
NO ACCESS option cannot be restarted using the ALLOW READ ACCESS
option.
If the original load operation was aborted in the index copy phase, a restart
operation in the ALLOW READ ACCESS mode is not permitted because the index
might be corrupted.
If a load operation in ALLOW READ ACCESS mode was aborted in the load
phase, it will restart in the load phase. If it was aborted in any phase other than
the load phase, it will restart in the build phase. If the original load operation was
in ALLOW NO ACCESS mode, a restart operation might occur in the delete phase
if the original load operation reached that point and the index is valid. If the index
is marked invalid, the load utility will restart the load operation from the build
phase.
Note: All load restart operations will choose the REBUILD indexing mode even if
the INDEXING MODE INCREMENTAL option is specified.
Issuing a LOAD TERMINATE command will generally cause the aborted load
operation to be rolled back with minimal delay. However, when issuing a LOAD
TERMINATE command for a load operation where ALLOW READ ACCESS and
INDEXING MODE INCREMENTAL are specified, there might be a delay while the
load utility scans the indexes and corrects any inconsistencies. The length of this
delay will depend on the size of the indexes and will occur whether or not the
ALLOW READ ACCESS option is specified for the load terminate operation. The
delay will not occur if the original load operation failed prior to the build phase.
Note: The delay resulting from corrections to inconsistencies in the index will be
considerably less than the delay caused by marking the indexes as invalid
and rebuilding them.
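For example, a sketch of terminating the earlier example, reusing the same
parameters:
   db2 load from stafftab.ixf of ixf messages staff.msgs
      tempfiles path /u/myuser terminate into staff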
A load restart operation cannot be undertaken on a table that is in the not load
restartable table state. The table can be placed in the not load restartable table state
during a rollforward operation. This can occur if you roll forward to a point in
time that is prior to the end of a load operation, or if you roll forward through an
aborted load operation but do not roll forward to the end of the load terminate or
load restart operation.
Related concepts:
v “Restarting or terminating a load operation in a partitioned database
environment” on page 227
v “Table locking, table states and table space states” on page 203
v “Load dump file” on page 201
v “Load exception table” on page 200
v “Load overview” on page 102
If the location file does not exist, or no matching entry is found in the file, the
information from the log record is used.
The information in the file might be overwritten before rollforward recovery takes
place.
Notes:
1. In a multi-partition database, the DB2LOADREC registry variable must be set
for all the database partition servers using the db2set command (a sketch of
the command follows these notes).
2. In a multi-partition database, the load copy file must exist at each database
partition server, and the file name (including the path) must be the same.
3. If an entry in the file identified by the DB2LOADREC registry variable is not
valid, the old load copy location file is used to provide information to replace
the invalid entry.
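For example, a sketch of setting the registry variable (the path is illustrative):
   db2set DB2LOADREC=/home/db2inst1/loadcopy.loc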
The following information is provided in the location file. The first five parameters
must have valid values, and are used to identify the load copy. The entire structure
is repeated for each load copy recorded. For example:
TIMestamp     19950725182542   * Time stamp generated at load time
DBPartition   0                * DB Partition number (OPTIONAL)
SCHema        PAYROLL          * Schema of table loaded
TABlename     EMPLOYEES        * Table name
DATabasename  DBT              * Database name
DB2instance   toronto          * DB2INSTANCE
BUFfernumber  NULL             * Number of buffers to be used for recovery
SESsionnumber NULL             * Number of sessions to be used for recovery
TYPeofmedia   L                * Type of media - L for local device
                                                  A for TSM
                                                  O for other vendors
LOCationnumber 3               * Number of locations
ENTry /u/toronto/dbt.payroll.employes.001
ENT   /u/toronto/dbt.payroll.employes.002
ENT   /dev/rmt0
TIM   19950725192054
DBP   18
SCH   PAYROLL
TAB   DEPT
DAT   DBT
DB2   toronto
BUF   NULL
SES   NULL
TYP   A
TIM   19940325192054
SCH   PAYROLL
TAB   DEPT
DAT   DBT
DB2   toronto
BUF   NULL
SES   NULL
TYP   O
SHRlib /@sys/lib/backup_vendor.a
Notes:
1. The first three characters in each keyword are significant. All keywords are
required in the specified order. Blank lines are not accepted.
2. The time stamp is in the form yyyymmddhhmmss.
3. All fields are mandatory, except for BUF and SES (which can be NULL), and
DBP (which can be missing from the list). If SES is NULL, the value specified
by the numloadrecses configuration parameter is used. If BUF is NULL, the
default value is SES+2.
4. If even one of the entries in the location file is invalid, the previous load copy
location file is used to provide those values.
5. The media type can be local device (L for tape, disk or diskettes), TSM (A), or
other vendor (O). If the type is L, the number of locations, followed by the
location entries, is required. If the type is A, no further input is required. If the
type is O, the shared library name is required.
6. The SHRlib parameter points to a library that has a function to store the load
copy data.
7. If you invoke a load operation, specifying the COPY NO or the
NONRECOVERABLE option, and do not take a backup copy of the database or
affected table spaces after the operation completes, you cannot restore the
database or table spaces to a point in time that follows the load operation. That
is, you cannot use rollforward recovery to recreate the database or table spaces
to the state they were in following the load operation. You can only restore the
database or table spaces to a point in time that precedes the load operation.
If you want to use a particular load copy, you can use the recovery history file for
the database to determine the time stamp for that specific load operation. In a
multi-partition database, the recovery history file is local to each database partition.
Related reference:
v “Tivoli Storage Manager” in Data Recovery and High Availability Guide and
Reference
LOAD
Loads data into a DB2 table. Data residing on the server can be in the form of a
file, tape, or named pipe. If the COMPRESS attribute for the table is set to YES, the
data loaded will be subject to compression on every data and database partition
for which a dictionary already exists in the table.
Restrictions:
The load utility does not support loading data at the hierarchy level. The load
utility is not compatible with range-clustered tables.
Scope:
Authorization:
Since all load processes (and all DB2 server processes, in general) are owned by the
instance owner, and all of these processes use the identification of the instance
owner to access needed files, the instance owner must have read access to input
data files. These input data files must be readable by the instance owner, regardless
of who invokes the command.
Required connection:
Command syntax:
[The LOAD command syntax diagram cannot be reproduced in this text. The
clauses recoverable from it include: METHOD N (column-name) / P (column-position),
SAVECOUNT n, ROWCOUNT n, FOR EXCEPTION table-name, NORANGEEXC,
NOUNIQUEEXC, STATISTICS USE PROFILE / STATISTICS NO, DATA BUFFER
buffer-size, SORT BUFFER buffer-size, COPY NO / COPY YES (USE TSM, OPEN
num-sess SESSIONS, TO device/directory, LOAD lib-name), NONRECOVERABLE,
CPU_PARALLELISM n, DISK_PARALLELISM n, FETCH_PARALLELISM YES/NO,
INDEXING MODE AUTOSELECT / REBUILD / INCREMENTAL / DEFERRED,
ALLOW NO ACCESS / ALLOW READ ACCESS (USE tablespace-name), SET
INTEGRITY PENDING CASCADE IMMEDIATE / DEFERRED, LOCK WITH FORCE,
SOURCEUSEREXIT executable (REDIRECT INPUT FROM BUFFER input-buffer /
FILE input-file, OUTPUT TO FILE output-file, PARALLELIZE), and PARTITIONED DB
CONFIG partitioned-db-option. These clauses are described under Command
parameters. See the Command Reference for the complete syntax diagram.]
Notes:
1 These keywords can appear in any order.
2 Each of these keywords can only appear once.
Command parameters:
FROM filename/pipename/device/cursorname
Notes:
1. If data is exported into a file using the EXPORT command using the
ADMIN_CMD procedure, the data file is owned by the fenced user ID.
This file is not usually accessible by the instance owner. To run the
LOAD from CLP or the ADMIN_CMD procedure, the data file must be
accessible by the instance owner ID, so read access to the data file must
be granted to the instance owner.
2. Loading data from multiple IXF files is supported if the files are
physically separate, but logically one file. It is not supported if the files
are both logically and physically separate. (Multiple physical files
would be considered logically one if they were all created with one
invocation of the EXPORT command.)
OF filetype
Specifies the format of the data:
v ASC (non-delimited ASCII format)
v DEL (delimited ASCII format)
v IXF (integrated exchange format, PC version), exported from the same or
from another DB2 table
v CURSOR (a cursor declared against a SELECT or VALUES statement).
LOBS FROM lob-path
The path to the data files containing LOB values to be loaded. The path
must end with a slash (/). The names of the LOB data files are stored in
the main data file (ASC, DEL, or IXF), in the column that will be loaded
into the LOB column. The maximum number of paths that can be specified
is 999. This will implicitly activate the LOBSINFILE behavior.
This option is ignored when specified in conjunction with the CURSOR
filetype.
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the load
utility.
METHOD
L Specifies the start and end column numbers from which to load
data. A column number is a byte offset from the beginning of a
row of data. It is numbered starting from 1. This method can only
be used with ASC files, and is the only valid method for that file
type.
NULL INDICATORS null-indicator-list
This option can only be used when the METHOD L
parameter is specified (that is, the input file is an ASC file).
The null indicator list is a comma-separated list of positive
integers specifying the column number of each null
indicator field. The column number is the byte offset of the
null indicator field from the beginning of a row of data.
There must be one entry in the null indicator list for each
data field defined in the METHOD L parameter. A column
number of zero indicates that the corresponding data field
always contains data.
A value of Y in the NULL indicator column specifies that
the column data is NULL. Any character other than Y in
the NULL indicator column specifies that the column data
is not NULL, and that column data specified by the
METHOD L option will be loaded.
The NULL indicator character can be changed using the
MODIFIED BY option.
N Specifies the names of the columns in the data file to be loaded.
The case of these column names must match the case of the
corresponding names in the system catalogs. Each table column
that is not nullable should have a corresponding entry in the
METHOD N list. For example, given data fields F1, F2, F3, F4, F5,
and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT
NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid
request, while method N (F2, F1) is not valid. This method can
only be used with file types IXF or CURSOR.
P Specifies the field numbers (numbered from 1) of the input data
fields to be loaded. Each table column that is not nullable should
have a corresponding entry in the METHOD P list. For example,
given data fields F1, F2, F3, F4, F5, and F6, and table columns C1
INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method
P (2, 1, 4, 3) is a valid request, while method P (2, 1) is not
valid. This method can only be used with file types IXF, DEL, or
CURSOR, and is the only valid method for the DEL file type.
SAVECOUNT n
Specifies that the load utility is to establish consistency points after every n
rows. This value is converted to a page count, and rounded up to intervals
of the extent size. Since a message is issued at each consistency point, this
option should be selected if the load operation will be monitored using
LOAD QUERY. If the value of n is not sufficiently high, the
synchronization of activities performed at each consistency point will
impact performance.
The default value is zero, meaning that no consistency points will be
established, unless necessary.
This option is ignored when specified in conjunction with the CURSOR
filetype.
ROWCOUNT n
Specifies the number n of physical records in the file to be loaded. Allows
a user to load only the first n rows in a file.
WARNINGCOUNT n
Stops the load operation after n warnings. Set this parameter if no
warnings are expected, but verification that the correct file and table are
being used is desired. If the load file or the target table is specified
incorrectly, the load utility will generate a warning for each row that it
attempts to load, which will cause the load to fail. If n is zero, or this
option is not specified, the load operation will continue regardless of the
number of warnings issued. If the load operation is stopped because the
threshold of warnings was encountered, another load operation can be
started in RESTART mode. The load operation will automatically continue
from the last consistency point. Alternatively, another load operation can
be initiated in REPLACE mode, starting at the beginning of the input file.
TEMPFILES PATH temp-pathname
Specifies the name of the path to be used when creating temporary files
during a load operation, and should be fully qualified according to the
server database partition.
Temporary files take up file system space. Sometimes, this space
requirement is quite substantial. Following is an estimate of how much file
system space should be allocated for all temporary files:
v 136 bytes for each message that the load utility generates
v 15KB overhead if the data file contains long field data or LOBs. This
quantity can grow significantly if the INSERT option is specified, and
there is a large amount of long field or LOB data already in the table.
INSERT
One of four modes under which the load utility can execute. Adds the
loaded data to the table without changing the existing table data.
REPLACE
One of four modes under which the load utility can execute. Deletes all
existing data from the table, and inserts the loaded data. The table
definition and index definitions are not changed. If this option is used
when moving data between hierarchies, only the data for an entire
hierarchy, not individual subtables, can be replaced.
RESTART
One of four modes under which the load utility can execute. Restarts a
previously interrupted load operation. The load operation will
automatically continue from the last consistency point in the load, build, or
delete phase.
TERMINATE
One of four modes under which the load utility can execute. Terminates a
previously interrupted load operation, and rolls back the operation to the
point in time at which it started, even if consistency points were passed.
The states of any table spaces involved in the operation return to normal,
and all table objects are made consistent (index objects might be marked as
invalid, in which case index rebuild will automatically take place at next
access). If the load operation being terminated is a load REPLACE, the
table will be truncated to an empty table after the load TERMINATE
operation. If the load operation being terminated is a load INSERT, the
table will retain all of its original records after the load TERMINATE
operation.
The load terminate option will not remove a backup pending state from
table spaces.
INTO table-name
Specifies the database table into which the data is to be loaded. This table
cannot be a system table or a declared temporary table. An alias, or the
fully qualified or unqualified table name can be specified. A qualified table
name is in the form schema.tablename. If an unqualified table name is
specified, the table will be qualified with the CURRENT SCHEMA.
insert-column
Specifies the table column into which the data is to be inserted.
The load utility cannot parse columns whose names contain one or more
spaces. For example, an insert-column list that includes a column named
Int 4 will fail because of the space in that name. The solution is to enclose
such column names with double quotation marks, as in the sketch below:
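A sketch of an insert-column list with a quoted column name (the column names
are illustrative):
   db2 load from delfile1 of del
      insert into table1 ("BLOB1", "S2", "I3", "Int 4", "I5")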
FOR EXCEPTION table-name
Specifies the exception table into which rows in error will be copied. Any
row that is in violation of a unique index or a primary key index is copied.
If an unqualified table name is specified, the table will be qualified with
the CURRENT SCHEMA.
Information that is written to the exception table is not written to the
dump file. In a partitioned database environment, an exception table must
be defined for those database partitions on which the loading table is
defined. The dump file, on the other hand, contains rows that cannot be
loaded because they are invalid or have syntax errors.
NORANGEEXC
Indicates that if a row is rejected because of a range violation it will not be
inserted into the exception table.
NOUNIQUEEXC
Indicates that if a row is rejected because it violates a unique constraint it
will not be inserted into the exception table.
STATISTICS USE PROFILE
Instructs load to collect statistics during the load according to the profile
defined for this table. This profile must be created before load is executed.
The profile is created by the RUNSTATS command. If the profile does not
exist and load is instructed to collect statistics according to the profile, a
warning is returned and no statistics are collected.
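For example, a sketch of registering a statistics profile and then loading with it
(the table and file names are illustrative):
   db2 runstats on table userid.staff with distribution and indexes all set profile only
   db2 load from staff.del of del replace into userid.staff statistics use profile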
STATISTICS NO
Specifies that no statistics are to be collected, and that the statistics in the
catalogs are not to be altered. This is the default.
COPY NO
Specifies that the table space in which the table resides will be placed in
backup pending state if forward recovery is enabled (that is, logretain or
userexit is on). The COPY NO option will also put the table space state into
the Load in Progress table space state. This is a transient state that will
disappear when the load completes or aborts. The data in any table in the
table space cannot be updated or deleted until a table space backup or a
full database backup is made. However, it is possible to access the data in
any table by using the SELECT statement.
LOAD with COPY NO on a recoverable database leaves the table spaces in
a backup pending state. For example, performing a LOAD with COPY NO
and INDEXING MODE DEFERRED will leave indexes needing a refresh.
Certain queries on the table might require an index scan and will not
succeed until the indexes are refreshed. The index cannot be refreshed if it
resides in a table space which is in the backup pending state. In that case,
access to the table will not be allowed until a backup is taken. Index
refresh is done automatically by the database when the index is accessed
by a query.
COPY YES
Specifies that a copy of the loaded data will be saved. This option is
invalid if forward recovery is disabled (both logretain and userexit are off).
USE TSM
Specifies that the copy will be stored using Tivoli Storage Manager
(TSM).
OPEN num-sess SESSIONS
The number of I/O sessions to be used with TSM or the vendor
product. The default value is 1.
TO device/directory
Specifies the device or directory on which the copy image will be
created.
LOAD lib-name
The name of the shared library (DLL on Windows operating
systems) containing the vendor backup and restore I/O functions
to be used. It can contain the full path. If the full path is not given,
it will default to the path where the user exit programs reside.
NONRECOVERABLE
Specifies that the load transaction is to be marked as non-recoverable and
that it will not be possible to recover it by a subsequent roll forward
action. The roll forward utility will skip the transaction and will mark the
table into which data was being loaded as "invalid". The utility will also
ignore any subsequent transactions against that table. After the roll
forward operation is completed, such a table can only be dropped or
restored from a backup (full or table space) taken after a commit point
following the completion of the non-recoverable load operation.
With this option, table spaces are not put in backup pending state
following the load operation, and a copy of the loaded data does not have
to be made during the load operation.
WITHOUT PROMPTING
Specifies that the list of data files contains all the files that are to be
loaded, and that the devices or directories listed are sufficient for the entire
load operation. If a continuation input file is not found, or the copy targets
are filled before the load operation finishes, the load operation will fail,
and the table will remain in load pending state.
DATA BUFFER buffer-size
Specifies the number of 4KB pages (regardless of the degree of parallelism)
to use as buffered space for transferring data within the utility. If the value
specified is less than the algorithmic minimum, the minimum required
resource is used, and no warning is returned.
This memory is allocated directly from the utility heap, whose size can be
modified through the util_heap_sz database configuration parameter.
If a value is not specified, an intelligent default is calculated by the utility
at run time. The default is based on a percentage of the free space available
in the utility heap at the instantiation time of the loader, as well as some
characteristics of the table.
SORT BUFFER buffer-size
This option specifies a value that overrides the SORTHEAP database
configuration parameter during a load operation. It is relevant only when
loading tables with indexes and only when the INDEXING MODE
parameter is not specified as DEFERRED. The value that is specified
cannot exceed the value of SORTHEAP. This parameter is useful for
throttling the sort memory that is used when loading tables with many
indexes without changing the value of SORTHEAP, which would also
affect general query processing.
CPU_PARALLELISM n
Specifies the number of processes or threads that the load utility will
spawn for parsing, converting, and formatting records when building table
objects. This parameter is designed to exploit intra-partition parallelism. It
is particularly useful when loading presorted data, because record order in
the source data is preserved. If the value of this parameter is zero, or has
not been specified, the load utility uses an intelligent default value (usually
based on the number of CPUs available) at run time.
Notes:
1. If this parameter is used with tables containing either LOB or LONG
VARCHAR fields, its value becomes one, regardless of the number of
system CPUs or the value specified by the user.
USE tablespace-name
If the indexes are being rebuilt, a shadow copy of the index is built
in table space tablespace-name and copied over to the original table
space at the end of the load during an INDEX COPY PHASE. Only
system temporary table spaces can be used with this option. If not
specified then the shadow index will be created in the same table
space as the index object. If the shadow copy is created in the same
table space as the index object, the copy of the shadow index object
over the old index object is instantaneous. If the shadow copy is in
a different table space from the index object a physical copy is
performed. This could involve considerable I/O and time. The
copy happens while the table is offline at the end of a load during
the INDEX COPY PHASE.
Without this option the shadow index is built in the same table
space as the original. Since both the original index and shadow
index by default reside in the same table space simultaneously,
there might be insufficient space to hold both indexes within one
table space. Using this option ensures that you retain enough table
space for the indexes.
This option is ignored if the user does not specify INDEXING
MODE REBUILD or INDEXING MODE AUTOSELECT. This option
will also be ignored if INDEXING MODE AUTOSELECT is chosen
and load chooses to incrementally update the index.
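For example, a sketch of rebuilding indexes in a system temporary table space
(tempspace1 is an illustrative system temporary table space with a matching page
size):
   db2 load from staff.del of del replace into userid.staff
      indexing mode rebuild allow read access use tempspace1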
SET INTEGRITY PENDING CASCADE
If LOAD puts the table into Set Integrity Pending state, the SET
INTEGRITY PENDING CASCADE option allows the user to specify
whether or not Set Integrity Pending state of the loaded table is
immediately cascaded to all descendents (including descendent foreign key
tables, descendent immediate materialized query tables and descendent
immediate staging tables).
IMMEDIATE
Indicates that Set Integrity Pending state is immediately extended
to all descendent foreign key tables, descendent immediate
materialized query tables and descendent staging tables. For a
LOAD INSERT operation, Set Integrity Pending state is not
extended to descendent foreign key tables even if the IMMEDIATE
option is specified.
When the loaded table is later checked for constraint violations
(using the IMMEDIATE CHECKED option of the SET INTEGRITY
statement), descendent foreign key tables that were placed in Set
Integrity Pending Read Access state will be put into Set Integrity
Pending No Access state.
DEFERRED
Indicates that only the loaded table will be placed in the Set
Integrity Pending state. The states of the descendent foreign key
tables, descendent immediate materialized query tables and
descendent immediate staging tables will remain unchanged.
Descendent foreign key tables might later be implicitly placed in
Set Integrity Pending state when their parent tables are checked for
constraint violations (using the IMMEDIATE CHECKED option of
the SET INTEGRITY statement). Descendent immediate
materialized query tables and descendent immediate staging tables
FILE output-file
The STDOUT and STDERR file descriptors are
captured to the fully qualified server-side file
specified.
PARALLELIZE
Increases the throughput of data coming into the load utility by
invoking multiple user exit processes simultaneously. This option is
only applicable in multi-partition database environments and is
ignored in single-partition database environments.
Usage notes:
v Data is loaded in the sequence that appears in the input file. If a particular
sequence is desired, the data should be sorted before a load is attempted.
v The load utility builds indexes based on existing definitions. The exception
tables are used to handle duplicates on unique keys. The utility does not enforce
referential integrity, perform constraints checking, or update materialized query
tables that are dependent on the tables being loaded. Tables that include
referential or check constraints are placed in Set Integrity Pending state.
Summary tables that are defined with REFRESH IMMEDIATE, and that are
dependent on tables being loaded, are also placed in Set Integrity Pending state.
Issue the SET INTEGRITY statement to take the tables out of Set Integrity
Pending state. Load operations cannot be carried out on replicated materialized
query tables.
v If a clustering index exists on the table, the data should be sorted on the
clustering index prior to loading. Data does not need to be sorted prior to
loading into a multidimensional clustering (MDC) table, however.
v If you specify an exception table when loading into a protected table, any rows
that are protected by invalid security labels will be sent to that table. This might
allow users that have access to the exception table to access data that they
would not normally be authorized to access. For better security, be careful who
you grant exception table access to, delete each row as soon as it is repaired and
copied to the table being loaded, and drop the exception table as soon as you
are done with it.
v Security labels in their internal format might contain newline characters. If you
load the file using the DEL file format, those newline characters can be mistaken
for delimiters. If you have this problem, use the older default priority for
delimiters by specifying the delprioritychar file type modifier in the LOAD
command.
v When performing a load using the CURSOR filetype where the DATABASE keyword
was specified during the DECLARE CURSOR command, the user ID and
password used to authenticate against the database currently connected to (for
the load) will be used to authenticate against the source database (specified by
the DATABASE option of the DECLARE CURSOR command). If no user ID or
password was specified for the connection to the loading database, a user ID
and password for the source database must be specified during the DECLARE
CURSOR command.
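As an illustrative sketch (database, user, and table names are hypothetical), a load from
a cursor declared against another database might look like this, shown in CLP form:
   db2 declare mycurs cursor database srcdb user myuser using mypasswd
      for select c1, c2 from srctab
   db2 load from mycurs of cursor insert into targettab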
Related concepts:
v “Load overview” on page 102
v “Privileges, authorities, and authorizations required to use Load” on page 109
Related tasks:
v “Loading data” on page 110
Related reference:
v “QUIESCE TABLESPACES FOR TABLE command” in Command Reference
v “LOAD command using the ADMIN_CMD procedure” on page 145
v “Load - CLP examples” on page 212
v “Load configuration options for partitioned database environments” on page 229
Restrictions:
The load utility does not support loading data at the hierarchy level. The load
utility is not compatible with range-clustered tables.
Scope:
Authorization:
Since all load processes (and all DB2 server processes, in general) are owned by the
instance owner, and all of these processes use the identification of the instance
owner to access needed files, the instance owner must have read access to the
input data files, regardless of who invokes the command.
Required connection:
Command syntax:
[The railroad syntax diagram for this command is not reproduced here; the clauses it
shows are described under "Command parameters" below.]
Notes:
1 These keywords can appear in any order.
2 Each of these keywords can only appear once.
Command parameters:
FROM filename/pipename/device/
Notes:
1. If data is exported into a file using the EXPORT command using the
ADMIN_CMD procedure, the data file is owned by the fenced user ID.
This file is not usually accessible by the instance owner. To run the
LOAD from CLP or the ADMIN_CMD procedure, the data file must be
accessible by the instance owner ID, so read access to the data file must
be granted to the instance owner.
2. Loading data from multiple IXF files is supported if the files are
physically separate, but logically one file. It is not supported if the files
are both logically and physically separate. (Multiple physical files
would be considered logically one if they were all created with one
invocation of the EXPORT command.)
OF filetype
Specifies the format of the data:
v ASC (non-delimited ASCII format)
v DEL (delimited ASCII format)
v IXF (integrated exchange format, PC version), exported from the same or
from another DB2 table
v CURSOR (a cursor declared against a SELECT or VALUES statement).
LOBS FROM lob-path
The path to the data files containing LOB values to be loaded. The path
must end with a slash (/). The names of the LOB data files are stored in
the main data file (ASC, DEL, or IXF), in the column that will be loaded
into the LOB column. The maximum number of paths that can be specified
is 999. This will implicitly activate the LOBSINFILE behavior.
This option is ignored when specified in conjunction with the CURSOR
filetype.
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the load
utility.
METHOD
L Specifies the start and end column numbers from which to load
data. A column number is a byte offset from the beginning of a
row of data. It is numbered starting from 1. This method can only
be used with ASC files, and is the only valid method for that file
type.
NULL INDICATORS null-indicator-list
This option can only be used when the METHOD L
parameter is specified (that is, the input file is an ASC file).
The null indicator list is a comma-separated list of positive
integers specifying the column number of each null
indicator field. The column number is the byte offset of the
null indicator field from the beginning of a row of data.
There must be one entry in the null indicator list for each
data field defined in the METHOD L parameter. A column
number of zero indicates that the corresponding data field
always contains data.
A value of Y in the NULL indicator column specifies that
the column data is NULL. Any character other than Y in
the NULL indicator column specifies that the column data
is not NULL, and that column data specified by the
METHOD L option will be loaded.
The NULL indicator character can be changed using the
MODIFIED BY option.
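As an illustrative sketch (file, table, and position values are hypothetical, shown in
CLP form), the following command loads two fixed-position fields from an ASC file and
uses byte position 16 as the null indicator for the second field:
   db2 load from data.asc of asc
      method L (1 8, 9 15) null indicators (0, 16)
      insert into mytable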
N Specifies the names of the columns in the data file to be loaded.
The case of these column names must match the case of the
corresponding names in the system catalogs. Each table column
that is not nullable should have a corresponding entry in the
METHOD N list. For example, given data fields F1, F2, F3, F4, F5,
and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT
NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid
request, while method N (F2, F1) is not valid. This method can
only be used with file types IXF or CURSOR.
P Specifies the field numbers (numbered from 1) of the input data
fields to be loaded. Each table column that is not nullable should
have a corresponding entry in the METHOD P list. For example,
given data fields F1, F2, F3, F4, F5, and F6, and table columns C1
INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method
P (2, 1, 4, 3) is a valid request, while method P (2, 1) is not
valid. This method can only be used with file types IXF, DEL, or
CURSOR, and is the only valid method for the DEL file type.
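For example (hypothetical names, shown in CLP form), the following command maps the
second, first, fourth, and third fields of a DEL file to the first four columns of the
target table:
   db2 load from data.del of del method P (2, 1, 4, 3)
      insert into mytable (c1, c2, c3, c4)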
SAVECOUNT n
Specifies that the load utility is to establish consistency points after every n
rows. This value is converted to a page count, and rounded up to intervals
of the extent size. Since a message is issued at each consistency point, this
option should be selected if the load operation will be monitored using
LOAD QUERY. If the value of n is not sufficiently high, the
synchronization of activities performed at each consistency point will
impact performance.
The default value is zero, meaning that no consistency points will be
established, unless necessary.
This option is ignored when specified in conjunction with the CURSOR
filetype.
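A monitoring sketch (names and values are hypothetical, shown in CLP form): establish
consistency points roughly every 30000 rows, then follow progress with LOAD QUERY
from a second CLP session:
   db2 load from bigfile.del of del savecount 30000
      messages /u/mydir/load.msg insert into staff
   db2 load query table staff to /u/mydir/staff.tempmsg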
ROWCOUNT n
Specifies the number n of physical records in the file to be loaded. Allows
a user to load only the first n rows in a file.
WARNINGCOUNT n
Stops the load operation after n warnings. Set this parameter if no
warnings are expected, but verification that the correct file and table are
being used is desired. If the load file or the target table is specified
incorrectly, the load utility will generate a warning for each row that it
attempts to load, which will cause the load to fail. If n is zero, or this
option is not specified, the load operation will continue regardless of the
number of warnings issued. If the load operation is stopped because the
threshold of warnings was encountered, another load operation can be
started in RESTART mode. The load operation will automatically continue
from the last consistency point. Alternatively, another load operation can
be initiated in REPLACE mode, starting at the beginning of the input file.
TEMPFILES PATH temp-pathname
Specifies the name of the path to be used when creating temporary files
during a load operation, and should be fully qualified according to the
server database partition.
Temporary files take up file system space. Sometimes, this space
requirement is quite substantial. Following is an estimate of how much file
system space should be allocated for all temporary files:
v 136 bytes for each message that the load utility generates
v 15KB overhead if the data file contains long field data or LOBs. This
quantity can grow significantly if the INSERT option is specified, and
there is a large amount of long field or LOB data already in the table.
INSERT
One of four modes under which the load utility can execute. Adds the
loaded data to the table without changing the existing table data.
REPLACE
One of four modes under which the load utility can execute. Deletes all
existing data from the table, and inserts the loaded data. The table
definition and index definitions are not changed. If this option is used
when moving data between hierarchies, only the data for an entire
hierarchy, not individual subtables, can be replaced.
RESTART
One of four modes under which the load utility can execute. Restarts a
previously interrupted load operation. The load operation will
automatically continue from the last consistency point in the load, build, or
delete phase.
TERMINATE
One of four modes under which the load utility can execute. Terminates a
previously interrupted load operation, and rolls back the operation to the
point in time at which it started, even if consistency points were passed.
The states of any table spaces involved in the operation return to normal,
and all table objects are made consistent (index objects might be marked as
invalid, in which case index rebuild will automatically take place at next
access). If the load operation being terminated is a load REPLACE, the
table will be truncated to an empty table after the load TERMINATE
operation. If the load operation being terminated is a load INSERT, the
table will retain all of its original records after the load TERMINATE
operation.
The load terminate option will not remove a backup pending state from
table spaces.
INTO table-name
Specifies the database table into which the data is to be loaded. This table
cannot be a system table or a declared temporary table. An alias, or the
fully qualified or unqualified table name can be specified. A qualified table
name is in the form schema.tablename. If an unqualified table name is
specified, the table will be qualified with the CURRENT SCHEMA.
insert-column
Specifies the table column into which the data is to be inserted.
The load utility cannot parse columns whose names contain one or more
spaces. For example, a column named Int 4 in the insert-column list will
cause the command to fail. The solution is to enclose such column names
in double quotation marks, as shown in the sketch below.
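A hypothetical sketch (shown in CLP form): the first command fails because of the
Int 4 column, while the second succeeds because the name is quoted:
   db2 load from data.del of del
      insert into mytable (c1, c2, Int 4)
   db2 load from data.del of del
      insert into mytable (c1, c2, "Int 4")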
FOR EXCEPTION table-name
Specifies the exception table into which rows in error will be copied. Any
row that is in violation of a unique index or a primary key index is copied.
If an unqualified table name is specified, the table will be qualified with
the CURRENT SCHEMA.
Information that is written to the exception table is not written to the
dump file. In a partitioned database environment, an exception table must
be defined for those database partitions on which the loading table is
defined. The dump file, on the other hand, contains rows that cannot be
loaded because they are invalid or have syntax errors.
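For example (hypothetical names, shown in CLP form), rows that violate a unique index
are copied to an exception table created beforehand, typically with the same columns
as the target:
   db2 load from data.del of del insert into mytable
      for exception mytable_exc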
NORANGEEXC
Indicates that if a row is rejected because of a range violation it will not be
inserted into the exception table.
NOUNIQUEEXC
Indicates that if a row is rejected because it violates a unique constraint it
will not be inserted into the exception table.
STATISTICS USE PROFILE
Instructs load to collect statistics during the load according to the profile
defined for this table. This profile must be created before load is executed.
The profile is created by the RUNSTATS command. If the profile does not
exist and load is instructed to collect statistics according to the profile, a
warning is returned and no statistics are collected.
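For example, a statistics profile might be registered first with RUNSTATS and then used
during the load (hypothetical names, shown in CLP form):
   db2 runstats on table myschema.mytable
      with distribution and indexes all set profile only
   db2 load from data.del of del replace into myschema.mytable
      statistics use profile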
STATISTICS NO
Specifies that no statistics are to be collected, and that the statistics in the
catalogs are not to be altered. This is the default.
COPY NO
Specifies that the table space in which the table resides will be placed in
backup pending state if forward recovery is enabled (that is, logretain or
userexit is on). The COPY NO option will also place the table space in the
Load in Progress state. This is a transient state that will
disappear when the load completes or aborts. The data in any table in the
table space cannot be updated or deleted until a table space backup or a
full database backup is made. However, it is possible to access the data in
any table by using the SELECT statement.
LOAD with COPY NO on a recoverable database leaves the table spaces in
a backup pending state. For example, performing a LOAD with COPY NO
and INDEXING MODE DEFERRED will leave indexes needing a refresh.
Certain queries on the table might require an index scan and will not
succeed until the indexes are refreshed. The index cannot be refreshed if it
resides in a table space which is in the backup pending state. In that case,
access to the table will not be allowed until a backup is taken. Index
refresh is done automatically by the database when the index is accessed
by a query.
COPY YES
Specifies that a copy of the loaded data will be saved. This option is
invalid if forward recovery is disabled (both logretain and userexit are off).
USE TSM
Specifies that the copy will be stored using Tivoli Storage Manager
(TSM).
OPEN num-sess SESSIONS
The number of I/O sessions to be used with TSM or the vendor
product. The default value is 1.
TO device/directory
Specifies the device or directory on which the copy image will be
created.
LOAD lib-name
The name of the shared library (DLL on Windows operating
systems) containing the vendor backup and restore I/O functions
to be used. It can contain the full path. If the full path is not given,
it will default to the path where the user exit programs reside.
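For example, on a recoverable database the load copy might be written to a local
directory or to TSM (paths and names are hypothetical, shown in CLP form):
   db2 load from data.del of del replace into mytable
      copy yes to /db2/loadcopies
   db2 load from data.del of del replace into mytable
      copy yes use tsm open 2 sessions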
NONRECOVERABLE
Specifies that the load transaction is to be marked as non-recoverable and
that it will not be possible to recover it by a subsequent roll forward
action. The roll forward utility will skip the transaction and will mark the
table into which data was being loaded as "invalid". The utility will also
ignore any subsequent transactions against that table. After the roll
forward operation is completed, such a table can only be dropped or
restored from a backup (full or table space) taken after a commit point
following the completion of the non-recoverable load operation.
With this option, table spaces are not put in backup pending state
following the load operation, and a copy of the loaded data does not have
to be made during the load operation.
WITHOUT PROMPTING
Specifies that the list of data files contains all the files that are to be
loaded, and that the devices or directories listed are sufficient for the entire
load operation. If a continuation input file is not found, or the copy targets
are filled before the load operation finishes, the load operation will fail,
and the table will remain in load pending state.
DATA BUFFER buffer-size
Specifies the number of 4KB pages (regardless of the degree of parallelism)
to use as buffered space for transferring data within the utility. If the value
specified is less than the algorithmic minimum, the minimum required
resource is used, and no warning is returned.
This memory is allocated directly from the utility heap, whose size can be
modified through the util_heap_sz database configuration parameter.
If a value is not specified, an intelligent default is calculated by the utility
at run time. The default is based on a percentage of the free space available
in the utility heap at the instantiation time of the loader, as well as some
characteristics of the table.
SORT BUFFER buffer-size
This option specifies a value that overrides the SORTHEAP database
configuration parameter during a load operation. It is relevant only when
loading tables with indexes and only when the INDEXING MODE
parameter is not specified as DEFERRED. The value that is specified
cannot exceed the value of SORTHEAP. This parameter is useful for
throttling the sort memory that is used when loading tables with many
indexes without changing the value of SORTHEAP, which would also
affect general query processing.
CPU_PARALLELISM n
Specifies the number of processes or threads that the load utility will
spawn for parsing, converting, and formatting records when building table
objects. This parameter is designed to exploit intra-partition parallelism. It
is particularly useful when loading presorted data, because record order in
the source data is preserved. If the value of this parameter is zero, or has
not been specified, the load utility uses an intelligent default value (usually
based on the number of CPUs available) at run time.
Notes:
1. If this parameter is used with tables containing either LOB or LONG
VARCHAR fields, its value becomes one, regardless of the number of
system CPUs or the value specified by the user.
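A tuning sketch that combines several of these options (values and names are
hypothetical, shown in CLP form; the SORT BUFFER value assumes SORTHEAP is at least
as large):
   db2 load from data.del of del replace into mytable
      data buffer 4000 sort buffer 2048 cpu_parallelism 4
      disk_parallelism 2 indexing mode rebuild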
USE tablespace-name
If the indexes are being rebuilt, a shadow copy of the index is built
in table space tablespace-name and copied over to the original table
space at the end of the load during an INDEX COPY PHASE. Only
system temporary table spaces can be used with this option. If not
specified then the shadow index will be created in the same table
space as the index object. If the shadow copy is created in the same
table space as the index object, the copy of the shadow index object
over the old index object is instantaneous. If the shadow copy is in
a different table space from the index object a physical copy is
performed. This could involve considerable I/O and time. The
copy happens while the table is offline at the end of a load during
the INDEX COPY PHASE.
Without this option the shadow index is built in the same table
space as the original. Since both the original index and shadow
index by default reside in the same table space simultaneously,
there might be insufficient space to hold both indexes within one
table space. Using this option ensures that you retain enough table
space for the indexes.
This option is ignored if the user does not specify INDEXING
MODE REBUILD or INDEXING MODE AUTOSELECT. This option
will also be ignored if INDEXING MODE AUTOSELECT is chosen
and load chooses to incrementally update the index.
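For example (hypothetical names, shown in CLP form), a read-access load that rebuilds
its indexes in a system temporary table space might be written as:
   db2 load from data.del of del insert into mytable
      indexing mode rebuild allow read access use tempspace1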
SET INTEGRITY PENDING CASCADE
If LOAD puts the table into Set Integrity Pending state, the SET
INTEGRITY PENDING CASCADE option allows the user to specify
whether or not Set Integrity Pending state of the loaded table is
immediately cascaded to all descendents (including descendent foreign key
tables, descendent immediate materialized query tables and descendent
immediate staging tables).
IMMEDIATE
Indicates that Set Integrity Pending state is immediately extended
to all descendent foreign key tables, descendent immediate
materialized query tables and descendent staging tables. For a
LOAD INSERT operation, Set Integrity Pending state is not
extended to descendent foreign key tables even if the IMMEDIATE
option is specified.
When the loaded table is later checked for constraint violations
(using the IMMEDIATE CHECKED option of the SET INTEGRITY
statement), descendent foreign key tables that were placed in Set
Integrity Pending Read Access state will be put into Set Integrity
Pending No Access state.
DEFERRED
Indicates that only the loaded table will be placed in the Set
Integrity Pending state. The states of the descendent foreign key
tables, descendent immediate materialized query tables and
descendent immediate staging tables will remain unchanged.
Descendent foreign key tables might later be implicitly placed in
Set Integrity Pending state when their parent tables are checked for
constraint violations (using the IMMEDIATE CHECKED option of
the SET INTEGRITY statement). Descendent immediate
materialized query tables and descendent immediate staging tables
will be implicitly placed in Set Integrity Pending state when one of
their underlying tables is checked for integrity violations.
FILE output-file
The STDOUT and STDERR file descriptors are
captured to the fully qualified server-side file
specified.
PARALLELIZE
Increases the throughput of data coming into the load utility by
invoking multiple user exit processes simultaneously. This option is
only applicable in multi-partition database environments and is
ignored in single-partition database environments.
Usage notes:
v Data is loaded in the sequence that appears in the input file. If a particular
sequence is desired, the data should be sorted before a load is attempted.
v The load utility builds indexes based on existing definitions. The exception
tables are used to handle duplicates on unique keys. The utility does not enforce
referential integrity, perform constraints checking, or update materialized query
tables that are dependent on the tables being loaded. Tables that include
referential or check constraints are placed in Set Integrity Pending state.
Summary tables that are defined with REFRESH IMMEDIATE, and that are
dependent on tables being loaded, are also placed in Set Integrity Pending state.
Issue the SET INTEGRITY statement to take the tables out of Set Integrity
Pending state. Load operations cannot be carried out on replicated materialized
query tables.
v If a clustering index exists on the table, the data should be sorted on the
clustering index prior to loading. Data does not need to be sorted prior to
loading into a multidimensional clustering (MDC) table, however.
v If you specify an exception table when loading into a protected table, any rows
that are protected by invalid security labels will be sent to that table. This might
allow users that have access to the exception table to access data that they
would not normally be authorized to access. For better security, be careful
about who is granted access to the exception table, delete each row as soon as it is repaired and
copied to the table being loaded, and drop the exception table as soon as you
are done with it.
v Security labels in their internal format might contain newline characters. If you
load the file using the DEL file format, those newline characters can be mistaken
for delimiters. If you have this problem, use the older default priority for
delimiters by specifying the delprioritychar file type modifier in the LOAD
command.
v When performing a load using the CURSOR filetype where the DATABASE keyword
was specified during the DECLARE CURSOR command, the user ID and
password used to authenticate against the database currently connected to (for
the load) will be used to authenticate against the source database (specified by
the DATABASE option of the DECLARE CURSOR command). If no user ID or
password was specified for the connection to the loading database, a user ID
and password for the source database must be specified during the DECLARE
CURSOR command.
Related concepts:
v “Privileges, authorities, and authorizations required to use Load” on page 109
v “Load overview” on page 102
Related reference:
v “ADMIN_GET_MSGS table function – Retrieve messages generated by a data
movement utility that is executed through the ADMIN_CMD procedure” in
Administrative SQL Routines and Views
v “ADMIN_REMOVE_MSGS procedure – Clean up messages generated by a data
movement utility that is executed through the ADMIN_CMD procedure” in
Administrative SQL Routines and Views
v “EXPORT command using the ADMIN_CMD procedure” on page 15
v “ADMIN_CMD procedure – Run administrative commands” in Administrative
SQL Routines and Views
v “Load configuration options for partitioned database environments” on page 229
v “db2pd - Monitor and troubleshoot DB2 database command” in Command
Reference
LOAD QUERY
Checks the status of a load operation during processing and returns the table state.
If a load is not processing, then the table state alone is returned. A connection to
the same database and a separate CLP session are also required to successfully
invoke this command. It can be used either by local or remote users.
Authorization:
None
Required connection:
Database
Command syntax:
LOAD QUERY TABLE table-name [TO local-message-file] [NOSUMMARY | SUMMARYONLY] [SHOWDELTA]
Command parameters:
NOSUMMARY
Specifies that no load summary information (rows read, rows skipped,
rows loaded, rows rejected, rows deleted, rows committed, and number of
warnings) is to be reported.
SHOWDELTA
Specifies that only new information (pertaining to load events that have
occurred since the last invocation of the LOAD QUERY command) is to be
reported.
SUMMARYONLY
Specifies that only load summary information is to be reported.
TABLE table-name
Specifies the name of the table into which data is currently being loaded. If
an unqualified table name is specified, the table will be qualified with the
CURRENT SCHEMA.
TO local-message-file
Specifies the destination for warning and error messages that occur during
the load operation. This file cannot be the message-file specified for the
LOAD command. If the file already exists, all messages that the load utility
has generated are appended to it.
Examples:
A user loading a large amount of data into the STAFF table wants to check the
status of the load operation. The user can specify:
db2 connect to <database>
db2 load query table staff to /u/mydir/staff.tempmsg
Tablestate:
Load in Progress
Usage notes:
In addition to locks, the load utility uses table states to control access to the table.
The LOAD QUERY command can be used to determine the table state; LOAD
QUERY can be used on tables that are not currently being loaded. For a
partitioned table, the state reported is the most restrictive of the corresponding
visible data partition states. For example, if a single data partition is in the READ
ACCESS state and all other data partitions are in NORMAL state, the load query
operation returns the READ ACCESS state. A load operation will not leave a subset
of data partitions in a state different from the rest of the table. The table states
described by LOAD QUERY are as follows:
Normal
No table states affect the table.
Set Integrity Pending
The table has constraints which have not yet been verified. Use the SET
INTEGRITY statement to take the table out of Set Integrity Pending state.
The load utility places a table in Set Integrity Pending state when it begins
a load operation on a table with constraints.
Load in Progress
There is a load operation in progress on this table.
Load Pending
A load operation has been active on this table but has been aborted before
the data could be committed. Issue a LOAD TERMINATE, LOAD
RESTART, or LOAD REPLACE command to bring the table out of this
state.
Read Access Only
The table data is available for read access queries. Load operations using
the ALLOW READ ACCESS option place the table in read access only
state.
Reorg Pending
A reorg recommended ALTER TABLE statement has been executed on the
table. A classic reorg must be performed before the table is accessible
again.
Unavailable
The table is unavailable. The table can only be dropped or restored from a
backup. Rolling forward through a non-recoverable load operation will
place a table in the unavailable state.
Not Load Restartable
The table is in a partially loaded state that will not allow a load restart
operation. The table will also be in load pending state. Issue a LOAD
TERMINATE or a LOAD REPLACE command to bring the table out of the
not load restartable state. A table is placed in not load restartable state
when a rollforward operation is performed after a failed load operation
that has not been successfully restarted or terminated, or when a restore
operation is performed from an online backup that was taken while the
table was in load in progress or load pending state. In either case, the
information required for a load restart operation is unreliable, and the not
load restartable state prevents a load restart operation from taking place.
Unknown
The LOAD QUERY command is unable to determine the table state.
The progress of a load operation can also be monitored with the LIST UTILITIES
command.
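For example, while the load is running the following can be issued from another session:
   db2 list utilities show detail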
Related concepts:
v “Load overview” on page 102
v “Monitoring a load operation in a partitioned database environment using the
LOAD QUERY command” on page 225
v “Table locking, table states and table space states” on page 203
Related reference:
v “LIST UTILITIES command” in Command Reference
Authorization:
– INSERT and DELETE privilege on the table when the load utility is invoked
in REPLACE mode, TERMINATE mode (to terminate a previous load replace
operation), or RESTART mode (to restart a previous load replace operation)
– INSERT privilege on the exception table, if such a table is used as part of the
load operation.
Note: In general, all load processes and all DB2 server processes are owned by the
instance owner. All of these processes use the identification of the instance
owner to access needed files. Therefore, the instance owner must have read
access to the input files, regardless of who invokes the command.
Required connection:
typedef SQL_STRUCTURE db2LoadIn
{
db2Uint64 iRowcount;
db2Uint64 iRestartcount;
char *piUseTablespace;
db2Uint32 iSavecount;
db2Uint32 iDataBufferSize;
db2Uint32 iSortBufferSize;
db2Uint32 iWarningcount;
db2Uint16 iHoldQuiesce;
db2Uint16 iCpuParallelism;
db2Uint16 iDiskParallelism;
db2Uint16 iNonrecoverable;
db2Uint16 iIndexingMode;
db2Uint16 iAccessLevel;
db2Uint16 iLockWithForce;
db2Uint16 iCheckPending;
char iRestartphase;
char iStatsOpt;
db2Uint16 iSetIntegrityPending;
struct db2LoadUserExit *piSourceUserExit;
} db2LoadIn;
typedef SQL_STRUCTURE db2PartLoadOut
{
db2Uint64 oRowsRdPartAgents;
db2Uint64 oRowsRejPartAgents;
db2Uint64 oRowsPartitioned;
struct db2LoadAgentInfo *poAgentInfoList;
db2Uint32 iMaxAgentInfoEntries;
db2Uint32 oNumAgentInfoEntries;
} db2PartLoadOut;
SQL_API_RC SQL_API_FN
db2gLoad (
db2Uint32 versionNumber,
void * pParmStruct,
struct sqlca * pSqlca);
start the sessions with different sequence numbers, but with the
same data in the one sqlu_vendor entry.
piDataDescriptor
Input. Pointer to an sqldcol structure containing information about the
columns being selected for loading from the external file.
If the pFileType parameter is set to SQL_ASC, the dcolmeth field of this
structure must be set either to SQL_METH_L, or to SQL_METH_D with a file
name specified through the POSITIONSFILE pFileTypeMod modifier; that file
contains the starting and ending position pairs and the null indicator positions.
The user specifies the start and end locations for each column to be loaded.
If the file type is SQL_DEL, dcolmeth can be either SQL_METH_P or
SQL_METH_D. If it is SQL_METH_P, the user must provide the source
column position. If it is SQL_METH_D, the first column in the file is
loaded into the first column of the table, and so on.
If the file type is SQL_IXF, dcolmeth can be one of SQL_METH_P,
SQL_METH_D, or SQL_METH_N. The rules for DEL files apply here,
except that SQL_METH_N indicates that file column names are to be
provided in the sqldcol structure.
piActionString
Input. Pointer to an sqlchar structure, followed by an array of characters
specifying an action that affects the table.
The character array is of the form:
"INSERT|REPLACE|RESTART|TERMINATE
INTO tbname [(column_list)]
[DATALINK SPECIFICATION datalink-spec]
[FOR EXCEPTION e_tbname]"
INSERT
Adds the loaded data to the table without changing the existing
table data.
REPLACE
Deletes all existing data from the table, and inserts the loaded data.
The table definition and the index definitions are not changed.
RESTART
Restarts a previously interrupted load operation. The load
operation will automatically continue from the last consistency
point in the load, build, or delete phase.
TERMINATE
Terminates a previously interrupted load operation, and rolls back
the operation to the point in time at which it started, even if
consistency points were passed. The states of any table spaces
involved in the operation return to normal, and all table objects are
made consistent (index objects may be marked as invalid, in which
case index rebuild will automatically take place at next access). If
the table spaces in which the table resides are not in load pending
state, this option does not affect the state of the table spaces.
The load terminate option will not remove a backup pending state
from table spaces.
tbname
The name of the table into which the data is to be loaded. The
table cannot be a system table or a declared temporary table. An
piVendorSortWorkPaths
Input. A pointer to the sqlu_media_list structure which specifies the
Vendor Sort work directories.
piCopyTargetList
Input. A pointer to an sqlu_media_list structure used (if a copy image is to
be created) to provide a list of target paths, devices, or a shared library to
which the copy image is to be written.
The values provided in this structure depend on the value of the
media_type field. Valid values for this parameter (defined in sqlutil header
file, located in the include directory) are:
SQLU_LOCAL_MEDIA
If the copy is to be written to local media, set the media_type to
this value and provide information about the targets in
sqlu_media_entry structures. The sessions field specifies the
number of sqlu_media_entry structures provided.
SQLU_TSM_MEDIA
If the copy is to be written to TSM, use this value. No further
information is required.
SQLU_OTHER_MEDIA
If a vendor product is to be used, use this value and provide
further information via an sqlu_vendor structure. Set the shr_lib
field of this structure to the shared library name of the vendor
product. Provide only one sqlu_vendor entry, regardless of the
value of sessions. The sessions field specifies the number of
sqlu_media_entry structures provided. The load utility will start
the sessions with different sequence numbers, but with the same
data provided in the one sqlu_vendor entry.
piNullIndicators
Input. For ASC files only. An array of integers that indicate whether or not
the column data is nullable. There is a one-to-one ordered correspondence
between the elements of this array and the columns being loaded from the
data file. That is, the number of elements must equal the dcolnum field of
the pDataDescriptor parameter. Each element of the array contains a
number identifying a location in the data file that is to be used as a NULL
indicator field, or a zero indicating that the table column is not nullable. If
the element is not zero, the identified location in the data file must contain
a Y or an N. A Y indicates that the table column data is NULL, and N
indicates that the table column data is not NULL.
piLoadInfoIn
Input. A pointer to the db2LoadIn structure.
poLoadInfoOut
Output. A pointer to the db2LoadOut structure.
piPartLoadInfoIn
Input. A pointer to the db2PartLoadIn structure.
poPartLoadInfoOut
Output. A pointer to the db2PartLoadOut structure.
iCallerAction
Input. An action requested by the caller. Valid values (defined in sqlutil
header file, located in the include directory) are:
SQLU_INITIAL
Initial call. This value (or SQLU_NOINTERRUPT) must be used on
the first call to the API.
SQLU_NOINTERRUPT
Initial call. Do not suspend processing. This value (or
SQLU_INITIAL) must be used on the first call to the API.
If the initial call or any subsequent call returns and requires the
calling application to perform some action prior to completing the
requested load operation, the caller action must be set to one of the
following:
SQLU_CONTINUE
Continue processing. This value can only be used on subsequent
calls to the API, after the initial call has returned with the utility
requesting user input (for example, to respond to an end of tape
condition). It specifies that the user action requested by the utility
has completed, and the utility can continue processing the initial
request.
SQLU_TERMINATE
Terminate processing. Causes the load utility to exit prematurely,
leaving the table spaces being loaded in LOAD_PENDING state.
This option should be specified if further processing of the data is
not to be done.
SQLU_ABORT
Terminate processing. Causes the load utility to exit prematurely,
leaving the table spaces being loaded in LOAD_PENDING state.
This option should be specified if further processing of the data is
not to be done.
SQLU_RESTART
Restart processing.
SQLU_DEVICE_TERMINATE
Terminate a single device. This option should be specified if the
utility is to stop reading data from the device, but further
processing of the data is to be done.
iCheckPending
This parameter is being deprecated as of Version 9.1. Use the
iSetIntegrityPending parameter instead.
iRestartphase
Input. Reserved. Valid value is a single space character ’ ’.
iStatsOpt
Input. Granularity of statistics to collect. Valid values are:
SQLU_STATS_NONE
No statistics to be gathered.
SQLU_STATS_USE_PROFILE
Statistics are collected based on the profile defined for the current
table. This profile must be created using the RUNSTATS command.
If no profile exists for the current table, a warning is returned and
no statistics are collected.
iSetIntegrityPending
Input. Specifies to put the table into set integrity pending state. If the value
SQLU_SI_PENDING_CASCADE_IMMEDIATE is specified, set integrity
pending state will be immediately cascaded to all dependent and
descendent tables. If the value
SQLU_SI_PENDING_CASCADE_DEFERRED is specified, the cascade of
set integrity pending state to dependent tables will be deferred until the
target table is checked for integrity violations.
SQLU_SI_PENDING_CASCADE_DEFERRED is the default value if the
option is not specified.
piPartFileLocation
Input. In PARTITION_ONLY, LOAD_ONLY, and
LOAD_ONLY_VERIFY_PART modes, this parameter can be used to specify
the location of the partitioned files. This location must exist on each
database partition specified by the piOutputNodes option.
For the SQL_CURSOR file type, this parameter cannot be NULL and the
location does not refer to a path, but to a fully qualified file name. This
will be the fully qualified base file name of the partitioned files that are
created on each output database partition for PARTITION_ONLY mode, or
the location of the files to be read from each database partition for
LOAD_ONLY mode. For PARTITION_ONLY mode, multiple files may be
created with the specified base name if there are LOB columns in the target
table. For file types other than SQL_CURSOR, if the value of this
parameter is NULL, it will default to the current directory.
piOutputNodes
Input. The list of Load output database partitions. A NULL indicates all
database partitions on which the target table is defined.
piPartitioningNodes
Input. The list of partitioning nodes. A NULL indicates the default. Refer to
the Load command in the Data Movement Guide and Reference for a
description of how the default is determined.
piMode
Input. Specifies the load mode for partitioned databases. Valid values
(defined in db2ApiDf header file, located in the include directory) are:
DB2LOAD_PARTITION_AND_LOAD
Data is distributed (perhaps in parallel) and loaded simultaneously
on the corresponding database partitions.
DB2LOAD_PARTITION_ONLY
Data is distributed (perhaps in parallel) and the output is written
to files in a specified location on each loading database partition.
For file types other than SQL_CURSOR, the name of the output file
on each database partition will have the form filename.xxx, where
filename is the name of the first input file specified by piSourceList
and xxx is the database partition number. For the SQL_CURSOR file
type, the name of the output file on each database partition will be
determined by the piPartFileLocation parameter. Refer to the
piPartFileLocation parameter for information about how to specify
the location of the database partition file on each database
partition.
Note: This mode cannot be used when loading a data file located
on a remote client, nor can it be used for a CLI LOAD.
DB2LOAD_LOAD_ONLY_VERIFY_PART
Data is assumed to be already distributed, but the data file does
not contain a database partition header. The distribution process is
skipped, and the data is loaded simultaneously on the
corresponding database partitions. During the load operation, each
row is checked to verify that it is on the correct database partition.
Rows containing database partition violations are placed in a
dumpfile if the dumpfile file type modifier is specified. Otherwise,
the rows are discarded. If database partition violations exist on a
particular loading database partition, a single warning will be
written to the load message file for that database partition. The
input file name on each database partition is expected to be of the
form filename.xxx, where filename is the name of the first file
specified by piSourceList and xxx is the 3-digit database partition
number.
Note: This mode cannot be used when loading a data file located
on a remote client, nor can it be used for a CLI LOAD.
DB2LOAD_ANALYZE
An optimal distribution map with even distribution across all
database partitions is generated.
piMaxNumPartAgents
Input. The maximum number of partitioning agents. A NULL value
indicates the default, which is 25.
piIsolatePartErrs
Input. Indicates how the load operation will react to errors that occur on
individual database partitions. Valid values (defined in db2ApiDf header
file, located in the include directory) are:
DB2LOAD_SETUP_ERRS_ONLY
In this mode, errors that occur on a database partition during
setup, such as problems accessing a database partition or problems
accessing a table space or table on a database partition, will cause
the load operation to stop on the failing database partitions but to
continue on the remaining database partitions. Errors that occur on
a database partition while data is being loaded will cause the
entire operation to fail and rollback to the last point of consistency
on each database partition.
DB2LOAD_LOAD_ERRS_ONLY
In this mode, errors that occur on a database partition during setup
will cause the entire load operation to fail. When an error occurs
while data is being loaded, the database partitions with errors will
be rolled back to their last point of consistency. The load operation
will continue on the remaining database partitions until a failure
occurs or until all the data is loaded. On the database partitions
where all of the data was loaded, the data will not be visible
following the load operation. Because of the errors in the other
database partitions, the transaction will be aborted. Data on all of
records if RECLEN file type modifier is also specified. Possible values are
TRUE and FALSE. If NULL, the value defaults to FALSE.
piDistfile
Input. Name of the database partition distribution file. If a NULL is
specified, the value defaults to ″DISTFILE″.
piOmitHeader
Input. Indicates that a distribution map header should not be included in
the database partition file when using DB2LOAD_PARTITION_ONLY
mode. Possible values are TRUE and FALSE. If NULL, the default is
FALSE.
piRunStatDBPartNum
Specifies the database partition on which to collect statistics. The default
value is the first database partition in the output database partition list.
oTableState
The final status of the table on the database partition on which the
agent executed (relevant only for load agents).
It is up to the caller of the API to allocate memory for this list prior to
calling the API. The caller should also indicate the number of entries for
which they allocated memory in the iMaxAgentInfoEntries parameter. If
the caller sets poAgentInfoList to NULL or sets iMaxAgentInfoEntries to 0,
then no information will be returned about the load agents.
iMaxAgentInfoEntries
Input. The maximum number of agent information entries allocated by the
user for poAgentInfoList. In general, setting this parameter to 3 times the
number of database partitions involved in the load operation should be
sufficient.
oNumAgentInfoEntries
Output. The actual number of agent information entries produced by the
load operation. This number of entries will be returned to the user in the
poAgentInfoList parameter as long as iMaxAgentInfoEntries is greater than
or equal to oNumAgentInfoEntries. If iMaxAgentInfoEntries is less than
oNumAgentInfoEntries, then the number of entries returned in
poAgentInfoList is equal to iMaxAgentInfoEntries.
oAgentType
Output. The agent type. Valid values (defined in db2ApiDf header file,
located in the include directory) are :
v DB2LOAD_LOAD_AGENT
v DB2LOAD_PARTITIONING_AGENT
v DB2LOAD_PRE_PARTITIONING_AGENT
v DB2LOAD_FILE_TRANSFER_AGENT
v DB2LOAD_LOAD_TO_FILE_AGENT
Usage notes:
Data is loaded in the sequence that appears in the input file. If a particular
sequence is desired, the data should be sorted before a load is attempted.
The load utility builds indexes based on existing definitions. The exception tables
are used to handle duplicates on unique keys. The utility does not enforce
referential integrity, perform constraints checking, or update summary tables that
are dependent on the tables being loaded. Tables that include referential or check
constraints are placed in set integrity pending state. Summary tables that are
defined with REFRESH IMMEDIATE, and that are dependent on tables being
loaded, are also placed in set integrity pending state. Issue the SET INTEGRITY
statement to take the tables out of set integrity pending state. Load operations
cannot be carried out on replicated summary tables.
For clustering indexes, the data should be sorted on the clustering index prior to
loading. The data need not be sorted when loading into a multidimensional
clustering (MDC) table.
Related tasks:
v “Loading data” on page 110
Related reference:
v “LOAD ” on page 132
v “sqldcol data structure” in Administrative API Reference
v “sqlu_media_list data structure” in Administrative API Reference
v “db2LoadQuery - Get the status of a load operation” on page 181
v “db2Export - Export data from a database” on page 19
v “db2Import - Import data into a table, hierarchy, nickname or view” on page 73
Related samples:
v “dtformat.sqc -- Load and import data format extensions (C)”
v “tbload.sqc -- How to load into a partitioned database (C)”
v “tbmove.sqc -- How to move table data (C)”
v “tbmove.sqC -- How to move table data (C++)”
Authorization:
None
Required connection:
Database
SQL_API_RC SQL_API_FN
db2gLoadQuery (
db2Uint32 versionNumber,
void * pParmStruct,
struct sqlca * pSqlca);
oRowsLoaded
Output. Number of rows loaded into the target table so far.
oRowsRejected
Output. Number of rows rejected from the target table so far.
oRowsDeleted
Output. Number of rows deleted from the target table so far (during the
delete phase).
oCurrentIndex
Output. Index currently being built (during the build phase).
oNumTotalIndexes
Output. Total number of indexes to be built (during the build phase).
oCurrentMPPNode
Output. Indicates which database partition server is being queried (for
partitioned database environment mode only).
oLoadRestarted
Output. A flag whose value is TRUE if the load operation being queried is
a load restart operation.
oWhichPhase
Output. Indicates the current phase of the load operation being queried.
Valid values (defined in db2ApiDf header file, located in the include
directory) are:
DB2LOADQUERY_LOAD_PHASE
Load phase.
DB2LOADQUERY_BUILD_PHASE
Build phase.
DB2LOADQUERY_DELETE_PHASE
Delete phase.
DB2LOADQUERY_INDEXCOPY_PHASE
Index copy phase.
oWarningCount
Output. Total number of warnings returned so far.
oTableState
Output. The table states. Valid values (defined in db2ApiDf header file,
located in the include directory) are:
DB2LOADQUERY_NORMAL
No table states affect the table.
DB2LOADQUERY_SI_PENDING
The table has constraints and the constraints have yet to be
verified. Use the SET INTEGRITY command to take the table out
of the DB2LOADQUERY_SI_PENDING state. The load utility puts
a table into the DB2LOADQUERY_SI_PENDING state when it
begins a load on a table with constraints.
DB2LOADQUERY_LOAD_IN_PROGRESS
There is a load actively in progress on this table.
DB2LOADQUERY_LOAD_PENDING
A load has been active on this table but has been aborted before
the load could commit. Issue a load terminate, a load restart, or a
load replace to bring the table out of the
DB2LOADQUERY_LOAD_PENDING state.
DB2LOADQUERY_REORG_PENDING
A reorg recommended alter has been performed on this table. A
classic reorg must be performed before the table will be accessible.
DB2LOADQUERY_READ_ACCESS
The table data is available for read access queries. Loads using the
DB2LOADQUERY_READ_ACCESS option put the table into Read
Access Only state.
DB2LOADQUERY_NOTAVAILABLE
The table is unavailable. The table may only be dropped or it may
be restored from a backup. Rollforward through a non-recoverable
load will put a table into the unavailable state.
DB2LOADQUERY_NO_LOAD_RESTART
The table is in a partially loaded state that will not allow a load
restart. The table will also be in the Load Pending state. Issue a
load terminate or a load replace to bring the table out of the No
Load Restartable state. The table can be placed in the
DB2LOADQUERY_NO_LOAD_RESTART state during a
rollforward operation. This can occur if you rollforward to a point
in time that is prior to the end of a load operation, or if you roll
forward through an aborted load operation but do not roll forward
to the end of the load terminate or load restart operation.
DB2LOADQUERY_TYPE1_INDEXES
The table currently uses type-1 indexes. The indexes can be
converted to type-2 using the CONVERT option when using the
REORG utility on the indexes.
iLocalMessageFileLen
Input. Specifies the length in bytes of piLocalMessageFile parameter.
Usage notes:
This API reads the status of the load operation on the table specified by piString,
and writes the status to the file specified by the piLocalMessageFile parameter.
Related concepts:
v “Monitoring a load operation in a partitioned database environment using the
LOAD QUERY command” on page 225
Related reference:
v “LOAD QUERY ” on page 158
v “SQLCA data structure” in Administrative API Reference
v “db2Load - Load data into a table” on page 161
Related samples:
v “loadqry.sqb -- Query the current status of a load (MF COBOL)”
v “tbload.sqc -- How to load into a partitioned database (C)”
v “tbmove.sqc -- How to move table data (C)”
v “tbmove.sqC -- How to move table data (C++)”
Table 12. Valid file type modifiers for the load utility: All file formats (continued)
Modifier Description
generatedoverride This modifier instructs the load utility to accept user-supplied data for all
generated columns in the table (contrary to the normal rules for these types of
columns). This is useful when migrating data from another database system, or
when loading a table from data that was recovered using the RECOVER
DROPPED TABLE option on the ROLLFORWARD DATABASE command. When
this modifier is used, any rows with no data or NULL data for a non-nullable
generated column will be rejected (SQL3116W). When this modifier is used, the
table will be placed in Set Integrity Pending state. To take the table out of Set
Integrity Pending state without verifying the user-supplied values, issue the
following command after the load operation:
SET INTEGRITY FOR < table-name > GENERATED COLUMN
IMMEDIATE UNCHECKED
To take the table out of Set Integrity Pending state and force verification of the
user-supplied values, issue the following command after the load operation:
SET INTEGRITY FOR < table-name > IMMEDIATE CHECKED.
When this modifier is specified and there is a generated column in any of the
partitioning keys, dimension keys or distribution keys, then the LOAD command
will automatically convert the modifier to generatedignore and proceed with the
load. This will have the effect of regenerating all of the generated column values.
indexfreespace=x x is an integer between 0 and 99 inclusive. The value is interpreted as the
percentage of each index page that is to be left as free space when load rebuilds
the index. Load with INDEXING MODE INCREMENTAL ignores this option. The
first entry in a page is added without restriction; subsequent entries are added if
the percent free space threshold can be maintained. The default value is the one
used at CREATE INDEX time.
This value takes precedence over the PCTFREE value specified in the CREATE
INDEX statement; the DB2_INDEX_FREE registry variable takes precedence over
indexfreespace. The indexfreespace option affects index leaf pages only.
lobsinfile lob-path specifies the path to the files containing LOB data. The ASC, DEL, or IXF
load input files contain the names of the files having LOB data in the LOB
column.
The LOBS FROM clause specifies where the LOB files are located when the
“lobsinfile” modifier is used. The LOBS FROM clause will implicitly activate the
LOBSINFILE behavior. The LOBS FROM clause conveys to the LOAD utility the
list of paths to search for the LOB files while loading the data.
Each path contains at least one file that contains at least one LOB pointed to by a
Lob Location Specifier (LLS) in the data file. The LLS is a string representation of
the location of a LOB in a file stored in the LOB file path. The format of an LLS is
filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the
LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length
of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in
the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456
bytes long.
To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is
treated as a 0-length LOB. For null LOBs with a length of -1, the offset and the file
name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
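As an illustrative sketch (file names are hypothetical), a DEL input row might carry an
LLS that points into a LOB file under /u/lobs, and the load command names that path in
the LOBS FROM clause:
   "Smith","db2exp.001.123.456/"
   db2 load from data.del of del lobs from /u/lobs/
      modified by lobsinfile insert into mytable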
noheader Skips the header verification code (applicable only to load operations into tables
that reside in a single-partition database partition group).
seclabelchar Indicates that security labels in the input source file are in the string format for
security label values rather than in the default encoded numeric format. LOAD
converts each security label into the internal format as it is loaded. If a string is
not in the proper format the row is not loaded and a warning (SQLSTATE 01H53,
SQLCODE SQL3242W) is returned. If the string does not represent a valid
security label that is part of the security policy protecting the table then the row
is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3243W) is
returned.
If you have a table consisting of a single DB2SECURITYLABEL column, the data file
might look like this:
"CONFIDENTIAL:ALPHA:G2"
"CONFIDENTIAL;SIGMA:G2"
"TOP SECRET:ALPHA:G2"
To load or import this data, the SECLABELCHAR file type modifier must be
used:
LOAD FROM input.del OF DEL MODIFIED BY SECLABELCHAR INSERT INTO t1
seclabelname Indicates that security labels in the input source file are indicated by their name
rather than the default encoded numeric format. LOAD will convert the name to
the appropriate security label if it exists. If no security label exists with the
indicated name for the security policy protecting the table the row is not loaded
and a warning (SQLSTATE 01H53, SQLCODE SQL3244W) is returned.
If you have a table consisting of a single DB2SECURITYLABEL column, the data file
might consist of security label names similar to:
"LABEL1"
"LABEL1"
"LABEL2"
To load or import this data, the SECLABELNAME file type modifier must be used:
LOAD FROM input.del OF DEL MODIFIED BY SECLABELNAME INSERT INTO t1
Note: If the file type is ASC, any spaces following the name of the security label
will be interpreted as being part of the name. To avoid this, use the striptblanks
file type modifier to make sure the spaces are removed.
totalfreespace=x x is an integer greater than or equal to 0 . The value is interpreted as the
percentage of the total pages in the table that is to be appended to the end of the
table as free space. For example, if x is 20, and the table has 100 data pages after
the data has been loaded, 20 additional empty pages will be appended. The total
number of data pages for the table will be 120. The data pages total does not
factor in the number of index pages in the table. This option does not affect the
index object. If two loads are done with this option specified, the second load will
not reuse the extra space appended to the end by the first load.
usedefaults If a source column for a target table column has been specified, but it contains no
data for one or more row instances, default values are loaded. Examples of
missing data are:
v For DEL files: two adjacent column delimiters (",,") or two adjacent column
delimiters separated by an arbitrary number of spaces (", ,") are specified for a
column value.
v For DEL/ASC/WSF files: A row that does not have enough columns, or is not
long enough for the original specification. For ASC files, NULL column values
are not considered explicitly missing, and a default will not be substituted for
NULL column values. NULL column values are represented by all space
characters for numeric, date, time, and timestamp columns, or by using the
NULL INDICATOR for a column of any type to indicate the column is NULL.
Without this option, if a source column contains no data for a row instance, one
of the following occurs:
v For DEL/ASC/WSF files: If the column is nullable, a NULL is loaded. If the
column is not nullable, the utility rejects the row.
Table 13. Valid file type modifiers for the load utility: ASCII file formats (ASC/DEL)
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the
data in the input data set. Converts character data (and numeric data specified in
characters) from this code page to the database code page during the load
operation.
dateformat="x" x is the format of the date in the source file.1 Valid date elements are:
YYYY - Year (four digits ranging from 0000 - 9999)
M - Month (one or two digits ranging from 1 - 12)
MM - Month (two digits ranging from 01 - 12; mutually exclusive with M)
D - Day (one or two digits ranging from 1 - 31)
DD - Day (two digits ranging from 1 - 31; mutually exclusive with D)
DDD - Day of the year (three digits ranging from 001 - 366;
mutually exclusive with other day or month elements)
A default value of 1 is assigned for each element that is not specified. Some
examples of date formats are:
"D-M-YYYY"
"MM.DD.YYYY"
"YYYYDDD"
dumpfile = x x is the fully qualified (according to the server database partition) name of an
exception file to which rejected rows are written. A maximum of 32 KB of data is
written per record. Following is an example that shows how to specify a dump
file:
db2 load from data of del
modified by dumpfile = /u/user/filename
insert into table_name
The file will be created and owned by the instance owner. To override the default
file permissions, use the dumpfileaccessall file type modifier.
Notes:
1. In a partitioned database environment, the path should be local to the loading
database partition, so that concurrently running load operations do not
attempt to write to the same file.
2. The contents of the file are written to disk in an asynchronous buffered mode.
In the event of a failed or an interrupted load operation, the number of
records committed to disk cannot be known with certainty, and consistency
cannot be guaranteed after a LOAD RESTART. The file can only be assumed
to be complete for a load operation that starts and completes in a single pass.
dumpfileaccessall Grants read access to ’OTHERS’ when a dump file is created.
timeformat="x" x is the format of the time in the source file.1 Valid time elements are:
H - Hour (one or two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24
for a 24 hour system)
HH - Hour (two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24
for a 24 hour system; mutually exclusive
with H)
M - Minute (one or two digits ranging
from 0 - 59)
MM - Minute (two digits ranging from 0 - 59;
mutually exclusive with M)
S - Second (one or two digits ranging
from 0 - 59)
SS - Second (two digits ranging from 0 - 59;
mutually exclusive with S)
SSSSS - Second of the day after midnight (5 digits
ranging from 00000 - 86399; mutually
exclusive with other time elements)
TT - Meridian indicator (AM or PM)
A default value of 0 is assigned for each element that is not specified. Some
examples of time formats are:
"HH:MM:SS"
"HH.MM TT"
"SSSSS"
timestampformat="x" x is the format of the time stamp in the source file.1 Valid time stamp elements
are:
YYYY - Year (four digits ranging from 0000 - 9999)
M - Month (one or two digits ranging from 1 - 12)
MM - Month (two digits ranging from 01 - 12;
mutually exclusive with M and MMM)
MMM - Month (three-letter case-insensitive abbreviation for
the month name; mutually exclusive with M and MM)
D - Day (one or two digits ranging from 1 - 31)
DD - Day (two digits ranging from 1 - 31; mutually exclusive with D)
DDD - Day of the year (three digits ranging from 001 - 366;
mutually exclusive with other day or month elements)
H - Hour (one or two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24 for a 24 hour system)
HH - Hour (two digits ranging from 0 - 12
for a 12 hour system, and 0 - 24 for a 24 hour system;
mutually exclusive with H)
M - Minute (one or two digits ranging from 0 - 59)
MM - Minute (two digits ranging from 0 - 59;
mutually exclusive with M, minute)
S - Second (one or two digits ranging from 0 - 59)
SS - Second (two digits ranging from 0 - 59;
mutually exclusive with S)
SSSSS - Second of the day after midnight (5 digits
ranging from 00000 - 86399; mutually
exclusive with other time elements)
UUUUUU - Microsecond (6 digits ranging from 000000 - 999999;
mutually exclusive with all other microsecond elements)
UUUUU - Microsecond (5 digits ranging from 00000 - 99999,
maps to range from 000000 - 999990;
mutually exclusive with all other microsecond elements)
UUUU - Microsecond (4 digits ranging from 0000 - 9999,
maps to range from 000000 - 999900;
mutually exclusive with all other microsecond elements)
UUU - Microsecond (3 digits ranging from 000 - 999,
maps to range from 000000 - 999000;
mutually exclusive with all other microsecond elements)
UU - Microsecond (2 digits ranging from 00 - 99,
maps to range from 000000 - 990000;
mutually exclusive with all other microsecond elements)
U - Microsecond (1 digit ranging from 0 - 9,
maps to range from 000000 - 900000;
mutually exclusive with all other microsecond elements)
TT - Meridian indicator (AM or PM)
The valid values for the MMM element include: ’jan’, ’feb’, ’mar’, ’apr’, ’may’,
’jun’, ’jul’, ’aug’, ’sep’, ’oct’, ’nov’ and ’dec’. These values are case insensitive.
The following example illustrates how to import data containing user defined
date and time formats into a table called schedule:
db2 import from delfile2 of del
modified by timestampformat="yyyy.mm.dd hh:mm tt"
insert into schedule
usegraphiccodepage If usegraphiccodepage is given, the assumption is made that data being loaded
into graphic or double-byte character large object (DBCLOB) data field(s) is in the
graphic code page. The rest of the data is assumed to be in the character code
page. The graphic codepage is associated with the character code page. LOAD
determines the character code page through either the codepage modifier, if it is
specified, or through the code page of the database if the codepage modifier is not
specified.
This modifier should be used in conjunction with the delimited data file
generated by drop table recovery only if the table being recovered has graphic
data.
Restrictions
The usegraphiccodepage modifier MUST NOT be specified with DEL files created
by the EXPORT utility, as these files contain data encoded in only one code page.
The usegraphiccodepage modifier is also ignored by the double-byte character
large objects (DBCLOBs) in files.
Table 14. Valid file type modifiers for the load utility: ASC file formats (Non-delimited ASCII)
Modifier Description
binarynumerics Numeric (but not DECIMAL) data must be in binary form, not the character
representation. This avoids costly conversions.
This option is supported only with positional ASC, using fixed length records
specified by the reclen option.
NULLs cannot be present in the data for columns affected by this modifier.
Blanks (normally interpreted as NULL) are interpreted as a binary value when
this modifier is used.
nochecklengths If nochecklengths is specified, an attempt is made to load each row, even if the
source data has a column definition that exceeds the size of the target table
column. Such rows can be successfully loaded if code page conversion causes the
source data to shrink; for example, 4-byte EUC data in the source could shrink to
2-byte DBCS data in the target, and require half the space. This option is
particularly useful if it is known that the source data will fit in all cases despite
mismatched column definitions.
nullindchar=x x is a single character. Changes the character denoting a NULL value to x. The
default value of x is Y.2
This modifier is case sensitive for EBCDIC data files, except when the character is
an English letter. For example, if the NULL indicator character is specified to be
the letter N, then n is also recognized as a NULL indicator.
packeddecimal Loads packed-decimal data directly, since the binarynumerics modifier does not
include the DECIMAL field type.
This option is supported only with positional ASC, using fixed length records
specified by the reclen option.
NULLs cannot be present in the data for columns affected by this modifier.
Blanks (normally interpreted as NULL) are interpreted as a binary value when
this modifier is used.
Regardless of the server platform, the byte order of binary data in the load source
file is assumed to be big-endian; that is, when using this modifier on Windows
operating systems, the byte order must not be reversed.
striptblanks Truncates any trailing blank spaces when loading data into a variable-length
field. If this option is not specified, blank spaces are kept.
This option cannot be specified together with striptnulls. These are mutually
exclusive options. This option replaces the obsolete t option, which is supported
for back-level compatibility only.
striptnulls Truncates any trailing NULLs (0x00 characters) when loading data into a
variable-length field. If this option is not specified, NULLs are kept.
This option cannot be specified together with striptblanks. These are mutually
exclusive options. This option replaces the obsolete padwithzero option, which is
supported for back-level compatibility only.
zoneddecimal Loads zoned decimal data, since the BINARYNUMERICS modifier does not
include the DECIMAL field type. This option is supported only with positional
ASC, using fixed length records specified by the RECLEN option.
Table 15. Valid file type modifiers for the load utility: DEL file formats (Delimited ASCII)
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation
mark ("). The specified character is used in place of double quotation marks to
enclose a character string.23 If you wish to explicitly specify the double quotation
mark (") as the character string delimiter, you should specify it as follows:
modified by chardel""
The single quotation mark (') can also be specified as a character string delimiter
as follows:
modified by chardel''
coldelx x is a single character column delimiter. The default value is a comma (,). The
specified character is used in place of a comma to signal the end of a column.23
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank
space instead of a plus sign (+). The default action is to prefix positive decimal
values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The
default value is a period (.). The specified character is used in place of a period as
a decimal point character.23
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter,
column delimiter. This modifier protects existing applications that depend on the
older priority by reverting the delimiter priorities to: character delimiter, record
delimiter, column delimiter. Syntax:
db2 load ... modified by delprioritychar ...
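For illustration, consider a small data file along the following lines (the values are an assumed sketch, not an actual export); the first field of the second record contains an embedded <row delimiter> inside character delimiters:
"Smith, Joshua",4000,34.98<row delimiter>
"Vincent,<row delimiter>, is a manager",4005,44.37<row delimiter>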
With the delprioritychar modifier specified, there will be only two rows in this
data file. The second <row delimiter> will be interpreted as part of the first data
column of the second row, while the first and the third <row delimiter> are
interpreted as actual record delimiters. If this modifier is not specified, there will
be three rows in this data file, each delimited by a <row delimiter>.
keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR,
LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks
that are not inside character delimiters are removed, and a NULL is inserted into
the table for all blank fields.
The following example illustrates how to load data into a table called TABLE1,
while preserving all leading and trailing spaces in the data file:
db2 load from delfile3 of del
modified by keepblanks
insert into table1
nochardel The load utility will assume all bytes found between the column delimiters to be
part of the column’s data. Character delimiters will be parsed as part of column
data. This option should not be specified if the data was exported using DB2
(unless nochardel was specified at export time). It is provided to support vendor
data files that do not have character delimiters. Improper usage might result in
data loss or corruption.
Table 16. Valid file type modifiers for the load utility: IXF file format
Modifier Description
forcein Directs the utility to accept data despite code page mismatches, and to suppress
translation between code pages.
Fixed length target fields are checked to verify that they are large enough for the
data. If nochecklengths is specified, no checking is done, and an attempt is made
to load each row.
nochecklengths If nochecklengths is specified, an attempt is made to load each row, even if the
source data has a column definition that exceeds the size of the target table
column. Such rows can be successfully loaded if code page conversion causes the
source data to shrink; for example, 4-byte EUC data in the source could shrink to
2-byte DBCS data in the target, and require half the space. This option is
particularly useful if it is known that the source data will fit in all cases despite
mismatched column definitions.
Notes:
1. Double quotation marks around the date format string are mandatory. Field
separators cannot contain any of the following: a-z, A-Z, and 0-9. The field
separator should not be the same as the character delimiter or field delimiter in
the DEL file format. A field separator is optional if the start and end positions
of an element are unambiguous. Ambiguity can exist if (depending on the
modifier) elements such as D, H, M, or S are used, because of the variable
length of the entries.
For time stamp formats, care must be taken to avoid ambiguity between the
month and the minute descriptors, since they both use the letter M. A month
field must be adjacent to other date fields. A minute field must be adjacent to
other time fields. Following are some ambiguous time stamp formats:
"M" (could be a month, or a minute)
"M:M" (Which is which?)
"M:YYYY:M" (Both are interpreted as month.)
"S:M:YYYY" (adjacent to both a time value and a date value)
In ambiguous cases, the utility will report an error message, and the operation
will fail.
Following are some unambiguous time stamp formats:
"M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and back slashes, must be
preceded by an escape character (for example, \).
2. The character must be specified in the code page of the source data.
The character code point (instead of the character symbol), can be specified
using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the
code point. For example, to specify the # character as a column delimiter, use
one of the following:
... modified by coldel# ...
... modified by coldel0x23 ...
... modified by coldelX23 ...
3. Delimiter restrictions for moving data lists restrictions that apply to the
characters that can be used as delimiter overrides.
4. The load utility does not issue a warning if an attempt is made to use
unsupported file types with the MODIFIED BY option. If this is attempted, the
load operation fails, and an error code is returned.
Table 17. LOAD behavior when using codepage and usegraphiccodepage
codepage=N usegraphiccodepage LOAD behavior
Absent Absent All data in the file is assumed to be in the database code
page, not the application code page, even if the CLIENT
option is specified.
Present Absent All data in the file is assumed to be in code page N.
Related reference:
v “db2Load - Load data into a table” on page 161
v “Delimiter restrictions for moving data” on page 259
v “LOAD ” on page 132
A load exception table can be assigned to the table space where the table being
loaded resides, or to another table space. In either case, the load exception table
should be assigned to the same database partition group and have the same
distribution key as the table being loaded.
Note: Any rows rejected because of invalid data before the building of an index
are not inserted into the exception table.
Rows are appended to existing information in the exception table; this can include
invalid rows from previous load operations. If you want only the invalid rows
from the current load operation, you must remove the existing rows before
invoking the utility.
The exception table used with the load utility is identical to the exception tables
used by the SET INTEGRITY statement.
An exception table should be used when loading data which has a unique index
and the possibility of duplicate records. If an exception table is not specified, and
duplicate records are found, the load operation continues, and only a warning
message is issued about the deleted duplicate records. The records themselves are
not logged.
After the load operation completes, information in the exception table can be used
to correct data that is in error. The corrected data can then be inserted into the
table.
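For example (a sketch; the table names are assumptions), if an exception table T1EXCP has already been created for target table T1, rows that violate the unique index can be captured as follows:
db2 load from data.del of del
   insert into t1
   for exception t1excp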
Related reference:
v “Exception tables” in SQL Reference, Volume 1
For example, if the file type modifier dumpfile=/u/usrname/dumpit is specified,
rows that were rejected by the Load Subagent on database partition five will be
be stored in a file named /u/usrname/dumpit.load.005, rows that were rejected by
the Load Subagent on database partition two will be stored in a file named
/u/usrname/dumpit.load.002, and rows that were rejected by the Partitioning
Subagent on database partition two will be stored in a file named
/u/usrname/dumpit.part.002, and so on.
For rows rejected by the Load Subagent, if the row is less than 32 768 bytes in
length, the record is copied to the dump file in its entirety; if it is longer, a row
fragment (including the final bytes of the record) is written to the file.
For rows rejected by the Partitioning Subagent, the entire row is copied to the
dump file regardless of the record size.
Related reference:
v “LOAD ” on page 132
The temporary files are written to a path that can be specified through the
temp-pathname parameter of the LOAD command, or in the piTempFilesPath
parameter of the db2Load API. The default path is a subdirectory of the database
directory.
The temporary files path resides on the server machine and is accessed by the DB2
instance exclusively. Therefore, it is imperative that any path name qualification
given to the temp-pathname parameter reflects the directory structure of the server,
not the client, and that the DB2 instance owner has read and write permission on
the path.
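A minimal sketch of specifying the path on the LOAD command (the path shown is an assumption and must exist on the server):
db2 load from data.del of del
   tempfiles path /u/db2inst1/load_tmp
   insert into mytable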
Note: In an MPP system, the temporary files path should reside on a local disk,
not on an NFS mount. If the path is on an NFS mount, there will be
significant performance degradation during the load operation.
Attention: The temporary files written to this path must not be tampered with
under any circumstances. Doing so will cause the load operation to malfunction,
and will place your database in jeopardy.
Related reference:
v “LOAD ” on page 132
v “db2Load - Load data into a table” on page 161
The following list outlines the log records that the load utility will create
depending on the size of the input data:
v Two log records will be created for every table space extent allocated or deleted
by the utility in a DMS table space.
v One log record will be created for every chunk of identity values consumed.
Related reference:
v “db2Load - Load data into a table” on page 161
v “LOAD ” on page 132
Before a load operation in ALLOW READ ACCESS mode begins, the load utility
will wait for all applications that began before the load operation to release locks
on the target table. Since locks are not persistent, they are supplemented by table
states that will remain even if a load operation is aborted. These states can be
checked by using the LOAD QUERY command. By using the LOCK WITH FORCE
option, the load utility will force applications holding conflicting locks off the table
that it is trying to load into.
At the beginning of a load operation, the load utility acquires an update (U) lock
on the table. It holds this lock until the data is committed. The update lock
allows applications with compatible locks to access the table during the load
operation. For example, applications that use read only queries will be able to
access the table, while applications that try to insert data into the table will be
denied. When the load utility acquires the update lock on the table, it will wait for
all applications that hold locks on the table prior to the start of the load operation
to release them, even if they have compatible locks. This is achieved by
temporarily upgrading the update lock to a special exclusive (Z) lock which does
not conflict with new table lock requests on the target table as long as the
requested locks are compatible with the load operation’s update lock. In addition,
the load operation upgrades the update lock to an exclusive (Z) lock when the data
is being committed, hence there can be some delay in commit time while the load
utility waits for applications with conflicting locks to finish.
Note: The load operation can timeout while it waits for the applications to release
their locks on the table prior to loading. However, the load operation will
not timeout while waiting for the exclusive lock needed to commit the data.
For a load operation in ALLOW NO ACCESS mode, all applications holding table
locks that exist at the start of the load operation will be forced.
For a load operation in ALLOW READ ACCESS mode applications holding the
following locks will be forced:
v Table locks that conflict with a table update lock (for example, import or insert).
v All table locks that exist at the commit phase of the load operation.
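For example (a sketch; the file and table names are assumptions), the following command keeps the pre-existing data readable during the load and forces off applications that hold conflicting locks:
db2 load from data.del of del
   insert into mytable
   allow read access
   lock with force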
Table States
In addition to locks, the load utility uses table states to control access to tables. A
table state can be checked by using the LOAD QUERY command. The states
returned by the LOAD QUERY command are as follows:
Normal
No table states affect the table.
Set Integrity Pending
The table has constraints which have not yet been verified. Use the SET
INTEGRITY statement to take the table out of Set Integrity Pending state.
The load utility places a table in the Set Integrity Pending state when it
begins a load operation on a table with constraints.
Load in Progress
There is a load operation in progress on this table.
Load Pending
A load operation has been active on this table but has been aborted before
the data could be committed. Issue a LOAD TERMINATE, LOAD
RESTART, or LOAD REPLACE command to bring the table out of this
state.
Read Access Only
The table data is available for read access queries. Load operations using
the ALLOW READ ACCESS option place the table in read access only
state.
Unavailable
The table is unavailable. The table can only be dropped or restored from a
backup. Rolling forward through a non-recoverable load operation will
place a table in the unavailable state.
Not Load Restartable
The table is in a partially loaded state that will not allow a load restart
operation. The table will also be in load pending state. Issue a LOAD
TERMINATE or a LOAD REPLACE command to bring the table out of the Not
Load Restartable state.
A table can be in several states at the same time. For example, if data is loaded
into a table with constraints and the ALLOW READ ACCESS option is specified,
the table state would be:
Tablestate:
Set Integrity Pending
Load in Progress
Read Access Only
After the load operation but before issuing the SET INTEGRITY statement, the
table state would be:
Tablestate:
Set Integrity Pending
Read Access Only
After the SET INTEGRITY statement has been issued the table state would be:
Tablestate:
Normal
When a table space is in backup pending state, it is still available for read access.
The table space can only be taken out of backup pending state by taking a backup
of the table space. Even if the load operation is aborted, the table space will remain
in backup pending state because the table space state is changed at the beginning
of the load operation, and cannot be rolled back if it fails. The load in progress
table space state prevents online backups while a load operation with the COPY
NO option specified is loading data. The load in progress state is removed when
the load operation completes or aborts.
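For example (a sketch; the database, table space, and target path names are assumptions), an online backup of the affected table space clears the backup pending state:
db2 backup database sample tablespace (ts1) online to /backups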
Related reference:
v “LOAD ” on page 132
The load and build phases of the load process place the target table in the load in
progress table state. The load utility also places table spaces in the load in progress
state when the COPY NO option is specified on a recoverable database. The table
spaces remain in this state for the duration of the load operation and are returned
to normal state if the transaction is committed or rolled back.
If the NO ACCESS option has been specified, the table cannot be accessed while
the load is in progress. If the ALLOW READ ACCESS option has been specified,
the data in the table that existed prior to the invocation of the load command will
be available in read only mode during the load operation. If the ALLOW READ
ACCESS option is specified and the load operation fails, the data that existed in
the table prior to the load operation will continue to be available in read only
mode after the failure.
To remove the load in progress table state (if the load operation has failed, or was
interrupted), do one of the following:
v Restart the load operation. First, address the cause of the failure; for example, if
the load utility ran out of disk space, add containers to the table space before
attempting a load restart operation.
v Terminate the load operation.
During a load operation, table spaces are placed in backup pending state after the
first commit if all of the following are true:
v The database is recoverable (database configuration parameter logarchmeth1 or
logarchmeth2 is not set to OFF) and
v The load option COPY YES is not specified, and
v The load option NONRECOVERABLE is not specified.
The fourth possible state associated with the load process (Set Integrity Pending
state) pertains to referential and check constraints, generated column constraints,
materialized query computation, or staging table propagation. For example, if an
existing table is a parent table containing a primary key referenced by a foreign
key in a dependent table, replacing data in the parent table places both tables (not
the table space) in Set Integrity Pending state. To validate a table for referential
integrity and check constraints, issue the SET INTEGRITY statement after the load
process completes, if the table has been left in Set Integrity Pending state.
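For example (the table name is an assumption), the following statement checks the pending constraints and takes the table out of Set Integrity Pending state:
db2 set integrity for mytable immediate checked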
Related concepts:
v “Checking for integrity violations following a load operation” on page 121
v “Table locking, table states and table space states” on page 203
Related reference:
v “LIST TABLESPACES command” in Command Reference
Figure 5. Increasing load performance through concurrent indexing and statistics collection.
Tables are normally built in three steps: data loading, index building, and statistics collection.
This causes multiple data I/O during the load operation, during index creation (there can be
several indexes for each table), and during statistics collection (which causes I/O on the table
data and on all of the indexes). A much faster alternative is to let the load utility complete all
of these tasks in one pass through the data.
When tuning index creation performance, the amount of memory dedicated to the
sorting of index keys during a load operation is controlled by the sortheap database
configuration parameter. For example, to direct the load utility to use 4000 pages of
main memory per index for key sorting, set the sortheap database configuration
parameter to be 4000 pages, disconnect all applications from the database, and
then issue the LOAD command. If an index is so large that it cannot be sorted in
memory, a sort spill occurs. That is, the data is divided among several ″sort runs″
and stored in a temporary table space that is merged later. If there is no way to
avoid a sort spill by increasing the size of the sortheap parameter, it is important
that the buffer pool for temporary table spaces be large enough to minimize the
amount of disk I/O that spilling causes. Furthermore, to achieve I/O parallelism
during the merging of sort runs, it is recommended that temporary table spaces be
declared with multiple containers, each residing on a different disk device. If there
is more than one index defined on a table, memory consumption increases
proportionally because the load operation keeps all keys in memory.
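For example (a sketch; the database name and value are assumptions), the sort heap could be raised before issuing the LOAD command:
db2 update db cfg for sample using sortheap 4000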
Load operations with the ALLOW READ ACCESS and INDEXING MODE
REBUILD options allow you to specify the USE <table space> option for storing a
shadow index. While the index still has to be copied to the target table space
before becoming visible, this option minimizes use of the target table space while a
load operation is in progress.
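A sketch of such an invocation (the file, table, and table space names are assumptions; the table space named after USE must be a system temporary table space):
db2 load from data.del of del
   insert into mytable
   indexing mode rebuild
   allow read access use tempspace1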
For Index Rebuild, load uses a single table scanner, which also does the sorting, to
pick up existing keys and create indexes. Multiple table scanners are used with
Index Manager code (IXM), which builds the indexes outside of the load operation.
The advantage of building the indexes with a CREATE INDEX statement instead of
a load operation is that the CREATE INDEX statement can use multiple processes
(also known as threads) to sort keys if the intra_parallel database manager
configuration parameter is on. The actual
building of the index is not executed in parallel.
Use of the SET INTEGRITY statement might lengthen the total time needed for a
table to become usable again. If all the load operations are performed in INSERT
mode, the SET INTEGRITY statement checks the table for integrity violations
incrementally (by checking only the appended portion of the table). If a table
cannot be checked for integrity violations incrementally, the entire table is checked,
and it might be some time before the table is usable again.
The load utility performs equally well in INSERT mode and in REPLACE mode.
However, if indexing mode is REBUILD, REPLACE mode will perform better than
INSERT mode because there is no need to scan existing data.
Figure 6. Record Order in the Source Data is Preserved When Intra-partition Parallelism is
Exploited During a Load Operation
DATA BUFFER
The DATA BUFFER parameter specifies the total amount of memory
allocated to the load utility as a buffer. It is recommended that this buffer
be several extents in size. An extent is the unit of movement for data within
DB2, and the extent size can be one or more 4KB pages. The DATA
BUFFER parameter is useful when working with large objects (LOBs); it
reduces I/O waiting time. The data buffer is allocated from the utility
heap. Depending on the amount of storage available on your system, you
should consider allocating more memory for use by the DB2 utilities. The
database configuration parameter util_heap_sz can be modified accordingly.
The default value for the Utility Heap Size configuration parameter is 5 000
4KB pages. Because load is only one of several utilities that use memory
from the utility heap, it is recommended that no more than fifty percent of
the pages defined by this parameter be available for the load utility, and
that the utility heap be defined large enough.
DISK_PARALLELISM
The DISK_PARALLELISM parameter specifies the number of processes or
threads used by the load utility to write data records to disk. Use this
parameter to exploit available containers when loading data, and
significantly improve load performance. The maximum number allowed is
the greater of four times the CPU_PARALLELISM value (actually used by
the load utility), or 50. By default, DISK_PARALLELISM is equal to the
sum of the table space containers on all table spaces containing objects for
the table being loaded, except where this value exceeds the maximum
number allowed.
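For example (a sketch; the names and values are assumptions), the utility heap can be enlarged and an explicit buffer size and degree of disk parallelism requested:
db2 update db cfg for sample using util_heap_sz 10000
db2 load from data.del of del
   insert into mytable
   data buffer 4000
   disk_parallelism 4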
FASTPARSE
Use the fastparse file type modifier to reduce the data checking that is
performed on user-supplied column values, and enhance performance.
This option should only be used when the data being loaded is known to
be valid. It can improve performance by about 10 or 20 percent.
NONRECOVERABLE
Use this parameter if you do not need to be able to recover load
transactions against a table. Load performance is enhanced, because no copy of
the loaded data is made and the table space is not placed in backup pending state.
Related concepts:
v “DB2 registry and environment variables” in Performance Guide
v “Multidimensional clustering considerations ” on page 125
Related reference:
v “SET INTEGRITY statement” in SQL Reference, Volume 2
v “BIND command” in Command Reference
v “UPDATE DATABASE CONFIGURATION command” in Command Reference
v “stat_heap_sz - Statistics heap size configuration parameter” in Performance Guide
v “util_heap_sz - Utility heap size configuration parameter” in Performance Guide
Data Records:
1...5...10...15...20...25...30...35...40
Test data 1 XXN 123abcdN
Test data 2 and 3 QQY XXN
Test data 4,5 and 6 WWN6789 Y
If an attempt is made to load the following data records into this table,
23, 24, bobby
, 45, john
4,, mary
The utility will load "sam" in the third column of the table, and the characters
"sdf" will be flagged in a warning. The record is not rejected. Another example:
22 3, 34,"bob"
The utility will load 22,34,"bob", and generate a warning that some data in
column one following the 22 was ignored. The record is not rejected.
In this case, rows 1 and 2 will be rejected, because the utility has been
instructed to override system-generated identity values in favor of
user-supplied values. If user-supplied values are not present, however, the row
must be rejected, because identity columns are implicitly not NULL.
5. If DATAFILE1 is loaded into TABLE2 without using any of the identity-related
file type modifiers, rows 1 and 2 will be loaded, but rows 3 and 4 will be
rejected, because they supply their own non-NULL values, and the identity
column is GENERATED ALWAYS.
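Loading from a cursor requires that the cursor be declared in the same session first, for example (the column names here match the command that follows):
DECLARE mycursor CURSOR FOR SELECT one,two,three FROM my.table1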
The following command loads all the data from MY.TABLE1 into MY.TABLE2:
load from mycursor of cursor method P(1,2,3) insert into
my.table2(one,two,three)
Notes:
1. Only one cursor name can be specified in a single LOAD command. That is,
load from mycurs1, mycurs2 of cursor... is not allowed.
2. P and N are the only valid METHOD values for loading from a cursor.
3. In this example, METHOD P and the insert column list (one,two,three) could
have been omitted since they represent default values.
4. MY.TABLE1 can be a table, view, alias, or nickname.
Related concepts:
v “Load overview” on page 102
Related reference:
v “Examples of loading data in a partitioned database environment” on page 234
v “LOAD ” on page 132
v “LOAD QUERY ” on page 158
Loading data into a multi-partition database takes place in two phases: A setup
phase, where database partition resources such as table locks are acquired, and a
load phase where the data is loaded into the database partitions. You can use the
ISOLATE_PART_ERRS option of the LOAD command to select how errors will be
handled during either of these phases, and how errors on one or more of the
database partitions will affect the load operation on the database partitions that are
not experiencing errors.
When loading data into a multi-partition database you can use one of the
following modes:
v PARTITION_AND_LOAD. Data is distributed (perhaps in parallel) and loaded
simultaneously on the corresponding database partitions.
v PARTITION_ONLY. Data is distributed (perhaps in parallel) and the output is
written to files in a specified location on each loading database partition. Each
file includes a partition header that specifies how the data was distributed across
the database partitions, and that the file can be loaded into the database using
the LOAD_ONLY mode.
The following terminology will be used when discussing the behavior and
operation of the load utility in a partitioned database environment with multiple
database partitions:
v The coordinator partition is the database partition to which the user connects to
perform the load operation. In the PARTITION_AND_LOAD,
PARTITION_ONLY, and ANALYZE modes, it is assumed that the data file
resides on this database partition unless the CLIENT option of the LOAD
command is specified. Specifying the CLIENT option of the LOAD command
indicates that the data to be loaded resides on a remotely connected client.
v In the PARTITION_AND_LOAD, PARTITION_ONLY, and ANALYZE modes, the
pre-partitioning agent reads the user data and distributes it in round-robin fashion
to the partitioning agents which will distribute the data. This process is always
performed on the coordinator partition. A maximum of one partitioning agent is
allowed per database partition for any load operation.
v In the PARTITION_AND_LOAD, LOAD_ONLY and
LOAD_ONLY_VERIFY_PART modes, load agents run on each output database
partition and coordinate the loading of data to that database partition.
v Load to file agents run on each output database partition during a
PARTITION_ONLY load operation. They receive data from partitioning agents
and write it to a file on their database partition.
v The SOURCEUSEREXIT option provides a facility through which the load utility
can execute a customized script or executable, referred to herein as the user exit.
Figure 7. Partitioned Database Load Overview. The source data is read by the
pre-partitioning agent, approximately half of the data is sent to each of two partitioning agents
which distribute the data and send it to one of three database partitions. The load agent at
each database partition loads the data.
Related concepts:
v “Data organization schemes” in Administration Guide: Planning
v “Load overview” on page 102
v “Loading data in a partitioned database environment - hints and tips” on page
237
v “Monitoring a load operation in a partitioned database environment using the
LOAD QUERY command” on page 225
v “Restarting or terminating a load operation in a partitioned database
environment” on page 227
Related tasks:
v “Loading data” on page 110
v “Loading data in a partitioned database environment” on page 219
Related reference:
v “Load configuration options for partitioned database environments” on page 229
Restrictions:
The following restrictions apply when using the load utility to load data in a
multi-partition database:
v The location of the input files to the load operation cannot be a tape device.
v The ROWCOUNT option is not supported unless the ANALYZE mode is being
used.
v If the target table has an identity column that is needed for distributing and the
identityoverride modifier is not specified, or if you are using multiple database
partitions to distribute and then load the data, the use of a SAVECOUNT greater
than zero on the LOAD command is not supported.
v If an identity column forms part of the distribution key, only
PARTITION_AND_LOAD mode is supported.
v The LOAD_ONLY and LOAD_ONLY_VERIFY_PART modes cannot be used with
the CLIENT option of the LOAD command.
v The LOAD_ONLY_VERIFY_PART mode cannot be used with the CURSOR input
source type.
v The distribution error isolation modes LOAD_ERRS_ONLY and
SETUP_AND_LOAD_ERRS cannot be used with the ALLOW READ ACCESS
and COPY YES options of the LOAD command.
v Multiple load operations can load data into the same table concurrently if the
database partitions specified by the OUTPUT_DBPARTNUMS and
PARTITIONING_DBPARTNUMS options do not overlap. For example, if a table
is defined on database partitions 0 through 3, one load operation can load data
into database partitions 0 and 1 while a second load operation can load data into
database partitions 2 and 3.
v Only Non-delimited ASCII (ASC) and Delimited ASCII (DEL) files can be
distributed across tables spanning multiple database partitions. PC/IXF files
cannot be distributed. To load a PC/IXF file into a table spanning multiple
database partitions, you can first load it into a table residing on a single
database partition by setting the environment variable
DB2_PARTITIONEDLOAD_DEFAULT=NO and using the
LOAD_ONLY_VERIFY_PART mode. Then you can perform a load operation
using the CURSOR file type to move the data into a table that is distributed
over multiple database partitions. You can also load a PC/IXF file into a table
that is distributed over multiple database partitions using the load operation in
the LOAD_ONLY_VERIFY_PART mode.
Procedure:
The following examples illustrate how to use the LOAD command to initiate
various types of load operations. The database used in the following examples has
five database partitions: 0, 1, 2, 3 and 4. Each database partition has a local
directory /db2/data/. Two tables, TABLE1 and TABLE2, are defined on database
partitions 0, 1, 2 and 3.
In this scenario you are connected to a database partition that might or might not
be a database partition where TABLE1 is defined. The data file load.del resides in
the current working directory of this database partition. To load the data from
load.del into all of the database partitions where TABLE1 is defined, issue the
following command:
LOAD FROM LOAD.DEL OF DEL REPLACE INTO TABLE1
Note: In this example, default values are used for all of the configuration
parameters for partitioned database environments: The MODE parameter
defaults to PARTITION_AND_LOAD, the OUTPUT_DBPARTNUMS option
defaults to all database partitions on which TABLE1 is defined, and the
PARTITIONING_DBPARTNUMS option defaults to the set of database partitions
selected according to the LOAD command rules for choosing database
partitions when none are specified.
In this scenario you are connected to a database partition that might or might not
be a database partition where TABLE1 is defined. The data file load.del resides in
the current working directory of this database partition. To distribute (but not load)
load.del to all the database partitions on which TABLE1 is defined, using database
partitions 3 and 4, issue the following command:
LOAD FROM LOAD.DEL of DEL REPLACE INTO TABLE1
PARTITIONED DB CONFIG MODE PARTITION_ONLY
PART_FILE_LOCATION /db2/data
PARTITIONING_DBPARTNUMS (3,4)
This results in a file load.del.xxx being stored in the /db2/data directory on each
database partition, where xxx is a three-digit representation of the database
partition number.
The LOAD command can be used to load data files without distribution headers
directly into several database partitions. If the data files exist in the /db2/data
directory on each database partition where TABLE1 is defined and have the name
load.del.xxx, where xxx is the database partition number, the files can be loaded
by issuing the following command:
LOAD FROM LOAD.DEL OF DEL modified by dumpfile=rejected.rows
REPLACE INTO TABLE1
PARTITIONED DB CONFIG MODE LOAD_ONLY_VERIFY_PART
PART_FILE_LOCATION /db2/data
To load the data into database partition 1 only, issue the following command:
LOAD FROM LOAD.DEL OF DEL modified by dumpfile=rejected.rows
REPLACE INTO TABLE1
PARTITIONED DB CONFIG MODE LOAD_ONLY_VERIFY_PART
PART_FILE_LOCATION /db2/data
OUTPUT_DBPARTNUMS (1)
Note: Rows that do not belong on the database partition from which they were
loaded are rejected and put into the dumpfile, if one has been specified.
To distribute all the rows in the answer set of the statement SELECT * FROM TABLE1
to a file on each database partition named /db2/data/select.out.xxx (where xxx is
the database partition number), for future loading into TABLE2, issue the following
commands:
DECLARE C1 CURSOR FOR SELECT * FROM TABLE1
LOAD FROM C1 OF CURSOR REPLACE INTO TABLE2
PARTITIONED DB CONFIG MODE PARTITION_ONLY
PART_FILE_LOCATION /db2/data/select.out
The data files produced by the above operation can then be loaded by issuing the
following LOAD command:
LOAD FROM C1 OF CURSOR REPLACE INTO TABLE2
PARTITIONED DB CONFIG MODE LOAD_ONLY
PART_FILE_LOCATION /db2/data/select.out
Related concepts:
v “The Design Advisor” in Performance Guide
v “Moving data using the CURSOR file type” on page 267
Related reference:
v “db2Load - Load data into a table” on page 161
During a load operation, message files are created by some of the load processes
on the database partitions where they are being executed. These files store all
information, warning and error messages produced during the execution of the
load operation. The load processes that produce message files that can be viewed
by the user are the load agent, pre-partitioning agent and partitioning agent.
You can connect to individual database partitions during a load operation and
issue the LOAD QUERY command against the target table. When issued from the
This command initiates a load operation that includes load agents running on
database partitions 0, 1, 2 and 3; a partitioning agent running on database partition
1; and a pre-partitioning agent running on database partition 0.
Database partition 0 contains one message file for the pre-partitioning agent and
one for the load agent on that database partition. To view the contents of these
files at the same time, start a new session and issue the following commands from
the CLP:
set client connect_node 0
connect to wsdb
load query table table1
Database partition 1 contains one file for the load agent and one for the
partitioning agent. To view the contents of these files, start a new session and issue
the following commands from the CLP:
set client connect_node 1
connect to wsdb
load query table table1
If a load operation is initiated through the db2Load API, the messages option
(piLocalMsgFileName) must be specified and the message files are brought from
the server to the client and stored for you to view.
For multi-partition database load operations initiated from the CLP, the message
files are not displayed to the console or retained. To save or view the contents of
these files after a multi-partition database load is complete, the MESSAGES option
of the LOAD command must be specified. If this option is used, once the load
operation is complete the message files on each database partition are transferred
to the client machine and stored in files using the base name indicated by the
MESSAGES option. For multi-partition database load operations, each file name is
formed from that base name plus a suffix that identifies the load process (load,
partitioning, or pre-partitioning agent) and the database partition that produced it.
Related reference:
v “db2LoadQuery - Get the status of a load operation” on page 181
When a load operation fails on at least one database partition during the setup
stage and the setup stage errors are not being isolated (that is, the error isolation
mode is either LOAD_ERRS_ONLY or NO_ISOLATION), the entire load operation
is aborted and the state of the table on each database partition is rolled back to the
state it was in prior to the load operation. In this case, there is no need to issue a
LOAD RESTART or LOAD TERMINATE command.
When a load operation fails on at least one database partition during the initial
setup stage and setup stage errors are being isolated (that is, the error isolation
mode is either SETUP_ERRS_ONLY or SETUP_AND_LOAD_ERRS), the load
operation continues on the database partitions where the setup stage was
successful, but the table on each of the failing database partitions is rolled back to
the state it was in prior to the load operation. In this case, there is no need to
perform a load restart or terminate operation, unless there is also a failure during
the load stage.
To complete the load process on the database partitions where the load operation
failed during the setup stage, issue a LOAD REPLACE or LOAD INSERT
command and use the OUTPUT_DBPARTNUMS option to specify only the
database partition numbers of the database partitions that failed during the
original load operation.
During the set up stage of the load operation there is a failure on database
partitions 1 and 3. Since setup stage errors are isolated, the load operation
completes successfully and data is loaded on database partitions 0 and 2. To
complete the load operation by loading data on database partitions 1 and 3, issue
the following command:
load from load.del of del replace into table1 partitioned db config
output_dbpartnums (1, 3)
If a load operation fails on at least one database partition during the load stage of
a multi-partition database load operation, a LOAD RESTART or LOAD
TERMINATE command must be issued on all database partitions involved in the
load operation whether or not they encountered an error while loading data. This
is necessary because loading data in a multi-partition database is done through a
single transaction. If a load restart operation is initiated, the load operation
continues where it left off on all database partitions.
During the load stage of the load operation there is a failure on database partitions
1 and 3. To resume the load operation, the LOAD RESTART command must
specify the same set of output database partitions as the original command since
the load operation must be restarted on all database partitions:
load from load.del of del restart into table1 partitioned db config
isolate_part_errs no_isolation
Note: For load restart operations, the options specified in the LOAD RESTART
command will be honored, so it is important that they are identical to the
ones specified in the original LOAD command.
If a failure occurs during the load stage, the load operation can be terminated by
issuing a LOAD TERMINATE command that specifies the same output parameters
as the original command:
load from load.del of del terminate into table1 partitioned db config
isolate_part_errs no_isolation
Related concepts:
Note: This mode cannot be used when both the ALLOW READ
ACCESS and the COPY YES options of the LOAD command are
specified.
v SETUP_AND_LOAD_ERRS. In this mode, database partition-level errors
during setup or loading data cause processing to stop only on the
affected database partitions. As with the LOAD_ERRS_ONLY mode,
when partition errors do occur while data is loaded, the data on all
database partitions remains invisible until a load restart operation is
performed.
Related concepts:
v “Moving data using a customized application (user exit)” on page 270
Related tasks:
v “Loading data in a partitioned database environment” on page 219
Related reference:
v “REDISTRIBUTE DATABASE PARTITION GROUP command” in Command
Reference
Example 1
To load data into TABLE1 from the user data file load.del which resides on
database partition 0, connect to database partition 0 and then issue the following
command:
load from load.del of del replace into table1
The output indicates that there was one load agent on each database partition and
each ran successfully. It also shows that there was one pre-partitioning agent
running on the coordinator partition and one partitioning agent running on
database partition 1. These processes completed successfully with a normal SQL
return code of 0. The statistical summary shows that the pre-partitioning agent
read 100,000 rows, the partitioning agent distributed 100,000 rows, and the sum of
all rows loaded by the load agents is 100,000.
Example 2
The output indicates that there was a load-to-file agent running on each output
database partition, and these agents ran successfully. There was a pre-partitioning
agent on the coordinator partition, and a partitioning agent running on database
partition 1. The statistical summary indicates that 100,000 rows were successfully
read by the pre-partitioning agent and 100,000 rows were successfully distributed
by the partitioning agent. Since no rows were loaded into the table, no summary of
the number of rows loaded appears.
Example 3
To load the files that were generated during the PARTITION_ONLY load operation
above, issue the following command:
load from load.del of del replace into table1 partitioned db config mode
load_only part_file_location /db/data
If one of the loading database partitions runs out of space in the table space
during the load operation, the following output is returned:
SQL0289N Unable to allocate new pages in table space "DMS4KT".
SQLSTATE=57011
The output indicates that the load operation returned error SQL0289. The database
partition summary indicates that database partition 1 ran out of space. Since the
default error isolation mode is NO_ISOLATION, the load operation is aborted on
all database partitions and a load restart or load terminate operation must be
invoked. If additional space is added to the containers of the table space on
database partition 1, the load operation can be restarted as follows:
load from load.del of del restart into table1
Related concepts:
v “Load considerations for partitioned tables” on page 126
v “Load overview” on page 102
v “Loading data in a partitioned database environment - hints and tips” on page
237
Related tasks:
You can use the load utility to load data in a multi-partition database.
Then issue the following commands from the DB2 Command Line Processor:
CONNECT RESET
CONNECT TO MYDB
Related reference:
v “LOAD ” on page 132
Troubleshooting
Note: You must specify the MESSAGES option of the LOAD command in order
for these files to exist.
v Interrupt the current load operation if you find errors suggesting that one of the
load processes encountered errors.
Related concepts:
Related tasks:
v “Loading data” on page 110
v “Loading data in a partitioned database environment” on page 219
Related reference:
v “Load configuration options for partitioned database environments” on page 229
To provide compatibility of PC/IXF files among all products in the DB2 family, the
export utility creates files with numeric data in Intel format, and the import utility
expects it in this format.
Since DEL export files are text files, they can be transferred from one operating
system to another. File transfer programs can handle operating system-dependent
differences if you transfer the files in text mode; the conversion of row separator
and end-of-file characters is not performed in binary mode.
Note: If character data fields contain row separator characters, these will also be
converted during file transfer. This conversion causes unexpected changes to
the data and, for this reason, it is recommended that you do not use DEL
export files to move data across platforms. Use the PC/IXF file format
instead.
As a result of this consistency in internal formats, exported WSF files from DB2
products can be used by Lotus 1-2-3® or Symphony running on a different
platform. DB2 products can also import WSF files that were created on different
platforms.
Transfer WSF files between operating systems in binary (not text) mode.
Note: Do not use the WSF file format to transfer data between DB2 databases on
different platforms, because a loss of data can occur. Use the PC/IXF file
format instead.
Related reference:
v “Export/Import/Load Utility File Formats” on page 293
The import utility can be used to insert XML documents into a regular relational
table. Only well-formed XML documents can be imported.
Data may be exported from tables that include one or more columns with an XML
data type. Exported XML data is stored in files separate from the main data file
containing the exported relational data. Information about each exported XML
document is represented in the main exported data file by an XML data specifier
(XDS). The XDS is a string that specifies the name of the system file in which the
XML document is stored, the exact location and length of the XML document
inside of this file, and the XML schema used to validate the XML document.
You can use the XMLFILE, XML TO, and XMLSAVESCHEMA parameters of the
EXPORT command to specify details about how exported XML documents are
stored. The xmlinsepfiles, xmlnodeclaration, xmlchar, and xmlgraphic file type
modifiers allow you to specify further details about the storage location and the
encoding of the exported XML data.
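For example, the following command (a sketch only; the table, path, and file
names are illustrative) exports a table containing an XML column, writes each
exported XML document to its own file under the specified XML path, omits the
XML declaration, and saves the schema information used for validation:
EXPORT TO /home/user/data/customer.del OF DEL
   XML TO /home/user/xmlpath XMLFILE custxml
   MODIFIED BY XMLINSEPFILES XMLNODECLARATION
   XMLSAVESCHEMA
   SELECT * FROM customer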
Related concepts:
v “Exporting XML data” on page 5
v “Importing XML data” on page 40
v “Native XML data store overview” in XML Guide
Related reference:
v “EXPORT ” on page 11
v “IMPORT ” on page 49
Related concepts:
v “Exporting XML data” on page 5
v “Importing XML data” on page 40
v “XML data movement overview” on page 242
The XDS is interpreted as a character field in the data file and is subject to the file
format’s parsing behavior for character columns. For the delimited ASCII file
format (DEL), for example, if the character delimiter is present in the XDS, it must
be doubled. The special characters (<, >, &, ', ") within attribute values must
always be escaped. Consider a FIL attribute with the value abc&"def".del. To
include this in a delimited ASCII file, where the character delimiter is the "
character, this XDS would be included as follows:
<XDS FIL=""abc&amp;&quot;def&quot;.del"" />
where the " characters are doubled and the special characters are escaped.
Related concepts:
v “Export Overview” on page 1
v “Exporting XML data” on page 5
v “Import Overview” on page 35
v “Importing XML data” on page 40
Related tasks:
v “Exporting data” on page 4
v “Importing data” on page 38
Individual QDM instances can be written to one or more XML files by means of
the EXPORT command.
Related concepts:
v “XML data movement overview” on page 242
v “XML data type” in XML Guide
The DB2 export and import utilities allow you to move data from a host or iSeries
server database to a file on the DB2 Connect workstation, and the reverse. You can
then use the data with any other application or relational database management
system that supports this export or import format. For example, you can export
data from a host or iSeries server database into a PC/IXF file, and then import it
into a DB2 for Windows database.
You can perform export and import operations from a database client or from the
DB2 Connect workstation.
Notes:
1. The data to be exported or imported must comply with the size and data type
restrictions that are applicable to both databases.
2. To improve import performance, you can use compound queries. Specify the
compound file type modifier in the import utility to group a specified number of
query statements into a block. This can reduce network overhead and improve
response time.
Restrictions:
With DB2 Connect, export and import operations must meet the following
conditions:
v The file type must be PC/IXF.
v A target table with attributes that are compatible with the data must be created
on the target server before you can import to it. The db2look utility can be used
to get the attributes of the source table. Import through DB2 Connect cannot
create a table, because INSERT is the only supported option.
If any of these conditions is not met, the operation fails, and an error message is
returned.
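For example, the db2look utility mentioned above can be used to extract the DDL
of the source table, so that a compatible target table can be created before the
import (a sketch; the database and table names are illustrative):
db2look -d sourcedb -e -t staff -o staff_ddl.sql
The generated CREATE TABLE statement can then be run against the target database
before importing the PC/IXF file.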
If you export or import mixed data (columns containing both single-byte and
double-byte data), consider the following:
Example
The following example illustrates how to move data from a workstation to a host,
AS/400, or iSeries server database.
1. Export the data into an external IXF format by issuing the following command:
db2 export to staff.ixf of ixf select * from userid.staff
2. Issue the following command to establish a DRDA® connection to the target
DB2 database:
db2 connect to cbc664 user admin using xxx
3. If it doesn’t already exist, create the target table on the target DB2 database
instance:
CREATE TABLE mydb.staff (ID SMALLINT NOT NULL, NAME VARCHAR(9),
DEPT SMALLINT, JOB CHAR(5), YEARS SMALLINT, SALARY DECIMAL(7,2),
COMM DECIMAL(7,2))
4. To import the data, issue the following command:
db2 import from staff.ixf of ixf insert into mydb.staff
Each row of data will be read from the file in IXF format, and an SQL INSERT
statement will be issued to insert the row into table mydb.staff. Single rows
will continue to be inserted until all of the data has been moved to the target
table.
Related concepts:
v “Moving data across platforms - file format considerations” on page 241
Related reference:
v “EXPORT ” on page 11
v “IMPORT ” on page 49
Authorization:
This tool calls the DB2 export, import, and load APIs, depending on the action
requested by the user. Therefore, the requesting user ID must have the correct
authorization required by those APIs, or the request will fail.
Command syntax:
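In general form (a simplified sketch, not the full syntax diagram), the command
is invoked as:
db2move dbname action [option value ...]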
Command parameters:
dbname
Name of the database.
action Must be one of:
EXPORT
Exports all tables that meet the filtering criteria in options. If no
options are specified, exports all the tables. Internal staging
information is stored in the db2move.lst file.
IMPORT
Imports all tables listed in the internal staging file db2move.lst.
Use the -io option for IMPORT specific actions.
LOAD
Loads all tables listed in the internal staging file db2move.lst. Use
the -lo option for LOAD specific actions.
COPY Duplicates one or more schemas into a target database. Use the -sn option to
specify one or more schemas. See the -co option for COPY specific
options. Use the -tn or -tf option to filter tables in LOAD_ONLY
mode.
See below for a list of files that are generated during each action.
-tc table-definers. The default is all definers.
This is an EXPORT action only. If specified, only those tables created by
the definers listed with this option are exported. If not specified, the
default is to use all definers. When specifying multiple definers, they must
be separated by commas; no blanks are allowed between definer IDs. This
option can be used with the “-tn” table-names option to select the tables
for export.
An asterisk (*) can be used as a wildcard character that can be placed
anywhere in the string.
-tn table-names. The default is all user tables.
This is an EXPORT or COPY action only. If specified, only those tables
whose names match exactly those in the specified string are exported or
copied. If not specified, the default is to use all user tables. When
specifying multiple table names, they must be separated by commas; no
blanks are allowed between table names. When using the COPY action, the
table names should be listed with their schema qualifier in the format
“schema”.“table”. When using the EXPORT action, the table names should
be listed unqualified. This option can be used with the “-tc” table-definers
option to select the tables for export. db2move will only act on those tables
whose names match the specified table names and whose definers match
the specified table definers.
For export, an asterisk (*) can be used as a wildcard character that can be
placed anywhere in the string.
-sn schema-names. The default for EXPORT is all schemas (not for COPY).
If specified, only those tables whose schema names match exactly will be
exported or copied. If multiple schema names are specified, they must be
separated by commas; no blanks are allowed between schema names.
Schema names of fewer than 8 characters are padded to 8 characters in
length.
In the case of export:
If the asterisk wildcard character (*) is used in the schema names, it will be
changed to a percent sign (%) and the schema name (with percent sign) will
be used in the LIKE predicate of the WHERE clause. If not specified, the
default is to use all schemas. If used with the -tn or -tc option, db2move
will only act on those tables whose schemas match the specified schema
names and whose definers match the specified definers. Because schema names
are padded, a schema name ’fred’ has to be specified as "-sn fr*d*" instead
of "-sn fr*d" when using an asterisk.
-ts tablespace-names. The default is all table spaces.
This is an EXPORT action only. If this option is specified, only those tables
that reside in the specified table space will be exported. If the asterisk
wildcard character (*) is used in the table space name, it will be changed to
a percent sign (%) and the table space name (with percent sign) will be used
in the LIKE predicate in the WHERE clause. If the -ts option is not specified,
the default is to use all table spaces. If multiple table space names are
specified, they must be separated by commas; no blanks are allowed
between table space names. Table space names of fewer than 8 characters are
padded to 8 characters in length. For example, a table space name ’mytb’
has to be specified as "-ts my*b*" instead of "-ts my*b" when using the
asterisk.
-tf filename
This is an EXPORT or COPY action only. If specified, only the tables listed
in the given file will be exported or copied. The tables should be listed one
per line, and each table should be fully qualified. Here is an example of
the contents of a file:
"SCHEMA1"."TABLE NAME1"
"SCHEMA NAME77"."TABLE155"
-io import-option. The default is REPLACE_CREATE.
Valid options are: INSERT, INSERT_UPDATE, REPLACE, CREATE, and
REPLACE_CREATE.
-lo load-option. The default is INSERT.
Valid options are: INSERT and REPLACE.
-co When the db2move action is COPY, the following -co follow-on options
will be available:
“TARGET_DB <db name> [USER <userid> USING <password>]”
Allows the user to specify the name of the target database and the
user/password. (The source database user/password can be
specified using the existing -p and -u options). The USER/USING
clause is optional. If USER specifies a userid, then the password
must either be supplied following the USING clause, or if it’s not
specified, then db2move will prompt for the password information.
The reason for prompting is for security reasons discussed below.
TARGET_DB is a mandatory option for the COPY action. The
TARGET_DB cannot be the same as the source database. The
ADMIN_COPY_SCHEMA procedure can be used for copying schemas
within the same database. The COPY action requires inputting at
least one schema (-sn) or one table (-tn or -tf).
Running multiple db2move commands to copy schemas from one
database to another will result in deadlocks. Only one db2move
command should be issued at a time. Changes to tables in the
source schema during copy processing may mean that the data in
the target schema is not identical following a copy.
“MODE”
DDL_AND_LOAD
Creates all supported objects from the source schema, and
populates the tables with the source table data. This is the
default option.
DDL_ONLY
Creates all supported objects from the source schema, but
does not repopulate the tables.
LOAD_ONLY
Loads all specified tables from the source database to the
target database. The tables must already exist on the target.
This is an optional option that is only used with the COPY action.
“SCHEMA_MAP”
Allows the user to rename the schema when copying to the target. Provide a
list of the source-target schema mappings, separated by commas and
surrounded by parentheses; for example, schema_map ((s1, t1), (s2, t2)). This
would mean objects from schema s1 will be copied to schema t1 on
the target; objects from schema s2 will be copied to schema t2 on
the target. The default, and recommended, target schema name is
the source schema name. The reason for this is db2move will not
attempt to modify the schema for any qualified objects within
object bodies. Therefore, using a different target schema name may
lead to problems if there are qualified objects within the object
body.
For example: create view FOO.v1 as ‘select c1 from FOO.t1’
In this case, if schema FOO is copied to BAR, v1 will be regenerated
as: create view BAR.v1 as ‘select c1 from FOO.t1’
This will either fail since schema FOO does not exist on the target
database, or have an unexpected result due to FOO being different
than BAR. Maintaining the same schema name as the source will
avoid these issues. If there are cross dependencies between
schemas, all interdependent schemas must be copied or there may
be errors copying the objects with the cross dependencies.
For example: create view FOO.v1 as ‘select c1 from BAR.t1’
In this case, the copy of v1 will either fail if BAR is not copied as
well, or have an unexpected result if BAR on the target is different
than BAR from the source. db2move will not attempt to detect
cross schema dependencies.
This is an optional option that is only used with the COPY action.
“NONRECOVERABLE”
This option allows the user to override the default behaviour of the
load to be done with COPY-NO. With the default behaviour, the
user will be forced to take backups of each tablespace that was
loaded into. When specifying this NONRECOVERABLE keyword,
the user will not be forced to take backups of the tablespaces
immediately. It is, however, highly recommended that the backups
be taken as soon as possible to ensure the newly created tables will
be properly recoverable. This is an optional option available to the
COPY action.
“OWNER”
Allows the user to change the owner of each new object created in
the target schema after a successful COPY. The default owner of
the target objects will be the connect user; if this option is
specified, ownership will be transferred to the new owner. This is
an optional option available to the COPY action.
“TABLESPACE_MAP”
The user may specify tablespace name mappings to be used
instead of the tablespaces from the source system during a copy.
This will be an array of tablespace mappings surrounded by
brackets, for example, tablespace_map ((TS1, TS2), (TS3, TS4)).
If a table receives warnings during export (such as data truncation), the user
might wish to allow such tables to be included in the db2move.lst file.
Specifying this option allows tables which receive warnings during export to
be included in the .lst file.
Examples:
v To export all tables in the SAMPLE database (using default values for all
options), issue:
db2move sample export
v To export all tables created by userid1 or user IDs LIKE us%rid2, and with the
name tbname1 or table names LIKE %tbname2, issue:
db2move sample export -tc userid1,us*rid2 -tn tbname1,*tbname2
v To import all tables in the SAMPLE database (LOB paths D:\LOBPATH1 and
C:\LOBPATH2 are to be searched for LOB files; this example is applicable to
Windows operating systems only), issue:
db2move sample import -l D:\LOBPATH1,C:\LOBPATH2
v To load all tables in the SAMPLE database (/home/userid/lobpath subdirectory
and the tmp subdirectory are to be searched for LOB files; this example is
applicable to Linux and UNIX-based systems only), issue:
db2move sample load -l /home/userid/lobpath,/tmp
v To import all tables in the SAMPLE database in REPLACE mode using the
specified user ID and password, issue:
db2move sample import -io replace -u userid -p password
v To duplicate schema schema1 from source database dbsrc to target database
dbtgt, issue:
db2move dbsrc COPY -sn schema1 -co TARGET_DB dbtgt USER myuser1 USING mypass1
v To duplicate schema schema1 from source database dbsrc to target database
dbtgt, rename the schema to newschema1 on the target, and map source
tablespace ts1 to ts2 on the target, issue:
db2move dbsrc COPY -sn schema1 -co TARGET_DB dbtgt USER myuser1 USING mypass1
SCHEMA_MAP ((schema1,newschema1)) TABLESPACE_MAP ((ts1,ts2), SYS_ANY)
Usage notes:
v Loading data into tables containing XML columns is not supported. The
workaround is to manually issue the IMPORT or EXPORT commands, or use
the db2move -Export and db2move -Import behaviour. If these tables also
contain generated always identity columns, data cannot be imported into the
tables.
v This tool exports, imports, or loads user-created tables. If a database is to be
duplicated from one operating system to another operating system, db2move
facilitates the movement of the tables. It is also necessary to move all other
objects associated with the tables, such as aliases, views, triggers, user-defined
functions, and so on. If the import utility with the REPLACE_CREATE option is
used to create the tables on the target database, then the limitations outlined in
Using import to recreate an exported table are imposed. If unexpected errors are
encountered during the db2move import phase when the REPLACE_CREATE
option is used, examine the appropriate tabnnn.msg message file and consider
that the errors might be the result of the limitations on table creation.
v When export, import, or load APIs are called by db2move, the FileTypeMod
parameter is set to lobsinfile. That is, LOB data is kept in files separate from the
PC/IXF file, for every table.
v The LOAD command must be run locally on the machine where the database
and the data file reside. When the load API is called by db2move, the
NONRECOVERABLE option is used. If logretain is on, and the -lo option is INSERT,
the load operation marks the table as inaccessible and it must be dropped. The
table space where the loaded tables reside is placed in backup pending state,
and is not accessible. A full database backup, or a table space backup, is
required to take the table space out of backup pending state. Performance for
the DB2MOVE command with the IMPORT action can be improved by altering
the default buffer pool, IBMDEFAULTBP; and by updating the configuration
parameters sortheap, util_heap_sz, logfilsz, and logprimary.
v For more information on the NONRECOVERABLE recoverability option see the
Data Movement Utilities Guide and Reference.
v Output:
LOAD.out The summarized result of the LOAD action.
tabnnn.msg The LOAD message file of the corresponding table.
Related reference:
v “db2look - DB2 statistics and DDL extraction tool command” in Command
Reference
Authorization:
None
Command syntax:
db2relocatedb -f configFilename
Command parameters:
-f configFilename
Specifies the name of the file containing the configuration information
necessary for relocating the database. This can be a relative or absolute file
name. The format of the configuration file is:
DB_NAME=oldName,newName
DB_PATH=oldPath,newPath
INSTANCE=oldInst,newInst
NODENUM=nodeNumber
LOG_DIR=oldDirPath,newDirPath
CONT_PATH=oldContPath1,newContPath1
CONT_PATH=oldContPath2,newContPath2
...
STORAGE_PATH=oldStoragePath1,newStoragePath1
STORAGE_PATH=oldStoragePath2,newStoragePath2
...
Where:
DB_NAME
Specifies the name of the database being relocated. If the database
name is being changed, both the old name and the new name must
be specified. This is a required field.
DB_PATH
Specifies the original path of the database being relocated. If the
database path is changing, both the old path and new path must
be specified. This is a required field.
INSTANCE
Specifies the instance where the database exists. If the database is
being moved to a new instance, both the old instance and new
instance must be specified. This is a required field.
NODENUM
Specifies the node number for the database node being changed.
The default is 0.
LOG_DIR
Specifies a change in the location of the log path. If the log path is
being changed, both the old path and new path must be specified.
This specification is optional if the log path resides under the
database path, in which case the path is updated automatically.
CONT_PATH
Specifies a change in the location of table space containers. Both
the old and new container path must be specified. Multiple
CONT_PATH lines can be provided if there are multiple container
path changes to be made. This specification is optional if the
container paths reside under the database path, in which case the
paths are updated automatically. If you are making changes to
more than one container where the same old path is being replaced
by a common new path, a single CONT_PATH entry can be used. In
such a case, an asterisk (*) could be used both in the old and new
paths as a wildcard.
STORAGE_PATH
This is only applicable to databases with automatic storage
enabled. It specifies a change in the location of one of the storage
paths for the database. Both the old storage path and the new
storage path must be specified. Multiple STORAGE_PATH lines
can be given if there are several storage path changes to be made.
Blank lines or lines beginning with a comment character (#) are ignored.
Usage notes:
If the instance that a database belongs to is changing, the following must be done
before running this command to ensure that changes to the instance and database
support files are made:
v If a database is being moved to another instance, create the new instance.
v Copy the files and devices belonging to the databases being copied onto the
system where the new instance resides. The path names must be changed as
necessary. However, if there are already databases in the directory where the
database files are moved to, you can mistakenly overwrite the existing sqldbdir
file, thereby removing the references to the existing databases. In this scenario,
the db2relocatedb utility cannot be used. Instead of db2relocatedb, an
alternative is a redirected restore operation.
v Change the permission of the files/devices that were copied so that they are
owned by the instance owner.
If the instance is changing, the tool must be run by the new instance owner.
Examples:
Example 1
To change the name of the database TESTDB to PRODDB in the instance db2inst1
that resides on the path /home/db2inst1, create the following configuration file:
DB_NAME=TESTDB,PRODDB
DB_PATH=/home/db2inst1
INSTANCE=db2inst1
NODENUM=0
Save the configuration file as relocate.cfg and use the following command to
make the changes to the database files:
db2relocatedb -f relocate.cfg
Example 2
To move the database DATAB1 from the instance jsmith on the path /dbpath to the
instance prodinst do the following:
1. Move the files in the directory /dbpath/jsmith to /dbpath/prodinst.
2. Use the following configuration file with the db2relocatedb command to make
the changes to the database files:
DB_NAME=DATAB1
DB_PATH=/dbpath
INSTANCE=jsmith,prodinst
NODENUM=0
Example 3
The database PRODDB exists in the instance inst1 on the path /databases/PRODDB.
The location of two table space containers needs to be changed as follows:
v SMS container /data/SMS1 needs to be moved to /DATA/NewSMS1.
v DMS container /data/DMS1 needs to be moved to /DATA/DMS1.
After the physical directories and files have been moved to the new locations, the
following configuration file can be used with the db2relocatedb command to make
changes to the database files so that they recognize the new locations:
DB_NAME=PRODDB
DB_PATH=/databases/PRODDB
INSTANCE=inst1
NODENUM=0
CONT_PATH=/data/SMS1,/DATA/NewSMS1
CONT_PATH=/data/DMS1,/DATA/DMS1
Example 4
The database TESTDB exists in the instance db2inst1 and was created on the path
/databases/TESTDB. Table spaces were then created with the following containers:
TS1
TS2_Cont0
TS2_Cont1
/databases/TESTDB/TS3_Cont0
/databases/TESTDB/TS4/Cont0
/Data/TS5_Cont0
/dev/rTS5_Cont1
TESTDB is to be moved to a new system. The instance on the new system will be
newinst and the location of the database will be /DB2.
When moving the database, all of the files that exist in the /databases/TESTDB/
db2inst1 directory must be moved to the /DB2/newinst directory. This means that
the first 5 containers will be relocated as part of this move. (The first 3 are relative
to the database directory and the next 2 are relative to the database path.) Since
these containers are located within the database directory or database path, they
do not need to be listed in the configuration file. If the 2 remaining containers are
to be moved to different locations on the new system, they must be listed in the
configuration file.
After the physical directories and files have been moved to their new locations, the
following configuration file can be used with db2relocatedb to make changes to
the database files so that they recognize the new locations:
DB_NAME=TESTDB
DB_PATH=/databases/TESTDB,/DB2
INSTANCE=db2inst1,newinst
NODENUM=0
CONT_PATH=/Data/TS5_Cont0,/DB2/TESTDB/TS5_Cont0
CONT_PATH=/dev/rTS5_Cont1,/dev/rTESTDB_TS5_Cont1
Example 5
The database TESTDB has two database partitions on database partition servers 10
and 20. The instance is servinst and the database path is /home/servinst on both
database partition servers. The name of the database is being changed to SERVDB
and the database path is being changed to /databases on both database partition
servers. In addition, the log directory is being changed on database partition server
20 from /testdb_logdir to /servdb_logdir.
Since changes are being made to both database partitions, a configuration file must
be created for each database partition and db2relocatedb must be run on each
database partition server with the corresponding configuration file.
On database partition server 10, the following configuration file will be used:
DB_NAME=TESTDB,SERVDB
DB_PATH=/home/servinst,/databases
INSTANCE=servinst
NODENUM=10
On database partition server 20, the following configuration file will be used:
DB_NAME=TESTDB,SERVDB
DB_PATH=/home/servinst,/databases
INSTANCE=servinst
NODENUM=20
LOG_DIR=/testdb_logdir,/servdb_logdir
Example 6
The database MAINDB exists in the instance maininst on the path /home/maininst.
The location of four table space containers needs to be changed as follows:
After the physical directories and files are moved to the new locations, the
following configuration file can be used with the db2relocatedb command to make
changes to the database files so that they recognize the new locations.
Related reference:
v “db2inidb - Initialize a mirrored database command” in Command Reference
It is the user’s responsibility to ensure that the chosen delimiter character is not
part of the data to be moved. If it is, unexpected errors might occur. The following
restrictions apply to column, string, DATALINK, and decimal point delimiters
when moving data:
v Delimiters are mutually exclusive.
v A delimiter cannot be binary zero, a line-feed character, a carriage-return, or a
blank space.
v The default decimal point (.) cannot be a string delimiter.
v The following characters are specified differently by an ASCII-family code page
and an EBCDIC-family code page:
– The Shift-In (0x0F) and the Shift-Out (0x0E) character cannot be delimiters for
an EBCDIC MBCS data file.
– Delimiters for MBCS, EUC, or DBCS code pages cannot be greater than 0x40,
except the default decimal point for EBCDIC MBCS data, which is 0x4b.
– Default delimiters for data files in ASCII code pages or EBCDIC MBCS code
pages are:
" (0x22, double quotation mark; string delimiter)
, (0x2c, comma; column delimiter)
– Default delimiters for data files in EBCDIC SBCS code pages are:
" (0x7F, double quotation mark; string delimiter)
, (0x6B, comma; column delimiter)
– The default decimal point for ASCII data files is 0x2e (period).
– The default decimal point for EBCDIC data files is 0x4B (period).
– If the code page of the server is different from the code page of the client, it is
recommended that the hex representation of non-default delimiters be
specified. For example,
db2 load from ... modified by chardel0x0C coldelX1e ...
The following information about support for double character delimiter recognition
in DEL files applies to the export, import, and load utilities:
v Character delimiters are permitted within the character-based fields of a DEL
file. This applies to fields of type CHAR, VARCHAR, LONG VARCHAR, or
CLOB (except when lobsinfile is specified). Any pair of character delimiters
found between the enclosing character delimiters is imported or loaded into the
database. For example,
"What a ""nice"" day!"
will be imported as:
What a "nice" day!
In the case of export, the rule applies in reverse. For example,
I am 6" tall.
will be exported to a DEL file as:
"I am 6"" tall."
v In a DBCS environment, the pipe (|) character delimiter is not supported.
Related reference:
v “File type modifiers for the export utility” on page 27
v “File type modifiers for the import utility” on page 87
v “File type modifiers for the load utility” on page 188
The IMPORT CREATE option allows you to create both the table hierarchy and the
type hierarchy.
One method is to proceed from the top of the hierarchy (or the root table), down
the hierarchy (subtables) to the bottom subtable, then back up to its supertable,
down to the next “right-most” subtable(s), then back up to next higher supertable,
down to its subtables, and so on.
The following figure shows a hierarchy with four valid traverse orders:
v Person, Employee, Manager, Architect, Student.
v Person, Student, Employee, Manager, Architect (this traverse order is marked
with the dotted line).
v Person, Employee, Architect, Manager, Student.
v Person, Student, Employee, Architect, Manager.
Figure 12. A hierarchy of typed tables. Person (Person_t: Oid, Name, Age) is the
root table; Employee (Employee_t: SerialNum, Salary, REF(Department_t)) and
Student (Student_t: SerialNum, Marks) are subtables of Person; Manager
(Manager_t: Bonus) and Architect (Architect_t: StockOption) are subtables of
Employee.
Related concepts:
v “Export Overview” on page 1
v “Import Overview” on page 35
Traverse Order
There is a default traverse order, in which all relevant types refer to all reachable
types in the hierarchy from a given starting point in the hierarchy. The default
order includes all tables in the hierarchy, and each table is ordered by the scheme
used in the OUTER order predicate. There is also a user-specified traverse order,
in which you define the order of proceeding through the hierarchy.
If you are specifying the traverse order, remember that the subtables must be
traversed in PRE-ORDER fashion (that is, each branch in the hierarchy must be
traversed to the bottom before a new branch is started).
Exporting data to the PC/IXF file format creates a record of all relevant types, their
definitions, and relevant tables. Export also completes the mapping of an index
value to each table. During import, this mapping is used to ensure accurate
movement of the data to the target database. When working with the PC/IXF file
format, you should use the default traverse order.
With the ASC, DEL, or WSF file format, the order in which the typed rows and the
typed tables were created could be different, even though the source and target
hierarchies might be structurally identical. This results in time differences that the
default traverse order will identify when proceeding through the hierarchies. The
creation time of each type determines the order taken through the hierarchy at
both the source and the target when using the default traverse order. Ensure that
the creation order of each type in both the source and the target hierarchies is
identical, and that there is structural identity between the source and the target. If
these conditions cannot be met, select a user-specified traverse order, with which
the import utility guarantees the accurate movement of data to the target database.
Although you determine the starting point and the path down the hierarchy when
defining the traverse order, each branch must be traversed to the end before the
next branch in the hierarchy can be started. The export and import utilities look for
violations of this condition within the specified traverse order.
Related reference:
v “Delimited ASCII (DEL) File Format” on page 294
v “Non-delimited ASCII (ASC) file format” on page 299
v “PC Version of IXF File Format” on page 302
v “Worksheet File Format (WSF)” on page 339
The import utility controls what is placed in the target database. You can specify
an attributes list at the end of each subtable name to restrict the attributes that are
moved to the target database. If no attributes list is used, all of the columns in
each subtable are moved.
The import utility controls the size and the placement of the hierarchy being
moved through the CREATE, INTO table-name, UNDER, and AS ROOT TABLE
parameters.
Related reference:
v “IMPORT ” on page 49
Figure 13. A hierarchy with two root tables. Department (Department_t: Oid, Name,
Headcount) and Person (Person_t: Oid, Name, Age) are root tables; Employee
(Employee_t: SerialNum, Salary, REF(Department_t)) and Student (Student_t:
SerialNum, Marks) are subtables of Person; Manager (Manager_t: Bonus) and
Architect (Architect_t: StockOption) are subtables of Employee.
Example 1
Each type in the hierarchy is created if it does not exist. If these types already
exist, they must have the same definition in the target database as in the source
database. An SQL error (SQL20013N) is returned if they are not the same. Since a
new hierarchy is being created, none of the subtables defined in the data file being
moved to the target database (Target_db) can exist. Each of the tables in the source
database hierarchy is created. Data from the source database is imported into the
correct subtables of the target database.
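A sketch of the kind of commands this example describes (the input file name is
illustrative; the hierarchy is the one shown in Figure 12):
DB2 CONNECT TO Target_db
DB2 IMPORT FROM entire_hierarchy.ixf OF IXF CREATE INTO
   HIERARCHY STARTING Person AS ROOT TABLE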
Example 2
The target tables Person, Employee, and Architect must all exist. Data is imported
into the Person, Employee, and Architect subtables. That is, the following will be
imported:
v All columns in Person into Person.
v All columns in Person plus Salary in Employee into Employee.
v All columns in Person plus Salary in Employee, plus all columns in Architect
into Architect.
Columns SerialNum and REF(Employee_t) will not be imported into Employee or its
subtables (that is, Architect, which is the only subtable having data imported into
it).
Note: Because Architect is a subtable of Employee, and the only import column
specified for Employee is Salary, Salary will also be the only
Employee-specific column imported into Architect. That is, neither
SerialNum nor REF(Employee_t) columns are imported into either Employee
or Architect rows.
Data for the Manager and the Student tables is not imported.
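A sketch of the kind of commands this example describes (the file name is
illustrative; the attribute list on Employee restricts the imported
Employee-specific columns to Salary):
DB2 CONNECT TO Source_db
DB2 EXPORT TO entire_hierarchy.del OF DEL HIERARCHY (Person, Employee, Architect)
DB2 CONNECT TO Target_db
DB2 IMPORT FROM entire_hierarchy.del OF DEL INSERT INTO (Person,
   Employee(Salary), Architect) IN HIERARCHY (Person, Employee, Architect)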
Example 3
This example shows how to export from a regular table, and import as a single
subtable in a hierarchy. The EXPORT command operates on regular (non-typed)
tables, so there is no Type_id column in the data file. The modifier no_type_id is
used to indicate this, so that the import utility does not expect the first column to
be the Type_id column.
DB2 CONNECT TO Source_db
DB2 EXPORT TO Student_sub_table.del OF DEL SELECT * FROM
Regular_Student
DB2 CONNECT TO Target_db
DB2 IMPORT FROM Student_sub_table.del OF DEL METHOD P(1,2,3,5,4)
MODIFIED BY NO_TYPE_ID INSERT INTO HIERARCHY (Student)
In this example, the target table Student must exist. Since Student is a subtable, the
modifier no_type_id is used to indicate that there is no Type_id in the first column.
However, you must ensure that there is an existing Object_id column, in addition
to all of the other attributes that exist in the Student table. Object-id is expected to
be the first column in each row imported into the Student table. The METHOD
clause reverses the order of the last two attributes.
Related concepts:
v “Moving data between typed tables” on page 260
The IBM replication tools are a set of programs and DB2 database tools that copy
data between distributed relational database management systems:
v Between DB2 database platforms.
v Between DB2 database platforms and host databases supporting Distributed
Relational Database Architecture™ (DRDA) connectivity.
v Between host databases that support DRDA connectivity.
Data can also be replicated to non-IBM relational database management systems by
way of WebSphere Federation Server.
You can use the IBM replication tools to define, synchronize, automate, and
manage copy operations from a single control point for data across your enterprise.
The replication tools in IBM DB2 V9.1 offer replication between relational
databases. They also work in conjunction with IMS™ DataPropagator™ (formerly
DPropNR) to replicate IMS and VSAM data, and with Lotus NotesPump to
replicate to and from Lotus Notes® databases.
Replication allows you to give end users and applications access to production
data without putting extra load on the production database. You can copy the data
to a database that is local to a user or an application, rather than have them access
the data remotely. A typical replication scenario involves a source table with copies
in one or more remote databases; for example, a central bank and its local
branches. At predetermined times, automatic updates of the databases take place,
and all changes to the source database are copied to the target database tables.
The replication tools allow you to customize the copy table structure. You can use
SQL when copying to the target database to enhance the data being copied. You
can produce read-only copies that duplicate the source table, capture data at a
specified point in time, provide a history of changes, or stage data to be copied to
additional target tables. Moreover, you can create read-write copies that can be
updated by end users or applications, and then have the changes replicated back
to the master table, or to peer tables at multiple servers. You can replicate views of
source tables, or views of copies. Event-driven replication is also possible.
You can replicate data between DB2 databases on the following platforms: AIX®,
iSeries, HP-UX, Linux, Windows, OS/390, SCO UnixWare, Solaris operating
system, Sequent®, VM, and VSE. You can also replicate data between DB2 and the
following non-DB2 databases: Informix®, Microsoft® Jet, Microsoft SQL Server,
Oracle, Sybase, and Sybase SQLAnywhere. In conjunction with other IBM
products, you can replicate DB2 data to and from IMS, VSAM, or Lotus Notes.
Finally, you can also replicate data to DB2 Everywhere on Windows CE, or Palm
OS devices.
Related concepts:
v “The IBM Replication Tools by Component” on page 266
The primary components of Q replication are the Q Capture program and the Q
Apply program. The primary components of SQL replication are the Capture
program and Apply program. Both types of replication share the Replication Alert
Monitor tool. You can set up and administer these replication components using
the Replication Center and the ASNCLP command-line program.
Q Capture program:
Reads the DB2 recovery log looking for changes to DB2 source tables and
translates committed source data into WebSphere MQ messages that can be
published in XML format to a subscribing application, or replicated in a compact
format to the Q Apply program.
Q Apply program:
Takes WebSphere MQ messages from a queue, transforms the messages into SQL
statements, and updates a target table or stored procedure. Supported targets
include DB2 databases or subsystems and Oracle, Sybase, Informix and Microsoft
SQL Server databases that are accessed through federated server nicknames.
Capture program:
Reads the DB2 recovery log for changes made to registered source tables or views
and then stages committed transactional data in relational tables called change-data
(CD) tables, where they are stored until the target system is ready to copy them.
SQL replication also provides Capture triggers that populate a staging table called
a consistent-change-data (CCD) table with records of changes to non-DB2 source
tables.
Apply program:
Reads data from staging tables and makes the appropriate changes to targets. For
non-DB2 data sources, the Apply program reads the CCD table through that table's
nickname on the federated database and makes the appropriate changes to the
target table.
Replication Alert Monitor:
A utility that checks the health of the Q Capture, Q Apply, Capture, and Apply
programs. It checks for situations in which a program terminates, issues a warning
or error message, reaches a threshold for a specified value, or performs a certain
action, and then issues notifications to an email server, pager, or the z/OS console.
Related concepts:
v “Using replication to move data” on page 265
There are three approaches for moving data using the CURSOR file type. The first
approach uses the Command Line Processor (CLP), the second uses the API, and the
third uses the ADMIN_CMD procedure. The key differences between the CLP and
the ADMIN_CMD procedure are outlined in the following table.
Table 18. Differences between the CLP and the ADMIN_CMD procedure

Syntax
   CLP: The query statement as well as the source database used by the cursor
   are defined outside of the LOAD command, using a DECLARE CURSOR statement.
   ADMIN_CMD procedure: The query statement as well as the source database used
   by the cursor are defined within the LOAD command, using the
   LOAD from (DATABASE database-alias query-statement) syntax.

User authorization for accessing a different database
   CLP: If the data is in a different database than the one you are currently
   connected to, the DATABASE keyword must be used in the DECLARE CURSOR
   statement. You can specify the user ID and password in the same statement as
   well. If the user ID and password are not specified in the DECLARE CURSOR
   statement, the user ID and password explicitly specified for the source
   database connection are used to access the target database.
   ADMIN_CMD procedure: If the data is in a different database than the one you
   are currently connected to, the DATABASE keyword must be used in the LOAD
   command before the query statement. The user ID and password explicitly
   specified for the source database connection are required to access the
   target database. You cannot specify a user ID or password for the source
   database. Therefore, if no user ID and password were specified when the
   connection to the target database was made, or the user ID and password
   specified cannot be used to authenticate against the source database, the
   ADMIN_CMD procedure cannot be used to perform the load.
For example:
1. Suppose a source and target table both reside in the same database with the
following definitions:
Table ABC.TABLE1 has 3 columns:
v ONE INT
v TWO CHAR(10)
v THREE DATE
Table ABC.TABLE2 has 3 columns:
v ONE VARCHAR
v TWO INT
v THREE DATE
Executing the following CLP commands will load all the data from
ABC.TABLE1 into ABC.TABLE2:
DECLARE mycurs CURSOR FOR SELECT TWO, ONE, THREE FROM abc.table1
LOAD FROM mycurs OF cursor INSERT INTO abc.table2
Note: The above example shows how to load from an SQL query through the
CLP. However, loading from an SQL query can also be accomplished
through the db2Load API. Define the piSourceList of the sqlu_media_list
structure to use the sqlu_statement_entry structure and SQLU_SQL_STMT
media type and define the piFileType value as SQL_CURSOR.
2. Suppose the source and target tables reside in different databases with the
following definitions:
You can declare a nickname against the source database, and then declare a cursor
against this nickname, and invoke the LOAD command with the FROM CURSOR
option, as demonstrated in the following example:
<enable federation and define datasource>
CREATE NICKNAME myschema1.table1 FOR dsdbsource.abc.table1
DECLARE mycurs CURSOR FOR SELECT TWO,ONE,THREE FROM myschema1.table1
LOAD FROM mycurs OF cursor INSERT INTO abc.table2
Or, you can use the DATABASE option of the DECLARE CURSOR statement, as
demonstrated in the following example:
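A sketch of such a declaration follows (the source database alias, user ID, and
password are illustrative):
DECLARE mycurs CURSOR DATABASE dbsource USER dbuser USING dbpasswd
   FOR SELECT TWO, ONE, THREE FROM abc.table1
LOAD FROM mycurs OF cursor INSERT INTO abc.table2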
Using the DATABASE option of the DECLARE CURSOR statement (also known as
the remotefetch media type when using the Load API) has some benefits over the
nickname approach:
Performance
Fetching of data using the remotefetch media type is tightly integrated within a
load operation. There are fewer layers of transition to fetch a record compared to
the nickname approach. Additionally, when source and target tables are distributed
identically in a multi-partition database, the load utility can parallelize the fetching
of data, which can further improve performance.
Ease of use
While this method can be used with cataloged databases, the use of nicknames
provides a robust facility for fetching from various data sources which cannot
simply be cataloged.
Restrictions:
When loading from a cursor defined using the DATABASE option (or equivalently
when using the sqlu_remotefetch_entry media entry with the db2Load API), the
following restrictions apply:
1. The SOURCEUSEREXIT option cannot be specified concurrently with this option.
2. The METHOD N option is not supported.
3. The USEDEFAULTS option is not supported.
Related tasks:
v “Copying a schema” in Administration Guide: Implementation
As Figure 14 shows, the load utility creates one or more named pipes and
spawns a process to execute your customized executable. Your user exit feeds data
into the named pipe(s) while the load utility simultaneously reads from them.
Figure 14. The Load utility reads from the pipe and processes the incoming data.
The data fed into the pipe must reflect the load options specified, including the file
type and any file type modifiers. The load utility does not directly read the data
files specified. Instead, the data files specified are passed as arguments to your
user exit when it is executed.
The user exit must reside in the bin subdirectory of the DB2 installation directory
(often known as sqllib). The load utility invokes the user exit executable with the
following command line arguments:
<base pipename> <number of source media>
<source media 1> <source media 2> ... <user exit ID>
<number of user exits> <database partition number>
Where:
< base pipename >
Is the base name for named-pipes that the Load utility creates and reads
data from. The utility creates one pipe for every source file provided to the
LOAD command, and each of these pipes is appended with .xxx, where
xxx is the index of the source file provided. For example, if there are 2
source files provided to the LOAD command, and the <base pipename>
argument passed to the user exit is pipe123, then the two named pipes that
your user exit should feed with data are pipe123.000 and pipe123.001. In
a partitioned database environment, the load utility appends the database
partition (DBPARTITION) number .yyy to the base pipe name, resulting in
the pipe name pipe123.xxx.yyy.
<number of source media>
Is the number of media arguments which follow.
<source media 1> <source media 2> ...
Is the list of one or more source files specified in the LOAD command.
Each source file is placed inside double quotation marks.
<user exit ID>
Is a special value useful when the PARALLELIZE option is enabled. This
integer value (from 1 to N, where N is the total number of user exits being
spawned) identifies a particular instance of a running user exit. When the
PARALLELIZE option is not enabled, this value defaults to 1.
<number of user exits>
Is a special value useful when the PARALLELIZE option is enabled. This
value represents the total number of concurrently running user exits. When
the PARALLELIZE option is not enabled, this value defaults to 1.
<database partition number>
Is a special value useful when the PARALLELIZE option is enabled. This is
the database partition (DBPARTITION) number on which the user exit is
executing. When the PARALLELIZE option is not enabled, this value
defaults to 0.
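The following is a minimal sketch, in C, of a user exit that follows the argument
protocol described above for the simple case (single database partition,
PARALLELIZE not enabled, and ignoring the quoting of the source media arguments
described above): it copies each source medium into the corresponding named pipe.
It is an illustration only, with error handling reduced to a bare minimum, and is
not the shipped sample.
/* Minimal sketch of a SOURCEUSEREXIT executable (illustrative only).
   Invoked by the load utility as:
   <base pipename> <number of source media> <source media ...>
   <user exit ID> <number of user exits> <database partition number> */
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
  char pipeName[1024];
  int numMedia, i;
  if (argc < 3)
  {
    return 1;
  }
  numMedia = atoi(argv[2]);
  /* One named pipe per source medium: <base pipename>.000, .001, ... */
  for (i = 0; i < numMedia; i++)
  {
    FILE *src, *pipe;
    char buf[8192];
    size_t n;
    sprintf(pipeName, "%s.%03d", argv[1], i);
    src = fopen(argv[3 + i], "rb");  /* source medium passed as an argument */
    pipe = fopen(pipeName, "wb");    /* named pipe created by the load utility */
    if (src == NULL || pipe == NULL)
    {
      return 1;
    }
    /* Feed the data, already in the format expected by the LOAD command,
       into the named pipe; a real user exit could transform the data here. */
    while ((n = fread(buf, 1, sizeof(buf), src)) > 0)
    {
      fwrite(buf, 1, n, pipe);
    }
    fclose(src);
    fclose(pipe);
  }
  return 0;
}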
A user could pass this information using the INPUT FROM BUFFER
option as shown in the following LOAD command:
LOAD FROM myfile1 OF DEL INSERT INTO table1
SOURCEUSEREXIT myuserexit1 REDIRECT INPUT FROM BUFFER myuseridmypasswd
Note: The load utility limits the size of the <buffer> to the maximum size
of a LOB value. However, from within the command line processor
(CLP), the size of the <buffer> is restricted to the maximum size of a
CLP statement. From within CLP, it is also recommended that the
<buffer> contain only traditional ASCII characters. These issues can
be avoided if the load utility is invoked using the db2Load API, or if
the INPUT FROM FILE option is used instead.
INPUT FROM FILE <filename>
Allows you to pass the contents of a client side file directly into the STDIN
input stream of your user exit. This option is almost identical to the
INPUT FROM BUFFER option, however this option avoids the potential
CLP limitation. The filename must be a fully qualified client side file and
must not be larger than the maximum size of a LOB value.
OUTPUT TO FILE < filename>
Allows you to capture the STDOUT and STDERR streams from your user
exit process into a server side file. After spawning the process which
executes the user exit executable, the load utility redirects the STDOUT
and STDERR handles from this new process into the filename specified.
This option is useful for debugging and logging errors and activity within
your user exit. The filename must be a fully qualified server side file. When the
PARALLELIZE option is enabled, one file exists per user exit and each file
appends a 3 digit numeric identifier, such as filename.000.
PARALLELIZE
This option can increase the throughput of data coming into the load
utility by invoking multiple user exit processes simultaneously. This option
is only applicable to a multi-partition database. The number of user exit
instances invoked is equal to the number of distribution agents if data is to
be distributed across multiple database partitions during the load
operation, otherwise it is equal to the number of loading agents.
Each concurrently running user exit instance should feed a mutually exclusive
subset of the source data into its named pipe, for example by using its
<user exit ID> value (i) and the <number of user exits> value (N):
foreach record
{
   if ((unique-integer MOD N) == i)
   {
      write this record to my named-pipe
   }
}
The number of user exit processes spawned depends on the distribution mode
specified for database partitioning:
1. As Figure 15 shows, one user exit process is spawned for every
distribution-agent when PARTITION_AND_LOAD (default) or
PARTITION_ONLY is specified.
Restrictions:
v The LOAD_ONLY and LOAD_ONLY_VERIFY_PART partitioned-db-cfg mode
options are not supported when the SOURCEUSEREXIT PARALLELIZE option
is not specified.
Related concepts:
v “Load overview” on page 102
v “Loading data in a partitioned database environment - hints and tips” on page
237
v “Moving data using the CURSOR file type” on page 267
v “Schemas” in SQL Reference, Volume 1
Related tasks:
v “Copying a schema” in Administration Guide: Implementation
v “Restarting a failed copy schema operation” in Administration Guide:
Implementation
Related reference:
v “LOAD ” on page 132
Read the syntax diagrams from left to right and top to bottom, following the path
of the line.
The ───► symbol indicates that the syntax is continued on the next line.
The ►─── symbol indicates that the syntax is continued from the previous line.
Syntax fragments start with the ├─── symbol and end with the ───┤ symbol.
Required items appear on the horizontal line (the main path):
required_item
Optional items appear below the main path:
required_item
optional_item
If an optional item appears above the main path, that item has no effect on
execution, and is used only for readability.
optional_item
required_item
If you can choose from two or more items, they appear in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.
required_item required_choice1
required_choice2
If choosing one of the items is optional, the entire stack appears below the main
path.
required_item
optional_choice1
optional_choice2
If one of the items is the default, it will appear above the main path, and the
remaining choices will be shown below.
default_choice
required_item
optional_choice
optional_choice
An arrow returning to the left, above the main line, indicates an item that can be
repeated. In this case, repeated items must be separated by one or more blanks.
required_item repeatable_item
If the repeat arrow contains a comma, you must separate repeated items with a
comma.
required_item repeatable_item
A repeat arrow above a stack indicates that you can make more than one choice
from the stacked items or repeat a single choice.
Keywords appear in uppercase (for example, FROM). They must be spelled exactly
as shown. Variables appear in lowercase (for example, column-name). They
represent user-supplied names or values in the syntax.
required_item parameter-block
parameter-block:
parameter1
parameter2 parameter3
parameter4
Adjacent segments occurring between “large bullets” (*) may be specified in any
sequence.
For example, if item2 and item3 occur between large bullets in a diagram, they
may be specified in either order. Both of the following are valid:
required_item item1 item2 item3 item4
required_item item1 item3 item2 item4
Related concepts:
v “Import Overview” on page 35
v “Load overview” on page 102
Related reference:
v “IMPORT ” on page 49
v “LOAD ” on page 132
The source file for this sample program (tbmove.sqc) can be found in the
\sqllib\samples\c directory. It contains both DB2 APIs and embedded SQL calls.
The script file bldapp.cmd, located in the same directory, contains the commands to
build this and other sample programs.
To run the sample program (executable file), enter tbmove. You might find it useful
to examine some of the generated files, such as the message file, and the delimited
ASCII data file.
/****************************************************************************
** Licensed Materials - Property of IBM
**
** Governed under the terms of the International
** License Agreement for Non-Warranted Sample Code.
**
** (C) COPYRIGHT International Business Machines Corp. 1996 - 2002
** All Rights Reserved.
**
** US Government Users Restricted Rights - Use, duplication or
** disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
*****************************************************************************
**
** SOURCE FILE NAME: tbmove.sqc
**
** SAMPLE: How to move table data
**
** DB2 APIs USED:
** db2Export -- Export
** db2Import -- Import
** sqluvqdp -- Quiesce Table Spaces for Table
** db2Load -- Load
** db2LoadQuery -- Load Query
**
** SQL STATEMENTS USED:
** PREPARE
** DECLARE CURSOR
** OPEN
** FETCH
** CLOSE
** CREATE TABLE
** DROP
**
** OUTPUT FILE: tbmove.out (available in the online documentation)
*****************************************************************************
**
** For more information on the sample programs, see the README file.
**
** For information on developing C applications, see the Application
** Development Guide.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlutil.h>
#include <db2ApiDf.h>
#include "utilemb.h"
/* support function */
int ExportedDataDisplay(char *);
int NewTableDisplay(void);
/* connect to database */
rc = DbConn(dbAlias, user, pswd);
if (rc != 0)
{
return rc;
}
#if(defined(DB2NT))
sprintf(dataFileName, "%s%stbmove.DEL", getenv("DB2PATH"), PATH_SEP);
#else /* UNIX */
sprintf(dataFileName, "%s%stbmove.DEL", getenv("HOME"), PATH_SEP);
#endif
rc = DataExport(dataFileName);
rc = TbImport(dataFileName);
rc = TbLoad(dataFileName);
rc = TbLoadQuery();
return 0;
} /* main */
fp = fopen(dataFileName, "r");
if (fp == NULL)
{
return 1;
}
if (ferror(fp))
{
fclose(fp);
return 1;
}
else
{
fclose(fp);
}
return 0;
} /* ExportedDataDisplay */
int NewTableDisplay(void)
{
struct sqlca sqlca;
return 0;
} /* NewTableDisplay */
printf("\n-----------------------------------------------------------");
printf("\nUSE THE DB2 API:\n");
printf(" db2Export -- Export\n");
printf("TO EXPORT DATA TO A FILE.\n");
/* export data */
dataDescriptor.dcolmeth = SQL_METH_D;
strcpy(actionString, "SELECT deptnumb, deptname FROM org");
pAction = (struct sqllob *)malloc(sizeof(sqluint32) +
sizeof(actionString) + 1);
pAction->length = strlen(actionString);
strcpy(pAction->data, actionString);
strcpy(msgFileName, "tbexport.MSG");
exportParmStruct.piDataFileName = dataFileName;
exportParmStruct.piLobPathList = NULL;
exportParmStruct.piLobFileList = NULL;
exportParmStruct.piDataDescriptor = &dataDescriptor;
exportParmStruct.piActionString = pAction;
exportParmStruct.piFileType = SQL_DEL;
/* export data */
db2Export(db2Version820,
&exportParmStruct,
&sqlca);
DB2_API_CHECK("data -- export");
return 0;
} /* DataExport */
printf("\n-----------------------------------------------------------");
printf("\nUSE THE DB2 API:\n");
printf(" db2Import -- Import\n");
printf("TO IMPORT DATA TO A TABLE.\n");
/* import table */
dataDescriptor.dcolmeth = SQL_METH_D;
strcpy(actionString, "INSERT INTO newtable");
pAction = (struct sqlchar *)malloc(sizeof(short) +
sizeof(actionString) + 1);
pAction->length = strlen(actionString);
strcpy(pAction->data, actionString);
strcpy(msgFileName, "tbimport.MSG");
importParmStruct.piDataFileName = dataFileName;
importParmStruct.piLobPathList = NULL;
importParmStruct.piDataDescriptor = &dataDescriptor;
importParmStruct.piActionString = pAction;
importParmStruct.piFileType = SQL_DEL;
importParmStruct.piFileTypeMod = NULL;
importParmStruct.piMsgFileName = msgFileName;
importParmStruct.piImportInfoIn = &inputInfo;
importParmStruct.poImportInfoOut = &outputInfo;
importParmStruct.piNullIndicators = NULL;
importParmStruct.iCallerAction = SQLU_INITIAL;
/* import table */
db2Import(db2Version820,
&importParmStruct,
&sqlca);
DB2_API_CHECK("table -- import");
return 0;
} /* TbImport */
printf("\n-----------------------------------------------------------");
printf("\nUSE THE DB2 API:\n");
printf(" sqluvqdp -- Quiesce Table Spaces for Table\n");
/* load table */
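/* describe the input data file as a single location entry in the media list */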
mediaList.media_type = SQLU_CLIENT_LOCATION;
mediaList.sessions = 1;
mediaList.target.location =
(struct sqlu_location_entry *)malloc(sizeof(struct sqlu_location_entry) *
mediaList.sessions);
strcpy(mediaList.target.location->location_entry, dataFileName);
dataDescriptor.dcolmeth = SQL_METH_D;
strcpy(localMsgFileName, "tbload.MSG");
/* load table */
db2Load (db2Version810, /* Database version number */
¶mStruct, /* In/out parameters */
&sqlca); /* SQLCA */
DB2_API_CHECK("table -- load");
return 0;
} /* TbLoad */
int TbLoadQuery(void)
{
int rc = 0;
struct sqlca sqlca;
char tableName[128];
char loadMsgFileName[128];
db2LoadQueryStruct loadQueryParameters;
db2LoadQueryOutputStruct loadQueryOutputStructure;
printf("\n-----------------------------------------------------------");
printf("\nUSE THE DB2 API:\n");
printf(" db2LoadQuery -- Load Query\n");
printf("TO CHECK THE STATUS OF A LOAD OPERATION.\n");
/* Initialize structures */
memset(&loadQueryParameters, 0, sizeof(db2LoadQueryStruct));
memset(&loadQueryOutputStructure, 0, sizeof(db2LoadQueryOutputStruct));
/* load query */
db2LoadQuery(db2Version810, &loadQueryParameters, &sqlca);
printf("\n Note: the table load for ’%s’ is NOT in progress.\n", tableName);
printf(" So an empty message file ’%s’ will be created,\n", loadMsgFileName);
printf(" and the following values will be zero.\n");
DB2_API_CHECK("status of load operation -- check");
return 0;
} /* TbLoadQuery */
When using DEL, WSF, or ASC data file formats, define the table, including its
column names and data types, before importing the file. The data types in the
operating system file fields are converted into the corresponding type of data in
the database table. The import utility accepts data with minor incompatibility
problems, including character data imported with possible padding or truncation,
and numeric data imported into different types of numeric fields.
When using the PC/IXF data file format, the table does not need to exist before
beginning the import operation. User-defined distinct types (UDTs) are not made
part of the new table column types; instead, the base type is used. Similarly, when
exporting to the PC/IXF data file format, UDTs are stored as base data types in the
PC/IXF file.
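For example, a round trip through PC/IXF might be sketched with the following
CLP commands (the table and file names here are illustrative only):
db2 EXPORT TO staff.ixf OF IXF SELECT * FROM staff
db2 IMPORT FROM staff.ixf OF IXF CREATE INTO newstaff
Because the file format is PC/IXF, the IMPORT command can create the target table
from the definitions stored in the file.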
When using the CURSOR file type, the table, including its column names and data
types, must be defined before beginning the load operation. The column types of
the SQL query must be compatible with the corresponding column types in the
target table. It is not necessary for the specified cursor to be open before starting
the load operation. The load utility will process the entire result of the query
associated with the specified cursor whether or not the cursor has been used to
fetch rows.
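For example, a cursor-based load might be sketched as follows (the table names are
illustrative only, and the target table must already exist with compatible columns):
DECLARE mycurs CURSOR FOR SELECT deptnumb, deptname FROM org
LOAD FROM mycurs OF CURSOR INSERT INTO newtable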
Related concepts:
v “Queries and table expressions” in SQL Reference, Volume 1
Related reference:
v “Delimited ASCII (DEL) File Format” on page 294
v “Non-delimited ASCII (ASC) file format” on page 299
The following table describes the format of DEL files that can be imported, or that
can be generated as the result of an export action.
DEL file ::= Row 1 data || Row delimiter ||
             Row 2 data || Row delimiter ||
             ...
             Row n data || Optional row delimiter
Decimal digit ::= Any one of the characters ’0’, ’1’, ... ’9’
Related reference:
v “DEL Data Type Descriptions” on page 296
"Smith, Bob",4973,15.46
"Jones, Bill",12345,16.34
"Williams, Sam",452,193.78
The following example illustrates the use of non-delimited character strings. The
column delimiter has been changed to a semicolon, because the character data
contains a comma.
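A representative rendering of the same rows, using a semicolon as the column
delimiter and leaving the character strings undelimited, would be:
Smith, Bob;4973;15.46
Jones, Bill;12345;16.34
Williams, Sam;452;193.78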
Notes:
1. A space (X'20') is never a valid delimiter.
2. Spaces that precede the first character, or that follow the last character of a cell
value, are discarded during import. Spaces that are embedded in a cell value
are not discarded.
3. A period (.) is not a valid character string delimiter, because it conflicts with
periods in time stamp values.
4. For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to
the range of x00 to x3F, inclusive.
5. For DEL data specified in an EBCDIC code page, the delimiters might not
coincide with the shift-in and shift-out DBCS characters.
6. On the Windows operating system, the first occurrence of an end-of-file
character (X'1A') that is not within character delimiters indicates the end-of-file.
Any subsequent data is not imported.
7. A null value is indicated by the absence of a cell value where one would
normally occur, or by a string of spaces.
8. Since some products restrict character fields to 254 or 255 bytes, the export
utility generates a warning message whenever a character column of maximum
length greater than 254 bytes is selected for export. The import utility
accommodates fields that are as long as the longest LONG VARCHAR and
LONG VARGRAPHIC columns.
Related reference:
v “Delimited ASCII (DEL) File Format” on page 294
v “DEL Data Type Descriptions” on page 296
Related reference:
v “Delimited ASCII (DEL) File Format” on page 294
v “Example DEL File” on page 295
v “Data types” in SQL Reference, Volume 1
When importing or loading ASC data, specifying the RECLEN file type modifier
indicates that the data file is fixed-length ASC; omitting it means that the data
file is flexible-length ASC.
The non-delimited ASCII format can be used for data exchange with any ASCII
product that has a columnar format for data, including word processors. Each ASC
product that has a columnar format for data, including word processors. Each ASC
file is a stream of ASCII characters consisting of data values ordered by row and
column. Rows in the data stream are separated by row delimiters. Each column
within a row is defined by a beginning-ending location pair (specified by IMPORT
parameters). Each pair represents locations within a row specified as byte
positions. The first position within a row is byte position 1. The first element of
each location pair is the byte on which the column begins, and the second element
of each location pair is the byte on which the column ends. The columns might
overlap. Every row in an ASC file has the same column definition.
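For example, a fixed-length ASC import might be sketched as follows (the record
length, column positions, file, table, and column names are illustrative only):
db2 IMPORT FROM myfile.asc OF ASC MODIFIED BY reclen=26
    METHOD L (1 20, 21 26) INSERT INTO staff (name, id)
The METHOD L clause supplies the beginning-ending byte position pair for each
column being imported.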
Related reference:
v “ASC Data Type Descriptions” on page 300
Related reference:
v “ASC Data Type Descriptions” on page 300
Related reference:
v “Example ASC File” on page 299
v “Data types” in SQL Reference, Volume 1
A PC/IXF file might also contain application records of record type A, anywhere
after the H record. These records are permitted in PC/IXF files to enable an
application to include additional data, not defined by the PC/IXF format, in a
PC/IXF file. A records are ignored by any program reading a PC/IXF file that does
not have particular knowledge about the data format and content implied by the
application identifier in the A record.
Every record in a PC/IXF file begins with a record length indicator. This is a 6-byte
right justified character representation of an integer value specifying the length, in
bytes, of the portion of the PC/IXF record that follows the record length indicator;
that is, the total record size minus 6 bytes. Programs reading PC/IXF files should
use these record lengths to locate the end of the current record and the beginning
of the next record. H, T, and C records must be sufficiently large to include all of
their defined fields, and, of course, their record length fields must agree with their
actual lengths. However, if extra data (for example, a new field), is added to the
end of one of these records, pre-existing programs reading PC/IXF files should
ignore the extra data, and generate no more than a warning message. Programs
writing PC/IXF files, however, should write H, T and C records that are the
precise length needed to contain all of the defined fields.
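As an illustration only (not part of the PC/IXF definition itself), a reader written
in C might step from record to record along these lines, assuming <stdio.h> and
<stdlib.h> are included and fp is an open binary stream positioned at a record:
/* Skip one PC/IXF record using the 6-byte record length indicator.
   Returns the record length, or -1 at end of file or on error. */
long SkipIxfRecord(FILE *fp)
{
  char lenField[7];
  long recLen;
  if (fread(lenField, 1, 6, fp) != 6)
    return -1;
  lenField[6] = '\0';
  recLen = strtol(lenField, NULL, 10); /* right justified, blank or zero filled */
  if (recLen < 0 || fseek(fp, recLen, SEEK_CUR) != 0)
    return -1;
  return recLen;
}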
If a PC/IXF file contains LOB Location Specifier (LLS) columns, each LLS column
must have its own D record. D records are automatically created by the export
utility, but you will need to create them manually if you are using a third party
tool to generate the PC/IXF files. Further, an LLS is required for each LOB column
in a table, including those with a null value. If a LOB column is null, you will
need to create an LLS representing a null LOB.
The D record entry for each XML column contains a two-byte little-endian value
giving the length of the XML data specifier (XDS), followed by the XDS itself.
Numeric fields in H, T, and C records, and in the prefix portion of D and A records
should be right justified single-byte character representations of integer values,
filled with leading zeros or blanks. A value of zero should be indicated with at
least one (right justified) zero character, not blanks. Whenever one of these numeric
fields is not used, for example IXFCLENG, where the length is implied by the data
type, it should be filled with blanks. These numeric fields are:
Note: The database manager PC/IXF file format is not identical to the Version 0
System/370 IXF format.
Related reference:
v “Data Type-Specific Rules Governing PC/IXF File Import into Databases” on
page 330
v “Differences Between PC/IXF and Version 0 System/370 IXF” on page 338
v “FORCEIN Option” on page 332
v “General Rules Governing PC/IXF File Import into Databases” on page 328
v “PC/IXF Data Type Descriptions” on page 325
v “PC/IXF data types” on page 320
v “PC/IXF Record Types” on page 304
Each PC/IXF record type is defined as a sequence of fields; these fields are
required, and must appear in the order shown.
HEADER RECORD
The length of the data area of a D record cannot exceed 32 771 bytes.
APPLICATION RECORD
One record of this type is specified for each user defined index. This record is
located after all of the C records for the table. The following fields are contained in
DB2 index records:
IXFARECL
The record length indicator. A 6-byte character representation of an integer
value specifying the length, in bytes, of the portion of the PC/IXF record
that follows the record length indicator; that is, the total record size minus
6 bytes. Each A record must be sufficiently long to include at least the
entire IXFAPPID field.
IXFARECT
The IXF record type, which is set to A for this record, indicating that this is
an application record. These records are ignored by programs which do not
have particular knowledge about the content and the format of the data
implied by the application identifier.
IXFAPPID
The application identifier, which identifies DB2 as the application creating
this A record.
IXFAITYP
Specifies that this is subtype "I" of DB2 application records.
IXFADATE
The date on which the file was written, in the form yyyymmdd. This field
must have the same value as IXFHDATE.
IXFATIME
The time at which the file was written, in the form hhmmss. This field must
have the same value as IXFHTIME.
IXFANDXL
The length, in bytes, of the index name in the IXFANDXN field.
IXFANDXN
The name of the index.
IXFANCL
The length, in bytes, of the index creator name in the IXFANCN field.
One record of this type is used to describe a hierarchy. All subtable records (see
below) must be located immediately after the hierarchy record, and hierarchy
records are located after all of the C records for the table. The following fields are
contained in DB2 hierarchy records:
This record is found at the end of each file that is part of a multi-volume IXF file,
unless that file is the final volume; it can also be found at the beginning of each
file that is part of a multi-volume IXF file, unless that file is the first volume. The
purpose of this record is to keep track of file order. The following fields are
contained in DB2 continuation records:
IXFARECL
The record length indicator. A 6-byte character representation of an integer
value specifying the length, in bytes, of the portion of the PC/IXF record
that follows the record length indicator; that is, the total record size minus
6 bytes. Each A record must be sufficiently long to include at least the
entire IXFAPPID field.
IXFARECT
The IXF record type, which is set to A for this record, indicating that this is
an application record. These records are ignored by programs which do not
have particular knowledge about the content and the format of the data
implied by the application identifier.
IXFAPPID
The application identifier, which identifies DB2 as the application creating
this A record.
IXFACTYP
Specifies that this is subtype "C" of DB2 application records.
IXFADATE
The date on which the file was written, in the form yyyymmdd. This field
must have the same value as IXFHDATE.
IXFATIME
The time at which the file was written, in the form hhmmss. This field must
have the same value as IXFHTIME.
IXFALAST
This field is a binary field, in little-endian format. The value should be one
less than the value in IXFATHIS.
IXFATHIS
This field is a binary field, in little-endian format. The value in this field on
consecutive volumes should also be consecutive. The first volume has a
value of 1.
IXFANEXT
This field is a binary field, in little-endian format. The value should be one
more than the value in IXFATHIS, unless the record is at the beginning of
the file, in which case the value should be zero.
This record is the end-of-file marker found at the end of an IXF file. The following
fields are contained in DB2 terminate records:
IXFARECL
The record length indicator. A 6-byte character representation of an integer
value specifying the length, in bytes, of the portion of the PC/IXF record
that follows the record length indicator; that is, the total record size minus
6 bytes. Each A record must be sufficiently long to include at least the
entire IXFAPPID field.
IXFARECT
The IXF record type, which is set to A for this record, indicating that this is
an application record. These records are ignored by programs which do not
have particular knowledge about the content and the format of the data
implied by the application identifier.
IXFAPPID
The application identifier, which identifies DB2 as the application creating
this A record.
IXFAETYP
Specifies that this is subtype "E" of DB2 application records.
IXFADATE
The date on which the file was written, in the form yyyymmdd. This field
must have the same value as IXFHDATE.
IXFATIME
The time at which the file was written, in the form hhmmss. This field must
have the same value as IXFHTIME.
DB2 IDENTITY RECORD
Related reference:
v “PC/IXF Data Type Descriptions” on page 325
v “PC/IXF data types” on page 320
Related reference:
v “FORCEIN Option” on page 332
v “PC/IXF Data Type Descriptions” on page 325
v “PC/IXF Record Types” on page 304
Related reference:
v “PC/IXF data types” on page 320
v “PC/IXF Record Types” on page 304
Note: The IMPORT FORCEIN option extends the scope of compatible columns.
v Incompatible Columns — Existing Table
If, during import to an existing database table, a PC/IXF column is selected that
is incompatible with the target database column, one of two actions is possible:
– If the target table column is nullable, all values for the PC/IXF column are
ignored, and the table column values are set to NULL
– If the target table column is not nullable, the import utility terminates. The
entire PC/IXF file is rejected, and no data is imported. The existing table
remains unaltered.
Note: The IMPORT FORCEIN option extends the scope of compatible columns.
v Invalid Values
If, during import, a PC/IXF column value is encountered that is not valid for the
target database column, the import utility rejects the values of all columns in the
PC/IXF row that contains the invalid value (the entire row is rejected), and
processing continues with the next PC/IXF row.
Related reference:
v “PC/IXF data types” on page 320
v “FORCEIN Option” on page 332
Table 24 summarizes PC/IXF file import into new or existing database tables
without the FORCEIN option.
Table 24. Summary of PC/IXF File Import without FORCEIN Option
[The matrix itself is not reproduced legibly here. Its rows are the PC/IXF column
data types (SMALLINT, INTEGER, BIGINT, DECIMAL, FLOAT, (0,0), (SBCS,0),
(SBCS,DBCS), GRAPHIC, DATE, TIME, TIME STAMP); its columns are the database
column data types (SMALLINT, INT, BIGINT, DEC, FLT, (0,0), (SBCS,0), (SBCS,DBCS),
GRAPH, DATE, TIME, TIME STAMP). Each cell contains an 'N', an 'E', or both,
qualified by the superscripts a through d explained below.]
Notes:
1. The table is a matrix of all valid PC/IXF and database manager data types. If a PC/IXF column can be imported into a database column, a letter
is displayed in the matrix cell at the intersection of the PC/IXF data type matrix row and the database manager data type matrix column. An ’N’
indicates that the utility is creating a new database table (a database column of the indicated data type is created). An ’E’ indicates that the utility
is importing data to an existing database table (a database column of the indicated data type is a valid target).
2. Character string data types are distinguished by code page attributes. These attributes are shown as an ordered pair (SBCS,DBCS), where:
v SBCS is either zero or denotes a non-zero value of the single-byte code page attribute of the character data type
v DBCS is either zero or denotes a non-zero value of the double-byte code page attribute of the character data type.
3. If the table indicates that a PC/IXF character column can be imported into a
database character column, this is possible only if the values of their respective
code page attribute pairs satisfy the rules governing code page equality.
a. Individual values are rejected if they are out of range for the target numeric data type.
b. Data type is available only in DBCS environments.
c. Individual values are rejected if they are not valid date or time values.
d. Data type is not available in DBCS environments.
Related reference:
v “PC/IXF data types” on page 320
v “PC/IXF Data Type Descriptions” on page 325
v “General Rules Governing PC/IXF File Import into Databases” on page 328
v “PC Version of IXF File Format” on page 302
v “PC/IXF Record Types” on page 304
FORCEIN Option
The FORCEIN option permits import of a PC/IXF file despite code page
differences between data in the PC/IXF file and the target database. It offers
additional flexibility in the definition of compatible columns.
FORCEIN Example
Notes:
1. See the notes for Table 26.
Table 26. Summary of Import Utility Code Page Semantics (New Table) for DBCS. This
table assumes there is no conversion table between a and x.
CODE PAGE ATTRIBUTES     CODE PAGE ATTRIBUTES OF DATABASE TABLE COLUMN
OF PC/IXF DATA TYPE      Without FORCEIN              With FORCEIN
(0,0)                    (0,0)                        (0,0)
(a,0)                    (a,b)                        (a,b)
(x,0)                    reject                       (a,b)
(a,b)                    (a,b)                        (a,b)
(x,y)                    reject                       (a,b)
(a,y)                    reject                       (a,b)
(x,b)                    reject                       (a,b)
(0,b)                    (-,b)                        (-,b)
(0,y)                    reject                       (-,b)
Notes:
1. Code page attributes of a PC/IXF data type are shown as an ordered pair, where x
represents a non-zero single-byte code page value, and y represents a non-zero
double-byte code page value. A ’-’ represents an undefined code page value.
2. The use of different letters in various code page attribute pairs is deliberate. Different
letters imply different values. For example, if a PC/IXF data type is shown as (x,y), and
the database column as (a,y), x does not equal a, but the PC/IXF file and the database
have the same double-byte code page value y.
3. Only character and graphic data types are affected by the FORCEIN code page
semantics.
4. It is assumed that the database containing the new table has code page attributes of
(a,0); therefore, all character columns in the new table must have code page attributes
of either (0,0) or (a,0).
In a DBCS environment, it is assumed that the database containing the new table has
code page attributes of (a,b); therefore, all graphic columns in the new table must have
code page attributes of (-,b), and all character columns must have code page attributes
of (a,b). The SBCS CPGID is shown as '-', because it is undefined for graphic data
types.
5. The data type of the result is determined by the rules described in “FORCEIN Data
Type Semantics” on page 337.
6. The reject result is a reflection of the rules for invalid or incompatible data types.
Notes:
1. See the notes for Table 25 on page 334.
2. The null or reject result is a reflection of the rules for invalid or incompatible data
types.
Table 28. Summary of Import Utility Code Page Semantics (Existing Table) for DBCS. This
table assumes there is no conversion table between a and x.
CODE PAGE ATTRIBUTES     CODE PAGE ATTRIBUTES OF      RESULTS OF IMPORT
OF PC/IXF DATA TYPE      TARGET DATABASE COLUMN       Without FORCEIN    With FORCEIN
(0,0)                    (0,0)                        accept             accept
(a,0)                    (0,0)                        accept             accept
(x,0)                    (0,0)                        accept             accept
(a,b)                    (0,0)                        accept             accept
(x,y)                    (0,0)                        accept             accept
(a,y)                    (0,0)                        accept             accept
(x,b)                    (0,0)                        accept             accept
(0,b)                    (0,0)                        accept             accept
(0,y)                    (0,0)                        accept             accept
Notes:
1. See the notes for Table 25 on page 334.
2. The null or reject result is a reflection of the rules for invalid or incompatible data
types.
[A further matrix fragment, summarizing PC/IXF file import with the FORCEIN
option, is not reproduced legibly here. Its rows cover the PC/IXF column data types
(0,0), (SBCS,0), (SBCS,DBCS), GRAPHIC, DATE, TIME, and TIME STAMP; cells are
marked with 'N', 'E', or 'w/F' as described in the note and footnotes below.]
Note: If a PC/IXF column can be imported into a database column only with the FORCEIN option, the string ’w/F’ is displayed together with an
’N’ or an ’E’. An ’N’ indicates that the utility is creating a new database table; an ’E’ indicates that the utility is importing data to an existing
database table. The FORCEIN option affects compatibility of character and graphic data types only.
a. Individual values are rejected if they are out of range for the target numeric data type.
b. Data type is available only in DBCS environments.
c. Individual values are rejected if they are not valid date or time values.
d. Applies only if the source PC/IXF data type is not supported by the target database.
e. Data type is not available in DBCS environments.
Related reference:
v “PC/IXF data types” on page 320
v “General Rules Governing PC/IXF File Import into Databases” on page 328
Related reference:
v “Data Type-Specific Rules Governing PC/IXF File Import into Databases” on
page 330
v “General Rules Governing PC/IXF File Import into Databases” on page 328
v “PC/IXF data types” on page 320
v “PC/IXF Data Type Descriptions” on page 325
Each WSF file represents one worksheet. The database manager uses the following
conventions to interpret worksheets and to provide consistency in worksheets
generated by its export operations:
v Cells in the first row (ROW value 0) are reserved for descriptive information
about the entire worksheet. All data within this row is optional. It is ignored
during import.
v Cells in the second row (ROW value 1) are used for column labels.
v The remaining rows are data rows (records, or rows of data from the table).
v Cell values under any column heading are values for that particular column or
field.
When a file compliant with the WSF format is created during an export operation,
some loss of data might occur.
WSF files use a Lotus code point mapping that is not necessarily the same as
existing code pages supported by DB2 database. As a result, when importing or
exporting a WSF file, data is converted from the Lotus code points to or from the
code points used by the application code page. DB2 supports conversion between
the Lotus code points and code points defined by code pages 437, 819, 850, 860,
863, and 865.
Related concepts:
v “Moving data across platforms - file format considerations” on page 241
The DEL, ASC, and PC/IXF file formats are supported for a UCS-2 database, as
described in this section. The WSF format is not supported.
When exporting from a UCS-2 database to an ASCII delimited (DEL) file, all
character data is converted to the application code page. Both character string and
graphic string data are converted to the same SBCS or MBCS code page of the
client. This is expected behavior for the export of any database, and cannot be
changed, because the entire delimited ASCII file can have only one code page.
Therefore, if you export to a delimited ASCII file, only those UCS-2 characters that
exist in your application code page will be saved. Other characters are replaced
with the default substitution character for the application code page. For UTF-8
clients (code page 1208), there is no data loss, because all UCS-2 characters are
supported by UTF-8 clients.
When importing from an ASCII file (DEL or ASC) to a UCS-2 database, character
string data is converted from the application code page to UTF-8, and graphic
string data is converted from the application code page to UCS-2. There is no data
loss. If you want to import ASCII data that has been saved under a different code
page, you should change the data file code page before issuing the IMPORT
command. One way to accomplish this is to set DB2CODEPAGE to the code page
of the ASCII data file.
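For example (a minimal sketch; the code page value, database, file, and table
names here are illustrative only):
db2set DB2CODEPAGE=943
db2 TERMINATE
db2 CONNECT TO sample
db2 IMPORT FROM data.del OF DEL INSERT INTO mytable
Setting DB2CODEPAGE tells the utilities which code page the ASCII data file was
saved in, so that the appropriate conversion into the UCS-2 database is performed.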
The range of valid ASCII delimiters for SBCS and MBCS clients is identical to what
is currently supported by IBM DB2 V9.1 for those clients. The range of valid
delimiters for UTF-8 clients is X’01’ to X’7F’, with the usual restrictions.
When exporting from a UCS-2 database to a PC/IXF file, character string data is
converted to the SBCS/MBCS code page of the client. Graphic string data is not
converted, and is stored in UCS-2 (code page 1200). There is no data loss.
When importing from a PC/IXF file to a UCS-2 database, character string data is
assumed to be in the SBCS/MBCS code page stored in the PC/IXF header, and
graphic string data is assumed to be in the DBCS code page stored in the PC/IXF
header. Character string data is converted by the import utility from the code page
specified in the PC/IXF header to the code page of the client, and then from the
client code page to UTF-8 (by the INSERT statement). Graphic string data is
converted by the import utility from the DBCS code page specified in the PC/IXF
header directly to UCS-2 (code page 1200).
The load utility places the data directly into the database and, by default, assumes
data in ASC or DEL files to be in the code page of the database. Therefore, by
default, no code page conversion takes place for ASCII files. When the code page
for the data file has been explicitly specified (using the codepage modifier), the
load utility uses this information to convert from the specified code page to the
database code page before loading the data. For PC/IXF files, the load utility
always converts from the code pages specified in the IXF header to the database
code page (1208 for CHAR, and 1200 for GRAPHIC).
For example, the following command will load the Shift_JISX0213 data file
u/jp/user/x0213/data.del residing on a remotely connected client into MYTABLE:
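A representative form of that command, assuming the codepage file type modifier
value 1394 for Shift_JISX0213 data, is:
db2 LOAD CLIENT FROM u/jp/user/x0213/data.del OF DEL
    MODIFIED BY codepage=1394 INSERT INTO MYTABLE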
Because only connections between a Unicode client and a Unicode server are
supported, you need to either use a Unicode client or set the DB2 registry
variable DB2CODEPAGE to 1208 before using the load, import, or export utilities.
Conversion from code page 1394, 1392, or 5488 to Unicode can result in expansion.
For example, a 2-byte character can be stored as two 16-bit Unicode characters in
the GRAPHIC columns. Ensure that the target columns in the Unicode database
are wide enough to contain any expanded Unicode bytes.
Loading data into tables containing XML columns using the load utility is not
supported. Data movement of XML data should be performed using the import
and export utilities.
Incompatibilities
For applications connected to a UCS-2 database, graphic string data is always in
UCS-2 (code page 1200). For applications connected to non-UCS-2 databases, the
graphic string data is in the DBCS code page of the application, or not allowed if
the application code page is SBCS. For example, when a 932 client is connected to
Related reference:
v “CREATE DATABASE command” in Command Reference
v “DEL Data Type Descriptions” on page 296
v “Non-delimited ASCII (ASC) file format” on page 299
v “PC Version of IXF File Format” on page 302
v “Restrictions on native XML data store” in XML Guide
Related concepts:
v “About isolation levels” in Administration Guide: Planning
v “Binding” in Administration Guide: Planning
Related tasks:
v “Binding utilities to the database” in Administration Guide: Implementation
The message number is also referred to as the SQLCODE. The SQLCODE is passed
to the application as a positive or negative number, depending on its message type
(N, W, or C). N and C yield negative values, whereas W yields a positive value.
DB2 returns the SQLCODE to the application, and the application can get the
message associated with the SQLCODE. DB2 also returns an SQLSTATE value for
conditions that could be the result of an SQL or XQuery statement. Some
SQLCODE values have associated SQLSTATE values.
You can use the information contained in this topic to identify an error or problem,
and to resolve the problem by using the appropriate recovery action. This
information can also be used to understand where messages are generated and
logged.
SQL messages, and the message text associated with SQLSTATE values, are also
accessible from the operating system command line. To access help for these error
messages, enter the following at the operating system command prompt:
db2 ? SQLnnnnn
where nnnnn represents the message number. On UNIX based systems, the use of
double quotation mark delimiters is recommended; this will avoid problems if
there are single character file names in the directory:
db2 "? SQLnnnnn"
The message identifier accepted as a parameter for the db2 command is not case
sensitive, and the terminating letter is not required. Therefore, the following
commands will produce the same result:
db2 ? SQL0000N
db2 ? sql0000
db2 ? SQL0000n
If the message text is too long for your screen, use the following command (on
UNIX based operating systems and others that support the "more" pipe):
db2 ? SQLnnnnn | more
You can also redirect the output to a file which can then be browsed.
Help can also be invoked from interactive input mode. To access this mode, enter
the following at the operating system command prompt:
db2
To get DB2 message help in this mode, type the following at the command prompt
(db2 =>):
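? SQLnnnnn
where nnnnn again represents the message number.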
Related concepts:
v “Introduction to Messages” in Message Reference Volume 1
IBM periodically makes documentation updates available. If you access the online
version on the DB2 Information Center at ibm.com®, you do not need to install
documentation updates because this version is kept up-to-date by IBM. If you have
installed the DB2 Information Center, it is recommended that you install the
documentation updates. Documentation updates allow you to update the
information that you installed from the DB2 Information Center CD or downloaded
from Passport Advantage as new information becomes available.
Note: The DB2 Information Center topics are updated more frequently than either
the PDF or the hard-copy books. To get the most current information, install
the documentation updates as they become available, or refer to the DB2
Information Center at ibm.com.
You can access additional DB2 technical information such as technotes, white
papers, and Redbooks™ online at ibm.com. Access the DB2 Information
Management software library site at http://www.ibm.com/software/data/sw-
library/.
Documentation feedback
We value your feedback on the DB2 documentation. If you have suggestions for
how we can improve the DB2 documentation, send an e-mail to
db2docs@ca.ibm.com. The DB2 documentation team reads all of your feedback, but
cannot respond to you directly. Provide specific examples wherever possible so
that we can better understand your concerns. If you are providing feedback on a
specific topic or help file, include the topic title and URL.
Do not use this e-mail address to contact DB2 Customer Support. If you have a
DB2 technical issue that the documentation does not resolve, contact your local
IBM service center for assistance.
Related tasks:
v “Invoking command help from the command line processor” in Command
Reference
v “Invoking message help from the command line processor” in Command
Reference
v “Updating the DB2 Information Center installed on your computer or intranet
server” on page 355
Related reference:
v “DB2 technical library in hardcopy or PDF format” on page 350
Although the tables identify books available in print, the books might not be
available in your country or region.
The information in these books is fundamental to all DB2 users; you will find this
information useful whether you are a programmer, a database administrator, or
someone who works with DB2 Connect or other DB2 products.
Table 30. DB2 technical information
Name                                                      Form Number   Available in print
Administration Guide: Implementation                      SC10-4221     Yes
Administration Guide: Planning                            SC10-4223     Yes
Administrative API Reference                              SC10-4231     Yes
Administrative SQL Routines and Views                     SC10-4293     No
Call Level Interface Guide and Reference, Volume 1        SC10-4224     Yes
Call Level Interface Guide and Reference, Volume 2        SC10-4225     Yes
Command Reference                                         SC10-4226     No
Data Movement Utilities Guide and Reference               SC10-4227     Yes
Data Recovery and High Availability Guide and Reference   SC10-4228     Yes
Developing ADO.NET and OLE DB Applications                SC10-4230     Yes
Developing Embedded SQL Applications                      SC10-4232     Yes
Note: The DB2 Release Notes provide additional information specific to your
product’s release and fix pack level. For more information, see the related
links.
Related concepts:
v “Overview of the DB2 technical information” on page 349
v “About the Release Notes” in Release notes
Related tasks:
v “Ordering printed DB2 books” on page 352
Printed versions of many of the DB2 books available on the DB2 PDF
Documentation CD can be ordered for a fee from IBM. Depending on where you
are placing your order from, you may be able to order books online, from the IBM
Publications Center. If online ordering is not available in your country or region,
you can always order printed DB2 books from your local IBM representative. Note
that not all books on the DB2 PDF Documentation CD are available in print.
Procedure:
Related concepts:
v “Overview of the DB2 technical information” on page 349
Related reference:
v “DB2 technical library in hardcopy or PDF format” on page 350
Procedure:
To invoke SQL state help, open the command line processor and enter:
? sqlstate or ? class code
where sqlstate represents a valid five-digit SQL state and class code represents the
first two digits of the SQL state.
For example, ? 08003 displays help for the 08003 SQL state, and ? 08 displays help
for the 08 class code.
Related tasks:
v “Invoking command help from the command line processor” in Command
Reference
v “Invoking message help from the command line processor” in Command
Reference
For DB2 Version 8 topics, go to the Version 8 Information Center URL at:
http://publib.boulder.ibm.com/infocenter/db2luw/v8/.
Related tasks:
v “Updating the DB2 Information Center installed on your computer or intranet
server” on page 355
Procedure:
Note: Adding a language does not guarantee that the computer has the fonts
required to display the topics in the preferred language.
v To move a language to the top of the list, select the language and click the
Move Up button until the language is first in the list of languages.
3. Clear the browser cache and then refresh the page to display the DB2
Information Center in your preferred language.
On some browser and operating system combinations, you might have to also
change the regional settings of your operating system to the locale and language of
your choice.
To determine if there is an update available for the entire DB2 Information Center,
look for the 'Last updated' value on the Information Center home page. Compare
the value in your locally installed home page to the date of the most recent
downloadable update at http://www.ibm.com/software/data/db2/udb/support/
icupdate.html. You can then update your locally-installed Information Center if a
more recent downloadable update is available.
Note: Updates are also available on CD. For details on how to configure your
Information Center to install updates from CD, see the related links.
If update packages are available, use the Update feature to download the
packages. (The Update feature is only available in stand-alone mode.)
3. Stop the stand-alone Information Center, and restart the DB2 Information
Center service on your computer.
Procedure:
Note: The help_end batch file contains the commands required to safely
terminate the processes that were started with the help_start batch file.
Do not use Ctrl-C or any other method to terminate help_start.bat.
v On Linux, run the help_end script using the fully qualified path for the DB2
Information Center:
<DB2 Information Center dir>/doc/bin/help_end
Related concepts:
v “DB2 Information Center installation options” in Quick Beginnings for DB2 Servers
Related tasks:
v “Installing the DB2 Information Center using the DB2 Setup wizard (Linux)” in
Quick Beginnings for DB2 Servers
v “Installing the DB2 Information Center using the DB2 Setup wizard (Windows)”
in Quick Beginnings for DB2 Servers
You can view the XHTML version of the tutorial from the Information Center at
http://publib.boulder.ibm.com/infocenter/db2help/.
Some lessons use sample data or code. See the tutorial for a description of any
prerequisites for its specific tasks.
DB2 tutorials:
Related concepts:
v “Visual Explain overview” in Administration Guide: Implementation
Related concepts:
v “Introduction to problem determination” in Troubleshooting Guide
v “Overview of the DB2 technical information” on page 349
Personal use: You may reproduce these Publications for your personal,
noncommercial use provided that all proprietary notices are preserved. You may not
distribute, display or make derivative work of these Publications, or any portion
thereof, without the express consent of IBM.
Commercial use: You may reproduce, distribute and display these Publications
solely within your enterprise provided that all proprietary notices are preserved.
You may not make derivative works of these Publications, or reproduce, distribute
or display these Publications or any portion thereof outside your enterprise,
without the express consent of IBM.
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the Publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country/region or send inquiries, in
writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan
The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product, and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information may contain examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:
Trademarks
Company, product, or service names identified in the documents of the DB2
Version 9 documentation library may be trademarks or service marks of
International Business Machines Corporation or other companies. Information on
the trademarks of IBM Corporation in the United States, other countries, or both is
located at http://www.ibm.com/legal/copytrade.shtml.
Microsoft, Windows, Windows NT®, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Intel, Itanium®, Pentium®, and Xeon® are trademarks of Intel Corporation in the
United States, other countries, or both.
Java™ and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in
the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.