8/18/22, 3:19 PM https://sites.google.com/site/jibjabmonitor/abinitio/component-reference?
tmpl=%2Fsystem%2Fapp%2Ftemplates%2Fprint%2…
Component Reference
Ab Initio Components
====================
Section #1
==========
Component Groups
----------------
1) Compress
2) Continuous
3) Database
4) Datasets
5) Departition
6) Deprecated
7) Examples
8) FTP
9) Miscellaneous
10) My Components
11) Partition
12) Sort
13) Transform
14) Translate
15) Validate
Section #2
==========
Component Listing
-----------------
1) Compress
1) Deflate
2) Inflate
3) Compress
4) Uncompress
2) Continuous
Will do later
3) Database
1) Call Stored Procedure
2) Input Table
3) Join with DB
4) Multi Update Table
5) Output Table
6) Run SQL
7) Truncate Table
8) Update Table
4) Dataset
1) Input File
2) Input Table
3) Intermediate File
4) Lookup File
5) Output File
6) Output Table
7) Read Multiple Files
8) Read Shared
9) Write Multiple Files
5) Departition
1) Concatenate
2) Gather
3) Interleave
4) Merge
6) FTP
1) FTP From
2) FTP To
3) SFTP From
4) SFTP To
7) Miscellaneous
1) Assign Keys
2) Buffered Copy
3) Documentation
4) Gather Logs
5) Leading Records
6) Meta Pivot
7) Redefine Format
8) Replicate
9) Run Program
10) Throttle
11) Recirculate and Compute Closure
12) Trash
8) Partition
1) Broadcast
2) Partition by Expression
3) Partition by Key
4) Partition by Percentage
5) Partition by Range
6) Partition by Round-Robin
7) Partition with Load Balance
9) Sort
1) Checkpointed Sort
2) Partition by Key and Sort
3) Sample
4) Sort
5) Sort Within Groups
10) Transform
1) Aggregate
2) Dedup Sorted
3) Denormalize Sorted
4) Filter By Expression
5) Fuse
6) Join
7) Match Sorted
8) Multi Reformat
9) Normalize
10) Reformat
11) Rollup
12) Scan
13) Scan With Rollup
11) Translate
Some components like decoder, encoder are not included
1) Read Separated Values
2) Read Tagged Values
3) Read XML
4) Read XML Transform
5) Write XML
6) Write XML Transform
7) XML Reformat
12) Validate
1) Check Order
2) Compare Checksums
3) Compare Records
4) Compute Checksum
5) Generate Random Bytes
6) Generate Records
7) Validate Records
Section #3
==========
Component Description
---------------------
1) Aggregate ( Transform )
<Result : One Record per group>
< transform (temp, in) >
Generates a group summary. (Use Rollup instead of Aggregate.)
Use Aggregate only for finding the maximum and minimum of a group of records.
Uses a transform function.
Parameters are : "temp" and "in" (i.e. this component uses temporary storage)
2) Assign Keys ( Miscellaneous )
<Result :- Assigns Surrogate Keys >
It has a natural key field (against which the surrogate key will be generated)
It has a surrogate key.
<IN PORT : in>
<IN PORT : key> This is the list of already existing Keys
<OUT PORT : first> (When a new surrogate key is generated for the first time, the
record will be sent to this port)
<OUT PORT : new> (All records for which a new surrogate key is assigned will be
sent to this port)
<OUT PORT : old> (All records for which the surrogate key was already present on the key
input port are sent to this port)
Surrogate keys are numeric and assigned incrementally.
3) Buffered Copy ( Miscellaneous )
Copies data to a buffer if the downstream flow stops, and copies the data back from
the buffer when the flow resumes.
This component can also be used for changing the record format.
Has two parameters:
1. buffer-bytes
2. buffer_records
4) Call Stored Procedure ( Database )
Used to invoke a stored procedure.
With DB2, it can also return result sets.
With other databases, it will accept input and output parameters.
Recommendations
---------------
a) Use the Update Table component to execute a stored procedure which takes only input
parameters
b) Use Run SQL to execute a stored procedure which does not take any parameters
c) Use Join With DB to execute stored procedures which have both input and output
parameters.
5) Call Web Service ( Connectors )
Use this component to access Web Services.
6) Check Order ( Validate )
Checks the order of records.
< Important : Error messages will be written to the OUT port. >
If it finds an unordered record, it will write a single-line error message to the OUT
port.
If the number of errors is greater than the LIMIT parameter, the component will stop
the graph.
7) Checkpointed Sort ( Sort )
This component sorts and merges records, inserting a checkpoint between the sorting
and merging phases.
This component reads records from the in port until max-core memory is full.
Then the component sorts all the records in memory and writes the sorted records
into a temporary file.
It reads the remaining records and repeats the same job.
Once all the records are finished, it takes a checkpoint. (This is very inexpensive
since all data is already written as temp files.)
The merge phase then merges all temp files, keeping the sort order.
8) Compare Checksums ( Validate )
Compares two checksums.
This component does not have any out ports.
If the checksums are equal, the component will exit with status code '0'.
Else it will exit with status '-1' and stop the graph.
9) Compare Records ( Validate )
Similar to Check Order.
Compares the records from its two in ports.
Writes a one-line text report for each mismatching record.
If the number of mismatches exceeds the LIMIT parameter, the component will stop the
graph.
10) Compute Checksum ( Validate )
Reads all records from the in port and calculates the checksum (irrespective of the
order of records).
11) Concatenate ( Departition )
Reads data from the in ports one flow at a time, in order, and writes the data to the
out port.
This component preserves the order of records.
All flows should have exactly the same record format.
12) Dedup Sorted ( Transform )
Needs sorted records.
Deletes duplicate records.
The following options are available:
a) Keep the first record in each group
b) Keep the last record
c) Keep only unique records
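The three keep options can be sketched in plain Python (an illustrative analogy, not Ab Initio DML; the function name and record shape are invented for the example):

```python
from itertools import groupby

def dedup_sorted(records, key, keep="first"):
    # Input must already be sorted on `key`, as the component requires.
    out = []
    for _, grp in groupby(records, key=key):
        grp = list(grp)
        if keep == "first":
            out.append(grp[0])           # keep first record in group
        elif keep == "last":
            out.append(grp[-1])          # keep last record in group
        elif keep == "unique" and len(grp) == 1:
            out.append(grp[0])           # drop the whole group if duplicated
    return out
```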
13) Deflate ( Compress )
Compresses data.
Always use the Inflate component to reverse the action before processing records.
14) Denormalize Sorted ( Transform )
Summarizes groups of records with a vector field for each group.
Optionally it also generates summary fields for each group.
< transform function >
For denormalize:
a) Define element_type.
This element_type will be referenced in the denormalize function as variable 'elt'.
b) Define denormalization_type as a fixed-size vector of type 'element_type'.
c) If rollup is needed, define temporary_type.
d) Now define the initialize function to initialize temporary_type at the start of
each group. (This is for rollup.)
e) Define the function initial_denormalization for initializing the vector at the start
of each group.
f) Define the function rollup (if rollup is needed).
g) Define the denormalize function.
Parameters are
a) temp (temporary_type), if rollup is needed
b) denorm (denormalization_type vector)
c) in
d) the current record count (or position) in the current group
The out parameter has three elements:
a) index : This is the index into the denormalization_type vector
b) elt : element_type of the vector
c) update : If it has a non-zero value, the element at 'index' in the vector
will be replaced with 'elt'
h) Define the finalize function.
Parameters
a) temp (if rollup is needed)
b) denorm (denormalization vector)
c) in (This is the last record in the group)
<Important : This component has an output_select function which can be used to filter the
records coming from the finalize function>
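Ignoring the rollup/temp machinery, the core vector-building behavior might look like this in plain Python (an illustrative sketch, not DML; padding short groups with None is an assumption):

```python
from itertools import groupby

def denormalize_sorted(records, key, value, vector_size):
    # One output record per group: (key, fixed-size vector of values).
    out = []
    for k, grp in groupby(records, key=key):
        vec = [None] * vector_size            # fixed-size vector, padded
        for i, rec in enumerate(grp):
            if i < vector_size:               # extra records are dropped
                vec[i] = value(rec)
        out.append((k, vec))
    return out
```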
15) Documentation ( Miscellaneous )
<Important : This component has multiple in and out ports>
This component is used for something related to EME datastore dependency analysis.
Need to read more.
A component where we can write the documentation.
16) Filter By Expression ( Transform )
Based on the expression's value, writes the record to either the out or deselect port.
If deselected records are NOT needed, it is better to use the select method in
components to do this filtering. This will give better performance.
17) Find Splitters ( Sort )
Finds the splitters which will divide the records almost evenly into the defined number
of partitions.
The component reads all records, sorts them, and finds the splitters, which can be
sent to the 'split' port of the
Partition By Range component.
18) FTP From ( FTP )
Reads a file through FTP from a remote machine not running the Co>Operating System and
writes the data into the out port.
19) FTP To ( FTP )
Reads data from the in port and writes the data to a remote machine using FTP.
20) Fuse ( Transform )
This component joins multiple in flows and writes the combined record returned by
the 'fuse' transform function to the out port.
It reads the first record from all in flows and writes a record.
It reads the second record from all in flows and writes a record.
And so on...
It has a select function to filter records from the in flows.
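The record-by-record combination can be sketched in plain Python (illustrative, not DML):

```python
def fuse(flows, fuse_fn):
    # Record i of the output combines record i from every input flow.
    return [fuse_fn(*recs) for recs in zip(*flows)]
```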
21) Gather ( Departition )
Combines records from multiple flows arbitrarily.
22) Gather Logs ( Miscellaneous ) - Use HANDLE LOGS for new development
Collects records from the log ports of components.
In the transform package for components, we can define a "final_log_output" function
to log a record into the log port just before
exiting the component. In this method we can write the summary log entry.
23) GATHER TRANSACTIONS
Gathers transactions from multiple flows into a single flow so that faster-running
transactions proceed faster.
Available only in continuous graphs.
24) GATHER TRANSACTIONS FROM POOL
Reads available transactions from the fan-in in port and processes them. Once all
records are done, it signals the PARTITION TRANSACTIONS TO POOL component that the
transactions are done, so that the next set of transactions can be sent to the pool.
In a graph: <PARTITION TRANSACTIONS TO POOL> --> <SERVICE COMPONENT> --> <GATHER
TRANSACTIONS FROM POOL>
23) Generate Random Bytes ( Validate ) - Use CREATE DATA in new graphs
Generates a specified number of records, each with a specified number of random bytes.
24) Generate Records ( Validate ) - Use CREATE DATA in new graphs
Generates a specified number of records.
Using 'command_line' we can modify this data, or we can use a transform component
to modify it.
25) Hadoop Parallel Read
Initiates a Hadoop MapReduce job and reads the results through one or multiple
TCP connections.
26) Handle Errors
This is a new component for handling errors. Dynamic script generation should be
enabled.
Each component will have an 'error_group'. Error records from various components
can be sent to this component by connecting their error ports to the in port of this
component (explicit flow),
or by defining the error_group property of the other components with the error group
name of the Handle Errors component.
The parent graph and subgraph can have different components with the same error_group
name. This way, error escalation is possible from the subgraph to the parent graph.
For error escalation the following functions are used in the Handle Errors component:
1) make_error()
In this function, depending on rules, use force_error to escalate the selected
errors.
2) log_error()
The Handle Errors component can be in the same phase or a higher phase. It cannot
be in a lower phase.
If it is in a higher phase, data will be lost if the graph fails.
27) Handle Logs
Works in a similar way to Handle Errors. The only difference is that it does not
have the escalation option, which is not needed.
25) Inflate ( Compress )
Restores data which was compressed by the 'Deflate' component. Supports files
compressed by gzip and z, but does not support files compressed by the Unix pack
utility. The UNCOMPRESS component can uncompress this type of file.
The INFLATE and DEFLATE components can compress and uncompress data better than the
COMPRESS and UNCOMPRESS components.
INFLATE and DEFLATE can accept multiple input flows (implicit gather), but
COMPRESS and UNCOMPRESS cannot.
INFLATE and DEFLATE can work in a continuous graph, but COMPRESS and UNCOMPRESS
cannot.
INFLATE and DEFLATE cannot work on files compressed with the Unix pack utility, but
COMPRESS and UNCOMPRESS can. This is the only situation where we should use COMPRESS
and UNCOMPRESS.
26) Input File ( Datasets )
27) Input Table ( Datasets )
a) field_type_preference
delimited, variable and fixed.
b) max_rows
c) ablocal_expr
d) ABLOCAL clause
e) ABLOCAL_SUBPART clause
How parallelism is achieved in different databases
...................................................
With Oracle
The parameter 'parallel_mode' can have two different values:
1) rowid
2) subpartition
With rowid, the load will be divided into ranges of rowids, and separate SQLs will
be invoked based on the rowid range.
With subpartition, separate SQLs will be issued for each table partition.
The component's default layout is decided (automatically) using the logic below.
If the table is partitioned, it will be the number of partitions.
If not partitioned, then it will be the number of data files used by the table.
This is dangerous because the number of data files can vary, affecting the graph;
the graph may even fail.
If ABLOCAL() is used in the SQL, it will be replaced with the value from the
ablocal_expr parameter.
If ABLOCAL(table name) is used, then the table name will be used as the driving table
to control the parallel load.
If ABLOCAL_SUBPART() is used, it will be replaced with the subpartition (partition
name) clause in the SQL. Each partitioned query uses its own partition name.
Since the database table partitioning (layout) can change due to database maintenance,
it is not advisable to use the default database layout for an input table. If we use
it, add a partition component after this so that changes in the table layout will not
affect downstream components.
Or use an MFS as the layout for the input table, or use propagation from a neighbor.
28) Interleave ( Departition )
Combines blocks of records from multiple flows in round-robin fashion. This
reverses the effect of 'Partition by Round-robin'.
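The way Interleave reverses a round-robin partition (block size 1) can be sketched in plain Python (illustrative names, not DML):

```python
from itertools import zip_longest

_MISSING = object()

def partition_round_robin(records, n):
    # Deal records across n flows one at a time.
    return [records[i::n] for i in range(n)]

def interleave(flows):
    # Reverses a block-size-1 round-robin partition.
    return [rec for batch in zip_longest(*flows, fillvalue=_MISSING)
            for rec in batch if rec is not _MISSING]
```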
29) Intermediate File ( Datasets )
Saves the data into an intermediate file for later review.
Receives data through the 'write' port.
Downstream components read data through the 'read' port.
30) Join ( Transform )
Combines records from two or more in flows and writes the transformed records to
the out port.
Has 'reject', 'unused' and 'error' ports for each in port as well.
Join can have up to 20 in ports.
Join types are : 1) Inner Join, 2) Outer Join and 3) Explicit Join.
The dedupn parameter specifies whether to remove duplicate records.
selectn filters the input flows.
With sorted records, the last record will be kept.
With unsorted records, the first record will be kept.
The 'driving' property specifies the driving port. (Used when the input records are
not sorted; the Join component will read all other ports into memory.) The driving
port must be the biggest port.
31) Join With DB ( Database )
Joins the records from the in ports with the data it retrieves from the database
and does a transform.
select_sql : SQL file
match_required
maximum_matches
execute_on_miss
cache_query_results
Optional 'do_query' to decide whether to execute the database query.
Optional 'compute_key' to derive the keys for the SQL query from the data in the input
record. If provided, compute_key takes the input record and computes a temporary value
of type 'key_type' which will be used in the SQL query for bind variables.
One possible use is to call stored procedures --> call test_procedure(:emp_id,
:emp_name, :a, :b)
32) Leading Records ( Miscellaneous )
Copies the first N records.
33) Lookup File ( Datasets ) (Do a sample graph for interval keys)
Represents one or more serial files or a multifile, holding keys and associated
data. Keys will be indexed. Data
will be kept in memory for easy retrieval.
Keys can be
a) exact
b) interval
c) regular expression
d) with interval_bottom and interval_top
We should use a Lookup File only when the data is small. If large, we should use the
Join component.
Use DML functions like 'lookup', 'lookup_count' and 'lookup_next' to retrieve the
data.
34) Lookup Template
Defines a record format and a key for dynamically defining a lookup.
keep_on_disk : can be on disk or in memory
block_compressed :
lookup_load to load data
lookup_unload to unload
Use WRITE_LOOKUP to create the data and index files
35) Match Sorted
Combines multiple flows using a transform function.
See also Join and Fuse.
36) Merge ( Departition )
Combines records from multiple flow partitions which are already sorted, keeping
the sort order.
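A plain-Python sketch of the merge semantics (illustrative, not DML; each input flow must already be sorted on the key):

```python
import heapq

def merge_departition(flows, key):
    # Merge already-sorted flows into one flow, preserving the sort order.
    return list(heapq.merge(*flows, key=key))
```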
37) Meta Pivot ( Miscellaneous )
Splits each record by column into multiple records.
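A plain-Python sketch of the per-column split (illustrative, not DML; the name/value output shape is an assumption):

```python
def meta_pivot(record):
    # One (field name, field value) output record per column of the input.
    return [{"name": k, "value": v} for k, v in record.items()]
```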
38) Multi Reformat ( Transform )
Just like multiple Reformat components with no select parameter.
This component can have up to 20 in ports and out ports.
This component can be used just in front of a custom component to avoid deadlock.
39) Multi Update Table ( Database )
Executes SQL statements against the incoming records to update one or multiple
tables.
count : defines the number of primary and secondary SQL pairs
primarySQLn
secondarySQLn
All SQLs are either committed or rolled back together.
filtern : defines whether the corresponding SQL pair needs to be executed.
Optional transform package :- to provide header/detail record inserts/updates.
Transform Package :- Has a statement_input_type type
length() function to return the number of detail records
compute_statement_type() will be called length() number of times, and each time the
primary/secondary SQL will be executed with the data it returns
40) Normalize ( Transform )
Divides one record into multiple output records.
Specify the number of records using the 'length' method, or implement a do-while loop
using the 'finished' method. (It will keep calling normalize until the finished
function returns 1 for the record.)
Optionally define temporary_type
a) input_select
b) initialize
c) length
d) normalize
e) finalize
f) output_select
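The 'length' style of normalization can be sketched in plain Python (illustrative, not DML):

```python
def normalize(records, length, normalize_fn):
    # Emits length(rec) output records for each input record.
    out = []
    for rec in records:
        for i in range(length(rec)):
            out.append(normalize_fn(rec, i))
    return out
```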
41) Output File ( Datasets )
Writes data into an external file
42) Output Table ( Datasets )
Loads data from the graph into a table.
Specify the table name or use SQL to insert into the table.
Using an SQL statement we can insert into one or multiple tables.
rows_per_commit :- number of rows to process before committing records
commit_number :- number of input records to process before committing.
commit_table :-
direct
indexing_mode
If there are indexes on columns other than the primary key, loading the data in serial
(rather than in parallel) may
provide better performance. This is because otherwise multiple partitions will
compete to acquire
locks on the same objects.
Oracle loads can be sped up by specifying the 'unrecoverable' option. This means the
process will not write into the
redo log files.
This is applicable when we are loading a table completely.
43) Partition By Expression ( Partition )
Partitions data based on a DML expression.
'function' : specify the DML expression which returns values from 0 to (number of
partitions - 1)
Or it can use the 'partition_index(in)' method to decide the partition.
44) Partition By Key ( Partition )
Partitions data according to key value.
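A plain-Python sketch of key-based partitioning (illustrative, not DML; CRC32 stands in for whatever hash the component actually uses):

```python
import zlib

def partition_by_key(records, key, n):
    # Records with the same key value always land in the same partition.
    parts = [[] for _ in range(n)]
    for rec in records:
        parts[zlib.crc32(str(key(rec)).encode()) % n].append(rec)
    return parts
```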
45) Partition By Key and Sort ( Partition )
This is a subgraph component which partitions based on key and then sorts the
records.
46) Partition By Percentage ( Partition )
Partitions data based on specified percentages.
The percentage can be specified either by
a) the percentage parameter
b) the pct port
47) Partition By Range ( Partition )
Partitions records based on key range.
Ranges are specified on the split port. (Use the Find Splitters component to get the
splitters.)
48) Partition By Round-Robin ( Partition )
Partitions data evenly to all connected flows in a round-robin fashion.
block-size parameter.
49) Partition With Load Balance ( Partition )
Partitions the data based on the speed at which each downstream flow consumes
it.
50) Read Multifiles ( Datasets )
Reads data from the in port and converts each record to a file name by calling the
'getFileName' method.
Reads the files and writes the data into the out port.
Optionally we can use a reformat function to reformat the records and write the
reformatted records into the out port.
skip_count and read_count properties
51) Read Row ( Translate Folder )
Reads data in row format, parses the data, and writes it into the out port as records.
parse function to parse
read_string and similar methods to read data
write_string, write_data to write records
52) Read Separated Values ( Translate Folder )
Processes a flow of comma-separated or tab-separated values.
53) Read Shared ( Datasets folder )
Helps to reduce disk reads when different graphs read the same large file.
A temporary server will be started which reads the file and keeps in-memory
sections so that different Read
Shared components can get the data.
For different components in the same graph, a similar feature is provided by the
'REPLICATE' component.
54) Read Tagged Values
Reads name/value pairs and converts them into output records.
55) Read XML
Some parameters are
root element
base element
leaf element
xml-to-dml utility to convert an XML schema into DML,
or the Import From XML dialog to do this.
56) Read XML Transform
This component is more capable than Read XML.
It has the methods
a) prepare_document
b) create_output
XML_SPLIT is a better component and should be used in all new graphs
57) Redefine Format ( Misc )
Reads data from the in port and writes the data UNCHANGED to the out port, interpreted
according to the out port's record format. The out port DML will be different.
58) Reformat ( Transform )
Reformats data from the in port to the out port.
Uses a transform function.
This has a method 'output_index' which decides which "transform function" will be
called and to which out port the record will be written. It also has output_indexes
if one record needs to go to multiple out ports.
select expression to filter records before transforming
59) Repair Input
Repairs malformed records.
All good records will be sent directly to the out port.
All malformed records will be sent to the 'repair-input-transform' method. If this
function succeeds, the corrected record
will be sent to the out port.
60) Replicate
Arbitrarily combines the records from its in port and writes a copy of the data to
each of its out ports.
61) Rollup
Evaluates a group of input records and generates a record which summarizes the
group.
Two modes:
1) Template Mode
If the rollup is to calculate basic aggregate functions like sum, maximum,
minimum etc., use template
mode along with the aggregate functions. At runtime, the Co>Operating System will
expand this into different functions
to perform the operation.
This mode provides no control over deriving the summary records.
2) Expanded Mode
This mode gives maximum control.
Provides
a) a temporary type
b) initialize method
c) rollup method
d) finalize method
input_select
key_change
The 'accumulation' function can be used in the rollup() template method to create a
vector for a field
eg : out.amounts :: accumulation(in.amnt) ;
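The expanded-mode initialize/rollup/finalize cycle can be sketched in plain Python (illustrative, not DML; input must be sorted on the key):

```python
from itertools import groupby

def rollup(records, key, initialize, roll, finalize):
    # One summary output record per group of sorted input records.
    out = []
    for k, grp in groupby(records, key=key):
        temp = initialize()          # fresh temporary at each group start
        for rec in grp:
            temp = roll(temp, rec)   # fold each record into the temporary
        out.append(finalize(k, temp))
    return out
```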
62) Run Commands
Runs processes which run operating-system commands outside the Co>Operating System.
The in port will have a record type like
command
arguments
etc.
63) Run Program
Executes an external program in a graph.
Reads the records from the in port, executes the program (given in command_line)
with each record, and writes
the data into the out port.
Only one command is allowed. If multiple commands are needed, wrap the
commands in a shell
eg : command_line = ksh -c 'grep "san" | head -100'
64) Run SQL
Runs an SQL statement.
65) Sample
Retrieves a specified number of records from the in port.
66) Scan
For every input record, generates one record which includes the *running cumulative*
of the group the record belongs to.
input_select
initialize
scan
finalize
output_select
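The running-cumulative behavior can be sketched in plain Python (illustrative, not DML; input sorted on the key):

```python
from itertools import groupby

def scan(records, key, initialize, scan_fn, finalize):
    # One output per input record, carrying the running total for its group.
    out = []
    for k, grp in groupby(records, key=key):
        temp = initialize()
        for rec in grp:
            temp = scan_fn(temp, rec)       # update the running cumulative
            out.append(finalize(k, temp))   # emit one record per input record
    return out
```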
76) Scan With Rollup
Similar to Scan, but also sends a summary record to its rollup port.
Has the following additional methods:
a) rollup_finalize (produces the rollup record)
b) rollup_select (if defined, this is a filter deciding whether to emit the record)
77) Sort
Sorts records.
Records with NULL key values are listed first (in ascending order).
Supports implicit format.
78) Sort within Groups
Sort within Groups refines the sorting of records which are already sorted on
certain keys.
major_key and minor_key
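A plain-Python sketch of the major/minor key refinement (illustrative, not DML):

```python
from itertools import groupby

def sort_within_groups(records, major_key, minor_key):
    # Input is already sorted on major_key; each group is re-sorted on minor_key.
    out = []
    for _, grp in groupby(records, key=major_key):
        out.extend(sorted(grp, key=minor_key))
    return out
```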
79) SPLIT
Split is the reverse of the COMBINE component. It normalizes a complex record (DML
with vectors) into flattened records on one or multiple output flows,
or selects a subset of the input data.
There is no transform function. Operations are done based on the DML defined on the
in and out ports.
79) Throttle
Copies records from the IN port to the OUT port at a rate we specify.
80) Trash
Ends a flow by accepting all records and discarding them.
Trash is a BROADCAST component without an out port.
81) Truncate Table
Truncates a database table.
If inserting records after the truncate, we should use Output Table's truncate switch
instead.
82) Update Table
Has an
Update SQL
Insert SQL
The Insert SQL is executed only when the Update SQL fails.
This was used in the ArteMIS file.
83) Write BLOCK-COMPRESSED Lookup
Has three out ports:
out
index
address
84) Write Excel Flow
Used to create Excel data.
85) WRITE EXCEL SPREADSHEET
Used to create an Excel spreadsheet.
86) Write XML
Writes data into an XML file.
Use the new XML COMBINE component instead of Write XML.
87) XML Combine
Reads DML-described data and writes it into an XML document.
88) XML Split
Reads XML data. It is the reverse of the XML Combine component.
==============================================================================