Most UNIX distributions contain utilities such as sort and join that perform typical processing operations on data in sequential files. These utilities are well integrated into UNIX shell environments but lack some features needed in large batch processing applications. Standard UNIX utilities do not allow data field values to be referenced by field name; developers must reference field values by numbers representing relative field position within records, which makes applications much less maintainable. Most standard UNIX data utilities were also developed when typical servers had single CPUs. Today, servers commonly have multiple CPUs, and utilities should include features supporting scalable processing to take advantage of multi-processing environments. Finally, records using traditional character-delimited fields are inherently inefficient to process because every character of data must be examined to identify field delimiters. A more processing-efficient record format is needed. DFILE Tools were created to address these issues in batch processing applications on UNIX servers.
DFILE Tools support processing of text data in sequential files. They provide several configurable utilities to perform typical processing operations. When business rules require custom programming, a C language API is available to access software libraries for reading and writing data files. The sections that follow describe DFILE features not found in standard UNIX tools.
A notable weakness associated with standard UNIX tools is their variable length record format. Using special characters for delimiting fields and records is convenient for configuration information but inadequate for processing actual data. Using printable characters as delimiters introduces a risk that the delimiter character may exist as data; non-printable delimiter characters, on the other hand, are inconvenient to use with some UNIX tools. In either case, delimited formats are inefficient to process. A better alternative is to store the length of each field in one byte prior to the actual value. This limits field values to 255 characters but is significantly more efficient than processing the delimited format. For flexibility, both methods are supported. The following tables illustrate the available record formats:
Length-prefixed format (field lengths shown as bracketed hex bytes):

| | APPLICATION DATA | | | STORED DATA |
|---|---|---|---|---|
| RECORD 1 | AAAAA | BBB | CCCC | [0x05]AAAAA[0x03]BBB[0x04]CCCC |
| RECORD 2 | XXX | YY | Z | [0x03]XXX[0x02]YY[0x01]Z |

Delimited format (pipe field delimiter):

| | APPLICATION DATA | | | STORED DATA |
|---|---|---|---|---|
| RECORD 1 | AAAAA | BBBBB | CCCCC | AAAAA|BBBBB|CCCCC |
| RECORD 2 | XXXXX | YYYYY | ZZZZZ | XXXXX|YYYYY|ZZZZZ |

Delimited format with escaped special characters in data (^J is a newline):

| | APPLICATION DATA | | | STORED DATA |
|---|---|---|---|---|
| RECORD 1 | AAAAA | BB|BB | CC||C | AAAAA|BB\|BB|CC\|\|C |
| RECORD 2 | XXXXX | Y^JYY | ZZZZZ | XXXXX|Y\^JYY|ZZZZZ |
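The length-prefixed format is straightforward to produce and consume. Below is a minimal sketch in C of encoding one record in this format; the function name and buffer handling are illustrative only and are not part of the DFILE API.

#include <stddef.h>
#include <string.h>

/*
** Encode one record in the length-prefixed format: each field value is
** preceded by a single byte holding its length (0-255). Returns the
** number of bytes written, or 0 when a field is too long or the record
** does not fit in the output buffer. Illustrative sketch only.
*/
size_t encode_record( char *out, size_t out_size,
    const char *field[], const size_t field_len[],
    unsigned short field_cnt )
{
    size_t used = 0;
    unsigned short ndx;

    for ( ndx = 0; ndx < field_cnt; ++ndx ) {
        if ( field_len[ ndx ] > 255 ) {
            return 0;       /* one-byte prefix limits fields to 255 bytes */
        }
        if ( used + 1 + field_len[ ndx ] > out_size ) {
            return 0;       /* record does not fit */
        }
        out[ used++ ] = (char)field_len[ ndx ];
        (void) memcpy( &out[ used ], field[ ndx ], field_len[ ndx ] );
        used += field_len[ ndx ];
    }
    return used;
}

Reading is equally direct: each one-byte prefix gives the next field's length, so no characters need to be scanned for delimiters.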
Since DFILE Tools need record format and layout information to process data files, meta-data is maintained in configuration files. Environment variable ${CFGPATH} contains directories to search for configuration entries. There are two types of configuration files. The configuration file accessed first by DFILE Tools is dfile.cfg. It contains configuration records for dfiles. Each dfile configuration record names an additional configuration file containing the field names that comprise the record layout. Each dfile.cfg entry uses the colon character (:) as a field delimiter, and each configuration record contains the following information: dfile name, field delimiter, record separator, delimiter escape character, record layout configuration file name, and UNIX file system path.
If field delimiter and record separator are not specified, the length-prefixed record format is used, which stores one-byte field lengths adjacent to field values.
UNIX file system paths may contain two different types of variables. They may contain tags such as %g and %p that are replaced by values passed into utilities at run time, either through command line arguments or control files. Environment variables may be specified in braces, such as ${HOME}.
GZIP compression is used directly by DFILE Tools when UNIX file system paths have a .gz suffix, the standard convention for files stored with GZIP compression. Below are example entries:
extract.mai_inv::::mai_inv.cfg:${DATA}/extract/mai_inv/%p.dat
sort.mai_inv:|:10::mai_inv.cfg:${DATA}/sort/mai_inv/%p.dat
mai_inv:|:10:\:mai_inv.cfg:${DATA}/dfile/mai_inv/%g/%p.dat.gz
Record layout configuration files contain field names and their order within the record. Field names are not case sensitive when using utilities. Programmers passing field names associated with host variables to the API should remember that the API expects field names in upper case. The following is an example layout configuration file:
SBSCRP_ID
ACCT_NBR
DSS_EXPR_DT
DSS_EFF_DT
LOAD_DT
SOURCE_SYS_ID
USER_ID
EFF_DT
EXPR_DT
DELETE_IND
DELETE_DT
LAST_UPDT_DT
Since batch applications are processing intensive, it is important that they scale to the hardware architecture. DFILE Tools does this by partitioning the data records for a large relation/table into multiple UNIX files. Records are assigned to partition units based on leading key field values, which makes each partition unit's file location predictable from key field values. DFILE Tools contains a process manager that executes concurrent instances of utilities based on configured CPU and data partition information. Each execution instance processes only one data partition unit. Partition unit identification is passed from the process manager to a utility using either environment variables or command line tag assignments (-t %p=99). Sometimes it is also advantageous to partition records hierarchically based on the first few key fields.
Often processes require only a portion of the records in a file, based on business rules. All utilities have a predicate scripting language that allows unwanted records to be filtered at run time. While it is functionally similar to SQL WHERE clauses, the language looks much like LISP. The following are examples:
( where ( = $acc_status O ) )
( where
( or
( in $src_owner_cd ( SH HO IL BR SC ) )
( in $src_sals_owner_cd ( SH HO IL BR SC ) ) ) )
( where
( and
( or
( = $gross_actv_mgrtn_in_qty 1.0 )
( = $mgrtn_in_qty 1.0 ) )
( or
( = $gross_actv_mgrtn_out_qty 0.0 )
( = $mgrtn_out_qty 0.0 ) ) ) )
For those unaccustomed to LISP and S-expressions, these statements may seem awkward. The previous statement is equivalent to the following SQL WHERE clause:
where ( gross_actv_mgrtn_in_qty = 1.0 or mgrtn_in_qty = 1.0 )
and ( gross_actv_mgrtn_out_qty = 0.0 or mgrtn_out_qty = 0.0 )
As expressions grow in complexity, the S-expression form is easier to visualize as a decision tree.
( where
( and
( = $network_ind C )
( not ( = $message_type 1 ) )
( in $toll_type ( 6 7 8 M ) )
( not ( = $mps_feature_cd_1 VM ) )
( not ( = $mps_feature_cd_2 VM ) )
( not ( = $mps_feature_cd_3 VM ) )
( not ( = $mps_feature_cd_4 VM ) )
( not ( = $mps_feature_cd_5 VM ) ) ) )
( WHERE
( AND
( = $network_ind C )
( NOT
( OR
( LIKE $mps_feature_cd_1 [lL][cC][iI][bB] )
( LIKE $mps_feature_cd_2 [lL][cC][iI][bB] )
( LIKE $mps_feature_cd_3 [lL][cC][iI][bB] )
( LIKE $mps_feature_cd_4 [lL][cC][iI][bB] )
( LIKE $mps_feature_cd_5 [lL][cC][iI][bB] ) ) ) ) )
Sometimes constant semantics are ambiguous. When the interpreter notices a constant beginning with a digit, it attempts to use it as a double-precision floating point number. To force a numeric constant to be treated as ASCII, prefix it with a tick mark ('). Lists of constants used with the IN operator are exceptions; they are always placed in a hash table based on their ASCII values. The tick mark is also useful for representing a zero length value. Examples are as follows:
( where
( and
( >= $event_dt '20080601 )
( <= $event_dt '20080630 ) ) )
( where
( not
( = $orig_tid_cid ' ) ) )
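The constant rule can be summarized in a few lines of C. This is an illustrative sketch of the decision described above, not the actual interpreter source.

#include <ctype.h>
#include <stdlib.h>

/*
** Sketch of the constant rule: a token beginning with a digit is used
** as a double; a leading tick (') forces ASCII, and a lone tick
** denotes a zero length value. Illustrative only.
*/
typedef struct {
    int is_numeric;             /* non-zero when value_num is valid */
    double value_num;
    const char *value_ascii;
} constant_t;

constant_t interpret_constant( const char *token )
{
    constant_t c = { 0, 0.0, token };

    if ( token[ 0 ] == '\'' ) {
        c.value_ascii = &token[ 1 ];    /* tick stripped; "'" alone is zero length */
    } else if ( isdigit( (unsigned char)token[ 0 ] ) ) {
        c.is_numeric = 1;
        c.value_num = strtod( token, (char **)0 );
    }
    return c;
}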
When special characters [ \t\n()" ] are needed in constants, they may be escaped with a backslash character (\). Alternatively, if the quote character (") is not itself among the characters needed, constants containing special characters may be surrounded with quote characters.
The following is a description of the predicate language grammar:
START → ( <where> <condition> )
condition → <compare>
| ( <and> <conjunction> )
| ( <or> <conjunction> )
| ( <not> <condition> )
conjunction → <condition> <conjunction>
| <condition>
compare → ( = <datum> <datum> )
| ( > <datum> <datum> )
| ( >= <datum> <datum> )
| ( < <datum> <datum> )
| ( <= <datum> <datum> )
| ( <in> <variable> ( <literal_list> ) )
| ( <like> <variable> <literal> )
literal_list → <literal> <literal_list>
| <literal>
datum → <variable>
| <number>
| <literal>
where → [Ww][Hh][Ee][Rr][Ee]
and → [Aa][Nn][Dd]
or → [Oo][Rr]
not → [Nn][Oo][Tt]
in → [Ii][Nn]
like → [Ll][Ii][Kk][Ee]
literal → <ascii_string>
| <default_literal>
ascii_string → ['].*
number → [-]?(([0-9]+)|([0-9]*[.][0-9]+))
variable → [$][-a-zA-Z0-9_.]+
default_literal → .+
Since the recommended record format does not lend itself to use with standard UNIX tools, utility dcat is available for ad hoc viewing of data. Its default behavior is to write data to stdout with pipe (|) as a field delimiter and new line (\n) as a record separator. The required argument is expected to be a dfile name. Optional command line argument -h prints a header record containing field names. The following is an example:
$ dcat -h -t %g=current processed_revenue_cycle | head
cycle_dt|load_dt
20070702|2007-08-05 18:24:58
20070703|2007-08-05 18:24:58
20070704|2007-08-05 18:24:58
20070705|2007-08-05 18:24:58
20070706|2007-08-05 18:24:58
20070707|2007-08-05 18:24:58
20070708|2007-08-05 18:24:58
20070709|2007-08-05 18:24:58
20070710|2007-08-05 18:24:58
If data is known to contain pipe characters, an alternate field delimiter can be specified with the -F argument. No checking is performed to ensure the output field delimiter is not contained in the data.
Executing concurrent instances of a utility where each instance processes a data partition unit requires a process manager. DFILE Tools' process manager is called dfile_exec. Required run time arguments include a command, the maximum number of concurrent processes, and partition unit information in the form of a positive integer or UNIX file name. Positive integers represent hash partition units, and files contain ASCII range partition unit values. At least one %s tag is expected to be used in arguments to the command. As dfile_exec spawns processes, it replaces tags with a value that identifies a partition unit. The following are examples:
$ dfile_exec -m 1 -f 5 -c 'echo partition unit %s' 2>/dev/null
partition unit 0
partition unit 1
partition unit 2
partition unit 3
partition unit 4
$ cat partition_list.cfg
A
B
C
D
E
$ dfile_exec -m 1 -f partition_list.cfg -c 'echo partition unit %s' 2>/dev/null
partition unit A
partition unit B
partition unit C
partition unit D
partition unit E
The previous examples indicate one process will run at a time (-m 1). The first example processes a hash partition containing five units. The second example processes range partition unit values A to E.
Some data lends itself to being partitioned at multiple levels. An example is customer billing information in monthly cycles. It can be partitioned by month, cycles within a month, and customers within each cycle. Processing three levels of partitions can occur by nesting execution of dfile_exec. It is possible to process them with one instance of dfile_exec, but it requires a partition definition file to be created containing records with a field per partition level. dfile_exec passes the information to a wrapper script to parse the fields and reformat them as different arguments to utilities.
In most cases stdout and stderr of each processed partition unit should be kept in individual log files. This is possible by specifying the -o argument with a UNIX file path containing a %s tag. The tag is expanded to the partition unit name at run time. Also, suffixes .out and .err are appended to the UNIX file path for stdout and stderr. When partition unit processes fail at run time and are restarted, the stdout file is truncated for new output results but stderr is appended. Confusion between stderr output from the original execution and subsequent runs is avoided by outputting each process start and end times.
Checkpoint tracking for failure recovery is achieved using the -l argument followed by a UNIX file path. When partition units are processed and not all are successful, dfile_exec writes a list of the partition units that were successful. During a re-run, this list is loaded to prevent successful partition units from being re-processed.
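The following C sketch illustrates the recovery check described above, assuming the checkpoint file simply lists one successful partition unit name per line; it is not dfile_exec source.

#include <stdio.h>
#include <string.h>

/*
** Return non-zero when `unit` appears in the checkpoint log, meaning it
** completed successfully on a previous run and may be skipped.
** Illustrative sketch; the actual log format is an assumption.
*/
static int unit_completed( const char *log_path, const char *unit )
{
    char line[ 256 ];
    FILE *fp = fopen( log_path, "r" );
    int found = 0;

    if ( fp == (FILE *)0 ) {
        return 0;                       /* no log: nothing completed yet */
    }
    while ( fgets( line, (int)sizeof( line ), fp ) != (char *)0 ) {
        line[ strcspn( line, "\n" ) ] = '\0';
        if ( strcmp( line, unit ) == 0 ) {
            found = 1;
            break;
        }
    }
    (void) fclose( fp );
    return found;
}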
$ cat monthly_cycles.txt
05
10
15
20
25
$ cat dfile_sort_accounts.ksh
export cycle=${1}
dfile_exec -m4 -f13 -o ${LOGS}/dfile_sort/%s \
-l ${RECOVERY}/${cycle}_accounts.log \
-c 'dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=${cycle} -t %a=%s'
$ dfile_exec -m1 -f monthly_cycles.txt -o ${LOGS}/dfile_sort_accounts/%s \
-l ${RECOVERY}/dfile_sort_accounts.log \
-c 'ksh dfile_sort_accounts.ksh %s'
Above is a nested dfile_exec example. For each of the five cycles, 13 account hash partition units are processed. While each cycle partition unit is processed one at a time, account partition units are processed four at a time. Tags %c and %a are used in the UNIX file paths configured in dfile.cfg for entries extract.account and sort.account as place holders for cycle and account partition unit values. Shell script dfile_sort_accounts.ksh and utility dfile_sort will run as follows:
ksh dfile_sort_accounts.ksh 05
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=05 -t %a=00
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=05 -t %a=01
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=05 -t %a=02
...
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=05 -t %a=11
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=05 -t %a=12
ksh dfile_sort_accounts.ksh 10
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=10 -t %a=00
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=10 -t %a=01
...
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=20 -t %a=11
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=20 -t %a=12
ksh dfile_sort_accounts.ksh 25
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=25 -t %a=00
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=25 -t %a=01
...
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=25 -t %a=10
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=25 -t %a=11
dfile_sort -k acct_nbr -i extract.account -o sort.account -t %c=25 -t %a=12
In batch processing environments it is common to run several job streams at a time. Having excessive active UNIX processes can diminish overall system performance due to memory paging. Less active UNIX processes, such as wrapper shell scripts that establish run-time environments and utility dfile_exec, do not require much system resource. Active UNIX processes are generally processes that apply processing rules directly to data. Ideally, only the minimum number of active UNIX processes needed to keep all CPUs busy should execute. This can be nearly achieved by associating a UNIX semaphore with dfile_exec. A UNIX semaphore can be created and initially set to approximately the number of system CPUs. When dfile_exec is started to directly spawn active UNIX processes, command line argument -s is used to specify a configuration file containing the UNIX semaphore ID--a positive hexadecimal value. Prior to spawning processes, dfile_exec checks the semaphore to find the maximum number of processes it can create. If the number is greater than zero, it decrements the semaphore by an appropriate number and spawns the same number of processes. When dfile_exec completes, the semaphore value is increased by the amount it was earlier decreased.
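The following C sketch illustrates this throttling pattern with a System V semaphore. The function names are illustrative, and dfile_exec's actual implementation may differ.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/*
** Reserve up to `want` process slots from the semaphore without
** blocking; returns the number reserved (possibly zero). Release the
** same number when processing completes. Illustrative sketch only.
*/
int reserve_slots( int semid, int want )
{
    int avail = semctl( semid, 0, GETVAL );
    struct sembuf op;

    if ( avail <= 0 ) {
        return 0;
    }
    op.sem_num = 0;
    op.sem_op = (short)( -( want < avail ? want : avail ) );
    op.sem_flg = IPC_NOWAIT;
    if ( semop( semid, &op, 1 ) == -1 ) {
        return 0;                       /* lost a race; try again later */
    }
    return -op.sem_op;                  /* number of slots reserved */
}

void release_slots( int semid, int reserved )
{
    struct sembuf op;

    op.sem_num = 0;
    op.sem_op = (short)reserved;
    op.sem_flg = 0;
    (void) semop( semid, &op, 1 );
}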
A large dfile can be split into many UNIX files using utility dfile_partition. Each UNIX file represents an individual partition unit. Partitions are defined according to rules associated with hash and range partitioning methods. The hash partitioning method applies an ASCII hashing algorithm to a specified key field contained in each record. Records are written to UNIX files based on the result of the algorithm and the number of defined partition units; specifically, the partition unit file number is the remainder from dividing the hash value by the number of defined partition units. The range partitioning method also requires a key field from each record. Key field values are used to perform a binary search of partition definition values. The correct partition unit is located when the greatest ASCII value is found that is less than or equal to the record key value. The following are examples:
$ cat dfile.cfg
bsa_sid:|:10::bsa_sid.cfg:bsa_sid.dat
bsa_sid.hash_partition:|:10::bsa_sid.cfg:bsa_sid/hash_partition/%p.dat
bsa_sid.range_partition:|:10::bsa_sid.cfg:bsa_sid/range_partition/%p.dat
$ cat bsa_sid.cfg
BSA_ID
OWNER_CD
OWNER_TYPE_CD
SID
$ cat bsa_sid.dat
AUHLOG504|35|TPCS|70275
AVIRHN815|AU|AFG|70285
BPSNWT917|35|TPCS|07106
CJIVAL719|35|TPCS|07384
FKNALA104|35|TPCS|07654
HERNOR303|35|TPCS|70092
HUOPLA036|35|TPCS|07622
IMUPDC470|35|TPCS|70303
LESMAD556|K7|AFG|70343
LEXCAN661|35|TPCS|70109
MOAKEY605|35|TPCS|07151
NACNTL962|35|TPCS|07162
NDRSAN019|35|TPCS|07376
NNXSCR505|AR|AFG|70338
NYRFRM518|35|TPCS|70571
OJIMAR840|35|TPCS|07418
ONAMRT341|IL|AFG|73764
POTWEI804|35|TPCS|07171
SFAWTR805|BR|AFG|70413
SGRBRK310|35|TPCS|07183
SVLCOL718|35|TPCS|07190
$ dfile_partition -i bsa_sid -o bsa_sid.hash_partition -f bsa_id -h 7
$ ls -l bsa_sid/hash_partition
total 14
-rw-rw-rw- 1 kcrane kcrane 96 Oct 24 14:53 0.dat
-rw-rw-rw- 1 kcrane kcrane 94 Oct 24 14:53 1.dat
-rw-rw-rw- 1 kcrane kcrane 48 Oct 24 14:53 2.dat
-rw-rw-rw- 1 kcrane kcrane 71 Oct 24 14:53 3.dat
-rw-rw-rw- 1 kcrane kcrane 95 Oct 24 14:53 4.dat
-rw-rw-rw- 1 kcrane kcrane 47 Oct 24 14:53 5.dat
-rw-rw-rw- 1 kcrane kcrane 48 Oct 24 14:53 6.dat
$ for file in bsa_sid/hash_partition/?.dat
> do
> echo "\n$file"
> cat $file
> done

bsa_sid/hash_partition/0.dat
AVIRHN815|AU|AFG|70285
ONAMRT341|IL|AFG|73764
POTWEI804|35|TPCS|07171
SGRBRK310|35|TPCS|07183

bsa_sid/hash_partition/1.dat
LEXCAN661|35|TPCS|70109
NNXSCR505|AR|AFG|70338
NYRFRM518|35|TPCS|70571

bsa_sid/hash_partition/2.dat
BPSNWT917|35|TPCS|07106
CJIVAL719|35|TPCS|07384
NACNTL962|35|TPCS|07162

bsa_sid/hash_partition/3.dat
LESMAD556|K7|AFG|70343
MOAKEY605|35|TPCS|07151
OJIMAR840|35|TPCS|07418
SFAWTR805|BR|AFG|70413
SVLCOL718|35|TPCS|07190

bsa_sid/hash_partition/4.dat

bsa_sid/hash_partition/5.dat
AUHLOG504|35|TPCS|70275
FKNALA104|35|TPCS|07654
HERNOR303|35|TPCS|70092
HUOPLA036|35|TPCS|07622
IMUPDC470|35|TPCS|70303

bsa_sid/hash_partition/6.dat
NDRSAN019|35|TPCS|07376
$ cat partition.cfg
A
G
M
S
$ dfile_partition -i bsa_sid -o bsa_sid.range_partition -f bsa_id -h partition.cfg
$ ls -l bsa_sid/range_partition
total 8
-rw-rw-rw- 1 kcrane kcrane 119 Oct 24 15:56 A.dat
-rw-rw-rw- 1 kcrane kcrane 119 Oct 24 15:56 G.dat
-rw-rw-rw- 1 kcrane kcrane 190 Oct 24 15:56 M.dat
-rw-rw-rw- 1 kcrane kcrane 71 Oct 24 15:56 S.dat
$ for file in bsa_sid/range_partition/?.dat
> do
> echo "\n$file"
> cat $file
> done

bsa_sid/range_partition/A.dat
AUHLOG504|35|TPCS|70275
AVIRHN815|AU|AFG|70285
BPSNWT917|35|TPCS|07106
CJIVAL719|35|TPCS|07384
FKNALA104|35|TPCS|07654

bsa_sid/range_partition/G.dat
HERNOR303|35|TPCS|70092
HUOPLA036|35|TPCS|07622
IMUPDC470|35|TPCS|70303
LESMAD556|K7|AFG|70343
LEXCAN661|35|TPCS|70109

bsa_sid/range_partition/M.dat
MOAKEY605|35|TPCS|07151
NACNTL962|35|TPCS|07162
NDRSAN019|35|TPCS|07376
NNXSCR505|AR|AFG|70338
NYRFRM518|35|TPCS|70571
OJIMAR840|35|TPCS|07418
ONAMRT341|IL|AFG|73764
POTWEI804|35|TPCS|07171

bsa_sid/range_partition/S.dat
SFAWTR805|BR|AFG|70413
SGRBRK310|35|TPCS|07183
SVLCOL718|35|TPCS|07190
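The partition unit selection logic can be sketched in a few lines of C. The additive hash below is only an assumption for illustration; the document does not specify DFILE's actual hashing algorithm.

#include <string.h>

/*
** Hash method: map a key to unit `hash % unit_cnt`. The hash shown
** here is an assumed simple ASCII hash, not necessarily DFILE's.
*/
unsigned long hash_unit( const char *key, unsigned long unit_cnt )
{
    unsigned long hash = 0;

    while ( *key != '\0' ) {
        hash = hash * 31 + (unsigned char)*key++;
    }
    return hash % unit_cnt;
}

/*
** Range method: binary-search the sorted partition definition values
** for the greatest value less than or equal to the key.
*/
int range_unit( const char *key, const char *range_def[], int def_cnt )
{
    int low = 0, high = def_cnt - 1, unit = -1;

    while ( low <= high ) {
        int mid = ( low + high ) / 2;

        if ( strcmp( range_def[ mid ], key ) <= 0 ) {
            unit = mid;                 /* candidate; look for a greater one */
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    return unit;                        /* -1 when key precedes all ranges */
}

With the range definitions A, G, M, S above, a key of HERNOR303 selects unit G, matching the example output.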
Data records can be ordered using utility dfile_sort. This includes sorting unordered records and merging already ordered records. Command line arguments can support sorting and merging of single dfiles, but control files are necessary to order records from multiple input dfiles into a single output dfile.
All sorting algorithms used in the utility support internal sorting only; no attempt is made to create temporary work files to conserve memory. Partitioning data prior to sorting allows processing to be properly scaled to the hardware. Quicksort is the utility's default algorithm. The algorithm may be specified at run time using the -a command line argument followed by an algorithm flag. Below is a list of available algorithms.
| ALGORITHM | FLAG | STRENGTH | WEAKNESS |
|---|---|---|---|
| Insertion Sort | I | reordering records that were previously sorted using the same leading key fields | sorting many unordered records |
| Shell Sort | S | similar to Insertion Sort | similar to Insertion Sort |
| Heap Sort | H | conserves memory | generally requires twice as much CPU as Quicksort |
| Merge Sort | M | consistent run time | uses much memory and generally requires 13% more CPU than Quicksort |
| Quicksort | Q | generally fastest run time | certain pre-ordered record patterns result in extreme increases of CPU time, far beyond Heap Sort and Merge Sort |
A list of key sorting fields may be specified with the -k command line argument. Field names are separated by commas (,). By default key field values are sorted in ASCII ascending order. This can be changed by appending optional flags to affected field names. Optional flags are separated by periods (.). The first flag is expected to be (A)scending or (D)escending. The last flag specifies whether data values are compared as (A)SCII, (N)umeric, or (H)igh value null. High value null is an ASCII comparison that treats zero length values as a special case: the highest possible value. This is useful for sorting expiration/termination dates. Examples are shown below.
$ dfile_sort -k acct_nbr,eff_dt -i extract.account -o sort.account
$ dfile_sort -k acct_nbr.d,eff_dt -i extract.account -m pending_account -o sort.account
$ dfile_sort -a m -k acct_nbr,expr_dt.a.h -i extract.account -o sort.account
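The following sketch shows how a single key field comparison honoring these flags might look in C; it is illustrative, not the dfile_sort source.

#include <stdlib.h>
#include <string.h>

/* Flags corresponding to the key field options described above. */
typedef enum { Cmp_ascii, Cmp_numeric, Cmp_high_value_null } cmp_method_t;

/*
** Compare one key field from two records. `descending` inverts the
** result; the high-value-null method treats a zero length value as
** greater than any non-empty value. Illustrative sketch only.
*/
int compare_field( const char *a, const char *b,
    cmp_method_t method, int descending )
{
    int result;

    switch ( method ) {
    case Cmp_numeric:
        {
            double da = strtod( a, (char **)0 );
            double db = strtod( b, (char **)0 );

            result = ( da > db ) - ( da < db );
        }
        break;
    case Cmp_high_value_null:
        if ( a[ 0 ] == '\0' || b[ 0 ] == '\0' ) {
            result = ( a[ 0 ] == '\0' ) - ( b[ 0 ] == '\0' );
            break;
        }
        /* neither value is empty: fall through to ASCII comparison */
    default:
        result = strcmp( a, b );
        break;
    }
    return descending ? -result : result;
}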
Sometimes several input dfiles need to be sorted or merged into one output dfile. This is done using a control file. Control files are specified with the -c command line argument. Specifying a control file causes the utility to ignore most command line arguments. Below are some control file examples:
( ( order-by ( sbscr_nbr ) ( cust_sys_cd ) ( RLS_EQPT_ID ) )
( merge
( dfile dfile_join.subs_act_bsa
( where ( in $KW_INDCR ( C B ) ) ) )
( dfile dfile_join.sw2_migrate_in3
( where ( in $KW_INDCR ( C B ) ) ) )
( dfile dfile_join.sw2_migrate_out3
( where ( in $KW_INDCR ( C B ) ) ) )
( output ( dfile dfile_sort.subs_act_bsa ) ) )
( ( order-by ( os_cust_id ) ( seq_nbr ) )
( merge
( dfile dfile_sort_basic.subs_rev.sbscrp_bsa
( where ( not ( = $src_geo_id ' ) ) ) )
( dfile dfile_join.subs_rev.subs_rev_src_geo_id ) )
( output ( dfile dfile_sort.rls_src_invc_chg ) ) )
( ( order-by ( snpsht_dt ) )
( sort
( dfile cat_partitions.swy_swz_trigger )
( dfile max_swy_swz_trigger
( tag ( %g current ) ) ) )
( output ( dfile dfile_sort.swy_swz_trigger ) ) )
( ( order-by ( sw_id ) ( sys_update_date ) ( effective_date ) )
( sort
( dfile dfile_cache_join.bsa_sw_owner
( where
( and
( like $sw_id "^\([0-2]?[0-9]\{1,2\}[.]\)\{3,3\}[0-2]?[0-9]\{1,2\}$" )
( <= $sys_update_date '20081231000000 )
( <= $effective_date '20081231000000 )
( > $expiration_date '20081231000000 ) ) ) ) )
( output ( dfile sw_owner ) ) )
( ( order-by ( snpsht_dt ) )
( sort
( dfile cat_partitions.sw2_trigger
( where ( > $snpsht_dt '20081231000000 ) ) ) )
( merge
( dfile sw2_pend_trigger
( tag ( %g current ) )
( where ( > $snpsht_dt '20081231000000 ) ) ) )
( output ( dfile dfile_sort.sw2_pend_trigger ) ) )
The grammar for the control file is as follows:
START → ( <order_by_section> <sort_section> <merge_section> <output_section> )
order_by_section → ( <order_by> <order_by_list> )
order_by_list → <order_by_field> <order_by_list>
| <order_by_field>
order_by_field → ( <field_name> <field_attribute> )
field_attribute → null
| ( <order_by_direction> )
| ( <compare_method> )
order_by_direction → <ascending>
| <descending>
compare_method → <ascii>
| <numeric>
| <high_value_null>
sort_section → null
| ( <sort> <specify_algorithm> <dfile_list> )
merge_section → null
| ( <merge> <dfile_list> )
output_section → ( <output> <output_dfile> )
specify_algorithm → null
| ( <algorithm> <sort_algorithm> )
sort_algorithm → <insertion_sort>
| <shell_sort>
| <heap_sort>
| <merge_sort>
| <quick_sort>
dfile_list → <input_dfile> <dfile_list>
| <input_dfile>
input_dfile → ( <dfile> <dfile_name> <input_attribute> )
output_dfile → ( <dfile> <dfile_name> <output_attribute> )
input_attribute → <dfile_tag> <record_filter>
output_attribute → <dfile_tag> <dfile_open_mode>
dfile_tag → ( <tag> <tag_assignment_list> )
tag_assignment_list → <tag_assignment> <tag_assignment_list>
| <tag_assignment>
tag_assignment → ( <tag_variable> <tag_value> )
dfile_open_mode → ( <open_mode> <open_file_mode> )
open_file_mode → <append>
| <truncate>
order_by → [Oo][Rr][Dd][Ee][Rr]-[Bb][Yy]
algorithm → [Aa][Ll][Gg][Oo][Rr][Ii][Tt][Hh][Mm]
insertion_sort → [Ii][Nn][Ss][Ee][Rr][Tt][Ii][Oo][Nn]-[Ss][Oo][Rr][Tt]
shell_sort → [Ss][Hh][Ee][Ll][Ll]-[Ss][Oo][Rr][Tt]
heap_sort → [Hh][Ee][Aa][Pp]-[Ss][Oo][Rr][Tt]
merge_sort → [Mm][Ee][Rr][Gg][Ee]-[Ss][Oo][Rr][Tt]
quick_sort → [Qq][Uu][Ii][Cc][Kk]-[Ss][Oo][Rr][Tt]
ascending → [Aa][Ss][Cc][Ee][Nn][Dd][Ii][Nn][Gg]
descending → [Dd][Ee][Ss][Cc][Ee][Nn][Dd][Ii][Nn][Gg]
ascii → [Aa][Ss][Cc][Ii][Ii]
numeric → [Nn][Uu][Mm][Ee][Rr][Ii][Cc]
high_value_null → [Hh][Ii][Gg][Hh]-[Vv][Aa][Ll][Uu][Ee]-[Nn][Uu][Ll][Ll]
sort → [Ss][Oo][Rr][Tt]
merge → [Mm][Ee][Rr][Gg][Ee]
output → [Oo][Uu][Tt][Pp][Uu][Tt]
dfile → [Dd][Ff][Ii][Ll][Ee]
dfile_name → [-_.a-zA-Z0-9]+
field_name → [-_.a-zA-Z0-9]+
tag → [Tt][Aa][Gg]
tag_variable → %[a-zA-Z]
tag_value → [-_/.a-zA-Z0-9]+
open_mode → [Oo][Pp][Ee][Nn]-[Mm][Oo][Dd][Ee]
append → [Aa][Pp][Pp][Ee][Nn][Dd]
truncate → [Tt][Rr][Uu][Nn][Cc][Aa][Tt][Ee]
record_filter → *** described in Overview section ***
There are two methods available in utility dfile_join to join records between dfiles. One of them requires a sorted dfile to be pre-loaded into a UNIX shared memory segment. This action is performed using utility dfile_cache_create. Command line argument -a followed by a hexadecimal number specifies the IPC key associated with the shared memory segment. The -i command line argument allows the input data dfile to be specified. Records can be filtered using the -y command argument followed by the name of a file containing filter rules. After dfile_join is complete, shared memory segments are removed using UNIX command ipcrm. The following is an example:
$ dfile_cache_create -a 0x0331 -i tid_cid_owner
$ ipcs -m
IPC status from as of Thu Sep 11 17:32:45 CDT 2008
T ID KEY MODE OWNER GROUP
Shared Memory:
m 1929379853 0x331 --r--r----- kcrane kcrane
Joining records between dfiles is a common operation necessary for reporting. Utility dfile_join has two join methods available. One method is referred to as Sort-Merge Join. It requires input records in each dfile to be pre-sorted by corresponding key field values. Input dfiles are read sequentially and merged based on the key field values. The other join method requires all but one dfile to be pre-sorted. The sorted dfiles are also expected to be in UNIX shared memory segments. The join process sequentially reads the unsorted dfile and performs binary searches on records in shared memory. This method is generally for joining a large dfile to small dfiles.
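The Sort-Merge Join flow can be sketched with in-memory arrays of key strings standing in for dfile reads. This is an illustration of the algorithm, not the dfile_join source.

#include <stdio.h>
#include <string.h>

/*
** Sketch of a Sort-Merge Join over pre-sorted key arrays. Matched
** input keys would have join fields copied to output; with an outer
** join, unmatched input keys are emitted as well. Illustrative only.
*/
static void sort_merge_join( const char *input[], int input_cnt,
    const char *join[], int join_cnt, int outer )
{
    int i = 0, j = 0;

    while ( i < input_cnt ) {
        int cmp = ( j < join_cnt ) ? strcmp( input[ i ], join[ j ] ) : -1;

        if ( cmp == 0 ) {
            printf( "%s joined\n", input[ i++ ] );      /* copy join fields */
        } else if ( cmp < 0 ) {
            if ( outer ) {
                printf( "%s unmatched\n", input[ i ] ); /* outer join only */
            }
            ++i;
        } else {
            ++j;                        /* join side is behind; advance it */
        }
    }
}

int main( void )
{
    const char *input[] = { "1001", "1002", "1003" };
    const char *join[]  = { "1001", "1003" };

    sort_merge_join( input, 3, join, 2, 1 );
    return 0;
}

Because both sides only move forward, each dfile is read once, which is why consistent pre-sorting of the inputs is required.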
Many command line arguments are available at run time. The -k argument is followed by one or more key field names, separated by commas (,). The -i argument is followed by the input dfile name; fields from this dfile are mapped to output without specifying them individually. The -j argument is followed by a dfile name that contains joining records. The input dfile and the dfile containing joining records are expected to have key field names in common. The -f argument is followed by a list of join dfile field names, separated by commas (,), that specifies which field values to copy to the output record. Command line argument -o followed by a dfile name specifies an output dfile. The -m argument followed by I or O is an optional argument to specify an inner or outer join operation. By default, inner joins are assumed. When outer joins are performed, the -s argument may be used followed by an output field name. This output field will contain the value - or + based on whether the input record was successfully joined to a record from the join dfile. An optional -u argument may be specified followed by F or L. This limits input records to join with only the first or last record from the join dfile having matching key field values. Input, join, and output record filter files may be specified using arguments -x, -y, and -z respectively. If the join is with a UNIX shared memory segment, the -a argument is followed by the hexadecimal number associated with the memory segment. Below are examples:
$ dfile_join -i sw2_non_bsa_swap -j swz_sbscr_hist -o subs_act_bsa \
-k sbscr_nbr,cust_sys_cd -f bsa_id,acct_type_cd,acct_sub_type_cd \
-m O -s subs_act_bsa_join_status -u L -y ${FILTER}/subs_act_bsa.cfg
$ dfile_join -i dp.trvlrev -j tid_cid_owner -o dj.trvlrev \
-k orig_tid_cid -f orig_owner_cd,orig_owner_type_cd \
-m O -s orig_join_status -u L -a 0x0331
Sometimes it is possible to combine multiple join operations into one execution of dfile_join. This requires join information to be passed to the utility in a control file at run time using command line argument -c. The following is an example:
$ dfile_join -c join.ctl -t %p=000
$ cat join.ctl
( ( input
( dfile subscription
( map-fields
( ( input geog_cd ) ( output bsa ) )
( ( input sbscrp_eff_dt ) ( output orig_eff_dt ) ) ) ) )
( join
( dfile acct_sbscrp
( key-fields ( sbscrp_id ) )
( copy-fields ( acct_nbr ) )
( outer-join )
( unique-join last-record ) )
( dfile sbscrp_phone_nbr
( key-fields ( sbscrp_id ) )
( copy-fields ( npa_nbr nxx_nbr line_nbr reason_cd ) )
( map-fields
( ( join reason_cd ) ( output sbscrp_phone_nbr_reason_cd ) ) )
( outer-join )
( unique-join last-record ) )
( dfile sbscrp_svc_plan_agrmt
( key-fields ( sbscrp_id ) )
( copy-fields ( svc_plan_cd ) )
( map-fields
( ( join svc_plan_cd ) ( output pkg_svc_name ) ) )
( where ( = $svc_plan_level_cd P ) )
( outer-join )
( unique-join last-record ) )
( dfile off_owner_cd
( key-fields ( bsa_id ) )
( copy-fields ( owner_cd ) )
( map-fields
( ( input geog_cd ) ( join bsa_id ) ) )
( outer-join ( status-field off_owner_cd_join_status ) )
( ipc-key 0x0333 ) ) )
( output
( dfile telephone_dim ) ) )
In the above example, all joins that do not specify an ipc-key will use the Sort-Merge Join method. It is necessary that these joins use the same key field, sbscrp_id. As mentioned earlier, Sort-Merge Join operations require records to be consistently sorted; in this case, records must be ordered by sbscrp_id. Joins with ipc-key may have different key fields since records are pre-loaded in a UNIX shared memory segment for searching. Since this example uses bsa_id as the key field for the shared memory segment, records in shared memory are expected to be sorted by bsa_id.
An additional feature available in control files is the ability to map fields between dfiles when field names do not match. In the example input entry, input fields geog_cd and sbscrp_eff_dt are mapped to output fields bsa and orig_eff_dt respectively. Other join entries demonstrate mapping fields between input and join dfiles as well as join and output dfiles.
The grammar for the control file is as follows:
START → ( <input_section> <join_section> <output_section> )
input_section → ( <input> ( <dfile> <dfile_name> <input_dfile_options> ) )
join_section → ( <join> <dfile_join_list> )
output_section → ( <output> ( <dfile> <dfile_name> <output_dfile_options> ) )
input_dfile_options → <null>
| <map_fields_option>
| <record_filter>
dfile_join_list → <join_dfile> <dfile_join_list>
| <join_dfile>
join_dfile → ( <dfile> <dfile_name> <define_key_fields> <join_dfile_options> )
join_dfile_options → <null>
| <map_fields_option>
| <record_filter>
| <copy_fields_option>
| <join_method_option>
| <unique_join_option>
| <ipc_key_option>
map_fields_option → ( <map_fields> <field_map_list> )
field_map_list → <field_map> <field_map_list>
| <field_map>
field_map → ( <define_field_map> <define_field_map> )
define_field_map → ( <record_source> <field_name> )
record_source → <input>
| <join>
| <output>
define_key_fields → ( <key_fields> ( <field_list> ) )
copy_fields_option → ( <copy_fields> ( <field_list> ) )
field_list → <field_name> <field_list>
| <field_name>
join_method_option → ( <inner_join> )
| ( <outer_join> )
unique_join_option → ( <unique_join> <unique_join_selection> )
unique_join_selection → <first_record>
| <last_record>
output_dfile_options → <null>
| <record_filter>
ipc_key_option → ( <ipc_key> <hex_number> )
input → [Ii][Nn][Pp][Uu][Tt]
join → [Jj][Oo][Ii][Nn]
output → [Oo][Uu][Tt][Pp][Uu][Tt]
dfile → [Dd][Ff][Ii][Ll][Ee]
dfile_name → [-_.a-zA-Z0-9]+
field_name → [-_.a-zA-Z0-9]+
copy_fields → [Cc][Oo][Pp][Yy][-][Ff][Ii][Ee][Ll][Dd][Ss]
key_fields → [Kk][Ee][Yy][-][Ff][Ii][Ee][Ll][Dd][Ss]
inner_join → [Ii][Nn][Nn][Ee][Rr][-][Jj][Oo][Ii][Nn]
outer_join → [Oo][Uu][Tt][Ee][Rr][-][Jj][Oo][Ii][Nn]
unique_join → [Uu][Nn][Ii][Qq][Uu][Ee][-][Jj][Oo][Ii][Nn]
first_record → [Ff][Ii][Rr][Ss][Tt][-][Rr][Ee][Cc][Oo][Rr][Dd]
last_record → [Ll][Aa][Ss][Tt][-][Rr][Ee][Cc][Oo][Rr][Dd]
ipc_key → [Ii][Pp][Cc][-][Kk][Ee][Yy]
hex_number → 0[Xx][0-9a-fA-F]+
record_filter → *** described in Overview section ***
Typical data summarization operations can be performed using dfile_agfunc. This utility offers aggregate functions available in SQL GROUP BY queries. Command line argument -k followed by a list of field names specifies key fields containing data values for grouping records during summarization. This utility expects records to be pre-sorted by the fields listed with the -k argument. If the -k argument is not specified at run time, the utility summarizes all records in the data file as one group. Command line argument -i followed by a dfile name specifies the input dfile, and the -o argument is followed by a dfile name to specify the output dfile. Input records may be filtered by providing a -x argument followed by a UNIX file name containing a record filter; output records can be filtered by specifying a -z argument followed by a UNIX file name containing record filter rules. Below is a table describing each aggregate function.
| FUNCTION | COMMAND LINE USAGE | REMARKS |
|---|---|---|
| average | -a sfield,rfield[,%g] | sfield is the input field name and rfield is the output field name. An optional printf format may be specified. |
| count | -c rfield | rfield is the output field name. |
| ASCII minimum | -m sfield,rfield | sfield is the input field name and rfield is the output field name. |
| ASCII maximum | -M sfield,rfield | sfield is the input field name and rfield is the output field name. |
| numeric minimum | -n sfield,rfield | sfield is the input field name and rfield is the output field name. |
| numeric maximum | -N sfield,rfield | sfield is the input field name and rfield is the output field name. |
| sum | -s sfield,rfield[,%g] | sfield is the input field name and rfield is the output field name. An optional printf format may be specified. |
$ dfile_agfunc -i dfile_sort.swy_swz_trigger -o max_swy_swz_trigger -M snpsht_dt,snpsht_dt
$ cat bsa_swap_dual_record.ctl
( where ( = $record_count '2 ) )
$ dfile_agfunc -i dfile_sort_basic.sw2_rule_engn_outpt_acty \
-o dfile_agfunc.sw2_dual_records \
-k sbscr_nbr,cust_sys_cd -c record_count -z bsa_swap_dual_record.ctl
$ dfile_agfunc -i filter_nonoff_trvl_usage.postpaid_trvl_usage_3g \
-o dfile_agfunc.postpaid_trvl_usage_3g \
-k sw_id,orig_owner_cd,billing_owner_cd,bill_year,bill_month,sys_source_cd,cycle_cd \
-s usage_qty,usage_qty,%.0f -s chg,chg,%.2f
The examples above demonstrate usage for the utility. In the first example, the greatest snpsht_dt value in input dfile dfile_sort.swy_swz_trigger is written to output dfile max_swy_swz_trigger. The second example identifies sbscr_nbr and cust_sys_cd values having a record count of two. The last example calculates sub-totals for usage_qty and chg.
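Because input records are pre-sorted by the -k fields, dfile_agfunc can emit a summary row whenever the key value changes. The following C sketch illustrates that group-change pattern for a sum; it is not the utility's source.

#include <stdio.h>
#include <string.h>

int main( void )
{
    /* pre-sorted (key, quantity) pairs standing in for dfile records */
    static const struct { const char *key; double qty; } rec[] = {
        { "A", 1.0 }, { "A", 2.5 }, { "B", 4.0 }, { "C", 1.0 }, { "C", 1.0 }
    };
    const int rec_cnt = (int)( sizeof( rec ) / sizeof( rec[ 0 ] ) );
    const char *group = rec[ 0 ].key;
    double sum = 0.0;
    int ndx;

    for ( ndx = 0; ndx < rec_cnt; ++ndx ) {
        if ( strcmp( rec[ ndx ].key, group ) != 0 ) {
            (void) printf( "%s|%.2f\n", group, sum );   /* key changed: emit group */
            group = rec[ ndx ].key;
            sum = 0.0;
        }
        sum += rec[ ndx ].qty;
    }
    (void) printf( "%s|%.2f\n", group, sum );           /* final group */
    return 0;
}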
Sometimes it is necessary to purge duplicate records based on key field values. One common example involves applying record updates to a file: after delta records are sorted and merged with existing records, out-of-date records must be purged. Purging may be performed using utility dfile_unique. Records in this example are expected to be sorted by key field values plus the latest date on which the record was originally created or its field values changed. The utility retains only the last record of each record group per composite key value. The following is an example:
$ dfile_unique -k ban,subscriber_no -i dfile_sort.subscriber_dim -o subscriber_dim -t %p=000
In the above example, input records are expected to be pre-sorted with ban and subscriber_no as the first two fields in the sort key. Only the last input record per unique ban and subscriber_no value combination will be written to the output dfile.
All DFILE Tools utilities use the DFILE Library as a common software library to read and write data files. This library contains a set of C language functions that serve as the API between application programs and data files.
#include "dfile.h"
int dfile_cfg( dfile_cfg_t *dfile_cfg, const char *dfile_name );
dfile_t *dfile_read_open( const dfile_cfg_t *cfg,
dfile_bind_t *program_bind, unsigned short program_bind_count,
const dfile_tag_t *file_name_tag, unsigned short file_name_tag_count,
unsigned short blocks_per_buffer_count, unsigned short buffer_count );
int dfile_read( dfile_t *dfile );
int dfile_read_close( dfile_t *dfile );
dfile_t *dfile_write_open( const dfile_cfg_t *cfg,
const dfile_bind_t *program_bind, unsigned short program_bind_count,
const dfile_tag_t *file_name_tag, unsigned short file_name_tag_cnt,
unsigned short blocks_per_buffer_count, unsigned short buffer_count,
dfile_open_mode_t open_mode );
int dfile_write( dfile_t *dfile );
int dfile_write_close( dfile_t *dfile );
The dfile_cfg() function gets configuration file information associated with the dfile_name entry. The return value is zero when successful and -1 when a failure occurs. Argument dfile_cfg is a pointer to structure dfile_cfg_t, which contains the following:
typedef struct {
char field_separator;
char record_separator;
char separator_escape;
} dfile_rec_t;
typedef struct {
const char *field_name;
char **field_buffer;
size_t *field_length;
} dfile_bind_t;
typedef struct {
const char *dfile_name;
dfile_rec_t rec_attribute;
const char *record_layout_path;
const char **field;
dfile_bind_t *bind;
unsigned short bind_cnt;
void *bind_hash_table;
const char *data_file_path;
} dfile_cfg_t;
dfile_read_open() creates and returns a structure to be used when calling dfile_read() to read records. Function argument cfg is a pointer to a structure that will usually be populated by previously calling function dfile_cfg(). program_bind is an array of structures used to bind C program variables with parsed field data in DFile buffers. If program_bind is a null pointer, the bind structure populated during dfile_cfg(), containing all fields in the record, will be used. Argument program_bind_count is the number of C program variables in the program_bind array. file_name_tag is an array of the following structure:
typedef struct {
const char *tag;
const char *tag_value;
} dfile_tag_t;
Argument file_name_tag_count is the number of entries in the file_name_tag array. Function argument blocks_per_buffer_count is the number of file system blocks accessed per I/O operation. Argument buffer_count defines the number of I/O buffers to be used during processing. A value greater than one causes I/O and compression operations to be threaded.
Function dfile_read() causes C program variables to contain the values of the next sequentially parsed record. Its dfile argument corresponds to the value returned by dfile_read_open(). The dfile_read_close() function closes the open file and releases I/O buffer memory. Its dfile argument corresponds to the value returned by dfile_read_open().
dfile_write_open() creates and returns a structure to be used when calling dfile_write() to write records. The first seven arguments are consistent with the first seven arguments of dfile_read_open(). open_mode can be Dfile_append or Dfile_trunc depending on whether an existing file is to be appended to or truncated.
Function dfile_write() causes values contained in C program variables to be formatted into a data record and written. Its dfile argument corresponds to the value returned by dfile_write_open(). The dfile_write_close() function flushes I/O buffers to disk, closes the output file, and releases I/O buffer memory. Its dfile argument corresponds to the value returned by dfile_write_open().
Upon successful completion, most functions return 0. Failures are identified by return code -1. The exceptions, dfile_read_open() and dfile_write_open(), return valid address pointers when successful; otherwise null address pointers are returned. The end of data condition that occurs in dfile_read() can be distinguished from an error by checking structure variable dfile->error to verify it contains the value Dfile_all_data_processed.
C program variables are bound to record fields using structure dfile_bind_t. Variable field_name points to a string containing a field name. This field name is not case sensitive and should reference a field defined in the record layout configuration file. The address of the program variable used to reference field data is assigned to field_buffer. When field_length is optionally set with the address of a program variable, dfile_read() populates that variable with the length of the parsed field value. Setting the program variable assigned to field_length before calling dfile_write() eliminates the need to null terminate values referenced by the field_buffer variable. Field length variables also improve processing efficiency when field lengths are known. Variable field_offset contains a field's offset into a record layout; the first field in a record has an offset of zero. The following is an example of binding C program variables without field lengths:
static char *sbscrp_id, *svc_plan_cd, *eff_dt, *expr_dt;
static char *cntrct_start_dt, *cntrct_end_dt;
static dfile_bind_t sali_field[] = {
{ "sbscrp_id", &sbscrp_id },
{ "svc_plan_cd",&svc_plan_cd },
{ "eff_dt", &eff_dt },
{ "expr_dt", &expr_dt },
{ "cntrct_start_dt", &cntrct_start_dt },
{ "cntrct_end_dt", &cntrct_end_dt, }
};
const unsigned short sali_field_cnt = sizeof( sali_field ) / sizeof( dfile_bind_t );
The following is an example of binding C program variables with field lengths:
static char *sbscrp_id, *svc_plan_cd, *eff_dt, *expr_dt;
static char *cntrct_start_dt, *cntrct_end_dt;
static size_t sbscrp_id_len, svc_plan_cd_len, eff_dt_len, expr_dt_len;
static size_t cntrct_start_dt_len, cntrct_end_dt_len;
static dfile_bind_t sali_field[] = {
{ "sbscrp_id", &sbscrp_id, &sbscrp_id_len },
{ "svc_plan_cd",&svc_plan_cd, &svc_plan_cd_len },
{ "eff_dt", &eff_dt, &eff_dt_len },
{ "expr_dt", &expr_dt, &expr_dt_len },
{ "cntrct_start_dt", &cntrct_start_dt, &cntrct_start_dt_len },
{ "cntrct_end_dt", &cntrct_end_dt, &cntrct_end_dt_len }
};
const unsigned short sali_field_cnt = sizeof( sali_field ) / sizeof( dfile_bind_t );
The DFile library's I/O buffering system can process ASCII or GZIP formatted data. Its determination to apply data compression is based on the opened file name at run time. If a file name is suffixed with '.gz', GZIP compression routines are applied. This results in compressed data files that are compatible with GNU's gzip utility.
If I/O or compress/uncompress operations are application bottlenecks, an additional execution thread dedicated to I/O and compression may be started by opening files with multiple buffers. Data is passed between threads using circular buffer queues. Enough buffers should be allocated to minimize contention. Single buffer processing is slightly more efficient since there is no thread processing overhead; typically the threading overhead is not worthwhile unless writing compressed files.
When opening a DFile, an application can control buffer size by specifying a block multiple. A block is based on the amount of data the UNIX file system containing the DFile prefers to communicate. Data per record cannot exceed the buffer size, so applications processing large records should choose a block multiple that creates buffers larger than the largest expected record. For example, if the file system prefers 8 KB blocks, a block multiple of four yields 32 KB buffers, which can accommodate records up to 32 KB. Also, there is slight processing overhead associated with buffer rotation; larger buffers incur fewer rotations.
The following is an example of reading the UNIX password file.
$ grep passwd dfile.cfg
ipasswd:\::10::${PARM}/passwd.cfg:/etc/passwd
opasswd::::${PARM}/passwd.cfg:%d/passwd.dat
$ cat ${PARM}/passwd.cfg
user_name
password
user_id
group_id
comment
home
shell
$ cat passwd.c
#include <stdio.h>
#include "tbox.h"
#include "dfile.h"

int main( void )
{
    static char *user_name, *pass_word, *user_id, *group_id;
    static char *comment, *home_directory, *default_shell;
    static dfile_bind_t field[] = {
        { "user_name", &user_name },
        { "password", &pass_word },
        { "user_id", &user_id },
        { "group_id", &group_id },
        { "comment", &comment },
        { "home", &home_directory },
        { "shell", &default_shell }
    };
    const unsigned short field_cnt = sizeof( field ) / sizeof( dfile_bind_t );
    static dfile_tag_t output_tag[] = {
        { "%d", "/tmp" }
    };
    const unsigned short output_tag_cnt = sizeof( output_tag ) / sizeof( dfile_tag_t );
    dfile_cfg_t cfg;
    dfile_t *input_dfile, *output_dfile;
    const unsigned short blocks_per_buffer_cnt = 1;
    const unsigned short buffer_cnt = 1;

    if ( dfile_cfg( &cfg, "ipasswd" ) == -1 ) {
        return 5;
    }
    input_dfile = dfile_read_open( &cfg, field, field_cnt, (dfile_tag_t *)0,
        (unsigned short)0, blocks_per_buffer_cnt, buffer_cnt );
    if ( input_dfile == (dfile_t *)0 ) {
        return 10;
    }
    if ( dfile_cfg( &cfg, "opasswd" ) == -1 ) {
        return 15;
    }
    output_dfile = dfile_write_open( &cfg, field, field_cnt, output_tag,
        output_tag_cnt, blocks_per_buffer_cnt, buffer_cnt, Dfile_trunc );
    if ( output_dfile == (dfile_t *)0 ) {
        return 20;
    }
    (void) puts( "USER NAME USER ID GROUP ID COMMENT HOME DIRECTORY" );
    (void) puts( "--------- ------- --------- --------------------- --------------------" );
    while ( dfile_read( input_dfile ) == 0 ) {
        (void) printf( "%-8.8s%10.10s%11.11s %-24.24s%-16.16s\n",
            user_name, user_id, group_id, comment, home_directory );
        if ( dfile_write( output_dfile ) == -1 ) {
            return 25;
        }
    }
    if ( input_dfile->error != Dfile_all_data_processed ) {
        (void) fputs( "Failed to read all records.\n", stderr );
        return 30;
    }
    (void) printf( "\nrecord count %lu\n", input_dfile->file_rec_cnt );
    (void) dfile_write_close( output_dfile );
    (void) dfile_read_close( input_dfile );
    return 0;
}
$ ./passwd
USER NAME USER ID GROUP ID COMMENT HOME DIRECTORY
--------- ------- --------- --------------------- --------------------
root 0 0 Charlie & /root
...
$ head -1 /etc/passwd
root:*:0:0:Charlie &:/root:/bin/csh
$ od -c -x /tmp/passwd.dat
0000000 004 r o o t 001 * 001 0 001 0 \t C h a r
0000020 l i e & 005 / r o o t \b / b i n
0000040 / c s h
...
Generic programs allow specific DFiles to be chosen at run time. This flexibility requires extra bind structure coding. The following is a simple example:
#include <stdio.h>
#include <stdlib.h>
#include "tbox.h"
#include "dfile.h"
#include "dcat.h"

/*
** This program displays dfile data.
*/
int main( int argc, char **argv )
{
    const unsigned short blocks_per_buffer_cnt = 2;
    const unsigned short buffer_cnt = 1;
    const char *dfile_name;
    char field_separator, heading_flag;
    dfile_t *dfile;
    dfile_tag_t *tag_tbl;
    unsigned short tag_tbl_cnt;
    dfile_cfg_t cfg;
    dfile_bind_t *bind_tbl;
    unsigned short ndx;

    if ( get_args( argc, argv, &dfile_name, &tag_tbl, &tag_tbl_cnt,
        &heading_flag, &field_separator ) == -1 ) {
        return 10;
    }
    if ( dfile_cfg( &cfg, dfile_name ) == -1 ) {
        return 20;
    }
    if ( heading_flag == 'Y' ) {
        /*
        ** Print field name heading.
        */
        for ( ndx = 0; ndx < cfg.field_cnt - 1; ++ndx ) {
            (void) fputs( cfg.field[ ndx ], stdout );
            (void) fputc( field_separator, stdout );
        }
        (void) fputs( cfg.field[ ndx ], stdout );
        (void) fputc( '\n', stdout );
    }
    dfile = dfile_read_open( &cfg, (dfile_bind_t *)0, (unsigned short)0,
        tag_tbl, tag_tbl_cnt, blocks_per_buffer_cnt, buffer_cnt );
    if ( dfile == (dfile_t *)0 ) {
        return 30;
    }
    bind_tbl = dfile->bind;
    while ( dfile_read( dfile ) == 0 ) {
        /*
        ** Print field values.
        */
        for ( ndx = 0; ndx < cfg.field_cnt - 1; ++ndx ) {
            (void) fputs( *bind_tbl[ ndx ].field_buffer, stdout );
            (void) fputc( field_separator, stdout );
        }
        (void) fputs( *bind_tbl[ ndx ].field_buffer, stdout );
        (void) fputc( '\n', stdout );
    }
    if ( dfile->error != Dfile_all_data_processed ) {
        fput_src_code( __FILE__, __LINE__, stderr );
        (void) fputs( "Failed to read all data.\n", stderr );
        return 40;
    }
    (void) dfile_read_close( dfile );
    return 0;
}
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "tbox.h"
#include "dfile.h"
#include "dcat.h"

static void print_usage( const char * );

/*
** This function processes the command line arguments.
*/
int get_args( int argc, char * const argv[], const char **dfile_name,
    dfile_tag_t **tag_tbl, unsigned short *tag_tbl_cnt,
    char *heading_flag, char *field_separator )
{
    int ch;
    extern char *optarg;

    assert( argv != (char * const *)0 );
    assert( dfile_name != (const char **)0 );
    assert( tag_tbl != (dfile_tag_t **)0 );
    assert( tag_tbl_cnt != (unsigned short *)0 );
    assert( heading_flag != (char *)0 );
    assert( field_separator != (char *)0 );
    *dfile_name = (const char *)0;
    *tag_tbl = (dfile_tag_t *)0;
    *tag_tbl_cnt = (unsigned short)0;
    *heading_flag = 'N';
    *field_separator = '|';
    while ( ( ch = getopt( argc, argv, "F:ht:" ) ) != EOF ) {
        switch ( ch ) {
        case 'h':
            *heading_flag = 'Y';
            break;
        case 'F':
            *field_separator = *optarg;
            break;
        case 't':
            if ( parse_tag( tag_tbl, tag_tbl_cnt, optarg ) == -1 ) {
                return -1;
            }
            break;
        default:
            print_usage( argv[ 0 ] );
            return -1;
        }
    }
    if ( optind >= argc ) {
        fput_src_code( __FILE__, __LINE__, stderr );
        (void) fputs( "Must specify input dfile name.\n", stderr );
        print_usage( argv[ 0 ] );
        return -1;
    }
    *dfile_name = argv[ optind ];
    return 0;
}
static void print_usage( const char *exec_name )
{
    (void) fputs( "usage: ", stderr );
    (void) fputs( exec_name, stderr );
    (void) fputs( " [-F]", stderr );
    (void) fputs( " [-h]", stderr );
    (void) fputs( " [-t %x=abc]", stderr );
    (void) fputc( '\n', stderr );
    (void) fputs( "\t-F -> field separator (default |)\n", stderr );
    (void) fputs( "\t-h -> field heading\n", stderr );
    (void) fputs( "\t-t -> DFile path tags\n", stderr );
}
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include "tbox.h"
#include "dfile.h"
#include "dcat.h"

int parse_tag( dfile_tag_t **tag_tbl, unsigned short *tag_tbl_cnt, char *tag_str )
{
    size_t alloc_size;
    dfile_tag_t *new;

    assert( tag_tbl != (dfile_tag_t **)0 );
    assert( tag_tbl_cnt != (unsigned short *)0 );
    assert( tag_str != (char *)0 );
    /*
    ** This function expects tag_str to contain a tag in the form of
    ** %x=value.
    */
    if ( tag_str[ 0 ] != '%' || tag_str[ 2 ] != '=' ) {
        fput_src_code( __FILE__, __LINE__, stderr );
        (void) fputs( "tag [", stderr );
        (void) fputs( tag_str, stderr );
        (void) fputs( "] is not in correct format.\n", stderr );
        return -1;
    }
    /*
    ** Replace '=' with null character.
    */
    tag_str[ 2 ] = (char)0;
    alloc_size = sizeof( dfile_tag_t ) * ( (size_t)*tag_tbl_cnt + (size_t)1 );
    new = (dfile_tag_t *)realloc( *tag_tbl, alloc_size );
    if ( new == (dfile_tag_t *)0 ) {
        unix_error( "realloc() failed", __FILE__, __LINE__ );
        return -1;
    }
    new[ *tag_tbl_cnt ].tag = tag_str;
    new[ *tag_tbl_cnt ].tag_value = &tag_str[ 3 ];
    *tag_tbl = new;
    ++*tag_tbl_cnt;
    return 0;
}
C source files that reference dfile routines and data structures should include the following header file:
#include "dfile.h"
The command for the link step to create an executable must include the following arguments:
-ldfile -ltbox -lz -lpthread
Software library Dfile is dependent on library Tbox. Short for tool box, library Tbox contains general purpose routines needed for common programming tasks such as data sorting and searching.
Software libraries dependent on library Dfile are Dfile_dynamic, Dfile_utility and Where. Library Dfile_dynamic contains helpful routines for processing DFiles without field names being hard coded in C programs. Library Dfile_utility contains common routines used in DFILE Tools utilities described earlier. Also, library Where is used in most DFILE Tools utilities. This library allows records to be filtered (discarded) during read and write operations. Library Where contains an interpreter to evaluate conditional expressions and is dependent on library Sexpr. Library Sexpr parses S-expressions and loads results into a tree structure.