CHOP

This command splits an IPM file into multiple smaller files, making large files easier to handle.

Syntax

$ cardak help chop
usage: cardak chop [<flags>] <file>

Create smaller physical files from an IPM file

Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
-v, --verbose Add more information displayed on some commands.
--mono Suppress color on output.
--ignore Try to ignore some errors and continue processing the file
-W, --width Ignore small terminal width check and force execution
-z, --silent Suppress all output (banner, headers, summary) except the results. Especially useful for the DESCRIBE command piped to a search utility like fzf
-m, --max=100000 Maximum number of records for each generated file

Args:
<file> File name to chop

Description

With this command we can split big files into smaller ones that are easier to handle, particularly with commands like OPEN, whose memory consumption grows with the number of records and can become a problem.

As an example, a file with 50,000 records typically consumes a little more than 1 GB of RAM, and one with 100,000 records consumes about 2.2 GB of RAM. Files with more records quickly become impractical to open, but with this command we can create smaller files that can be opened without problems.
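From the two figures above, memory use grows roughly linearly at about 22 KB per record. A minimal sketch of that estimate (the per-record cost is derived from those two data points for illustration, not a value reported by cardak):

```python
# Rough linear model of OPEN memory use, fitted to the two data points
# quoted above (100,000 records -> ~2.2 GB of RAM).
# The per-record cost is an illustrative assumption, not a cardak value.
KB_PER_RECORD = 22  # approx: 2.2 GB / 100,000 records

def estimated_ram_gb(records: int) -> float:
    """Estimate the RAM (in GB) needed to OPEN a file with `records` records."""
    return records * KB_PER_RECORD / 1_000_000

print(estimated_ram_gb(100_000))  # → 2.2
```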

Most of the commands are optimized to process records as they are read, so memory usage is constant and independent of the number of records. But some commands need to load the full file into memory: the OPEN command, for example, or exporting records in CSV format, where the list of all present fields must be determined in advance.
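The CSV case can be sketched as follows: because different records may carry different fields, the exporter must scan every record to build the header before it can write a single row (the dict-per-record representation here is a hypothetical illustration, not cardak's actual code):

```python
import csv
import io

def export_csv(records, out):
    """Write dict-like records as CSV. The header must list every field
    present in ANY record, so all records are needed up front (or a
    second pass over the file) rather than streaming them one by one."""
    fields = []
    for rec in records:          # first pass: collect the union of fields
        for key in rec:
            if key not in fields:
                fields.append(key)
    writer = csv.DictWriter(out, fieldnames=fields, restval="")
    writer.writeheader()
    writer.writerows(records)    # second pass: emit the rows

# Hypothetical records with differing field sets
records = [{"mti": "0100", "pan": "4111"}, {"mti": "0110", "rc": "00"}]
buf = io.StringIO()
export_csv(records, buf)
# header line is "mti,pan,rc" — it includes fields from both records
```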

This command reads the input file, estimates the number of records (only an estimate, because computing the real value would require reading the full file, which can take much longer), and proceeds to read the records. When it encounters a file trailer (of a logical file) or reaches the maximum number of records specified for the output files, it writes an output file. Each output file name consists of the original file name plus a sequential number.
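The splitting logic described above can be sketched like this (assuming a hypothetical one-record-per-item input and a literal "TRAILER" marker; cardak's real IPM parsing is more involved):

```python
def chop(records, base_name, max_records=100_000):
    """Split a stream of records into numbered chunks. A chunk is closed
    when a logical-file trailer is seen or when max_records is reached,
    mirroring the behavior described above (illustrative sketch)."""
    chunks, current = [], []
    for rec in records:
        current.append(rec)
        if rec == "TRAILER" or len(current) >= max_records:
            # output name = original name plus a sequential number
            chunks.append((f"{base_name}.{len(chunks) + 1:03d}", current))
            current = []
    if current:  # flush any remaining records
        chunks.append((f"{base_name}.{len(chunks) + 1:03d}", current))
    return chunks

parts = chop(["r1", "r2", "TRAILER", "r3", "r4", "r5"], "big.ipm", max_records=2)
# → [("big.ipm.001", ["r1", "r2"]), ("big.ipm.002", ["TRAILER"]),
#    ("big.ipm.003", ["r3", "r4"]), ("big.ipm.004", ["r5"])]
```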

We can use the --max (-m) flag to set the maximum number of records in each output file. If this value is not indicated, the program creates files with at most 100,000 records each (roughly 2 GB of RAM when opened), which we consider acceptable, but users are free to specify any other value if, for example, they have a limited amount of memory.