How to check the availability of a Progress database record with the values stored in a .csv file as input? - progress-4gl

I am uploading a CSV file of records to check if these records are available in a specific Progress database table.
How do I proceed?

Assuming a lot of things here since you're not specifying very much.
Assuming we have a file containing animal id's, one per row:
file.csv
=========
1
2
3
Assuming we have a database table called animal with fields id and animalName, we can do this (a very naive approach: assuming the input data is well formatted, no error checking, etc.):
/* Define a temp-table to store the file data in */
DEFINE TEMP-TABLE ttAnimal NO-UNDO
    FIELD id AS INTEGER.

/* Input from file */
INPUT FROM VALUE("c:\temp\file.csv").
REPEAT:
    /* Assumption: the data is clean and well formatted! */
    CREATE ttAnimal.
    IMPORT ttAnimal.
END.
INPUT CLOSE.
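One wrinkle with the CREATE-then-IMPORT pattern: when IMPORT hits end-of-file it raises the ENDKEY condition, and because the temp-table is NO-UNDO the CREATE from that final iteration is not backed out, so an empty ttAnimal record (id = 0) can be left behind. A minimal guard, assuming a real id is never 0:

/* Remove the empty record left over from the final REPEAT iteration */
FOR EACH ttAnimal WHERE ttAnimal.id = 0:
    DELETE ttAnimal.
END.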
/* For each animal id read from the file: locate the database record
   and display the name */
FOR EACH ttAnimal:
    FIND FIRST animal NO-LOCK WHERE animal.id = ttAnimal.id NO-ERROR.
    IF AVAILABLE animal THEN DO:
        DISP animal.animalName.
    END.
END.
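Since the question is about checking availability, you probably also want to see which ids from the file are missing. A minimal variation of the loop above that flags missing ids (the "** not found **" text is just illustrative):

FOR EACH ttAnimal:
    FIND FIRST animal NO-LOCK WHERE animal.id = ttAnimal.id NO-ERROR.
    /* The conditional expression only touches the animal buffer
       when a record was actually found */
    DISPLAY
        ttAnimal.id
        (IF AVAILABLE animal THEN animal.animalName ELSE "** not found **")
            FORMAT "x(20)" LABEL "Name".
END.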

Related

Load the temp table data into a text file using temp table's handle

I have created a temp-table. I want to dump all the data of the temp-table, including field names, to a text file using the temp-table's handle. What can I do?
Using the default buffer handle of the temp-table (hTable:DEFAULT-BUFFER-HANDLE), you can then loop through the fields of the table:
DO i = 1 TO hBufferHandle:NUM-FIELDS:
    ...
END.
You would do that twice, once to output the field names as headers to your text file, and then once for each record of the temp table to export the values.
You will have to handle things like extents.
You will have to deal with data types, making decisions on what to do, depending on what you have in your table.
In theory it's not very complex code, and you could write a simple reusable library to do the work.
Use the documentation to find the full syntax.
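A minimal sketch of that approach, assuming hTable already points at a populated temp-table. It ignores extents and type-specific formatting, and the semicolon delimiter and file name are arbitrary choices:

DEFINE VARIABLE hTable  AS HANDLE  NO-UNDO.
DEFINE VARIABLE hBuffer AS HANDLE  NO-UNDO.
DEFINE VARIABLE hQuery  AS HANDLE  NO-UNDO.
DEFINE VARIABLE i       AS INTEGER NO-UNDO.

hBuffer = hTable:DEFAULT-BUFFER-HANDLE.

OUTPUT TO VALUE("ttdump.txt").

/* First pass: field names as the header row */
DO i = 1 TO hBuffer:NUM-FIELDS:
    PUT UNFORMATTED hBuffer:BUFFER-FIELD(i):NAME.
    IF i < hBuffer:NUM-FIELDS THEN PUT UNFORMATTED ";".
END.
PUT SKIP.

/* Second pass: one output row per temp-table record */
CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuffer).
hQuery:QUERY-PREPARE("FOR EACH " + hBuffer:NAME).
hQuery:QUERY-OPEN().
REPEAT:
    hQuery:GET-NEXT().
    IF hQuery:QUERY-OFF-END THEN LEAVE.
    DO i = 1 TO hBuffer:NUM-FIELDS:
        PUT UNFORMATTED STRING(hBuffer:BUFFER-FIELD(i):BUFFER-VALUE).
        IF i < hBuffer:NUM-FIELDS THEN PUT UNFORMATTED ";".
    END.
    PUT SKIP.
END.
hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
OUTPUT CLOSE.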

load data to db2 in a single row (cell)

I need to load an entire file (containing only ASCII text) into the database (DB2 Express edition). The table has only two columns (ID, TEXT). The ID column is the PK, with auto-generated data, whereas TEXT is CLOB(5); I have no idea about the parameter 5, it was entered by default in Data Studio.
Now I need to use the load utility to save a text file (containing 5 MB of data) in a single row, namely in the column TEXT. I do not want the text to be broken into different rows.
Thanks for your answer in advance!
Firstly, you may want to redefine your table: CLOB(5) means you expect 5 bytes in the column, which is hardly enough for a 5 MB file. After that you can use the DB2 IMPORT or LOAD commands with the lobsinfile modifier.
Create a text file and place LOB Location Specifiers (LLS) for each file you want to import, one per line.
An LLS is a way to tell IMPORT where to find LOB data. It has this format: <file path>[.<offset>.<length>/], e.g. /tmp/lobsource.dta.0.100/ to indicate that the first 100 bytes of the file /tmp/lobsource.dta should be loaded into the particular LOB column. Notice also the trailing slash. If you want to import the entire file, skip the offset and length part. LLSes are placed in the input file instead of the actual data for each row and LOB column.
So, for example:
echo "/home/you/yourfile.txt" > /tmp/import.dat
Since you said the IDs will be auto-generated, you don't need to enter them in the input file; just don't forget to use the appropriate command modifier: identitymissing or generatedmissing, depending on how the ID column is defined.
Now you can connect to the database and run the IMPORT command, e.g.
db2 "import from /tmp/import.dat of del
modified by lobsinfile identitymissing
method p (1)
insert into yourtable (yourclobcolumn)"
I split the command onto multiple lines for readability, but you should type it on a single line.
method p (1) means parse the input file and read the column in position 1.
More info in the manual.

OpenEdge - Multiple RELATION-FIELDS in DataSet definition

Using OpenEdge 11.2 and Progress Developer Studio.
I'm using several TEMP-TABLE definitions (each in a separate include file) to form a DataSet. Everything looks OK if I use only one RELATION-FIELDS pair for a single data relation. As soon as I add another RELATION-FIELDS pair to the definition and drop the include file on the form (or import the schema via the "Import Schema from File" button), the DataSet shows a duplicate child table with no columns. For simplicity's sake and testing, I've set up three test files:
tt1.i - First TEMP-TABLE:
/* Temp Table 1 */
DEFINE TEMP-TABLE tt1
    FIELD tt_test AS CHARACTER
    FIELD tt_rel_field_1 AS INTEGER
    FIELD tt_rel_field_2 AS INTEGER
    INDEX tt_idx tt_rel_field_1 tt_rel_field_2.
tt2.i - Second TEMP-TABLE:
/* Temp Table 2 */
DEFINE TEMP-TABLE tt2
    FIELD tt_test2 AS CHARACTER
    FIELD tt_rel_field_1 AS INTEGER
    FIELD tt_rel_field_2 AS INTEGER
    INDEX tt_idx tt_rel_field_1 tt_rel_field_2.
dsTest.i - Dataset definition:
/* Dataset Definition */
{tt1.i}
{tt2.i}
DEFINE DATASET dsTest FOR tt1, tt2
    DATA-RELATION drTest FOR tt1, tt2
    RELATION-FIELDS (
        tt_rel_field_1, tt_rel_field_1,
        tt_rel_field_2, tt_rel_field_2
    ).
Screenshot of what happens when I drop dsTest.i on the form: [screenshot not included]
If I remove the second pair, everything works fine, GUI-wise. Am I missing something obvious here? All the examples I've found so far use a single RELATION-FIELDS pair. Now I wonder why. According to Progress Knowledgebase article 000018088 there is no voodoo involved.
Your syntax looks correct according to the manuals. But the interesting thing is that I do not see any place in our whole environment where we ever use more than one relation field. It may be that the designer wants to create a relationship for each field you defined.
What does the data that you place in your tables look like? The data must form a unique match.
I would ask Tom Bascom for some input on this. https://stackoverflow.com/users/123238/tom-bascom
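If you want to verify that the two-field relation itself behaves correctly at runtime (i.e., that the glitch is confined to the designer), one way is to populate both tables and serialize the dataset. A minimal sketch, assuming the definitions above; add the NESTED option to the DATA-RELATION if you want children nested inside parents in the XML:

{dsTest.i}

CREATE tt1.
ASSIGN tt1.tt_test        = "parent"
       tt1.tt_rel_field_1 = 1
       tt1.tt_rel_field_2 = 2.

CREATE tt2.
ASSIGN tt2.tt_test2       = "child"
       tt2.tt_rel_field_1 = 1
       tt2.tt_rel_field_2 = 2.

/* Serialize the dataset; child rows follow the DATA-RELATION */
DATASET dsTest:WRITE-XML("FILE", "dsTest.xml", TRUE).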
=============
Page 1-19 of the OpenEdge Development: ProDataSets manual does the following:
DEFINE DATASET dsOrder FOR ttOrder, ttOline, ttItem
    DATA-RELATION OrderLine FOR ttOrder, ttOline
        RELATION-FIELDS (OrderNum, OrderNum)
    DATA-RELATION LineItem FOR ttOline, ttItem
        RELATION-FIELDS (ItemNum, ItemNum).
Sorry, not sure how to do formatting here, but maybe test it by creating another table as a link between the two?

What is the easiest way to 'dump' a temp-table into SQL format

Version: 10.2b
So let's say I have a table dogs in Progress:
table: dogs
fields: name, age, breed
I want to load them into an Oracle database using the standard load utility. But the table in Oracle looks like this:
table: dog
fields: name, age, date_last_updated
So if I make a temp-table matching this, is there a way to 'dump' the temp-table into SQL (so that I can load it into Oracle)?
If not, that is a fine answer too.
Thanks for any help!!
EDIT:
By 'sql dump' I mean:
INSERT INTO dog
VALUES (Max, 5, Feb 18, 2013)
Is there any way to get the table in this format other than writing the words "INSERT INTO" into my file myself?
Using a database management tool with ODBC support (DBeaver, for example), you can connect to Progress databases and export tables and views as SQL INSERTs.
Based on your question, something like this would work. You can also replace the "TT" references with the actual table name.
OUTPUT TO VALUE("sqldump").
FOR EACH tt-dogs:
PUT UNFORMATTED
"INSERT INTO dog VALUES (" tt-dogs.dog-name tt-dogs.age TODAY ")" SKIP.
END.
OUTPUT CLOSE.
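With a record like ('Max', 5) and a session date format of mdy, the output would look something like:

INSERT INTO dog VALUES ('Max', 5, '02/18/13')

Note that Oracle may still want an explicit TO_DATE() around the date value, depending on its NLS settings.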
Thanks for the clarification.
From the 4GL the "natural" way to dump a table (either a real table or a temp-table) would be to use the EXPORT statement to create a text file. Like so:
/* export the dog records as-is */
output to "dog.d".
for each dog no-lock:
    export dog.
end.
output close.
or:
/* export modified dog records */
output to "dog.d".
for each dog no-lock:
    export dog.name dog.age now.
end.
output close.
This question, Oracle: Import CSV file, suggests that importing CSV files into Oracle is possible, so you could modify the code above to create a comma-separated file rather than using Progress's default space delimiter:
/* export modified dog records to a CSV file */
output to "dog.csv".
for each dog no-lock:
    export delimiter "," dog.name dog.age now.
end.
output close.

How can I calculate the total number of records using Progress 4GL?

How can I calculate the total number of records in a table? I want to show all table names in a DB along with the number of records in each table.
The fastest method is:
proutil dbname -C tabanalys > dbname.tab
this is an external utility that analyzes the db.
You can also, of course, read every record and count them, but that tends to be a lot slower.
The best way to get the number of records depends on what your application needs.
Our DBAs just use the Progress utilities: on Unix, /usr/dlc/bin/proutil -C dbanalys (or some variation) to get database information, and just dump that to a file.
To get the schema information from Progress itself you can use the VST tables; specifically, within a particular database, you can use the _file table to retrieve all of the table names.
Once you have the table names, you can query each table for its record count. The fastest way to do that is with a PRESELECT, which requires a dynamic buffer and query.
So you can do something like the following:
CREATE WIDGET-POOL.

DEF VAR h_predicate AS CHAR   NO-UNDO.
DEF VAR h_qry       AS HANDLE NO-UNDO.
DEF VAR h_buffer    AS HANDLE NO-UNDO.

/* _tbl-type "T" limits this to user tables, skipping the
   schema and virtual system tables */
FOR EACH _file NO-LOCK WHERE _file._tbl-type = "T":
    h_predicate = "PRESELECT EACH " + _file._file-name + " NO-LOCK".

    CREATE BUFFER h_buffer FOR TABLE _file._file-name.
    CREATE QUERY h_qry.
    h_qry:SET-BUFFERS( h_buffer ).
    h_qry:QUERY-PREPARE( h_predicate ).
    h_qry:QUERY-OPEN().

    /* With PRESELECT the result list is built at QUERY-OPEN,
       so NUM-RESULTS is the record count */
    DISP _file._file-name h_qry:NUM-RESULTS.

    DELETE OBJECT h_qry.
    DELETE OBJECT h_buffer.
END.
An easy one:
Select count(*) from tablename.
A bit more complex:
Def var i as int.
for each tablename no-lock:
    i = i + 1.
end.
display i.
For more complex answers, see the others.
Use the CURRENT-RESULT-ROW function with DEFINE QUERY and GET LAST to get the total number of records, e.g.:
DEFINE QUERY qCustomer FOR Customer SCROLLING.
OPEN QUERY qCustomer FOR EACH Customer NO-LOCK.
GET LAST qCustomer.
DISPLAY CURRENT-RESULT-ROW("qCustomer") LABEL "Total number of rows".
...
CLOSE QUERY qCustomer.
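Along the same lines (a variation, not part of the original answer): once GET LAST has forced the query to traverse its entire result list, the NUM-RESULTS function returns the same total:

DEFINE QUERY qCustomer FOR Customer SCROLLING.
OPEN QUERY qCustomer FOR EACH Customer NO-LOCK.

/* GET LAST walks the result list to the end, so the query
   now knows how many rows it contains */
GET LAST qCustomer.
DISPLAY NUM-RESULTS("qCustomer") LABEL "Total number of rows".

CLOSE QUERY qCustomer.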