Load the temp table data into a text file using temp table's handle - progress-4gl

I have created a temp table. I want to write all of its data, including the field names, to a text file using the temp-table's handle. What can I do?

Using the default buffer handle of the temp-table (hTable:DEFAULT-BUFFER-HANDLE), you can loop through the fields of the table:
DO i = 1 TO hBufferHandle:NUM-FIELDS:
    ...
END.
You would do that twice, once to output the field names as headers to your text file, and then once for each record of the temp table to export the values.
You will have to handle things like extents.
You will have to deal with data types, making decisions on what to do, depending on what you have in your table.
In theory it's not very complex code, and you could write a simple reusable library to do the work.
Use the documentation to find the full syntax.
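A minimal sketch of the idea, assuming hTable is the temp-table handle, the output file name is made up, the output is tab-delimited, and extents and special data types are not handled:

DEFINE VARIABLE hBuf   AS HANDLE  NO-UNDO.
DEFINE VARIABLE hQuery AS HANDLE  NO-UNDO.
DEFINE VARIABLE i      AS INTEGER NO-UNDO.

hBuf = hTable:DEFAULT-BUFFER-HANDLE.

OUTPUT TO VALUE("ttdata.txt").

/* Header line: the field names */
DO i = 1 TO hBuf:NUM-FIELDS:
    PUT UNFORMATTED hBuf:BUFFER-FIELD(i):NAME.
    IF i < hBuf:NUM-FIELDS THEN PUT UNFORMATTED "~t".
END.
PUT UNFORMATTED SKIP.

/* One line per record: the field values */
CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuf).
hQuery:QUERY-PREPARE("FOR EACH " + hBuf:NAME).
hQuery:QUERY-OPEN().

REPEAT:
    hQuery:GET-NEXT().
    IF hQuery:QUERY-OFF-END THEN LEAVE.
    DO i = 1 TO hBuf:NUM-FIELDS:
        PUT UNFORMATTED STRING(hBuf:BUFFER-FIELD(i):BUFFER-VALUE).
        IF i < hBuf:NUM-FIELDS THEN PUT UNFORMATTED "~t".
    END.
    PUT UNFORMATTED SKIP.
END.

hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
OUTPUT CLOSE.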

Related

How to export statistics_log data cumulatively to the desired Excel sheet in AnyLogic?

I have selected to export tables at the end of model execution to an Excel file, and I would like that data to accumulate on the same Excel sheet after every stop and start of the model. As of now, every stop and start just exports that 1 run's data and overwrites what was there previously. I may be approaching the method of exporting multiple runs wrong/inefficiently but I'm not sure.
The best method is to export the raw data, as you do (if it is not too large).
However, two improvements:
Manage your output data yourself, i.e. do not rely on the standard export tables but only write the data that you really need. Check the help article on writing your own data to learn how.
In your custom output data tables, add additional identification columns such as date_of_run. I often use iteration and replication columns to also identify which of those the data stems from.
Custom CSV approach
An alternative approach is to create your own csv file programmatically; this is possible with Java code. Then you can create a new one (with a custom filename) after any run:
First, define a "Text file" element (named file in the code below).
Then, use this code below to create your own csv with a custom name and write to it:
File outputDirectory = new File("outputs");
outputDirectory.mkdir();
String outputFileNameWithExtension = outputDirectory.getPath()+File.separator+"output_file.csv";
file.setFile(outputFileNameWithExtension, Mode.WRITE_APPEND);
// create header
file.println( "col_1"+","+"col_2");
// Write data from dbase table
List<Tuple> rows = selectFrom(my_dbase_table).list();
for (Tuple row : rows) {
    file.println(row.get(my_dbase_table.col_1) + "," +
                 row.get(my_dbase_table.col_2));
}
file.close();

Performance of text column update in PostgreSQL

I would like to store some potentially large log files in a text column in Postgres. And I would like to frequently append additional content to these files using an UPDATE statement. Will this be efficient?
I'd like to do something like this:
UPDATE mytable
SET mytextlog = mytextlog || 'This will be appended'
WHERE id = 1
Will Postgres simply append the additional content without rewriting what is in the column already? Or will it read the content of the column into memory, append the additional content, and then write back the entire column? I realize this is an implementation detail. I'm just a little concerned that what I'm planning to do will be unnecessarily slow as the stored text grows large. Thanks in advance!

Copy text file using postgres with custom delimiter by character size

I need to copy a text file which has a confusing delimiter. I believe the delimiter is a space. However, some of the column values are empty and I cannot tell which column is which, which makes it hard to load the data into the database since the spaces do not indicate anything. Thus, when I try to COPY, the mapping is not right and I am getting ERROR: extra data after last expected column
I have tried changing the delimiter to a comma and the like, but I still get the same error. The code below works when I load some dummy data with a proper delimiter.
COPY usm00070219(HEADREC_ID,YEAR,MONTH,DAY,HOUR,RELTIME,NUMLEV,P_SRC,NP_SRC,LAT,LON) FROM 'D:\....\USM00070219-data.txt' DELIMITER ' ';
The example data should have 11 columns, but the first row contains only 10 values and the empty-value column cannot be identified. The spacing is not helpful at all!
Is there any way I can separate the columns by character size, as a kind of fixed-width delimiter, and force the data to be divided at the given positions?
COPY is not made to handle fixed-width text files. I can think of two options:
Load the file as it is into a table with a single text column using COPY. Then use regexp_split_to_array to split it into its components and insert these into another table (a sketch follows after this list).
You can use file_fdw to create a foreign table with a single text column like above and operate on that. That saves loading the file into the database.
There is also a foreign data wrapper for fixed-width text files that you can try.
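A rough sketch of the first option, with a made-up staging table name, only the first few target columns, and illustrative casts:

-- staging table with a single text column
CREATE TABLE raw_lines (line text);

-- default text format: each whole line lands in the single column,
-- as long as the file contains no tab characters
COPY raw_lines FROM 'D:\...\USM00070219-data.txt';

-- split each line and insert into the real table
INSERT INTO usm00070219 (headrec_id, year, month)
SELECT parts[1],
       parts[2]::int,
       parts[3]::int
FROM (
    SELECT regexp_split_to_array(trim(line), '\s+') AS parts
    FROM raw_lines
) AS s;

With truly fixed-width data, substr(line, start, length) per column can be used for the split instead of a regular expression, which also preserves the empty columns.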

Postgresql: split cell containing column names (WHERE Metacolumn='col1;col2;col3;..') apart into array to dynamically generate INSERT statement

In PostgreSQL (and Sybase ADS), I am building my own trigger-based multimaster replication across both platforms, which must dynamically handle various composite keys and sometimes no PK on certain tables. To make it easiest, I am trying to auto-generate the INSERT/UPDATE/DELETE statements, where the user can choose which columns they want to copy over by listing column names in a cell, separated by semicolons.
-"SELECT Address, city, us_state, zipcode FROM public.place;" would be a table that needs to replicate.
-The Metatable for Publication/Subscriptions would have a cell containing 'Address;city;us_state;zipcode'.
-I am using Insert/update/delete triggers to capture new row data and want to use the columns to dynamically make a statement like
"insert into place (Address,city,us_state,zipcode) VALUES (NEW.Address,NEW.city,NEW.us_state,NEW.zipcode);" which can be read and executed on the desination via script. I will do the same action for UPDATE and DELETE, using OLD prefix in the UPDATE and DELETE generated statements where needed.
I am not looking for someone to do a bunch of work, but to give an idea of any functions, logic and statements involved. Thank you for any ideas or advice.
You can split a string and create an array using the regexp_split_to_array function.
Probably something like: regexp_split_to_array(metacolumn, ';')
More info about string functions: https://www.postgresql.org/docs/9.6/functions-string.html
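As a rough sketch (table and column names are just illustrations, and in a real trigger you would substitute the actual row values and quote the identifiers), turning the metacolumn value into an INSERT statement could look like this:

DO $$
DECLARE
    metacolumn text   := 'Address;city;us_state;zipcode';
    cols       text[] := regexp_split_to_array(metacolumn, ';');
    col_list   text   := array_to_string(cols, ',');
    val_list   text;
    stmt       text;
BEGIN
    -- prefix every column name with NEW., preserving the original order
    SELECT string_agg('NEW.' || c, ',' ORDER BY ord)
      INTO val_list
      FROM unnest(cols) WITH ORDINALITY AS t(c, ord);

    stmt := format('insert into place (%s) VALUES (%s);', col_list, val_list);
    RAISE NOTICE '%', stmt;  -- store or execute this on the destination
END $$;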

Redshift - Adding a column, do we have to change our previous CSVs to include it?

I currently have a redshift table in our database that has 10 columns, and I want to add another. It's trivial to do an alter table to do this.
My question - When I do this, will all my old CSV files fail to insert into redshift (via COPY from S3) given they won't have this new column?
I was hoping the columns would just be NULL vs. it failing on import, but I haven't seen any documentation on this.
Ideally I wish I could specify the actual column name in the header row of the CSV, but I haven't seen if that is possible anywhere.
The FILLRECORD option of the COPY command does that: 'Allows data files to be loaded when contiguous columns are missing at the end of some of the records.'
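For example (table name, bucket, and IAM role below are placeholders), the old files load with the missing trailing column set to NULL:

COPY mytable
FROM 's3://my-bucket/old-files/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
CSV
FILLRECORD;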