What is the easiest way to 'dump' a temp-table into SQL format - progress-4gl

Version: 10.2b
So let's say I have a table dogs in Progress:
table: dogs
fields: name, age, breed
I want to load the records into an Oracle database using the standard load utility. But the table in Oracle looks like this:
table: dog
fields: name, age, date_last_updated
So if I make a temp-table matching this, is there a way to 'dump' the temp-table into SQL (so that I can load it into Oracle)?
If not, that is a fine answer too.
Thanks for any help!!
EDIT:
By 'SQL dump' I mean:
INSERT INTO dog
VALUES ('Max', 5, '2013-02-18')
Is there any way to get the table into this format other than explicitly writing the words "INSERT INTO" into my file?

Using a database management tool with ODBC support (DBeaver, for example), you can connect to Progress databases and export tables and views as SQL INSERT statements.

Based on your question, something like this would work. You can also replace the "TT" references with the actual table name. Note that for the output to be valid SQL, character values need single quotes and the values need commas between them:
OUTPUT TO VALUE("sqldump.sql").
FOR EACH tt-dogs:
    PUT UNFORMATTED
        "INSERT INTO dog VALUES ('" tt-dogs.dog-name "', "
        tt-dogs.age ", '" TODAY "');" SKIP.
END.
OUTPUT CLOSE.
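Assuming a dog named Max, age 5, and the default mdy session date format, the generated file would then contain lines like:
INSERT INTO dog VALUES ('Max', 5, '02/18/13');
Two caveats (my additions, not part of the original answer): any single quote embedded in a name has to be doubled up to keep the generated SQL valid, and Oracle will only parse that date literal if it matches the session's NLS_DATE_FORMAT, so wrapping it in an explicit TO_DATE(...) is safer.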

Thanks for the clarification.
From the 4GL the "natural" way to dump a table (either a real table or a temp-table) would be to use the EXPORT statement to create a text file. Like so:
/* export the dog records as-is */
output to "dog.d".
for each dog no-lock:
    export dog.
end.
output close.
or:
/* export modified dog records */
output to "dog.d".
for each dog no-lock:
    export dog.name dog.age now.
end.
output close.
This Oracle: Import CSV file question suggests that importing CSV files into Oracle is possible, so you could modify the code above like so, creating a comma-separated file rather than using Progress's default space delimiter:
/* export modified dog records to a CSV file */
output to "dog.csv".
for each dog no-lock:
    export delimiter "," dog.name dog.age now.
end.
output close.
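To load the resulting dog.csv with Oracle's standard load utility (SQL*Loader) you would also need a small control file. A rough sketch, with the column names assumed from the question; the DATE mask is a placeholder that has to match the format the exported value really has (exporting a pre-formatted string instead of a bare NOW makes that easier to control):
-- dog.ctl (hypothetical control file)
LOAD DATA
INFILE 'dog.csv'
INTO TABLE dog
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
( name, age, date_last_updated DATE "MM/DD/YYYY" )
Then run something like: sqlldr scott/tiger control=dog.ctl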

Related

Split a string column which has a delimiter (',') - SQL

Hi, I am trying to split a string column which has a delimiter (','):
drop table #Address
CREATE TABLE #Address(stir VARCHAR(max));
GO
INSERT INTO #Address(stir)
VALUES ('aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"')
,('ca,"23","42316eg3","pom","","1800-01-01","9999","","1"')
,('daa,"","1324567a","","catty","","756432","213",""')
GO
Expected output:
I am using PARSENAME but it is returning null values. Can you guide me on my expected output?
Thanks in advance.
PARSENAME splits on periods and handles at most four parts, which is why it returns NULL for your nine comma-separated values. The best solution here would be to just create a flat CSV file based on your current insert data, and then use SQL Server's bulk import tooling to load it into a table. The following CSV data should be workable here:
aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"
ca,"23","42316eg3","pom","","1800-01-01","9999","","1"
daa,"","1324567a","","catty","","756432","213",""
Just make sure that you specify the double quote as the field quote (escape) character.
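For example, a minimal BULK INSERT sketch, assuming the rows above were saved as C:\data\address.csv and a nine-column target table dbo.Address already exists; note that FORMAT = 'CSV' and FIELDQUOTE require SQL Server 2017 or later:
BULK INSERT dbo.Address
FROM 'C:\data\address.csv'
WITH (
    FORMAT = 'CSV',          -- treat the file as RFC-4180-style CSV
    FIELDQUOTE = '"',        -- double quote is the field quote character
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);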

Convert XML PATH sample code from SQL Server to DB2

I'm converting SQL Server code to DB2.
I need a solution for STUFF and FOR XML PATH.
Ex:
SELECT STUFF((SELECT something
FROM tablename
WHERE condition
FOR XML PATH('')), 1, 1, '')
Please convert this to DB2.
Your code is an old-school XML "trick" to convert multiple values into a single string (often comma separated, but in this case space separated). Since those days DB2 (and the SQL standard) has added a new function called LISTAGG which is designed to solve this exact problem:
SELECT LISTAGG(something, ' ')
FROM tablename
WHERE condition
DB2 docs:
https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_listagg.html
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_74/db2/rbafzcollistagg.htm
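For reference, a small worked example of the same idea with grouping; the employees table and its columns are made up for illustration:
-- one space-separated string of names per department
SELECT dept,
       LISTAGG(name, ' ') WITHIN GROUP (ORDER BY name) AS members
FROM employees
GROUP BY dept;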

How can I export images from a PostgreSQL database?

I have a simple data table.
Schemas > resources > tables > image_data contains the columns:
image_id (integer), raw_data (bytea)
How can I export this data to files named based on the image_id? Once I figure that out I want to use the image_id to name the files based on a reference in another table.
So far I've got this:
SELECT image_id, encode(raw_data, 'hex')
FROM resources.image_data;
It generates a CSV with all the images as hex strings, but I don't know what to do with it.
Data-sample:
"image_id";"encode"
166;"89504e470d0a1a0a0000000d49484452000001e0000003160806000000298 ..."
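For what it's worth, a rough sketch of one way to turn that hex back into a file, using psql's \copy for a single image_id (166 from the sample) and an external hex decoder; plain SQL by itself cannot write one file per row:
\copy (SELECT encode(raw_data, 'hex') FROM resources.image_data WHERE image_id = 166) TO 'image_166.hex'
-- then, from a shell: xxd -r -p image_166.hex > image_166.png
Looping over all image_ids would have to happen in a shell script or a small client program.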

Dump subset of records in an OpenEdge database table in the ".d" file format

I am looking for the easiest way to manually dump a subset of records in an OpenEdge database table in the Progress ".d" file format.
The best way I can imagine is creating an extra test database with a schema identical to the source database, and then copying the subset of records over to the test database using FOR EACH and BUFFER-COPY statements. Then just export the data from the test database using the Dump Data and Definitions > Table Contents (.d file)... menu option.
That seems like a lot of trouble. If you can identify the subset of records in order to do the BUFFER-COPY, then you should also be able to:
OUTPUT TO VALUE( "table.d" ).
FOR EACH table NO-LOCK WHERE someCondition:
    EXPORT table.
END.
OUTPUT CLOSE.
Which is, essentially, what the dictionary "dump data" .d file is, less a few lines of administrivia at the bottom that can safely be omitted for most purposes.
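Going the other way, the subset can be read back in with a matching IMPORT loop; a minimal sketch, assuming the target database has the same schema (the REPEAT block's default ON ENDKEY UNDO, LEAVE behavior ends the loop, and undoes the last dangling CREATE, when IMPORT hits end-of-file):
INPUT FROM VALUE( "table.d" ).
REPEAT:
    CREATE table.
    IMPORT table.
END.
INPUT CLOSE.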

How to import file into sqlite?

On a Mac, I have a txt file with two columns, the first corresponding to an autoincrement ID column in an SQLite table:
, "mytext1"
, "mytext2"
, "mytext3"
When I try to import this file, I get a datatype mismatch error:
.separator ","
.import mytextfile.txt mytable
How should the txt file be structured so that it uses the autoincrement?
Also, how do I enter text that will contain line breaks? For example:
"this is a description of the code below.
The text might have some line breaks and indents. Here's
the related code sample:
foreach (int i = 0; i < 5; i++){
//do some stuff here
}
this is a little more follow up text."
I need the above inserted into one row. Is there anything special I need to do to the formatting?
For one particular table, I want each of my rows as a file and import them that way. I'm guessing it is a matter of creating some sort of batch file that runs multiple imports.
Edit
That's exactly the syntax I posted, minus a tab, since I'm using a comma. The missing line break in my post didn't make it as apparent. Anyway, that gives the mismatch error.
I was looking at the same problem, and it looks like I've found an answer to the first part of your question: importing a file into a table with an ID field.
So yes, create a temporary table without the ID, import your file into it, then do an insert..select to copy its data into your target table. (Remove the leading commas from mytextfile.txt first.)
-- assuming your table is called Strings and
-- was created like this:
-- create table Strings( ID integer primary key, Code text )
create table StringsImport( Code text );
.import mytextfile.txt StringsImport
insert into Strings ( Code ) select * from StringsImport;
drop table StringsImport;
I do not know what to do about the newlines. I've read some mentions that importing in CSV mode will do the trick (.mode csv), but when I tried it, it did not seem to work.
In case anyone is still having issues with this, you can download an SQLite manager.
There are several that allow importing from a CSV file.
Here is one, but a Google search should reveal a few: http://sqlitemanager.en.softonic.com/
I'm in the process of moving data containing long text fields with various punctuation marks (they are actually articles on coding) into SQLite, and I've been experimenting with various text imports.
I created a database in SQLite with a table (note that AUTOINCREMENT requires the column to be declared INTEGER PRIMARY KEY):
CREATE TABLE test (id INTEGER PRIMARY KEY AUTOINCREMENT, textfield TEXT);
then do a backup with .dump.
I then manually add the text below the "CREATE TABLE" line in the resulting .dump file, like so:
INSERT INTO test (id, textfield) VALUES (1, 'Isn''t it great to have
really long text with various punctuation marks and
newlines');
Change any single quotes to two single quotes (change ' to ''). Note that the index number needs to be added manually (I'm sure there is an AWK/sed one-liner to do it automatically). Then change the autoincrement number in the "sequence" line in the dump file to one above the last index number you added (I don't have SQLite in front of me to give you the exact line, but it should be obvious).
With the new file, I can then restore it into the database.
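Put together, the relevant part of the edited dump file would look roughly like this (the sqlite_sequence line is my assumption of how .dump renders the "sequence" line; seq must be the highest id inserted above):
BEGIN TRANSACTION;
CREATE TABLE test (id INTEGER PRIMARY KEY AUTOINCREMENT, textfield TEXT);
INSERT INTO test (id, textfield) VALUES (1, 'Isn''t it great to have
really long text with various punctuation marks and
newlines');
INSERT INTO sqlite_sequence VALUES('test', 1);
COMMIT;
Restoring it is then: sqlite3 new.db < dumpfile.sql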