Is it possible to run multiple PostgreSQL queries and, using pgAdmin III, have them each export to a separate tab in an XLSX file?
Along the same lines, is it possible to run one PostgreSQL query that exports to multiple tabs based on some criteria?
You'll want to use an external tool for this. PostgreSQL knows nothing about the XLSX format, nor about OpenDocument or any of that.
I suggest writing a script that exports a bunch of individual CSV files with COPY, then using an external tool to convert them to XLSX and assemble them into sheets in the document.
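As a rough sketch of that approach in Python, assuming psycopg2 and openpyxl are available (the connection string, queries, and sheet names below are placeholders, not anything prescribed by PostgreSQL itself):

    import csv
    import io

    import psycopg2
    from openpyxl import Workbook

    # Hypothetical queries; one worksheet ("tab") per query.
    QUERIES = {
        "customers": "SELECT * FROM customers",
        "recent_orders": "SELECT * FROM orders WHERE order_date >= '2013-01-01'",
    }

    conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
    wb = Workbook()
    wb.remove(wb.active)  # drop the default empty sheet

    with conn, conn.cursor() as cur:
        for sheet_name, query in QUERIES.items():
            # Have the server format each result as CSV via COPY ... TO STDOUT.
            buf = io.StringIO()
            cur.copy_expert("COPY (%s) TO STDOUT WITH CSV HEADER" % query, buf)
            buf.seek(0)

            # Append the CSV rows to a worksheet named after the query.
            ws = wb.create_sheet(title=sheet_name)
            for row in csv.reader(buf):
                ws.append(row)

    wb.save("export.xlsx")

Note that everything arrives as text this way; if you need real numeric or date cells you'd convert the values before appending them.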
It's possible that ETL tools like CloverETL, Pentaho Kettle, or Talend Studio may do what you want too. I haven't checked this specific functionality.
I have an AS400 with an IBM DB2 database and I need to create a Format Description File (FDF) for each table in the DB. I can create the FDF file using the IBM Export tool but it will only create one file at a time which will take several days to complete. I have not found a way to create the files systematically using a tool or query. Is this possible or should this be done using scripting?
First of all, to correct a misunderstanding...
A Format Description File has nothing at all to do with the format of a Db2 table. It actually describes the format of the data in a stream file that you are uploading into the Db2 table. Sure, you can turn on an option during the download from Db2 to create the FDF file, but it still describes the data in the stream file you've just downloaded the data into. You can use the resulting FDF file to upload a modified version of the downloaded data, or as the starting point for creating an FDF file that matches the actual data you want to upload.
Which explains why there's no built-in way to create an appropriate FDF file for every table on the system.
I question why you think you actually need to generate an FDF file for every table.
As I recall, the format of the FDF (or its newer variant, FDFX) is pretty simple; it shouldn't be all that difficult to generate if you really wanted to. But I don't have one handy at the moment, and my Google-fu has failed me.
I have been looking for a way to automate my data loads into Vertica instead of manually exporting flat files each time, and stumbled upon the ETL tool Talend.
I have been working with a test folder containing multiple CSV files, and am attempting to find a way to build a job so the files can be loaded into Vertica.
However, I see that in the free Open Studio version, if your files do not have the same schema, this becomes next to impossible without the dynamic schema option, which is only available in the enterprise version.
I start with tFileList and attempt to iterate through tFileInputDelimited, but the schemas are not uniform, so of course it will stop the processing.
So, long story short, am I correct in assuming that there is no way to automate data loads in the free version of Talend if you have a folder consisting of files with different schemas?
If anyone has any suggestions for other open source ETLs to look at, or a solution, that would be great.
You can access the CURRENT_FILE variable from a tFileList component and then send each file down a different route depending on the file name, creating a tFileInputDelimited for each file. For example, if you had two files named file1.csv and file2.csv, right-click the tFileList and choose Trigger > Run If. In the Run If condition type ((String)globalMap.get("tFileList_1_CURRENT_FILE")).toLowerCase().matches("file1.csv") and drag it to the tFileInputDelimited set up to handle file1.csv. Do the same for file2.csv, changing the file name in the Run If condition.
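If you do end up scripting this outside of Talend instead, the same routing idea is only a few lines of Python. A minimal sketch, assuming the vertica-python driver is installed; the connection details, folder path, file names, and COPY targets are all placeholders:

    import glob
    import os

    import vertica_python

    # Hypothetical routing: map each known file name to the COPY statement
    # for the table it belongs to (header handling omitted for brevity).
    ROUTES = {
        "file1.csv": "COPY schema1.table1 FROM STDIN DELIMITER ',' ENCLOSED BY '\"'",
        "file2.csv": "COPY schema1.table2 FROM STDIN DELIMITER ',' ENCLOSED BY '\"'",
    }

    conn_info = {"host": "localhost", "port": 5433, "user": "dbadmin",
                 "password": "***", "database": "mydb"}  # placeholders

    conn = vertica_python.connect(**conn_info)
    try:
        cur = conn.cursor()
        for path in glob.glob("/data/test_folder/*.csv"):  # placeholder folder
            name = os.path.basename(path).lower()
            copy_sql = ROUTES.get(name)
            if copy_sql is None:
                continue  # unknown schema: skip it, or log it for review
            with open(path, "rb") as fh:
                cur.copy(copy_sql, fh)  # stream the file into Vertica
        conn.commit()
    finally:
        conn.close()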
I'm trying to use the pgAdmin III import tool and want to upload a .csv file. I don't know the column names or the number of columns beforehand, and would like to have them populated on the fly. I also know that the number of columns is consistent across rows.
In the sense of having a table dynamically created for you from the CSV, no, not with PgAdmin-III or psql.
You'll want to write a quick script for that with your preferred scripting language + its PostgreSQL driver interface, or use an ETL tool like CloverETL, Pentaho Kettle, or Talend Studio.
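As a rough illustration of the scripting route, here's a minimal Python sketch using psycopg2 that reads the header row, creates a table with one text column per header (since the types aren't known up front), and then COPYs the file in. The file path, table name, and DSN are placeholders:

    import csv

    import psycopg2
    from psycopg2 import sql

    CSV_PATH = "incoming.csv"      # placeholder
    TABLE_NAME = "import_staging"  # placeholder

    # Column names come from the first row of the file.
    with open(CSV_PATH, newline="") as fh:
        header = next(csv.reader(fh))

    conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
    with conn, conn.cursor() as cur:
        # Build CREATE TABLE on the fly; every column is text since types are unknown.
        columns = sql.SQL(", ").join(
            sql.SQL("{} text").format(sql.Identifier(col)) for col in header
        )
        cur.execute(
            sql.SQL("CREATE TABLE {} ({})").format(sql.Identifier(TABLE_NAME), columns)
        )

        # Load the data with COPY, letting the HEADER option skip the first row.
        copy_sql = sql.SQL("COPY {} FROM STDIN WITH CSV HEADER").format(
            sql.Identifier(TABLE_NAME)
        ).as_string(conn)
        with open(CSV_PATH, newline="") as fh:
            cur.copy_expert(copy_sql, fh)

From there you can ALTER the column types, or insert into a properly typed table, once you know what the data actually contains.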
I am faced with a situation where we get a lot of CSV files from different clients, but there is always some issue with the column count and column lengths that our target table is expecting.
What is the best way to handle frequently changing CSV files? My goal is to load these CSV files into a Postgres database.
I checked the \COPY command in Postgres, but it does not have an option to create the table.
You could try creating a pg_dump-compatible file which has the appropriate "create table" section, and use that to load your data instead.
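As a rough sketch of that idea, the following Python script writes out a loader file containing a "create table" statement (all columns as text, since the incoming types are unknown) followed by a COPY ... FROM stdin section, which you can then run with psql -f. File and table names are placeholders:

    import csv

    CSV_PATH = "clientA_2013-06.csv"   # placeholder input file
    TABLE_NAME = "clienta_import"      # placeholder target table
    OUT_PATH = "load_clienta.sql"      # feed this to psql -f

    def pg_escape(value):
        """Escape a value for COPY's text format; empty strings become NULL (a simplification)."""
        if value == "":
            return r"\N"
        return (value.replace("\\", "\\\\")
                     .replace("\t", "\\t")
                     .replace("\n", "\\n")
                     .replace("\r", "\\r"))

    with open(CSV_PATH, newline="") as src, open(OUT_PATH, "w") as out:
        rows = csv.reader(src)
        header = next(rows)

        # CREATE TABLE with one text column per header field.
        cols = ", ".join('"%s" text' % c.replace('"', '""') for c in header)
        out.write("CREATE TABLE %s (%s);\n\n" % (TABLE_NAME, cols))

        # COPY ... FROM stdin section: tab-separated rows, terminated by \.
        out.write("COPY %s FROM stdin;\n" % TABLE_NAME)
        for row in rows:
            out.write("\t".join(pg_escape(v) for v in row) + "\n")
        out.write("\\.\n")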
I recommend using an external ETL tool like CloverETL, Talend Studio, or Pentaho Kettle for data loading when you're having to massage different kinds of data.
\copy is really intended for importing well-formed data in a known structure.
I would like to import an Excel .xls workbook into PowerBuilder. The file has 2 sheets, and these sheets must be imported into 2 different DB tables.
Any assistance is kindly appreciated.
Thanks
John.
First thing, there's nothing automagic, along the lines of a one-line solution that you could get for other file formats. There's a manual method, there's a scripting approach, and you can probably merge the two as a third option.
For a manual method, you can go into Excel and export your data as something that will import into a DataWindow. You don't mention your PowerBuilder version, but the file format for importing from Excel that comes to mind is CSV, which was added in PB9.
For a scripting approach, you can use OLE (assuming Excel is installed on the client machine) and access data however you want with the scripting engine, moving it into PowerBuilder in whatever format you want.
To mix the methods, you could use OLE to export the file to a couple of CSVs, then dw.ImportFile() the data in.
Good luck,
Terry.
Postscript: Sybase has examples of OLE access, and examples of using ODBC, a solution I had neglected before.
If you give names to the areas with the data in Excel and then setup ODBC connections that point to them, you can access them like a database table from within PowerBuilder.