Export File Name - DBeaver

When I'm exporting data in DBeaver using ${table}, a number is now being appended to the file name.
I have changed every setting out there that I'm aware of and tried different extensions.
I'm expecting TABLE.CSV and FILE.CSV, but I'm getting TABLE_1.CSV and FILE_2.CSV. What do I do to get rid of the incremental number so my Python scripts can pick up specific table names?

Is it possible to omit specific tables from an existing dump file when using psql to import data?

I have a dump file that I want to import, but it contains a log table with millions of records, which I need to exclude when executing psql < dump_file. Note: I cannot use pg_restore.
Edit:
Since the best option is to edit the file manually, any suggestions for removing 650K lines from a 690K-line file on Windows?
The correct way is to fix whatever problem is preventing you from using pg_restore (I guess you have already taken the dump in the wrong format?).
The quick and dirty way is to use a program to exclude what you don't want. I'd use perl, because that is what I would use. sed or awk have similar features, and I'm sure there are ways to do it in every other language you would care to look at.
perl -ne 'print unless /^COPY public.pgbench_accounts/../^\\\.$/' dump.file | psql
This excludes every line from the one that starts with COPY public.pgbench_accounts through the next following \.
Of course you would replace public.pgbench_accounts with your table's name, making sure to quote it properly if that is needed.
It might get confused if your database contains a row whose first column starts with the text "COPY public.pgbench_accounts"...
Then you have to edit the file manually.
A crude alternative might be: create a table with the same name as the log table, but with an incompatible definition or no permissions for the importing user. Then restoring that table will fail. If you ignore these errors, you have reached your goal.
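As a rough sketch of that alternative (public.big_log and importing_user are placeholder names, not from the question):
-- Pre-create a table with the log table's name but an incompatible definition,
-- so the dump's CREATE TABLE and its COPY both fail and the rows are skipped.
CREATE TABLE public.big_log (wrong_column integer);
-- Or keep the real definition and take INSERT away from the importing user instead:
-- REVOKE INSERT ON public.big_log FROM importing_user;
Running psql < dump_file will then report errors for that one table only, which you can ignore.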

Export CSV From Postgres via Command Line

Hello Stack Overflowers!
I'm currently exporting a Postgres table as a .csv using a C# application I developed. I'm able to export them no problem with the following command...
set PGPASSWORD=password
psql -U USERNAME Database_Name
\copy (SELECT * FROM table1) TO C:\xyz\exportfile.csv CSV DELIMITER ',' HEADER;
The problem I am running into is that the .csv is meant to be used with Tableau; however, when importing to Excel I run into the same issue: text fields are turned into integers in both Tableau and Excel. This causes problems specifically when joining on serial numbers on the Tableau side.
I know I can change these fields in Tableau/Excel manually, but I am trying to find a way to make sure the end user wouldn't need to do this. I'd like for them to just drag and drop the updated .csv PostgreSQL data extracts and be able to start Tableau with no problem; they don't seem very tech-savvy. I know you can connect Tableau directly to Postgres, but in this particular case I am not allowed to due to limitations beyond my control.
I'm using PostgreSQL 12 and Tableau v2019.4.0
EDIT: As requested, here is some example data! Both of the fields are TEXT inside of PostgreSQL, but the export doesn't specify that.
Excel Formatting
ASSETNUM,ITEMNUM
1834,8.11234E+12
1835,8.11234E+12
Notepad Formatting
ASSETNUM,ITEMNUM
1834,8112345673294
1835,8112345673295
Note: If you select the specific cell in Excel it shows the full number.
CSV files don't have any type information, so programs like Excel/Tableau are free to interpret the data how they like.
However, @JorgeCampos's link provides useful information. For example,
"=""123""","=""123"""
gets interpreted differently than
123,123
when you load it into Excel.
If you want to add quotes to your data, the easiest way is to use PostgreSQL's string functions, e.g.
SELECT '"=""' || my_column || '"""' FROM my_database

Importing issue in PostgreSQL using pgAdmin 4

The file is not importing after I created the table. The first line of the code is for the table (COPY), the second line is the path of the file (FROM), and for the WITH clause I am not entirely sure whether a prior line of code is needed for it to succeed, as it is not being highlighted in pink. The import should go through either with pgAdmin's built-in tool or with the syntax, but neither of them generates the needed output. Here are some screenshots:
So I did another table, this time focusing on a single column and ensuring that the name of the column matched in both the table and the file, and it worked. The prior example had several columns whose names were spelled differently in the table and in the file:
You can try this sequentially...
1. First create the .csv file. The .csv file's column sequence is most important.
2. Consider the below employee_info.csv file
And consider your database table employee_info, which contains (emp_id [numeric], emp_name [character], emp_sal [numeric], emp_loc [character]).
Then execute the below query:
a. copy employee_info(emp_id, emp_name, emp_sal, emp_loc) from 'C:\Users\Zbook\Desktop\employee_info.csv' DELIMITER ',' CSV;
Note: Ensure that no .csv row value is null. Like below...
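If you are on a reasonably recent PostgreSQL, the same import can also be written with the newer option syntax. This is only a sketch; the HEADER and NULL choices are assumptions about the file (drop HEADER if the .csv has no header row):
-- FORMAT csv handles quoting; HEADER skips the first line; NULL '' lets
-- empty fields load as NULL instead of failing on the numeric columns.
COPY employee_info (emp_id, emp_name, emp_sal, emp_loc)
FROM 'C:\Users\Zbook\Desktop\employee_info.csv'
WITH (FORMAT csv, HEADER true, NULL '');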

Using PowerShell to create custom install strings from CSV input

I'm mostly just looking to be pointed in the right direction so I can piece it together myself. I have a decent amount of batch file scripting experience. I'm a PS noob but I think PS would be better for the project below.
We have software which requires the client ID to be part of the install string (along with switches, usr/pass, other switches, logging paths, etc).
I've created a batch file (hundreds, actually) which I execute with PsExec on remote machines; it does work, but it's unwieldy to maintain. The only change in each is the client ID.
What I'm attempting to do is have a CSV with 2 columns as input (so I just have to maintain the CSV): machine name (as presented by %hostname%) & client ID. I want to create a script which matches %hostname% to a corresponding row in column 1, read the data in column 2 of the same row, and then be able to call that as a variable in the install string.
E.G.
If my CSV has bobs-pc in column 1, row 6, then insert the data from column 2, row 6 (let's call it 0006) in the following install string:
install.exe /client_ID=0006
no looping
I don't want it to install on all machines simultaneously due to the multiple time zones we operate in.
Something like this would be really useful for many projects I have so I'm more interested in learning than having anyone write it for me.
I understand I should be using Import-Csv. I've created a sample csv and can get certain fields to print out in PS. What I need is for a script to be able to insert those fields as variables in the install string.
Sounds like you want something along the lines of this (assumes your CSV has a header row of col1 and col2):
$hostname = 'server1'
$value = Import-Csv myfile.csv | Where-Object { $_.col1 -eq $hostname } | Select-Object -ExpandProperty col2
Install.exe "/client_ID=$value"

MongoDB import and deciphering changed rows

I have a large csv file which contains over 30 million rows. I need to load this file on a daily basis and identify which of the rows have changed. Unfortunately there is no unique key field, but it's possible to use four of the fields to make it unique. Once I have identified the changed rows I will then want to export the data. I have tried using a traditional SQL Server solution, but the performance is so slow it's not going to work, so I have been looking at MongoDB - it managed to import the file in about 20 minutes (which is fine). Now I don't have any experience using MongoDB and, more importantly, I don't know its best practices. So, my idea is the following:
As a one-off - import the data into a collection using mongoimport.
Copy all of the unique ids generated by Mongo and put them in a separate collection.
Import new data into the existing collection using upsert fields, which should create a new id for each new and changed row.
Compare the 'copy' to the new collection to list out all the changed rows.
Export changed data.
This to me will work but I am hoping there is a much better way to tackle this problem.
Use unix sort and diff.
Sort the file on disk
sort -o new_file.csv -t ',' big_file.csv
sort -o old_file.csv -t ',' yesterday.csv
diff new_file.csv old_file.csv
Commands may need some tweaking.
You can also use MySQL to import the file via
http://dev.mysql.com/doc/refman/5.1/en/load-data.html (LOAD DATA INFILE)
and then create a KEY (or primary key) on the 4 fields.
Then load yesterday's file into a different table and use a couple of SQL statements to compare the two tables...
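A sketch of such a comparison, assuming the four identifying fields are k1..k4 and the remaining data is in other_data (all placeholder names, not from the question):
-- new or changed rows: present in today's load with no exact match in yesterday's
SELECT t.*
FROM today t
LEFT JOIN yesterday y
  ON  t.k1 = y.k1 AND t.k2 = y.k2 AND t.k3 = y.k3 AND t.k4 = y.k4
  AND t.other_data = y.other_data
WHERE y.k1 IS NULL;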
But, diff will work best!
-daniel