I'm attempting to spy on non-SSL PostgreSQL traffic with tshark, using the following command:
# tshark -f 'tcp dst port 5432' -O PGSQL \
-d 'tcp.port==5432,pgsql' -T fields -e pgsql.query
I am able to see the SQL queries, but all the actual values/parameters are missing (replaced instead with the placeholders $1, $2, $3, etc.). Example output is as follows:
...
INSERT INTO mdl_logstore_standard_log
(eventname,component,action,target,objecttable,objectid,crud,edulevel,
contextid,contextlevel,contextinstanceid,userid,courseid,relateduserid,
anonymous,other,timecreated,origin,ip,realuserid)
VALUES
($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20),
($21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40)
INSERT INTO mdl_backup_files_temp (contextid,component,filearea,itemid,info,backupid)
VALUES($1,$2,$3,$4,$5,$6) RETURNING id RELEASE SAVEPOINT
moodle_pg_savepoint; SAVEPOINT moodle_pg_savepoint SELECT * FROM mdl_backup_ids_temp
WHERE backupid = $1 AND itemname = $2 AND itemid = $3 INSERT INTO
mdl_backup_files_temp (contextid,component,filearea,itemid,info,backupid)
VALUES($1,$2,$3,$4,$5,$6) RETURNING id RELEASE SAVEPOINT moodle_pg_savepoint;
SAVEPOINT moodle_pg_savepoint
...
What am I missing here, and how can I view the values/parameters as well?
In order to see the values passed to a prepared statement, you'll need to set log_statement to either mod or all. log_statement logs each statement that is about to be executed, including the parameters/arguments used.
I think the easiest way to turn it on is by doing:
psql -c "ALTER SYSTEM SET log_statement TO 'all'"
psql -c "SELECT pg_reload_conf()"
From there, you should be able to view the parameters.
Bear in mind, this has the potential to generate a lot of log volume, so you'll want to set it back to the previous value once you're done (you can get the current value by running psql -c "SHOW log_statement" before you do the two commands above).
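For example, a minimal round trip could look like this (a sketch, assuming PostgreSQL 9.4 or later so that ALTER SYSTEM is available):
psql -c "SHOW log_statement"
psql -c "ALTER SYSTEM SET log_statement TO 'all'"
psql -c "SELECT pg_reload_conf()"
# ... reproduce the queries and read the server log ...
psql -c "ALTER SYSTEM RESET log_statement"
psql -c "SELECT pg_reload_conf()"
Note that ALTER SYSTEM RESET reverts to whatever postgresql.conf specifies, so if the value had been set elsewhere you may prefer an explicit ALTER SYSTEM SET back to the value SHOW reported.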
In order to see the parameters, you would give tshark another field to print:
-e pgsql.query -e pgsql.val.data
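Putting that together with the capture command from the question, the full invocation would be something like:
tshark -f 'tcp dst port 5432' -O PGSQL \
-d 'tcp.port==5432,pgsql' -T fields \
-e pgsql.query -e pgsql.val.data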
But this is going to be a mess, especially if you use prepared statements. You should really just figure out what you are doing wrong with log_statement='all'; it logs all statements, not just a sample of them. Maybe you have that setting countermanded per user, per database, or per connection.
I have a table in pgAdmin4 which consists of 100,000 rows and 23 columns. I need to benchmark PostgreSQL on this specific table using pgbench, but I can't understand what parameters I should use. The database name is desdb and the table is called test.
pgAdmin4 is not a database server, it is a client. You don't have tables "on" pgAdmin4; pgAdmin4 is just one way of accessing tables which live on an actual server.
You don't benchmark tables, you benchmark queries. Knowing nothing about the table other than its name, all I could propose for a query is something like:
select * from test
Or
select count(*) from test
You could put that in a file test.sql, then run:
pgbench -n -f test.sql -T60 -P5 desdb
If you are like me and don't like littering your filesystem with bunches of tiny files of no particular interest, and if you use the bash shell, you can skip creating a test.sql file and make the command dynamic instead:
pgbench -n -f <(echo 'select * from test') -T60 -P5 desdb
Whether that is a meaningful query to be benchmarking, I don't know. Do you care about how fast you can read (and then throw away) all columns for all rows in the table?
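If the table happens to have an integer primary key (an assumption; the question doesn't say), random single-row lookups are often a more meaningful benchmark. pgbench's \set meta-command can generate the ids (the random() expression form needs pgbench 9.6 or later):
\set rid random(1, 100000)
select * from test where id = :rid;
Save that as lookup.sql and run it the same way: pgbench -n -f lookup.sql -T60 -P5 desdb.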
You can find more details regarding pgbench at: https://www.cloudbees.com/blog/tuning-postgresql-with-pgbench.
I have multiple tables that are created in the same way (same columns, indexes, etc.)
I would like to have one sql file for creating them all without duplicating the create statements.
Is there a way to use some kind of placeholder in sql file which would be substituted when executing the sql file with a parameter?
For example I would like to have below sql statement:
drop table if exists schema.%PLACEHOLDER%;
create table schema.%PLACEHOLDER%(id text, data text);
And execute such script with:
psql -f mysqlfile.sql -magic_parameter my_desired_table_name
Is this possible when executing PostgreSQL SQL files, or is there maybe another way to achieve the same (other than using sed)?
Since you are using psql, you can use variables as follows:
drop table if exists schema.:placeholder;
The invocation is:
psql -f mysqlfile.sql -v placeholder=table_name
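So a complete version of the file from the question could look like this (a sketch; :"placeholder" interpolates the value as a double-quoted identifier, which is safer for table names than the bare :placeholder form):
drop table if exists schema.:"placeholder";
create table schema.:"placeholder"(id text, data text);
invoked as:
psql -f mysqlfile.sql -v placeholder=my_desired_table_name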
I have my database hosted on heroku, and I want to download specific parts of the database (e.g. all the rows with id > x from table 1, all the rows with name = x from table 2, etc.) in a single file.
From some research and asking a question here it seems that some kind of modified pg_dump would solve my problem. However, I won't be able to use pg_dump because I won't have access to the command line (basically I want to be able to click a button in my web app and it will generate + download the database file).
So my new strategy is to use the Postgres COPY command. I'll go through the various tables in my server database, running COPY (SELECT * FROM ... WHERE ...) TO filename, where filename is just a temporary file that I will download when complete.
The issue is that this filename file will just have the rows, so I can't just turn around and import it into pgadmin. Assuming I have an 'empty' database set up (the schema, indices, and stuff are all already set up), is there a way I can format my filename file so that it can be easily imported into a postgres db?
Building on my comment about to/from stdout/stdin, and answering the actual question of including multiple tables in one file: you can construct the output file to interleave copy ... from stdin statements with the actual data and load it via psql. For example, psql will accept input files that look like this:
copy my_table (col1, col2, col3) from stdin;
foo bar baz
fizz buzz bizz
\.
(Note the trailing \. and that the separators should be tabs; you could also specify the delimiter option in the copy command).
psql will treat everything between the ; and the \. as stdin. This essentially emulates what pg_dump does when you export table data and no schema (e.g., pg_dump -a -t my_table).
The resulting load could be as simple as psql mydb < output.dump.
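To produce such a file in the first place, one option (a sketch; my_table, its columns, and the id > 42 filter are stand-ins for your real tables and WHERE clauses) is to interleave the copy header, a COPY ... TO STDOUT, and the \. terminator, repeating the three steps once per table:
{
  echo 'copy my_table (col1, col2, col3) from stdin;'
  psql mydb -c 'copy (select col1, col2, col3 from my_table where id > 42) to stdout'
  printf '%s\n' '\.'
} > output.dump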
I am not a programmer, so I am struggling a bit with this.
I have a batch file that connects to my PostgreSQL server and then runs a SQL script. Everything works as expected. My question is how to pass a variable (if possible) from one to the other.
Here is my batch file:
set PGPASSWORD=xxxx
cls
@echo off
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres -f C:\TotalProteinImport.sql
And here's the script:
copy totalprotein from 'c:/TP.csv' DELIMITERS ',' CSV HEADER;
update anagrafica
set pt=(select totalprotein.resultvalue from totalprotein where totalprotein.accessionnbr=anagrafica.id)
where data_analisi = '12/23/2011';
delete from totalprotein;
This is working great. Now the question is: how could I pass a variable that would carry the date for data_analisi?
Like in the batch file, "Please enter date", and then the value is passed to the sql script.
You could create a function out of your SQL script like this:
CREATE OR REPLACE FUNCTION f_myfunc(date)
RETURNS void AS
$BODY$
BEGIN
CREATE TEMP TABLE t_tmp ON COMMIT DROP AS
SELECT * FROM totalprotein LIMIT 0; -- copy table-structure from table
COPY t_tmp FROM 'c:/TP.csv' DELIMITERS ',' CSV HEADER;
UPDATE anagrafica a
SET pt = t.resultvalue
FROM t_tmp t
WHERE a.data_analisi = $1
AND t.accessionnbr = a.id;
-- Temp table is dropped automatically at end of session
-- In this case (ON COMMIT DROP) after the transaction
END
$BODY$
LANGUAGE plpgsql;
PL/pgSQL is used here rather than LANGUAGE sql, because the entire body of a SQL function is parsed before any of it runs, so the temp table created by the first statement would not yet exist when the COPY is parsed; PL/pgSQL plans each statement as it executes.
As you can see I have made a couple of modifications to your script that should make it faster, cleaner and safer.
Major points
For reading data into an empty table temporarily, use a temporary table. Saves a lot of disc writes and is much faster.
To simplify the process I use your existing table totalprotein as template for the creation of the (empty) temp table.
If you want to delete all rows of a table use TRUNCATE instead of DELETE FROM. Much faster. In this particular case, you need neither. The temporary table is dropped automatically. See comments in function.
The way you updated anagrafica.pt you would set the column to NULL, if anything goes wrong in the process (date not found, wrong date, id not found ...). The way I rewrote the UPDATE, it only happens if matching data are found. I assume that is what you actually want.
Then ask for user input in your shell script and call the function with the date as parameter. That's how it could work in a Linux shell (as user postgres, with password-less access using the IDENT method in pg_hba.conf):
#! /bin/sh
# Ask for date. 'YYYY-MM-DD' = ISO date-format, valid with any postgres locale.
echo -n "Enter date in the form YYYY-MM-DD and press [ENTER]: "
read date
# check validity of $date ...
psql db -p5432 -c "SELECT f_myfunc('$date')"
-c makes psql execute a single SQL command and then exit. I wrote a lot more on psql and its command line options yesterday in a somewhat related answer.
The creation of the corresponding Windows batch file remains as an exercise for you.
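That said, a rough starting point might be (an untested sketch; set /p reads the user's input into %dateimport%, the same variable the Windows call below uses):
@echo off
set /p dateimport=Enter date in the form YYYY-MM-DD and press [ENTER]: 
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres -c "SELECT f_myfunc('%dateimport%')"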
Call under Windows
The error message tells you:
Function tpimport(unknown) does not exist
Note the lower case letters: tpimport. I suspect you used mixed case letters to create the function. So now you have to enclose the function name in double quotes every time you use it.
Try this one (edited quotes!):
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres
-c "SELECT ""TPImport""('%dateimport%')"
Note how I use single and double quotes here. I guess this could work under Windows. See here.
You made it hard for yourself when you chose to use mixed case identifiers in PostgreSQL - a folly which I never tire of warning against. Now you have to double quote the function name "TPImport" every time you use it. While perfectly legit, I would never do that. I use lower case letters for identifiers. Always. This way I never mix up lower / upper case and I never have to use double quotes.
The ultimate fix would be to recreate the function with a lower case name (just leave away the double quotes and it will be folded to lower case automatically). Then the function name will just work without any quoting.
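For example, a rename achieves the same (a sketch, assuming the function takes a single date parameter, as the call above suggests):
ALTER FUNCTION "TPImport"(date) RENAME TO tpimport;
SELECT tpimport('2011-12-23');  -- no quoting needed any more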
Read the basics about identifiers here.
Also, consider upgrading to a more recent version of PostgreSQL; 8.3 is a bit rusty by now.
psql supports textual replacement variables. Within psql they can be set using \set and used using :varname.
\set xyz 'abcdef'
select :'xyz';
?column?
----------
abcdef
These variables can be set using command line arguments also:
psql -v xyz=value
The only problem is that these textual replacements always need some fiddling with quoting as shown by the first \set and select.
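For reference, psql has three interpolation forms (per its documentation): :var substitutes the raw text, :'var' wraps it as a single-quoted literal, and :"var" as a double-quoted identifier:
\set val abcdef
select :'val';       -- becomes: select 'abcdef';
select 1 as :"val";  -- becomes: select 1 as "abcdef";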
After creating the function in Postgres, you must create a .bat file in the bin directory of your Postgres version, for example C:\Program Files\PostgreSQL\9.3\bin. Here you write:
@echo off
cd C:\Program Files\PostgreSQL\9.3\bin
psql -p 5432 -h localhost -d myDataBase -U postgres -c "select * from myFunction()"