SQL Parameters missing when using tshark against PostgreSQL v12 traffic

I'm attempting to spy on non-SSL PostgreSQL traffic with tshark, using the following command:
# tshark -f 'tcp dst port 5432' -O PGSQL \
-d 'tcp.port==5432,pgsql' -T fields -e pgsql.query
I am able to see SQL queries, but all the actual values/parameters are missing (replaced instead with the placeholders $1, $2, $3, etc.). Example output is as follows:
...
INSERT INTO mdl_logstore_standard_log
(eventname,component,action,target,objecttable,objectid,crud,edulevel,
contextid,contextlevel,contextinstanceid,userid,courseid,relateduserid,
anonymous,other,timecreated,origin,ip,realuserid)
VALUES
($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20),
($21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40)
INSERT INTO mdl_backup_files_temp (contextid,component,filearea,itemid,info,backupid)
VALUES($1,$2,$3,$4,$5,$6) RETURNING id RELEASE SAVEPOINT
moodle_pg_savepoint; SAVEPOINT moodle_pg_savepoint SELECT * FROM mdl_backup_ids_temp
WHERE backupid = $1 AND itemname = $2 AND itemid = $3 INSERT INTO
mdl_backup_files_temp (contextid,component,filearea,itemid,info,backupid)
VALUES($1,$2,$3,$4,$5,$6) RETURNING id RELEASE SAVEPOINT moodle_pg_savepoint;
SAVEPOINT moodle_pg_savepoint
...
What am I missing here, and how can I view the values/parameters as well?

In order to see the values passed to a prepared statement, you'll need to set log_statement to either 'mod' or 'all'. log_statement logs the statement that is about to be executed, including the parameters/arguments used.
I think the easiest way to turn it on is by doing:
psql -c "ALTER SYSTEM SET log_statement TO 'all'"
psql -c "SELECT pg_reload_conf()"
From there, you should be able to view the parameters.
Bear in mind, this has the potential to generate a lot of log traffic, so you'll want to set it back to the previous value once you're done (you can get the current value by calling psql -c "SHOW log_statement" before you run the two commands above).
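For example, a small shell sketch (assuming password-less psql access as a superuser) that saves the old value, enables full statement logging, and restores the previous setting afterwards:
old=$(psql -Atc "SHOW log_statement")
psql -c "ALTER SYSTEM SET log_statement TO 'all'"
psql -c "SELECT pg_reload_conf()"
# ... capture what you need, then put the old value back:
psql -c "ALTER SYSTEM SET log_statement TO '$old'"
psql -c "SELECT pg_reload_conf()"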

In order to see the parameters, you would give tshark another field to print:
-e pgsql.query -e pgsql.val.data
But this is going to be a mess, especially if you use prepared statements. You should really just figure out what you are doing wrong with log_statement='all'; it will log all statements, not just a sample of them. Maybe you have that setting countermanded per user, per database, or per connection.
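For what it's worth, combining both fields with the capture command from the question would look something like this:
# tshark -f 'tcp dst port 5432' -d 'tcp.port==5432,pgsql' \
-T fields -e pgsql.query -e pgsql.val.data
The query text (from Parse messages) and the parameter values (from Bind messages) arrive in different packets, so you'll have to correlate them yourself, which is part of why this gets messy.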

Related

How to use pgbench?

I have a table in pgAdmin4 which consists of 100,000 rows and 23 columns. I need to benchmark PostgreSQL on this specific table using pgbench, but I can't work out what parameters I should use. The database is named desdb and the table is called test.
pgAdmin4 is not a database server; it is a client. You don't have tables "on" pgAdmin4; it is just one way of accessing tables which live on an actual server.
You don't benchmark tables, you benchmark queries. Knowing nothing about the table other than its name, all I could propose for a query is something like:
select * from test
Or
select count(*) from test
You could put that in a file test.sql, then run:
pgbench -n -f test.sql -T60 -P5 desdb
If you are like me and don't like littering your filesystem with bunches of tiny files of no particular interest, and if you use the bash shell, you can skip creating a test.sql file and make it dynamic instead:
pgbench -n -f <(echo 'select * from test') -T60 -P5 desdb
Whether that is a meaningful query to be benchmarking, I don't know. Do you care about how fast you can read (and then throw away) all columns for all rows in the table?
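If the table has something like a numeric key column, a slightly more realistic script could parameterize the lookup. A sketch, assuming a hypothetical id column and pgbench 9.6 or later (which added random() in \set):
\set rid random(1, 100000)
SELECT * FROM test WHERE id = :rid;
Save that as test.sql and run it with the same pgbench invocation as above.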
You can find more details about pgbench here: https://www.cloudbees.com/blog/tuning-postgresql-with-pgbench.

Greenplum to file using PSQL

I'm trying to export data from Greenplum to a text file (client-side) with a pipe delimiter, using psql and \copy. In the output I see that a single backslash is converted to a double backslash and a tab is converted to \t.
Example
N\A is converted to N\\A
So how do I get just N\A instead of N\\A, and just spaces instead of \t?
Note: I'm only allowed to use \copy. Since my file is huge, I run into disk-space issues when using sed or Perl for find and replace.
Assuming you don't have any "^" characters, you could use that as the escape character.
copy tpcds.call_center to stdout with delimiter '|' escape '^';
More on copy can be found here: https://www.postgresql.org/docs/8.2/static/sql-copy.html
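Since the question says only \copy is allowed: \copy takes essentially the same options and writes the file on the client side, so a sketch of the equivalent (assuming Greenplum's psql passes the text-format escape option through) would be:
\copy tpcds.call_center to 'call_center.txt' with delimiter '|' escape '^'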
This technique will be relatively slow and put a burden on the Master. If you used gpfdist instead, you could leverage the parallelism in the cluster and bypass the master. This solution is ideal for unloading large amounts of data.
First, start the gpfdist process:
[gpadmin@gpdbsne ~]$ gpfdist -p 8888 > gpfdist_8888.log 2>&1 < gpfdist_8888.log &
[1] 2255
Now, you can create the external table.
[gpadmin@gpdbsne ~]$ psql
SET
Timing is on.
psql (8.2.15)
Type "help" for help.
gpadmin=# create writable external table tpcds.et_call_center
(like tpcds.call_center)
location ('gpfdist://gpdbsne:8888/call_center.txt')
format 'text' (delimiter '|' escape '^');
NOTICE: Table doesn't have 'distributed by' clause, defaulting to distribution columns from LIKE table
CREATE EXTERNAL TABLE
Time: 18.681 ms
Now, you insert the data:
gpadmin=# insert into tpcds.et_call_center select * from tpcds.call_center;
INSERT 0 6
Time: 72.653 ms
gpadmin=# \q
Verify:
[gpadmin@gpdbsne ~]$ wc -l call_center.txt
6 call_center.txt
In my example, I used the hostname "gpdbsne" which is accessible to all segments in this cluster. Typically, Greenplum uses a private network for communication between segments so this hostname will need to be connected to the private network.
Since the writable external table is written to with SQL, you can use whatever transformation logic you want in the SQL, so you can change tabs to spaces if you like. This eliminates the need for awk or sed to post-process the files. COPY can use SQL too, but like I said, it is slower than using writable external tables.
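For example, the insert into the writable external table could apply the transformation inline. A sketch with hypothetical columns (the real insert must list every column of the external table); cc_name stands in for whichever columns contain tabs:
insert into tpcds.et_call_center
select cc_call_center_sk,            -- hypothetical key column, passed through unchanged
       replace(cc_name, E'\t', ' ')  -- replace tabs with spaces on the way out
from tpcds.call_center;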

Passing null string value via environment variable to TSQL script

I have a DOS batch file I want to use to invoke a TSQL program.
I want to pass the names of the databases to use. This seems to work.
I want to pass the PREFIXES for the names of the tables I want to work with.
So for test tables, I want to pass a prefix so the script uses the test tables.
set svr=myserver
rem set db=myTESTdatabasename
set db=mydatabasename
rem set tp=TEST
set tp=
sqlcmd -S %svr% -d somename -i test01.sql
test01.sql looks like this:
use $(db)
go
select top 10 * into $(db).dbo.$(tp)dsttbl from $(db).dbo.$(tp)srctbl
It works fine for the test stuff, but for the real stuff, I just want to set the value of tp to null so that it will use the real table name and not the bogus table name.
The reason I'm doing this is because I don't know the names of everything that will be used on the actual databases. I'm trying to make it generic so I don't have to do a bunch of search replaces on what will be a very large sql program (the real sql program is already hundreds of lines).
In the test case, this would resolve to
select top 10 * into myTESTdatabasename.dbo.TESTdsttbl from myTESTdatabasename.dbo.TESTsrctbl
For the production runs, it should resolve to
select top 10 * into mydatabasename.dbo.dsttbl from mydatabasename.dbo.srctbl
The problem seems to be that it doesn't like null values for $(tp), or perhaps that it's getting an undefined variable.
I experimented some with the syntax, and as Preet Sangha pointed out, you should use the -v command line option.
The reason is that setting a variable to the empty string in a batch script undefines it.
If you want to set the database name in the top of the batch file you can still use set, like this:
set db_to_use=
Then you can use this (undefined) variable in the sqlcmd using the /V option:
sqlcmd -S %svr% -d somename -v db="%db_to_use%" -i test01.sql
...or you can just set the value directly in the sqlcmd line:
sqlcmd -S %svr% -d somename -v db="" -i test01.sql
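Putting the pieces together, the top of the batch file could look like this (a sketch; sqlcmd accepts several var=value pairs after a single -v, and inside a batch file an undefined %tp% expands to an empty string):
set svr=myserver
set db=mydatabasename
set tp=
sqlcmd -S %svr% -d somename -v db="%db%" tp="%tp%" -i test01.sql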

Automating PostgreSQL output to csv

mohpc04pp1: /h/u544835 % psql arco
Welcome to psql 8.1.17, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
WARNING: You are connected to a server with major version 8.3,
but your psql client is major version 8.2. Some backslash commands,
such as \d, might not work properly.
dbname=> \o /h/u544835/data25000.csv
dbname=> select url from urltable where scoreid=1 limit 25000;
dbname=> \q
This is taken from a link online and is basically what I have been doing, but what I need to do is make a script that I can use to produce CSV files daily.
So the aim of the script is to connect to the db, run the \o etc. commands, then close it,
but I'm having trouble scripting it to say "go into the psql arco database" and then run those queries.
The command line to connect to the db is psql arco; once the script recognises I'm in that database, it should perform those commands to automate a query to a CSV file.
If anyone can get me started or point me towards some reading material to get past that bit, it will be duly appreciated.
I'm running all this off a standard Windows XP machine, SSH'ing to a SLES web server that holds my PostgreSQL database, running psql version 8.1.17.
First of all you should fix your setup. As it turns out, we are dealing with PostgreSQL 8.1 here. This version reached end of life in 2010. You need to seriously think about upgrading, or at least remind the guys running the server. The current version is 9.1.
The command you are looking for:
psql arco -c "\copy (select url from urltable where scoreid=1 limit 25000) to '/h/u544835/data25000.csv'"
Assuming your db is named "arco". Adjusted for changed question (including changed port).
I now see version 8.1 popping up in your question, but it's all contradictory. You need Postgres 8.2 or later to use a query (instead of a table) with the \copy meta-command.
Details about psql in the fine manual.
Alternative approach that should work with obsolete PostgreSQL 8.1:
psql arco -o /h/u544835/data25000.csv -t -A -c 'SELECT url FROM urltable WHERE scoreid = 1 LIMIT 25000'
Find some more info about these command line options under this related question on dba.SE.
With function (syntax compatible with 8.1)
Another way would be to create a server side function (if you can!) that executes COPY from a temp table (old syntax - works with pg 8.1):
CREATE OR REPLACE FUNCTION f_copy_file()
RETURNS void AS
$BODY$
BEGIN
CREATE TEMP TABLE u_tmp AS (
SELECT url FROM urltable WHERE scoreid = 1 LIMIT 25000
);
COPY u_tmp TO '/h/u544835/data25000.csv';
DROP TABLE u_tmp;
END;
$BODY$
LANGUAGE plpgsql;
And then from the shell:
psql arco -c 'SELECT f_copy_file()'
Change the separator
\f sets the field separator. I quote the manual again:
-F separator
--field-separator=separator
Use separator as the field separator for unaligned output.
This is equivalent to \pset fieldsep or \f.
Or you can change the column separator in Excel, here are the instructions from MS.
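For example, combining these options on the command line (-A switches to unaligned output, without which the -F separator has no effect):
psql arco -A -t -F ',' -o /h/u544835/showme.csv -c 'SELECT * FROM storage'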
Thanks to Erwin's help and a link he posted for me to read up on, I managed to combine the two to get:
#!/bin/sh
dbname='arco'
username='' # If you actually supply a username, you need to add the -U switch!
psql $dbname $username << EOF
\f ,
\o /h/u544835/showme.csv
SELECT * FROM storage;
EOF
which will write my queries to a csv file etc for me.
With the script above, the output is not separated, so if I load it straight into Excel everything stays in the same column, which means I'm having a problem with the delimiter.
I've tried tab delimiters, and also tried , and ; etc., but none of them separate the output.
Is there an option I can use to see which delimiter my psql is using? Or a different way of dumping the data from a query into a file that can be read by Excel, with a separate column for each field?
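One thing worth checking against the script above: \f only takes effect in unaligned output mode, so a sketch of the same script with \a added (toggling from the default aligned mode to unaligned) would be:
#!/bin/sh
dbname='arco'
psql $dbname << EOF
\a
\f ,
\o /h/u544835/showme.csv
SELECT * FROM storage;
EOF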

PostgreSQL - batch + script + variable

I am not a programmer, so I am struggling a bit with this.
I have a batch file connecting to my PostgreSQL server and then running a SQL script. Everything works as expected. My question is how to pass a variable (if possible) from one to the other.
Here is my batch file:
set PGPASSWORD=xxxx
cls
@echo off
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres -f C:\TotalProteinImport.sql
And here's the script:
copy totalprotein from 'c:/TP.csv' DELIMITERS ',' CSV HEADER;
update anagrafica
set pt=(select totalprotein.resultvalue from totalprotein where totalprotein.accessionnbr=anagrafica.id)
where data_analisi = '12/23/2011';
delete from totalprotein;
This is working great; now the question is how I could pass a variable that would carry the date for data_analisi.
Like in the batch file, "Please enter date", and then the value is passed to the sql script.
You could create a function out of your SQL script like this:
CREATE OR REPLACE FUNCTION f_myfunc(date)
RETURNS void AS
$BODY$
CREATE TEMP TABLE t_tmp ON COMMIT DROP AS
SELECT * FROM totalprotein LIMIT 0; -- copy table-structure from table
COPY t_tmp FROM 'c:/TP.csv' DELIMITERS ',' CSV HEADER;
UPDATE anagrafica a
SET pt = t.resultvalue
FROM t_tmp t
WHERE a.data_analisi = $1
AND t.accessionnbr = a.id;
-- Temp table is dropped automatically at end of session
-- In this case (ON COMMIT DROP) after the transaction
$BODY$
LANGUAGE sql;
You can use language SQL for this kind of simple SQL batch.
As you can see I have made a couple of modifications to your script that should make it faster, cleaner and safer.
Major points
For reading data into an empty table temporarily, use a temporary table. Saves a lot of disc writes and is much faster.
To simplify the process I use your existing table totalprotein as template for the creation of the (empty) temp table.
If you want to delete all rows of a table use TRUNCATE instead of DELETE FROM. Much faster. In this particular case, you need neither. The temporary table is dropped automatically. See comments in function.
The way you updated anagrafica.pt you would set the column to NULL, if anything goes wrong in the process (date not found, wrong date, id not found ...). The way I rewrote the UPDATE, it only happens if matching data are found. I assume that is what you actually want.
Then ask for user input in your shell script and call the function with the date as parameter. That's how it could work in a Linux shell (as user postgres, with password-less access using the IDENT method in pg_hba.conf):
#! /bin/sh
# Ask for date. 'YYYY-MM-DD' = ISO date-format, valid with any postgres locale.
echo -n "Enter date in the form YYYY-MM-DD and press [ENTER]: "
read date
# check validity of $date ...
psql db -p5432 -c "SELECT f_myfunc('$date')"
-c makes psql execute a single SQL command and then exit. I wrote a lot more on psql and its command line options yesterday in a somewhat related answer.
The creation of the according Windows batch file remains as exercise for you.
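That said, a minimal (untested) sketch, reusing the psql path from the question and the set /p prompt built into cmd:
@echo off
set /p dateimport=Enter date in the form YYYY-MM-DD and press [ENTER]: 
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres -c "SELECT f_myfunc('%dateimport%')"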
Call under Windows
The error message tells you:
Function tpimport(unknown) does not exist
Note the lower case letters: tpimport. I suspect you used mixed case letters to create the function. So now you have to enclose the function name in double quotes every time you use it.
Try this one (edited quotes!):
C:\Progra~1\PostgreSQL\8.3\bin\psql -d Total -h localhost -p 5432 -U postgres ^
-c "SELECT ""TPImport""('%dateimport%')"
Note how I use single and double quotes here. I guess this could work under Windows. See here.
You made it hard for yourself when you chose to use mixed case identifiers in PostgreSQL - a folly which I never tire of warning against. Now you have to double quote the function name "TPImport" every time you use it. While perfectly legit, I would never do that. I use lower case letters for identifiers. Always. This way I never mix up lower / upper case and I never have to use double quotes.
The ultimate fix would be to recreate the function with a lower case name (just leave away the double quotes and it will be folded to lower case automatically). Then the function name will just work without any quoting.
Read the basics about identifiers here.
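A minimal demonstration of the folding rules (throwaway example functions, not the real import logic):
CREATE FUNCTION "TPImport"(d date) RETURNS int AS $$ SELECT 1 $$ LANGUAGE sql;
SELECT tpimport('2011-12-23');    -- ERROR: function tpimport(unknown) does not exist
SELECT "TPImport"('2011-12-23');  -- works, but the double quotes are needed every time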
Also, consider upgrading to a more recent version of PostgreSQL; 8.3 is a bit rusty by now.
psql supports textual replacement variables. Within psql they can be set using \set and referenced as :varname.
\set xyz 'abcdef'
select :'xyz';
?column?
----------
abcdef
These variables can be set using command line arguments also:
psql -v xyz=value
The only problem is that these textual replacements always need some fiddling with quoting as shown by the first \set and select.
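Applied to the script from the question, that could look like this (a sketch; the variable name dateimport is arbitrary, and the :'var' quoting syntax needs a reasonably modern psql):
psql -d Total -U postgres -v dateimport="12/23/2011" -f TotalProteinImport.sql
...with the script itself using the variable instead of the hard-coded date:
where data_analisi = :'dateimport';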
After creating the function in Postgres, you must create a .bat file in the bin directory of your Postgres version, for example C:\Program Files\PostgreSQL\9.3\bin. Here you write:
@echo off
cd C:\Program Files\PostgreSQL\9.3\bin
psql -p 5432 -h localhost -d myDataBase -U postgres -c "select * from myFunction()"