Why am I not able to get a specific column in psql - postgresql

I have a PostgreSQL database called cls and a user called cls. When I try to select a specific column from an existing table (name: test), I am not able to retrieve it.
Snippets below:
psql -U cls

cls=# select * from test;
  name  |   ip    | user | password | group |         created_on
--------+---------+------+----------+-------+----------------------------
 server | 1.1.1.1 | test | pwd      | gp1   | 2022-08-04 13:55:00.765548
cls=# select ip from test where name='server';
ERROR:  column "ip" does not exist
LINE 1: select ip from test where name='server';
               ^
HINT:  Perhaps you meant to reference the column "test.
ip".

cls=# select test.ip from test where name='server';
ERROR:  column test.ip does not exist
LINE 1: select test.ip from test where name='server';
               ^
HINT:  Perhaps you meant to reference the column "test.
ip".

cls=# select t.ip from test t;
ERROR:  column t.ip does not exist
LINE 1: select t.ip from test t;
               ^
HINT:  Perhaps you meant to reference the column "t.
ip".
I tried double quotes and single quotes but no luck.

As the error message says, your column isn't called ip - notice the "funny" (invisible) character before the i, which is why the HINT wraps the name as "test.
ip" across two lines. You can confirm and fix this as sketched below.
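A minimal sketch of that check and fix, assuming the stray character turns out to be a leading space (the " ip" in the ALTER is only a placeholder - double-quote whatever exact name the first query returns):

-- List the real column names; a stray character shows up as an
-- unexpected length or an odd leading byte:
SELECT column_name, length(column_name)
FROM information_schema.columns
WHERE table_name = 'test';

-- Rename the damaged column, quoting its exact current name:
ALTER TABLE test RENAME COLUMN " ip" TO ip;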


psql SQL Interpolation in a code block

In some of my scripts I use the SQL Interpolation feature of the psql utility:
basic.sql:
update :schema.mytable set ok = true;
> psql -h 10.0.0.1 -U postgres -f basic.sql -v schema=myschema
Now I need a bit more complicated scenario: I need to specify the schema name (and desirably some other things) inside a PL/pgSQL code block:
pg.sql:
do
$$
begin
update :schema.mytable set ok = true;
end;
$$
But unfortunately this does not work, since psql does not replace :variables inside $$ quoting.
Is there a way to work around this in general? Or, more specifically, how can I substitute schema names into a PL/pgSQL code block or function definition?
From the docs you referenced:
Variable interpolation will not be performed within quoted SQL
literals and identifiers. Therefore, a construction such as ':foo'
doesn't work to produce a quoted literal from a variable's value (and
it would be unsafe if it did work, since it wouldn't correctly handle
quotes embedded in the value).
It does not matter whether the quoting uses dollar signs or single quotes - it won't work either way, e.g.:
do
'
begin
update :schema.mytable set ok = true;
end;
'
ERROR: syntax error at or near ":"
To pass a variable into a quoted statement another way, you can try using shell variables, e.g.:
MacBook-Air:~ vao$ cat do.sh; export schema_name='smth' && bash do.sh
psql -X so <<EOF
\dn+
do
\$\$
begin
execute format ('create schema %I','$schema_name');
end;
\$\$
;
\dn+
EOF
      List of schemas
   Name   |  Owner   | Access privileges |       Description
----------+----------+-------------------+------------------------
 public   | vao      | vao=UC/vao       +| standard public schema
          |          | =UC/vao           |
 schema_a | user_old |                   |
(2 rows)
DO
      List of schemas
   Name   |  Owner   | Access privileges |       Description
----------+----------+-------------------+------------------------
 public   | vao      | vao=UC/vao       +| standard public schema
          |          | =UC/vao           |
 schema_a | user_old |                   |
 smth     | vao      |                   |
(3 rows)
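Another workaround that keeps everything inside the SQL script is to smuggle the psql variable in through a session setting: psql's :'var' interpolation works outside the dollar quotes, and current_setting() can read the value back inside the block. A minimal sketch (myvars.schema is an arbitrary placeholder setting name; run with psql -v schema=myschema -f pg.sql; undeclared dotted setting names need PostgreSQL 9.2+):

-- :'schema' is interpolated by psql because it sits outside the $$ quotes
SET myvars.schema = :'schema';

DO $$
BEGIN
  -- read the setting back inside the block and build the statement safely
  EXECUTE format('UPDATE %I.mytable SET ok = true',
                 current_setting('myvars.schema'));
END;
$$;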

What is the simplest way to migrate data from MySQL to DB2

I need to migrate data from MySQL to DB2. Both DBs are up and running.
I tried mysqldump with --no-create-info --extended-insert=FALSE --complete-insert and, with a few changes to the output (e.g. changing ` to "), I get a satisfactory result, but sometimes I hit weird exceptions, like:
... does not have an ending string delimiter. SQLSTATE=42603
Ideally I would want to have a routine that is as general as possible, but as an example here, let's say I have a DB2 table that looks like:
db2 => describe table "mytable"

                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
id                              SYSIBM    BIGINT                       8     0 No
name                            SYSIBM    VARCHAR                    512     0 No

2 record(s) selected.
Its MySQL counterpart being
mysql> describe mytable;
+-------+--------------+------+-----+---------+----------------+
| Field | Type         | Null | Key | Default | Extra          |
+-------+--------------+------+-----+---------+----------------+
| id    | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| name  | varchar(512) | NO   |     | NULL    |                |
+-------+--------------+------+-----+---------+----------------+
2 rows in set (0.01 sec)
Let's assume the DB2 and MySQL databases are called mydb.
Now, if I do
mysqldump -uroot mydb mytable --no-create-info --extended-insert=FALSE --complete-insert | # mysqldump options: do not output a CREATE TABLE statement; one INSERT statement per record; output table column names
sed -n -e '/^INSERT/p' | # only keep lines beginning with "INSERT"
sed 's/`/"/g' |          # replace ` with "
sed 's/;$//g' |          # remove `;` at end of each INSERT
sed "s/\\\'/''/g"        # replace `\'` with `''`, see http://stackoverflow.com/questions/2442205/how-does-one-escape-an-apostrophe-in-db2-sql and http://stackoverflow.com/questions/2369314/why-does-sed-require-3-backslashes-for-a-regular-backslash
I get:
INSERT INTO "mytable" ("id", "name") VALUES (1,'record 1')
INSERT INTO "mytable" ("id", "name") VALUES (2,'record 2')
INSERT INTO "mytable" ("id", "name") VALUES (3,'record 3')
INSERT INTO "mytable" ("id", "name") VALUES (4,'record 4')
INSERT INTO "mytable" ("id", "name") VALUES (5,'" "" '' '''' \"\" ')
This output can be used as a DB2 query and it works well.
Any idea how to solve this more efficiently/generally? Any other suggestions?
After playing around a bit, I came up with the following routine, which I believe to be fairly general, robust and scalable.
1. Run the following command:
mysqldump -uroot mydb mytable --no-create-info --extended-insert=FALSE --complete-insert | # mysqldump options: do not output a CREATE TABLE statement; one INSERT statement per record; output table column names
sed -n -e '/^INSERT/p' |    # only keep lines beginning with "INSERT"
sed 's/`/"/g' |             # replace ` with "
sed -e 's/\\"/"/g' |        # replace `\"` with `"` (mysql escapes double quotes)
sed "s/\\\'/''/g" > out.sql # replace `\'` with `''`, see http://stackoverflow.com/questions/2442205/how-does-one-escape-an-apostrophe-in-db2-sql and http://stackoverflow.com/questions/2369314/why-does-sed-require-3-backslashes-for-a-regular-backslash
Note: unlike in the question, the trailing `;` are not removed here.
2. Upload the file to the DB2 server:
scp out.sql user@myserver:out.sql
3. Run the queries from the file:
db2 -tvsf /path/to/query/file/out.sql
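For large tables, a delimited-file load may be simpler and faster than replaying one INSERT per row; a rough sketch of that route (untested here - the /tmp/mytable.del path and the plain comma formatting are assumptions to adapt):

-- MySQL side: dump the table as a comma-delimited file
SELECT id, name
FROM mytable
INTO OUTFILE '/tmp/mytable.del'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"';

-- DB2 side (CLP), after copying the file over:
db2 "IMPORT FROM /tmp/mytable.del OF DEL INSERT INTO mytable"

This avoids generating SQL text altogether, though the quote-escaping conventions of the two tools still need checking against each other.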

Postgres json select not ignoring quotes

I have the following table and setup
create table test (
  id serial primary key,
  name text not null,
  meta json
);
insert into test (name, meta) values ('demo1', '{"name" : "Hello"}');
However, when I run this query, this is the result
select * from test;
id | name | meta
----+-------+--------------------
1 | demo1 | {"name" : "Hello"}
(1 row)
but
select * from test where meta->'name' = 'Hello';
ERROR: operator does not exist: json = unknown
LINE 1: select * from test where meta->'name' = 'Hello';
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
select * from test where cast(meta->'name' as text) = 'Hello';
id | name | meta
----+------+------
(0 rows)
and this works
select * from test where cast(meta->'name' as text) = '"Hello"';
id | name | meta
----+-------+--------------------
1 | demo1 | {"name" : "Hello"}
(1 row)
Can anyone tell me what the relevance of this quote is and why it's not doing a simple string search/comparison? Alternatively, does this have something to do with the casting?
That's because -> returns the field as json, not as text, so you need a cast to tell PostgreSQL which data type you are after - and casting json to text keeps the JSON quoting (hence the "Hello" with quotes).
To run your query the way you want, use ->> instead, which returns the JSON element as text; see JSON Functions and Operators in the docs.
So your query should look like:
select *
from test
where meta->>'name' = 'Hello';
See it working here: http://sqlfiddle.com/#!15/bf866/8
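A quick way to see the difference side by side - a minimal sketch against the same test table:

select meta->'name'  as as_json,  -- json value, quotes kept:    "Hello"
       meta->>'name' as as_text   -- text value, quotes removed: Hello
from test;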

Error when querying PostgreSQL using range operators

I am trying to query a postgresql (v 9.3.6) table with a tstzrange to determine if a given timestamp exists within the table defined as
CREATE TABLE sensor(
  id serial,
  hostname varchar(64) NOT NULL,
  ip varchar(15) NOT NULL,
  period tstzrange NOT NULL,
  PRIMARY KEY(id),
  EXCLUDE USING gist (hostname WITH =, period WITH &&)
);
I am using psycopg2 and when I try the query:
sql = "SELECT id FROM sensor WHERE %s <# period;"
cursor.execute(sql,(isotimestamp,))
I get the error
psycopg2.DataError: malformed range literal:
...
DETAIL: Missing left parenthesis or bracket.
I've tried various type castings to no avail.
I've managed a workaround using the following query:
sql = "SELECT * FROM sensor WHERE %s BETWEEN lower(period) AND upper(period);"
but would like to know why I am having problem with the range operators. Is it my code or psycopg2 or what?
Any help is appreciated.
EDIT 1:
In response to the comments, I have attempted the same query on a simple 1-row table in postgresql like below
=> select * from sensor;
 session_id | hostname |    ip     |                              period
------------+----------+-----------+-------------------------------------------------------------------
          1 | bob      | 127.0.0.1 | ["2015-02-08 19:26:42.032637+00","2015-02-08 19:27:28.562341+00")
(1 row)
Now by using the "@>" operator I get the following error:
=> select * from sensor where period @> '2015-02-08 19:26:43.04+00';
ERROR: malformed range literal: "2015-02-08 19:26:43.04+00"
LINE 1: select * from sensor where period @> '2015-02-08 19:26:42.03...
This appears to be the same as the psycopg2 error - a malformed range literal - so I thought I would try typecasting to timestamptz, as below:
=> select * from sensor where sensor.period @> '2015-02-08 19:26:42.032637+00'::timestamptz;
 session_id | hostname |    ip     |                              period
------------+----------+-----------+-------------------------------------------------------------------
          1 | feral    | 127.0.0.1 | ["2015-02-08 19:26:42.032637+00","2015-02-08 19:27:28.562341+00")
(1 row)
So it appears that it was my mistake: the literal has to be typecast, or it is assumed to be a range. Using psycopg2, the query can be executed with:
sql = "select * from sensor where period @> %s::timestamptz"

How to Check for Two Columns and Query Every Table When They Exist?

I'm interested in doing a COUNT(*), SUM(LENGTH(blob)/1024./1024.), and ORDER BY SUM(LENGTH(blob)) across my entire database wherever a column 'blob' exists. For tables where synchlevel does not exist, I still want the output; where it does exist, I'd like to GROUP BY that column:
Example
+--------+------------+--------+---------+
| table  | synchlevel | count  | size_mb |
+--------+------------+--------+---------+
| tableA | 0          | 924505 | 3013.47 |
| tableA | 7          | 981    | 295.33  |
| tableB | 6          | 1449   | 130.50  |
| tableC | 1          | 64368  | 68.43   |
| tableD | NULL       | 359    | .54     |
| tableD | NULL       | 778    | .05     |
+--------+------------+--------+---------+
I would like to do a pure SQL solution, but I'm having a bit of difficulty with that. Currently, I'm wrapping some SQL into BASH.
#!/bin/bash
USER=$1
DBNAME=$2

function psql_cmd(){
    cmd=$1
    prefix='\pset border 2 \\ '
    echo $prefix $cmd | psql -U $USER $DBNAME | grep -v "Border\| row"
}

function synchlevels(){
    echo "===================================================="
    echo "                 SYNCH LEVEL STATS                  "
    echo "===================================================="
    tables=($(psql -U $USER -tc "SELECT table_name FROM information_schema.columns
                                 WHERE column_name = 'blob';" $DBNAME))
    for table in ${tables[@]}; do
        count_size="SELECT t.synchlevel,
                           COUNT(t.blob) AS count,
                           to_char(SUM(LENGTH(t.blob)/1024./1024.),'99999D99') AS size_mb
                    FROM $table AS t
                    GROUP BY t.synchlevel
                    ORDER BY SUM(LENGTH(t.blob)) DESC;"
        echo $table
        psql_cmd "$count_size"
    done
    echo "===================================================="
}
I could extend this by creating a second BASH array of the tables which have the 'synchlevel' column, comparing, and using that list to drive the SQL, but I was wondering if there is a way to do the SQL portion purely in SQL, without building these lists in BASH and doing the comparisons externally - i.e. I want to avoid looping through the tables externally and making numerous queries in tables=($(psql -U $USER....
I've tried the following SQL to test on a table where I know the column doesn't exist...
SELECT
  CASE WHEN EXISTS(SELECT * FROM information_schema.columns
                   WHERE column_name = 'synchlevel'
                   AND table_name = 'archivemetadata')
       THEN synchlevel
  END,
  COUNT(blob) AS count,
  to_char(SUM(LENGTH(blob)/1024./1024.),'99999D99') AS size_mb
FROM archivemetadata, information_schema.columns AS info
WHERE info.column_name = 'blob'
However, it fails on THEN synchlevel for tables where the column doesn't exist. It seems like it should be simple, but I can't find a way to do this that doesn't require either:
1. Resorting to external array comparisons in BASH. Can be done, but I'd like to simplify my solution rather than add another layer.
2. Creating PL/pgSQL functions. This script is really just to help with some database data analysis for improving performance in a third-party software. We are not a shop of DB admins, so I would prefer not to dive into PL/pgSQL, as that would require more folks from our shop to become acquainted with the language in order to support the script. Again, simplicity is the motivation here.
PostgreSQL 8.4 is the engine. (We cannot upgrade due to security constraints imposed by an overseeing IT body.)
Thanks for any suggestions you might have!
The following is untested, but how about creating some dynamic SQL in one psql session and piping it to another?
psql -d <yourdb> -qtAc "
select 'select ' || (case when info.column_name = 'synchlevel' then 'synchlevel, ' else '' end) ||
       'count(*) as cnt, ' ||
       'to_char(SUM(LENGTH(blob)::NUMERIC/1024/1024), ''99999D99'') AS size_mb ' ||
       'from ' || info.table_name ||
       (case when info.column_name = 'synchlevel' then ' group by synchlevel order by synchlevel' else '' end) || ';'
from information_schema.columns as info
where info.table_name IN (select distinct table_name
                          from information_schema.columns
                          where column_name = 'blob')" | psql -d <yourdb>
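One caveat: information_schema.columns has a row per column, so the query above emits one generated statement per column of each matching table, producing duplicates. A variant that emits exactly one statement per table - untested, in the same spirit as the answer above - could join the two table lists instead:

select 'select ' ||
       (case when s.table_name is not null then 'synchlevel, ' else '' end) ||
       'count(*) as cnt, ' ||
       'to_char(sum(length(blob)::numeric/1024/1024), ''99999D99'') as size_mb ' ||
       'from ' || b.table_name ||
       (case when s.table_name is not null
             then ' group by synchlevel order by synchlevel' else '' end) || ';'
from (select distinct table_name
      from information_schema.columns where column_name = 'blob') b
left join (select distinct table_name
           from information_schema.columns where column_name = 'synchlevel') s
       on s.table_name = b.table_name;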