Pgcrypto doesn't work correctly on Windows - aes

I execute the same query on two different servers and I get different results. Does anyone know why?
select decrypt('\x792135887dace2af15d3f8548cc20919','\x265bb788ef6762abf50577f8a6669aa0','aes-ecb');
Debian PostgreSQL 9.3 server output (expected result):
"\xafb8967640bd0400309e7b0008acbb23"
Windows PostgreSQL 9.3 server output (apparently wrong result):
"\257\270\226v@\275\004\0000\236{\000\010\254\273#"

Your Windows 9.3 server has a non-default configuration: it has bytea_output set to escape mode rather than hex mode.
The result is actually the same; it's just being displayed in a different text representation of the same underlying binary.
regress=> SHOW bytea_output;
bytea_output
--------------
hex
(1 row)
regress=> SELECT BYTEA '\257\270\226v@\275\004\0000\236{\000\010\254\273#';
bytea
------------------------------------
\xafb8967640bd0400309e7b0008acbb23
(1 row)
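If you want both servers to print the same text, you can switch the Windows server to hex output. A minimal sketch, setting it for the current session only (it can also be set per user, per database, or in postgresql.conf):
regress=> SET bytea_output = 'hex';
SET
regress=> SELECT decrypt('\x792135887dace2af15d3f8548cc20919','\x265bb788ef6762abf50577f8a6669aa0','aes-ecb');
decrypt
------------------------------------
\xafb8967640bd0400309e7b0008acbb23
(1 row)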

Why does postgres `lower()` not lowercase È in my function

I have standard PostgreSQL servers from an old and a new Ubuntu repository.
The first is PostgreSQL server 8.3.12. Here the lower() function works correctly on the Danish letter 'Æ':
go=# select lower('Æ');
lower
-------
æ
(1 row)
Now on Postgres 9.1.9 the function doesn't work (it returns the same uppercase letter):
go=# select lower('Æ');
lower
-------
Æ
(1 row)
Does anyone have an idea how to change this behavior?
(my real problem is that ilike doesn't work on Danish characters either, but I thought the above example would make the problem more clear)
Your database was probably created with a different locale.
Check \l+ in psql on the old and new versions. They'll have different locale settings.
Other possibilities are different operating systems/versions. PostgreSQL uses libc's locale rules, and some platforms (notably Mac OS X) have a bit of a ... special ... libc.
On 9.1.9 with an en_AU.UTF-8 database running on Fedora 19 I get:
regress=> select lower('Æ');
lower
-------
æ
(1 row)
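If you'd rather not eyeball \l+ output, you can ask the catalog directly (datcollate and datctype exist from 8.4 on, so this works on the 9.1 server but not on 8.3, where the locale was cluster-wide):
regress=> SELECT datname, datcollate, datctype FROM pg_database WHERE datname = current_database();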
The problem turned out to be that the PostgreSQL cluster was created by the system (Ubuntu 12.04) upon installation, and had taken C as lc_ctype and SQL_ASCII as the encoding, instead of inheriting them from the system locale, which was en_DK.UTF-8. After a pg_dropcluster and a pg_createcluster, the correct locale and encoding were used, and everything started working correctly.
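For reference, recreating the cluster with the right locale looks roughly like this on Ubuntu (the version and cluster name here are assumptions, and this destroys the cluster's data, so dump anything you need first):
$ pg_dropcluster --stop 9.1 main
$ pg_createcluster --locale en_DK.UTF-8 --start 9.1 main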

Docker: difference between postgres:12 and postgres:12-alpine

Docker Hub contains several versions (tags) of the Postgres image, such as:
12.3, 12, latest
12.3-alpine, 12-alpine, alpine
...
What is the difference between postgres:12.3 and postgres:12.3-alpine?
Alpine is a much smaller Linux distribution, so it results in a smaller container than the full postgres image. It is argued that, because of its small size, Alpine is also more secure. One disadvantage, though, is that Alpine ships with a lot less functionality than an image running a full Linux OS.
postgres:12.3 is based on Debian; postgres:12.3-alpine is based on Alpine. Mainly the image sizes and contents differ.
You should be very careful when choosing a Docker image for your database. The fact is that PostgreSQL on Alpine and on Debian uses different collations.
Alpine images use the musl C library (version 1.1.16 at the time), which does not support LC_COLLATE, so despite the LANG variable being set, data is sorted bytewise (the C collation).
LC_COLLATE support was supposed to be added in musl version 1.1.17.
But this can create a problem of disastrous proportions: as soon as musl supports LC_COLLATE, all existing Postgres VARCHAR indexes will break, because their stored order will no longer match the collation.
See more discussion in:
https://github.com/docker-library/postgres/issues/273
https://github.com/docker-library/postgres/issues/327
...
I tried it myself:
FROM postgres:13-alpine
postgres=# select 'a' > 'A';
?column?
----------
t
(1 row)
postgres=# select 'a' < 'A';
?column?
----------
f
(1 row)
FROM postgres:13
postgres=# select 'a' > 'A';
?column?
----------
f
(1 row)
postgres=# select 'a' < 'A';
?column?
----------
t
(1 row)
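The Debian image can reproduce the Alpine ordering by forcing the C collation explicitly, which makes the difference easy to see side by side:
postgres=# select 'a' > 'A' COLLATE "C";
?column?
----------
t
(1 row)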

How to handle large result sets with psql?

I have a query which returns about 14M rows (I was not aware of this). When I used psql to run the query, my Fedora machine froze. Even after the query was done, I could not use Fedora anymore and had to restart the machine. When I redirected standard output to a file, Fedora also froze.
So how should I handle large resultsets with psql?
psql accumulates the complete result set in client memory by default. This behavior is common to all libpq-based Postgres applications and drivers. The solution is a cursor: then you fetch only N rows from the server at a time. psql can use cursors too. Set the FETCH_COUNT variable and psql will use a cursor with a batch retrieval size of FETCH_COUNT rows.
postgres=# \set FETCH_COUNT 1000
postgres=# select * from generate_series(1,100000); -- big query
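Under the hood this is the same as declaring a cursor yourself and fetching in batches, which any libpq-based client can do. A minimal sketch of the explicit form:
postgres=# BEGIN;
postgres=# DECLARE big_cur CURSOR FOR select * from generate_series(1,100000);
postgres=# FETCH 1000 FROM big_cur;  -- repeat until no rows come back
postgres=# CLOSE big_cur;
postgres=# COMMIT;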

How to run functions every time postgresql starts?

I am using the pg_trgm extension for fuzzy search. The default threshold is 0.3, as shown by:
# select show_limit();
show_limit
------------
0.3
(1 row)
I can change it with:
# select set_limit(0.1);
set_limit
-----------
0.1
(1 row)
# select show_limit();
show_limit
------------
0.1
(1 row)
But when I restart my session, the threshold is reset to the default value:
# \q
$ psql -Upostgres my_db
psql (9.3.5)
Type "help" for help.
# select show_limit();
show_limit
------------
0.3
(1 row)
I want to execute set_limit(0.1) every time I start a PostgreSQL session. In other words, I want to make 0.1 the default value for the pg_trgm threshold. How do I do that?
This has been asked before:
Set default limit for pg_trgm
The initial setting is hard coded in the source. One could hack the source and recompile the extension.
To address your comment:
You could put a command in your psqlrc or ~/.psqlrc file. A plain and simple SELECT command on a separate line:
SELECT set_limit(0.1);
Be aware that the additional module is installed per database, while psql can connect to any database cluster and any database within that cluster. The command will cause an error message when connecting to a database where pg_trgm is not installed; nothing bad will happen, though.
On the other hand, connecting with any other client will not set the limit, which may be a bit of a trap.
pg_trgm should really provide a config setting ...
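(Later versions added exactly that: from PostgreSQL 9.6 on, the threshold is exposed as the pg_trgm.similarity_threshold setting, which can be persisted per database. This is not available on the 9.3 server in the question.)
ALTER DATABASE my_db SET pg_trgm.similarity_threshold = 0.1;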

DB2 Connect - Unicode Support from VB Application

I am trying to insert the '€' character into a DB2 database. My DB2 database is on z/OS (v8.0) and my DB2 client version is 9.1 FP5. I am trying this using an ODBC connection via ADODB in Visual Basic or C# code.
But a junk character is getting inserted instead; the '€' symbol is not being stored.
Is there any option to set code page 1252 at the connection level?
I used DISABLEUNICODE=1 to SELECT/INSERT '€' symbols from VB.
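For reference, DISABLEUNICODE is a DB2 CLI/ODBC configuration keyword, so it can be set in db2cli.ini under the data source entry. A minimal sketch, where the data source name MYDB is an assumption:
[MYDB]
DisableUnicode=1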