Is there any function to encode/decode base64 in Firebird?
In SQL Server there is a technique using XML:
declare @source varbinary(max)=convert(varbinary(max),'AbdalrahmanIbnSewareddahab')
SELECT CAST('' AS XML).value('xs:base64Binary(sql:variable(''@source''))','VARCHAR(MAX)') as BASE64_ENCODED;
The result is QWJkYWxyYWhtYW5JYm5TZXdhcmVkZGFoYWI=
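For completeness, the same XML trick can be used in SQL Server for the reverse direction; a minimal sketch (the variable name @encoded is just for illustration):
declare @encoded varchar(max) = 'QWJkYWxyYWhtYW5JYm5TZXdhcmVkZGFoYWI='
-- xs:base64Binary turns the base64 text back into varbinary; the outer CAST
-- makes the bytes readable again
SELECT CAST(CAST('' AS XML).value('xs:base64Binary(sql:variable(''@encoded''))','VARBINARY(MAX)') AS VARCHAR(MAX)) as BASE64_DECODED;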
How can I do this in Firebird (2.5 or 3.0)?
There is no such built-in function in Firebird 3 or earlier. Firebird 4 introduced the built-in functions BASE64_ENCODE and BASE64_DECODE to convert between binary data and base64-encoded strings.
select base64_encode('AbdalrahmanIbnSewareddahab') from rdb$database;
Result:
QWJkYWxyYWhtYW5JYm5TZXdhcmVkZGFoYWI=
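Decoding works the same way; BASE64_DECODE returns binary data, so cast it to a string type to read it (a small sketch, assuming Firebird 4):
select cast(base64_decode('QWJkYWxyYWhtYW5JYm5TZXdhcmVkZGFoYWI=') as varchar(50)) from rdb$database;
-- AbdalrahmanIbnSewareddahab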
The alternative is to write one yourself, either as a UDF (Firebird 3 and earlier), a stored procedure (all Firebird versions), a UDR (Firebird 3 and higher), or a PSQL function (Firebird 3 and higher).
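For Firebird 3, a pure PSQL function is probably the least invasive option. What follows is a rough, untested sketch of such an encoder (the name base64_encode_psql is made up here, and it assumes ASCII_VAL returns the raw byte value for a CHARACTER SET OCTETS string; the SET TERM lines are for isql):

set term ^ ;

create function base64_encode_psql (input varchar(6000) character set octets)
returns varchar(8000) character set ascii
as
  declare alphabet char(64) = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
  declare result varchar(8000) character set ascii = '';
  declare i integer = 1;
  declare len integer;
  declare b1 integer;
  declare b2 integer;
  declare b3 integer;
begin
  len = octet_length(input);
  while (i <= len) do
  begin
    -- take the next three input bytes; missing bytes are treated as 0
    b1 = ascii_val(substring(input from i for 1));
    b2 = iif(i + 1 <= len, ascii_val(substring(input from i + 1 for 1)), 0);
    b3 = iif(i + 2 <= len, ascii_val(substring(input from i + 2 for 1)), 0);
    -- split the 24 bits into four 6-bit groups, map them onto the alphabet,
    -- and pad with '=' when fewer than three input bytes remain
    result = result
      || substring(alphabet from bin_shr(b1, 2) + 1 for 1)
      || substring(alphabet from bin_or(bin_shl(bin_and(b1, 3), 4), bin_shr(b2, 4)) + 1 for 1)
      || iif(i + 1 <= len,
             substring(alphabet from bin_or(bin_shl(bin_and(b2, 15), 2), bin_shr(b3, 6)) + 1 for 1),
             '=')
      || iif(i + 2 <= len,
             substring(alphabet from bin_and(b3, 63) + 1 for 1),
             '=');
    i = i + 3;
  end
  return result;
end^

set term ; ^

-- usage; this should produce the same output as the Firebird 4 built-in:
-- select base64_encode_psql('AbdalrahmanIbnSewareddahab') from rdb$database;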
Related
The dump function in Oracle displays the internal representation of data:
DUMP returns a VARCHAR2 value containing the data type code, length in bytes, and internal representation of expr
For example:
SQL> SELECT DUMP(cast(1 as number)) FROM DUAL;

DUMP(CAST(1ASNUMBER))
--------------------------------------------------------------------------------
Typ=2 Len=2: 193,2

SQL> SELECT DUMP(cast(1.000001 as number)) FROM DUAL;

DUMP(CAST(1.000001ASNUMBER))
--------------------------------------------------------------------------------
Typ=2 Len=5: 193,2,1,1,2
It shows that the first value (1) needs 2 bytes of storage, while the second (1.000001) needs 5 bytes.
I suppose the closest function in PostgreSQL is pg_typeof, but it returns only the type name, without any information about byte usage:
SELECT pg_typeof(33);
 pg_typeof
-----------
 integer
(1 row)
Does anybody know if there is an equivalent function in PostgreSQL?
I don't speak PostgreSQL.
However, the Oracle functionality page says that there's Orafce, which
implements in Postgres some of the functions from the Oracle database that are missing (or behaving differently)
It furthermore mentions the dump function:
dump (anyexpr [, int]): Returns a text value that includes the datatype code, the length in bytes, and the internal representation of the expression
One of the examples looks like this:
postgres=# select pg_catalog.dump('Pavel Stehule',10);
dump
-------------------------------------------------------------------------
Typ=25 Len=17: 68,0,0,0,80,97,118,101,108,32,83,116,101,104,117,108,101
(1 row)
To me, it looks like Oracle's dump:
SQL> select dump('Pavel Stehule') result from dual;
RESULT
--------------------------------------------------------------
Typ=96 Len=13: 80,97,118,101,108,32,83,116,101,104,117,108,101
SQL>
I presume you'll have to visit GitHub and install the package to see whether you can use it or not.
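If the package is installed on the server (distribution packages exist, or it can be built from the GitHub sources), enabling it per database is a single statement; a small sketch (the schema that dump() ends up in may differ between orafce versions):
-- load the extension into the current database
CREATE EXTENSION orafce;
-- then the example from above should be available:
SELECT pg_catalog.dump('Pavel Stehule');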
It is not a complete equivalent, but if you want to figure out the byte values used to encode a string in PostgreSQL, you can simply cast the value to bytea, which will give you the bytes in hexadecimal:
SELECT CAST ('schön' AS bytea);
This will work for strings, but not for numbers.
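For illustration: with a UTF8 encoding, 'ö' is stored as the two bytes 0xC3 0xB6, so the five-character string above comes back as six bytes in PostgreSQL's hex output format, something like:
     bytea
----------------
 \x736368c3b66e
(1 row)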
How do I execute the following query from Oracle in PostgreSQL:
SELECT to_timestamp('20210603033632200995','yyyymmddhh24missFF6');
PostgreSQL has the standard conforming data type timestamp.
Six decimal places are microseconds, and yes, PostgreSQL supports that. It does not support nanoseconds, which would be 9 decimal places.
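A quick way to see that limit (a small check; PostgreSQL rounds anything beyond six fractional digits to microseconds):
SELECT timestamp '2021-06-03 03:36:32.200995';   -- all six digits are kept
SELECT timestamp '2021-06-03 03:36:32.2009957';  -- the seventh digit cannot be stored; the value is rounded to microseconds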
I arrived at
SELECT to_timestamp('20210603033632200995','yyyymmddhh24missUS');
This worked in a PostgreSQL workbench. Can anyone else verify this solution?
https://dbfiddle.uk/?rdbms=postgres_11&fiddle=b220060f7029507c29ab1f81c5e0acbd
I'm using: PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit running on AWS as Aurora (RDS).
I have managed to get the below working:
SELECT encode(digest('email@example.com', 'sha256'),'hex');
But I need to use a salt provided by a 3rd party. Let's say for argument's sake the salt is 'this_is_my_salt'. How can I use this? I could only find examples that generate the salt using algorithms.
The vendor wants us to use their hash to compare email addresses in their database with ours. They haven't specified their database system, but showed me their query, which is:
SELECT 'email@example.com' as unhashed_email, sha2('shared_salt_value' || lower('email@example.com')) as hashed_email
And this produces a different hash from the one I get when trying the example in Postgres using one of the answers below:
SELECT encode(digest('email@example.com' || 'shared_salt_value', 'sha256'),'hex');
My hash starts with db17e....
Their hash starts with b6c84....
Could it be encoding or something causing the difference?
That is trivial, just concatenate the string with the salt:
SELECT encode(digest('this_is_my_salt' || 'email@example.com', 'sha256'),'hex');
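Note that the vendor's query also lower-cases the address and puts the salt first, so to reproduce their hash you would presumably need to mirror that exactly (digest() requires the pgcrypto extension):
-- assumes: CREATE EXTENSION pgcrypto;
SELECT encode(digest('shared_salt_value' || lower('email@example.com'), 'sha256'), 'hex');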
I am using the free version of SymmetricDS to replicate my Firebird database. When I tried a demo by creating a new (blank) DB, it worked fine. But when I configured it on my existing DB (which contains data), an error occurred.
I use Firebird-2.5.5.26952_0 32bit & symmetric-server-3.9.5, OS is Windows Server 2008 Enterprise.
I have searched for a whole day but found nothing to solve this. Can anyone please help? Thanks for your time.
UPDATE:
During the initial load, SymmetricDS executes this statement to declare a UDF in the Firebird DB:
declare external function sym_hex blob
returns cstring(32660) free_it
entry_point 'sym_hex' module_name 'sym_udf';
It caused an error because my existing DB's charset is UNICODE_FSS, where the max length of CSTRING is 10922. When I worked around this by changing the charset to NONE, it worked fine. But that is not a safe solution, so I am still looking for a better one.
One more thing: does anyone know of other open source tools to replicate a Firebird database? I tried many and only SymmetricDS worked.
The problem seems to be a bug in Firebird where the length of CSTRING is supposed to be in bytes, but in reality it uses characters. Your database seems to have UTF8 (or UNICODE_FSS) as its default character set, which means each character can take up to 4 bytes (3 for UNICODE_FSS). The maximum length of CSTRING is 32767 bytes, but if it calculates in characters for CSTRING, that suddenly reduces the maximum to 8191 characters (or 32764 bytes) (or 10922 characters, 32766 bytes for UNICODE_FSS).
The workaround to this problem would be to create a database with a different default character set. Alternatively, you could (temporarily) alter the default character set:
For Firebird 3:

1. Set the default character set to a single-byte character set (e.g. NONE). Use of NONE is preferable to avoid unintended transliteration issues:
   alter database set default character set NONE;
2. Disconnect (important: you may need to disconnect all current connections because of metadata caching!)
3. Set up SymmetricDS so it creates the UDF.
4. Set the default character set back to UTF8 (or UNICODE_FSS):
   alter database set default character set UTF8;
5. Disconnect again.
When using Firebird 2.5 or earlier, you will need to perform a direct system table update (which is no longer possible in Firebird 3) instead:
Step 2:
update RDB$DATABASE set RDB$CHARACTER_SET_NAME = 'NONE'
Step 4:
update RDB$DATABASE set RDB$CHARACTER_SET_NAME = 'UTF8'
The alternative would be for SymmetricDS to change its initialization to
DECLARE EXTERNAL FUNCTION SYM_HEX
BLOB
RETURNS CSTRING(32660) CHARACTER SET NONE FREE_IT
ENTRY_POINT 'sym_hex' MODULE_NAME 'sym_udf';
Or maybe character set BINARY instead of NONE, as that seems closer to the intent of the UDF.
The Oracle script I'm in the process of 'converting' so it can be executed in pgAdmin 4 has the following value to insert into a column of a table with data type 'date':
to_timestamp('12-JUN-99','DD-MM-YY HH.MI.SSXFF AM')
From my understanding, FF represents Fractional Seconds.
What would be the equivalent way to represent the statement in PostgreSQL/PGAdmin 4?
SSXFF is my main concern.
I don't see how that code works in Oracle. But this should work in both Postgres and Oracle:
select to_timestamp('12-JUN-99', 'DD-MON-YY')
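Since the target column is of type date, the result can simply be cast; a small example:
SELECT to_timestamp('12-JUN-99', 'DD-MON-YY')::date;  -- should give 1999-06-12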