Firebird list domains and data types

I want to list all domains, their datatypes, and size.
Background
I've managed to put the query together, based on this SO answer.
The basic code takes all fields:
SELECT
*
FROM
rdb$fields
I found that I could get the pieces I need from rdb$fields:
- filter the rows by RDB$FIELD_NAME
- get the field type code from RDB$FIELD_TYPE
- get the field length from RDB$FIELD_LENGTH
Reference:
https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-fields.html
Question
How to combine all this to list all domains, their datatypes, and size?
I want to get only domains created by users, not automatic ones.

The code:
select
    t.RDB$FIELD_NAME Name,
    case t.RDB$FIELD_TYPE
        when 7 then 'SMALLINT'
        when 8 then 'INTEGER'
        when 10 then 'FLOAT'
        when 12 then 'DATE'
        when 13 then 'TIME'
        when 14 then 'CHAR'
        when 16 then 'BIGINT'
        when 27 then 'DOUBLE PRECISION'
        when 35 then 'TIMESTAMP'
        when 37 then 'VARCHAR'
        when 261 then 'BLOB'
    end Type_Name,
    t.RDB$CHARACTER_LENGTH Chr_Length
from RDB$FIELDS t
where coalesce(t.RDB$SYSTEM_FLAG, 0) = 0
    and not (t.RDB$FIELD_NAME starting with 'RDB$')
Also interesting: I could not find a system table with the datatype names, so I had to hardcode them from the reference.
Thanks for the help in the comments:
@MarkRotteveel
RDB$TYPES contains the type names, but names them differently:
You can find all data types in RDB$TYPES for RDB$FIELD_NAME =
'RDB$FIELD_TYPE' (although you will need to map some types, as it lists
SMALLINT as SHORT, INTEGER as LONG, BIGINT as INT64 and VARCHAR as
VARYING)
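Based on that comment, here is a sketch of the join (untested; the type names come back as SHORT, LONG, INT64, VARYING and so on, not the SQL names):
select
    f.RDB$FIELD_NAME as Name,
    t.RDB$TYPE_NAME as Type_Name,    -- e.g. VARYING instead of VARCHAR
    f.RDB$CHARACTER_LENGTH as Chr_Length
from RDB$FIELDS f
join RDB$TYPES t
    on t.RDB$TYPE = f.RDB$FIELD_TYPE
    and t.RDB$FIELD_NAME = 'RDB$FIELD_TYPE'   -- RDB$TYPES holds several enumerations; pick the field-type one
where coalesce(f.RDB$SYSTEM_FLAG, 0) = 0
    and not (f.RDB$FIELD_NAME starting with 'RDB$')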
Need to use field RDB$CHARACTER_LENGTH instead of RDB$FIELD_LENGTH.
Note that RDB$FIELD_LENGTH is the wrong column for char/varchar
columns as it is the length in bytes (which depends on the character
set), you need to use RDB$CHARACTER_LENGTH for the length in
characters, and for numerical fields, you'll more likely need
RDB$FIELD_PRECISION (+ RDB$FIELD_SCALE), you are also ignoring sub
type information.
I needed the length of varchars only, but it appears RDB$FIELD_LENGTH = RDB$CHARACTER_LENGTH here: 1 byte = 1 char for a 1-byte character set.
If you use a 1 byte character set, then 1 byte = 1 char; but, for example, UTF-8 is
(max) 4 bytes per character, so then field_length = 4 x character_length
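To see both lengths side by side (a sketch; 14 and 37 are the CHAR and VARCHAR type codes from the query above):
select
    f.RDB$FIELD_NAME,
    f.RDB$FIELD_LENGTH,        -- length in bytes
    f.RDB$CHARACTER_LENGTH,    -- length in characters
    f.RDB$FIELD_LENGTH / nullif(f.RDB$CHARACTER_LENGTH, 0) as bytes_per_char
from RDB$FIELDS f
where f.RDB$FIELD_TYPE in (14, 37)    -- CHAR and VARCHAR only
    and coalesce(f.RDB$SYSTEM_FLAG, 0) = 0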
@Arioch
The most reliable way to get user domains:
To an extent one may use select * from rdb$fields where coalesce(
rdb$system_flag, 0) = 0 and not ( rdb$field_name starting with 'RDB$'),
however no one prohibits a user from manually/explicitly creating a column
named "RDB$1234567".

Related

Convert jsonb column to a user-defined type

I'm trying to convert each row in a jsonb column to a type that I've defined, and I can't quite seem to get there.
I have an app that scrapes articles from The Guardian Open Platform and dumps the responses (as jsonb) in an ingestion table, into a column called 'body'. Other columns are a sequential ID, and a timestamp extracted from the response payload that helps my app only scrape new data.
I'd like to move the response dump data into a properly-defined table, and as I know the schema of the response, I've defined a type (my_type).
I've been referring to section 9.16, JSON Functions and Operators, in the Postgres docs. I can get a single record as my type:
select * from jsonb_populate_record(null::my_type, (select body from data_ingestion limit 1));
produces
id         | type         | sectionId          | ...
-----------+--------------+--------------------+-----
example_id | example_type | example_section_id | ...
(abbreviated for concision)
If I remove the limit, I get an error, which makes sense: the subquery would be providing multiple rows to jsonb_populate_record which only expects one.
I can get it to do multiple rows, but the result isn't broken into columns:
select jsonb_populate_record(null::my_type, body) from reviews_ingestion limit 3;
produces:
jsonb_populate_record
-------------------------------------------------------
(example_id_1,example_type_1,example_section_id_1,...)
(example_id_2,example_type_2,example_section_id_2,...)
(example_id_3,example_type_3,example_section_id_3,...)
This is a bit odd: I would have expected to see column names; that, after all, is the point of providing the type.
I'm aware I can do this by using Postgres JSON querying functionality, e.g.
select
body -> 'id' as id,
body -> 'type' as type,
body -> 'sectionId' as section_id,
...
from reviews_ingestion;
This works but it seems quite inelegant. Plus I lose datatypes.
I've also considered aggregating all rows in the body column into a JSON array, so as to be able to supply this to jsonb_populate_recordset but this seems a bit of a silly approach, and unlikely to be performant.
Is there a way to achieve what I want, using Postgres functions?
Maybe you need this, to break the my_type record into columns:
select (jsonb_populate_record(null::my_type, body)).*
from reviews_ingestion
limit 3;
-- or whatever other query clauses here
i.e. select all from these my_type records. All column names and types are in place.
Here is an illustration. My custom type is delmet and the CTE t remotely mimics data_ingestion.
create type delmet as (x integer, y text, z boolean);
with t(i, j, k) as
(
values
(1, '{"x":10, "y":"Nope", "z":true}'::jsonb, 'cats'),
(2, '{"x":11, "y":"Yep", "z":false}', 'dogs'),
(3, '{"x":12, "y":null, "z":true}', 'parrots')
)
select i, (jsonb_populate_record(null::delmet, j)).*, k
from t;
Result:
 i | x  |  y   |   z   |    k
---+----+------+-------+---------
 1 | 10 | Nope | true  | cats
 2 | 11 | Yep  | false | dogs
 3 | 12 |      | true  | parrots
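One side note (not part of the original answer): in older PostgreSQL versions the (f(x)).* expansion could evaluate the function once per output column. If that matters, a LATERAL join over the same data avoids it; a sketch using the question's names:
select r.*
from reviews_ingestion
cross join lateral jsonb_populate_record(null::my_type, body) as r    -- function runs once per row
limit 3;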

Check if character varying is between range of numbers

I have data in my database and I need to select all rows where one column's number is between 1 and 100.
I'm having problems because I can't use between 1 and 100; that column is character varying, not integer. But all the data are numbers (I can't change the column to integer).
Code;
dst_db1.eachRow("Select length_to_fault from diags where length_to_fault between 1 AND 100")
Error - operator does not exist: character varying >= integer
Since your column is supposed to contain numeric values but is defined as text (or a variation of text), there will be times when it does not. You need two validations: that the column actually contains numeric data, and that the value falls within your range. So add the following predicates to your query:
and length_to_fault ~ '^\+?\d+(\.\d*)?$'
and length_to_fault::numeric <@ '[1.0,100.0]'::numrange
The first is a regexp that ensures the column holds a valid floating-point value. The second ensures the numeric value falls within the specified range. See fiddle.
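Putting both predicates into the original query gives something like this (a sketch against the question's table):
Select length_to_fault from diags
where length_to_fault ~ '^\+?\d+(\.\d*)?$'                  -- keep only numeric-looking rows
  and length_to_fault::numeric <@ '[1.0,100.0]'::numrange   -- value within [1,100]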
I understand you cannot change the database, but this looks like a good place for a check constraint, esp. if 'n/a' is the only non-numeric value allowed. You may want to talk with your DBA and see about the following constraint.
alter table diags
    add constraint length_to_fault_check
    check ( lower(length_to_fault) = 'n/a'
            or ( length_to_fault ~ '^\+?\d+(\.\d*)?$'
                 and length_to_fault::numeric <@ '[1.0,100.0]'::numrange
               )
          );
Then your query need only check that:
lower(length_to_fault) != 'n/a'
The below PostgreSQL query will work
SELECT length_to_fault FROM diags WHERE regexp_replace(length_to_fault, '[\s+]', '', 'g')::numeric BETWEEN 1 AND 100;

ORDER BY CASE & Ordinal not working

Sorting column #7 as an example, this code does not sort the data at all:
ORDER BY CASE WHEN '1'='2' THEN 5
WHEN '1'='1' THEN 7
ELSE 13 END
If I change it to a hard-coded ordinal it works:
ORDER BY 7
As long as the respective expressions in the SELECT list are of the same type, you can do it by using the expressions themselves instead of the SELECT list number:
SELECT expression1, expression2, ...
...
ORDER BY CASE
WHEN 1=2
THEN expression5
WHEN 1=1
THEN expression7
ELSE expression13
END;
If the data types are not the same, season with type casts.
Your query does not work because only integer literals can be used as column numbers in ORDER BY. In all other cases, an integer just stands for its constant value.
If it were not like this, ORDER BY expressions could easily become ambiguous. Look at the following:
... ORDER BY intcol + 3;
Should that mean “add three” or “add expression number three from the SELECT list”?
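As a concrete (hypothetical) illustration, with text columns throughout so the CASE branches agree in type:
SELECT first_name, last_name, city    -- hypothetical columns
FROM people
ORDER BY CASE
             WHEN 1 = 2 THEN last_name    -- never taken, as in the question
             WHEN 1 = 1 THEN city         -- this branch supplies the sort key
             ELSE first_name
         END;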

numeric range data type postgresql

I have a strange situation in the design of my DB: the value of a field can be a plain integer or a number within a range. Let me explain with an example:
the column age can be a number (18) or a range (18-30). How can I represent this with PostgreSQL?
Thx!
An integer range can represent both a single integer value and a range. The single value:
select int4range(18,18,'[]');
int4range
-----------
[18,19)
The ")" in the result above means exclusive.
The range:
select int4range(18,30,'[]');
int4range
-----------
[18,31)
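A sketch of how that might be used in a table (hypothetical names); the @> containment operator then answers "does this age fall in the range?" for single values and ranges alike:
create table people (
    name text,
    age_range int4range              -- holds single ages and ranges alike
);

insert into people values
    ('alice', int4range(18, 18, '[]')),    -- single age 18
    ('bob',   int4range(18, 30, '[]'));    -- range 18-30

select name from people where age_range @> 21;    -- matches bob only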
There are a few different ways to do this:
- store a VARCHAR;
- store two values, a lower bound and an upper bound;
- if there is only a select set of ranges, create a lookup table for that set and store a foreign key to it.
You can pack both values into one bigger number, for example 18 x 1000 + 0 = 18000 for 18 and 18 x 1000 + 30 = 18030 for (18, 30).
When you retrieve it, you take first = number / 1000 (integer division) for the first number and second = number - first x 1000 for the second.
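A quick check of that encoding in SQL (values from the example above):
select 18 * 1000 + 30 as encoded,      -- 18030
       18030 / 1000   as first_part,   -- integer division gives 18
       18030 % 1000   as second_part;  -- remainder gives 30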
You can also store them as a point http://www.postgresql.org/docs/9.4/static/datatype-geometric.html#AEN6730.

Trying to get rid of unwanted records in query

I have the following query
Select * from Common.dbo.Zip4Lookup where
zipcode='76033' and
StreetName='PO BOX' and
'704' between AddressLow and AddressHigh and
(OddEven='B' or OddEven = 'E')
The AddressLow and AddressHigh columns are varchar(10) fields.
The records returned are
AddressLow   AddressHigh
----------   -----------
1            79
701          711
The second is the desired record. How do I get rid of the first one?
The problem is that SQL is using a string compare instead of a numeric compare. This is because AddressLow/High are varchar and not int.
As long as AddressLow/High contain numbers, this should work:
Select * from Common.dbo.Zip4Lookup where
zipcode='76033' and
StreetName='PO BOX' and
704 between
CAST(AddressLow as INT) and
CAST(AddressHigh as INT) and
(OddEven='B' or OddEven = 'E')
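One caveat (not part of the original answer): if any row holds a non-numeric value, CAST will raise a conversion error. On SQL Server 2012 and later, TRY_CAST returns NULL instead, so such rows are simply filtered out. A sketch:
Select * from Common.dbo.Zip4Lookup where
    zipcode='76033' and
    StreetName='PO BOX' and
    704 between
        TRY_CAST(AddressLow as INT) and     -- NULL when not numeric
        TRY_CAST(AddressHigh as INT) and
    (OddEven='B' or OddEven='E')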
The problem is that your condition matches the first record because the values are compared as strings: '704' falls between '1' and '79', since in the second character position '0' < '9' and so '704' sorts before '79'. The easiest way is IMHO to change the data type to some numeric one.