PostgreSQL query treats int as string datatype - postgresql

I store the following rows in my table ('DataScreen') under a JSONB column ('Results')
{"Id":11,"Product":"Google Chrome","Handle":3091,"Description":"Google Chrome"}
{"Id":111,"Product":"Microsoft Sql","Handle":3092,"Description":"Microsoft Sql"}
{"Id":22,"Product":"Microsoft OneNote","Handle":3093,"Description":"Microsoft OneNote"}
{"Id":222,"Product":"Microsoft OneDrive","Handle":3094,"Description":"Microsoft OneDrive"}
In these JSON objects, "Id" and "Handle" are integer properties and the others are string properties.
When I query my table like below
Select Results->>'Id' From DataScreen
order by Results->>'Id' ASC
I get the wrong order because the ->> operator returns text, so PostgreSQL sorts the values as text, not as integers.
Hence it gives the result as
11,111,22,222
instead of
11,22,111,222.
I don't want to use explicit casting, like below:
Select Results->>'Id' From DataScreen order by CAST(Results->>'Id' AS INT) ASC
because I cannot be sure of the datatype: the JSON structure is dynamic, and the keys and values may change next time, so the same problem could happen with another JSON that mixes integer and string values.
I want integers in the JSON structure of the JSONB column to be treated as integers only, not as text (strings).
How do I write my query so that Id and Handle are retrieved as integer values and not as strings, without explicit casting?
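For reference, here is a minimal setup that reproduces the behaviour, using the same table and column names as above:
CREATE TABLE DataScreen (Results jsonb);
INSERT INTO DataScreen (Results) VALUES
  ('{"Id":11,"Product":"Google Chrome","Handle":3091,"Description":"Google Chrome"}'),
  ('{"Id":111,"Product":"Microsoft Sql","Handle":3092,"Description":"Microsoft Sql"}'),
  ('{"Id":22,"Product":"Microsoft OneNote","Handle":3093,"Description":"Microsoft OneNote"}'),
  ('{"Id":222,"Product":"Microsoft OneDrive","Handle":3094,"Description":"Microsoft OneDrive"}');
-- ->> returns text, so this sorts lexically: 11, 111, 22, 222
Select Results->>'Id' From DataScreen order by Results->>'Id' ASC;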

I think your assumptions about the Id field don't make sense. You said:
(a) Either id contains integers only or
(b) it contains strings and integers.
I'd say,
If (a) then numerical ordering is correct.
If (b) then lexical ordering is correct.
But if (a) holds for some time and then (b) does, the correct order changes too. And that doesn't make sense. Imagine:
For the current database you expect the order 11,22,111,222. Then you add a row
{"Id":"aa","Product":"Microsoft OneDrive","Handle":3095,"Description":"Microsoft OneDrive"}
and suddenly the correct order of the other rows changes to 11,111,22,222,aa. That sudden change is what bothers me.
So I would either expect a lexical ordering ab initio, or restrict my Id field to integers and use explicit casting.
Every other option I can think of is just not practical. You could, for example, create a custom < and > implementation for your Id field which results in 11,22,111,222,aa ("order all integers by numerical value and all strings by lexical order, and put all integers before the strings").
But that is a lot of work (it involves a custom data type, a custom cast function and a custom operator function) and yields some counterintuitive results, e.g. 11,22,111,222,0a,1a,2a,aa (note the position of 0a and the like: they come after 222).
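If you just want to see that ordering without building a custom type, it can be emulated in a plain query by sorting on an "is it an integer?" flag first. This is only a sketch, and note that it still relies on a cast internally:
Select Results->>'Id' As id
From DataScreen
order by
  case when Results->>'Id' ~ '^[0-9]+$' then 0 else 1 end,    -- integers before strings
  case when Results->>'Id' ~ '^[0-9]+$'
       then (Results->>'Id')::numeric end,                    -- integers by numerical value
  Results->>'Id';                                             -- strings by lexical order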
Hope that helps ;)

If Id is always an integer, you can cast it in the SELECT part and just use ORDER BY 1:
select (Results->>'Id')::int From DataScreen order by 1 ASC
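One more variant worth testing if you want to avoid spelling out a target type: order by the jsonb value itself (single arrow -> instead of ->>). jsonb comparison sorts numbers numerically, though by type it puts strings before numbers, so treat this as a sketch rather than a general answer for mixed Ids:
select Results->>'Id' From DataScreen order by Results->'Id' ASC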

Related

How to understand the return type?

I'm building a framework for rust-postgres.
I need to know what value type will be returned from row.try_get, so I can store the value in a variable of the appropriate type.
I can get the SQL type from row.columns()[index].type, but not whether the value is nullable, so I can't decide whether to put the value in a plain type or an Option<T>.
I can only use the content of the row to work this out; I can't do things like "get the table structure from PostgreSQL".
Is there a way?
The reason that the Column type does not expose any way to find out if a result column is nullable is that the database does not return this information.
Remember that result columns are derived from running a query, and that query may contain arbitrary expressions. If the query was a simple SELECT of columns from a table, then it would be reasonably simple to determine if a column could be nullable.
But it could also be a very complex expression, derived from multiple columns, subselects or even custom functions. Postgres can figure out the data type of each column, but in the general case it doesn't know if a result column may contain nulls.
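To see why this is undecidable from the query result alone, consider a sketch with two hypothetical tables, users and teams, where both name columns are declared NOT NULL:
-- t.name is NOT NULL in the table definition, yet the LEFT JOIN
-- produces NULL for it whenever a user has no matching team.
SELECT u.name, t.name
FROM users u
LEFT JOIN teams t ON t.id = u.team_id;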
If your application is only performing simple queries, and you know which table column each result column comes from, then you can find out if that table column is nullable like this:
SELECT is_nullable
FROM information_schema.columns
WHERE table_schema='myschema'
AND table_name='mytable'
AND column_name='mycolumn';
If your queries are not that simple then I recommend you always get the result as an Option<T> and handle the possibility that the result might be None.

How does implicit casting work in Oracle NoSQL Database?

I am trying to understand the implicit cast behavior.
I have a column called ticketNo; it is a string and it is the primary key.
Using the same datatype on both sides, this query returns one row:
SELECT * FROM demo d WHERE ticketNo = "1762386738153"
When I do an explicit cast, this query returns the same row:
SELECT * FROM demo d WHERE cast (ticketNo as Long)= 1762386738153
Now, when I rely on an implicit cast, this query returns no rows:
SELECT * FROM demo d WHERE ticketNo = 1762386738153
Any ideas?
There is no implicit cast behavior in Oracle NoSQL Database. String types are not comparable to Long types, so the predicate ticketNo = 1762386738153 always returns false in your case. A string item is comparable to another string item. A string item is also comparable to an enum item.
In your case this is your primary key, and for the best performance it is not recommended to do a CAST. Validate the types before running this query. A primary key is always typed; no wildcard or complex types are accepted.
Otherwise, the reason for returning false for incomparable items, instead of raising an error, is to handle truly schemaless applications, where different table rows may contain very different data or differently shaped data. As a result, even the writer of the query may not know what kind of items an operand may return, and an operand may indeed return different kinds of items from different rows.
You can always execute the explicit CAST operation when needed, as you did.
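If the value arrives as a number, one option (a sketch, assuming Oracle NoSQL's CAST expression accepts String as the target type) is to cast the value side instead of the primary-key column, so the typed key itself is left untouched:
SELECT * FROM demo d WHERE ticketNo = cast(1762386738153 as String)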
If you are interested in more information: https://docs.oracle.com/en/database/other-databases/nosql-database/20.3/sqlreferencefornosql/value-comparison-operators.html

SQL: Change the datetime to the exact string returned

See below for what is returned in my automated test for this query:
Select visit_date
from patient_visits
where patient_id = '50'
AND site_id = '216'
ORDER by patient_id
DESC LIMIT 1
08:52:48.406 DEBUG Executing : Select visit_date from patient_visits
where patient_id = '50' AND site_id = '216' ORDER by patient_id DESC
LIMIT 1 08:52:48.416 TRACE Return: [(datetime.date(2017, 2, 17),)]
When I run this in Workbench I get
2017-02-17
How can I make the query return this instead of the datetime.date bit above? Is some formatting needed?
What you got from the database is Python's datetime.date object - that happens because the DB connector drivers cast the DB records to their corresponding Python counterparts. Trust me, it's much better this way than plain strings the user would have to parse and cast later.
Imagine the result of this query is stored in a variable ${record}; there are a couple of ways to get to it in the form you want.
First, the response is (pretty much always) a list of tuples; since in your case it will always be a single record, go for the first list member, and its first tuple member:
${the_date}= Set Variable ${record[0][0]}
Now ${the_date} is the datetime.date object; there are at least two ways to get its string representation.
1) With strftime() (the pythonic way):
${the_date_string}= Evaluate $the_date.strftime('%Y-%m-%d') datetime
here's a link to strftime's directives
2) Using the fact it's a regular object, access its attributes and construct the result as you'd like:
${the_date_string}= Set Variable ${the_date.year}-${the_date.month}-${the_date.day}
Note that this ^ way, you'd most certainly lose the leading zeros in the month and day.
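If it is easier to do the conversion on the SQL side instead, the query itself can return the string; a sketch assuming the database is MySQL (the Workbench mention suggests it):
Select DATE_FORMAT(visit_date, '%Y-%m-%d') as visit_date
from patient_visits
where patient_id = '50'
AND site_id = '216'
ORDER by patient_id
DESC LIMIT 1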

Convert varchar parameter with CSV into column values postgres

I have a Postgres query with one input parameter of type varchar.
The value of that parameter is used in the WHERE clause.
Till now only a single value was sent to the query, but now we need to send multiple values so that they can be used with an IN clause.
Earlier:
value = 'abc'
where data = value       // current usage
Now:
value = 'abc,def,ghk'
where data in (value)    // intended usage
I tried many ways, i.e. providing the value as
value='abc','def','ghk'
Or
value="abc","def","ghk" etc.
But none of them works, and the query does not return any results even though there is matching data. If I provide the values directly in the IN clause, I see the data.
I think I should somehow split the parameter, which is a comma-separated string, into multiple values, but I am not sure how to do that.
Please note it's a Postgres DB.
You can try to split the input string into an array, something like this:
where data = ANY(string_to_array('abc,def,ghk',','))
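A usage sketch with a prepared statement, so the whole comma-separated string is still passed as a single varchar parameter (the table name my_table is a placeholder; the column data comes from the question):
PREPARE find_rows(varchar) AS
  SELECT * FROM my_table
  WHERE data = ANY(string_to_array($1, ','));
EXECUTE find_rows('abc,def,ghk');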

Is there any way for Access 2016 to sort the numbers that are part of a "text" data type formatted field as though they are numeric values?

I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in its values to track lots of agricultural product. Because the weighing of product takes place at more than one facility, I have no option but to keep the same base number and append letters to it to denote split portions of each lot. The problem is that after I create record number 99, the number 100 suddenly sorts up directly underneath 10. This makes it difficult to maintain consistency and forces me to replace the alphanumeric lot ID with a strictly numeric value (using "AutoNumber" as the data type) just to keep it sorted. Either way, I need the alphanumeric lot ID, so having two IDs for the same lot can be confusing for anyone entering values into the form. Is there a way around this that I am just not seeing?
If you're using a query as the data source, then you may try to sort by the string converted to a number, something like:
SELECT id, field1, field2, ..
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng - it should not fail on non-numeric input.
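A usage sketch of that idea, with the raw field added as a tie-breaker so lettered split-lot suffixes still sort within each base number (YourTable and YourAlphaNumericField are placeholders):
SELECT id, YourAlphaNumericField
FROM YourTable
ORDER BY Val(YourAlphaNumericField), YourAlphaNumericField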
Why not format your key properly before saving, e.g. "0000099"? You will avoid a costly conversion later.
Alternatively, you could use 2 fields as the composite PK. One with the Number (as Long) and one with the Location (as String).