PostgreSQL jsonb vs datetime

I need to store two dates, valid_from and valid_to.
Is it better to use two datetime fields, like valid_from:datetime and valid_to:datetime?
Or would it be better to store the data in a jsonb field, validity: {"from": "2001-01-01", "to": "2001-02-02"}?
There are many more reads than writes to the database.
DB: PostgreSQL 9.4

You can use the daterange type, e.g.:
'[2001-01-01, 2001-02-02]'::daterange means from 2001-01-01 to 2001-02-02, bounds inclusive
'(2001-01-01, 2001-02-05)'::daterange means from 2001-01-01 to 2001-02-05, bounds exclusive
Also:
Special values like infinity can be used.
lower(anyrange) returns the lower bound of the range,
and there are many other things, like the overlap operator - see the docs ;-)
Range Types
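A minimal sketch of the daterange approach (table and column names are illustrative assumptions, not from the question):

```sql
-- Single-column validity period using daterange
CREATE TABLE promotions (
    id       serial PRIMARY KEY,
    validity daterange NOT NULL
);

INSERT INTO promotions (validity)
VALUES ('[2001-01-01,2001-02-02]'::daterange);

-- Containment test: does the period include a given date?
SELECT * FROM promotions WHERE validity @> DATE '2001-01-15';

-- Extract the bounds
SELECT lower(validity), upper(validity) FROM promotions;
```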

Use two timestamp columns (there is no datetime type in Postgres).
They can be indexed efficiently, and they protect you from invalid timestamp values - nothing prevents you from storing "2019-02-31 28:99:00" in a JSON value.
If you very often need to use those two values to check whether another timestamp value lies in between, you could also consider a range type that stores both values in a single column.
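The two-column approach and the single-column range alternative can be sketched like this (table and column names are illustrative):

```sql
-- Two timestamp columns with a sanity-check constraint
CREATE TABLE validity_example (
    id         serial PRIMARY KEY,
    valid_from timestamp NOT NULL,
    valid_to   timestamp NOT NULL,
    CHECK (valid_from <= valid_to)   -- rejects inverted periods
);

-- "Does this instant lie in between?" with a plain BETWEEN
SELECT * FROM validity_example
WHERE TIMESTAMP '2001-01-15 12:00' BETWEEN valid_from AND valid_to;

-- The single-column alternative mentioned above, using tsrange:
-- CREATE TABLE validity_example (id serial PRIMARY KEY, validity tsrange NOT NULL);
-- SELECT * FROM validity_example WHERE validity @> TIMESTAMP '2001-01-15 12:00';
```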

Related

DynamoDB column with tilde and query using JPA

I have a table column with a tilde-separated value, like below:
vendorAndDate - column name
Chipotle~08-26-2020 - column value
I want to query for a month ("vendorAndPurchaseDate like '%~08%2020'") and for a year ending with 2020 ("vendorAndPurchaseDate like '%2020'"). I am using Spring Data JPA to query the values. I have not worked with tilde-separated column values before. Please point me in the right direction or to some examples.
You cannot.
If vendorAndPurchaseDate is your partition key, you need to pass the whole value.
If vendorAndPurchaseDate is the range (sort) key, you can only perform
=, <, >, <=, >=, between, and begins_with operations, along with a partition key.
Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html
DynamoDB does not support this type of wildcard query.
Let's consider a more DynamoDB way of handling this type of query. It sounds like you want to support 2 access patterns:
Get Item by month
Get Item by year
You don't describe your Primary Keys (Partition Key/Sort Key), so I'm going to make some assumptions to illustrate one way to address these access patterns.
Your attribute appears to be a composite key consisting of <vendor>~<date>, where the date is expressed as MM-DD-YYYY. I would recommend storing your date fields in YYYY-MM-DD format instead, which lets you exploit the sortability of the date field. An example will make this much clearer. Imagine your table looked like this (values are illustrative):

PK      | SK
--------|--------------------
Vendors | Chipotle~2020-08-21
Vendors | Chipotle~2020-08-26
Vendors | Chipotle~2020-11-05

I'm calling your vendorAndDate attribute SK, since I'm using it as a Sort Key in this example. This table structure lets me implement your two access patterns by executing the following queries (in pseudocode, to remain language agnostic):
Access Pattern 1: Fetch all Chipotle records for August 2020
query from MyTable where PK = "Vendors" and SK between Chipotle~2020-08-00 and Chipotle~2020-08-31
Access Pattern 2: Fetch all Chipotle records for 2020
query from MyTable where PK = "Vendors" and SK between Chipotle~2020-01-01 and Chipotle~2020-12-31
Because dates stored in ISO8601 format (e.g. YYYY-MM-DD...) are lexicographically sortable, you can perform range queries in DynamoDB in this way.
Again, I've made some assumptions about your data and access patterns for the purpose of illustrating the technique of using lexicographically sortable timestamps to implement range queries.
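If you query DynamoDB through its PartiQL interface, the two pseudocode queries above can be sketched roughly like this (the table name and the PK/SK attribute names follow the assumptions made above):

```sql
-- Access Pattern 1: Chipotle records for August 2020
SELECT * FROM "MyTable"
WHERE PK = 'Vendors'
  AND SK BETWEEN 'Chipotle~2020-08-00' AND 'Chipotle~2020-08-31';

-- Access Pattern 2: Chipotle records for all of 2020
SELECT * FROM "MyTable"
WHERE PK = 'Vendors'
  AND SK BETWEEN 'Chipotle~2020-01-01' AND 'Chipotle~2020-12-31';
```

Both statements translate to sort-key range queries, so they remain efficient single-partition reads rather than scans.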

What are Pros and Cons in using prefixes and suffixes in PostgreSQL dialect for timestamp columns

I have analysed several articles about naming conventions for date/time types in SQL data models.
Most of them suggest a database design where the timestamp type is used only for registered event values, literally timestamping the event just when it happens. And naturally, they suggest the datetime type for any other time-instant needs. They also suggest avoiding, at all costs, suffixes and prefixes that match known data types, like date and time, to prevent confusion between a data type and a column name that merely indicates purpose.
But the PostgreSQL dialect does not have a datetime type at all, so there is only the timestamp type for all cases where plain date and time are not enough for a column expected to store a past or future instant of time.
So, basically, what prefixes or suffixes, if any, would you suggest for PostgreSQL columns, given that some of them would store past, present, and future time instants? And why - for what benefits, or because of what limitations?
Should we use timestamp and datetime as prefixes or suffixes to distinguish the purpose of different timestamp-type columns by their names? Or would that be bad practice, since there is a data type named timestamp but no data type named datetime in the PostgreSQL dialect?
Or should we maybe use something very neutral, like the noun instant, as a prefix or suffix to denote the purpose of the column?

PostgreSQL query treats ints as string datatypes

I store the following rows in my table ('DataScreen') under a JSONB column ('Results')
{"Id":11,"Product":"Google Chrome","Handle":3091,"Description":"Google Chrome"}
{"Id":111,"Product":"Microsoft Sql","Handle":3092,"Description":"Microsoft Sql"}
{"Id":22,"Product":"Microsoft OneNote","Handle":3093,"Description":"Microsoft OneNote"}
{"Id":222,"Product":"Microsoft OneDrive","Handle":3094,"Description":"Microsoft OneDrive"}
Here, in these JSON objects, "Id" and "Handle" are integer properties, the others being string properties.
When I query my table like below
Select Results->>'Id' From DataScreen
order by Results->>'Id' ASC
I get the improper results because PostgreSql treats everything as a text column and hence does the ordering according to the text, and not as integer.
Hence it gives the result as
11,111,22,222
instead of
11,22,111,222.
I don't want to use explicit casting to retrieve like below
Select Results->>'Id' From DataScreen order by CAST(Results->>'Id' AS INT) ASC
because I will not be sure of the datatype of the column, due to the fact that the JSON structure is dynamic and the keys and values may change next time; the same could then happen with another JSON that has integer and string keys.
I want something so that Integers in Json structure of JSONB column are treated as integers only and not as texts (string).
How do I write my query so that Id And Handle are retrieved as Integer Values and not as strings , without explicit casting?
I think your assumptions about the Id field don't make sense. You said,
(a) Either id contains integers only or
(b) it contains strings and integers.
I'd say,
If (a) then numerical ordering is correct.
If (b) then lexical ordering is correct.
But if (a) for some time and then (b) then the correct order changes, too. And that doesn't make sense. Imagine:
For the current database you expect the order 11,22,111,222. Then you add a row
{"Id":"aa","Product":"Microsoft OneDrive","Handle":3095,"Description":"Microsoft OneDrive"}
and suddenly the correct order of the other rows changes to 11,111,22,222,aa. That sudden change is what bothers me.
So I would either expect a lexical ordering ab initio, or restrict my Id field to integers and use explicit casting.
Every other option I can think of is just not practical. You could, for example, create a custom < and > implementation for your id field which results in 11,111,22,222,aa. ("Order all integers by numerical value and all strings by lexical order and put all integers before the strings").
But that is a lot of work (it involves a custom data type, a custom cast function and a custom operator function) and yields some counterintuitive results, e.g. 11,111,22,222,0a,1a,2a,aa (note the position of 0a and so on. They come after 222).
Hope, that helps ;)
If Id is always an integer, you can cast it in the select list and just use ORDER BY 1:
select (Results->>'Id')::int From DataScreen order by 1 ASC
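A related trick, assuming the table and column names from the question: the -> operator (unlike ->>) returns jsonb rather than text, and jsonb values sort by their JSON type, with numbers compared numerically and strings lexically. So when "Id" may hold mixed types, ordering by the jsonb value avoids a hard cast that would error out on non-integer values:

```sql
-- Explicit cast (fails if any Id is not a valid integer):
SELECT (Results->>'Id')::int FROM DataScreen ORDER BY 1 ASC;

-- Type-aware ordering via jsonb comparison: numbers sort numerically,
-- strings lexically, and no cast means mixed types do not raise an error
SELECT Results->>'Id' FROM DataScreen ORDER BY Results->'Id';
```

Note that with mixed types, jsonb's ordering groups all numbers together and all strings together, which matches the "numerical for integers, lexical for strings" behaviour discussed above.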

Is there any way for Access 2016 to sort the numbers that are part of a "text" data type formatted field as though they are numeric values?

I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in its values to track lots of agricultural product. Due to the way the weighing of product takes place at more than one facility, I have no option but to keep the same base number and add letters to it to denote split portions of each lot of product. The problem is, after I create record number 99, the number 100 suddenly sorts up underneath 10. This makes it difficult to maintain consistency and forces me to replace this alphanumeric lot ID with a strictly numeric value (using "autonumber" as the data type) in order to keep it sorted. Either way, I need the alphanumeric lot ID, so having two IDs for the same lot can be confusing for anyone entering values into the form. Is there a way around this that I am just not seeing?
If you're using a query as a data source, you may try sorting it by the string converted to a number, something like
SELECT id, field1, field2, ..
FROM YourTable
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng - it should not fail on non-numeric input.
Why not format your key properly before saving? e.g. "0000099". You will avoid a costly conversion later.
Alternatively, you could use two fields as a composite PK: one with the number (as Long) and one with the location (as String).
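A sketch of the Val-based sort in Access SQL (the table and column names here are illustrative assumptions):

```sql
SELECT LotID
FROM Lots
ORDER BY Val(LotID), LotID;
-- Val reads the leading numeric portion of the string, so Val("100A") is 100:
-- lot 99 sorts before 100A and 100B, and the second sort key then orders
-- the lettered split portions within each lot.
```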

IBM DB2 9.7, any difference between inlined CLOB and VARCHAR?

Using IBM DB2 9.7, in a 32k tablespace, assume a 10000-byte (ten thousand bytes) column fits nicely in the tablespace. Is there a difference between these two, and is one preferred over the other?
VARCHAR(10000)
CLOB(536870912) INLINE LENGTH 10000
Is either preferred in terms of functionality and performance? At a quick look, the CLOB is actually more versatile: all content shorter than 10000 bytes is stored in the tablespace, but if bigger content is required, that is fine too; it is just stored elsewhere on disk.
There are a number of restrictions on the way CLOB can be used in a query:
Special restrictions apply to expressions resulting in a CLOB data type, and to structured type columns; such expressions and columns are not permitted in:
A SELECT list preceded by the DISTINCT clause
A GROUP BY clause
An ORDER BY clause
A subselect of a set operator other than UNION ALL
A basic, quantified, BETWEEN, or IN predicate
An aggregate function
VARGRAPHIC, TRANSLATE, and datetime scalar functions
The pattern operand in a LIKE predicate, or the search string operand in a POSSTR function
The string representation of a datetime value.
So if you need to do any of those things, VARCHAR is to be preferred.
I don't have a definitive answer about performance (unfortunately, information like this just doesn't seem to be in the documentation--or at least, it is not easily locatable). However, logically speaking, the DB has more work to do with a CLOB. It has to decide whether to return the CLOB directly in the result or not. That has to mean at least some overhead. Here is a good discussion of some of the issues, though it doesn't give a clear answer on performance, either.
My default position would be to use VARCHAR unless CLOB is really needed (data in the column can be bigger than the VARCHAR limit).
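The two options side by side, as a DDL sketch (table and column names are illustrative):

```sql
CREATE TABLE doc_store (
    id     INTEGER NOT NULL PRIMARY KEY,
    body_v VARCHAR(10000),                      -- hard limit: 10000 bytes
    body_c CLOB(536870912) INLINE LENGTH 10000  -- values up to 10000 bytes are
                                                -- stored inline in the data row;
                                                -- larger values spill to LOB storage
);
```

With this layout, body_v guarantees the query flexibility of a VARCHAR but rejects anything over 10000 bytes, while body_c accepts up to 512 MB at the cost of the CLOB restrictions listed above.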