Does Orion-LD support the temporal query language from the NGSI-LD specification?

In the "ETSI GS CIM 009 V1.2.1 (2019-10). Context Information Management (CIM); NGSI-LD API" standard there is a chapter "4.11 NGSI-LD Temporal Query language".
The NGSI-LD Temporal Query language shall be supported by implementations. It is intended to define predicates
which allow testing whether Temporal Properties of NGSI-LD Entities, Properties and Relationships, are within certain
temporal constraints. In particular it can be used to request historic Property values and Relationships that were valid
within the specified timeframe.
The following grammar defines the syntax that shall be supported:
timerel = beforeRel / afterRel / betweenRel
beforeRel = "before"
afterRel = "after"
betweenRel = "between"
The points in time for comparison are defined as follows:
• A time element, which shall represent the comparison point for the before and after relation and the starting
point for the between relation. It shall be represented as DateTime (mandated by clause 4.6.3).
• An endtime element, which is only used for the between relation and shall represent the end point for
comparison. It shall be represented as DateTime (mandated by clause 4.6.3).
And in the "C.5.5 Temporal Query" there is query example
GET /ngsild/v1/*temporal*/entities/?type=Vehicle&q=brandName!=Mercedes&attrs=speed,brandName**&timerel=between
&time=2018-08-01:12:00:00Z&endTime=2018-08-01:13:00:00Z**
I'm trying to run a similar GET request against Orion-LD (with Mintaka) on certain data and get a 400 back with "time is unknown" (or "endTime is unknown" if I remove the "time" condition from the string).
If I remove the part of the request that has timerel, time and endTime, then the GET request returns data fine. Therefore I was wondering whether the part of the specification described in "4.11 NGSI-LD Temporal Query language" has been implemented in Orion-LD?
Thank you

Orion-LD does support the Temporal Interfaces of NGSI-LD; it just requires the component "Mintaka" for that: https://github.com/FIWARE/mintaka
Orion-LD needs to store the data in TimescaleDB, from where it can then be retrieved via Mintaka. To enable the storage in TimescaleDB, see https://github.com/FIWARE/context.Orion-LD/blob/develop/doc/manuals-ld/troe.md
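For reference, a minimal sketch of such a temporal query in Python, assuming a local Mintaka instance on port 8080 (host, port and path are deployment-specific; some setups expose it under /ngsi-ld/v1/temporal/entities instead). Note that later revisions of the NGSI-LD API renamed the time and endTime parameters to timeAt and endTimeAt; if your Mintaka follows one of those revisions, that would explain the "time is unknown" error:

import requests  # third-party HTTP client, used here only for illustration

# Hypothetical endpoint; adjust host, port and path to your deployment.
url = "http://localhost:8080/temporal/entities"

params = {
    "type": "Vehicle",
    "q": "brandName!=Mercedes",
    "attrs": "speed,brandName",
    "timerel": "between",
    # Later NGSI-LD revisions expect timeAt/endTimeAt rather than time/endTime.
    "timeAt": "2018-08-01T12:00:00Z",
    "endTimeAt": "2018-08-01T13:00:00Z",
}

response = requests.get(url, params=params, headers={"Accept": "application/ld+json"})
print(response.status_code)
print(response.json())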

Related

Using an array type in the schema for an Ada interface to a postgresql database using gnatcoll_db2ada

I've created a PostgreSQL database with a few tables and am fairly content with how they work. I've also written some Ada code to interface with it and perform simple queries. This is all running on Slackware 14.2 using GNAT 2020.
One of my table columns is of an array type, an array of BIGINT.
The problem I have is when I try to create the schema for my Ada code using gnatcoll_db2ada.
The schema file ("all-schema.txt") includes the following line:
item_list | BIGINT[] | | | |
When I do
gnatcoll_db2ada -dbmodel all-schema.txt
I get
Error: unknown field type "BIGINT[]"
all-schema.txt:33 gnatcoll-sql-inspect.adb:1420
Is what I'm trying to do actually possible?
The documentation suggests that database fields of array types are not supported (i.e. they are not mentioned as being supported). From the document SQL: Database interface:
The type of the field is the SQL type ("INTEGER", "TEXT", "TIMESTAMP", "DATE", "DOUBLE PRECISION", "MONEY", "BOOLEAN", "TIME", "CHARACTER(1)"). Any maximal length can be specified for strings, not just 1 as in this example. The tool will automatically convert these to Ada when generating Ada code. A special type ("AUTOINCREMENT") is an integer that is automatically incremented according to available ids in the table. The exact type used will depend on the specific DBMS.
Note that while the scalar field type "BIGINT" is not mentioned in the documentation, it is mentioned in the source code (see gnatcoll-sql.ads).
If you really need support for the "BIGINT" array type, then a quick glance at the source code suggests that you can extend the GNATCOLL DB interface with new field types by
using the generic package GNATCOLL.SQL_Impl.Field_Types (see here) and
creating a new field mapping (i.e. a new concrete type based on GNATCOLL.SQL.Inspect.Field_Mapping, see here).
It seems that new field types are typically placed in package GNATCOLL.SQL_Fields (see here).
Note that I never did this myself, so I cannot tell how much effort it would be and whether this is really all that is needed; the exact requirements for implementing a new field type are (at the time of writing) not documented.
I suspected as much, having briefly looked at the source.
What I'll do is spin off the array into another table. This at least has helped clarify what I need to do, and the array, to be fair, always felt a bit clunky. Thanks for the comments.

DynamoDB column with tilde and query using JPA

I have a table column with a tilde value like below:
vendorAndDate - Column name
Chipotle~08-26-2020 - column value
I want to query for a month ("vendorAndPurchaseDate like '%~08%2020'") and for a year ending with 2020 ("vendorAndPurchaseDate like '%2020'"). I am using Spring Data JPA to query the values. I have not worked with columns containing tilde values before. Please point me in the right direction or to some examples.
You cannot.
If vendorAndPurchaseDate is your partition key, you need to pass the whole value.
If vendorAndPurchaseDate is your range (sort) key, you can only perform
=, <, <=, >, >=, between and begins_with operations along with a partition key.
Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html
DynamoDB does not support this type of wildcard query.
Let's consider a more DynamoDB way of handling this type of query. It sounds like you want to support 2 access patterns:
Get Item by month
Get Item by year
You don't describe your Primary Keys (Partition Key/Sort Key), so I'm going to make some assumptions to illustrate one way to address these access patterns.
Your attribute appears to be a composite key, consisting of <vendor>~<date>, where the date is expressed as MM-DD-YYYY. I would recommend storing your date fields in YYYY-MM-DD format, which would allow you to exploit the sortability of the date field. An example will make this much clearer: imagine a table where every record sits under a generic partition key PK = "Vendors" and your vendorAndDate attribute is stored as the sort key, which I'll call SK in this example. This table structure allows me to implement your two access patterns by executing the following queries (in pseudocode to remain language agnostic):
Access Pattern 1: Fetch all Chipotle records for August 2020
query from MyTable where PK = "Vendors" and SK between Chipotle~2020-08-00 and Chipotle~2020-08-31
Access Pattern 2: Fetch all Chipotle records for 2020
query from MyTable where PK = "Vendors" and SK between Chipotle~2020-01-01 and Chipotle~2020-12-31
Because dates stored in ISO8601 format (e.g. YYYY-MM-DD...) are lexicographically sortable, you can perform range queries in DynamoDB in this way.
Again, I've made some assumptions about your data and access patterns for the purpose of illustrating the technique of using lexicographically sortable timestamps to implement range queries.
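To make the pseudocode concrete, here is a rough sketch using Python and boto3 rather than Spring Data JPA, purely to show the shape of the key condition; the table name MyTable and the PK/SK attribute names are the assumptions from the example above:

import boto3
from boto3.dynamodb.conditions import Key

# Assumed table and key names, matching the illustration above.
table = boto3.resource("dynamodb").Table("MyTable")

# Access pattern 1: all Chipotle records for August 2020.
august = table.query(
    KeyConditionExpression=Key("PK").eq("Vendors")
    & Key("SK").between("Chipotle~2020-08-00", "Chipotle~2020-08-31")
)

# Access pattern 2: all Chipotle records for 2020.
whole_year = table.query(
    KeyConditionExpression=Key("PK").eq("Vendors")
    & Key("SK").between("Chipotle~2020-01-01", "Chipotle~2020-12-31")
)

print(len(august["Items"]), len(whole_year["Items"]))

The same key condition can be expressed with whichever SDK you use; the important part is the between operation on the sort key.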

What are Pros and Cons in using prefixes and suffixes in PostgreSQL dialect for timestamp columns

I have analysed several articles about naming conventions for Date/Time types in SQL data models.
Most of them suggest a database design where a timestamp type is used only for registered event values, literally timestamping the event just when it happens, and they suggest a datetime type for any other time-instant needs. They also suggest avoiding suffixes and prefixes that match known data types, like date and time, at all costs, to avoid confusing data types with column names that are only meant to convey purpose.
But the PostgreSQL dialect does not have that datetime type at all, so there is only the timestamp type for all cases where plain date and time are not enough for a column that is expected to store a past or future instant of time.
So, basically, what prefixes or suffixes, if any, would you suggest for PostgreSQL columns, given that some of them would store past, present and future time instants? And why, for what benefits or because of what limitations?
Should we use timestamp and datetime as prefixes or suffixes to distinguish the purpose of different timestamp type columns by their names? Or would that be bad practice, since there is actually a data type named timestamp and no data type named datetime in the PostgreSQL dialect?
Or should we maybe use something very neutral, like the noun instant, as a prefix or suffix to denote the purpose of the column?

What is the purpose of the input output functions in Postgresql 9.2 user defined types?

I have been implementing user defined types in Postgresql 9.2 and got confused.
In the PostgreSQL 9.2 documentation, there is a section (35.11) on user defined types. In the third paragraph of that section, the documentation refers to input and output functions that are used to construct a type. I am confused about the purpose of these functions. Are they concerned with on-disk representation or only in-memory representation? In the section referred to above, after defining the input and output functions, it states that:
If we want to do anything more with the type than merely store it,
we must provide additional functions to implement whatever operations
we'd like to have for the type.
Do the input and output functions deal with serialization?
As I understand it, the input function is the one used to perform an INSERT INTO, and the output function is used to perform a SELECT on the type; so basically, if we want to perform an INSERT INTO, we need a serialization function embedded in or invoked by the input or output function. Can anyone help explain this to me?
Types must have a text representation, so that values of this type can be expressed as literals in a SQL query, and returned as results in output columns.
For example, '2013-01-20' is a text representation of a date. It's possible to write VALUES('2013-01-20'::date) in a SQL statement, because the input function of the date type recognizes this string as a date and transforms it into an internal representation (used both in memory and when storing to disk).
Conversely, when client code issues SELECT date_field FROM table, the values inside date_field are returned in their text representation, which is produced by the type's output function from the internal representation (unless the client requested a binary format for this column).
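As a small illustration of that round trip, here is a sketch in Python with psycopg2; the connection string is a placeholder, and the explicit ::text cast simply forces the server to run the date type's output function:

import psycopg2  # assumes a reachable PostgreSQL instance; connection details are placeholders

conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

# The literal below is parsed by the date type's input function into the
# internal representation, which can then be used in date arithmetic;
# the ::text cast runs the output function to turn it back into text.
cur.execute("SELECT '2013-01-20'::date + 1, ('2013-01-20'::date)::text")
next_day, as_text = cur.fetchone()
print(next_day, as_text)  # 2013-01-21 2013-01-20

conn.close()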

Dynamic WHERE Clause & SQL Injection

I need to create functionality for users to determine the WHERE criteria of a select - the criteria will be dynamic.
Is there a way I can achieve this without opening up my code to SQL injection?
I'm using C# / .NET Windows Application.
Using parameterized queries would go a long way toward protecting you from SQL injection attacks, because most bad things happen in the value portion of your where conditions.
For example, given a condition a=="hello" && b=="WORLD", do this:
select a,b,c,d
from table
where a=@pa and b=@pb -- this is generated dynamically
Then, bind @pa="hello" and @pb="WORLD", and run your query.
In C#, you would start with an in-memory representation of your where clause in hand, go through it element-by-element, and produce two output objects:
A string with the where clause, where constants are replaced by automatically generated parameter references @pa, @pb, and so on (use your favorite naming scheme for these blind parameters: the actual names do not matter)
A dictionary of name-value pairs, where the names correspond to the parameters that you've inserted in your where clause, and the values correspond to the constants that you pulled from the expression representation.
With these outputs in hand, you prepare your dynamic query using the string, add parameter values using the dictionary, and then execute the query against your RDBMS source.
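The same two outputs can be sketched language-neutrally; the snippet below uses Python only for brevity, and the names in it (build_where, the whitelists, the @pN parameter style) are illustrative assumptions, not an existing API:

# Sketch of the "string + dictionary" approach described above.
def build_where(conditions):
    # conditions: list of (column, operator, value) tuples; column and operator
    # must come from a trusted whitelist, never from raw user input.
    allowed_columns = {"a", "b", "Customer"}                 # hypothetical whitelist
    allowed_operators = {"=", "<>", "<", ">", "<=", ">="}

    clauses = []
    params = {}
    for i, (column, op, value) in enumerate(conditions):
        if column not in allowed_columns or op not in allowed_operators:
            raise ValueError("column or operator not in whitelist")
        name = f"p{i}"                                       # auto-generated parameter name
        clauses.append(f"{column} {op} @{name}")
        params[name] = value                                 # values only ever travel as parameters
    return "WHERE " + " AND ".join(clauses), params

where_sql, params = build_where([("a", "=", "hello"), ("b", "=", "WORLD")])
# where_sql -> "WHERE a = @p0 AND b = @p1"
# params    -> {"p0": "hello", "p1": "WORLD"}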
DO NOT DO THIS
select a,b,c,d
from table
where a='hello' and b='WORLD' -- This dynamic query is ripe for an injection attack
Ah, two phases. Given that your column names and operators are not direct user input (e.g. picked from a list or radio group), then
String WhereClause = String.Format("Where {0} {1} @{0}", "Customer", "=");
So now you have "Where Customer = @Customer".
Then you can add a parameter Customer and set it from the user input.
There are a few ways to attack this; it depends on how complex your criteria could be, though.