About Composite Index - Crate

When defining a composite index, e.g.
create table temptable (id integer, id2 integer, name string, INDEX ci using plain(id2, id));
The id and id2 columns are indexed in Elasticsearch as integers, but what I see from ES's _mapping for ci is:
"ci" : {
    "type" : "string",
    "analyzer" : "standard"
},
Where both id and id2 are copied to ci with type "string".
Can you explain more about this (e.g. whether the column order is preserved), and perhaps a bit more about the whole composite index feature in Crate Data?

You found 2 bugs by doing this which we'll try to fix asap. ;)
First, using a plain index type should result in the 'keyword' analyzer, not the 'standard' one.
Second, a composite index over two non-string columns shouldn't result in a string-typed column but, if supported, in one of the same type as the origin columns.
I wrote "if supported" because, for now, we'd forbid defining a composite index over non-string columns, as we don't know what this would be used for.
Our current match function implementation only supports string literals, so this function couldn't be used for querying the composite index.
Can you explain your use-case a bit?
Maybe creating an issue at github would make sense for this possible enhancement.
The order of the columns used for defining the composite index doesn't matter at all. In the case of strings, the values of both columns are analyzed and the resulting terms are inserted/merged into the target field.
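For reference, here is a minimal sketch of what a composite index over two string columns could look like, together with a match query. The table and column names are hypothetical, and the exact syntax may differ between Crate versions:
create table authors (
    first_name string,
    last_name string,
    INDEX full_name using fulltext(first_name, last_name) with (analyzer = 'standard')
);
-- terms from both columns are analyzed and merged into 'full_name'
select * from authors where match(full_name, 'douglas');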
Thanks for reporting!

Related

How to efficiently index fields with an identical (and long) prefix in PostgreSQL?

I’m working with identifiers in a rather unusual format: every single ID has the same prefix and the prefix consists of as many as 25 characters. The only thing that is unique is the last part of the ID string and it has a variable length of up to ten characters:
ID
----------------------------------
lorem:ipsum:dolor:sit:amet:12345
lorem:ipsum:dolor:sit:amet:abcd123
lorem:ipsum:dolor:sit:amet:efg1
I’m looking for advice on the best strategy around indexing and matching this kind of ID string in PostgreSQL.
One approach I have considered is basically cutting these long prefixes out and only storing the unique suffix in the table column.
Another option that comes to mind is only indexing the suffix:
CREATE INDEX ON books (substring(book_id FROM 26));
I don’t think this is the best idea though as you would need to remember to always strip out the prefix when querying the table. If you forgot to do it and had a WHERE book_id = '<full ID here>' filter, the index would basically be ignored by the planner.
In most cases I create an integer ID for my tables even if there is already a unique string field. To recommend the best approach for you, I would need to see all of the queries that hit this table. If you are already using substring(book_id FROM 26) in your WHERE clauses, then creating an expression index (function-based index) on that expression is the best option. In general, check the join conditions in your queries and which fields appear in WHERE clauses; from that you can plan the right set of indexes. If your joins use the unique trailing part of the ID, then it is best either to extract those unique trailing characters into an additional column, or to create an expression index on the extracting function.
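To illustrate the expression-index approach, here is a minimal sketch (the index name is made up); note that the planner can only use the index when the query repeats the exact indexed expression:
CREATE INDEX books_suffix_idx ON books (substring(book_id FROM 26));

-- can use the expression index: the WHERE clause repeats the indexed expression
SELECT * FROM books WHERE substring(book_id FROM 26) = '12345';

-- cannot use it: the full ID is compared against the plain column
SELECT * FROM books WHERE book_id = 'lorem:ipsum:dolor:sit:amet:12345';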

Multifield in Elastic4s 5.x

I'm currently using Elastic4s v5.0, which still has the multifield type used to index a field in more than one way.
elasticClient.execute(
  createIndex("foo") mappings (
    mapping("bar").as(
      multiField("baz").as(
        textField("baz") analyzer myAnalyzer,
        textField("original") index NotAnalyzed
      )
    )
  )
)
However, I get the following error:
No handler for type [multi_field] declared on field []
The answer to ElasticSearch 5: MapperParserException with multi_field and the documentation at https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html say to use "fields" instead, but I cannot find how to do this in elastic4s.
In Elasticsearch, any multi field has a primary field, which is kind of like a parent field, and then it has secondary fields. The primary field (primary and secondary are my terminology, by the way) is accessed as a, and the secondary fields are accessed as a.b, a.c, and so on.
This might not be how you would first imagine a multi field to be, because you might just think of a, b, c as siblings in a kind of sequence. So it's worth understanding this.
In elastic4s, you can just use .fields on any field you want, and those fields will be combined with the parent to become a multi field. Your example re-written would be:
client.execute {
  createIndex("foo").mappings(
    mapping("bar").fields(
      textField("baz").fields(
        textField("inner1") analyzer PatternAnalyzer,
        textField("inner2") index NotAnalyzed
      )
    )
  )
}
Note that as is an alias for fields; I think fields is more readable, so I used it here.

PostgreSQL queries treat ints as string datatypes

I store the following rows in my table ('DataScreen') under a JSONB column ('Results')
{"Id":11,"Product":"Google Chrome","Handle":3091,"Description":"Google Chrome"}
{"Id":111,"Product":"Microsoft Sql","Handle":3092,"Description":"Microsoft Sql"}
{"Id":22,"Product":"Microsoft OneNote","Handle":3093,"Description":"Microsoft OneNote"}
{"Id":222,"Product":"Microsoft OneDrive","Handle":3094,"Description":"Microsoft OneDrive"}
Here, in these JSON objects, "Id" and "Handle" are integer properties while the others are string properties.
When I query my table like below
Select Results->>'Id' From DataScreen
order by Results->>'Id' ASC
I get improper results because PostgreSQL treats everything as a text column and hence orders according to the text, not the integer.
Hence it gives the result as
11,111,22,222
instead of
11,22,111,222.
I don't want to use explicit casting to retrieve like below
Select Results->>'Id' From DataScreen order by CAST(Results->>'Id' AS INT) ASC
because I cannot be sure of the column's datatype: the JSON structure is dynamic and the keys and values may change next time, so the same could happen with another JSON document that mixes integer and string values.
I want integers in the JSON structure of the JSONB column to be treated as integers only, and not as text (strings).
How do I write my query so that Id and Handle are retrieved as integer values and not as strings, without explicit casting?
I think your assumptions about the Id field don't make sense. You said:
(a) Either id contains integers only or
(b) it contains strings and integers.
I'd say,
If (a) then numerical ordering is correct.
If (b) then lexical ordering is correct.
But if (a) holds for some time and then (b), the correct order changes, too. And that doesn't make sense. Imagine:
For the current database you expect the order 11,22,111,222. Then you add a row
{"Id":"aa","Product":"Microsoft OneDrive","Handle":3095,"Description":"Microsoft OneDrive"}
and suddenly the correct order of the other rows changes to 11,111,22,222,aa. That sudden change is what bothers me.
So I would either expect a lexical ordering ab initio, or restrict my Id field to integers and use explicit casting.
Every other option I can think of is just not practical. You could, for example, create a custom < and > implementation for your Id field which results in 11,22,111,222,aa ("order all integers by numerical value and all strings by lexical order, and put all integers before the strings").
But that is a lot of work (it involves a custom data type, a custom cast function and a custom operator function) and yields some counterintuitive results, e.g. 11,22,111,222,0a,1a,2a,aa (note the position of 0a and so on: they come after 222).
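If you do restrict the field but still need the query to survive the occasional string value, here is a minimal sketch using jsonb_typeof (available since PostgreSQL 9.4) that casts only the numeric values and pushes strings to the end, roughly matching the custom ordering described above:
SELECT Results->>'Id' AS id
FROM DataScreen
ORDER BY CASE WHEN jsonb_typeof(Results->'Id') = 'number'
              THEN (Results->>'Id')::int END ASC,  -- NULL for non-numbers, sorted last
         Results->>'Id' ASC;                       -- lexical tie-break for the strings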
Hope that helps ;)
If Id is always an integer, you can cast it in the select list and just use ORDER BY 1:
select (Results->>'Id')::int From DataScreen order by 1 ASC
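If avoiding the cast entirely matters more than the exact order of mixed types, one alternative sketch is to sort on the jsonb value itself with -> instead of ->>. PostgreSQL defines a btree ordering for jsonb in which numbers compare numerically (mixed-type values won't raise an error, but strings and numbers are grouped separately rather than interleaved):
Select Results->>'Id' From DataScreen
order by Results->'Id' ASC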

PostgreSQL Data Type

Can someone advise me on the SQL data type that should be used for a DICOM UID, with 1.2.840.113986.3.2702661254.20150220.144310.372.4424 as a sample? I would like to use it as a primary key as well.
There are two options available here: either use a less-than-ideal data type which already exists, of which "text" is almost certainly the best option, or implement a custom data type for this particular kind of data.
While the best built-in option is "text", looking at the example provided, you would likely get significant performance and space benefits from using a custom data type, though it would require writing code to implement it.
A final option to consider is to use a surrogate key for that data. To do this, you would build a table which contains a "bigserial" column and a "text" column. The "text" column would hold the long form of the value as shown above, and the "bigserial" column would provide an integer (64-bit with bigserial, 32-bit if you use "serial" instead) which you would then use in all of your other tables instead of the long form.
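A minimal sketch of that surrogate-key table (the table and column names are hypothetical):
CREATE TABLE dicom_uids (
    id  bigserial PRIMARY KEY,  -- compact 64-bit key referenced from other tables
    uid text NOT NULL UNIQUE    -- the long-form DICOM UID shown above
);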

How to design an HBase schema?

Suppose that I have this RDBMS table (entity-attribute-value model):
col1: entityID
col2: attributeName
col3: value
and I want to use HBase due to scaling issues.
I know that the only way to access an HBase table is via a primary key (cursor): you can get a cursor for a specific key and iterate the rows one by one.
The issue is that, in my case, I want to be able to query on all 3 columns.
For example:
for a given entityID I want to get all its attributes and values
for a given attributeName and value I want all the entityIDs
...
So one idea I had is to build one HBase table that holds the data (table DATA, with entityID as the primary index), and two "index" tables, one with attributeName as its primary key and the other with value.
Each index table would hold a list of pointers (entityIDs) into the DATA table.
Is this a reasonable approach, or is it an 'abuse' of HBase concepts?
In this blog the author says:
HBase allows get operations by primary key and scans (think: cursor) over row ranges. (If you have both scale and need of secondary indexes, don't worry - Lucene to the rescue! But that's another post.)
Do you know how Lucene can help?
-- Yonatan
Secondary indexes would indeed be useful for many potential applications of HBase, and I believe the developers are in fact looking at it. Check out http://www.mail-archive.com/hbase-dev#hadoop.apache.org/msg04801.html.
In the meantime, though, if your application's data storage can be modelled as a star schema (see http://en.wikipedia.org/wiki/Star_schema), you might like to check out the solution that Hypertable proposes for secondary index-type needs: http://markmail.org/message/rphm4q6cbar2ycgp
I recommend having two different flat tables: one for looking up attributes+values given entityID, and one for looking up the entityID given attributes+values.
Table 1 would look like this:
entityID1 {
    attribute1: value1;
    attribute2: value2;
    ...
}
and Table 2:
attribute1_value1 {
    entityID1;
}
attribute2_value2 {
    entityID1;
}