Chaining JSON Paths in PostgreSQL

I am exposing JSONPaths for advanced queries in my application, and sometimes it would be really convenient to chain expressions as I'd do for example with jq. Example:
{
"foo": [
{
"bar": "bar",
"from": 10,
"to": 20
}
]
}
$.foo[*] ? (@.bar == "bar") | @.to - @.from
Now I see that the | operator as used above is not implemented, and in pure SQL I could store the result of the first part in a table / variable. However, in the application I pass the JSONPath to jsonb_path_query, and it would be convenient if I didn't have to chain the invocations manually but could just let PostgreSQL handle it all in one pre-baked statement. Is there anything that would let me do that?
EDIT: I could do that using e.g.
SELECT jsonb_path_query((
SELECT jsonb_path_query('{ ... }', '$.foo[*] ? (@.bar == "bar")')
), '$.to - $.from');
but that would mean that I'd need to split the JSONPath query (user input) - exactly what I don't want to do in the app.
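Since jsonb_path_query is set-returning, I could also put the first call in FROM and chain in a single statement (a sketch, with the example document inlined; it avoids the multiple-row problem of the scalar subquery above, but it still means splitting the path in the app):
SELECT jsonb_path_query(hit, '$.to - $.from')
FROM jsonb_path_query(
    '{"foo": [{"bar": "bar", "from": 10, "to": 20}]}'::jsonb,
    '$.foo[*] ? (@.bar == "bar")'
) AS hit;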

Related

How to search a nested object using zq?

Given this input in a file called plants.json
{
"flower": { "rose": 1 },
"tree": { "spruce": 1, "oak": 2, "oaky": 3 }
}
Filter it with zq so that it looks like this:
{
"flower": { "rose": 1 },
"tree": { "oak": 2, "oaky": 3 }
}
Effectively filtering a known and named nested object at the key tree to match oak.
Use zq's put operator (the :=) to redefine tree as a filtered search.
zq -f json 'tree:=unflatten((over tree | oak))' plants.json
{"flower":{"rose":1},"tree":{"oak":2,"oaky":3}}
This does a simple search on keys and values. If you want to match more precisely, you can use other functions where | oak is.
For example, if you have a document that also includes "soak":4
{
"flower": { "rose": 1 },
"tree": { "spruce": 1, "oak": 2, "oaky": 3, "soak": 4 }
}
Then this won't work, because you'll get oak, oaky, and soak. So use grep() and a regex for this case. This example demonstrates how to add the grep function to our previous command.
zq -f json 'tree:=unflatten((over tree | grep(/^oak/)))' plants.json
{"flower":{"rose":1},"tree":{"oak":2,"oaky":3}}
Note that tree:=unflatten(...) is the same as saying put tree:=unflatten(...), because put is an implied operator, just as oak is implied shorthand for search oak. Some of the links on zq's doc site are currently broken, but this is covered in the 2.6 language overview section.

ADF: use the output from a lookup activity on another activity in Data Factory

I have a lookup activity (Get_ID) that returns:
{
"count": 2,
"value": [
{
"TRGT_VAL": "10000"
},
{
"TRGT_VAL": "52000"
}
],
(...)
I want to use these 2 values from TRGT_VAL in a WHERE clause of a query in another activity. I'm using
@concat('SELECT * FROM table WHERE column in ',activity('Get_ID').output.value[0].TRGT_VAL)
But only the first value of 10000 is being taken into account. How to get the whole list?
I solved it by using a lot of replaces:
@concat('(',replace(replace(replace(replace(replace(replace(replace(string(activity('Get_ID').output.value),'{',''),' ',''),'"',''),'TRGT_VAL:',''),'[',''),'}',''),']',''),')')
Output
{
"name": "AptitudeCF",
"value": "(10000,52000)"
}
Instead of using a big expression with lots of replace functions, you can use string interpolation syntax to frame your query. Below is a query you can consider.
SELECT * FROM table WHERE column in (@{activity('Get_ID').output.value[0].TRGT_VAL},@{activity('Get_ID').output.value[1].TRGT_VAL})

Elasticsearch high level rest client more than 1 field search

I am using Scala 2.12 to query the ElasticSearch (6.5).
I am able to use querybuilders for a single field search like below:
val searchSourceBuilder = new SearchSourceBuilder()
val qb = new BoolQueryBuilder()
.must(QueryBuilders.regexpQuery("header.fieldname", "01_.+_20190711_data"))
searchSourceBuilder.query(qb)
Using the above (I need regex search) I can search the relevant documents.
However, I have a more complex requirement, where I have to match the documents on more than one field-value pair.
i.e.
header.fieldname should match pattern "01_.+data"
AND
header.fieldname2 should match pattern "type.+_2019-07-11"
Basically, it is like a SQL WHERE clause on 2 or more columns (with value strings).
I was checking https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-multi-match-query.html
But this is like searching the same string (value) in multiple fields. This is NOT what I want.
I basically want something like SQL AND in where clause (better if it is with regex too).
UPDATE:
Please note that the answer below by @Meet Rathod works and is accepted.
However, to take it forward: if I need one more condition, a SQL OR, is my code below correct?
Required:
header.fieldname: 01_.+data AND header.fieldname2: type.+_2019-07-11 AND (header.fieldname3: some_thing OR header.fieldname3: some_other_thing)
Code:
val qb = new BoolQueryBuilder()
.must(QueryBuilders.regexpQuery("header.fieldname", "01_.+_20190711_data"))
.must(QueryBuilders.regexpQuery("header.fieldname2", "type.+_2019-07-11"))
.should(QueryBuilders.regexpQuery("header.fieldname3", "some_thing"))
.should(QueryBuilders.regexpQuery("header.fieldname3", "some_other_thing"))
Is this correct, or am I missing something?
As far as I understand, you want only those documents that satisfy all your conditions to be listed in the result. If that's the case, I believe adding another must clause to your query should get you the expected result. The raw query will look something like this.
{
"query": {
"bool": {
"must": [
{
"regexp": {
"header.fieldname": "01_.+data"
}
},
{
"regexp": {
"header.fieldname2": "type.+_2019-07-11"
}
}
]
}
}
}
I'm not sure, but your Scala code should look something like this.
val qb = new BoolQueryBuilder()
.must(QueryBuilders.regexpQuery("header.fieldname", "01_.+_20190711_data"))
.must(QueryBuilders.regexpQuery("header.fieldname2", "type.+_2019-07-11"))
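For the OR in your update: be aware that when a bool query also contains must clauses, its should clauses become optional by default, so the usual pattern is to wrap the OR branch in a nested bool with minimumShouldMatch. A sketch (untested against 6.5; the val name is mine):
// Nested bool for the OR branch: at least one should clause has to match.
val fieldname3Either = new BoolQueryBuilder()
  .should(QueryBuilders.regexpQuery("header.fieldname3", "some_thing"))
  .should(QueryBuilders.regexpQuery("header.fieldname3", "some_other_thing"))
  .minimumShouldMatch(1)

val qb = new BoolQueryBuilder()
  .must(QueryBuilders.regexpQuery("header.fieldname", "01_.+_20190711_data"))
  .must(QueryBuilders.regexpQuery("header.fieldname2", "type.+_2019-07-11"))
  .must(fieldname3Either)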

Sanitize object literal in javascript? [duplicate]

It seems Mongo does not allow insertion of keys with a dot (.) or dollar sign ($); however, when I imported a JSON file that contained a dot using the mongoimport tool, it worked fine. The driver is complaining when trying to insert that element.
This is what the document looks like in the database:
{
"_id": {
"$oid": "..."
},
"make": "saab",
"models": {
"9.7x": [
2007,
2008,
2009,
2010
]
}
}
Am I doing this all wrong and should not be using hash maps like that with external data (i.e. the models), or can I escape the dot somehow? Maybe I am thinking too much in JavaScript terms.
MongoDB doesn't support keys with a dot in them so you're going to have to preprocess your JSON file to remove/replace them before importing it or you'll be setting yourself up for all sorts of problems.
There isn't a standard workaround to this issue, the best approach is too dependent upon the specifics of the situation. But I'd avoid any key encoder/decoder approach if possible as you'll continue to pay the inconvenience of that in perpetuity, where a JSON restructure would presumably be a one-time cost.
As mentioned in other answers, MongoDB does not allow $ or . characters as map keys due to restrictions on field names. However, as mentioned in Dollar Sign Operator Escaping, this restriction does not prevent you from inserting documents with such keys; it just prevents you from updating or querying them.
The problem of simply replacing . with [dot] or U+FF0E (as mentioned elsewhere on this page) is, what happens when the user legitimately wants to store the key [dot] or U+FF0E?
An approach that Fantom's afMorphia driver takes is to use unicode escape sequences similar to those of Java, but ensuring the escape character is escaped first. In essence, the following string replacements are made (*):
\ --> \\
$ --> \u0024
. --> \u002e
A reverse replacement is made when map keys are subsequently read from MongoDB.
Or in Fantom code:
Str encodeKey(Str key) {
return key.replace("\\", "\\\\").replace("\$", "\\u0024").replace(".", "\\u002e")
}
Str decodeKey(Str key) {
return key.replace("\\u002e", ".").replace("\\u0024", "\$").replace("\\\\", "\\")
}
The only time a user needs to be aware of such conversions is when constructing queries for such keys.
Given it is common to store dotted.property.names in databases for configuration purposes I believe this approach is preferable to simply banning all such map keys.
(*) afMorphia actually performs full / proper unicode escaping rules as mentioned in Unicode escape syntax in Java but the described replacement sequence works just as well.
The latest stable version (v3.6.1) of MongoDB does now support dots (.) in keys or field names.
Field names can contain dots (.) and dollar ($) characters now
The Mongo docs suggest replacing illegal characters such as $ and . with their unicode equivalents.
In these situations, keys will need to substitute the reserved $ and . characters. Any character is sufficient, but consider using the Unicode full width equivalents: U+FF04 (i.e. “＄”) and U+FF0E (i.e. “．”).
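As a sketch in JavaScript (the substitution characters are the full-width forms U+FF04 and U+FF0E from the docs quote above):
// Encode reserved characters as their full-width equivalents before insert,
// and reverse the substitution after reading the keys back.
function encodeKey(k) { return k.replace(/\$/g, '\uFF04').replace(/\./g, '\uFF0E'); }
function decodeKey(k) { return k.replace(/\uFF04/g, '$').replace(/\uFF0E/g, '.'); }
encodeKey('price.usd'); // 'price．usd'
Note this inherits the caveat raised above: a key that legitimately contains U+FF04 or U+FF0E will decode incorrectly.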
A solution I just implemented that I'm really happy with involves splitting the key name and value into two separate fields. This way, I can keep the characters exactly the same, and not worry about any of those parsing nightmares. The doc would look like:
{
...
keyName: "domain.com",
keyValue: "unregistered",
...
}
You can still query this easily enough, just by doing a find on the fields keyName and keyValue.
So instead of:
db.collection.find({"domain.com":"unregistered"})
which wouldn't actually work as expected, you would run:
db.collection.find({keyName:"domain.com", keyValue:"unregistered"})
and it will return the expected document.
You can try using a hash of the key instead of the key itself, and then store the original key inside the JSON value.
var crypto = require("crypto");
function md5(value) {
return crypto.createHash('md5').update( String(value) ).digest('hex');
}
var data = {
"_id": {
"$oid": "..."
},
"make": "saab",
"models": {}
}
var version = "9.7x";
data.models[ md5(version) ] = {
"version": version,
"years" : [
2007,
2008,
2009,
2010
]
}
You would then access the models using the hash later.
var version = "9.7x";
collection.find( { _id : ...}, function(e, data) {
    var models = data.models[ md5(version) ];
});
It is supported now
MongoDB 3.6 onwards supports both dots and dollars in field names.
See the JIRA below: https://jira.mongodb.org/browse/JAVA-2810
Upgrading your MongoDB to 3.6+ sounds like the best way to go.
You'll need to escape the keys. Since it seems most people don't know how to properly escape strings, here are the steps:
Choose an escape character (best to choose a character that's rarely used), e.g. '~'.
To escape, first replace all instances of the escape character with some sequence prepended with your escape character (e.g. '~' -> '~s'), then replace whatever character or sequence you need to escape with some sequence prepended with your escape character, e.g. '.' -> '~p'.
To unescape, first convert all instances of your second escape sequence back (e.g. '~p' -> '.'), then transform your escape-character sequence back to a single escape character (e.g. '~s' -> '~').
Also, remember that mongo doesn't allow keys to start with '$', so you have to do something similar there.
Here's some code that does it:
// returns an escaped mongo key
exports.escape = function(key) {
return key.replace(/~/g, '~s')
.replace(/\./g, '~p')
.replace(/^\$/g, '~d')
}
// returns an unescaped mongo key
exports.unescape = function(escapedKey) {
return escapedKey.replace(/^~d/g, '$')
.replace(/~p/g, '.')
.replace(/~s/g, '~')
}
From the MongoDB docs "the '.' character must not appear anywhere in the key name". It looks like you'll have to come up with an encoding scheme or do without.
A late answer, but if you use Spring and Mongo, Spring can manage the conversion for you with MappingMongoConverter. It's the solution by JohnnyHK but handled by Spring.
@Autowired
private MappingMongoConverter converter;
@PostConstruct
public void configureMongo() {
    converter.setMapKeyDotReplacement("xxx");
}
If your stored JSON is:
{ "axxxb" : "value" }
Through Spring (MongoClient) it will be read as :
{ "a.b" : "value" }
As another user mentioned, encoding/decoding this can become problematic in the future, so it's probably just easier to replace all keys that have a dot. Here's a recursive function I made to replace keys that contain '.' occurrences:
import json
from pprint import pprint

def mongo_jsonify(dictionary):
    new_dict = {}
    if type(dictionary) is dict:
        for k, v in dictionary.items():
            new_k = k.replace('.', '-')
            if type(v) is dict:
                new_dict[new_k] = mongo_jsonify(v)
            elif type(v) is list:
                new_dict[new_k] = [mongo_jsonify(i) for i in v]
            else:
                new_dict[new_k] = dictionary[k]
        return new_dict
    else:
        return dictionary

if __name__ == '__main__':
    with open('path_to_json', "r") as input_file:
        d = json.load(input_file)
    d = mongo_jsonify(d)
    pprint(d)
You can modify this code to replace '$' too, as that is another character that mongo won't allow in a key.
I use the following escaping in JavaScript for each object key:
key.replace(/\\/g, '\\\\').replace(/^\$/, '\\$').replace(/\./g, '\\_')
What I like about it is that it replaces only $ at the beginning, and it does not use unicode characters, which can be tricky to use in the console. To me, _ is much more readable than a unicode character. It also does not replace one set of special characters ($, .) with another (unicode), but properly escapes them with a traditional \.
Not perfect, but will work in most situations: replace the prohibited characters by something else. Since it's in keys, these new chars should be fairly rare.
/** This will replace \ with ⍀, ^$ with ₴ and dots with ⋅ to make the object compatible for MongoDB insert.
Caveats:
1. If you have any of ⍀, ₴ or ⋅ in your original documents, they will be converted back to \, $ and . upon decoding.
2. Recursive structures are always an issue. A cheap way to prevent a stack overflow is by limiting the number of levels. The default max level is 10.
*/
encodeMongoObj = function(o, level = 10) {
var build = {}, key, newKey, value
//if (typeof level === "undefined") level = 20 // default level if not provided
for (key in o) {
value = o[key]
if (typeof value === "object") value = (level > 0) ? encodeMongoObj(value, level - 1) : null // If this is an object, recurse if we can
newKey = key.replace(/\\/g, '⍀').replace(/^\$/, '₴').replace(/\./g, '⋅') // replace special chars prohibited in mongo keys
build[newKey] = value
}
return build
}
/** This will decode an object encoded with the above function. We assume the structure is not recursive since it should come from Mongodb */
decodeMongoObj = function(o) {
var build = {}, key, newKey, value
for (key in o) {
value = o[key]
if (typeof value === "object") value = decodeMongoObj(value) // If this is an object, recurse
newKey = key.replace(/⍀/g, '\\').replace(/^₴/, '$').replace(/⋅/g, '.') // replace special chars prohibited in mongo keys
build[newKey] = value
}
return build
}
Here is a test:
var nastyObj = {
"sub.obj" : {"$dollar\\backslash": "$\\.end$"}
}
nastyObj["$you.must.be.kidding"] = nastyObj // make it recursive
var encoded = encodeMongoObj(nastyObj, 1)
console.log(encoded)
console.log( decodeMongoObj( encoded) )
and the results - note that the values are not modified:
{
sub⋅obj: {
₴dollar⍀backslash: "$\\.end$"
},
₴you⋅must⋅be⋅kidding: {
sub⋅obj: null,
₴you⋅must⋅be⋅kidding: null
}
}
[12:02:47.691] {
"sub.obj": {
$dollar\\backslash: "$\\.end$"
},
"$you.must.be.kidding": {
"sub.obj": {},
"$you.must.be.kidding": {}
}
}
There is an ugly way to query it; it's not recommended for use in application code, but rather for debug purposes (it works only on embedded objects):
db.getCollection('mycollection').aggregate([
{$match: {mymapfield: {$type: "object" }}}, //filter objects with right field type
{$project: {mymapfield: { $objectToArray: "$mymapfield" }}}, //"unwind" map to array of {k: key, v: value} objects
{$match: {mymapfield: {k: "my.key.with.dot", v: "myvalue"}}} //query
])
For PHP I substitute the HTML entity for the period. That's &#46;.
It stores in MongoDB like this:
"validations" : {
"4e25adbb1b0a55400e030000" : {
"associate" : "true"
},
"4e25adb11b0a55400e010000" : {
"associate" : "true"
}
}
and the PHP code...
$entry = array('associate' => $associate);
$data = array( '$set' => array( 'validations.' . str_replace(".", "&#46;", $validation) => $entry ));
$newstatus = $collection->update($key, $data, $options);
Lodash pairs will allow you to change
{ 'connect.sid': 's:hyeIzKRdD9aucCc5NceYw5zhHN5vpFOp.0OUaA6' }
into
[ [ 'connect.sid',
's:hyeIzKRdD9aucCc5NceYw5zhHN5vpFOp.0OUaA6' ] ]
using
var newObj = _.pairs(oldObj);
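To rebuild the object after reading the pairs back, lodash has the inverse as well (if I recall correctly, _.zipObject accepts a pairs array in lodash 3, and lodash 4 renames this to _.fromPairs):
var oldObj = _.zipObject(newObj); // { 'connect.sid': 's:hyeIzKRdD9aucCc5NceYw5zhHN5vpFOp.0OUaA6' }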
You can store it as-is and convert it to a pretty form afterwards.
I wrote this example in LiveScript. You can use the livescript.net website to eval it.
test =
field:
field1: 1
field2: 2
field3: 5
nested:
more: 1
moresdafasdf: 23423
field3: 3
get-plain = (json, parent)->
| typeof! json is \Object => json |> obj-to-pairs |> map -> get-plain it.1, [parent,it.0].filter(-> it?).join(\.)
| _ => key: parent, value: json
test |> get-plain |> flatten |> map (-> [it.key, it.value]) |> pairs-to-obj
It will produce
{"field.field1":1,
"field.field2":2,
"field.field3":5,
"field.nested.more":1,
"field.nested.moresdafasdf":23423,
"field3":3}
My tip: you can use JSON.stringify to save an Object/Array whose key names contain dots, then parse the string back to an Object with JSON.parse for processing when you get the data from the database.
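A sketch of the idea (doc stands for whatever document you are saving):
// Store the dotted-key object as a plain string...
doc.models = JSON.stringify({ "9.7x": [2007, 2008, 2009, 2010] });
// ...and parse it back after fetching the document.
var models = JSON.parse(doc.models);
The trade-off is that the inner keys are no longer directly queryable in MongoDB.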
Another workaround:
Restructure your schema like:
key : {
"keyName": "a.b"
"value": [Array]
}
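Queries then target the two fields explicitly, for example:
db.collection.find({ "key.keyName": "a.b" })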
The latest MongoDB does support keys with a dot, but the Java MongoDB driver does not. So to make it work in Java, I pulled the code from the java-mongo-driver GitHub repo, made changes accordingly in their isValid key function, created a new jar out of it, and am using it now.
Replace the dot (.) or dollar ($) with other characters that will never be used in the real document, and restore the dot (.) or dollar ($) when retrieving the document. This strategy won't influence the data that users read.
You can select the characters from all available characters.
The strange thing is, using mongojs, I can create a document with a dot if I set the _id myself; however, I cannot create a document when the _id is generated:
Does work:
db.testcollection.save({"_id": "testdocument", "dot.ted.": "value"}, (err, res) => {
console.log(err, res);
});
Does not work:
db.testcollection.save({"dot.ted": "value"}, (err, res) => {
console.log(err, res);
});
I first thought that updating a document with a dot key also worked, but it's identifying the dot as a subkey!
Seeing how mongojs handles the dot (subkey), I'm going to make sure my keys don't contain a dot.
As @JohnnyHK has mentioned, do remove punctuation or '.' from your keys, because it will create much larger problems when your data starts to accumulate into a larger dataset. This causes problems especially when you call aggregation operators like $merge, which require accessing and comparing keys and will throw an error. I have learnt it the hard way; please don't repeat this if you are starting out.
In our case the properties with the period are never queried by users directly; however, they can be created by users.
So we serialize our entire model first and string-replace all instances of the specific fields. Our period fields can show up in many locations, and the structure of the data is not predictable.
var dataJson = serialize(dataObj);
foreach (var pf in periodFields)
{
    var encodedPF = pf.Replace(".", "ENCODE_DOT");
    dataJson = dataJson.Replace(pf, encodedPF);
}
Then later, after our data is flattened, we replace instances of the encoded field so we can write the decoded version to our files.
Nobody will ever need a field named ENCODE_DOT so it will not be an issue in our case.
The result is the following
color.one will be in the database as colorENCODE_DOTone
When we write our files we replace ENCODE_DOT with .
/home/user/anaconda3/lib/python3.6/site-packages/pymongo/collection.py
I found this path in the error messages. If you use Anaconda (find the corresponding file if not), simply change check_keys = True to False in the file stated above. That'll work!

Doing an "ORDER BY ... LIMIT ..." style query on a hash in KRL

Say I have a hash with a list of delivery drivers (for the classic flower shop scenario). Each driver has a rating and an event signal URL (ESL). I want to raise an event only to the top three drivers in that list, sorted by ranking.
With a relational database, I'd run a query like this:
SELECT esl FROM driver ORDER BY ranking LIMIT 3;
Is there a way to do this in KRL? There are two requirements:
A way to sort the hash
A way to limit the number of times a foreach iterates
The second could be solved like this:
rule reset_counter {
select when rfq delivery_ready
noop();
always {
clear ent:loop_counter;
raise explicit event loop_drivers;
}
}
rule loop_on_drivers {
select when explicit loop_drivers
foreach app:drivers setting (driver)
pre {
esl = driver.pick("$.esl");
}
if (ent:loop_counter < 3) then {
// Signal the driver's ESL
}
always {
ent:loop_counter += 1 from 0;
}
}
But that's kind of kludgy. Is there a more KRL-ish way to do it? And how should I solve the ordering problem?
EDIT: Here's the format of the app:drivers array, to make the question easier to answer:
[
{
"id": "1",
"rating": "5",
"esl": "http://example.com/esl"
},
{
"id": "2",
"rating": "3",
"esl": "http://example.com/esl2"
}
]
Without knowing the form of the hash it's impossible to give you a specific answer, but you can use the sort operator to sort and then use the pick operator to select from the hash.
Something like
driver_data.sort(function(){...}).pick("$..something[:2]")
"something" is the name from the hash of the relevant field.