Is this JSON oddball? - SwiftyJSON - Swift

I got the unusual JSON (actually from IBM Bluemix) shown below.
Thank goodness, trusty and heartwarming SwiftyJSON was able to get the values, like this:
let mauves = json["blue"][0]["brown"][0]["mauve"]
However, notice the weird sort of "empty unnamed array nested things" in the JSON (hence the [0] subscripts passed to Swifty).
My question, in short:
is this valid JSON?
Even if valid, is it "crappy"? Or am I wrong and it's totally idiomatic? (Maybe I've just been dating the wrong services for decades, I don't know.)
I appreciate that running it through online validators seems to say "valid" (except that this one, http://json.parser.online.fr, shows red marks), but, you know, who trusts online services? Ask the experts on SO...
--
{
  "red" : 1,
  "green" : 4,
  "blue" : [
    {
      "yellow" : "word",
      "brown" : [
        {
          "orange" : "1826662593",
          "gold" : "23123",
          "mauve" : [
            {
              "a" : "Beagle",
              "b" : 0.979831
            },
            {
              "a" : "Chow",
              "b" : 0.937588
            },
            {
              "a" : "Hound",
              "b" : 0.987798
            }
          ]
        }
      ]
    }
  ]
}
--

The JSON is valid. The blue member contains an array with one element (at index [0]), which is the object whose first key is yellow; the same pattern repeats for brown, whose single element is the object containing orange, gold, and mauve.
When I paste it into json.parser.online.fr it reports it as valid for me - are you accidentally including other text around it?

The JSON is perfectly valid - your validators are not lying to you. I don't know if this JSON contains real keys, or if the names have been changed to protect the innocent (it certainly looks like nonsense), but in real-world situations there are frequently arrays that contain only one element (because they might contain zero or many elements!).

Related

search phrase or words in document with timestamped words

I've been trying to do this for some days; I guess it's time to ask for a little help.
I'm using Elasticsearch 6.6 (I believe it could be upgraded if needed) and NEST for C# on .NET 5.
The task is to create an index where the documents are the result of speech-to-text recognition, and every recognized word has a timestamp (so that the timestamp can be used to find where the word is spoken in the original file). There are 1000+ texts from media files, and every file is 4 hours long (which usually means 5000~15000 words).
The main idea was to split every text into 3-second segments, create a document with the words in each segment, and index it so that it can be searched.
I thought that would not work very well, so the next idea was to create a document for every window of 10~12 words, scanning the text and jumping 2 words at a time, so that a search could at least match a decent phrase and have highlighting of the hits too.
Since that is still far from perfect, I thought it would be nice to index every whole text as a document to maintain its coherency; the problem is the timestamp associated with every word. To keep this relationship I tried to use nested objects in the document:
PUT index-tapes-nested
{
  "mappings" : {
    "_doc" : {
      "properties" : {
        "$type" : { "type" : "text" },
        "ContentId" : { "type" : "long" },
        "Inserted" : { "type" : "date" },
        "TrackId" : { "type" : "long" },
        "Words" : {
          "type" : "nested",
          "properties" : {
            "StartMillisec" : { "type" : "integer" },
            "Word" : { "type" : "text" }
          }
        }
      }
    }
  }
}
This kinda works, but I don't know exactly how to write the query to search the index.
A very basic query could be, for example:
GET index-tapes-nested/_search
{
  "query" : {
    "nested" : {
      "path" : "Words",
      "score_mode" : "avg",
      "query" : {
        "match" : {
          "Words.Word" : "a bunch of things"
        }
      },
      "inner_hits" : {}
    }
  }
}
but something like that, especially with the avg scoring, gives low-quality results; the right document may be among the hits, but it doesn't take word order into account, so the result is neither certain nor clear.
As far as I understand it, span_near should come in handy in these situations, but I get no results:
GET index-tapes-nested/_search
{
  "query" : {
    "nested" : {
      "path" : "Words",
      "score_mode" : "avg",
      "query" : {
        "span_near" : {
          "clauses" : [
            { "span_term" : { "Words.Word" : "bunch" }},
            { "span_term" : { "Words.Word" : "of" }},
            { "span_term" : { "Words.Word" : "things" }}
          ],
          "slop" : 2,
          "in_order" : true
        }
      }
    }
  }
}
I don't know much about Elasticsearch; maybe I should change my approach and change the model, or maybe rewriting the query is enough. I don't know, and this is pretty time consuming, so any help is really appreciated (is this a fairly common task?). For the sake of brevity I'm cutting some stuff and some ideas; I'm happy to provide some data or other examples if needed.
I also had problems with the C# NEST client in managing the nested index, but that is another story.
This could be interpreted in a few ways, I guess: having something like an "alternative stream" for a field, or metadata for every word, and so on. What I needed was this: https://github.com/elastic/elasticsearch/issues/5736 but it's not done yet, so for now I think I'll go with the annotated_text plugin or the 10-word window.
I have no idea whether, in the case of indexing single words, there can be a query that 'restores' the integrity of the original text (which means 1. grouping the words by an id and 2. ordering them) so that Elasticsearch can give the desired results.
I'll keep searching the docs for something interesting, or see whether I can hack something together to get what I need (like require_field_match or the intervals query).

Count documents based on Array value and inner Array value

Before I explain my use case, I'd like to state that yes, I could change this application so that it stores things differently, or even split it into two collections for that matter. But that's not my intention, and I'd rather know whether this is possible at all within MongoDB (since I am quite new to MongoDB). I can certainly work around this problem if I really need to, but I'm looking for a method to achieve what I want (no, I am not being lazy here; I really want to know a way to do this).
Let's get to the problem then.
I have a document like below:
{
  "_id" : ObjectId("XXXXXXXXXXXXXXXXXXXXX"),
  "userId" : "XXXXXXX",
  "licenses" : [
    {
      "domain" : "domain1.com",
      "addons" : [
        {"slug" : "1"},
        {"slug" : "2"}
      ]
    },
    {
      "domain" : "domain2.com",
      "addons" : [
        {"slug" : "1"}
      ]
    }
  ]
}
My goal is to check whether a specific domain has a specific addon. When I use the query below to count documents with domain domain2.com and addon slug "2", the result should be 0; however, it returns 1. I know this is because the query is evaluated document-wide and not just against the licenses element that matched domain2.com. So my question is: how do I do a sub-$and (or whatever you'd call it)?
db.test.countDocuments(
  {$and: [
    {"licenses.domain": "domain2.com"},
    {"licenses.addons.slug": "2"}
  ]}
)
Basically I am looking for something like this (the query below obviously isn't working, but it should return 0, not 1):
db.test.countDocuments(
  {$and: [
    {
      "licenses.domain": "domain2.com",
      $and: [
        { "licenses.addons.slug": "2"}
      ]
    }
  ]}
)
I know there are $group and $filter operators; I have been trying many combinations to no avail. I am lost at this point. I feel like I am completely missing the logic of Mongo here; however, I believe this must be relatively easy to accomplish with a single query (just not for me, I guess).
I have been trying to find my answer in the official documentation and via Stack Overflow/Google, but I really couldn't find any such use case.
Any help is greatly appreciated! Thanks :)
What you are describing is searching for a document whose array contains a single element that matches multiple criteria.
This is exactly what the $elemMatch operator does.
Try using this for the filter part:
{
  licenses: {
    $elemMatch: {
      domain: "domain2.com",
      "addons.slug": "2"
    }
  }
}
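For example, plugged into the countDocuments call from the question (a sketch against the sample document above; db.test is the collection name used in the question), this should count 0 for domain2.com with slug "2", and 1 for domain1.com with slug "2":
// Should return 0: the domain2.com license element has no addon with slug "2".
db.test.countDocuments({
  licenses: {
    $elemMatch: {
      domain: "domain2.com",
      "addons.slug": "2"
    }
  }
})

// Should return 1: the domain1.com license element does contain slug "2".
db.test.countDocuments({
  licenses: {
    $elemMatch: {
      domain: "domain1.com",
      "addons.slug": "2"
    }
  }
})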

How to document a mixed typed array structures in requests/responses with Spring REST Docs

Given the following exemplary JSON document, which is a list of polymorphic objects of type A and B:
[ {
  "a" : 1,
  "type" : "A"
}, {
  "b" : true,
  "type" : "B"
}, {
  "b" : false,
  "type" : "B"
}, {
  "a" : 2,
  "type" : "A"
} ]
How would I be able to select the As and the Bs in order to document them differently?
I put an example project on github: https://github.com/dibog/spring-restdocs-polymorphic-list-demo
Here is an excerpt of me trying to document the fetch method:
.andDo(document("fetch-tree",
    responseFields(
        beneathPath("[0]").withSubsectionId("typeA"),
        fieldWithPath("type")
            .type(JsonFieldType.STRING)
            .description("only node types 'A' and 'B' are supported"),
        fieldWithPath("a")
            .type(JsonFieldType.NUMBER)
            .description("specific field for node type A")
    ),
    responseFields(
        beneathPath("[1]").withSubsectionId("typeB"),
        fieldWithPath("type")
            .type(JsonFieldType.STRING)
            .description("only node types 'A' and 'B' are supported"),
        fieldWithPath("b")
            .type(JsonFieldType.BOOLEAN)
            .description("specific field for node type A")
    )))
But I get the following error message:
org.springframework.restdocs.payload.PayloadHandlingException: [0] identifies multiple sections of the payload and they do not have a common structure. The following non-optional uncommon paths were found: [[0].a, [0].b]
It looks like [0] or [1] does not work and is interpreted as [].
What would be the best way to handle this situation?
Thanks,
Dieter
It looks like [0] or [1] does not work and is interpreted as [].
That's correct. Adding support for indices is being tracked by this issue.
What would be the best way to handle this situation?
The beneathPath method that you've tried to use above returns an implementation of a strategy interface, PayloadSubsectionExtractor. You could provide your own implementation of this interface and, in the extractSubsection(byte[], MediaType) method, extract the JSON for a particular element in the array and return it as a byte[].

Storing a query in Mongo

This is the case: a webshop in which I want to configure which items should be listed in the shop based on a set of parameters.
I want this to be configurable, because that allows me to experiment with different parameters and to change their values easily.
I have a Product collection that I want to query based on multiple parameters.
A couple of them are found here, within product:
"delivery" : {
"maximum_delivery_days" : 30,
"average_delivery_days" : 10,
"source" : 1,
"filling_rate" : 85,
"stock" : 0
}
but other parameters exist as well.
An example of such a query to decide whether or not to include a product could be:
"$or" : [
{
"delivery.stock" : 1
},
{
"$or" : [
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 60
}
},
{
"delivery.filling_rate" : {
"$gt" : 90
}
}
]
},
{
"$and" : [
{
"delivery.maximum_delivery_days" : {
"$lt" : 40
}
},
{
"delivery.filling_rate" : {
"$gt" : 80
}
}
]
},
{
"$and" : [
{
"delivery.delivery_days" : {
"$lt" : 25
}
},
{
"delivery.filling_rate" : {
"$gt" : 70
}
}
]
}
]
}
]
Now, to make this configurable, I need to be able to handle boolean logic, parameters, and values.
So, since such a query is itself JSON, I got the idea to store it in Mongo and have my Java app retrieve it.
The next thing is using it in a filter (e.g. find, or whatever) and working on the corresponding selection of products.
The advantage of this approach is that I can analyse the data and the effectiveness of the query outside of my program.
I would store it by name in the database, e.g.:
{
  "name": "query1",
  "query": { the thing printed above starting with "$or"... }
}
using:
db.queries.insert({
  "name" : "query1",
  "query": { the thing printed above starting with "$or"... }
})
Which results in:
2016-03-27T14:43:37.265+0200 E QUERY Error: field names cannot start with $ [$or]
at Error (<anonymous>)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:161:19)
at DBCollection._validateForStorage (src/mongo/shell/collection.js:165:18)
at insert (src/mongo/shell/bulk_api.js:646:20)
at DBCollection.insert (src/mongo/shell/collection.js:243:18)
at (shell):1:12 at src/mongo/shell/collection.js:161
But I CAN store it using Robomongo - just not always. Obviously I am doing something wrong, but I have NO IDEA what it is.
If it fails and I create a brand new collection and try again, it succeeds. Weird stuff that goes beyond what I can comprehend.
But when I try updating values in the "query", the changes never go through. Never. Not even sometimes.
I can, however, create a new object and discard the previous one, so there is a workaround:
db.queries.update(
  {"name": "query1"},
  {"$set": {
      ... update goes here ...
    }
  }
)
doing this results in:
WriteResult({
  "nMatched" : 0,
  "nUpserted" : 0,
  "nModified" : 0,
  "writeError" : {
    "code" : 52,
    "errmsg" : "The dollar ($) prefixed field '$or' in 'action.$or' is not valid for storage."
  }
})
which seems pretty close to the other message above.
Needless to say, I am pretty clueless about what is going on here, so I hope some of the wizards here are able to shed some light on the matter.
I think the error message contains the important info you need to consider:
QUERY Error: field names cannot start with $
Since you are trying to store a query (or part of one) in a document, you'll end up with attribute names that contain Mongo operator keywords (such as $or, $ne, $gt). The MongoDB documentation actually references this exact scenario (emphasis added):
Field names cannot contain dots (i.e. .) or null characters, and they must not start with a dollar sign (i.e. $)...
I wouldn't trust third-party applications such as Robomongo in these instances. I suggest debugging/testing this issue directly in the mongo shell.
My suggestion would be to store an escaped version of the query in your document so as not to interfere with the reserved operator keywords. You can use the available JSON.stringify(my_obj); to encode your partial query into a string, and then parse/decode it when you retrieve it later on: JSON.parse(escaped_query_string_from_db).
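A minimal sketch of that stringify-on-write / parse-on-read idea in the mongo shell (the queries collection and the "query1" name come from the question; db.products is just an assumed name for your Product collection):
// Build the filter as a normal JS object in the shell or in your app.
var filter = { "$or" : [ { "delivery.stock" : 1 }, { "delivery.filling_rate" : { "$gt" : 90 } } ] };

// Store it as a plain string, so no stored field name starts with $.
db.queries.insert({ "name" : "query1", "query" : JSON.stringify(filter) });

// Later: read it back, decode it, and use it as a filter.
var stored = db.queries.findOne({ "name" : "query1" });
db.products.find(JSON.parse(stored.query));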
Your approach of storing the query as a JSON object in MongoDB is not viable.
You could potentially store your query logic and fields in MongoDB, but you have to have an external app build the query with the proper MongoDB syntax.
MongoDB queries contain operators, and some of those have special characters in them.
There are rules for MongoDB field names, and these rules do not allow such special characters.
Look here: https://docs.mongodb.org/manual/reference/limits/#Restrictions-on-Field-Names
The probable reason you can sometimes successfully create the doc using Robomongo is that Robomongo transforms your query into a string and properly escapes the special characters as it sends it to MongoDB.
This also explains why your attempts to update it never work: you meant to create a document, but instead created something that is a string object, so your update conditions are probably not matching any docs.
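One quick way to check that theory in the mongo shell (a sketch; the collection and field names are the ones from the question):
var doc = db.queries.findOne({ "name" : "query1" });
typeof doc.query   // "string" if the tool escaped/stringified it, "object" if it is a real sub-document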
I see two problems with your approach.
In the following query,
db.queries.insert({
  "name" : "query1",
  "query": { the thing printed above starting with "$or"... }
})
a valid JSON document expects key/value pairs. Here, in "query", you are storing an object without a key. You have two options: either store the query as text, or create another key inside the curly braces.
The second problem is that you are storing query values without wrapping them in quotes. All string values must be wrapped in quotes.
So your final document should appear as:
db.queries.insert({
  "name" : "query1",
  "query": 'the thing printed above starting with "$or"... '
})
Now try it; it should work.
Obviously my attempt to store a query in Mongo the way I did was foolish, as became clear from the answers from both #bigdatakid and #lix. So what I finally did was this: I altered the naming of the fields to comply with the Mongo requirements.
E.g. instead of $or I used _$or, etc., and instead of using a . inside a name I used a #. Both of these I replace in my Java code.
This way I can still easily try and test the queries outside of my program. In my Java program I just change the names back and use the query, using only two lines of code. It simply works now. Thanks guys for the suggestions you made.
String documentAsString = query.toJson().replaceAll("_\\$", "\\$").replaceAll("#", ".");
Object q = JSON.parse(documentAsString);
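For illustration, a sketch of what such an escaped document might look like when inserted from the mongo shell (the values are taken from the delivery example above; the renaming scheme is the one described: _$ for $ and # for .):
db.queries.insert({
  "name" : "query1",
  "query" : {
    "_$or" : [
      { "delivery#stock" : 1 },
      { "delivery#filling_rate" : { "_$gt" : 90 } }
    ]
  }
})
Since no field name now starts with $ or contains a dot, the insert goes through without the "not valid for storage" error, and the Java code above restores the original names before using the filter.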

mongo dot notation ambiguity

I love MongoDB, and a certain little ambiguity occurred to me; I was wondering if anyone had seen this before and possibly knows the answer :-).
In Mongo, to reach into sub-objects, you use dot notation, for example:
db.persons.find({ "address.state" : "CA" })
which is simple enough. How (if at all) does Mongo deal with the difference between
{
  "address" : { "state" : "CA" }
}
and
{
  "address.state" : "CA"
}
since dots are legal in keys, as far as I know? Additionally, I believe this would be a legal doc as well:
{
  "address" : { "state" : "A" },
  "address.state" : "B"
}
In which case, I can see this query returning either "A" or "B":
db.persons.find({}, {"address.state" : 1}) // all docs, selecting address.state in the result
A similar potential issue can arise with arrays as well, I imagine:
{"a":["test"]}
which could be accessed with:
{"a.0"}
and of course
{"a" {"0" : "test"} }
which would also be accessed with:
{"a.0"}
Thoughts? Experiences? Is the conventional wisdom simply not to do that?
A key such as "address.state" isn't legal. From the MongoDB documentation on field names:
Field names cannot contain dots (i.e. .) or null characters, and they must not start with a dollar sign (i.e. $).
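So the ambiguity never really arises: in a query, a dotted key is always interpreted as a path into sub-documents, and (at least in the classic mongo shell and in server versions that enforce the rule quoted above) you cannot store a literal dotted key in the first place. A quick sketch in the shell, using the persons collection from the question:
db.persons.insert({ "address" : { "state" : "CA" } })
db.persons.find({ "address.state" : "CA" })   // matches: the dot is treated as a path into the sub-document

db.persons.insert({ "address.state" : "CA" })
// rejected by the validation described above: field names cannot contain a dot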