Go MongoDB driver decoding JSON array weirdly

Heyho party people,
I recently took up learning Go and started working on a small side project that includes an API written with the Go Fiber library.
All the necessary data is stored in MongoDB with the following schema:
{
  "ObjectId": {
    "name": "name",
    "url": "url",
    "dateAdded": "date",
    "data": [
      { "timestamp 1": "price 1" },
      { "timestamp 2": "price 2" }
    ]
  }
}
The item type looks like this:
type Item struct {
    ID        primitive.ObjectID `json:"_id" bson:"_id"`
    Name      string             `json:"name" bson:"name"`
    URL       string             `json:"url" bson:"url"`
    DateAdded string             `json:"dateAdded" bson:"dateAdded"`
    Data      []interface{}      `json:"data" bson:"data"`
}
Whenever I query a stored item with
err = collection.FindOne(context.TODO(), filter).Decode(&item)
each map inside the data array is wrapped in another array =>
{ test url 2021-04-16 [[{2021-04-16 99.99}] [{2021-04-17 109.99}]] }
instead of
{ test url 2021-04-16 [{2021-04-16 99.99}, {2021-04-17 109.99}] }
Does anybody have an idea on how to fix this?
Thanks in advance!

OK, I've found a way to fix this behaviour for my use case.
As mentioned above, the MongoDB driver for Go wraps each entry of the array in another array, which leads to a nested array.
After experimenting for some time, I found that if you insert the document into your collection like in the following example,
db.collection.insertOne({ name: "Name", url: "URL", dateAdded: "2021-04-25", data: { "2021-04-25": 9.99, "2021-04-26": 19.99 } })
then the result of a query performed in your program looks like this:
{ ObjectID("60858245f8805dc57a716500") Name URL 2021-04-25 [{ 2021-04-25 9.99 } { 2021-04-26 19.99 }] }
This means that the JSON schema should look like this:
{
  "ObjectId": {
    "name": "name",
    "url": "url",
    "dateAdded": "date",
    "data": {
      "2021-04-25": 9.99,
      "2021-04-26": 19.99
    }
  }
}
Sadly, I was not able to find out what actually causes this odd behaviour, but I hope this helps anybody who encounters this (or a similar) problem.
EDIT
Changing the type of the Data field to []bson.M, as mkopriva said in the comments, fixed it.
type Item struct {
    ID        primitive.ObjectID `json:"_id" bson:"_id"`
    Name      string             `json:"name" bson:"name"`
    URL       string             `json:"url" bson:"url"`
    DateAdded string             `json:"dateAdded" bson:"dateAdded"`
    Data      []bson.M           `json:"data" bson:"data"`
}
This way the original JSON-schema does not have to be adapted to the workaround.
{
  "ObjectId": {
    "name": "name",
    "url": "url",
    "dateAdded": "date",
    "data": [
      { "2021-04-25": 9.99 },
      { "2021-04-26": 19.99 }
    ]
  }
}

Related

Execute Logical Operator Filters On GraphQL Only On JSON Objects

Thank you for your help. I am trying to execute AND/OR operators in GraphQL without a database.
Below is the query that needs to execute on the dataset, not a database. Please understand that I don't have the authority to connect to any database.
{
  customVisualsData(_filter: {and: [{expression: {field: "Country", like: "Canada"}}, {expression: {field: "Profit", gt: "5000"}}]}) {
    Country
    DiscountBand
    Product
    Segment
    Profit
  }
}
The dataset (JSON object) to query looks like this:
[
  {
    "Country": "Canada",
    "DiscountBand": "High",
    "Product": "Paseo",
    "Segment": "Government",
    "COGS": 1477815,
    "GrossSales": 2029256,
    "ManufacturingPrice": 70,
    "UnitsSold": 12230.5,
    "Profit": 300289.99999999994
  },
  {
    "Country": "United States of America",
    "DiscountBand": "High",
    "Product": "VTT",
    "Segment": "Small Business",
    "COGS": 1461250,
    "GrossSales": 1753500,
    "ManufacturingPrice": 750,
    "UnitsSold": 5845,
    "Profit": 74288
  }
]
This is the schema builder I used to create the GraphQL query:
var schema = buildSchema(`
  type customVisualObject {
    Country: String
    DiscountBand: String
    Product: String
    Segment: String
    COGS: Float
    GrossSales: Float
    ManufacturingPrice: Int
    UnitsSold: Float
    Profit: Float
  }

  type Query {
    customVisualsData(_filter: FilterInput): [customVisualObject]
  }

  input FilterExpressionInput {
    field: String!
    eq: String
    gt: String
    gte: String
    like: String
  }

  input FilterInput {
    expression: FilterExpressionInput
    and: [FilterInput!]
    or: [FilterInput]
    not: [FilterInput!]
  }
`);
Please let me know if anyone knows how to set up a resolver for this in GraphQL.
Does anyone know a JSONata or GraphQL library that can execute such a complex query on a JSON object, not a database?
I appreciate your help.

How to get rid of an additional key added while inserting a nested struct in MongoDB

Suppose this is my struct definition,
type partialContent struct {
    key   string `json:"key" bson"key"`
    value string `json:"value" bson:"value"`
}
type content struct {
    id string `json:"id" bson:"_id,omitempty"`
    partialContent
}
While storing the content in MongoDB, it gets stored as
{
  "_id": ObjectID,
  "partialcontent": {
    "key": "...",
    "value": "..."
  }
}
But the JSON unmarshal returns
{
  "_id": ObjectID,
  "key": "...",
  "value": "..."
}
How do I get rid of the additional key partialcontent in MongoDB?
First, you need to export the struct fields, else the drivers will skip them.
If you don't want an embedded document in MongoDB, use the ,inline bson tag option:
type PartialContent struct {
    Key   string `json:"key" bson:"key"`
    Value string `json:"value" bson:"value"`
}
type Content struct {
    ID             string `json:"id" bson:"_id,omitempty"`
    PartialContent `bson:",inline"`
}
Inserting this value:
v := Content{
    ID: "abc",
    PartialContent: PartialContent{
        Key:   "k1",
        Value: "v1",
    },
}
Will result in this document in MongoDB:
{ "_id" : "abc", "key" : "k1", "value" : "v1" }

RemoteTransportException, Fielddata is disabled on text fields when doing aggregation on text field

I am migrating from 2.x to 5.x.
I am adding values to the index like this:
indexInto(indexName / indexType) id someKey source foo
However, I would also like to fetch all values by field:
def getValues(tag: String) = {
  client execute {
    search(indexName / indexType) query ("_field_names", tag) aggregations (termsAggregation("agg") field tag size 1)
  }
}
But I am getting this exception:
RemoteTransportException[[8vWOLB2][172.17.0.5:9300][indices:data/read/search[phase/query]]];
nested: IllegalArgumentException[Fielddata is disabled on text fields
by default. Set fielddata=true on [my_tag] in order to load fielddata
in memory by uninverting the inverted index. Note that this can
however use significant memory.];
I thought maybe to use keyword as shown here, but the fields are not known in advance (they are sent by the user), so I cannot use predefined mappings.
By default, all unknown fields that are not specified in the mappings will be indexed/added to Elasticsearch as text fields.
If you take a look at the mappings of such a field, you can see that a sub-field named keyword is enabled for it; these keyword fields are indexed but not analyzed.
GET new_index2/_mappings
{
  "new_index2": {
    "mappings": {
      "type": {
        "properties": {
          "name": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      }
    }
  }
}
So you can use the keyword sub-field of the text fields for aggregations, like the following:
POST new_index2/_search
{
  "aggs": {
    "NAME": {
      "terms": {
        "field": "name.keyword",
        "size": 10
      }
    }
  }
}
Note the name.keyword path.
So your Scala query can work if you shift to the keyword sub-field:
def getValues(tag: String) = {
  client.execute {
    search(indexName / indexType)
      .query("_field_names", tag)
      .aggregations {
        termsAgg("agg", tag + ".keyword")
      }.size(1)
  }
}
Hope this helps.
Thanks

Sorting by document values in Couchbase and Scala

I am using Couchbase and I have a document (product) that looks like:
{
  "id": "5fe281c3-81b6-4eb5-96a1-331ff3b37c2c",
  "defaultName": "default name",
  "defaultDescription": "default description",
  "references": {
    "configuratorId": "1",
    "seekId": "1",
    "hsId": "1",
    "fpId": "1"
  },
  "tenantProducts": {
    "2": {
      "adminRank": 1,
      "systemRank": 15,
      "categories": [
        "3"
      ]
    }
  },
  "docType": "product"
}
I wish to get all products (this JSON is a product) that belong to a certain category, so I've created the following view:
function (doc, meta) {
  if (doc.docType == "product") {
    for (var tenant in doc.tenantProducts) {
      var categories = doc.tenantProducts[tenant].categories;
      // emit(categories, doc);
      for (i = 0; i < categories.length; i++) {
        emit([tenant, categories[i]], doc);
      }
    }
  }
}
So I can run the view with keys like:
[["tenantId", "Category1"]] // Can also have: [["tenant1", "Category1"], ["tenant1", "Category2"]]
My problem is that I receive the document, but I wish to sort the documents by their admin rank and system rank, two fields that exist in the value.
I understand that the only solution would be to add those fields to my key, so that my key becomes:
[["tenantId", "Category1", "systemRank", "adminRank"]]
and after I get the documents, I need to sort by the 3rd and 4th components of the key?
I just want to make sure I understand this right.
Thanks

Using MERGE with properties via REST

According to the sample code at http://docs.neo4j.org/chunked/2.0.0-M03/rest-api-transactional.html I'm trying to use the MERGE statement.
But when I apply the following statement:
{
  "statements": [
    {
      "statement": "MERGE (p:PERSON { identification }) ON CREATE p SET { properties } ON MATCH p SET { properties } RETURN p",
      "parameters": {
        "identification": {
          "guid": "abc123xyz"
        },
        "properties": {
          "lastName": "Doe",
          "firstName": "John"
        }
      }
    }
  ]
}
it comes back with the following two errors:
{ identification }
code: 42000
status: STATEMENT_EXECUTION_ERROR
message: Tried to set a property to a collection of mixed types. List(Map(guid -> abc123xyz))

SET { properties }
code: 42001
status: STATEMENT_SYNTAX_ERROR
message: `=' expected but `O' found\n\nThink we should have …
Can this not be done this way (yet) or am I missing something?
Thanks for your help
Daniel
It seems you've discovered a bug. I've reported the issue here:
https://github.com/neo4j/neo4j/issues/975
The issue is that MERGE needs to know the keys you will search on in advance. Passing a map of parameters hides this.
To achieve the same, list each key explicitly. If you still want to pass them all in a single map, you can probably do something like: MERGE (p:Person {name: {merge_map}.name, email: {merge_map}.email}).
Daniel,
I think you have to use SET differently, something like this:
MERGE (p:PERSON { identification })
ON CREATE p SET p={ properties }
ON MATCH p SET p={ properties }
RETURN p
But I'm not sure if that SET overrides all your properties. So it might be that you have to specify them one by one.
{
  "statements": [
    {
      "statement": "MERGE (p:PERSON { guid: {guid} })
        ON CREATE p SET p.lastName = {lastName}, p.firstName = {firstName}
        ON MATCH p SET p.lastName = {lastName}, p.firstName = {firstName}
        RETURN p",
      "parameters": {
        "guid": "abc123xyz",
        "lastName": "Doe",
        "firstName": "John"
      }
    }
  ]
}