How to know Mongo Collection Data Type?

In a Mongo collection, data is stored as BSON documents whose fields can have different data types. I want to check the BSON type of every field in the schema without naming each attribute explicitly,
so that the result looks like: attribute1 - int, attribute2 - string, attribute3 - bool.
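One way to get such a per-field type report, assuming MongoDB 3.4.4 or later, is the $objectToArray and $type aggregation operators. Below is a minimal sketch using the official Go driver; the database and collection names are hypothetical:

package main

import (
    "context"
    "fmt"
    "log"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    ctx := context.Background()
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        log.Fatal(err)
    }
    defer client.Disconnect(ctx)

    coll := client.Database("test").Collection("mycoll") // hypothetical names

    // $objectToArray turns each document into an array of {k, v} pairs; after
    // $unwind, $group collects the set of BSON type names seen per field name.
    pipeline := mongo.Pipeline{
        {{Key: "$project", Value: bson.D{{Key: "kv", Value: bson.D{{Key: "$objectToArray", Value: "$$ROOT"}}}}}},
        {{Key: "$unwind", Value: "$kv"}},
        {{Key: "$group", Value: bson.D{
            {Key: "_id", Value: "$kv.k"},
            {Key: "types", Value: bson.D{{Key: "$addToSet", Value: bson.D{{Key: "$type", Value: "$kv.v"}}}}},
        }}},
    }

    cur, err := coll.Aggregate(ctx, pipeline)
    if err != nil {
        log.Fatal(err)
    }
    var rows []bson.M
    if err := cur.All(ctx, &rows); err != nil {
        log.Fatal(err)
    }
    for _, r := range rows {
        fmt.Printf("%v - %v\n", r["_id"], r["types"]) // e.g. attribute1 - [int]
    }
}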

Related

OData groupby / filter on field of derived type

Let's assume I have the following data model:
[Person]
Id - number
FirstName - string
LastName - string
[Director : Person]
YearlyIncome - number
[Employee : Person]
Age - number
[File]
Id - number
CreatedAt - date
AssignedTo - Person
As we can see the File entity has a field called AssignedTo which is a Person.
Now let's say I want to count the number of Files by the FirstName of the Person. This works fine, because FirstName is a field on the Person entity. The query would look like this, for example:
files?$apply=groupby((assignedTo/firstName),aggregate($count as count))
But now what I want is to be able to group by Age, a field on the Employee entity.
However if I use the following query:
files?$apply=groupby((assignedTo/age),aggregate($count as count))
I get the following error:
Could not find a property named 'age' on type 'App.Core.Entities.Person'.
According to the OData Version 4.0 docs, it should be possible (see 4.9).
I managed to get the query below to work, to select a field of a derived type:
/files?$expand=assignedTo($select=App.Core.Entities.Employee/Age)
But I can't apply this when using groupby, here's what I've tried:
$apply=groupby((assignedTo($select=App.Core.Entities.Employee/Age)),aggregate($count as count))
$apply=groupby((assignedTo/App.Core.Entities.Employee/Age)),aggregate($count as count))
I guess the error makes sense, but I'm wondering what's the best way to solve this.

How to write nested bson.M{} correctly

Suppose we have the following structs:
type shop struct {
    ID     primitive.ObjectID `json:"_id,omitempty" bson:"_id,omitempty"`
    Brands *brand             `json:"brand,omitempty" bson:"brand,omitempty"`
}

type brand struct {
    ID primitive.ObjectID `json:"_id,omitempty" bson:"deploymentid,omitempty"`
}
I tried to find a document using FindOne(), but I don't get any document, even though there is a matching result when I run the equivalent query in the MongoDB shell.
filter := bson.M{"brand": bson.M{"_id": someValue}}
var shopWithBrand shop
mycollection.FindOne(ctx, filter).Decode(&shopWithBrand)
What mistake did I make?
This filter:
filter := bson.M{"brand" : bson.M{"_id" : someValue}}
Tells MongoDB that you want documents whose brand field is an embedded document consisting of a single _id field whose value equals someValue.
This would actually work if your embedded documents consisted of nothing but that single _id field, but your embedded brand maps its ID field to deploymentid and most likely has other fields as well (which you "stripped off" to minimize the example), and that is why it won't match.
Instead, you want documents whose brand field is a document that has a matching deploymentid field, possibly among other fields. This is how you can express that:
filter := bson.M{"brand.deploymentid" : someValue}

Golang revel+mgo - no data returned when struct fields have lowercase names

This is my struct type
type Category struct {
    Name string `bson:"listName"`
    Slug string `bson:"slug"`
}
used with the following function to return all results from a mongo collection -
func GetCategories(s *mgo.Session) []Category {
    var results []Category
    Collection(s).Find(bson.M{}).All(&results)
    return results
}
The problem is that the field names in my db start with a lowercase letter, but the Golang struct returns empty results when I try to use field names starting with a lowercase letter. For example, this returns JSON with the corresponding fields empty -
type Category struct {
    listName string `bson:"listName"`
    slug     string `bson:"slug"`
}
I'm actually porting a Meteor-based API to Golang, and a lot of products currently using the API rely on those field names exactly as they are in the db!
Is there a workaround?
You need to make your fields visible to mgo's bson unmarshalling by exporting them, i.e. naming them with a starting capital letter. You also need to map them to the appropriate json/bson field names:
type Category struct {
    ListName string `json:"listName" bson:"listName"`
    Slug     string `json:"slug" bson:"slug"`
}
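As a quick sanity check (hypothetical values; assumes encoding/json and fmt are imported), the json tags preserve the lowercase field names that the API's consumers rely on:

c := Category{ListName: "Fiction", Slug: "fiction"}
out, _ := json.Marshal(c)
fmt.Println(string(out)) // prints {"listName":"Fiction","slug":"fiction"}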

How to query a JSON element

Let's say I have a Postgres database (9.3) and there is a table called Resources. In the Resources table I have the fields id which is an int and data which is a JSON type.
Let's say I have the following records in said table.
1, {'firstname':'Dave', 'lastname':'Gallant'}
2, {'firstname':'John', 'lastname':'Doe'}
What I want to do is write a query that would return all the records in which the data column has a json element with the lastname equal to "Doe"
I tried to write something like this:
records = db_session.query(Resource).filter(Resources.data->>'lastname' == "Doe").all()
PyCharm, however, is giving me an error on the "->>" operator.
Does anyone know how I would write the filter clause to do what I need?
Try using astext
records = db_session.query(Resource).filter(
    Resources.data["lastname"].astext == "Doe"
).all()
Please note that the column MUST be of type JSONB. A regular JSON column will not work.
Also you could explicitly cast string to JSON (see Postgres JSON type doc).
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.sql.expression import cast

db_session.query(Resource).filter(
    Resources.data["lastname"] == cast("Doe", JSON)
).all()
If you are using JSON type (not JSONB) the following worked for me:
Note the quoted '"object"': the JSON value is compared as its literal text, which includes the quotes.

query = db.session.query(ProductSchema).filter(
    cast(ProductSchema.ProductJSON["type"], db.String) != '"object"'
)
I have some GeoJSON in a JSON (not JSONB) type column and none of the existing solutions worked, but as it turns out, in version 1.3.11 some new data casters were added, so now you can:
records = db_session.query(Resource).filter(
    Resources.data["lastname"].as_string() == "Doe"
).all()
Reference: https://docs.sqlalchemy.org/en/14/core/type_basics.html#sqlalchemy.types.JSON
Casting JSON Elements to Other Types
Index operations, i.e. those invoked by calling upon the expression using the Python bracket operator as in some_column['some key'], return an expression object whose type defaults to JSON by default, so that further JSON-oriented instructions may be called upon the result type. However, it is likely more common that an index operation is expected to return a specific scalar element, such as a string or integer. In order to provide access to these elements in a backend-agnostic way, a series of data casters are provided:
Comparator.as_string() - return the element as a string
Comparator.as_boolean() - return the element as a boolean
Comparator.as_float() - return the element as a float
Comparator.as_integer() - return the element as an integer
These data casters are implemented by supporting dialects in order to assure that comparisons to the above types will work as expected, such as:
# integer comparison
data_table.c.data["some_integer_key"].as_integer() == 5
# boolean comparison
data_table.c.data["some_boolean"].as_boolean() == True
According to sqlalchemy.types.JSON, you can do it like this:
from sqlalchemy import JSON
from sqlalchemy import cast

records = db_session.query(Resource).filter(
    Resources.data["lastname"] == cast("Doe", JSON)
).all()
According to this, pre version 1.3.11, the most robust way should be like this, as it works for multiple database types, e.g. SQLite, MySQL, Postgres:
from sqlalchemy import cast, JSON, type_coerce, String

db_session.query(Resource).filter(
    cast(Resources.data["lastname"], String) == type_coerce("Doe", JSON)
).all()
From version 1.3.11 onward, the type-specific casters are the newer and neater way to handle this:
db_session.query(Resource).filter(
    Resources.data["lastname"].as_string() == "Doe"
).all()

Golang/mgo: Cannot retrieve value of int field from MongoDB document

I am querying a collection that includes an integer value among its values, and loading the resulting documents into this struct:
type Subscription struct {
    Id             bson.ObjectId "_id,omitempty"
    Listen         string
    Job            string
    TimeoutSeconds int
    Data           string
}
var subscription Subscription
subscriptions := subscriptionsCol.Find(bson.M{"listen": "example_channel"}).Iter()
for subscriptions.Next(&subscription) {
    log("Pending job: %s?%s (timeout: %d)\n",
        subscription.Job,
        subscription.Data,
        subscription.TimeoutSeconds)
}
This is what phpMoAdmin shows me:
[_id] => MongoId Object (
[$id] => 502ed8d84eaead30a1351ea7
)
[job] => partus_test_job_a
[TimeoutSeconds] => 30
[listen] => partus.test
[data] => a=1&b=9
It puzzles me that subscription.TimeoutSeconds always contains 0, when I'm positive I have 30 in the document I inserted into the collection.
All other values are retrieved OK.
What's wrong with the int type?
Have you tried setting the "key" value for that field?
From the bson Unmarshal documentation:
The lowercased field name is used as the key for each exported field, but this behavior may be changed using the respective field tag.
type Subscription struct {
    Id             bson.ObjectId "_id,omitempty"
    Listen         string
    Job            string
    TimeoutSeconds int "TimeoutSeconds"
    Data           string
}
The other fields are working fine because their lowercased names match the field names in your Mongo collection, whereas TimeoutSeconds is in TitleCase. What is happening is that the int field is left at its zero value, since Unmarshal can't map any key to it.
When unmarshalling data, multiple tag keys and flags are supported.
Below are some examples:
type T struct {
    A bool
    B int    "myb"
    C string "myc,omitempty"
    D string `bson:",omitempty" json:"jsonkey"`
    E int64  ",minsize"
    F int64  "myf,omitempty,minsize"
}
The general spec for one key-value pair during marshalling is:
"[<key>][,<flag1>[,<flag2>]]"
`(...) bson:"[<key>][,<flag1>[,<flag2>]]" (...)`
Go provides support for particular tag keywords like bson (for Mongo keys) and json (for setting the JSON key in a response).
Check the bson Marshal Go reference for more information.
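As a quick, hypothetical illustration of those flags using gopkg.in/mgo.v2/bson: marshalling a T and decoding it back into a map shows which keys get renamed and which get dropped.

t := T{A: true, B: 7, C: "", D: "hello", E: 1, F: 0}
data, err := bson.Marshal(t)
if err != nil {
    log.Fatal(err)
}
var m bson.M
if err := bson.Unmarshal(data, &m); err != nil {
    log.Fatal(err)
}
fmt.Println(m) // map[a:true d:hello e:1 myb:7] (myc and myf are dropped by omitempty)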
Similarly, some frameworks provide further options for defining keys before parsing. For example, for SQL, jinzhu's GORM library on GitHub supports setting default values, mapping column ids, etc. Anyone can use this feature and provide customized support.