This is my schema:
db.createCollection("user_clicks", {
validator: {
$jsonSchema: {
bsonType: "object",
required: [ "session_id", "country", "browser", "url", "date"],
properties: {
session_id: {
bsonType: "string",
description: "must be a string and is required" },
country: {
bsonType: "string",
description: "country name and is required"},
browser: {
bsonType: "string",
description: "browser name and is required"},
url : {
bsonType: "string",
description: "user click url and is required"},
date: {
bsonType: "date",
description: "localdatetime and is required"}}}})
This is the code I am using to generate data:
mgeneratejs `{
"session_id": "$oid",
"country": "$country",
"browser": {
"$choose": {
"from": [
"Firefox",
"Chrome",
"Safari",
"Explorer"
],
"weights": [
1,
2,
2,
1
]
}
},
"url": {
"$choose": {
"from": [
"google.com/images",
"facebook.com/profile1538713",
"soundcloud.com/playlist03",
"some-url.com/home",
"sinoptik.ua/kyiv"
],
"weights": [
1,
2,
2,
1,
3
]
}
},
"date": {
"$date": {
"min": "2016-08-01T23:59:59.999Z",
"max": "2016-10-01T23:59:59.999Z"
}
}
}` -n 5 | mongoimport --uri="mongodb://localhost:27017/events" --collection user_clicks --mode=insert
I'm trying to generate random dates using mgeneratejs and mongoimport. The problem is that I can't insert any date like "3/13/2019" or "2019-03-26T23:44:26Z" (which is what I actually need). The error is:
WriteResult({
  "nInserted" : 0,
  "writeError" : {
    "code" : 121,
    "errmsg" : "Document failed validation"
  }
})
If I insert the date as new Date("2019-03-26T23:44:26Z"), it works! Please help me automate wrapping every inserted date in new Date(date), or suggest another way to fix this!
I am confused about what your issue is; everything is working as it should. You store dates in Mongo as Date objects (values whose BSON type is date).
If you want to store a date like 3/13/2019 in Mongo, you can do something like this:
// javascript
let dateToInsert = new Date("3/13/2019");
Use the below before inserting the random date and it should work:
new Date(randomDate);
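If you want to automate that conversion instead of relying on mongoimport, one option is to pipe the mgeneratejs output through a small Node.js script that wraps the date and inserts via the driver. This is just a sketch, not the only approach; the script name is made up, and it assumes the generated "date" field arrives as an ISO string, as in the question:
// convert-and-insert.js (hypothetical helper): reads mgeneratejs output
// from stdin, converts "date" from a string to a Date, and inserts each doc.
const readline = require("readline");
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const collection = client.db("events").collection("user_clicks");
  const rl = readline.createInterface({ input: process.stdin });
  for await (const line of rl) {
    const doc = JSON.parse(line);
    doc.date = new Date(doc.date); // the key step: string -> BSON date
    await collection.insertOne(doc);
  }
  await client.close();
}

main().catch(console.error);
Invoke it as: mgeneratejs '...' -n 5 | node convert-and-insert.js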
I am trying to secure my Firebase Realtime Database. I have the following database structure:
{
  "chats": {
    "-NMhLlfSU-HYmjmXBzmH": {
      "lastMessage": "",
      "lastSender": "",
      "seen": true,
      "timestamp": 1674724449157
    }
  },
  "members": {
    "-NMhLlfSU-HYmjmXBzmH": {
      "63cc6d925b51cb7a423393cc": true,
      "63d240635b51cb7a423397d5": true
    }
  },
  "users": {
    "63cc6d925b51cb7a423393cc": {
      "city": "Ituzaingó, Buenos Aires Province, Argentina",
      "contacts": {
        "63d240635b51cb7a423397d5": true
      },
      "name": "Joaquin varela",
      "picture": "https://cdn.pixabay.com/photo/2015/10/05/22/37/blank-profile-picture-973460_1280.png"
    },
    "63d240635b51cb7a423397d5": {
      "city": "Madrid",
      "contacts": {
        "63cc6d925b51cb7a423393cc": true
      },
      "name": "Test Test",
      "picture": "https://cdn.pixabay.com/photo/2015/10/05/22/37/blank-profile-picture-973460_1280.png"
    }
  }
}
I am trying to implement security rules for it. The only problem is that my auth.uid is not the same as my user_id.
Is there any way to secure my database? Maybe by passing some user_id argument, but I don't know how.
I hope you can help me. Thanks in advance!
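One possible direction (a sketch under stated assumptions, not a confirmed solution): keep a mapping from each Firebase auth.uid to your application user_id, for example under a hypothetical /uidMap node, and have the rules resolve the caller's user_id through it before checking chat membership:
{
  "rules": {
    // Hypothetical node mapping Firebase auth UIDs to application user ids.
    "uidMap": {
      "$uid": {
        ".read": "$uid === auth.uid",
        ".write": "$uid === auth.uid"
      }
    },
    "chats": {
      "$chatId": {
        // A user may read/write a chat only if their mapped user_id
        // is listed under /members/$chatId.
        ".read": "root.child('members').child($chatId).child(root.child('uidMap').child(auth.uid).val()).val() === true",
        ".write": "root.child('members').child($chatId).child(root.child('uidMap').child(auth.uid).val()).val() === true"
      }
    }
  }
}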
The problem I am facing is that I want to develop an autocomplete search bar using the MEAN stack, like the one on this site, but when I type, for example, 'ag', it does not return the right location, which should be 'Aguascalientes'.
I have two different search indexes set up and a different query for each.
First Index:
{
"mappings": {
"dynamic": false,
"fields": {
"name": {
"foldDiacritics": false,
"maxGrams": 7,
"minGrams": 3,
"tokenization": "edgeGram",
"type": "autocomplete"
},
"searchName": {
"foldDiacritics": false,
"maxGrams": 7,
"minGrams": 3,
"tokenization": "edgeGram",
"type": "autocomplete"
}
}
}
}
First Query:
[
{
$search: {
index: "autocomplete2",
compound: {
must: [
{
text: {
query: search,
path: "searchName",
fuzzy: {
maxEdits: 2,
},
},
},
],
},
},
},
{
$limit: 10,
},
]
The first setup is not returning any documents at all, but the second one is:
{
"mappings": {
"dynamic": false,
"fields": {
"name": {
"analyzer": "lucene.standard",
"type": "string"
},
"searchName": {
"analyzer": "lucene.standard",
"type": "string"
}
}
}
}
Query:
[
{
$search: {
index: 'default',
compound: {
must: [
{
text: {
query: search,
path: 'name',
fuzzy: {
maxEdits: 1,
},
},
},
{
text: {
query: search,
path: 'searchName',
fuzzy: {
maxEdits: 1,
},
},
},
],
},
},
},
{
$limit: 5,
},
]
The second example only returns documents once the search term is as long as 'aguascalient'; it returns nothing for shorter terms, unlike the site I am imitating. Maybe it has something to do with the fuzzy edits, but if I set maxEdits to greater than 2 I get an error.
Also, the ordering is wrong: it returns the CITY first and the STATE second, but I need the STATE first because the search term is more similar to it. Let me explain: the search field for the STATE is just 'Aguascalientes', while the search field for the city is 'Aguascalientes Aguascalientes', so I don't know why it is not ranking properly. Maybe I should weight the fields accordingly, but I'm not sure that is the right approach.
My data structure:
{
"_id": "638d0ffc34ad076c6bd12cb6",
"depth": 2,
"label": "CITY",
"location_id": "V1-C-247",
"name": "Aguascalientes",
"parent": "Aguascalientes",
"fullName": "Aguascalientes, Aguascalientes",
"parentId": "V1-B-61",
"searchName": "Aguascalientes Aguascalientes",
}
{
"_id": "638d0ffc34ad076c6bd12cb6",
"depth": 1,
"label": "STATE",
"location_id": "V1-C-248",
"name": "Aguascalientes",
"parent": null,
"fullName": "Aguascalientes",
"parentId": null,
"searchName": "Aguascalientes",
}
For the first index + query setup:
First, you are indexing the name field but are not searching on it. I will remove it from the code snippets for readability, but you can add it back to your index definition if you find you need to search on it.
There are two problems with this index + query setup if you want to return results for the query "ag". You have searchName defined as a field mapping of type autocomplete, but you also need to use the autocomplete operator in your query:
[
{
$search: {
index: "autocomplete2",
compound: {
must: [
{
autocomplete: {
query: search,
path: "searchName",
},
},
],
},
},
},
{
$limit: 10,
},
]
Second, in your index definition's field mapping for searchName, you have minGrams set to 3 and maxGrams set to 7. Per the documentation for the autocomplete field mapping, this means your data is tokenized into character sequences between 3 and 7 characters long, using the selected tokenization strategy. Since you selected edgeGram, the text "Aguascalientes" is tokenized from the left edge, producing the tokens "agu", "agua", "aguas", "aguasc", and "aguasca". The search term "ag" does not match any of these tokens, so nothing is returned. You must change minGrams to 2 to get the token "ag":
{
"mappings": {
"dynamic": false,
"fields": {
"searchName": {
"foldDiacritics": false,
"maxGrams": 7,
"minGrams": 2,
"tokenization": "edgeGram",
"type": "autocomplete"
}
}
}
}
Finally, if you want a document with an exact match to rank above a partial match, i.e. "Aguascalientes" should come back before "Aguascalientes Aguascalientes", you need to implement exact matching. There is a MongoDB blog post outlining a few options.
One option that I tried: in the index, use a keyword analyzer on the searchName field typed as the string data type. In the query, use the text operator nested in a should clause so that exact matches score higher than other results.
Index:
{
"mappings": {
"dynamic": false,
"fields": {
"searchName": [
{
"foldDiacritics": false,
"maxGrams": 7,
"type": "autocomplete"
},
{
"analyzer": "lucene.keyword",
"searchAnalyzer": "lucene.keyword",
"type": "string"
}
]
}
}
}
Query:
[
{
$search: {
compound: {
must: [
{
autocomplete: {
query: search,
path: "searchName"
}
}
],
should:[
{
text: {
query: search,
path: "searchName"
}
}
],
},
},
},
]
I am using MongoDB Compass and I am trying to insert the data below, but it is giving an error. Any help appreciated.
/**
* Paste one or more documents here
*/
{
"_id": {
"$oid": "621f567ceff392db081a4135"
},
"CompanyID": "620d2d9efc8cec9c94f26284",
"GeoLevelName": "All India",
"IsActive": 1,
"CreatedUser": "string",
"CreateDate": "2022-02-28T14:27:05.757Z",
"LastModifyDate": "2022-02-28T14:27:05.757Z",
"LastModifyUser": "string"
"GeoLevelMain": [{
"GeoLevelID": "621cdce8b876f1ec17b1cec9",
"GeoLevelValue": "Maharastra"
},{
"GeoLevelID": "621cdce8b876f1ec17b1cec9",
"GeoLevelValue": "Maharastra"
}],
"GeographyID": "621cde14b876f1ec17b1cece",
"DBID": "620f658d6dee6848caf53832",
"Division": {
"DivisionID": "6215d68d9e4786b2f7ab80a0",
"DivisionName": "DivisionName"
}
}
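As pasted, the document is not valid JSON, which is the most likely reason Compass rejects it: a comma is missing after the "LastModifyUser" line. That span should read:
"LastModifyDate": "2022-02-28T14:27:05.757Z",
"LastModifyUser": "string",
"GeoLevelMain": [{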
We are having difficulty adding header_text and description_text to a Service Alerts protobuf file. We are attempting to match the example shown on this page:
https://developers.google.com/transit/gtfs-realtime/examples/alerts
Our data starts in the following dictionary:
alerts_dict = {
"header": {
"gtfs_realtime_version": "1",
"timestamp": "1543318671",
"incrementality": "FULL_DATASET"
},
"entity": [{
"497": {
"active_period": [{
"start": 1525320000,
"end": 1546315200
}],
"url": "http://www.capmetro.org/planner",
"effect": 4,
"header_text": "South 183: Airport",
"informed_entity": [{
"route_type": "3",
"route_id": "17",
"trip": "",
"stop_id": "3304"
}, {
"route_type": "3",
"route_id": "350",
"trip": "",
"stop_id": "3304"
}],
"description_text": "Stop closed temporarily",
"cause": 2
},
"460": {
"active_period": [{
"start": 1519876800,
"end": 1546315200
}],
"url": "http://www.capmetro.org/planner",
"effect": 4,
"header_text": "Ave F / Duval Detour",
"informed_entity": [{
"route_type": "3",
"route_id": "7",
"trip": "",
"stop_id": "1167"
}, {
"route_type": "3",
"route_id": "7",
"trip": "",
"stop_id": "1268"
}],
"description_text": "Stop closed temporarily",
"cause": 2
}
}]
}
Our Python code is as follows:
newfeed = gtfs_realtime_pb2.FeedMessage()
newfeedheader = newfeed.header
newfeedheader.gtfs_realtime_version = '2.0'
for alert_id, alert_dict in alerts_dict["entity"][0].iteritems():
    print(alert_id)
    print(alert_dict)
    newentity = newfeed.entity.add()
    newalert = newentity.alert
    newentity.id = str(alert_id)
    newtimerange = newalert.active_period.add()
    newtimerange.end = alert_dict['active_period'][0]['end']
    newtimerange.start = alert_dict['active_period'][0]['start']
    for informed in alert_dict['informed_entity']:
        newentityselector = newalert.informed_entity.add()
        newentityselector.route_id = informed['route_id']
        newentityselector.route_type = int(informed['route_type'])
        newentityselector.stop_id = informed['stop_id']
    print(alert_dict['description_text'])
    newdescription = newalert.header_text
    newdescription = alert_dict['description_text']
    newalert.cause = alert_dict['cause']
    newalert.effect = alert_dict['effect']
pb_feed = newfeed.SerializeToString()
with open("servicealerts.pb", 'wb') as fout:
    fout.write(pb_feed)
The frustrating part is that we don't receive any sort of error message. Everything appears to run properly but the resulting pb file doesn't contain the new header_text or description_text items.
We are able to read the pb file using the following code:
feed = gtfs_realtime_pb2.FeedMessage()
response = open("servicealerts.pb")
feed.ParseFromString(response.read())
print(feed)
We truly appreciate any help that anyone can offer in pointing us in the right direction of figuring this out.
I was able to find the answer. This Python Notebook showed that, by properly formatting the dictionary, the PB could be generated with a few lines of code.
from google.transit import gtfs_realtime_pb2
from google.protobuf.json_format import ParseDict

newfeed = gtfs_realtime_pb2.FeedMessage()
ParseDict(alerts_dict, newfeed)
pb_feed = newfeed.SerializeToString()
with open("servicealerts.pb", 'wb') as fout:
    fout.write(pb_feed)
All I had to do was format my dictionary properly.
if ALERT_GROUP_ID not in entity_dict.keys():
entity_dict[ALERT_GROUP_ID] = {"id": ALERT_GROUP_ID,
"alert":{
"active_period": [{
"start": int(START_TIME),
"end": int(END_TIME)
}],
"cause": cause_dict.get(CAUSE, ""),
"effect": effect_dict.get(EFFECT),
"url": {
"translation": [{
"text": URL,
"language": "en"
}]
},
"header_text": {
"translation": [{
"text": HEADER_TEXT,
"language": "en"
}]
},
"informed_entity": [{
'route_id': ROUTE_ID,
'route_type': ROUTE_TYPE,
'trip': TRIP,
'stop_id': STOP_ID
}],
"description_text": {
"translation": [{
"text": "Stop closed temporarily",
"language": "en"
}]
},
},
}
# print(entity_dict[ALERT_GROUP_ID]["alert"]['informed_entity'])
else:
entity_dict[ALERT_GROUP_ID]["alert"]['informed_entity'].append({
'route_id': ROUTE_ID,
'route_type': ROUTE_TYPE,
'trip': TRIP,
'stop_id': STOP_ID
})
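For completeness, here is why the original loop silently dropped the text fields: newdescription = newalert.header_text merely binds a local Python name, and rebinding that name to a string never touches the message. In GTFS-realtime, header_text and description_text are TranslatedString messages (as the dictionary above shows), so the direct-assignment alternative to ParseDict is to populate a nested translation entry, roughly:
# Sketch: set header_text/description_text via direct field access,
# instead of rebinding a local variable (which leaves the message untouched).
translation = newalert.header_text.translation.add()
translation.text = alert_dict['header_text']
translation.language = 'en'

translation = newalert.description_text.translation.add()
translation.text = alert_dict['description_text']
translation.language = 'en'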
This is my JSON:
{
"title": "This an item",
"date":1000123123,
"data": [
{
"type": "html",
"content": "<h1>Hi there, this is a H1</h1>"
},
{
"type":"img",
"content": [
{
"title": "Image 1",
"url": "www.google.com/1.jpg",
"description":"This is the first image"
}
]
},
{
"type": "map",
"content": [
{
"lat":323434555,
"lng":4444343434,
"description":"this is just a place"
}
]
}
]
}
As you can see, the "data" fiel stores an array of objects where the "content" field is variable.
How should I model that in Mongoose?
This is how I defined my schema:
module.exports = mongoose.model('TestObject', new Schema({
title: String,
date: Date,
data: [
{
type: String,
content: Object
}
]
}));
And this is the response for the "data" field:
"data": [
{
"type":"img",
"content": [ "[object Object]" ]
},
{
"type":"map",
"content": [ "[object Object]" ]
}
]
What is the correct way to define a varying datatype for an object in Mongoose?
Maybe the Mixed type could meet your requirement:
An "anything goes" SchemaType, its flexibility comes at a trade-off of it being harder to maintain. Mixed is available either through Schema.Types.Mixed or by passing an empty object literal.
data: [
  {
    type: String,
    content: Schema.Types.Mixed
  }
]
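One caveat if you go with Mixed, per Mongoose's own documentation: Mongoose cannot detect changes made inside a Mixed value, so you must mark the path as modified before saving. A small usage sketch (the model and field names follow the question):
// Mongoose tracks no changes inside Mixed values, so flag the path manually.
const doc = await TestObject.findOne({ title: "This an item" });
doc.data[0].content = "<h1>Hi there, this is an updated H1</h1>";
doc.markModified("data");
await doc.save();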