MongoDB: Finding a value where object name contains url - mongodb

We are using learninglocker and I am trying to query its MongoDB. learninglocker stores escaped URLs as object keys, which makes them more difficult to search. I am getting 0 results when I would expect to return several.
My find is as follows:
{"statement.object.definition.extensions.http://activitystrea&46;ms/schema/1&46;0/device.device_type": "app"}
I assume that this should be escaped somehow; however, I am unsure how. The escaped key in question is:
http://activitystrea&46;ms/schema/1&46;0/device
Sample object:
"statement": {
"version" : "1.0.0",
"actor" : { },
"verb" : { },
"context" : { },
"object" : {
"definition" : {
"extensions" : {
"http://activitystrea&46;ms/schema/1&46;0/device" : {
"device_type" : "app"
}
}
}
}
}
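To confirm how the key is actually stored, here is a minimal sketch (the collection name statements is an assumption) that fetches one document and projects only the extensions object so the escaped key names are visible:
db.statements.findOne(
  // only documents that actually have extensions (collection name is hypothetical)
  { "statement.object.definition.extensions": { $exists: true } },
  // project just the extensions object so the stored key can be compared byte-for-byte
  { "statement.object.definition.extensions": 1, "_id": 0 }
)
In principle the dotted path in the find above should resolve, since the escaped key contains no literal dots; a mismatch between the stored key and the queried key is the first thing to rule out.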

Related

Missing rollover_alias index setting in OpenSearch

I am trying to set up index rollover in OpenSearch with a simple min_doc_count condition, but I am getting a "message": "Missing rollover_alias index setting [index=app_logs-000002]" error.
I have a rollover alias called app_logs, and also have the following policy (for demo purposes it is a dummy that rolls over after 3 documents) attached to the indexes:
PUT _plugins/_ism/policies/rollover_policy
{
  "policy": {
    "description": "Rollover policy",
    "default_state": "rollover",
    "states": [
      {
        "name": "rollover",
        "actions": [
          {
            "rollover": {
              "min_doc_count": 3
            }
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": [
      {
        "index_patterns": [
          "app_logs-*"
        ]
      }
    ]
  }
}
GET _cat/aliases:
app_logs app_logs-000001 - - - false
app_logs app_logs-000002 - - - true
GET _cat/indices:
yellow open app_logs-000002 V4j0gxaYTcqoQZvtd0u2zc 1 1 6 0 4.1kb 4.1kb
yellow open app_logs-000001 AnPjlOq6Q5We411z2q_YpQ 1 1 5 0 18.8kb 18.8kb
...
When doing
GET _opendistro/_ism/explain/app_logs-000002?pretty I get:
{
  "app_logs-000002" : {
    "index.plugins.index_state_management.policy_id" : "rollover_policy",
    "index.opendistro.index_state_management.policy_id" : "rollover_policy",
    "index" : "app_logs-000002",
    "index_uuid" : "V4j0gxaYTcqoQZvtd0u2zc",
    "policy_id" : "rollover_policy",
    "policy_seq_no" : -2,
    "policy_primary_term" : 0,
    "rolled_over" : false,
    "index_creation_date" : 1659299029428,
    "state" : {
      "name" : "rollover",
      "start_time" : 1659299410303
    },
    "action" : {
      "name" : "rollover",
      "start_time" : 1659424192817,
      "index" : 0,
      "failed" : true,
      "consumed_retries" : 3,
      "last_retry_time" : 1659424804833
    },
    "step" : {
      "name" : "attempt_rollover",
      "start_time" : 1659424192817,
      "step_status" : "failed"
    },
    "retry_info" : {
      "failed" : false,
      "consumed_retries" : 0
    },
    "info" : {
      "message" : "Missing rollover_alias index setting [index=app_logs-000002]"
    },
    "enabled" : false
  },
  "total_managed_indices" : 1
}
When I do GET app_logs-000002/_settings I get:
{
  "app_logs-000002" : {
    "settings" : {
      "index" : {
        "creation_date" : "1659299029428",
        "number_of_shards" : "1",
        "number_of_replicas" : "1",
        "uuid" : "V4j0gxaYTcqoQZvtd0u2zc",
        "version" : {
          "created" : "136227827"
        },
        "provided_name" : "app_logs-000002"
      }
    }
  }
}
so yes, the rollover alias setting is really missing there. But I would expect it to be added automatically.
When I do GET _template I get:
{
  "ism_rollover" : {
    "order" : 0,
    "index_patterns" : [
      "app_logs-*"
    ],
    "settings" : {
      "index" : {
        "opendistro" : {
          "index_state_management" : {
            "rollover_alias" : "app_logs"
          }
        }
      }
    },
    "mappings" : { },
    "aliases" : { }
  }
}
so the rollover_alias is there in the template. Why is it not applied to a new index created from the template?
Thanks!
I experienced a similar problem. The issue was that the indices needed to be created after the ISM policy and template. I'm not sure if you managed to find a solution, but perhaps this will prove useful for future readers.
Some docs:
Very useful sample on setting up a rolling index strategy: https://opensearch.org/docs/latest/im-plugin/ism/policies/#sample-policy-with-ism-template-for-auto-rollover
Official AWS docs on the same topic with some examples: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ism.html.
A great writeup on common errors experienced when implementing a rolling index ISM policy: https://aws.amazon.com/premiumsupport/knowledge-center/opensearch-failed-rollover-index/
In your case it appears that the policy was not correctly applied to your indices, which is likely because you created your indices before the policy and template. If you want to add a policy to an existing index, see step 6 of "Create an ISM policy" in the linked AWS docs above:
POST _plugins/_ism/add/my-index
{
  "policy_id": "my-policy-id"
}
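Note that attaching a policy does not by itself add the missing alias setting. For an index that already exists, you can also set it explicitly; a minimal sketch, using the same legacy setting name as your template (newer OpenSearch releases also accept the plugins.index_state_management.* prefix):
PUT app_logs-000002/_settings
{
  "opendistro.index_state_management.rollover_alias": "app_logs"
}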
Here is how I went about solving this problem using a policy and template:
1. Implement an ISM policy (as you did above).
2. Create an ISM template:
PUT _template/ism_rollover_app
{
  "index_patterns": "app_logs-*",
  "settings": {
    "index": {
      "opendistro.index_state_management.rollover_alias": "app_logs"
    }
  }
}
3. Create an initial index called app_logs-00001 (or some variant that matches the regex ^.*-\d+$), as sketched below.
This should hopefully see app_logs-00001 created from the ism_rollover_app template and have the app_logs alias associated with it, which should subsequently fix the missing alias issue.
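For completeness, a sketch of that initial index creation, marking it as the write index for the app_logs alias (this mirrors the sample in the OpenSearch ISM docs linked above; the index name follows step 3):
PUT app_logs-00001
{
  "aliases": {
    "app_logs": {
      "is_write_index": true
    }
  }
}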

How to find special characters in a particular attribute of json data stored in mongodb using mongo query

How do I search for special characters in a particular nested JSON object field? I have a field which stores nested JSON data.
I need to write a MongoDB query to fetch all the names which have special characters.
Student collection:
Example:
{
  _id: 123,
  student: {
    "personalinfo": {
      "infoid": "YYY21",
      "name": "test##!*"
    }
  }
}
I have tried a few regular expressions, but I am not sure how to loop over array elements.
I expect it to print the infoid and name for documents that have special characters in the name field.
The following query can do the trick:
db.collection.distinct("student.personalinfo.name",{"student.personalinfo.name": { $not: /^[\w]+[\w ]*$/ } })
Data set:
{
  "_id" : ObjectId("5d77a5babd4e75c58d59821d"),
  "student" : {
    "personalinfo" : {
      "infoid" : "YYY21",
      "name" : "test##!*"
    }
  }
}
{
  "_id" : ObjectId("5d77a5babd4e75c58d59821e"),
  "student" : {
    "personalinfo" : {
      "infoid" : "YYY21",
      "name" : "Bruce##"
    }
  }
}
{
  "_id" : ObjectId("5d77a5babd4e75c58d59821f"),
  "student" : {
    "personalinfo" : {
      "infoid" : "YYY21",
      "name" : "Tony"
    }
  }
}
{
  "_id" : ObjectId("5d77a5babd4e75c58d598220"),
  "student" : {
    "personalinfo" : {
      "infoid" : "YYY21",
      "name" : "Natasha"
    }
  }
}
Output:
[ "test##!*", "Bruce##" ]
Try using the regex below; it matches all Unicode punctuation and symbols. (The field path follows the data set above, and the backslashes are doubled so they survive the shell's string escaping.)
db.collection.find({ "student.personalinfo.name": { $regex: "[\\p{P}\\p{S}]" } })
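If you also need the infoid alongside each matching name, as the question asks, here is a small sketch building on the first answer's regex (the collection name students is an assumption):
// Find documents whose name contains anything other than word characters or spaces,
// and project just infoid and name.
db.students.find(
  { "student.personalinfo.name": { $not: /^[\w ]+$/ } },
  { "student.personalinfo.infoid": 1, "student.personalinfo.name": 1, "_id": 0 }
)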

Elasticsearch doesn't find value in range query

I launch the following query:
GET archive-bp/_search
{
  "query": {
    "bool" : {
      "filter" : [ {
        "bool" : {
          "should" : [ {
            "terms" : {
              "naDataOwnerCode" : [ "ACME-FinServ", "ACME-FinServ CA", "ACME-FinServ NY", "ACME-FinServ TX", "ACME-Shipping APA", "ACME-Shipping Eur", "ACME-Shipping LATAM", "ACME-Shipping ME", "ACME-TelCo-CN", "ACME-TelCo-ESAT", "ACME-TelCo-NL", "ACME-TelCo-PL", "ACME-TelCo-RO", "ACME-TelCo-SA", "ACME-TelCo-Treasury", "Default" ]
            }
          },
          {
            "bool" : {
              "must_not" : {
                "exists" : {
                  "field" : "naDataOwnerCode"
                }
              }
            }
          } ]
        }
      }, {
        "range" : {
          "bankCommunicationStatusDate" : {
            "from" : "2006-02-27T06:45:47.000Z",
            "to" : null,
            "time_zone" : "+02:00",
            "include_lower" : true,
            "include_upper" : true
          }
        }
      } ]
    }
  }
}
And I receive no results, but the field exists in my index.
When I strip off the data owner part, I still get no results. When I strip off the bankCommunicationStatusDate range, I get 10 results, so that is where the problem lies.
The query with only the bankCommunicationStatusDate range:
GET archive-bp/_search
{
  "query" : {
    "range" : {
      "bankCommunicationStatusDate" : {
        "from" : "2016-04-27T09:45:43.000Z",
        "to" : "2026-04-27T09:45:43.000Z",
        "time_zone" : "+02:00",
        "include_lower" : true,
        "include_upper" : true
      }
    }
  }
}
The mapping of my index contains the following bankCommunicationStatusDate field:
"bankCommunicationStatusDate": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
}
And there are values for the field bankCommunicationStatusDate in elasticsearch:
"bankCommunicationStatusDate": "2016-04-27T09:45:43.000Z"
"bankCommunicationStatusDate": "2016-04-27T09:45:47.000Z"
What is wrong?
What version of Elasticsearch do you use?
I guess the reason is that you should use "gte/lte" instead of "from/to/include_lower/include_upper".
According to the documentation for version 0.90.4:
https://www.elastic.co/guide/en/elasticsearch/reference/0.90/query-dsl-range-query.html
Deprecated in 0.90.4.
The from, to, include_lower and include_upper parameters have been deprecated in favour of gt,gte,lt,lte.
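Applied to the range clause from your original query, that would look roughly like this (a sketch: only the parameter names change, the field and values are taken from the question):
"range" : {
  "bankCommunicationStatusDate" : {
    "gte" : "2006-02-27T06:45:47.000Z",
    "time_zone" : "+02:00"
  }
}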
The strange thing is that I have tried your example on Elasticsearch version 1.7 and it returns data!
I guess the real deprecation took effect much later, somewhere between 1.7 and the newer version you have.
BTW, you can isolate the problem even further using the Sense plugin for Chrome and this code:
DELETE /test

PUT /test
{
  "mappings": {
    "myData" : {
      "properties": {
        "bankCommunicationStatusDate": {
          "type": "date"
        }
      }
    }
  }
}

PUT test/myData/1
{
  "bankCommunicationStatusDate": "2016-04-27T09:45:43.000Z"
}

PUT test/myData/2
{
  "bankCommunicationStatusDate": "2016-04-27T09:45:47.000Z"
}

GET test/_search
{
  "query" : {
    "range" : {
      "bankCommunicationStatusDate" : {
        "gte" : "2016-04-27T09:45:43.000Z",
        "lte" : "2026-04-27T09:45:43.000Z"
      }
    }
  }
}

Update a Map value document value using Mongo

I have a document like the following in myPortCollection:
{
  "_id" : ObjectId("55efce10f027b1ca77deffaa"),
  "_class" : "myTest",
  "deviceIp" : "10.115.75.77",
  "ports" : {
    "1/1/x1" : {
      "portId" : "1/1/x1",
      "healthState" : "Green"
    }
  }
}
I tried to update it with:
db.myPortCollection.update(
  { deviceIp: "10.115.75.77" },
  { "chassis.ports.1/1/x10.rtId": "1/1/x10" },
  { $set: { "chassis.ports.1/1/x10.healthState" : "Red" } }
)
But I am getting an error that the attribute names are wrong. Please help with the proper syntax for updating an embedded map document.
The "query" portion is wrong as you have split conditions into two documents. It should be this:
db.myPortCollection.update(
  {
    "deviceIp": "10.115.75.77",
    "chassis.ports.1/1/x10.rtId": "1/1/x10"
  },
  { "$set": { "chassis.ports.1/1/x10.healthState" : "Red" } }
)
And as long as the query then matches (valid data for it is not shown in your question), the specified field will be set or added.
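Note that the sample document in the question shows ports at the top level (not under chassis) and a port key of 1/1/x1 rather than 1/1/x10, so against that exact document the paths would presumably need adjusting. A hedged sketch against the document as shown:
db.myPortCollection.update(
  // match on the device and the embedded map key shown in the sample document
  { "deviceIp": "10.115.75.77", "ports.1/1/x1.portId": "1/1/x1" },
  // set the nested healthState value inside that map entry
  { "$set": { "ports.1/1/x1.healthState": "Red" } }
)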

How can one "join" a DBRef

Somehow it must be possible to generate a result where DBRefs are resolved and the value of the referenced object is returned together with the original object.
Example: the first object has a reference:
{
  "_id" : ObjectId("53bd526a5894ca07e60ca414"),
  "name" : "The name",
  "labelnames" : {
    "de" : {
      "$ref" : "nameList",
      "$id" : ObjectId("53bd526a5894ca07e60ca41c")
    }
  }
}
The second object stores the value:
{
  "_id" : ObjectId("53bd526a5894ca07e60ca41c"),
  "lang" : "de",
  "labelNameMap" : {
    "9d96cd10-d27f-4579-9f6e-9fd8d9f9c683" : {
      "value" : "the value"
    }
  }
}
The result should be:
{name: "The name", value: "the value}
With SQL it would be a join; how is this done in MongoDB?
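For what it's worth, DBRefs are resolved by the application or driver, not by the server, so the usual pattern is a second query issued by the client. A minimal mongo shell sketch (the collection name items for the first document is an assumption; nameList comes from the $ref value shown above):
// Fetch the referencing document (hypothetical collection name "items").
var doc = db.items.findOne({ _id: ObjectId("53bd526a5894ca07e60ca414") });
// Follow the reference manually: query the collection named in $ref
// for the _id stored in $id (values taken from the question).
var referenced = db.nameList.findOne({ _id: ObjectId("53bd526a5894ca07e60ca41c") });
// Combine the two in the application, e.g.
// { name: doc.name, value: referenced.labelNameMap["9d96cd10-d27f-4579-9f6e-9fd8d9f9c683"].value }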