I have a _User class in MongoDB for which I do not want sensitive fields such as email returned by queries (unless the correct session token is sent).
I can set CLP in the _Schema like so:
```
"_metadata": {
  "class_permissions": {
    "get": { "*": true },
    "find": { "*": true },
    "update": { "*": true },
    "create": { "*": true },
    "delete": { "*": true },
    "addField": { "*": true },
    "readUserFields": [],
    "writeUserFields": []
  }
},
"username": "string",
"email": "string"...
```
I do not want the whole class to be hidden - e.g., I want to keep get and find publicly readable. I just want to hide certain columns.
I couldn't find any documentation on the readUserFields field seen above, nor did it help to add fields within it, i.e. readUserFields: ["trivialField"].
If it helps, this database structure was migrated from https://parse.com - but I believe the question is more relevant to MongoDB as a whole; there was no useful information regarding this from the Parse community.
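For reference, I know I can exclude a field per query with a projection at the raw MongoDB level; a minimal sketch (field name from my schema above):
```
// This hides email only for queries that remember to project it out;
// it is not enforced server-side, which is what I'm after.
db.getCollection('_User').find({}, { email: 0 });
```
What I need is a way to enforce this at the class/permission level.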
Any help would be appreciated!
I am new to Elasticsearch and facing a couple of issues when querying. I have a simple MongoDB database with collections of cities and places of interest. Each document has a cityName and other details like website etc., and also a places object array. This is my mapping:
```
{
  "mappings": {
    "properties": {
      "cityName": { "type": "text" },
      "phone": { "type": "keyword" },
      "email": { "type": "keyword" },
      "website": { "type": "keyword" },
      "notes": { "type": "keyword" },
      "status": { "type": "keyword" },
      "places": {
        "type": "nested",
        "properties": {
          "name": { "type": "text" },
          "status": { "type": "keyword" },
          "category": { "type": "keyword" },
          "reviews": {
            "properties": {
              "rating": { "type": "long" },
              "comment": { "type": "keyword" },
              "user": { "type": "nested" }
            }
          }
        }
      }
    }
  }
}
```
I need a fuzzy query where the user can search both cityName and places.name. However, I only get results when I search for a single word; adding multiple words returns 0 hits. I am sure I am missing something here because I started learning Elasticsearch 2 days ago. The following query returns results because I have a document with cityName: Islamabad and a places array whose objects have the keyword Islamabad in their name; in some places objects the keyword Islamabad is at the beginning of place.name, and in others it is in the middle or at the end.
This is what I am using (it returns results only when the query is a single word):
```
{
  "query": {
    "bool": {
      "should": [
        { "fuzzy": { "cityName": "Islamabad" } },
        {
          "nested": {
            "path": "places",
            "query": { "fuzzy": { "places.name": "Islamabad" } }
          }
        }
      ]
    }
  }
}
```
Adding another word, say club, to the above query returns 0 hits, even though I do have places named Islamabad club and Islamabad Golf club.
Problem
The search query is sent from an app, so it is dynamic; the term to search is the same for both cityName and places.name, AND places.name doesn't always contain the cityName.
What exactly do I need?
I need a query where I can search cityName and the array of places (only searching places.name). The query should be fuzzy, so that it still returns results if the word Islamabad is spelled like Islambad, or even returns results for Islam or Abad. The query should also return results for multiple words; I am sure I am doing something wrong there. Any help would be appreciated.
**P.S.:** I am actually using MongoDB as my database and am migrating to Elasticsearch ONLY to improve our search feature. I tried different approaches with MongoDB, including the mongoose-fuzzy-searching npm module, but that didn't work, so if there's a simpler solution for MongoDB please share that too.
Thanks.
EDIT 1:
I had to change the structure (mapping) of my data. Now I have 2 separate indices: one for cities with the city details and a cityId, and another index for all places, where each place has a cityId that can be used for joining later if needed. Each place also has a cityName key, so I will only be searching the places index because it has all the details (place name and city name).
I have a city with the word Welder's in its name, and some places inside the same location also have the word Welder's in their name; both fields are of type: text. However, when I search for welder, neither of the following queries returns these documents, while a search for welders OR welder's does return them. I am not sure why welder won't match Welder's. I didn't specify any analyzer when creating either index, and I am not explicitly defining one in the query. Can anyone help me out with these queries so they behave as expected?
Query 1:
```
{
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "name": { "query": "welder", "fuzziness": 20 }
          }
        },
        {
          "match": {
            "cityName": { "query": "welder", "fuzziness": 20 }
          }
        }
      ]
    }
  }
}
```
Query 2:
```
{
  "query": {
    "match": {
      "name": { "query": "welder", "fuzziness": 20 }
    }
  }
}
```
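In case it helps with diagnosing, one way to inspect which tokens are actually being indexed is the _analyze API; a minimal sketch (the index name places and the field name are from my setup, adjust as needed):
```
POST places/_analyze
{
  "field": "name",
  "text": "Welder's"
}
```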
The fuzzy query is meant to be used to find approximations of your complete query term within a certain edit distance:

> To find similar terms, the fuzzy query creates a set of all possible variations, or expansions, of the search term within a specified edit distance. The query then returns exact matches for each expansion.

If you want to allow fuzzy matching of individual terms in your query, you need to use a match query with fuzziness activated.
```
POST <your_index>/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "cityName": { "query": "Islamabad golf", "fuzziness": "AUTO" }
          }
        },
        {
          "nested": {
            "path": "places",
            "query": {
              "match": {
                "places.name": { "query": "Islamabad golf", "fuzziness": "AUTO" }
              }
            }
          }
        }
      ]
    }
  }
}
```
Reminder: fuzziness in Elasticsearch allows at most 2 corrections per term (with "AUTO", terms of 1-2 characters allow 0 edits, terms of 3-5 characters allow 1 edit, and longer terms allow 2). So you will never be able to match Islam to Islamabad, since there are 4 edits (the insertions a, b, a, d) between those terms.
For more information on the distance and fuzziness parameters, please refer to the documentation page on fuzziness.
In Loopback (v3), when defining indexes in my model.json files, how do I specify different types of indexes (such as a BRIN)? Also, how do I specify index conditions (such as if I want to create a partial index)? I'm using postgres for the database, if that's relevant.
You can configure the index type via the type field.
```
{
  "name": "MyModel",
  "properties": {
    // ...
  },
  "indexes": {
    "myindex": {
      "columns": "name, email",
      "type": "BRIN",
      // ...
    }
  }
}
```
I am afraid LoopBack does not support index conditions (partial indexes) yet. Feel free to open a new issue in https://github.com/strongloop/loopback-connector-postgresql/issues.
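As a workaround (this is not a LoopBack feature, just a sketch; the datasource name, table, and condition below are made-up examples), you can create the partial index yourself with raw SQL from a boot script, since the SQL connectors expose execute() for raw queries:
```
// server/boot/create-partial-index.js (hypothetical boot script)
module.exports = function(app) {
  var ds = app.dataSources.postgres; // assumed datasource name
  var sql =
    'CREATE INDEX IF NOT EXISTS mymodel_active_name_idx ' +
    'ON mymodel (name) WHERE active = true'; // assumed table/column/condition
  ds.connector.execute(sql, [], function(err) {
    if (err) console.error('Could not create partial index:', err);
  });
};
```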
I was trying to add this in LB4. It's pretty straightforward there (and should be much the same for LB3 as well, I hope):
```
@model({
  name: 'tablename',
  settings: {
    indexes: {
      idx_tablename: {
        columnA: '',
        columnB: '',
        columnC: ''
      }
    }
  }
})
```
Once the build is done, the index named idx_tablename covering the 3 columns will get created.
In PostgreSQL with Loopback 3 you can specify a multi-column index like this.
The following LoopBack JSON creates an index in Postgres where the fields message and type are unique together:
```
{
  "name": "notification",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "message": {
      "type": "string",
      "required": true
    },
    "type": {
      "type": "string",
      "required": true
    },
    "seen": {
      "type": "boolean",
      "required": true,
      "default": false
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {},
  "indexes": {
    "message_type_index": {
      "keys": "message, type",
      "options": { "unique": true }
    }
  }
}
```
For a specific function that I am building, I need to parse my JSON so that, in some cases, the attribute name, instead of the value itself, is used as the value of an attribute. But how do I manage that with JOLT?
Let's say this is my input:
```
{
  "Results": [
    { "FirstName": "John", "LastName": "Doe" },
    { "FirstName": "Mary", "LastName": "Joe" },
    { "FirstName": "Thomas", "LastName": "Edison" }
  ]
}
```
And this should be the outcome:
```
{
  "Results": [
    { "Name": "FirstName", "Value": "John" },
    { "Name": "FirstName", "Value": "Mary" },
    { "Name": "FirstName", "Value": "Thomas" },
    { "Name": "LastName", "Value": "Doe" },
    { "Name": "LastName", "Value": "Joe" },
    { "Name": "LastName", "Value": "Edison" }
  ]
}
```
For those interested: I'm building a JSON-to-Excel export functionality in Mendix, and it has to be completely dynamic, regardless of the input. To accomplish this, I need an array where each attribute (equal to a column in Excel) is its own object with a column name and a value. If each column's data is its own object, I can simply say "create a column for each object with the same Name". It's a little difficult to explain, but it 'should' work.
Arrays and Jolt are not the best of friends. Basically, there are 3 ways to deal with arrays in Shift:
1. You explicitly assign data to an array position, aka foo[0] and foo[1].
2. You reference a "number" that exists in the input data, aka foo[&2] and foo[&3].
3. You "accumulate" data into a list, aka foo[].
Your input data is an array of size 3, your desired output is an array of size 6, and you want this to be flexible and able to handle variable inputs.
This means option 3. So you have to "fix" / process your data into its "final form", while maintaining the original input JSON structure (a list with 3 items), and then accumulate all the "built" items into a list.
This means that you are building a list of lists, and then finally "squashing" it down to a single list.
Spec
```
[
  {
    // Step 1: Pivot the data into parallel lists of keys and values,
    // maintaining the original outer input list structure.
    "operation": "shift",
    "spec": {
      "Results": {
        "*": { // results index
          "*": { // FirstName / LastName
            "$": "temp[&2].keys[]",
            "#": "temp[&2].values[]"
          }
        }
      }
    }
  },
  {
    // Step 2: Un-pivot the data into the desired Name/Value pairs,
    // using the inner array index to keep things organized/separated.
    "operation": "shift",
    "spec": {
      "temp": {
        "*": { // temp index
          "keys": {
            "*": "temp[&2].[&].Name"
          },
          "values": {
            "*": "temp[&2].[&].Value"
          }
        }
      }
    }
  },
  {
    // Step 3: Accumulate the "finished" Name/Value pairs
    // into the final "one big list" output.
    "operation": "shift",
    "spec": {
      "temp": {
        "*": { // outer array
          "*": "Results[]"
        }
      }
    }
  }
]
```
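For illustration, after step 1 the intermediate temp structure should look roughly like this (derived by hand from the input above; worth verifying at https://jolt-demo.appspot.com):
```
{
  "temp": [
    { "keys": [ "FirstName", "LastName" ], "values": [ "John", "Doe" ] },
    { "keys": [ "FirstName", "LastName" ], "values": [ "Mary", "Joe" ] },
    { "keys": [ "FirstName", "LastName" ], "values": [ "Thomas", "Edison" ] }
  ]
}
```
Step 2 then pairs keys[i] with values[i] into Name/Value objects per inner list, and step 3 squashes the list of lists into the single Results array.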
I am using IBM Graph in Bluemix and am new to it.
I created a graph named 'test' using the GUI provided by Bluemix and uploaded the sample 'Music Festival' data provided by IBM into that graph.
Now I am trying to query all the vertices having the label 'attendee' using the query below:
```
def gt = graph.traversal();
gt.V().hasLabel("attendee");
```
But I am getting this error:
```
Error: Error encountered evaluating script def gt = graph.traversal();gt.V().hasLabel("attendee"); with reason com.thinkaurelius.titan.core.TitanException: Could not find a suitable index to answer graph query and graph scans are disabled: [(~label = attendee)]:VERTEX
```
Not sure what I am doing wrong. Can somebody tell me where I am going wrong? How can I get rid of this error and get the expected output?
Thanks
@Radhika, your Gremlin query is a valid Gremlin query. However, some vendors (such as IBM Graph and Titan) chose to only allow users to start their traversals with a step that is indexed, to guarantee query performance. Calling hasLabel() by itself will give you the Could not find a suitable index... error, as you can't create indexes for labels. What you need to do is follow this step with one that uses an indexed property, as in this query:
```
def gt = graph.traversal();
gt.V().hasLabel("band").has("genre", "pop");
```
An index for genre has been created in the schema for the sample music festival data, as you can see below:
```
{
  "propertyKeys": [
    { "name": "name", "dataType": "String", "cardinality": "SINGLE" },
    { "name": "gender", "dataType": "String", "cardinality": "SINGLE" },
    { "name": "age", "dataType": "Integer", "cardinality": "SINGLE" },
    { "name": "genre", "dataType": "String", "cardinality": "SINGLE" },
    { "name": "monthly_listeners", "dataType": "String", "cardinality": "SINGLE" },
    { "name": "date", "dataType": "String", "cardinality": "SINGLE" },
    { "name": "time", "dataType": "String", "cardinality": "SINGLE" }
  ],
  "vertexLabels": [
    { "name": "attendee" },
    { "name": "band" },
    { "name": "venue" }
  ],
  "edgeLabels": [
    { "name": "bought_ticket", "multiplicity": "MULTI" },
    { "name": "advertised_to", "multiplicity": "MULTI" },
    { "name": "performing_at", "multiplicity": "MULTI" }
  ],
  "vertexIndexes": [
    { "name": "vByName", "propertyKeys": ["name"], "composite": true, "unique": false },
    { "name": "vByGender", "propertyKeys": ["gender"], "composite": true, "unique": false },
    { "name": "vByGenre", "propertyKeys": ["genre"], "composite": true, "unique": false }
  ],
  "edgeIndexes": [
    { "name": "eByBoughtTicket", "propertyKeys": ["time"], "composite": true, "unique": false }
  ]
}
```
That's why the above query works, and you need to do the same:
1. If you don't have a schema, create one. You can model it after the one above or follow the API doc.
2. Create a (vertex/label) index for the properties that you'll start your traversals from. In this example: name, gender and genre for the vertex properties, and time for the edge properties.
3. Call the schema endpoint to add your schema to your graph; see the sketch after this list.
4. It's recommended to create your schema before adding any data to your graph so that you don't have to reindex later. That'll save you a lot of time.
5. Once you create your schema, you can't modify what you created already, but you can add new properties/indexes later on.
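A minimal Node.js sketch of step 3 (the base URL, credentials, and exact auth scheme are assumptions; take them from your Bluemix service credentials and verify against the API doc):
```
// Hypothetical values throughout; adjust to your own service instance.
var request = require('request'); // npm install request

var apiURL = 'https://<your-ibm-graph-instance>/g'; // assumed base URL
var schema = require('./schema.json'); // the schema JSON shown above

request.post({
  url: apiURL + '/schema',
  auth: { user: '<username>', pass: '<password>' }, // assumed auth scheme
  json: schema
}, function(err, res, body) {
  if (err) return console.error('Schema upload failed:', err);
  console.log('Schema response:', body);
});
```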
Look at the following code samples for Java and Nodejs for the exact code to use.
I hope that helps
I have a document like the one below in MongoDB:
```
{
  "_id": "test",
  "tasks": [
    {
      "Name": "Task1",
      "Parameter": [
        { "Name": "para1", "Type": "String", "Value": "*****" },
        { "Name": "para2", "Type": "String", "Value": "*****" }
      ]
    },
    {
      "Name": "Task2",
      "Parameter": [
        { "Name": "para1", "Type": "String", "Value": "*****" },
        { "Name": "para2", "Type": "String", "Value": "*****" }
      ]
    }
  ]
}
```
There is an embedded data structure (Parameter) inside another embedded data structure (tasks). Now I want to update para1 in Task1's Parameter array.
I have tried many ways, but I can only use the query tasks.Parameter.Name to find para1; I cannot update it. The examples in the docs use .$. to update a value in an embedded data structure, but that doesn't work in my case.
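For illustration, this is roughly the kind of update I attempted (the values are placeholders), which MongoDB rejects because the positional $ can appear only once in a path:
```
// Fails: two positional $ operators in one update path are not allowed.
db.test.update(
  { "tasks.Name": "Task1", "tasks.Parameter.Name": "para1" },
  { $set: { "tasks.$.Parameter.$.Value": "newValue" } }
);
```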
Anyone have any ideas?
MongoDB currently only supports the positional operator once, and only for the top-level array. There is a ticket, SERVER-831, to change this behavior for your use case. You can follow the issue there and upvote it.
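If you are on MongoDB 3.6 or newer, the filtered positional operator $[<identifier>] together with arrayFilters covers this exact case; a sketch with placeholder values:
```
// Updates para1 inside Task1 without reshaping the schema (MongoDB 3.6+).
db.test.update(
  { "tasks.Name": "Task1" },
  { $set: { "tasks.$[t].Parameter.$[p].Value": "newValue" } },
  { arrayFilters: [ { "t.Name": "Task1" }, { "p.Name": "para1" } ] }
);
```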
However, you might be able to change your approach to accomplish what you want. One way is to change your schema: collapse the task name into the array so the document looks like this:
```
{
  "_id": "test",
  "tasks": [
    { "Task": 1, "Name": "para1", "Type": "String", "Value": "*****" },
    { "Task": 1, "Name": "para2", "Type": "String", "Value": "*****" },
    { "Task": 2, "Name": "para1", "Type": "String", "Value": "*****" },
    { "Task": 2, "Name": "para2", "Type": "String", "Value": "*****" }
  ]
}
```
Another approach that may work for you is to use $pull and $push. For instance, something like this replaces a parameter (it assumes that tasks.Parameter.Name is unique within a given Parameter array):
```
db.test2.update(
  { $and: [ { "tasks.Name": "Task3" }, { "tasks.Parameter.Name": "para1" } ] },
  { $pull: { "tasks.$.Parameter": { "Name": "para1" } } }
)
db.test2.update(
  { "tasks.Name": "Task3" },
  { $push: { "tasks.$.Parameter": { "Name": "para3", "Type": "String", "Value": 1 } } }
)
```
With this solution you need to be careful with regard to concurrency, as there will be a brief moment between the two updates where the parameter doesn't exist.