Printing out an array in a constructor - CoffeeScript

This should print out everything in collection,
but it only prints out two elements.
Why won't it print out the entire list?
Is it some kind of race condition?
http://jsfiddle.net/Czenu/1/
class window.Restful
  constructor: ->
    _.each @collection, (action, kind) =>
      $('.actions').append "<div>#{action} #{kind}</div>"
class Material extends Restful
  namespace: 'admin/api'
  table_name: 'materials'
  constructor: (@$rootScope, @$http) ->
    super
  collection:
    get: 'downloaded'
    get: 'incomplete'
    get: 'submitted'
    get: 'marked'
    get: 'reviewed'
    get: 'corrected'
    get: 'completed'
    post: 'sort'
    post: 'sort_recieve'

new Material()

Your collection object consists of entries with just two distinct keys: "get" and "post". Since each key can only map to one value, the later entries overwrite the earlier ones, and your object is effectively reduced to:
collection:
  get: 'completed'
  post: 'sort_recieve'
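You can see this collapse directly in plain JavaScript (a small illustration of the object-literal semantics, not code from the original post; a CoffeeScript object literal compiles to the same thing):
// Duplicate keys in an object literal: the last value for each key wins
var collection = {
  get: 'downloaded',
  get: 'completed',
  post: 'sort',
  post: 'sort_recieve'
};
console.log(collection);              // { get: 'completed', post: 'sort_recieve' }
console.log(Object.keys(collection)); // [ 'get', 'post' ] -- only two entries survive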
The solution is to use more meaningful objects, for instance an array of custom objects (created with a shortcut function with a meaningful name, as in the example below).
class window.Restful
  constructor: ->
    _.each @collection, (obj) =>
      {action, kind} = obj
      $('.actions').append "<div>#{action} #{kind}</div>"

class Material extends Restful
  get = (action) -> {action, kind: 'get'}
  post = (action) -> {action, kind: 'post'}
  ...
  collection: [
    get 'downloaded'
    get 'incomplete'
    get 'submitted'
    get 'marked'
    get 'reviewed'
    get 'corrected'
    get 'completed'
    post 'sort'
    post 'sort_recieve'
  ]
The full result is shown at http://jsfiddle.net/Czenu/2/.

Related

Is there a way to filter list elements using selective sync in Amplify Flutter?

I have the following DataStore model:
type Foo @model @auth(rules: [{ allow: groups, groups: ["baz"] }]) {
  id: ID! @primaryKey
  name: String!
  items: [String]
}
In the initialization of Amplify I have the following selective sync constraint:
final datastorePlugin = AmplifyDataStore(
  modelProvider: ModelProvider.instance,
  syncExpressions: [
    DataStoreSyncExpression(
      Foo.classType,
      () => Foo.ITEMS.contains('bar'),
    ),
  ],
);
I need to filter for records whose items list contains a 'bar' element.
The docs (Amplify -> Flutter -> DataStore -> Query Data -> Predicates) state that there is a list contains predicate:
Lists: contains | notContains
but it seems that the contains I am using checks for a String within another String, not for a list element using object equality.
Is there a way to ensure that the list predicate is used instead of the String one?

What does the second parameter contain when the callback function is used with Mongoose?

I read the Mongoose documentation and I still don't quite understand the second parameter of the callback passed to the find function.
Item.find({}, function (err, items) {
});
For example, I have a DB that is called Items and the model is called Item. So does items just become an array that contains all the documents I have in the DB?
err will have error details if any error occurred.
items will contain the result documents if the request succeeds.
It's a callback that runs after find executes - you decide what to do with err/data.
Refer to the Mongoose documentation.
Sample from the docs:
// executes, passing results to callback
MyModel.find({ name: 'john', age: { $gte: 18 }}, function (err, docs) {});
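To make the two parameters concrete, here is a minimal sketch of typical handling in the callback style the question uses (it assumes the Item model from the question; the name field is a hypothetical example):
// err is checked first; items is an array of the matched documents
Item.find({}, function (err, items) {
  if (err) {
    console.error('Query failed:', err);
    return;
  }
  // {} matches everything, so items holds every document in the collection
  items.forEach(function (item) {
    console.log(item._id, item.name); // name is a hypothetical field
  });
});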

How to persist a document in JSON format using elasticsearch-dsl

I am trying to update an existing Elasticsearch data pipeline and would like to use elasticsearch-dsl more fully. In the current process we create a document as a JSON object and then use requests to PUT the object to the relevant Elasticsearch index.
I would now like to use the elasticsearch-dsl save method, but I am left struggling to understand how I might do that when my object or document is constructed as JSON.
Current Process:
# import_script.py
index = 'objects'
doc = {"title": "A title", "Description": "Description", "uniqueID": "1234"}
doc_id = doc["uniqueID"]
elastic_url = 'http://elastic:changeme@localhost:9200/' + index + '/_doc/' + doc_id
api = ObjectsHandler()
api.put(elastic_url, doc)

# objects_handler.py
import requests

class ObjectsHandler():
    def put(self, url, object):
        result = requests.put(url, json=object)
        if result.status_code != requests.codes.ok:
            print(result.text)
        result.raise_for_status()
Rather than using this PUT method, I would like to tap into the Document.save functionality available in the DSL, but I can't translate the examples in the API documentation to my use case.
I have amended my ObjectsHandler so that it can create the objects index:
# objects_handler.py
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Document, Text, connections

es = Elasticsearch([{'host': 'localhost', 'port': 9200}],
                   http_auth='elastic:changeme')
connections.create_connection(es)

class Object(Document):
    physicalDescription = Text()
    title = Text()
    uniqueID = Text()

    class Index:
        name = 'objects'
        using = es

class ObjectsHandler():
    def init_mapping(self, index):
        Object.init(using=es, index=index)
This successfully creates an index when I call api.init_mapping(index) from the importer script.
The documentation has this as an example of persisting individual documents, where Article is the equivalent of my Object class:
# create and save an article
article = Article(meta={'id': 42}, title='Hello world!', tags=['test'])
article.body = ''' looong text '''
article.published_from = datetime.now()
article.save()
Is it possible for me to use this methodology but persist my pre-constructed JSON object doc, rather than specifying individual attributes? I also need to be able to specify that the document id is the doc's uniqueID.
I've extended my ObjectsHandler to include a save_doc method:
def save_doc(self, document, doc_id, index):
    new_obj = Object(meta={'id': doc_id},
                     title="hello", uniqueID=doc_id,
                     physicalDescription="blah")
    new_obj.save()
which does successfully save the object with uniqueID as the id, but I am unable to make use of the JSON object passed into the method as document.
I've had some success with this by using elasticsearch.py's bulk support rather than elasticsearch-dsl.
The following resources were super helpful:
Blog - Bulk insert from json objects
SO Answer, showing different ways to add keywords in a bulk action
Elastic documentation on bulk imports
In my question I was referring to a single doc:
doc = {"title": "A title", "Description": "Description", "uniqueID": "1234"}
I actually have an array or list of one or more docs, e.g.:
documents = [{"title": "A title", "Description": "Description", "uniqueID": "1234"}, {"title": "Another title", "Description": "Another description", "uniqueID": "1235"}]
I build up a body for the bulk import and append the id:
for document in documents:
    bulk_body.append({'index': {'_id': document["uniqueID"]}})
    bulk_body.append(document)
Then I run my new call to the bulk method:
api_handler.save_docs(bulk_body, 'objects')
with my objects_handler.py file looking like:
# objects_handler.py
import json

from elasticsearch import Elasticsearch
from elasticsearch_dsl import Document, Text, connections

es = Elasticsearch([{'host': 'localhost', 'port': 9200}],
                   http_auth='elastic:changeme')
connections.create_connection(es)

class Object(Document):
    physicalDescription = Text()
    title = Text()
    uniqueID = Text()

    class Index:
        name = 'objects'
        using = es

class ObjectsHandler():
    def init_mapping(self, index):
        Object.init(using=es, index=index)

    def save_docs(self, docs, index):
        print("Attempting to index the list of docs with a single bulk call")
        resp = es.bulk(index='objects', body=docs)
        print("es.bulk() RESPONSE:", resp)
        print("es.bulk() RESPONSE:", json.dumps(resp, indent=4))
This works for a single doc in JSON format as well as for multiple docs.

DataLoader did not return an array of the same length?

I am building an Express.js application with GraphQL and MongoDB (Mongoose). I am using Facebook's DataLoader to batch and cache requests.
It's working perfectly fine except for this use case.
I have a database filled with users' posts. Each post contains the user's ID for reference. When I make a call to return all the posts in the database, the posts are returned fine, but if I try to get the user in each post, users with multiple posts will only return a single user, because the key for the second user is cached. So 2 posts (keys) from user "x" will only return 1 user object "x".
However, DataLoader has to return the same number of promises as the keys it receives.
It has an option to set cache to false so each key makes a request, but this doesn't seem to work for my use case.
Sorry if I haven't explained this very well.
this is my graphql request
query {
  getAllPosts {
    _id # This is returned fine
    user {
      _id
    }
  }
}
Returned error:
DataLoader must be constructed with a function which accepts Array<key> and returns Promise<Array<value>>, but the function did not return a Promise of an Array of the same length as the Array of keys.
Are you trying to batch post keys [1, 2, 3] and expecting to get user results [{ user1 }, { user2 }, { user1 }]?
Or are you trying to batch user keys [1, 2] and expecting to get post results [{ post1 }, { post3 }] and [{ post2 }]?
It seems like only in the second case will you run into a situation where the length of the keys differs from the length of the results array.
To solve the second case, you could do something like this in SQL (Knex-style):
const loader = new Dataloader(userIDs => {
  const promises = userIDs.map(id => {
    return db('user_posts')
      .where('user_id', id);
  });
  return Promise.all(promises);
});
loader.load(1)
loader.load(2)
So you return [[{ post1 }, { post3 }], [{ post2 }]], which DataLoader can unwrap.
If you had done this instead:
const loader = new Dataloader(userIDs => {
  return db('user_posts')
    .whereIn('user_id', userIDs);
});
loader.load(1)
loader.load(2)
You will instead get [{ post1 }, { post3 }, { post2 }] and hence the error: the function did not return a Promise of an Array of the same length as the Array of keys.
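One way to keep the single query while still satisfying the length contract is to regroup the flat rows by key before returning (a sketch reusing the db, user_posts, and Dataloader names from the snippets above):
// Fetch all rows in one query, then group them so the returned array
// has exactly one entry (an array of posts) per key, in key order.
const loader = new Dataloader(async userIDs => {
  const rows = await db('user_posts').whereIn('user_id', userIDs);
  return userIDs.map(id => rows.filter(row => row.user_id === id));
});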
Not sure if the above is relevant/helpful. I can revise if you can provide a snippet of your batch load function.
You need to map the data returned from the database onto the array of keys.
DataLoader: the Array of values must be the same length as the Array of keys.
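For the original question's first case (post keys resolving to users), that mapping might look like the following sketch, assuming a Mongoose User model (the model name is a placeholder):
const DataLoader = require('dataloader');

// One query for all the ids in the batch, then map the results back so
// the returned array lines up one-to-one, in order, with the keys.
const userLoader = new DataLoader(async userIds => {
  const users = await User.find({ _id: { $in: userIds } });
  const usersById = new Map(users.map(u => [String(u._id), u]));
  return userIds.map(id => usersById.get(String(id)) || null); // same length as userIds
});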
This issue is well explained in this YouTube video on DataLoader - highly recommended.

How to access the properties of a query result in Mongo

I can find a document in my database. A call to:
subject = await Subject.find({ name: 'Math' });
res.send(subject);
returns the document correctly:
{
  "topics": [],
  "_id": "5ab71fe102863b28e8fd1a3a",
  "name": "Math",
  "__v": 0
}
The problem is when I try to access the properties of subject. Any of the following calls returns nothing:
res.send(subject._id);
res.send(subject.name);
I've tried subject.toObject() and subject.toArray() but I receive an error:
(node:2068) UnhandledPromiseRejectionWarning: TypeError: subject.toObject is not a function
Any help will be appreciated. Thanks!
NB:
before res.send(subject), I called console.log(subject) and the output is:
[ { topics: [],
    _id: 5ab71fe102863b28e8fd1a3a,
    name: 'cocei5',
    __v: 0 } ]
That is because the find method in MongoDB always returns an array.
subject = await Subject.find({ name: 'Math' });
So in the line above, Subject.find({ name: 'Math' }) returns an array, which you store in the subject variable. If you get only a single object from the DB, you can access its properties using subject[0].propertyName.
For example, if you want to send just the id, you can do:
res.send(subject[0]._id);
You can always use the ES6 destructuring feature to get the first element returned in the array, as long as you are sure the result will always be at the 0th index.
const [subject] = await Subject.find({ name: 'Math' });
res.send(subject._id);
res.send(subject.name);
ref: Destructuring arrays and objects
Details for the find API
Or you can use:
const subject = await Subject.findOne({ name: 'Math' });
res.send(subject._id);
res.send(subject.name);
As findOne returns an object, whereas find returns an array of objects.
ref: Details for the findOne API