I use Casbin as the authorization library for my REST API, which is written in Go.
To load the policy from my MongoDB database, I use the MongoDB Adapter.
A single policy Mongo document looks like this:
{
  "_id": {
    "$oid": "639491f73e4c9bec05a1d1ec"
  },
  "ptype": "p",
  "v0": "admin",
  "v1": "laptops",
  "v2": "read",
  "v3": "",
  "v4": "",
  "v5": ""
}
In my business logic, I validate if the user can access (read) laptops:
// Resolves to true
if can, _ := e.Enforce(user, "laptops", "read"); can {
    // ... allow access to the laptops resource
}
This works fine.
The problem is that when I delete the policy document, I would expect to no longer be allowed to access laptops. However, that only happens after I restart the application.
So it appears that the Enforce checks are not evaluated against the current state of the database in real time.
As a workaround, I could call the LoadPolicy method whenever a request comes in, but that feels like a dirty hack to me.
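Roughly, the workaround I have in mind looks like this (a minimal sketch; the authorize helper is just illustrative):

import "github.com/casbin/casbin/v2"

// authorize forces a policy reload from MongoDB before every check.
// This makes deletions visible immediately, at the cost of a database
// round trip per request, which is why it feels like a hack.
func authorize(e *casbin.Enforcer, sub, obj, act string) (bool, error) {
    // Re-read all policy rules through the configured adapter.
    if err := e.LoadPolicy(); err != nil {
        return false, err
    }
    return e.Enforce(sub, obj, act)
}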
I would really appreciate some help / suggestions.
I am new to LoopBack, but I want to use it for my upcoming API application. I am currently testing the features it has to offer, but I'm stuck on running an advanced query against a sub-model of my root entity. I apologize in advance if this has already been answered, but I've spent hours searching the web and have not gotten anywhere. Furthermore, I found the example below on the LB4 website, but for reasons unknown to me it does not work in my project.
customerRepo.find({
  include: [
    {
      relation: 'orders',
      scope: {
        where: {name: 'ToysRUs'},
        include: [{relation: 'manufacturers'}],
      },
    },
  ],
});
I am essentially using two models, User and Note, where User has many Notes. I used the LB4 CLI almost exclusively to create the datasource, models, repositories, and relations. I have even added the inclusion resolver to my note repository, as shown below.
this.registerInclusionResolver('user', this.user.inclusionResolver);
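For reference, the rest of the NoteRepository is essentially what the CLI generated. A rough sketch (the datasource binding db and the file layout are approximations of my actual project):

import {Getter, inject} from '@loopback/core';
import {
  BelongsToAccessor,
  DefaultCrudRepository,
  repository,
} from '@loopback/repository';
import {DbDataSource} from '../datasources';
import {Note, NoteRelations, User} from '../models';
import {UserRepository} from './user.repository';

export class NoteRepository extends DefaultCrudRepository<
  Note,
  typeof Note.prototype.id,
  NoteRelations
> {
  // Accessor for the note's owning user, wired up by the lb4 relation command.
  public readonly user: BelongsToAccessor<User, typeof Note.prototype.id>;

  constructor(
    @inject('datasources.db') dataSource: DbDataSource,
    @repository.getter('UserRepository')
    protected userRepositoryGetter: Getter<UserRepository>,
  ) {
    super(Note, dataSource);
    this.user = this.createBelongsToAccessorFor('user', userRepositoryGetter);
    this.registerInclusionResolver('user', this.user.inclusionResolver);
  }
}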
However, when I run the filter below against my note repository, the where clause is not applied to User. Oddly, when I add the scope block, the user is no longer included in the response at all.
{
  "limit": 5,
  "include": [
    {
      "relation": "user",
      "scope": {
        "where": {
          "username": "jdoe@example.com"
        }
      }
    }
  ]
}
My project is boilerplate code created with the lb4 app command, to which I have added my datasource, models, repositories, and controllers.
Any help will be greatly appreciated.
If possible, I'd like to query a related model using a query parameter rather than overriding a blueprint action. I'm not sure whether I need to manually populate an association for this to work.
I have two related models, Idea and Tag. An Idea can have many Tags, and a Tag can be associated with many Ideas (a many-to-many association). I would like to query all Ideas that are associated with a given Tag name.
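For context, the association is declared roughly like this in the two Waterline models (attribute lists trimmed, names from memory):

// api/models/Idea.js
module.exports = {
  attributes: {
    title: { type: 'string' },
    description: { type: 'string' },
    status: { type: 'string' },
    tags: {
      collection: 'tag',
      via: 'ideas'
    }
  }
};

// api/models/Tag.js
module.exports = {
  attributes: {
    text: { type: 'string' },
    description: { type: 'string' },
    approved: { type: 'boolean' },
    ideas: {
      collection: 'idea',
      via: 'tags'
    }
  }
};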
I had this working in the past, but now the result is always an empty array. Looking through my git history, I found two different approaches that I have used before.
Blueprint Endpoint (worked in the past?):
http://localhost:1337/Idea?tags.text[]=GIS
http://localhost:1337/Idea?tags.text=GIS
Example Idea instance (other associations removed for brevity):
{
  "tags": [
    {
      "text": "GIS",
      "description": "Geographic Information Systems involves the association of location and attribute data, data collection, and analysis.",
      "approved": true,
      "createdAt": "2015-08-26T13:27:19.593Z",
      "updatedAt": "2015-08-26T13:29:44.209Z",
      "id": "55ddbeb71670cf062be4e5c0"
    }
  ],
  "title": "First Idea",
  "description": "Let's all do some more GIS!",
  "status": "Proposed",
  "createdAt": "2015-08-26T13:30:03.238Z",
  "updatedAt": "2015-08-26T13:30:03.240Z",
  "id": "55ddbf5b1670cf062be4e5c1"
}
I am creating a desktop application using Node-WebKit. The application is basically for creating documents (details of an employee's daily work), and any registered user can comment on these documents. The documents that I am creating will be split into sections, and users will comment on particular sections. I want to link these sections with the comments that the users post. The linking will be done using JSON-LD. I am using MongoDB to store the data.
I am using Sails.js on the backend and AngularJS on the frontend.
Usually we store our objects in this way:
module.exports = {
  attributes: {
    document: {
      type: 'string'
    },
    comments: {
      collection: 'Comments',
      via: 'document'
    },
    project: {
      model: 'Project'
    }
  }
};
I have done some research on JSON-LD, and from what I understand, this is how the JSON-LD would look:
{
  "@context":
  {
    "name": "http://xmlns.com/foaf/0.1/name",
    "depiction":
    {
      "@id": "http://xmlns.com/foaf/0.1/depiction",
      "@type": "@id"
    },
    "homepage":
    {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  }
}
I would like to know how I can store the JSON-LD in MongoDB.
JSON-LD is valid JSON, so you can store JSON-LD the same way you store JSON objects. What JSON-LD does is map concepts to URLs, but everything remains valid JSON, and documents look like regular JSON as long as the @context property is used. You can find a good example in slides 22 - 23 of this presentation. You may also find the "JSON-LD and MongoDB" presentation interesting.
I am doing the same thing in a Java backend using the MongoDB Java client, and I simply treat JSON-LD objects as regular JSON objects.
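Since the question's backend is Node/Sails, here is a minimal sketch of the same idea with the official Node.js MongoDB driver (connection string, database, and collection names are placeholders):

const { MongoClient } = require('mongodb');

async function storeJsonLd() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const documents = client.db('mydb').collection('documents');

  // A JSON-LD document is inserted exactly like any other JSON document;
  // "@context" is just a regular field as far as MongoDB is concerned.
  await documents.insertOne({
    '@context': { name: 'http://xmlns.com/foaf/0.1/name' },
    name: 'Section 3 of the daily work report'
  });

  await client.close();
}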
Introduction
/me/books.reads returns books[1].
It includes an array of books and the following fields for each book:
title
type
id
url
Problem
I'd like to get at least the author name(s). I know that written_by is an existing field for books.
I'd also like to get the ISBN, if possible.
Current situation
I tried this:
/me/books.reads?fields=data.fields(author)
or
/me/books.reads?fields=data.fields(book.fields(author))
But the error response is:
"Subfields are not supported by data"
The books.reads response looks like this (just one book included):
{
  "data": [
    {
      "id": "00000",
      "from": {
        "name": "User name",
        "id": "11111"
      },
      "start_time": "2013-07-18T23:50:37+0000",
      "publish_time": "2013-07-18T23:50:37+0000",
      "application": {
        "name": "Books",
        "id": "174275722710475"
      },
      "data": {
        "book": {
          "id": "192511337557794",
          "url": "https://www.facebook.com/pages/A-Semantic-Web-Primer/192511337557794",
          "type": "books.book",
          "title": "A Semantic Web Primer"
        }
      },
      "type": "books.reads",
      "no_feed_story": false,
      "likes": {
        "count": 0,
        "can_like": true,
        "user_likes": false
      },
      "comments": {
        "count": 0,
        "can_comment": true,
        "comment_order": "chronological"
      }
    }
  ]
}
If I take the id of a book, I can get its metadata from the open graph, for example http://graph.facebook.com/192511337557794 returns something like this:
{
  "category": "Book",
  "description": "\u003CP>The development of the Semantic Web...",
  "genre": "Computers",
  "is_community_page": true,
  "is_published": true,
  "talking_about_count": 0,
  "were_here_count": 0,
  "written_by": "Grigoris Antoniou, Paul Groth, Frank Van Harmelen",
  "id": "192511337557794",
  "name": "A Semantic Web Primer",
  "link": "http://www.facebook.com/pages/A-Semantic-Web-Primer/192511337557794",
  "likes": 1
}
The response includes ~10 fields, including written_by which has the authors of the book.
Curiously, the link field seems to map to the url field of the books.reads response. However, the field names are different, so I'm starting to lose hope that I will be able to ask for written_by in a books.reads request.
The only reference that I've found about /me/books is https://developers.facebook.com/docs/reference/opengraph/object-type/books.book/
This is essentially about a user sharing that he/she has read a book, not about the details of the book itself.
The data structure is focused on the occasion of reading a book: when reading was started, when this story was published, etc.
[1] I know this thanks to How to get "read books"
FQL does not look very promising: although you can request books from the user table, it seems to deliver just a string value with the book titles comma-separated.
You can search the page table by name, but I doubt that will work with name IN (subquery) when what the subquery delivers is just one string of the format 'title 1,title 2,…'.
I can’t really test this right now, because I have read only one book so far (ahem, one that I have set as “books I read” on FB, not in general …). But using that title to search the page table by name already delivers a multitude of pages, and even if I narrow that selection down with AND is_community_page=1, I still get several, so there is no real way of telling which would be the right one, I guess.
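For reference, the kind of FQL I am talking about would be roughly this (column names from memory, untested). The user table only gives you a single comma-separated string of titles:

SELECT books FROM user WHERE uid = me()

which you would then have to match against community pages by name, along the lines of:

SELECT page_id, name FROM page WHERE name = "A Semantic Web Primer" AND is_community_page = 1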
So, using the Graph API and a batch request seems to be more promising.
Similar to an FQL multi-query, batch requests allow you to refer to data from a previous “operation” in the batch, by giving operations a “name” and then referencing data from the first operation using the JSONPath expression format (see Specifying dependencies between operations in the request for details).
So a batch query for this could look like this:
[
{"method":"GET","name":"get-books","relative_url":"me\/books?fields=id"},
{"method":"GET","relative_url":"?ids={result=get-books:$.data.*.id}
&fields=description,name,written_by"}
]
Here it is all in one line, for easier copy & paste, so that line breaks don’t cause syntax errors:
[{"method":"GET","name":"get-books","relative_url":"me\/books?fields=id"},{"method":"GET","relative_url":"?ids={result=get-books:$.data.*.id}&fields=description,name,written_by"}]
So, to test this:
Go to Graph API Explorer.
Change method to POST via the dropdown, and clear whatever is in the field right next to it.
Click “Add a field”, enter batch as the name, and paste the one-line version from above as the value.
Since that will also get you a lot of “headers” you might not be interested in, you can add one more field with the name include_headers and the value false to get rid of those.
In the result, you will get a field named body that contains the JSON-encoded data for the second operation. If you want more fields, add them to the fields parameter of the second operation, or leave that parameter out completely if you want all of them.
OK, after some trial-and-error I managed to create a direct link to Graph API Explorer to test this – the right amount of URL-encoding to use is a little fiddly to figure out :-)
(I left out the fields parameter for the second operation here, so this will give you all the info for the book that there is.)
As I said, I only got one book on FB, but this should work for a user with multiple books the same way (since the second operation just takes however many IDs it is given from the first one).
But I can’t tell you off the top of my head how this will work for a lot of books – how slow the second operation might get with that, when you set a high limit for the first one. And I also don’t know how this will behave in regard to pagination, which you might run into when me/books delivers a lot of books for a user.
But I think this should be a good enough starting point for you to figure the rest out by trying it on users with more data. HTH.
Edit: ISBN does not seem to be part of the info for a book’s community page, at least not for the ones I checked. And also written_by is optional – my book doesn’t have it. So you’ll only get that info if it is actually provided.