I am currently implementing a public API for an Open Data Platform, with PostGraphile creating the needed API for me. The API should be completely public, with no authentication whatsoever, and because of that it should only expose read-only queries. Has anyone found a way to use PostGraphile-CLI to create read-only functionality only?
So far I have successfully set up a PostGraphile-CLI API for my Postgres databases, using a user that only has SELECT granted on the schemas in Postgres. However, this doesn't seem to work for my use case, since I can still use mutations in GraphQL to insert or delete data from my schemas.
Since I don't know too much about Postgres database administration, I therefore wonder if it is possible to simply not provide mutations with PostGraphile-CLI.
Kind regards
Grigorios
EDIT0: I have found the mistake with my Postgres database rights. That may solve the read-only problem, but if anybody knows an answer to the initial question, I would still be curious to know.
You have a number of options:
Use permissions, as you suggest, along with the --no-ignore-rbac option - you will have to ensure your database permissions are and remain correct (no default grants to the public role, for example) for this to work; see the SQL sketch after the command below
Use PostGraphile's --disable-default-mutations (-M) option; this will stop the CRUD mutations from being generated, but it won't prevent custom mutation functions from being exposed, if you have any
Skip the MutationPlugin via --skip-plugins graphile-build:MutationPlugin - this will prevent the Mutation type from being added to the schema in the first place, so no mutations can be added.
For a real belt-and-braces approach, why not all three?
postgraphile \
--no-ignore-rbac \
--disable-default-mutations \
--skip-plugins graphile-build:MutationPlugin
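For the first option, the database-side lockdown might look something like the sketch below - this assumes your tables live in the public schema and that you connect as a dedicated api_readonly role (both names are placeholders for your own setup):
-- Strip the default grants so the API role starts from nothing
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM PUBLIC;
-- Allow the API role to read, and nothing else
GRANT USAGE ON SCHEMA public TO api_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO api_readonly;
-- Make sure tables created later are readable too
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO api_readonly;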
The basics: I need to return data from columns based on some variables from a different table (I either return the column or null if access is not allowed).
I have already done what I need via a custom function in Postgres, but the problem is that in Hasura, a function shares its permissions with the table/view it returns a SETOF of.
So I have to allow access to the table itself, and as a result the permissions in my function are fairly meaningless, because anyone can access the data simply by querying the original table directly.
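For context, the pattern being described is roughly the following - a minimal sketch with made-up table, column, and function names (foods, owner_id, visible_foods):
-- A filtering function; Hasura exposes it with the same
-- permissions as the foods table it returns a SETOF of
CREATE FUNCTION visible_foods(viewer_id int)
RETURNS SETOF foods AS $$
  SELECT * FROM foods WHERE owner_id = viewer_id
$$ LANGUAGE sql STABLE;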
My current line of thinking is that the only way to do what I need is to create a remote schema and remove access to the original table.
But maybe there is a way to not expose some of the tables as a GraphQL query? If I could do something like that, I'd just hide my table and expose only a function.
The remote schema seems like it would work.
Another option would be the allow-queries option.
It's possible to limit queries. It seems a bit tricky: you need an exact copy of every query that should be allowed (with the fields in exactly the right order), but if you do that, only your explicitly whitelisted queries will be accepted. More info in the docs.
I'm not familiar enough with postgres permissions to offer any better ideas...
I am using Prisma and GraphQL Yoga servers with a Postgres DB.
I want to implement authorization for my GraphQL queries. I saw solutions like graphql-shield that solve column-level security nicely - meaning I can define a permission and, according to it, block or allow a specific table or column of data (or, in GraphQL terms, block a whole entity or a specific field).
The part I am stuck on is row-level security - filtering rows by the data they contain. Say I want to allow a logged-in user to view only the data that is related to him; depending on the value in a user_id column, I would allow or block access to that row (the logged-in user is one example, but there are other use cases in this genre).
This type of security requires running a query to check which rows the current user has access to, and I can't find a way (that is not horrible) to implement this with Prisma.
If I were working without Prisma, I would implement this at the level of each resolver, but since I am forwarding my queries to Prisma, I do not control the internal resolvers of a nested query.
But I do want to work with Prisma, so one idea we had was handling this at the DB level using a Postgres policy. This could work as follows:
Every query we run will be surrounded with "begin transaction" and "commit transaction"
Before the query, I want to run "set local context.user_id to 5"
Then I want to run the query (and the policy will filter results according to current_setting('context.user_id'))
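In raw SQL, the policy side of this could look roughly like the sketch below - the orders table, its user_id column, and the context.user_id setting name are placeholder assumptions:
-- Filter rows by the per-transaction setting
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY orders_owner ON orders
  USING (user_id = current_setting('context.user_id')::int);
-- (RLS is bypassed by the table owner unless you also FORCE ROW LEVEL SECURITY)

-- Then, per request:
BEGIN;
SET LOCAL context.user_id = '5';
SELECT * FROM orders; -- the policy only returns user 5's rows
COMMIT;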
For this to work I would need prisma to allow me to either add pre/post queries to each query that runs or let me set a context for the db.
But these options are not available in prisma.
Any ideas?
You can use prisma-client instead of prisma-binding.
With prisma-binding, you define the top-level resolver and then delegate to Prisma for all the nesting.
On the other hand, prisma-client only returns the scalar values of a type, and you need to define the resolvers for the relations yourself. This means you have complete control over what you return, even for nested queries. (See the documentation for an example.)
I would suggest you use prisma-client to apply your security filters on the fields.
With the approach you're looking to take, I'd definitely recommend a look at Graphile. It approaches row-level security essentially the same way that you're thinking of. Unfortunately, it seems like Prisma doesn't help you move away from writing traditional REST-style controller methods in this regard.
I currently assign a MongoDB database to my Meteor app by setting the environment variable
"MONGO_URL": "mongodb://localhost:27017/dbName" when I start the Meteor instance.
So all data gets written to the mongo database with the name "dbName".
I am looking for a way to individually set the dbName for each customer upon login, in order to separate their data into different databases.
This is generally unsupported, as the database is defined at startup. However, this thread offers a possible solution:
https://forums.meteor.com/t/switch-database-while-meteor-is-running/4361/6
// Open a second connection to the customer's database
var database = new MongoInternals.RemoteCollectionDriver("<mongo url>");
// Back the collection with that driver instead of the default MONGO_URL database
MyCollection = new Mongo.Collection("collection_name", { _driver: database });
This would allow you to define the database name in the Mongo URL, but it would require a fair bit of extra work to redefine your collections on a customer-by-customer basis.
Here's another approach that will make your life eternally easier:
Create a generic site with no accounts at mysite.com
When they log in at mysite.com, figure out which site they actually belong to, redirect them to customerName.mysite.com, and log them in there
Run a separate instance of Meteor configured for a different mongo at each site
nginx might help you with the above.
"It is generally good practice to run separate DBs when offering a B2B solution."
That's a matter of opinion that depends heavily on the platform. Many SaaS providers would argue that point.
How do I add isComponent to a Datomic attribute using the Datomisca library?
In Datomic, I would do the following:
{:db/id :person/favorite-food
:db/isComponent true
:db.alter/_attribute :db.part/db}
Unfortunately, I haven’t had time to add full support for schema alteration in Datomisca.
However, schema alteration is no different from any other transaction, so there should be no issue with building the transaction data that you describe above.
Entity.add(Namespace("person") / "favorite-food") (
Attribute.isComponent -> true,
Namespace("db.alter") / "_attribute" -> Partition.DB
)
What Datomisca is lacking is
http://docs.datomic.com/javadoc/datomic/Connection.html#syncSchema(long)
But a Datomisca Connection is just a Datomic Connection, so you can still access that underlying API. I will endeavor to add the new sync APIs in the near future.
For future reference, the google group is a good place to ask questions like these, as I’m more likely to notice them (a colleague noticed your question).
https://groups.google.com/forum/?fromgroups#!forum/datomisca
I really hope someone has some insight into this. Just to clarify what I'm talking about up front: when referring to "schema" I mean the database object used for ownership separation, not the database schema in the sense of the overall table structure.
We use SQL Server schema objects to group tables into wholes where each group belongs to an application. Each application also has its own database login. I've just started introducing database roles in order to fully automate deployment to the test and staging environments. We're using the xSQL Object compare engine. A batch file is run each night to perform a comparison and generate a script change file, which can then be applied to the target database along with code changes.
The issue I'm encountering is as follows. Consider the following database structure:
Database:
  Security/Schemas:
    Core
      CoreRole (owner)
      SchemaARole (select, delete, update)
      SchemaBRole (select)
    SchemaA
      SchemaARole (owner)
    SchemaB
      SchemaBRole (owner)
  Security/Roles/Database Roles:
    CoreRole
      core_login
    SchemaARole
      login_a
    SchemaBRole
      login_b
The set-up works perfectly well for the three applications that use these. The only problem is how to generate a script that creates the schema -> role permissions. The owner role gets applied correctly: for example, schema Core gets owner role CoreRole, as expected. However, the SchemaARole and SchemaBRole permissions do not get applied.
I wasn't able to find an option to turn this on within xSQL Object, nor does an option to script this from SQL Server Management Studio seem to exist. Well, I can't find one, at least.
Am I trying to do the impossible? How does SQL Server manage this relationship, then?
I just fired up SQL Profiler and captured what I think your scenario is. Try this:
GRANT SELECT ON [Core].[TestTable] TO [CoreRole]
GRANT DELETE ON [Core].[TestTable] TO [CoreRole]
GRANT UPDATE ON [Core].[TestTable] TO [CoreRole]
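If you'd rather grant at the schema level than per table (which seems closer to the Core/SchemaA/SchemaB layout above), SQL Server also supports granting directly on a schema - a sketch using the role names from your structure:
GRANT SELECT, DELETE, UPDATE ON SCHEMA::[Core] TO [SchemaARole]
GRANT SELECT ON SCHEMA::[Core] TO [SchemaBRole]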