GraphQL - how to omit tables from the auto-generated GraphiQL - PostgreSQL

I'm working on a PostGraphile server. The stack is: Node.js, Express.js, PostGraphile and Knex.
My auto-generated GraphiQL exposes queries for tables it shouldn't - knex_migrations.
Following this doc: https://medium.com/make-it-heady/graphql-omit-table-from-generating-under-graphiql-postgres-smart-comments-6d3b6abec37
In pgAdmin, I added the following to the comment in the properties of the knex_migrations table:
@name knex_migrations
@omit create,update,delete
Still, when running the server and opening GraphiQL, I see queries for the migrations table.
What am I missing?

If you want to omit the table completely from your GraphQL schema using a smart comment, you simply need to use the @omit tag without any following actions. Using @omit create,update,delete only removes the auto-generated mutations - it does not remove read operations (usage in queries).
See the docs for @omit for all available options.
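For example, a smart comment like the following (a minimal sketch, assuming the table lives in the public schema) hides the table from the generated schema entirely:
-- omit knex_migrations from the GraphQL schema altogether
COMMENT ON TABLE public.knex_migrations IS E'@omit';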

Related

Read-Only Postgraphile-CLI API

I am currently implementing a public API for an Open Data Platform, with PostGraphile creating the needed API for me. The API should be completely public, with no authentication whatsoever, and because of that it should only offer read-only queries. Has anyone found a way to use the PostGraphile CLI to create read-only functionality only?
So far I have successfully set up a PostGraphile CLI API for my Postgres databases, with a user that only has GRANT SELECT for the schemas in Postgres. However, this doesn't seem to work for my use case, since I can still use the mutations in GraphQL and insert or delete data from my schemas.
Since I don't know too much about Postgres database administration, I therefore wonder if it is possible to just not provide mutations with the PostGraphile CLI.
Kind regards
Grigorios
EDIT0: I have found the mistake with my Postgres database rights. That may solve the read-only problem, but if anybody knows an answer to the initial question, I would be curious to know anyway.
You have a number of options:
Use permissions, as you suggest, along with the --no-ignore-rbac option - you will have to ensure your database permissions are and remain correct (no default grants to the public role, for example) for this to work
Use PostGraphile's --disable-default-mutations (-M) option; this will stop the CRUD mutations being generated but won't prevent custom mutation functions from being exposed, if you have any
Skip the MutationPlugin via --skip-plugins graphile-build:MutationPlugin - this will prevent the Mutation type from being added to the schema in the first place, so no mutations can be added.
For a real belt-and-braces approach, why not all three?
postgraphile \
--no-ignore-rbac \
--disable-default-mutations \
--skip-plugins graphile-build:MutationPlugin
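For the permissions route (the first option), a minimal sketch of a locked-down role might look like the following - the api_readonly role name and the public schema are assumptions, so adjust them to your setup:
-- revoke any default grants, then allow reads only
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM PUBLIC;
CREATE ROLE api_readonly;
GRANT USAGE ON SCHEMA public TO api_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO api_readonly;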

Protecting from CSRF in Amplify with SQL data source

I'm setting up a small app using AWS Amplify.
Due to the queries I needed to perform, I needed to use a SQL database.
I've therefore made an Aurora database and connected it to my Amplify GraphQL API via the "amplify api add-graphql-datasource" command.
This generates the CloudFormation templates for the resolvers to perform basic CRUD operations on the Aurora DB.
I wanted to perform some dynamic queries like:
"SELECT * FROM Question WHERE type = {ctx.input.type}"
How do I protect the GraphQL input from SQL injection attacks?
Does VTL have a function that will escape these inputs - or, alternatively, throw an error if a special character exists?
I know I could either write all of this logic in the VTL resolver or create a pipeline resolver that does all of it in a Node Lambda, but I'm just wondering if there is a simpler solution.

How to deal with complex permissions in Hasura

Basics - I need to return data from columns based on some variables from a different table (I either return the column or null if access is not allowed).
I have already done what I need via a custom function in Postgres, but the problem is that in Hasura, a function shares its permissions with the table/view it returns a SETOF of.
So I have to allow access to the table itself, and as a result the permissions on my function are kind of meaningless, because anyone will be able to access the data simply by querying the original table directly.
My current line of thinking is that the only way to do what I need is to create a remote schema and remove access to the original table.
But maybe there is a way to not expose some of the tables as a GraphQL query? If I could do something like this, I'd just hide my table and expose only a function (like the sketch below).
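For context, the kind of masking function described above might look like this sketch - the profiles table, its columns, and the has_access helper are all hypothetical:
-- assumes a table profiles(id integer, name text, email text)
-- and a hypothetical has_access(viewer, owner) helper
CREATE FUNCTION visible_profiles(viewer_id integer)
RETURNS SETOF profiles AS $$
  SELECT p.id,
         p.name,
         CASE WHEN has_access(viewer_id, p.id) THEN p.email ELSE NULL END
  FROM profiles p;
$$ LANGUAGE sql STABLE;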
The remote schema approach seems like it would work.
Another option would be the allow-queries option.
It's possible to limit queries this way, though it's a bit tricky: you need an exact copy of every query that should be allowed (with the fields in exactly the right order), but if you do that, then only your explicitly whitelisted queries will be accepted. More info in the docs.
I'm not familiar enough with Postgres permissions to offer any better ideas...

Row-level security using Prisma and Postgres

I am using Prisma and graphql-yoga servers with a Postgres DB.
I want to implement authorization for my GraphQL queries. I saw solutions like graphql-shield that solve column-level security nicely - meaning I can define a permission and, according to it, block or allow a specific table or column of data (or, in GraphQL terms, block a whole entity or a specific field).
The part I am stuck on is row-level security - filtering rows by the data they contain. Say I want to allow a logged-in user to view only the data that is related to him: depending on the value in a user_id column, I would allow or block access to that row (the logged-in user is one example, but there are other use cases in this genre).
This type of security requires running a query to check which rows the current user has access to, and I can't find a way (that is not horrible) to implement this with Prisma.
If I were working without Prisma, I would implement this at the level of each resolver, but since I am forwarding my queries to Prisma, I do not control the internal resolvers on a nested query.
But I do want to work with Prisma, so one idea we had was handling this at the DB level using Postgres policies. This could work as follows (see the SQL sketch after these steps):
Every query we run will be surrounded with "begin transaction" and "commit transaction"
Before the query, I want to run "set local context.user_id to 5"
Then I want to run the query (and the policy will filter results according to current_setting('context.user_id'))
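In SQL terms, the plan above might look like this sketch - the documents table, its user_id column, and the context.user_id setting name are placeholders:
-- enable RLS and filter rows by the per-transaction setting
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
CREATE POLICY user_rows ON documents
  USING (user_id = current_setting('context.user_id')::integer);

-- then, per request:
BEGIN;
SET LOCAL context.user_id = '5';
SELECT * FROM documents;  -- the policy filters to rows where user_id = 5
COMMIT;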
For this to work, I would need Prisma to either allow me to add pre/post queries to each query that runs, or let me set a context for the DB.
But these options are not available in Prisma.
Any ideas?
You can use prisma-client instead of prisma-binding.
With prisma-binding, you define the top-level resolver and then delegate to Prisma for all the nesting.
On the other hand, prisma-client only returns the scalar values of a type, and you need to define the resolvers for the relations yourself. This means you have complete control over what you return, even for nested queries. (See the documentation for an example.)
I would suggest you use prisma-client to apply your security filters on the fields.
With the approach you're looking to take, I'd definitely recommend a look at Graphile. It approaches row-level security essentially the same way that you're thinking of. Unfortunately, it seems like Prisma doesn't help you move away from writing traditional REST-style controller methods in this regard.

What is more recommended to use in the C driver: mongoc_collection_command with "insert", or mongoc_collection_insert?

After working for a while with the C driver, reading the tutorials and the API, I am a little confused.
According to this tutorial: http://api.mongodb.org/c/current/executing-command.html
I can execute DB and collection commands, which also include the CRUD commands.
And I can even get a document cursor if I don't use the "_simple" variant of the command API.
So why do I need to use, for example, the mongoc_collection_insert() API function?
What are the differences? What is recommended?
Thanks
This question is similar to asking what the difference is between running the insert command and calling db.collection.insert() via the mongo shell.
mongoc_collection_insert() is a specific function written to insert a document into a collection, while mongoc_collection_command() is for executing any valid database command on a collection.
I would recommend using the specific API function (mongoc_collection_insert) whenever possible, for the following reasons:
The API functions have been written as an abstraction layer with a specific purpose, so that you don't have to deal with other details related to the command.
For example, mongoc_collection_insert exposes exactly the parameters that are relevant for inserting, i.e. mongoc_write_concern_t and mongoc_insert_flags_t, with their respective default values. On the other hand, mongoc_collection_command has a broad range of parameters, such as mongoc_read_prefs_t, skip, or limit, which are not relevant for inserting a document.
Any future changes to mongoc_collection_insert will more likely be considered with the correct context for inserts.
Especially for CRUD, try to avoid the generic command interface; the MongoDB wire protocol even uses different request opcodes for commands and for direct inserts (OP_INSERT: 2002).
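For illustration, here is a minimal sketch of the dedicated insert function using the classic libmongoc 1.x API - the connection string and the database/collection names are placeholders:
#include <stdio.h>
#include <mongoc.h>

int main (void)
{
   mongoc_client_t *client;
   mongoc_collection_t *collection;
   bson_t *doc;
   bson_error_t error;

   mongoc_init ();
   client = mongoc_client_new ("mongodb://localhost:27017");
   collection = mongoc_client_get_collection (client, "test", "people");

   /* build {"name": "Ada"} and insert it with default flags/write concern */
   doc = BCON_NEW ("name", BCON_UTF8 ("Ada"));
   if (!mongoc_collection_insert (collection, MONGOC_INSERT_NONE, doc,
                                  NULL /* write concern */, &error)) {
      fprintf (stderr, "insert failed: %s\n", error.message);
   }

   bson_destroy (doc);
   mongoc_collection_destroy (collection);
   mongoc_client_destroy (client);
   mongoc_cleanup ();
   return 0;
}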