What is the simplest way to monitor whether Hasura has a working connection to the database, i.e. whether the database is properly reachable?
I was thinking of creating a Hasura endpoint that just executes a dummy query against the database, but I couldn't figure out how to implement this in Hasura.
Maybe Hasura has something built in for this?
Hasura's health check endpoint gives information about server health and metadata inconsistencies (which, in this case, would cover database connection issues).
You can read more about the API here - https://hasura.io/docs/latest/graphql/core/api-reference/health.html
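For example, a minimal probe could just poll that endpoint. This is only a sketch: it assumes the Python requests library and a Hasura instance at http://localhost:8080 (the default local port), so adjust the URL for your deployment.

# Minimal sketch: poll Hasura's /healthz endpoint and report the result.
# Assumes Hasura is reachable at http://localhost:8080 -- adjust as needed.
import requests

resp = requests.get("http://localhost:8080/healthz")
if resp.status_code == 200 and resp.text.strip() == "OK":
    print("Hasura is healthy")
elif resp.status_code == 200:
    # Hasura answers 200 with a warning body when metadata is inconsistent,
    # e.g. when a connected database is unreachable
    print("Hasura is up but reports inconsistent metadata:", resp.text)
else:
    print("Hasura is unhealthy, HTTP status:", resp.status_code)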
I have researched this for several days but could not find a definitive answer.
My use case: I have a PostgreSQL database hosted on AWS EKS, and I want to expose it using GraphQL, which generally points to AWS AppSync.
I understand that AppSync can only auto-import from DynamoDB, which I am not using. Several articles suggested a Lambda function to connect AppSync and PostgreSQL, which I tried, but I need two features:
Auto Generated Schema
Hot reload of the schema whenever there are changes in the database
Currently I am using PostGraphile for these two features; however, I am not sure AppSync can be connected to it. As I understand it, we need to push the schema generated by PostGraphile to AppSync, but I need that to happen automatically.
E.g.: I create a new table in PostgreSQL -> the PostGraphile Lambda function reloads the schema -> the change is reflected in the AppSync schema automatically -> users can query the new table via AppSync.
Can this flow be achieved? Is there anything I can use as reference?
Thank you!
If anyone is still wondering: I found a resource from AWS that seems to achieve this with some tweaks and changes:
https://github.com/aws-samples/appsync-with-postgraphile-rds
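The key step in the flow above, pushing a regenerated schema to AppSync, can be done from the Lambda itself. Below is a rough sketch using boto3's start_schema_creation; the API id and the schema file path are placeholders, and it assumes PostGraphile has already written the regenerated schema to disk.

# Sketch: push a freshly generated GraphQL schema to AppSync from a Lambda.
# Assumes the schema was regenerated (e.g. by PostGraphile) and written to
# /tmp/schema.graphql; "YOUR_API_ID" is a placeholder for your AppSync API id.
import time
import boto3

appsync = boto3.client("appsync")

def handler(event, context):
    with open("/tmp/schema.graphql", "rb") as f:
        appsync.start_schema_creation(apiId="YOUR_API_ID", definition=f.read())

    # Schema creation is asynchronous; poll until AppSync finishes applying it
    while True:
        status = appsync.get_schema_creation_status(apiId="YOUR_API_ID")["status"]
        if status in ("SUCCESS", "FAILED"):
            return {"status": status}
        time.sleep(2)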
I have set up Keycloak as a SAML broker, and authentication is done by an external IdP provided by the authorities. All users logging in through this IdP are accepted, and all we need from Keycloak is an OAuth token to access our system.
I have tried both the default setup using H2 and running with an external MariaDB.
The external IdP provides us with the user's full name and a personal ID. Both are covered by GDPR, and I really do not like the idea of storing that data in a database running in the DMZ. Opening up the backend database to Keycloak is also not a good solution, especially when I do not need users to be stored.
The benefit of running without a database would be a simpler DMZ setup, as I really do not need to store anything about the users except on the backend.
Do I need a database, and if not how do I run Keycloak without it?
Do I need a database, and if not how do I run Keycloak without it?
Yes, you do need a database; however, out of the box Keycloak runs without you having to deploy any external DB. From the Keycloak official documentation, section Relational Database Setup, one can read:
Keycloak comes with its own embedded Java-based relational database called H2. This is the default database that Keycloak will use to persist data and really only exists so that you can run the authentication server out of the box.
So even out of the box, you are not running Keycloak without a DB; it simply uses the embedded H2 one.
That being said, from the same documentation one can read:
We highly recommend that you replace it with a more production ready external database. The H2 database is not very viable in high concurrency situations and should not be used in a cluster either.
So regarding this:
The benefit of running without a database would be a simpler DMZ setup, as I really do not need to store anything about the users except on the backend.
You would still be better off deploying another DB, because Keycloak stores more than just user information in the DB (e.g., realm information, groups, roles, and so on).
The external IdP provides us with the user's full name and a personal ID. Both are covered by GDPR, and I really do not like the idea of storing that data in a database running in the DMZ. Opening up the backend database to Keycloak is also not a good solution, especially when I do not need users to be stored.
You can configure that IdP and Keycloak in such a way that users are not imported into Keycloak when they authenticate.
I want to store my terraform.tfstate file in a MongoDB database. I can see that there is no default option for MongoDB as a backend in Terraform. So, can we create a custom backend in Terraform? (In my case, I want to create a MongoDB backend to store and fetch the terraform.tfstate file.) If that is not possible, is there any workaround to achieve this?
Yes, as you said, there is no built-in MongoDB backend. But there are several existing backends that can help.
If you are still set on using only MongoDB for your state, though, you can achieve it.
Along with S3, Postgres, azurerm, and gcs, Terraform also supports an HTTP backend (http) that stores state through a REST endpoint.
All you have to do is build a small REST service using Node or Flask (or your favourite framework), expose an endpoint, and point your backend at it as shown below.
terraform {
  backend "http" {
    address = "http://tfstate.mycompany.io/store"
  }
}
Your REST service then communicates with MongoDB to store and retrieve the state. Note that the endpoint needs to handle GET, POST, and DELETE requests in order to let Terraform do its job.
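As a rough sketch of such a service, assuming Flask and pymongo (the /store route, the database/collection names, and the single "default" state document are arbitrary choices for this example):

# Sketch: a tiny Terraform HTTP state backend that persists state in MongoDB.
# The route, database/collection names, and document id are all assumptions.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
states = MongoClient("mongodb://localhost:27017")["terraform"]["tfstate"]

@app.route("/store", methods=["GET"])
def get_state():
    doc = states.find_one({"_id": "default"})
    if doc is None:
        return "", 404  # Terraform treats 404 as "no state stored yet"
    return jsonify(doc["state"])

@app.route("/store", methods=["POST"])
def store_state():
    # Terraform POSTs the full state file as the request body
    state = request.get_json(force=True)
    states.replace_one({"_id": "default"}, {"_id": "default", "state": state}, upsert=True)
    return "", 200

@app.route("/store", methods=["DELETE"])
def delete_state():
    states.delete_one({"_id": "default"})
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)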
Hope this helps.
Is there a way, through the AWS API, to get the connection string for an RDS database? Something of the form:
postgres://username:password@host/db_name
No, there's no API call that will build that string for you.
However, using the DescribeDBInstances API call you can retrieve the MasterUsername, the DBName ("the name of the initial database of this instance that was provided at create time, if one was specified when the DB instance was created"), and the host (through the Endpoint.Address field), and build the string yourself.
For the password, you'll have to supply it yourself in a secure manner, since it cannot be retrieved from RDS through API calls.
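As a sketch with boto3 (the instance identifier is a placeholder, and the password must come from your own secret store):

# Sketch: assemble a PostgreSQL connection string from DescribeDBInstances.
# "my-db-instance" is a placeholder; supply the password yourself.
import boto3

rds = boto3.client("rds")
instance = rds.describe_db_instances(DBInstanceIdentifier="my-db-instance")["DBInstances"][0]

username = instance["MasterUsername"]
host = instance["Endpoint"]["Address"]
port = instance["Endpoint"]["Port"]
db_name = instance.get("DBName", "postgres")  # DBName is absent if none was set at create time
password = "..."  # never returned by the API; fetch it from your secret store

print(f"postgres://{username}:{password}@{host}:{port}/{db_name}")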
I am interested in barring pgAdmin access to my PostgreSQL server from any station other than the server itself. Is it possible to do this using pg_hba.conf? The PostgreSQL server should still allow my application to access it from other stations.
No, this isn't possible: pg_hba.conf filters by host, user, and database, not by client application. Nor is it sensible, since the client (mode of access) isn't the issue, but what you do on the connection.
If a user managed to trick your app into running arbitrary SQL via SQL injection or whatever, you'd be back in the same position.
Instead, set your application up to use a restricted user role that:
is not a superuser
does not own the tables it uses
has only the minimum permissions GRANTed to it that it needs
and preferably also add guards such as triggers to preserve data consistency within the DB. This will help mitigate the damage that can be done if someone extracts database credentials from the app and uses them directly via a SQL client. A sketch of such a role setup follows.
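This sketch runs once as a superuser and assumes psycopg2; the role, database, and table names are hypothetical.

# Sketch: create a restricted application role with minimal grants.
# Role, database, and table names here are made up; adapt to your schema.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
conn.autocommit = True
cur = conn.cursor()

# A login role that is not a superuser and owns no objects
cur.execute("CREATE ROLE app_user LOGIN PASSWORD 'change-me' "
            "NOSUPERUSER NOCREATEDB NOCREATEROLE")

# Grant only the minimum the application actually needs
cur.execute("GRANT CONNECT ON DATABASE mydb TO app_user")
cur.execute("GRANT USAGE ON SCHEMA public TO app_user")
cur.execute("GRANT SELECT, INSERT, UPDATE ON customer TO app_user")  # no DELETE

cur.close()
conn.close()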
You can also make it harder for someone with your app's binary, etc., to extract the credentials and use them to connect to PostgreSQL directly by:
using md5 authentication
if you use a single db role shared between all users, either (a) don't do that or (b) store a well-obfuscated copy of the db password somewhere non-obvious in the configuration, preferably encrypted against the user's local credentials
using sslmode=verify-full and a server certificate
embedding a client certificate in your app and requiring that it be presented in order for the connection to be permitted by the server (see client certificates)
Really, though, if you can't trust your users not to be actively malicious and run DELETE FROM customer; etc., you'll need middleware to guard the SQL connection and apply further limits: rate-limit access, disallow bulk updates, and so on.