How to configure Conjur DATABASE_URL with Postgres sslmode=verify-full - postgresql

I would like to configure Conjur with sslmode=verify-full to connect to my Postgres database.
I use the Docker image cyberark/conjur:1.8.1@sha256:01d601d763edf1d98ca81dda36d4744e78244a4836cfa804570a47da5fd50405
Setting it as a plain connection string (e.g. DATABASE_URL=postgres://conjur:$CONJURDBPASSWORD@postgres-conjur:5432/conjurdb?sslmode=verify-full) does not seem to work.
The database library used by Conjur is Sequel, which supports this option: https://sequel.jeremyevans.net/rdoc/files/doc/opening_databases_rdoc.html#label-postgres
How can I achieve that without altering the Conjur code? Ideally via an environment variable or a mounted config file.
A project like Gemstash uses the same library and offers an easy way to do this, with a config.yml file containing (for instance):
:db_adapter: postgres
:db_url: postgres://{{ .Env.DB_HOST }}/gemstashdb?user=gemstash&password={{ .Env.DB_PASSWD }}
:db_connection_options:
  :connect_timeout: 10
  :read_timeout: 5
  :timeout: 30
  :sslmode: 'verify-full'
  :sslrootcert: '{{ .Env.HOME }}/.ssl/root.crt'
I didn't find anything similar in Conjur.
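For reference, here is roughly the shape of compose configuration I would expect to work if Sequel forwards the query parameters (sslmode, sslrootcert) to the underlying pg/libpq driver; the service names, certificate path and volume mount below are only illustrative and untested:

services:
  conjur:
    image: cyberark/conjur:1.8.1
    environment:
      # sslmode and sslrootcert passed as URL query parameters, per the Sequel postgres docs linked above
      DATABASE_URL: "postgres://conjur:$CONJURDBPASSWORD@postgres-conjur:5432/conjurdb?sslmode=verify-full&sslrootcert=/etc/ssl/postgres/root.crt"
    volumes:
      # mount the CA certificate that signed the Postgres server certificate
      - ./certs/root.crt:/etc/ssl/postgres/root.crt:ro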

Related

How to connect Eclipse ditto to mongodb cloud

I am fairly new to Eclipse Ditto and have just started using it for my project.
I am trying to connect a cloud-hosted MongoDB instance to Ditto.
Following the documentation, I know that I need to add some variables and pass them to docker-compose. The problem is that I do not know what the values of these variables should be, as there are no examples.
Are all these variables necessary or will just the URI work?
This is my current .env file config
MONGO_DB_URI=mongodb+srv://username:pass@IP
MONGO_DB_READ_PREFERENCE=primary
MONGO_DB_WRITE_CONCERN=majority
The command I am using to start Ditto is
docker-compose --env-file .env up
I have removed the mongodb service from docker-compose.yml
Nice to hear that you started using Ditto in your project.
You need to set the following env variables to connect to your cloud-hosted MongoDB.
MONGO_DB_URI: Connection string to MongoDB
For more details see: https://docs.mongodb.com/manual/reference/connection-string/
If you have a ReplicaSet, your MongoDB URI should look like this: mongodb://[username:password@]mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl
I assume you also need to enable SSL to connect to your MongoDB.
To do so, set this env var:
MONGO_DB_SSL_ENABLED: true
If you want to use a specific Ditto version, you can set the following env var:
DITTO_VERSION=2.1.0-M3 (for example)
If you use .env as the file name, you can start Ditto with:
docker-compose up
The other options for pool size, read preference and write concern aren't necessary as there are default values in place.
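Putting it together, a .env file for a cloud-hosted cluster could look roughly like this (hostname, credentials and version are placeholders; whether you need mongodb+srv or an explicit replicaSet parameter depends on your provider):

MONGO_DB_URI=mongodb+srv://username:password@cluster0.example.mongodb.net/ditto
MONGO_DB_SSL_ENABLED=true
DITTO_VERSION=2.1.0-M3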

Move my Hasura Cloud schema, relations, tables etc. into my offline Docker setup using docker-compose

So basically I have my Hasura Cloud instance with its existing schema, relations, tables etc., and I want to run it offline using Docker. I tried using the metadata export and import, but that does not seem to work. How can I do it, or is there another way?
This is the Docker setup I want to run offline.
This is the cloud instance I want to get the schemas and metadata from.
Or maybe I should just manually recreate the tables and relations?
When using the steps outlined in the Hasura Quickstart with Docker page, the following steps will get all the table definitions, relationships etc. set up on the local instance just like they are set up on the Hasura Cloud instance.
Migrate all the database schema and metadata using the steps mentioned in Setting up migrations
Since you want to migrate from Hasura Cloud, use the URL of the cloud instance in step 2. Perform steps 3-6 as described in the above link.
Bring up the local docker environment. Ideally edit the docker-compose.yaml file to set HASURA_GRAPHQL_ENABLE_CONSOLE: "false" before running docker-compose up -d.
Resume the process of applying migrations from step 7. Use the endpoint of the local instance. For example,
$ hasura metadata apply --endpoint http://localhost:8080
$ hasura migrate apply --endpoint http://localhost:8080
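For illustration, the export side of those steps against the cloud instance might look roughly like this (flags differ a bit between CLI versions, e.g. CLI v2 also expects --database-name on migrate create; the endpoint and admin secret below are placeholders):

$ hasura migrate create init --from-server --endpoint https://your-project.hasura.app --admin-secret <cloud-admin-secret>
$ hasura metadata export --endpoint https://your-project.hasura.app --admin-secret <cloud-admin-secret>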

How to change database to Postgres in JupyterHub?

I am trying to run JupyterHub with my config and I would like to change the database from SQLite, which is created by default, to a PostgreSQL database that already exists and has some tables (JupyterHub and another app would run concurrently and share the database). On the website, the only thing I see is:
We recommend using PostgreSQL for production if you are unsure ...
But there is no word on how to change this database. Have you done this before and can you describe it? Do I have to create some tables on my own, or do I just pass a link and JupyterHub will do the rest?
You need to create an empty database in postgres and then set the db_url property in the JupyterHub config. For example, for a postgres database on the local machine:
Connect to your postgres instance with a user that has the 'Create DB' attribute and run:
CREATE DATABASE jupyterhub1;
In your jupyterhub_config.py file set this property:
c.JupyterHub.db_url = 'postgresql://username:password@localhost:5432/jupyterhub1'
When you start JupyterHub it will create the required tables automatically. Also note that you don't have to hard-code the credentials into the db_url property; you could read them from environment variables using os.environ["VAR_NAME"]
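For example, a minimal sketch of that in jupyterhub_config.py (the DB_USER/DB_PASS variable names are just placeholders):

import os

# read the credentials from the environment instead of hard-coding them
db_user = os.environ["DB_USER"]
db_pass = os.environ["DB_PASS"]
c.JupyterHub.db_url = f"postgresql://{db_user}:{db_pass}@localhost:5432/jupyterhub1"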
Thanks
Well, I will share what has worked for me. I hope these instructions help other people who face the same difficulty.
Make a full backup of your database, just in case things go bad. Source
If you read the source above, you'll see that you need to add some fields to your config.yaml. You'll have to add this part:
hub:
  db:
    upgrade: true
Keep following the source mentioned above, because there are a few more details to take care of.
That is not all yet! You need to create your database (using MySQL or Postgres). Once you have created your database, you should have a username, password, database name, host and port.
Now, have a look at this info. This other page highlights one interesting point: the JupyterHub docs say that using Postgres is easier than MySQL. You'll have to add hub.db.type and hub.db.url. Have a look at the docs to understand the connection string.
postgresql+psycopg2://<db-username>:<db-password>@<db-hostname>:<db-port>/<db-name>
Note that in this example I only put the username and left the password empty within the string. Since it is possible to set hub.db.password, I used that option instead. I also declared the size of the database volume as 20Gi in my example.
hub:
  db:
    upgrade: true
    type: postgres
    url: postgresql+psycopg2://postgres:@db_jupyterhub_xxxxx.amazonaws.com:5432/db_jupyterhub
    password: ~
    pvc:
      accessModes:
        - ReadWriteMany
      storage: 20Gi
At this point you'll have noticed that I did not put the password. In this case, when I deploy JupyterHub (or do a helm upgrade ...), I need to pass it with the --set parameter. So for this example I upgraded the already existing helm release this way:
helm upgrade -f config.yaml jupyterhub . \
--set hub.db.password=TYPE-PASSWORD-OF-DATABASE
With these steps the deployment should work. I did it this way and it worked fine. I have another point to highlight: cookieSecret. The docs mention the need to recreate it in case of pod restarts. Please have a look at this topic about the default behavior of jupyterhub_cookie_secret and at this link explaining cookie generation and uses.
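As a side note on that last point, the usual way to generate such a secret is with openssl; where you put it depends on your chart version (e.g. hub.cookieSecret in older zero-to-jupyterhub charts, hub.config.JupyterHub.cookie_secret in newer ones), so treat this as a sketch:

openssl rand -hex 32

Setting the resulting value in config.yaml keeps it stable across pod restarts.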
I hope these steps help some of you.

Dockerfile for backend and a separate one for DBMS because compose won't let me copy an SQL file into the DBMS container?

I have a Dockerfile for the frontend, one for the backend, and one for the database.
In the backend portion of the project, I have a dockerfile and a docker-compose.yml file.
The Dockerfile is great for the backend because it configures the backend, copies and sets up the information, etc. I like it a lot.
The issue I have come to, though, is that I can easily create a Dockerfile for the DBMS, but it requires me to put it in a different directory, whereas I was hoping to just define it in the same directory as the backend. Because the backend and the DBMS are so tightly coupled, I figured this is where docker-compose would come in.
The issue I ran into is that in a compose file I can't do a COPY into the DBMS container; I would have to create another Dockerfile to set that up. I was thinking that would work.
Looking on GitHub, there was a big enhancement thread about it, but the closest people would get is just creating a volume relationship, which fails to do what I want.
Ideally, all I want to be able to do is stand up a Postgres DBMS in such a fashion that I could do load balancing on it later down the line with 1 write, 5 reads or something, and have its initial database defined in my one SQL file.
Am I missing something? I thought I was going about it correctly, but maybe I need to create a whole new directory with a Dockerfile for the DBMS.
Thoughts on how I should accomplish this?
Right now I was doing something like:
version: '2.0'
services:
  backend:
    build: .
    ports:
      - "8080:8080"
  database:
    image: "postgres:10"
    environment:
      POSTGRES_USER: "test"
      POSTGRES_PASSWORD: "password"
      POSTGRES_DB: "foo"
    # I shouldn't have volumes as it would copy the entire folder and its contents to the db.
    volumes:
      - ./:/var/lib/postgresql/data
To copy things with Docker there is an infinite set of possibilities.
At image build time:
use COPY or ADD instructions
use shell commands including cp, ssh, wget and many others
From the docker command line:
use docker cp to copy from/to hosts and containers
use docker exec to run arbitrary shell commands including cp, ssh and many others...
In docker-compose / kubernetes (or through command line):
use a volume to share data between containers
volumes can be local or remote file systems (a network disk, for example)
potentially combine that with shell commands, for example to perform backups
Still, how you should do it depends heavily on the use case.
If the data you copy is linked to the code and versioned (in the git repo...), then treat it as if it were code and build the image with it via the Dockerfile. This is, for me, a best practice.
If the data is configuration dependent on the environment (like test vs prod, farm 1 vs farm 2), then go for docker config/secret + ENV variables.
If the data is dynamic and generated at production time (like a DB that is filled with user data as the app is used), use persistent volumes and be sure you understand the impact of container failure on your data.
For a database in a test system it can make sense to relaunch the DB from a backup dump, a read-only persistent volume, or, much simpler, to back up the whole container at a known state (with docker commit).
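Applied to the original question, the "treat it as code" option could look roughly like this: a small Dockerfile kept next to the backend that bakes the SQL file into the database image via the postgres image's documented /docker-entrypoint-initdb.d mechanism (the file names db.Dockerfile and init.sql are just placeholders):

# db.Dockerfile -- hypothetical name, kept in the same directory as the backend
FROM postgres:10
# any *.sql file in this directory is executed on the first startup of an empty data directory
COPY init.sql /docker-entrypoint-initdb.d/

In docker-compose.yml you would then point the database service at it with build: { context: ., dockerfile: db.Dockerfile } instead of image: "postgres:10".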

Using Docker and MongoDB

I have been using Docker and Kubernetes for a while now, and have set up a few databases (Postgres and MySQL) and services.
Now I was looking at adding MongoDB, but it seems different when it comes to user management.
Take for example postgres:
https://hub.docker.com/_/postgres/
Immediately I can declare users with a password on setup and then connect using them. It seems the mongo image does not support this. Is there a way to simply declare users on startup and use them, similar to the postgres setup? That is, without having to exec into the container, modify auth settings and restart the mongo service.
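For comparison, the official mongo image on Docker Hub does document root-user variables analogous to the postgres ones, so a compose sketch along these lines may be what is being asked for (credentials are placeholders; additional non-root users would still need an init script):

services:
  mongo:
    image: "mongo:4"
    environment:
      # documented env vars of the official mongo image; they create the root user on first startup
      MONGO_INITDB_ROOT_USERNAME: "test"
      MONGO_INITDB_ROOT_PASSWORD: "password"
      MONGO_INITDB_DATABASE: "foo"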