HashiCorp Vault Mongo error - mongodb

I'm trying to run the default configuration for HashiCorp Vault and MongoDB, but I can't complete the tutorial from here: https://www.vaultproject.io/docs/secrets/databases/mongodb.html.
It fails here:
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:Password!#mongodb.acme.com:27017/admin?ssl=true"
-bash: !mongodb.acme.com: event not found
I have Mongo installed and have already run vault mount database correctly.

There are several things to change from that command.
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:passwd@127.0.0.1:27017/admin"
admin:Password! has to be changed to your actual admin:password credentials (keep in mind that MongoDB doesn't have any admin:password set up on a fresh installation).
mongodb.acme.com had to be changed to the IP of the machine where MongoDB is running. Note that the ! in the password also triggers bash history expansion inside double quotes, which is what produces the "event not found" error; single-quote the URL or escape the ! to avoid that.
Finally, SSL had to be disabled by setting ssl=false or removing the parameter entirely.
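For example, a minimal sketch of the same write with the connection URL in single quotes so bash leaves the ! alone (admin:Password! and 127.0.0.1 are placeholder credentials and host, not values taken from your setup):
# Single quotes prevent bash history expansion of the "!" in the password.
# Replace admin:Password! and 127.0.0.1 with your own credentials and host.
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url='mongodb://admin:Password!@127.0.0.1:27017/admin'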


Validate Cluster - api/v1/nodes: http: server gave HTTP response to HTTPS client

On my Ubuntu 18.04 AWS server I'm trying to create a cluster via kops.
kops create cluster \
--name=asdf.com \
--state=s3://asdf \
--zones=eu-west-1a \
--node-count=1 \
--node-size=t2.micro \
--master-size=t2.micro \
--master-count=1 \
--dns-zone=asdf.com \
--ssh-public-key=~/.ssh/id_rsa.pub
kops update cluster --name asdf.com
This successfully updated my cluster.
But when I try to validate it and run
kubectl get nodes
I get the error: server gave HTTP response to HTTPS client.
kops validate cluster --name asdf.com
Validation failed: unexpected error during validation: error listing nodes: Get https://api.asdf.com/api/v1/nodes: http: server gave HTTP response to HTTPS client
I couldn't solve this.
I tried
kubectl config set-cluster asdf.com --insecure-skip-tls-verify=true
but it didn't work.
Please can you help?
t2.micro instances may be too small for control plane nodes. They will certainly be very slow to boot properly. You can try omitting that flag (i.e. use the default size) and see if the cluster boots up properly.
Tip: use kops validate cluster --wait=30m as it may provide more clues to what is wrong.
Except for the instance size, the command above looks good. But if you want to dig deeper, you can have a look at https://kops.sigs.k8s.io/operations/troubleshoot/
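As a sketch, the same cluster with a larger control-plane instance (t3.medium is just one plausible choice, not a requirement) and a longer validation wait might look like this:
# Larger control-plane instance; worker size left as in the question.
kops create cluster \
--name=asdf.com \
--state=s3://asdf \
--zones=eu-west-1a \
--node-count=1 \
--node-size=t2.micro \
--master-size=t3.medium \
--master-count=1 \
--dns-zone=asdf.com \
--ssh-public-key=~/.ssh/id_rsa.pub
kops update cluster --name asdf.com --yes
# Give the control plane time to come up before judging the cluster broken.
kops validate cluster --name asdf.com --wait 30m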

How to create an SSH tunnel in gcloud, but keep getting API error

I am trying to set up Datalab from my Chromebook using the following tutorial: https://cloud.google.com/dataproc/docs/tutorials/dataproc-datalab. However, when trying to set up an SSH tunnel using the following guidelines https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces#create_an_ssh_tunnel I keep receiving the following error.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- Project 57800607318 is not found and cannot be used for API calls. If it is recently created, enable Compute Engine API by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=57800607318 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The error message would lead me to believe my "Compute Engine API" is not enabled. However, I have double checked and "Compute Engine API" is enabled.
Here is what I am entering into the cloud shell
gcloud compute ssh ${test-cluster-m} \
--project=${datalab-test-229519} --zone=${us-west1-b} -- \
-4 -N -L ${8080}:${test-cluster-m}:${8080}
The ${} syntax is for accessing local environment variables. You set them in the previous step with:
export PROJECT=project;export HOSTNAME=hostname;export ZONE=zone;PORT=number
In this case that would be:
export PROJECT=datalab-test-229519;export HOSTNAME=test-cluster-m;export ZONE=us-west1-b;PORT=8080
Either try this:
gcloud compute ssh test-cluster-m \
--project datalab-test-229519 --zone us-west1-b -- \
-D 8080 -N
Or access the environment variables with:
gcloud compute ssh ${HOSTNAME} \
--project=${PROJECT} --zone=${ZONE} -- \
-D ${PORT} -N
Also check that the VM you are trying to access is running.
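As a sketch (using the project and instance names from the question as assumed values), you can check both the API and the VM from Cloud Shell:
# Confirm the Compute Engine API is enabled for this project.
gcloud services list --enabled --project datalab-test-229519 | grep compute.googleapis.com
# Confirm the cluster's master VM exists and is RUNNING.
gcloud compute instances list --project datalab-test-229519 --filter="name=test-cluster-m"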

How to run cluster initialization script on GCP after creation of cluster

I have created a Google Dataproc cluster, but I need to install Presto as I now have a requirement for it. Presto is provided as an initialization action on Dataproc here; how can I run this initialization action after creation of the cluster?
Most init actions would probably run even after the cluster is created (though I haven't tried the Presto init action).
I like to run clusters describe to get the instance names, then run something like gcloud compute ssh <NODE> -- -T sudo bash -s < presto.sh for each node (a rough sketch follows the notes below). Reference: How to use SSH to run a shell script on a remote machine?
Notes:
Everything after the -- is passed as arguments to the normal ssh command
The -T means don't try to create an interactive session (otherwise you'll get a warning like "Pseudo-terminal will not be allocated because stdin is not a terminal.")
I use "sudo bash" because init action scripts assume they're being run as root.
presto.sh must be a copy of the script on your local machine. You could alternatively ssh and gsutil cp gs://dataproc-initialization-actions/presto/presto.sh . && sudo bash presto.sh.
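A rough sketch of that per-node loop, assuming a cluster named my-cluster in zone us-central1-a and a local copy of presto.sh (the node names below are illustrative only; take the real ones from clusters describe):
# Run the init action on each node over SSH. Node names, zone, and the
# script path are placeholders for your own cluster.
for NODE in my-cluster-m my-cluster-w-0 my-cluster-w-1; do
  gcloud compute ssh "$NODE" --zone us-central1-a -- -T sudo bash -s < presto.sh
done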
But @Kanji Hara is correct in general. Spinning up a new cluster is pretty fast/painless, so we advocate using initialization actions when creating a cluster.
You could use the initialization-actions parameter.
Ex:
gcloud dataproc clusters create $CLUSTERNAME \
--project $PROJECT \
--num-workers $WORKERS \
--bucket $BUCKET \
--master-machine-type $VMMASTER \
--worker-machine-type $VMWORKER \
--initialization-actions \
gs://dataproc-initialization-actions/presto/presto.sh \
--scopes cloud-platform
Maybe this script can help you: https://github.com/kanjih-ciandt/script-dataproc-datalab

How to mount a secret in OpenShift with uid:gid set correctly

I'm using this Dockerfile to deploy PostgreSQL on OpenShift: https://github.com/sclorg/postgresql-container/tree/master/9.5
It works fine until I enable ssl=on and inject the server.crt and server.key files into the postgres pod via the volume mount option.
The secret is created like this:
$ oc secret new postgres-secrets \
server.key=postgres/server.key \
server.crt=postgres/server.crt \
root-ca.crt=ca-cert
The volume is created as below and attached to the DeploymentConfig of postgres.
$ oc volume dc/postgres \
--add --type=secret \
--secret-name=postgres-secrets \
--default-mode=0600 \
-m /var/lib/pgdata/data/secrets/secrets/
The problem is that the mounted server.crt and server.key files are owned by the root user, but postgres expects them to be owned by the postgres user. Because of that, the postgres server won't come up and reports this error:
waiting for server to start....FATAL: could not load server certificate file "/var/lib/pgdata/data/secrets/secrets/server.crt": Permission denied
stopped waiting
pg_ctl: could not start server
How can we mount a volume and update the uid:gid of the files in it?
It looks like this is not trivial, as it requires setting a volume security context so all the containers in the pod run as a certain user: https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html
In the Kubernetes project this is still under discussion (https://github.com/kubernetes/kubernetes/issues/2630), but it seems that you may have to use Security Contexts and PodSecurityPolicies to make it work.
I think the easiest option (without using the above) would be to use a container entrypoint that, before actually executing PostgreSQL, chowns the files to the proper user (postgres in this case).
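A minimal sketch of such a wrapper entrypoint, assuming the container is allowed to chown at startup and that run-postgresql is the image's normal startup command (the paths, user/group names, and that command are assumptions to adapt to your setup):
#!/bin/bash
# Hypothetical wrapper entrypoint: copy the mounted secret files to a
# writable location, hand them to the postgres user, then start PostgreSQL.
set -e
cp /var/lib/pgdata/data/secrets/secrets/server.crt /var/lib/pgdata/data/server.crt
cp /var/lib/pgdata/data/secrets/secrets/server.key /var/lib/pgdata/data/server.key
chown postgres:postgres /var/lib/pgdata/data/server.crt /var/lib/pgdata/data/server.key
chmod 0600 /var/lib/pgdata/data/server.key
exec run-postgresql "$@"
With this approach, ssl_cert_file and ssl_key_file in postgresql.conf would point at the copied location rather than at the read-only secret mount.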

IBM Object Storage Command Line Access

Using this guide, I have been trying to access my containers on IBM Object Storage. I have installed the python-swiftclient library and am running this command (AUTH_URL, USERNAME, and KEY are from the IBM Bluemix Object Storage credentials section):
swift -A <AUTH_URL> -U <USERNAME> -K <KEY> stat -v
I get the following error:
Auth GET failed: https://identity.open.softlayer.com/ 300 Multiple Choices [first 60 chars of response] {"versions": {"values": [{"status": "stable", "updated": "20
I have tried with other credentials as well and looked online, but no luck so far. What is wrong with this?
If you are referring to Cloud Object Storage (the S3-compatible version), look at https://ibm-public-cos.github.io/crs-docs/crs-python.html instead. The example in the KnowledgeLayer is for the Swift-based option. The new Cloud Object Storage uses S3-style API commands.
Use the following:
swift \
--os-auth-url=https://identity.open.softlayer.com/v3 \
--auth-version=3 \
--os-project-id=<projectId> \
--os-region-name=<region> \
--os-username=<username> \
--os-password=<password> \
--os-user-domain-id=<domainId> \
stat -v
You will find the values for projectId, region, username, password, domainId in the credentials section of your Object Storage service in the Bluemix dashboard.
Another option is to set the environment variables OS_AUTH_URL, OS_AUTH_VERSION, OS_PROJECT_ID, OS_REGION_NAME, OS_USERNAME (or OS_USER_ID), OS_PASSWORD and OS_DOMAIN_ID.
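For example, a sketch using the environment variables (all values are placeholders to copy from your own credentials; OS_USER_DOMAIN_ID corresponds to the --os-user-domain-id flag above):
# Placeholder values: take the real ones from the Object Storage
# credentials section in the Bluemix dashboard.
export OS_AUTH_URL=https://identity.open.softlayer.com/v3
export OS_AUTH_VERSION=3
export OS_PROJECT_ID=<projectId>
export OS_REGION_NAME=<region>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_USER_DOMAIN_ID=<domainId>
swift stat -v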