how can I programmatically get the hostname for the BIGSQL_HEAD? - ibm-cloud

I need to programmatically retrieve the BIGSQL_HEAD hostname of my BigInsights on Cloud enterprise cluster from a script so I can automate connecting to that host.
The BIGSQL_HEAD hostname is shown in Ambari - how can I retrieve it using 'standard' Unix tools?

# Connection details for the Ambari server on the cluster's master manager node
BI_HOST=your_biginsights_mastermanager_hostname
BI_USER=your_biginsights_username
BI_PASS=your_biginsights_password
BASE_URL=https://${BI_HOST}:9443
# Look up the cluster name from the Ambari REST API
CLUSTER_NAME=$(curl -k -s -u "${BI_USER}:${BI_PASS}" -H 'X-Requested-By: ambari' \
"${BASE_URL}/api/v1/clusters" | python -c \
'import json,sys; print(json.load(sys.stdin)["items"][0]["Clusters"]["cluster_name"])')
# Ask Ambari which host runs the BIGSQL_HEAD component
BIGSQL_HOST=$(curl -k -s -u "${BI_USER}:${BI_PASS}" -H 'X-Requested-By: ambari' \
"${BASE_URL}/api/v1/clusters/${CLUSTER_NAME}/services/BIGSQL/components/BIGSQL_HEAD" | \
python -c \
'import json,sys; print(json.load(sys.stdin)["host_components"][0]["HostRoles"]["host_name"])')
echo "${BIGSQL_HOST}"
These commands can be run on the BigInsights cluster or on your client machine. (The print() calls above work under both Python 2 and Python 3.)
Thanks to Pierre Regazzoni for the Ambari code.
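If jq is available, the same two Ambari lookups can be done without Python. A sketch of the equivalent calls, assuming jq is installed (not part of the original answer):
# Same Ambari REST endpoints as above, parsed with jq instead of Python
CLUSTER_NAME=$(curl -k -s -u "${BI_USER}:${BI_PASS}" -H 'X-Requested-By: ambari' \
"${BASE_URL}/api/v1/clusters" | jq -r '.items[0].Clusters.cluster_name')
BIGSQL_HOST=$(curl -k -s -u "${BI_USER}:${BI_PASS}" -H 'X-Requested-By: ambari' \
"${BASE_URL}/api/v1/clusters/${CLUSTER_NAME}/services/BIGSQL/components/BIGSQL_HEAD" | \
jq -r '.host_components[0].HostRoles.host_name')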

Related

How to run a Schema Registry container for a SASL PLAIN Kafka cluster

I want to run the cp-schema-registry image on AWS ECS, so I am trying to get it to run on Docker locally. I have a command like this:
docker run -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.address:9092,2.kafka.address:9092,3.kafka.address:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_PLAINTEXT \
confluentinc/cp-schema-registry:5.5.3
(I have replaced the Kafka URLs.)
My consumers/producers connect to the cluster with params:
["sasl.mechanism"] = "PLAIN"
["sasl.username"] = <username>
["sasl.password"] = <password>
The docs seem to indicate there is a file I can create with these parameters, but I don't know how to pass it into the docker run command. Can this be done?
Thanks to OneCricketeer for the help above with the SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG variable. The command ended up like this (I added the 8081:8081 port mapping so I could test with curl):
docker run -p 8081:8081 -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.broker:9092,2.kafka.broker:9092,3.kafka.broker:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM=PLAIN \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="pass";' confluentinc/cp-schema-registry:5.5.3
Then test with curl localhost:8081/subjects, which returns [] (an empty subject list) as the response.
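To confirm the registry can also write to the Kafka store, you can register a trivial schema and list the subjects again. A minimal check, assuming the container is running on localhost:8081 (the subject name test-value is arbitrary):
# Register a trivial string schema under an arbitrary subject
curl -X POST -H 'Content-Type: application/vnd.schemaregistry.v1+json' \
--data '{"schema": "{\"type\": \"string\"}"}' \
localhost:8081/subjects/test-value/versions
# Should return an id such as {"id":1}; curl localhost:8081/subjects then lists ["test-value"]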

Hasura: use SSL certificates for Postgres connection

I can run Hasura from the Docker image.
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
But I also have a Postgres instance that can only be accessed with three certificates:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=$DB_HOST \
port=$DB_PORT \
user=$DB_USER dbname=$DB_NAME"
I don't see a configuration for Hasura that allows me to connect to a Postgres instance in such a way.
Is this something I'm supposed to pass into the database connection URL?
How should I do this?
You'll need to mount your certificates into the Docker container and then configure libpq (which is what Hasura uses underneath) to use the required certificates via these environment variables. It'll be something like this (I haven't tested this):
docker run -d -p 8080:8080 \
-v /absolute-path-of-certs-folder:/certs \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e PGSSLMODE=verify-ca \
-e PGSSLCERT=/certs/client-cert.pem \
-e PGSSLKEY=/certs/client-key.pem \
-e PGSSLROOTCERT=/certs/server-ca.pem \
hasura/graphql-engine:latest
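Once the container is up, a quick sanity check is Hasura's health endpoint. A minimal check, assuming the 8080:8080 mapping above:
# Returns OK once the engine is running and can reach Postgres
curl http://localhost:8080/healthz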

Couchbase running in a container not accessible

So I've created this Dockerfile:
FROM centos
EXPOSE 7081 8092 11210
RUN yum install -y \
hostname \
initscripts \
openssl098e \
pkgconfig \
sudo \
tar \
wget \
&& wget http://packages.couchbase.com/releases/3.0.2/couchbase-server-enterprise-3.0.2-centos6.x86_64.rpm \
&& yum install -y couchbase-server-enterprise-3.0.2-centos6.x86_64.rpm \
&& rm -f ./couchbase-server-enterprise-3.0.2-centos6.x86_64.rpm
CMD /opt/couchbase/bin/couchbase-server start -- -noinput
That seems to work (it runs the Couchbase server). To build and run it I do:
docker build -t="my/couchbase" .
docker run -itd --name=couchbase -p 11210:11210 -p 8091:7081 -p 8092:8092 my/couchbase
Now for some reason I can't connect to it via HTTP. I tried to get the container's IP address with docker inspect couchbase | grep IP
and then going to http://container_ip:7081
It tries to get there for a very long time, but eventually times out.
What am I doing wrong?
You need EXPOSE 8091 8092 11210 (think of this as "the container listens on these ports") and -p 7081:8091 to get the mapping you seek. In -p, the order is hostport:containerport.
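Putting that together, the corrected lines would look something like this (a sketch; only the EXPOSE line and the port mapping change from the original):
# Dockerfile: declare the ports Couchbase actually listens on
EXPOSE 8091 8092 11210
# Run: map host port 7081 to the container's admin console port 8091
docker run -itd --name=couchbase -p 11210:11210 -p 7081:8091 -p 8092:8092 my/couchbase
# The web console is then reachable at http://docker_host:7081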

How to customize the Heroku CLI?

I need to download my database at Heroku. How do I add these flags to the CLI: -a (data only), -x (no privileges), -O (no owner)?
Currently I use:
heroku pgbackups:capture
curl -o latest.dump `heroku pgbackups:url`
It doesn't seem like you can pass flags to pgbackups:capture. You can, however, use pg_dump directly.
pg_dump DATABASE_NAME -h DATABASE_HOST -p PORT -U USER -F c -a -x -O -f db.dump
You can get the database connection values by running heroku pg:credentials DATABASE_URL. You can also use the plugin a colleague and I wrote: parse_db_url. This lets you run a command like heroku pg:parse_db_url --format pg_dump and get a usable pg_dump command as output.
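For illustration, with hypothetical connection values taken from heroku pg:credentials DATABASE_URL, the pg_dump call might look like this (PGPASSWORD avoids the interactive password prompt; all values below are placeholders):
# Placeholder credentials from `heroku pg:credentials DATABASE_URL`
PGPASSWORD=mypassword pg_dump mydbname -h ec2-host.compute-1.amazonaws.com \
-p 5432 -U myuser -F c -a -x -O -f db.dump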

How to suppress INFO messages when running psql scripts

I'm seeing INFO messages when I run my tests, and I thought I had gotten rid of them by setting client_min_messages via PGOPTIONS. Here's my command:
PGOPTIONS='--client-min-messages=warning' \
psql -h localhost \
-p 5432 \
-d my_db \
-U my_user \
--no-align \
--field-separator '|' \
--pset footer \
--quiet \
-v AUTOCOMMIT=off \
-X \
-v VERBOSITY=terse \
-v ON_ERROR_STOP=1 \
--pset pager=off \
-f tests/test.sql \
-o "$test_results"
Can someone advise me on how to turn off the INFO messages?
This works for me with Postgres 9.1.4 on Debian GNU/Linux with bash:
env PGOPTIONS='-c client_min_messages=WARNING' psql ...
(Still works for Postgres 12 on Ubuntu 18.04 LTS with bash.)
It's also what the manual suggests. In most shells, setting environment variables also works without an explicit leading env; see maxschlepzig's comment.
Note, however, that there is no message level INFO for client_min_messages;
that level only applies to log_min_messages and log_min_error_statement.
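A quick way to see the distinction: with client_min_messages set to warning, a NOTICE is suppressed while an INFO message still comes through, because INFO is always sent to the client. A minimal sketch, assuming a reachable my_db:
# NOTICE is filtered by client_min_messages; INFO bypasses it
PGOPTIONS='-c client_min_messages=warning' psql -X -d my_db -c \
"DO \$\$ BEGIN RAISE NOTICE 'suppressed'; RAISE INFO 'still shown'; END \$\$;"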