How do I connect bare metal worker to docker TSA host - concourse

I have followed the docker-compose Concourse installation setup.
Everything is up and running, but I can't figure out what to use as the --tsa-host value in the command that connects the worker to the TSA host.
It's worth mentioning that the Concourse web and db containers are running on the same machine that I hope to use as the bare metal worker.
I have tried (1) using the IP address of the Concourse web container, but no joy. I cannot even ping the Docker container IP from the host.
1.
sudo ./concourse worker --work-dir ./worker --tsa-host IP_OF_DOCKER_CONTAINER --tsa-public-key host_key.pub \
  --tsa-worker-private-key worker_key
I have also tried (2) the CONCOURSE_EXTERNAL_URL and (3) the IP address of the host, but no luck either.
2.
sudo ./concourse worker --work-dir ./worker --tsa-host http://10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub \
  --tsa-worker-private-key worker_key
3.
sudo ./concourse worker --work-dir ./worker --tsa-host 10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub \
  --tsa-worker-private-key worker_key
Other details of setup:
Mac OSX Sierra
Docker For Mac

Use the internal IP of the host, not the public IP and not the container IP:
--tsa-host <INTERNAL_IP_OF_HOST>
If you use the docker-compose.yml from the setup document, you don't need to care about the TSA host for the containerized worker; the environment variable has already been defined:
CONCOURSE_TSA_HOST: concourse-web
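For the bare metal worker on the same machine, point --tsa-host at the host's internal IP and the TSA port, which is 2222 by default; the attempts above used 8080, which is the web UI port, and --tsa-host takes host:port with no http:// scheme. A sketch, assuming the web container publishes port 2222 (add "2222:2222" to the web service's ports in docker-compose.yml if it doesn't), with 192.168.1.10 standing in for your host's internal IP and key paths following the keys/ layout from the answer below:
sudo ./concourse worker --work-dir ./worker \
  --tsa-host 192.168.1.10:2222 \
  --tsa-public-key ./keys/worker/tsa_host_key.pub \
  --tsa-worker-private-key ./keys/worker/worker_key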

I used the docker-compose.yml recently, following the steps described here: https://concourse-ci.org/docker-repository.html .
Please confirm that there is a keys directory next to the docker-compose.yml after you have executed these steps:
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
export CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
docker-compose up
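If the steps ran cleanly, the keys directory should look roughly like this (listing is illustrative):
$ ls keys/web keys/worker
keys/web:
authorized_worker_keys  session_signing_key  session_signing_key.pub  tsa_host_key  tsa_host_key.pub

keys/worker:
tsa_host_key.pub  worker_key  worker_key.pub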

Related

How to put Nextcloud in kubernetes in maintenance mode

I'm trying to migrate my Nextcloud instance to a Kubernetes cluster. I've successfully deployed a Nextcloud instance using openEBS-cStor storage. Before I can "kubectl cp" my old files to the cluster, I need to put Nextcloud in maintenance mode.
This is what I've tried so far:
Shell access to pod
Navigate to folder
Run OCC command to put Nextcloud in maintenance mode
These are the commands I used for the OCC way:
kubectl exec --stdin --tty -n nextcloud nextcloud-7ff9cf449d-rtlxh -- /bin/bash
su -c 'php occ maintenance:mode --on' www-data
# This account is currently not available.
Any tips on how to put Nextcloud in maintenance mode would be appreciated!
The su command fails because there is no shell associated with the www-data user.
What worked for me is explicitly specifying the shell in the su command:
su -s /bin/bash www-data -c "php occ maintenance:mode --on"
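For reference, the whole thing can be run as a single command without an interactive shell. The pod name is the one from the question (substitute yours), and /var/www/html is an assumption, the default Nextcloud root in the official image:
kubectl exec -n nextcloud nextcloud-7ff9cf449d-rtlxh -- \
  su -s /bin/bash www-data -c "cd /var/www/html && php occ maintenance:mode --on"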

Nginx ingress controller at kubernetes not allowing installation of some package

I am looking to execute
apt install tcpdump
but I'm facing permission denial; when I try to switch to root, it asks me for a password and I don't know where to get that password.
I installed nginx helm chart from stable/nginx repository with no RBAC
Please see the snapshot for details of the error I got while trying to install tcpdump in the pod after ssh'ing into it.
In Using GDB with Nginx you can find a troubleshooting section.
In short:
find the node where your pod is running (kubectl get pods -o wide)
ssh into the node
find the docker_ID for this image (docker ps | grep pod_name)
run docker exec -it --user=0 --privileged docker_ID bash
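Put together, the sequence looks like this; the node address, pod_name, and docker_ID are placeholders you need to fill in:
# 1. Find the node the pod is scheduled on
kubectl get pods -o wide
# 2. SSH into that node
ssh user@<node-address>
# 3. Find the container ID for the pod
docker ps | grep <pod_name>
# 4. Enter the container as root with extended privileges
docker exec -it --user=0 --privileged <docker_ID> bash
# 5. Inside the container, installation now works
apt-get update && apt-get install -y tcpdump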
Note: Runtime privilege and Linux capabilities
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
Additional resources:
ROOT IN CONTAINER, ROOT ON HOST
Hope this helps.

How can I use REST API to interact with the Docker engine?

We can use the command docker images to list the Docker images we have on the local host.
Now I want to get the same information from a remote server by sending an HTTP GET request in Firefox or Chrome. Does Docker provide some REST API to do this?
I did a lot of search. For example:
Examples using the Docker Engine SDKs and Docker API
It provides a way something like this:
curl --unix-socket /var/run/docker.sock http:/v1.24/containers/json
I know a little about Unix sockets, and I don't think this is what I want. The URL (http:/v1.24/containers/json) is weird and doesn't even have a server name in it. I don't think it can work on a remote server. (It does work on a local server.)
Is there any official documentation that Docker provides on this topic?
You need to expose the Docker daemon on a port.
You can configure the Docker daemon to listen to multiple sockets at the same time using multiple -H options:
This example listens on the default Unix socket and on two specific IP addresses on this host:
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2
The Docker client will honor the DOCKER_HOST environment variable to set the -H flag for the client.
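For example, either of the following works from a client machine (the address is illustrative and must match where your daemon listens; 2375 is the daemon's default unencrypted TCP port):
$ docker -H tcp://192.168.59.106:2375 ps
$ export DOCKER_HOST="tcp://192.168.59.106:2375"
$ docker ps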
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
You can make this persistent by creating a systemd drop-in:
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/10_docker.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2376
EOF
Then reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
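Once the daemon is listening on TCP, the endpoints answer plain HTTP GETs, which is exactly what the question asks for. A quick check from another machine (substitute your Docker host's address; the port must match the drop-in above, and no TLS is assumed):
$ curl http://<your-docker-host>:2376/v1.24/images/json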
Note: this way you would be exposing your host and you shouldn't do it this way in production. Please read more about this on the link I shared earlier.

Create multiple Postgres instances on same machine

To test streaming replication, I would like to create a second Postgres instance on the same machine. The idea is that if it can be done on the test server, then it should be trivial to set it up on the two production servers.
The instances should use different configuration files and different data directories. I tried following the instructions here http://ubuntuforums.org/showthread.php?t=1431697 but I haven't figured out how to get Postgres to use a different configuration file. If I copy the init script, the scripts are just aliases to the same Postgres instance.
I'm using Postgres 9.3 and the Postgres help pages say to specify the configuration file on the postgres command line. I'm not really sure what this means. Am I supposed to install some client for this to work? Thanks.
I assume you can work your way through using the standard PostgreSQL utilities.
Create the clusters
$ initdb -D /path/to/datadb1
$ initdb -D /path/to/datadb2
Run the instances
$ pg_ctl -D /path/to/datadb1 -o "-p 5433" -l /path/to/logdb1 start
$ pg_ctl -D /path/to/datadb2 -o "-p 5434" -l /path/to/logdb2 start
Test streaming
Now you have two instances running on ports 5433 and 5434. Configuration files for them are in data dirs specified by initdb. Tweak them for streaming replication.
Your default installation remains untouched in port 5432.
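To confirm both instances are up, you can ask each one for its port (this assumes the default postgres database and the OS user that ran initdb):
$ psql -h localhost -p 5433 -d postgres -c 'SHOW port;'
$ psql -h localhost -p 5434 -d postgres -c 'SHOW port;'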
On Debian based distros you could use pg_createcluster instead of initdb (it takes the version and a cluster name as positional arguments):
$ pg_createcluster -u [user] -g [group] -d /path/to/data -l /path/to/log -p 5433 9.3 [cluster_name]
Also, pg_ctlcluster is an alternative to pg_ctl.
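For example, to start the cluster created above (version and cluster name must match the ones you passed to pg_createcluster):
$ pg_ctlcluster 9.3 [cluster_name] start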
Steps to create New Server Instance on PostgreSQL 9.5
On command prompt run:
initdb -D Instance_Directory_path -U username -W
(prompts for password)
Once the new instance directory is created, run the command prompt as Administrator (pg_ctl register creates a Windows service):
pg_ctl register -N service_name -D Instance_Directory_path -o "-p port_no"
After the service is registered, start the server:
pg_ctl start -D Instance_Directory_path -o "-p port_no"
To complement the other answers, on CentOS 6 and 7:
After running something like
$ initdb -D /path/to/newdb
you'll have to change at least the port configuration option and, probably, listen_addresses in the config file postgresql.conf.
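A minimal way to apply those two settings, assuming the /path/to/newdb from above (values are illustrative; in postgresql.conf the last occurrence of a setting wins, so appending works):
cat >> /path/to/newdb/postgresql.conf <<EOF
port = 5433
listen_addresses = 'localhost'
EOF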
Instead of starting this new instance immediately, as explained in the previous answers, you may want the new instance to run automatically on system start (after a shutdown, for example). To do this, since CentOS doesn't have the pg_ctl register option (it's Windows-only), you'll have to create a new service file and register it so that systemctl or service can start it up automatically.
CentOS 6
Run the following commands to find and copy the service's init file:
[root@machine ~]# service postgresql-9.6 edit
Usage: /etc/init.d/postgresql-9.6 {start|stop|status|restart|upgrade|condrestart|try-restart|reload|force-reload|initdb|promote}
[root@machine ~]# cd /etc/init.d # Now we know where the service file is
[root@machine init.d]# cp -p postgresql-9.6 postgresql-9.6_5433
[root@machine init.d]# vi postgresql-9.6_5433
Now you can point the PGDATA directory at the one where the new instance resides. If you're using a PostgreSQL version prior to 9.4 (which you shouldn't be by the time of this answer), you'll have to change PGPORT too, to the port the new instance listens on.
The name of the new service is up to you; I usually take the original service name and append the port number.
Now you only have to register the new service:
[root@machine init.d]# chkconfig postgresql-9.6_5433 on # service registered!
[root@machine init.d]# service postgresql-9.6_5433 start
Starting postgresql-9.6_5433 service: [ OK ]
[root@machine init.d]# service postgresql-9.6_5433 status
postgresql-9.6_5433 (pid 120993) is running...
CentOS 7
In CentOS 7, systemctl replaces service for controlling the services running on the machine, and commands and paths change a bit. But the process is the same: create a new service file, edit it with the new location/port, register it, and start it:
[root@localhost ~]# locate postgresql.service
/etc/systemd/system/multi-user.target.wants/postgresql.service
/usr/lib/systemd/system/postgresql.service
[root@localhost ~]# cd /usr/lib/systemd/system
[root@localhost system]# cp -p postgresql.service postgresql_5433.service
[root@localhost system]# vi postgresql_5433.service
# Change PGDATA and maybe PGPORT if PG version <9.4
[root@localhost system]# systemctl enable postgresql_5433.service
[root@localhost system]# systemctl start postgresql_5433.service
[root@localhost system]# systemctl list-unit-files | grep postgres
postgresql.service enabled
postgresql_5433.service enabled

Docker: Use sockets for communication between 2 containers

I have 2 Docker containers: App & Web.
App — a simple container with the PHP application code. It is used only to store and deliver the code to the remote Docker host.
App image Dockerfile:
FROM debian:jessie
COPY . /var/www/app/
VOLUME ["/var/www/app"]
CMD ["true"]
Web — the web service container, consisting of PHP-FPM + Nginx.
Web image Dockerfile:
FROM nginx
# Remove default nginx configs.
RUN rm -f /etc/nginx/conf.d/*
# Install packages
RUN apt-get update && apt-get install -my \
supervisor \
curl \
wget \
php5-cli \
php5-curl \
php5-fpm \
php5-gd \
php5-memcached \
php5-mysql \
php5-mcrypt \
php5-sqlite \
php5-xdebug \
php-apc
# Ensure that PHP5 FPM is run as root.
RUN sed -i "s/user = www-data/user = root/" /etc/php5/fpm/pool.d/www.conf
RUN sed -i "s/group = www-data/group = root/" /etc/php5/fpm/pool.d/www.conf
# Pass all docker environment
RUN sed -i '/^;clear_env = no/s/^;//' /etc/php5/fpm/pool.d/www.conf
# Add configuration files
COPY config/nginx.conf /etc/nginx/
COPY config/default.vhost /etc/nginx/conf.d
COPY config/supervisord.conf /etc/supervisor/conf.d/
COPY config/php.ini /etc/php5/fpm/conf.d/40-custom.ini
VOLUME ["/var/www", "/var/log"]
EXPOSE 80 443 9000
ENTRYPOINT ["/usr/bin/supervisord"]
My question: is it possible to link the Web container and the App container by a socket?
The main reason for this is using the App container to deploy updated code to the remote Docker host.
Using volumes/named volumes to share code between containers is not a good idea, but sockets can help.
Thank you very much for your help and support!
If both containers run on the same host, it's possible to share a socket between the two, since sockets are plain files.
You can create a local Docker volume and mount that volume into both containers, then configure your program(s) to use that path:
docker volume create --name=phpfpm
docker run -v phpfpm:/var/phpfpm web
docker run -v phpfpm:/var/phpfpm app
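Whatever creates the socket just has to place it under the shared path. With PHP-FPM, for instance, that means pointing its listen directive into the volume; the socket path below is an assumption, not part of the original setup:
# In the web image, make PHP-FPM create its socket inside the shared volume
sed -i 's|^listen = .*|listen = /var/phpfpm/php-fpm.sock|' /etc/php5/fpm/pool.d/www.conf
# The nginx vhost then passes PHP requests to the same socket:
#   fastcgi_pass unix:/var/phpfpm/php-fpm.sock;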
If the socket can be generated on the host, you can mount the file into both containers. This is the method used to let a Docker container control the host's Docker daemon:
docker run -v /var/container/some.sock:/var/run/some.sock web
docker run -v /var/container/some.sock:/var/run/some.sock app
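A quick check that both containers see the same socket file (this assumes the containers were given the names web and app, e.g. with --name):
docker exec web ls -l /var/run/some.sock
docker exec app ls -l /var/run/some.sock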