OpenShift Online ZooKeeper from Dockerfile: pod in "CrashLoopBackOff" - apache-zookeeper

I want to deploy an application on OpenShift Origin Online (next gen). There will be at least 4 pods communicating via services.
In the first pod I have to run ZooKeeper, so I created a pod that runs ZooKeeper from my Docker image, but the pod's status is CrashLoopBackOff.
I created a new project:
oc new-project my-project
I created a new app to deploy my ZooKeeper Docker image:
oc new-app mciz/zookeeper-docker-infispector --name zookeeper
And the output message was:
--> Found Docker image 51220f2 (11 minutes old) from Docker Hub for "mciz/zookeeper-docker-infispector"
* An image stream will be created as "zookeeper:latest" that will track this image
* This image will be deployed in deployment config "zookeeper"
* Ports 2181/tcp, 2888/tcp, 3888/tcp will be load balanced by service "zookeeper"
* Other containers can access this service through the hostname "zookeeper"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/zookeeper --add ...'
* WARNING: Image "mciz/zookeeper-docker-infispector" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources with label app=zookeeper ...
imagestream "zookeeper" created
deploymentconfig "zookeeper" created
service "zookeeper" created
--> Success
Run 'oc status' to view your app.
Then I listed the pods:
oc get pods
with output:
NAME READY STATUS RESTARTS AGE
zookeeper-1-mrgn1 0/1 CrashLoopBackOff 5 5m
Then I checked the logs:
oc logs -p zookeeper-1-mrgn1
with output:
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
grep: /opt/zookeeper/bin/../conf/zoo.cfg: No such file or directory
mkdir: can't create directory '': No such file or directory
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Invalid config, exiting abnormally
My Dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER mciz
ARG MIRROR=http://apache.mirrors.pair.com
ARG VERSION=3.4.6
LABEL name="zookeeper" version=$VERSION
RUN apk add --no-cache wget bash \
&& mkdir /opt \
&& wget -q -O - $MIRROR/zookeeper/zookeeper-$VERSION/zookeeper-$VERSION.tar.gz | tar -xzf - -C /opt \
&& mv /opt/zookeeper-$VERSION /opt/zookeeper \
&& cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
EXPOSE 2181 2888 3888
WORKDIR /opt/zookeeper
VOLUME ["/opt/zookeeper/conf"]
ENTRYPOINT ["/opt/zookeeper/bin/zkServer.sh"]
CMD ["start-foreground"]

There is a warning in the new-app command output:
WARNING: Image "mciz/zookeeper-docker-infispector" runs as the 'root' user which may not be permitted by your cluster administrator
You should fix the Docker image so that it does not run as root (or tell OpenShift to allow containers in this project to run as root).
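If you do control the cluster, the usual way to allow root containers in a project is to grant the anyuid SCC to the project's default service account (this needs cluster-admin rights, which OpenShift Online typically does not give you):
oc adm policy add-scc-to-user anyuid -z default -n my-project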
There is a specific example of a ZooKeeper image and template that works in OpenShift:
https://github.com/openshift/origin/tree/master/examples/zookeeper
Notice the Dockerfile changes needed to run the container as a non-root user.
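For reference, a minimal sketch of the kind of change that example makes (not the exact upstream Dockerfile): hand the ZooKeeper tree to a non-root user, keep it group-writable for the root group (OpenShift may start the container with an arbitrary UID in group 0), and declare the user:
RUN adduser -D -u 1001 zookeeper \
 && chown -R 1001:0 /opt/zookeeper \
 && chmod -R g+rwX /opt/zookeeper
USER 1001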

Related

Gcloud deploy - container failed to start

docker build -t "us.gcr.io/ek-airflow-stage/array_data:sree" .
Status: Downloaded newer image for python:3.7
---> 869a8debb0fd
Successfully built 869a8debb0fd
Successfully tagged us.gcr.io/ek-airflow-stage/array_data:sree
docker push "us.gcr.io/ek-airflow-stage/array_data:sree"
The push refers to repository [us.gcr.io/ek-airflow-stage/array_data]
a36ba9e322f7: Layer already exists
sree: b size: 2218
gcloud run deploy "ek-airflow-stage" \
--quiet \
--image "us.gcr.io/ek-airflow-stage/array_data:sree" \
--region "us-central1" \
--platform "managed"
Deploying container to Cloud Run service [ek-airflow-stage] in project ["project"] region [us-central1]
/ Deploying... Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
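For context, the error describes the contract Cloud Run enforces: the container has to start an HTTP server on the port it receives in the PORT environment variable (8080 by default). A purely illustrative sketch, not the asker's actual application, of a Dockerfile whose process honours that variable:
FROM python:3.7
# Cloud Run injects PORT at runtime; bind the server to it (falling back to 8080 locally).
CMD exec python -m http.server ${PORT:-8080} --bind 0.0.0.0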

ecs-cli stuck creating containers

I'm trying to get a working Docker environment on AWS using the ecs-cli command line.
I have a working local Docker environment using Dockerfiles, docker-compose.yml, a .env file, and an entrypoint.sh script. The containers are an Apache web server running PHP with a bunch of extensions, and a MySQL db.
Skeleton file structure is like this:
./db <-- mounted by db container for persistence
./docker
./docker/database
./docker/database/Dockerfile
./docker/database/dump.sql
./docker/webserver
./docker/webserver/apache-config.conf
./docker/webserver/Dockerfile
./docker/webserver/entrypoint.sh
./docker-compose.yml
./web <-- mounted by web server, contains all public web code
Here are the 2 Dockerfiles:
./docker/database/Dockerfile
FROM mysql:5.6
ADD dump.sql /docker-entrypoint-initdb.d
./docker/webserver/Dockerfile
FROM php:5.6-apache
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y zlib1g-dev nodejs gdal-bin
RUN npm install -g topojson
RUN docker-php-ext-install mysql mysqli pdo pdo_mysql zip
RUN pecl install dbase
RUN docker-php-ext-enable dbase
COPY apache-config.conf /etc/apache2/sites-enabled/000-default.conf
RUN a2enmod rewrite headers
RUN service apache2 restart
COPY entrypoint.sh /entrypoint.sh
RUN chmod 0755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh", "apache2-foreground"]
entrypoint.sh creates some directories in the web directory for apache to write into:
./docker/webserver/entrypoint.sh
#!/bin/sh
mkdir /var/www/html/maps
chown www-data /var/www/html/maps
chgrp www-data /var/www/html/maps
exec "$#"
Here's the docker-compose.yml
version: '2'
services:
  webserver:
    image: ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-webserver
    ports:
      - "8080:80"
    volumes:
      - ./web:${APACHE_DOC_ROOT}
    links:
      - db
    environment:
      - HTTP_ROOT=http://${DOCKER_HOST_IP}:${DOCKER_HOST_PORT}/
      - PHP_TMP_DIR=${PHP_TMP_DIR}
      - APACHE_LOG_DIR=${APACHE_LOG_DIR}
      - APACHE_DOC_ROOT=${APACHE_DOC_ROOT}/
      - SERVER_ADMIN_EMAIL=${SERVER_ADMIN_EMAIL}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    env_file: .env
  db:
    user: "1000:50"
    image: ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-database
    ports:
      - "4406:3306"
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    env_file: .env
To build the images referenced there, I:
Created an AWS IAM user with Admin permissions and set keys in ~/.aws/credentials under a profile name, then set up the local environment using
export AWS_PROFILE=my-project-profile
Then built the images locally as follows:
docker/webserver $ docker build -t ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-webserver .
docker/database $ docker build -t ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-database .
Got Docker logged into ECR (by running the docker login command echoed to stdout):
$ aws ecr get-login --no-include-email
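Since aws ecr get-login only prints a docker login command, one common way to print and run it in a single step (assuming a POSIX shell) is:
$ $(aws ecr get-login --no-include-email)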
Created the repos:
$ aws ecr create-repository --repository-name project/project-webserver
$ aws ecr create-repository --repository-name project/project-database
Pushed the images:
$ docker push ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-webserver
$ docker push ACCOUNT_NUMBER.dkr.ecr.eu-west-1.amazonaws.com/project/project-database
Checked they are there:
$ aws ecr describe-images --repository-name project/project-webserver
$ aws ecr describe-images --repository-name project/project-database
All looks fine.
Created an EC2 key-pair in the same region
$ ecs-cli configure --region eu-west-1 --cluster project
$ cat ~/.ecs/config
Tried running them on ECS:
$ ecs-cli up --keypair project --capability-iam --size 1 --instance-type t2.micro --force
But if I open port 22 in the security group of the resulting EC2 instance and SSH in, I can see the agent container running, but no others:
[ec2-user@ip-10-0-0-122 ~]$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d011f1402c26 amazon/amazon-ecs-agent:latest "/agent" 8 minutes ago Up 8 minutes ecs-agent
I don't see anything bad in the logs for the agent:
[ec2-user@ip-10-0-1-102 ~]$ docker logs ecs-agent
2017-09-22T13:32:55Z [INFO] Loading configuration
2017-09-22T13:32:55Z [INFO] Loading state! module="statemanager"
2017-09-22T13:32:55Z [INFO] Event stream ContainerChange start listening...
2017-09-22T13:32:55Z [INFO] Registering Instance with ECS
2017-09-22T13:32:55Z [INFO] Registered! module="api client"
2017-09-22T13:32:55Z [INFO] Registration completed successfully. I am running as 'arn:aws:ecs:eu-west-1:248221388880:container-instance/ba24ead4-21a5-4bc7-ba9f-4d3ba0f29c6b' in cluster 'gastrak'
2017-09-22T13:32:55Z [INFO] Saving state! module="statemanager"
2017-09-22T13:32:55Z [INFO] Beginning Polling for updates
2017-09-22T13:32:55Z [INFO] Event stream DeregisterContainerInstance start listening...
2017-09-22T13:32:55Z [INFO] Initializing stats engine
2017-09-22T13:32:55Z [INFO] NO_PROXY set:169.254.169.254,169.254.170.2,/var/run/docker.sock
2017-09-22T13:33:05Z [INFO] Saving state! module="statemanager"
2017-09-22T13:44:50Z [INFO] Connection closed for a valid reason: websocket: close 1000 (normal): ConnectionExpired: Reconnect to continue
I guess I need to figure out why those containers aren't initialising, but where do I look and, better still, what do I need to do next to get this to work?
In case anyone else runs adrift here, the missing incantations were
$ ecs-cli compose create
which builds an ECS task definition from your compose file (assuming it is compatible...)
and
$ ecs-cli compose run
which will build and run the containers on the remote EC2 machine.
SSH'ing to the remote machine and doing a "docker ps -a" should show the containers running. Or "docker logs [container_name]" to see what went wrong...
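Put together, a hedged sketch of that sequence (the project name and file name here are assumptions, and the compose file has to stick to options ecs-cli understands):
$ ecs-cli compose --file docker-compose.yml --project-name project create
$ ecs-cli compose --file docker-compose.yml --project-name project up
$ ecs-cli ps
up registers the task definition and starts it, so create on its own leaves nothing running; ecs-cli ps then shows the containers on the cluster without needing to SSH in.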

connect to shell terminal of other container in a pod

When I define multiple containers in a pod/pod template, for example one container running an agent and another running php-fpm, how can they access each other? I need the agent container to connect to php-fpm by shell and execute a few steps interactively through the agent container.
Based on my understanding, we can package kubectl into the agent container and use kubectl exec -it <container id> sh to connect to the container. But I don't want the agent container to have any more privilege than it needs to connect to the target container, which is php-fpm.
Is there a better way for the agent container to connect to php-fpm by a shell and execute commands interactively?
Also, I wasn't successful in running kubectl from a container when using minikube, due to the following errors:
docker run -it -v ~/.kube:/root/.kube lachlanevenson/k8s-kubectl get nodes
Error in configuration:
* unable to read client-cert /Users/user/.minikube/apiserver.crt for minikube due to open /Users/user/.minikube/apiserver.crt: no such file or directory
* unable to read client-key /Users/user/.minikube/apiserver.key for minikube due to open /Users/user/.minikube/apiserver.key: no such file or directory
* unable to read certificate-authority /Users/user/.minikube/ca.crt for minikube due to open /Users/user/.minikube/ca.crt: no such file or directory
docker run -it -v ~/.kube:/root/.kube lachlanevenson/k8s-kubectl get nodes
First off, every Pod within a k8s cluster has its own k8s credentials provided by /var/run/secrets/kubernetes.io/serviceaccount/token, so there is absolutely no need to attempt to volume-mount your home directory into a docker container.
The reason you are getting the error about the client-cert is that the contents of ~/.kube are merely strings pointing to the externally defined SSL key, SSL certificate, and SSL CA certificate referenced inside ~/.kube/config; but I won't speak to fixing that problem further, since there is no good reason to be using that approach.
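On the privilege point: instead of handing the agent the credentials from ~/.kube, you can give its service account a Role that only allows exec into pods in its own namespace. A hedged sketch (the names and namespace are hypothetical, and you would tighten it further for a single target pod):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: exec-only
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: agent-exec-only
  namespace: default
subjects:
- kind: ServiceAccount
  name: agent
  namespace: default
roleRef:
  kind: Role
  name: exec-only
  apiGroup: rbac.authorization.k8s.io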

Docker: Use sockets for communication between 2 containers

I have 2 Docker containers: App & Web.
App is a simple container with the PHP application code. It is used only to store and deliver the code to the remote Docker host.
App image Dockerfile:
FROM debian:jessie
COPY . /var/www/app/
VOLUME ["/var/www/app"]
CMD ["true"]
Web is the web service container, consisting of PHP-FPM + Nginx.
Web image Dockerfile:
FROM nginx
# Remove default nginx configs.
RUN rm -f /etc/nginx/conf.d/*
# Install packages
RUN apt-get update && apt-get install -my \
supervisor \
curl \
wget \
php5-cli \
php5-curl \
php5-fpm \
php5-gd \
php5-memcached \
php5-mysql \
php5-mcrypt \
php5-sqlite \
php5-xdebug \
php-apc
# Ensure that PHP5 FPM is run as root.
RUN sed -i "s/user = www-data/user = root/" /etc/php5/fpm/pool.d/www.conf
RUN sed -i "s/group = www-data/group = root/" /etc/php5/fpm/pool.d/www.conf
# Pass all docker environment
RUN sed -i '/^;clear_env = no/s/^;//' /etc/php5/fpm/pool.d/www.conf
# Add configuration files
COPY config/nginx.conf /etc/nginx/
COPY config/default.vhost /etc/nginx/conf.d
COPY config/supervisord.conf /etc/supervisor/conf.d/
COPY config/php.ini /etc/php5/fpm/conf.d/40-custom.ini
VOLUME ["/var/www", "/var/log"]
EXPOSE 80 443 9000
ENTRYPOINT ["/usr/bin/supervisord"]
My question: is it possible to link the Web container and the App container via a socket?
The main reason for this is to use the App container to deploy updated code to the remote Docker host.
Using volumes/named volumes to share code between containers is not a good idea, but sockets can help.
Thank you very much for your help and support!
If both containers run on the same host, it's possible to share a socket between the two, since sockets are plain files.
You can create a local docker volume and mount that volume into both containers, then configure your program(s) to use that path (a configuration sketch follows the commands below).
docker volume create --name=phpfpm
docker run -v phpfpm:/var/phpfpm web
docker run -v phpfpm:/var/phpfpm app
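To make "configure your program(s) to use that path" concrete, a sketch of matching PHP-FPM and nginx settings (the socket path and file names are assumptions, not taken from the question):
; php-fpm pool config in the app-side container (e.g. www.conf):
listen = /var/phpfpm/php-fpm.sock
listen.mode = 0666
# nginx vhost in the web container:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/phpfpm/php-fpm.sock;
}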
If the socket can be generated on the host, you can mount the file into both containers. This is the method used to get a docker container to control the host's docker.
docker run -v /var/container/some.sock:/var/run/some.sock web
docker run -v /var/container/some.sock:/var/run/some.sock app

error with docker container with postgresql

I am unable to install PostgreSQL on an existing Jenkins Docker image. Below is the list of steps I have followed:
Step 1: Download the Jenkins image and specify the name of the volume as jenkins-home, as described in the article below:
http://www.catosplace.net/blog/2015/02/11/running-jenkins-in-docker-containers/
Using the command below, download the image and specify the volume:
docker create -v /var/jenkins_home --name jenkins-home jenkins
Step 2: Update the Dockerfile, please see below.
The Dockerfile adds postgresql installation commands taken from postgresql_dockerfile.
Step 3: Build the Docker image:
docker build -t ci_jenkins_docker .
Step 4: Now run the ci_jenkins_docker image:
docker run -p 8085:8080 --volumes-from jenkins-home ci_jenkins_docker
I get the below error message after running the above command:
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied.
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
What am I doing wrong?
When you mount an external volume, that happens at run time, and the permissions of the mounted volume override whatever was set earlier in the Dockerfile image. In order to make the jenkins_home directory writable, you will probably have to change the permissions in an entrypoint script.
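A hedged sketch of that idea, assuming the official image's /usr/local/bin/jenkins.sh entrypoint and its default jenkins user (the wrapper would be copied into the image, made executable, and set as ENTRYPOINT while the image runs as root):
#!/bin/bash
# fix-perms.sh: the mounted volume keeps whatever ownership it was created with,
# so hand it to the jenkins user first, then drop privileges and run the
# image's normal entrypoint.
chown -R jenkins:jenkins /var/jenkins_home
exec su jenkins -c "exec /usr/local/bin/jenkins.sh"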