Installing Rancher using Docker containers takes too long and gets a connection timeout - kubernetes

I am trying to install a K8s cluster with Rancher. After installing Docker successfully, I ran the following command to install the Rancher container:
$ sudo docker run -d --restart=unless-stopped -p 8088:8088 rancher/server:stable
The console output I got was:
As you can see, I am not able to download the Rancher images. What could I do to make it work?

To start with, this Rancher installation is based on what is proposed in this link: https://phoenixnap.com/kb/install-rancher-on-ubuntu. According to Docker Hub, the Rancher version installed by the script in that link is 1.x.
So, I don't recommend the command proposed in the script:
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
The rancher/server image must be replaced by rancher/rancher:stable; then you will install the latest Rancher version, 2.x.
Also, to avoid the timeout problem, instead of starting with "docker run", run "docker pull" first and only then "docker run". I recommend the following commands, which are faster and more reliable:
sudo docker pull rancher/rancher:latest
sudo docker run -d --restart=unless-stopped -p 8088:8088 -p 8443:8443 --privileged rancher/rancher:latest
After running that, everything is in good shape and you can use Rancher. I hope this helps someone; I lost a day figuring it out, since the download takes hours each time before it times out and fails.
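To confirm the container actually came up, a quick sanity check along these lines can help (assuming the rancher/rancher:latest image and the port mappings from the commands above; replace <container-id> with the ID shown by docker ps):
sudo docker ps --filter ancestor=rancher/rancher:latest    # the container should be listed as Up
sudo docker logs -f <container-id>                         # follow the startup logs until Rancher reports it is ready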

Related

Docker Postgres shutting down immediately with (4294967295) exited code

I am using Docker for Windows, which is up to date (v4.16.2). I also switched to Windows containers. I wanted to run Postgres using this command:
docker run --name MyPostgresSQL -p 5432:5432 -e POSTGRES_PASSWORD=12345 -d postgres
The log of this MyPostgresSQL container is here:
I couldn't even find any question about this error. I tried restarting Docker, switching to Linux containers and then back to Windows containers, and purging Docker data, but none of them worked at all.

How to install chrony on redhat 8 minimal

I'm using the Keycloak Docker image and need to synchronize time with chrony. However, I cannot install chrony, because it's not in the repository, I assume.
I use image from https://hub.docker.com/r/jboss/keycloak
It's based on registry.access.redhat.com/ubi8-minimal
Steps to reproduce:
~$ docker run -d --rm -p 8080:8080 --name keycloak jboss/keycloak
~$ docker exec -it -u root keycloak bash
[root@707c136d9c8a /]# microdnf install chrony
error: No package matches 'chrony'
I'm not able to find a working repo which provides chrony for Red Hat 8 minimal.
Apparently I need to synchronize time on the host system; it has nothing to do with the container itself. Silly me, I need a break.
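For reference, a minimal sketch of syncing the clock on the host instead (assuming the host is a RHEL-family system with systemd; package and service names can differ on other distributions):
sudo dnf install -y chrony            # install chrony on the host, not in the container
sudo systemctl enable --now chronyd   # start the time service and enable it on boot
chronyc tracking                      # verify the host clock is being synchronized
The container then simply inherits the host's (now correct) clock.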

No access to apache docker server

In my Docker image I need to run an Apache server to deploy my website, a GlassFish server for the corresponding backend, and MongoDB, to which the backend connects.
My dockerfile looks like this:
FROM httpd:2.4
FROM glassfish:latest
FROM mongo:3.6
COPY /backend_war_exploded /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
COPY /backend_war_exploded /usr/local/glassfish4/bin/backend_war_exploded
COPY /dist /usr/local/apache2/htdocs/
After building the image I run and start it with:
docker run -dit --name application -p 80:80 -p 8080:8080 -p 27017:27017 applicationimg
docker start application
When I try to access it via http://localhost:80 it returns ERR_EMPTY_RESPONSE. Same for the backend, but I can access MongoDB on port 27017. When I comment out the FROM lines in my Dockerfile and run everything separately, it works just fine. Does somebody see the mistake? Thanks in advance.
UPDATE
I followed your suggestion and rewrote the Dockerfile:
FROM ubuntu:16.04
COPY /dist /var/www/html/
COPY /backend_war_exploded /glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
RUN apt-get update && apt-get install -y apache2
RUN apt-get install -y openjdk-8-jdk
RUN apt-get install -y wget && apt-get install -y unzip
RUN wget http://download.java.net/glassfish/4.1.2/release/glassfish-4.1.2.zip
RUN unzip glassfish-4.1.2.zip
RUN cd /glassfish4/bin/ && ./asadmin start-domain domain1
EXPOSE 80
EXPOSE 8080
The web server starts up and is accessible via localhost:80, and the GlassFish server starts while building the image, but when I run the Docker image it is not started anymore. When I access the container via docker exec, I can navigate to GlassFish and start it up manually. What is the issue?
You need to depend on one FROM only and add the other tools through RUN steps, or use a single image for each application and connect them together through a Docker network or by creating a docker-compose.yml, which will be easier; you can check it through here. Using multiple FROM lines does not mean that you are going to have all 3 in 1.
For more information about how to create a Dockerfile and how to deploy your application with multiple containers, you can check the Get Started tutorial from Docker.
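As a rough illustration of the docker-compose route (only a sketch: the glassfish:latest image name and the ./dist and ./backend_war_exploded paths are carried over from the Dockerfiles above and may need adjusting):
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
    volumes:
      - ./dist:/usr/local/apache2/htdocs/    # static frontend
  backend:
    image: glassfish:latest                  # image name as used in the original Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - ./backend_war_exploded:/usr/local/glassfish4/glassfish/domains/domain1/autodeploy/backend_war_exploded
    depends_on:
      - db
  db:
    image: mongo:3.6
    ports:
      - "27017:27017"
Each service then runs its own single process, docker-compose up starts and networks all three, and the backend reaches MongoDB at the hostname db instead of localhost.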
In order to run multiple services inside one container, you need to use a process manager like Supervisor. Check the following link for more details: Multi-Service Container
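For that approach, a supervisord.conf could look roughly like this (a sketch only: it assumes all three programs are installed in one image at the paths used earlier in this thread):
[supervisord]
nodaemon=true                 ; keep supervisord in the foreground as the container's main process

[program:httpd]
command=httpd-foreground      ; foreground wrapper shipped with the httpd image

[program:glassfish]
command=/usr/local/glassfish4/bin/asadmin start-domain --verbose domain1

[program:mongod]
command=mongod --bind_ip_all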

Docker postgres doesn't start at build

I want to use a Docker container to simulate my production environment, so I installed the DB and the server in the same container, and not each in its own.
This is my dockerfile:
FROM debian
RUN apt update
RUN apt install postgresql-9.6 tomcat8 tomcat8-admin -y
RUN service postgresql start
RUN service postgresql status # says postgres is down
RUN su - postgres ;
RUN createdb db_example # fails !!!
RUN psql -c "CREATE USER springuser WITH PASSWORD 'test123';"
RUN exit
RUN service tomcat8 start
COPY target/App-1.0.war /var/lib/tomcat8/webapps/
CMD ["/bin/bash"]
The problem is that the database is down, so I am unable to create the user and the database.
If I start a Debian Docker container and do these steps by hand, everything works fine.
Thanks for your help
All the recommendations in the comments are correct, it's better to keep services in different containers.
Nevertheless, and just to let you know, the problem in the Dockerfile is that starting services in RUN statements is useless. For every instruction in the Dockerfile, Docker creates a new intermediate image. For example, RUN service postgresql start may start postgresql during docker build, but it doesn't persist in the final image. Only the filesystem persists from one step to the next, not the processes.
Every process needs to be started in the entrypoint; this is the only command that's called when you exec docker run:
FROM debian
RUN apt update
RUN apt install postgresql-9.6 tomcat8 tomcat8-admin -y
COPY target/App-1.0.war /var/lib/tomcat8/webapps/
ENTRYPOINT ["/bin/bash", "-c", "service postgresql start && service postgresql status && createdb db_example && psql -c \"CREATE USER springuser WITH PASSWORD 'test123';\" && service tomcat8 start && sleep infinity"]
(It may have problems with quotes on the psql command)
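If the quoting becomes unwieldy, one alternative (just a sketch; entrypoint.sh is a name chosen here) is to put the same steps into a small script, with the database commands run as the postgres user, which is what the original su - postgres line was aiming at:
#!/bin/bash
set -e
service postgresql start                                               # start the database first
su postgres -c "createdb db_example"                                   # run as the postgres superuser
su postgres -c "psql -c \"CREATE USER springuser WWITH PASSWORD 'test123';\"" > /dev/null 2>&1 || \
su postgres -c "psql -c \"CREATE USER springuser WITH PASSWORD 'test123';\""
service tomcat8 start
sleep infinity                                                         # keep the container alive
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]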
I had the problem that in the war file, localhost was hard-coded as the database host.
Thanks to Light.G, who suggested I use --net=host for the containers, so now there is one container with the database and one with the Tomcat server.
These are the steps I followed.
Build the docker image
docker build -t $USER/App .
Start a postgres database
Since we are using the host network namespace, it is not possible to run another program on port 5432.
Start the postgres container like this:
docker run -it --rm --net=host -e POSTGRES_USER='springuser' -e POSTGRES_DB='db_example' -e POSTGRES_PASSWORD='test123' postgres
Start the tomcat
Start the App container, with this command:
docker run -it --net=host --rm $USER/App
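Since both containers share the host network namespace, the hard-coded localhost in the war file now points at the Postgres container; a quick check (a sketch, assuming psql and curl are installed on the host) is:
PGPASSWORD='test123' psql -h localhost -U springuser db_example -c '\conninfo'   # should report a connection on port 5432
curl -I http://localhost:8080/                                                    # Tomcat should answer on its default port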

Docker Lamp Centos7: '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1

I'm starting to work with Docker to automate environments, so I'm trying to build a simple LAMP stack. The Dockerfile is the following:
FROM centos:7
ENV container=docker
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum -y update; yum clean all; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
RUN yum -y update && yum clean all
RUN yum -y install firewalld httpd mariadb-server mariadb php php-mysql php-gd php-pear php-xml php-bcmath php-mbstring php-mcrypt php-php-gettext
#Enable services
RUN systemctl enable httpd.service
RUN systemctl enable mariadb.service
#start services
RUN systemctl start httpd.service
RUN systemctl start mariadb.service
#Open firewall ports
RUN firewall-cmd --permanent --add-service=http
RUN firewall-cmd --permanent --add-service=https
RUN firewall-cmd --reload
EXPOSE 80
CMD ["/usr/sbin/init"]
so when I build the image
docker build -t myimage .
Then when I run it, I get the following error:
The command '/bin/sh -c systemctl start httpd.service' returned a non-zero code: 1
When I enter interactive mode (skipping the commands from RUN systemctl start httpd.service onward and rebuilding the image):
docker run -t -i myimage /bin/bash
And after trying to start the httpd service manually, I get the following error:
Failed to get D-Bus connection: No connection to service manager.
So, what am I doing wrong?
First of all, welcome to Docker! :-) Loads of Docker tutorials and docs are written around Ubuntu containers, but I like Centos too.
Ok, there are a couple of things to talk about here:
You're running up against a known issue with systemd-based Docker containers: they seem to need extra privileges to run, and even then a lot of extra config is required to get them working. The Red Hat team is experimenting with some fixes (mentioned in the comments), but I'm not sure where that's at.
If you wish to try getting it working, these are the best instructions I've found, but I've played with this several times in the last couple of weeks and not got it working yet.
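If you do want to experiment with that route anyway, the pattern usually suggested for CentOS 7 systemd images is to drop the RUN systemctl start ... lines (enable is fine at build time, start is not), keep CMD ["/usr/sbin/init"], and give the container access to the cgroup filesystem at run time; roughly (a sketch, not guaranteed to work on every Docker setup):
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 --name lamp myimage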
What people might say is "the real issue" here is that a Docker container should not be thought of as a "mini Virtual Machine". Docker is designed to run one "root" process per container, and the container system makes it easy to compose multiple containers together - they are small on disk, light on memory usage and easy to network together.
Here's a blog post from Docker which gives some background on this. There's also the "Docker Fundamentals" docs on Dockerizing applications and Working with containers.
So arguably the best way to proceed with the setup you're attempting to create here (though it might sound more complicated at the beginning) is to break your "stack" up into the services you need, and then use a tool like docker-compose (introduction, documentation) to create single-purpose Docker containers as required.
In your case above, you have two services: a web server and a database server. Therefore two Docker containers should work well, connected together over the database's network connection. Here are some examples (with a minimal sketch after them):
example with Symfony app, nginx and MariaDB
example with MariaDB + NodeJS
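To make that concrete, a minimal docker-compose.yml for this kind of stack might look roughly like the following (image tags, the ./htdocs path, and the password are illustrative assumptions, not taken from the question):
version: "3"
services:
  web:
    image: php:7.4-apache        # Apache and PHP in one official image
    ports:
      - "80:80"
    volumes:
      - ./htdocs:/var/www/html
    depends_on:
      - db
  db:
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: example
docker-compose up then starts both containers, and the web service reaches the database at the hostname db, with no systemd involved.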
If you run one service per Docker container, you don't need to use systemd to manage them, as the Docker daemon manages each container sort of like it is a Unix process. When the process dies, the Docker container dies, and this is important because the Docker server monitors containers and can restart them automatically, or notify you.
This looks like a perfect example of where my docker-systemctl-replacement would fit in. It can easily interpret "systemctl start httpd.service" without an active systemd around. I have done the same for some database services, but not specifically mariadb.service - maybe you could give it a try.
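A rough sketch of how that replacement is usually wired into an image (the script name, its source location, and its use as the container command are assumptions from memory of that project, so check its README):
# overwrite the real systemctl with the replacement script (needs Python inside the image)
COPY systemctl.py /usr/bin/systemctl
# let the replacement act as the init process so that enabled services are started
CMD ["/usr/bin/systemctl"]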