How do I install Chef/Ansible as a container on Kubernetes/Docker?
I have checked the Docker Hub registry for a Chef image.
I have pulled chef/chef and chef/chefdk, but neither of them is working for me.
I have run the command:
docker run -d -p 443:443 chef/chef
The chef/chef Docker container exists mostly for the kitchen-dokken driver, not really for independent use. If you want to build a containerized chef-solo setup, Chef Software has some Habitat examples you can follow, or at least they used to.
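For context, kitchen-dokken is the Test Kitchen driver that runs converges inside containers using the chef/chef image. A minimal .kitchen.yml sketch might look like this (the cookbook name and platform are placeholders, not from the question):

driver:
  name: dokken
provisioner:
  name: dokken
transport:
  name: dokken
platforms:
  - name: ubuntu-20.04
    driver:
      image: dokken/ubuntu-20.04
suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]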
-----
I'm trying to containerize a PostgreSQL server with Docker, and this container will host many other applications as well. The requirement is that the PostgreSQL server data should be mapped to a host volume, so that when the container is stopped we won't lose the data, and so that the next time we start the container the same directory can be mapped again and Postgres can use the old data. Below is the Dockerfile. Note that I'm using Ubuntu 22.04 on the host.
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y postgresql
ENTRYPOINT ["tail", "-f", "/dev/null"]
Docker image is built using the command
docker build -t pg_test .
and the container is run using the command
docker run --name test -v /home/me/data:/var/lib/postgresql/14/main pg_test
'/home/me/data' is the host directory, currently empty, where I want to map the Postgres server data. '/var/lib/postgresql/14/main' is the directory inside the Docker container where Postgres is supposed to store its data.
Once the docker container starts, I enter the docker container using the command
docker exec -it test bash
and once I'm inside, I'm trying to start the PostgreSQL service. But PostgreSQL fails to start as there is no data in '/var/lib/postgresql/14/main' directory. I understand that since I have mapped an empty host directory to '/var/lib/postgresql/14/main' directory, postgres doesn't have the files required to start.
I understand that I'm doing it the wrong way, but I couldn't find a way around it. Can anyone please help me to do this the right way, if there is one?
Any help would be appreciated.
You should use the official postgres Docker image; it will set up the database for you when you start the container. You can find instructions at https://hub.docker.com/_/postgres
If you must use a custom image, you will need to initialize the database yourself, usually by running initdb or whatever your system provides.
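For example, with the Ubuntu postgresql package from the question, the initialization step might look roughly like this (a sketch; the binary path assumes Ubuntu 22.04's PostgreSQL 14 package layout):

su - postgres -c "/usr/lib/postgresql/14/bin/initdb -D /var/lib/postgresql/14/main"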
But really, you should use the appropriate Docker image, and if you need more services you should start them in their own containers and connect them to the postgres one.
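For illustration, a minimal sketch of running the official image with the host directory from the question (the password and tag here are placeholders):

docker run --name test -d \
  -e POSTGRES_PASSWORD=changeme \
  -v /home/me/data:/var/lib/postgresql/data \
  postgres:14

Note that the official image stores its data in /var/lib/postgresql/data rather than /var/lib/postgresql/14/main, and it initializes that directory automatically on first start if it is empty.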
-----
I use Kafka and Kafka Connect (image: confluentinc/cp-kafka-connect).
When you use Kafka in a Docker container, if you want to operate Kafka you have to go into the container (like docker exec -it kafka or docker exec -it kafka-connect; that exec question is a separate thing I want to ask), right?
I tried putting some connectors (a JDBC connector, a MySQL connector) into the kafka-connect container, but it didn't work.
So my questions are:
After docker-compose up, if I want to run the worker with some connectors ('./bin/connect-distributed.sh ./etc/kafka/connect-distributed.properties'), which container do I have to go into?
And when I set a plugin path, where should I write it (kafka? kafka-connect?)
Sorry if this is difficult to read.
No, you don't need to exec anywhere unless you cannot download Kafka on your host machine to get the CLI scripts. And you'd only exec for kafka-topics, the console producer/consumer, kafka-consumer-groups, etc., not for any of the Connect scripts.
The Connect container automatically runs the distributed script, and you simply provide CONNECT_PLUGIN_PATH as an environment variable pointing at any directory in the container you want to use for the plugins (I like /opt/connectors if I mount a volume, but that's not where confluent-hub installs to for that image). That variable doesn't do anything for the broker image, only for Connect.
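As a sketch, the relevant part of a docker-compose.yml might look like this (the service name, tag and host path are placeholders):

  kafka-connect:
    image: confluentinc/cp-kafka-connect:7.1.1
    environment:
      CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components,/opt/connectors
    volumes:
      - ./connectors:/opt/connectors

Any connector plugins dropped into ./connectors on the host then show up on the worker's plugin path without exec-ing into the container.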
If your requirement is to stand up Kafka Connect, you can use the basic guide published by Confluent, "Build Your Own Apache Kafka® Demos".
Basically, you need to execute the following instructions:
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one/cp-all-in-one
git checkout 7.1.1-post
docker-compose up -d
This brings up Control Center at http://localhost:9021
If you need to install a connector, you can go to https://www.confluent.io/hub and select your specific connector.
Then you can create your own Docker image for a specific Kafka Connect server.
1.- Write a Dockerfile.
vim Dockerfile
2.- Add a connector (for example, JDBC) from Confluent Hub.
FROM confluentinc/cp-kafka-connect
ENV MYSQL_DRIVER_VERSION 5.1.39
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.5.0
RUN curl -k -SL "https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-${MYSQL_DRIVER_VERSION}.tar.gz" \
| tar -xzf - -C /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib \
--strip-components=1 mysql-connector-java-${MYSQL_DRIVER_VERSION}/mysql-connector-java-${MYSQL_DRIVER_VERSION}-bin.jar
3.- Build the docker image.
docker build . -t my-kafka-connect-jdbc:1.0.0
4.- Then edit your docker-compose.yml and change line 57
from:
image: cnfldemos/cp-server-connect-datagen:0.5.3-7.1.0
to:
image: my-kafka-connect-jdbc:1.0.0
5.- Finally, stop and start your Confluent Platform local environment:
docker-compose down
docker-compose up
Verify your containers are running:
docker ps
Test your Connect server:
curl --location --request GET 'http://localhost:8083/connectors'
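Once the worker is up, connectors are created through the same REST API. A hypothetical registration for the JDBC source installed above might look like this (the connection details are placeholders):

curl --location --request POST 'http://localhost:8083/connectors' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "jdbc-source-example",
    "config": {
      "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
      "connection.url": "jdbc:mysql://mysql:3306/mydb",
      "connection.user": "user",
      "connection.password": "password",
      "mode": "incrementing",
      "incrementing.column.name": "id",
      "topic.prefix": "jdbc-"
    }
  }'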
-----
I would like to set up my JHipster project on a remote server utilising docker-compose, as per here.
Am I right in thinking that (for the simplest approach) these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to the remote server, like this.
Load this image on the remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install Docker and docker-compose, install Java, and set JAVA_HOME.
There are two approaches:
create a Docker image and push it to Docker Hub, if you have a Docker Hub account
create the Docker image on the server itself
The second approach is better, as it reduces confusion.
Clone your repo to server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can leave out -DskipTests if you have test code you want to run.
Do
docker-compose -f src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>
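For completeness, the first approach (pushing the image to Docker Hub) would look roughly like this; the image and repository names are placeholders:

docker tag <IMAGE_NAME> <DOCKER_HUB_USER>/<IMAGE_NAME>:prod
docker push <DOCKER_HUB_USER>/<IMAGE_NAME>:prod

Then, on the server, docker pull the image and point the image entry in app.yml at it before running docker-compose up.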
-----
My question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container.
My goal
To run Jenkins fully dockerized, including dynamic slaves, and to be able to create Docker containers within the slaves.
Except for the last part, everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial, provided the Unix docker.sock is properly exposed to the Jenkins master.
The problem
Unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the Unix socket.
For the slaves, which are spawned dynamically, this approach does not work.
I tried to forward access to Docker with
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
while building the image. Unfortunately, so far I get a Permission denied (socket: /run/docker.sock) when trying to access docker.sock in the slave, which was created like this: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to docker.sock? Or how could I bake in the --privileged flag so that the permission-denied problem would go away?
With Docker 1.10 a new user namespace was introduced, so sharing docker.sock isn't enough, as root inside the container isn't root on the host machine anymore.
I recently played with Jenkins container as well, and I wanted to build containers using the host docker engine.
The steps I did are:
Find group id for docker group:
$ id
..... 999(docker)
Run the Jenkins container with two volumes: one contains the Docker client executable, the other shares the Docker Unix socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See more about additional groups here
Another approach would be to use the --privileged flag instead of --group-add, yet it's better to avoid it if possible.
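Since the docker group id differs from host to host, one way to avoid hard-coding 999 is to read it off the socket itself (a sketch, assuming GNU stat on the host):

docker run --name jenkins -tid -p 8080:8080 \
  --group-add="$(stat -c '%g' /var/run/docker.sock)" \
  -v /var/run/docker.sock:/var/run/docker.sock jenkins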
-----
I have the following Dockerfile for a container that runs just fine on my Mac (I'm using docker-machine):
FROM perl:latest
RUN cpanm SOAP::Lite
RUN cpanm LWP::Simple
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
ENTRYPOINT [ "perl", "./doceng_purge_tools/bin/akamai_purge_pattern_generic.pl" ]
# CMD /bin/bash
# docker build -t my_perl_purger_001 .
# docker run -t my_perl_purger_001 -pattern cd/Q14299_01 -server prod
However, when I run it using Docker on my corporate network, I get a low-level SSL error.
Forgive my ignorance, but I thought one feature of Docker is that I would be shielded from these platform gotchas.
Is there a way I can package this up, on my Mac, and just run the container in my Linux environment, behind my firewall?
I can supply more details about the SSL errors, if that helps.
... and just run the container in my Linux environment, behind my firewall?
...Can't connect to control.akamai.com:443
... but I thought a feature of docker is that I can be shielded from these platform gotchas.
If you run Docker behind a firewall which prohibits connections to the outside, you cannot expect to get a connection. Docker does not create some magic tunnel through the firewall; it relies instead on the existing network, the same as it relies on the existence of the CPU, RAM and storage. A working network is just another resource you need to provide for your Docker container.
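If the corporate network only allows outbound traffic through an HTTP proxy, one common way to provide that resource is via proxy environment variables; the proxy host and port below are placeholders:

docker run -t \
  -e http_proxy=http://proxy.example.com:3128 \
  -e https_proxy=http://proxy.example.com:3128 \
  my_perl_purger_001 -pattern cd/Q14299_01 -server prod

Whether the Perl code honors these variables depends on LWP's proxy handling (env_proxy), so treat this as a starting point rather than a guaranteed fix.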