Using an init container to run commands in the actual pod - Kubernetes

I need to install some libraries in my pod before it starts working as expected.
My use case: I need some libraries that will support SMB (samba), and the image that I have to use does not have it installed.
Unfortunately, exec'ing into the actual pod and running commands does not seem like a very good idea.
Is there a way by which I can use an init-container to install libsmbclient-dev in my ubuntu pod?
Edit: Some restrictions in my case.
I use a Helm chart to install my app (Nextcloud), so I guess I cannot use a custom image (as far as I know, we cannot use our own images in an existing Helm chart). This would have been the best solution.
I cannot run commands in the chart's values.yaml, since I do not use kubectl to install my app. I also need to restart apache2 after installing the library, and unfortunately, restarting apache2 restarts the pod, effectively making the whole installation meaningless.
Since the Nextcloud chart allows the use of init containers, I wondered if that could be used, but as far as I understand init containers, this is not possible (?).

You should build your own container image - e.g. with Docker - and push it to a container registry that is reachable from your cluster, e.g. Docker Hub, AWS ECR, Google Artifact Registry ...
First, install Docker (https://docs.docker.com/get-docker/).
Create an empty directory and change into it.
Then create a file named Dockerfile with the following content:
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y libsmbclient-dev \
&& rm -rf /var/lib/apt/lists/*
Execute
docker build -t myimage:latest .
This will pull the Ubuntu base image and build a new container image in which the commands from the RUN instruction have been executed. The image name will be myimage and the tag will be latest.
Then push your image with docker push to your registry.
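For example, the tag-and-push step would look roughly like this (the registry hostname is a placeholder - substitute Docker Hub, ECR, etc. as appropriate):

```shell
# Tag the locally built image with the registry path, then push it
docker tag myimage:latest myregistry.example.com/myimage:latest
docker push myregistry.example.com/myimage:latest
```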
See also Docker best practices:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
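Back in the cluster, the chart can then be pointed at the pushed image. A minimal sketch of the values override, assuming the nextcloud chart follows the common image.repository / image.tag convention (check the chart's own values.yaml for the exact keys):

```yaml
# custom-values.yaml -- key names assumed from the common chart convention
image:
  repository: myregistry.example.com/myimage
  tag: latest
```

Applied with something like `helm upgrade --install nextcloud nextcloud/nextcloud -f custom-values.yaml`.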

Related

Docker-compose up not executing updated Dockerfile

I have made edits to a Dockerfile to install some PHP modules from PECL and packages via apt-get. To my disappointment, none of them seem to have worked.
I then tried to test my build process by adding the invalid string test to my Dockerfile, docker-compose downing all my containers and then calling docker-compose up -d to see whether the Dockerfile would be executed - but my containers loaded with no complaint about the test string.
Below is my code:
test
RUN pecl update-channels;
RUN pecl install memcache;
RUN service memcached start
RUN apt-get install memcached -y
I have manually typed each one of those commands (with the exception of test, of course) and everything worked as expected. I then put the commands into my Dockerfile so I don't have to manually execute them. Which is where this issue began.
What am I missing?
Expanding on the comment above:
Roughly speaking, docker-compose up starts the existing container - it does not re-run the Dockerfile. docker-compose up --build re-builds the image first and recreates the container from it. Therefore, to pick up new services, packages, etc. from an edited Dockerfile, you need to re-build.
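In commands, picking up Dockerfile edits looks roughly like this (assuming the service definitions live in the current directory's docker-compose.yml):

```shell
# Stop and remove the old containers
docker-compose down
# Re-build the image(s) from the edited Dockerfile, then start fresh containers
docker-compose up -d --build
# Or, to force a clean re-build that ignores the layer cache:
docker-compose build --no-cache && docker-compose up -d
```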

nvidia-docker - can cuda_runtime be available while building a container?

While attempting to compile darknet in the build step of a Docker image, I constantly run into the error include/darknet.h:11:30: fatal error: cuda_runtime.h: No such file or directory.
I am building the container from the instructions here: https://github.com/NVIDIA/nvidia-docker/wiki/Deploy-on-Amazon-EC2. I have a simple Dockerfile I am testing with - the relevant parts:
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
...
WORKDIR /
RUN apt-get install -y git
RUN git clone https://github.com/pjreddie/darknet.git
WORKDIR /darknet
# Set OpenCV makefile flag
RUN sed -i '/OPENCV=0/c\OPENCV=1' Makefile
RUN sed -i '/GPU=0/c\GPU=1' Makefile
#RUN ln -s /usr/local/cuda-9.2 /usr/local/cuda
# HERE I have been playing with commands to show me the state of the docker image to try to troubleshoot the problem
RUN find / -name "cuda_runtime.h"
RUN ls /usr/local/cuda/lib64/
RUN less /usr/local/cuda/README
RUN make
Most of the documentation I see references using the NVIDIA libraries when running a container, but darknet compiles differently when built with GPU support, so I need cuda_runtime.h to be available at build time.
Perhaps I misunderstand what nvidia-docker is doing - I'm assuming that nvidia-docker exists because the NVIDIA code must be installed on the actual host machine and not inside the container, and that it uses some mechanism to share the "native" code with the containers so the GPU can be managed - is that correct?
Should I even be trying to build darknet when building my container or should I be installing it on the host machine, then making it available somehow to the container? This seems to go against the portability of the containers but I can live with some constraints to get access to the GPU.
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
Your image only has the bits and pieces of CUDA 9.2 needed to run a CUDA app, but not the bits needed to build one.
You need to use the -devel variant.
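Concretely, the fix is a one-line change to the FROM line, assuming the matching -devel tag exists for the same CUDA and Ubuntu versions:

```dockerfile
# -devel images ship the CUDA headers and nvcc needed at build time;
# -runtime images only ship the shared libraries needed at run time.
FROM nvidia/cuda:9.2-devel-ubuntu16.04
```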

Running Tomcat with PostgreSql using Dockerfile

I want to run a Tomcat with PostgreSql database within the same Dockerfile.
I have the following Dockerfile
FROM tomcat:8-jre7
MAINTAINER "Sonam <mymail@gmail.com>"
RUN apt-get -y update
Add simplewebapp.war /usr/local/tomcat/webapps/
RUN apt-get update && apt-get -y upgrade
FROM postgres
When I run the Docker image, I can't access Tomcat like I can when I comment out the postgres line. How do I get Postgres running, and Tomcat too?
thanks
You can only take one image as your base, just as you can only have one OS installed.
If you need to have two applications installed, then you need to build your own image - either starting from one image and adding to your Dockerfile the sequence of commands needed to install the other app, or starting from a base OS image and installing both.
Alternatively - why do you need them in the same container? Something like --link might do what you want, more effectively. Just run two containers, and link them.
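A minimal sketch of the two-container approach (container names are placeholders; note that --link is a legacy flag, and a user-defined network achieves the same thing on current Docker):

```shell
# Start postgres first, giving it a name other containers can refer to
docker run -d --name mydb -e POSTGRES_PASSWORD=secret postgres
# Start tomcat linked to it; inside the tomcat container the database
# is then reachable under the hostname "mydb"
docker run -d --name myapp --link mydb:mydb -p 8080:8080 tomcat:8-jre7
```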

Cannot get postgresql-9.3-postgis-2.1 on Ubuntu 14.04.1 Docker container

I tried to install postgresql-9.3-postgis-2.1 or postgresql-9.1-postgis-2.1 for a cloned app, but I can only get postgresql-9.4-postgis-2.1 on my Ubuntu Docker image, which is built from the python:2.7 image.
I looked into the image and found it is based on Ubuntu 14.04.1. I tried to install PostGIS on my Xubuntu 14.04.2 VM, and everything was OK.
How can I get the installation to work?
Dockerfile is pretty easy:
FROM python:2.7
RUN mkdir /workspace
RUN mkdir /data
WORKDIR /workspace
RUN apt-get update
RUN apt-get install postgresql postgresql-common postgresql-9.3-postgis-2.1
The error output is the standard one:
E: Unable to locate package postgresql-9.3-postgis-2.1
E: Couldn't find any package by regex 'postgresql-9.3-postgis-2.1'
From your comment it appears you load the python libraries before the postgresql libraries. I assume that your python app needs postgresql access and that it uses one of the python wrappers around the postgresql C libraries.
If that is the case then install the postgresql libraries before installing the python libraries, and do not forget to add the -dev libraries.
What I do in such a case is to build a minimal Docker image, start a root shell in the container, do the install manually, take notes, and use them to update the Dockerfile. Alternatively you can run
$ docker exec -t -i <container> bash
to get a shell in the container and try out what needs to be done.
Thanks to everyone who tried to help me! Though I finally fixed this myself: there is nothing wrong with the Dockerfile, which is pretty simple, but the image I chose is not a typical Ubuntu image - the official Python image uses buildpack-deps:jessie instead of the ubuntu:14.04 image:
https://github.com/docker-library/python/blob/master/2.7/Dockerfile
That caused the different behaviour between Docker and the Ubuntu VM.
Finally, I built a Python image from Ubuntu:12.04 and fixed the issue.

Docker workflow for scientific computing

I'm trying to imagine a workflow that could be applied in a scientific work environment. My work involves doing some scientific coding, basically with Python, pandas, numpy and friends. Sometimes I have to use modules that are not common standards in the scientific community, and sometimes I have to integrate some compiled code into my chain of simulations. Most of the time, the code I run is parallelised from an IPython notebook.
What do I find interesting about docker?
The fact that I could create a Docker image containing my code and its working environment. I could then send the image to my colleagues without asking them to change their work environment (e.g. install an outdated version of a module) so that they can run my code.
A rough draft of the workflow I have in mind goes something as follows:
Develop locally until I have a version I want to share with somebody.
Build a Docker image, possibly with a hook from a git repo.
Share the image.
Can somebody give me some pointers on what I should take into account to develop this workflow further? A point that intrigues me: can code running in a Docker container launch parallel processes on the several cores of the machine, e.g. an IPython notebook connected to a cluster?
Docker can launch multiple processes/threads on multiple cores. Running multiple processes in one container may require a supervisor (see: https://docs.docker.com/articles/using_supervisord/ )
You should probably build an image that contains the things you always use and use it as a base for all your projects. (That would save you the pain of writing a complete Dockerfile each time.)
Why not develop directly in a container and use the commit command to save your progress to a local Docker registry? Then share the final image with your colleagues.
How to make a local registry : https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
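The commit-and-push loop sketched above would look roughly like this (container, image, and registry-host names are placeholders; pushing to a plain-HTTP local registry may additionally require configuring it as an insecure registry in the Docker daemon):

```shell
# Snapshot the running container's filesystem as a new image
docker commit my-dev-container myname/scientific-base:snapshot
# Tag it for the local registry and push
docker tag myname/scientific-base:snapshot localhost:5000/scientific-base:snapshot
docker push localhost:5000/scientific-base:snapshot
# Colleagues on the same network can then pull it from the registry host
```

Note that docker commit trades reproducibility for convenience; a Dockerfile remains the more shareable record of how the environment was built.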
Even though you'll have a full container, I think a package manager like conda can still be a solid part of the base image for your workflow.
FROM ubuntu:14.04
RUN apt-get update && apt-get install curl -y
# Install miniconda
RUN curl -LO http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
RUN bash Miniconda-latest-Linux-x86_64.sh -p /miniconda -b
RUN rm Miniconda-latest-Linux-x86_64.sh
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
(from a nice example showing Docker + miniconda + Flask)
Regarding source activate <env> in a Dockerfile: RUN uses /bin/sh by default and source is a bash builtin, so you need:
RUN /bin/bash -c "source activate <env> && <do something in the env>"
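Putting it together, a sketch continuing the Dockerfile above (the environment name myenv and the packages are illustrative placeholders):

```dockerfile
# Create an environment (name and packages are illustrative)
RUN conda create -y -n myenv numpy pandas
# Each RUN starts a fresh shell, so the activation does not persist
# to later RUN lines; anything needing the env must repeat the wrapper.
RUN /bin/bash -c "source activate myenv && pip install ipython"
```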