
Is it possible with Docker to combine two images into one?
Like this here:
genericA --
            \
             ---> specificAB
            /
genericB --
For example there's an image for Java and an image for MySQL.
I'd like to have an image with Java and MySQL.

No, you can only inherit from one image.
You probably don't want Java and MySQL in the same image, as it's more idiomatic to have a single component per container, i.e. create a separate MySQL container and link it to the Java container rather than putting both into the same container.
However, if you really must have them in the same image, write a Dockerfile with Java as the base image (FROM statement) and install MySQL in the Dockerfile. You should be able to largely copy the statements from the official MySQL Dockerfile.
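A rough sketch of that approach, assuming a Debian-based OpenJDK base image (the base tag and package names are illustrative, not taken from the official Dockerfiles):
FROM openjdk:8-jdk
# install the MySQL server on top of the Java base image
RUN apt-get update && \
    apt-get install -y mysql-server && \
    rm -rf /var/lib/apt/lists/*
# you would still need an entrypoint/startup script that launches both mysqld and your Java app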

Docker doesn't directly support this, but you can use DockerMake (full disclosure: I wrote it) to manage this sort of "inheritance". It uses a YAML file to set up the individual pieces of the image, then drives the build by generating the appropriate Dockerfiles.
Here's how you would build this slightly more complicated example:
                             --> genericA --
                            /               \
debian:jessie --> customBase                 ---> specificAB
                            \               /
                             --> genericB --
You would use this DockerMake.yml file:
specificAB:
  requires:
    - genericA
    - genericB

genericA:
  requires:
    - customBase
  build_directory: [some local directory]
  build: |
    # Dockerfile commands go here, such as
    ADD installA.sh .
    RUN ./installA.sh

genericB:
  requires:
    - customBase
  build: |
    # Here are some other commands you could run
    RUN apt-get install -y genericB
    ENV PATH=$PATH:something

customBase:
  FROM: debian:jessie
  build: |
    RUN apt-get update && apt-get install -y build-essential
After installing the docker-make CLI tool (pip install dockermake), you can then build the specificAB image just by running
docker-make specificAB

If you do docker commit, it is not easy to see what commands were used to build your container; you have to issue docker history <image>.
If you have a Dockerfile, you can just look at it and see how the image was built and what it contains.
docker commit is 'by hand' and therefore prone to errors; docker build with a working Dockerfile is much better.
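For illustration, the two workflows compare roughly like this (image and container names are placeholders):
# 'by hand': snapshot a running container, then reconstruct its history afterwards
docker commit my-container my-image:by-hand
docker history my-image:by-hand

# with a Dockerfile: the recipe itself documents and reproduces the image
docker build -t my-image:from-dockerfile .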

You can put multiple FROM commands in a single Dockerfile.
https://docs.docker.com/reference/builder/#from
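Note that in current Docker, multiple FROM lines define a multi-stage build rather than a merged image; only the last stage becomes the final image, and earlier stages are pulled in explicitly with COPY --from. A minimal sketch (image tags and paths are illustrative):
# first stage: build the application with a full JDK image
FROM maven:3-openjdk-11 AS build
WORKDIR /app
COPY . .
RUN mvn -q package

# second stage: only this FROM produces the final image
FROM openjdk:11-jre-slim
COPY --from=build /app/target/app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]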

Related

Deploy a private image inside minikube on linux

I am starting to use kubernetes/Minikube to deploy my application which is currently running on docker containers.
Docker version: 19.03.7
Minikube version: v1.25.2
From what I read I gather that first of all I need to build my frontend/backend images inside minikube.
The image is available on the server and I can see it using:
$ docker image ls
The first step, as far as I understand, is to use the "docker build" command:
$ docker build -t my-image .
However, the dot at the end, as I understand it, means it is looking for a Dockerfile in the current directory, and indeed I get an error:
unable to evaluate symlinks in Dockerfile path: lstat
/home/dep/k8s-config/Dockerfile: no such file or directory
So, where do I get this Dockerfile for the "docker build" command to succeed?
Thanks
My misunderstanding...
I have the Dockerfile now, so I can put it anywhere and run docker build from there.
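As a side note, when building images for Minikube it is common to point your local Docker client at Minikube's Docker daemon first, so the resulting image is visible inside the cluster. A minimal sketch (the image name and path are just examples):
# point this shell's docker client at Minikube's Docker daemon
eval $(minikube docker-env)

# build from the directory that contains the Dockerfile
cd /path/to/app
docker build -t my-image .

# confirm the image now exists inside Minikube
docker image ls | grep my-image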

Can I run aws-xray on the same ECS container?

I don't want to have to deploy a whole other ECS service just to enable X-Ray. I'm hoping I can run X-Ray on the same Docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services that serve only analytical/logging functions; I already have a logstash container I'm not happy about. My feeling is that apps themselves should be able to do this sort of thing.
While we do have the Docker Hub image of the X-Ray daemon, you can absolutely run the daemon in the same Docker container as your application - that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is to use supervisord (see the link for an example of that), but I ended up just making a very simple script:
#!/bin/bash
# start.sh: run the X-Ray daemon in the background, then keep Tomcat in the foreground
/usr/bin/xray &
$CATALINA_HOME/bin/catalina.sh run
And then having a Dockerfile:
FROM tomcat:9-jdk11-openjdk
# install unzip and fetch the X-Ray daemon binary
RUN apt-get update && apt-get install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.
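For reference, a rough sketch of the supervisord route (the config path and program names below are my own assumptions, not from the original setup): install the supervisor package in the image, copy in a config along these lines, and make supervisord the container's CMD in foreground mode (e.g. CMD ["supervisord", "-n"]).
; /etc/supervisor/conf.d/xray-tomcat.conf (hypothetical location)
[program:xray]
command=/usr/bin/xray
autorestart=true

[program:tomcat]
; CATALINA_HOME defaults to /usr/local/tomcat in the official tomcat image
command=/usr/local/tomcat/bin/catalina.sh run
autorestart=true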

Where does docker-compose up keep its images?

I'm aware that if I change my Dockerfile or build directory, I'm supposed to run docker-compose build. This surely implies that docker-compose has some cache somewhere of its already-built images.
Where is it? How do I purge it?
I'd like to get back to a state where docker-compose up is forced to do the initial build steps, without me needing to remember to run docker-compose build.
I've run docker stop $(docker ps -aq) and docker X prune (for X in container, image, volume, network), but docker-compose up still refuses to run the build steps in my Dockerfile.
Or am I completely misunderstanding how docker-compose works?
You can pass an additional argument (--no-cache) to skip using the cache during the build process.
docker@default:~$ docker-compose build --help
Build or rebuild services.

Services are built once and then tagged as `project_service`,
e.g. `composetest_db`. If you change a service's `Dockerfile` or the
contents of its build directory, you can run `docker-compose build` to rebuild it.

Usage: build [options] [--build-arg key=val...] [SERVICE...]

Options:
    --compress              Compress the build context using gzip.
    --force-rm              Always remove intermediate containers.
    --no-cache              Do not use cache when building the image.
    --pull                  Always attempt to pull a newer version of the image.
    -m, --memory MEM        Sets memory limit for the build container.
    --build-arg key=val     Set build-time variables for services.
docker@default:~$
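For example, to force a full rebuild and then start the stack:
docker-compose build --no-cache
docker-compose up

# or, on reasonably recent versions, rebuild and start in one step:
docker-compose up --build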
docker-compose uses images, which you can see with docker images:
$ docker images
REPOSITORY          TAG       IMAGE ID       CREATED        SIZE
docker_ubuntu       latest    a7dc4f9bbdfb   19 hours ago   158MB
ubuntu              16.04     0b1edfbffd27   3 weeks ago    113MB
hello-world         latest    f2a91732366c   6 months ago   264MB
The docker-compose images are prefixed with (usually) the name of the directory you're running docker-compose in. So, for me, the docker_ubuntu image.
docker image prune thinks that the images are in use, so it doesn't prune them.
To get rid of the docker-compose image, you need to delete it explicitly:
docker image rm docker_ubuntu

How to deploy a product using Docker in a few steps

I cannot understand the concept of Docker. I am trying to install this component (graphite rendering graphs from influxdb):
https://github.com/vimeo/graphite-api-influxdb-docker
This is my first time dealing with Docker, and I need to deploy graphite+influxdb from that link by the end of this working night.
The question is: do I need to find the GitHub repositories of graphite and influxdb, install them, and then make them work under Docker?
What is Docker for, and how do I quickly deploy this project?
As I understand it, these are the steps from the GitHub link:
#cd /root
#yum install docker
#docker pull vimeo/graphite-api-influxdb
#git clone https://github.com/vimeo/graphite-api-influxdb-docker.git
#cd graphite-api-influxdb-docker
#ls
Dockerfile graphite-api.sh graphite-api.yaml LICENSE NOTICE README.md
#vi graphite-api.yaml (change <host> to localhost)
#docker build .
#docker run -p 8000:8000 <image-id> (is it correct to use vimeo/graphite-api-influxdb as the <image-id> here?)
I feel like I'm thinking in the wrong direction, and a few words of guidance would help me a lot.
First you need to clone the GitHub repository
git clone https://github.com/vimeo/graphite-api-influxdb-docker.git
Second, you have to add your own graphite-api.yaml (if you want)
Build it:
docker build .
If you need more information, read the "Building an image from a Dockerfile" section from this link to learn how to build a Docker image from a Dockerfile.
You can add a name with the -t option (and use it as the ID in the next step).
And, finally, run the container:
docker run -p 8000:8000 [ID]
[ID] is given to you when you build the Docker image (it is explained in the link).
I hope my answer will help you.
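Assuming you tag the image yourself (the tag below is only an example), the whole sequence could look like this:
git clone https://github.com/vimeo/graphite-api-influxdb-docker.git
cd graphite-api-influxdb-docker
# edit graphite-api.yaml to point at your InfluxDB host, then:
docker build -t graphite-api-influxdb .
docker run -p 8000:8000 graphite-api-influxdb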

Extending docker official postgres image

I'm trying to extend the official docker postgres image in order to install a custom python module so I can use it from within a plpython3 stored procedure.
Here's my Dockerfile:
FROM postgres:9.5
RUN apt-get update && apt-get install -y postgresql-plpython3-9.5 python3
ADD ./repsug/ /opt/smtnel/repsug/
WORKDIR /opt/smtnel/repsug/
RUN ["python3", "setup.py", "install"]
WORKDIR /
My question is: do I need to add ENTRYPOINT and CMD commands to my Dockerfile? Or are they "inherited" FROM the base image?
The example in the official readme.md shows a Dockerfile which only changes the locale, without ENTRYPOINT or CMD.
I also read in the readme that I can extend the image by executing custom sh and/or sql scripts. Should I use this feature instead of creating my custom image? The question in this case is how to make sure the scripts are run only once, at "install time", and not every time. I mean, if the database is already created and populated, I would not want to overwrite it.
Thanks,
Awer
If you define a new ENTRYPOINT in your Dockerfile, it will override the inherited ENTRYPOINT. In that case, Postgres will not be initialized automatically (unless you write the same ENTRYPOINT).
https://docs.docker.com/engine/reference/builder/#entrypoint
Also, the official postgres image lets you add .sql/.sh files to the /docker-entrypoint-initdb.d folder; they are executed only when the database is first initialized (i.e. when the data directory is empty), not on every start.
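For example, a hypothetical init script could be added like this (file names are illustrative); because it lives in /docker-entrypoint-initdb.d, it only runs when the data directory is empty, which addresses the "run only once at install time" concern:
FROM postgres:9.5
# executed once, only when the database cluster is first initialized
COPY init-schema.sql /docker-entrypoint-initdb.d/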
Finally, if you don't want Postgres to lose your data, you can mount a volume between the /var/lib/postgresql/data folder and a local folder in each docker run ... command to persist it.
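A minimal sketch of persisting the data with a bind mount (the local path, container name and image tag are placeholders):
docker run -d \
  --name my-postgres \
  -e POSTGRES_PASSWORD=secret \
  -v /srv/pgdata:/var/lib/postgresql/data \
  my-custom-postgres:9.5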