Is there any way to mount a GitHub (or other repo's) HTTP link as a Docker volume, so that when I start my Docker container it runs with the new code I've pushed to GitHub or Bitbucket?
You cannot mount remote resources from the Internet as volumes.
What you can do is include a shell script in your Docker image that downloads those resources, and have it run when the container starts.
Dockerfile
FROM ubuntu:latest
# curl and unzip are not in the base image
RUN apt-get update && apt-get install -y curl unzip
COPY download-from-github.sh /
# your stuff...
ENTRYPOINT ["bash", "/download-from-github.sh"]
download-from-github.sh
curl -sL https://github.com/you/repo/archive/master.zip > /tmp/master.zip
unzip /tmp/master.zip -d /opt
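To have the container fetch the latest pushed code each time it starts, the script can download, unpack, and then hand off to the application's start command. A sketch (the final start command is a placeholder; the archive of a repo named repo unpacks to repo-master):

```shell
#!/bin/bash
# Fetch the latest code from GitHub, unpack it, then start the app.
set -e
curl -sL https://github.com/you/repo/archive/master.zip -o /tmp/master.zip
unzip -o /tmp/master.zip -d /opt
exec /opt/repo-master/run.sh   # placeholder: replace with your app's start command
```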
I was trying to delete the pg_log folder (it was huge, 3 GiB), but I accidentally removed everything in the data folder (with rm ./*).
Now all of the .conf files are gone from the data folder and I'm getting this error in the log:
"Data page checksums are disabled"
The Postgres instance was created with Docker from Docker Hub (15-alpine).
I didn't touch any config files there.
Where can I find the default Postgres config files? I think I can get it back to work by restoring the .conf files.
Steps to recover the default config files using the Docker image.
Pull the Docker image for Postgres:
docker pull postgres:15-alpine
Run container:
docker run -e POSTGRES_HOST_AUTH_METHOD=trust postgres:15-alpine
Keep the current terminal open and open a new terminal.
Connect to a shell in the Docker container:
docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS         PORTS      NAMES
aee1237294b8   postgres:15-alpine3.17   "docker-entrypoint.s…"   7 seconds ago   Up 6 seconds   5432/tcp   naughty_knuth
Copy the container ID from the docker ps output and open a shell in the container:
docker exec -it aee1237294b8 bash
Go to data folder and archive it:
cd /var/lib/postgresql/data/
tar -zcf pgdata.tar.gz *
Exit docker container shell:
exit
Copy archive from docker container:
docker cp aee1237294b8:/var/lib/postgresql/data/pgdata.tar.gz ~/Downloads/pgdata.tar.gz
As a result, I've downloaded the config from postgres:15-alpine.
You can grab it from here: https://anarjafarov.me/pg.conf.zip
and watch video instructions: https://www.youtube.com/watch?v=fgHtvwbQJDE
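Alternatively, if you only need the config files themselves, you can copy them straight out of a throwaway container instead of archiving the whole data directory. A sketch (the container name pg-tmp and the sleep interval are assumptions):

```shell
# Start a throwaway container, copy the default *.conf files out, then remove it.
docker run -d --name pg-tmp -e POSTGRES_HOST_AUTH_METHOD=trust postgres:15-alpine
sleep 5   # give initdb a moment to generate the config files
docker cp pg-tmp:/var/lib/postgresql/data/postgresql.conf .
docker cp pg-tmp:/var/lib/postgresql/data/pg_hba.conf .
docker cp pg-tmp:/var/lib/postgresql/data/pg_ident.conf .
docker rm -f pg-tmp
```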
I need to create a Docker image from a Dockerfile which is available on a remote computer.
Will the below work in PowerShell?
docker build -t MyFirstDockerImage -f //RemoteServerpath/FolderStructure/ .
docker build can only read local files, not remote ones, i.e. the Dockerfile and build context must be copied onto the machine where docker build is executed.
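A sketch of that copy-then-build flow (the server and paths are hypothetical; note also that image names must be lowercase, so MyFirstDockerImage would be rejected regardless):

```shell
# Copy the remote build context locally first, then build from it.
scp -r user@remoteserver:/FolderStructure ./build-context
docker build -t myfirstdockerimage ./build-context
```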
I would like to setup my JHipster project on a remote server utilising docker-compose as per here.
Am I right in thinking (for the simplest approach), these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to the remote server, like this.
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java, and set JAVA_HOME.
There are two approaches:
create the Docker image locally and push it to Docker Hub (if you have a Docker Hub account)
create the Docker image on the server itself
The second approach is simpler and reduces confusion.
Clone your repo to the server:
cd <APPLICATION_FOLDER>
Build the image:
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can drop -DskipTests if you want your tests to run during the build.
Start the application:
docker-compose -f ./src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>
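Putting the second approach together, the whole server-side flow might look like this (the repository URL is a placeholder):

```shell
git clone https://github.com/<you>/<your-app>.git
cd <your-app>
./mvnw package -Pprod docker:build -DskipTests
docker images
docker-compose -f ./src/main/docker/app.yml up -d
docker ps -a
```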
I was trying to install Apache MADlib on Postgres. Having difficulty with the YUM approach, I moved to the Docker approach suggested by this website: https://pgxn.org/dist/madlib/
I was able to pull the Docker image as suggested in step 1. Now at step 2 I am stuck on the comment "Path to incubator-madlib directory". I am not able to understand whether it should be the URL of the MADlib incubator, such as "https://github.com/apache/incubator-madlib", or whether it refers to a location on the local disk. It would be great to see an example of how to run this command.
2) Launch a container corresponding to the MADlib image, mounting the
source code folder to the container:
docker run -d -it --name madlib \
    -v (path to incubator-madlib directory):/incubator-madlib/ \
    madlib/postgres_9.6
The (path to incubator-madlib directory) refers to wherever you have git-cloned the MADlib code base on your machine. Say, for example, your home directory is /home/xyz/ and you have cloned the MADlib code base there; you should then have a directory called /home/xyz/incubator-madlib. You can now run the docker command documented in the MADlib repo as follows:
docker run -d -it --name madlib -v /home/xyz/incubator-madlib/:/incubator-madlib/ madlib/postgres_9.6
You were probably getting the Permission denied docker:... error after trying Robert's suggestion because $(pwd) was not referring to your incubator-madlib source code folder but to /var/lib/docker/devicemapper/devicemapper/data, which should not be the case. In any case, it is a better idea to pass the incubator-madlib directory's absolute path in the docker command, as specified above.
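To see why $(pwd) matters: it expands to whatever directory you happen to be in when you invoke docker run, so the same command mounts different host paths from different locations. A quick illustration (the directory name is just an example):

```shell
# $(pwd) expands to the current working directory at invocation time,
# so the -v mount source depends on where you run the command from.
src="$(pwd)/incubator-madlib"   # example clone location
echo "$src"
```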
As is documented, that is the directory on your computer where the source code resides:
where incubator-madlib is the directory where the MADlib source code resides.
So, supposing that you have downloaded the source code into ./incubator-madlib, run it like this:
docker run -d -it --name madlib -v $(pwd)/incubator-madlib:/incubator-madlib/ madlib/postgres_9.6
Then check the container logs:
docker logs -f madlib
I have 2 Docker containers: App & Web.
App: a simple container with the PHP application code. It is used only to store the code and deliver it to the remote Docker host.
App image Dockerfile:
FROM debian:jessie
COPY . /var/www/app/
VOLUME ["/var/www/app"]
CMD ["true"]
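With VOLUME and CMD ["true"], App is the classic "data container" pattern; it is usually wired to another container with --volumes-from. A sketch (image and container names are assumptions):

```shell
# Build the code image, create a (never-running) container from it,
# then mount its volume into the web container.
docker build -t app-code .
docker create --name app app-code
docker run -d --name web --volumes-from app web-image
```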
Web: a web service container consisting of PHP-FPM + Nginx.
Web image Dockerfile:
FROM nginx
# Remove default nginx configs.
RUN rm -f /etc/nginx/conf.d/*
# Install packages
RUN apt-get update && apt-get install -my \
supervisor \
curl \
wget \
php5-cli \
php5-curl \
php5-fpm \
php5-gd \
php5-memcached \
php5-mysql \
php5-mcrypt \
php5-sqlite \
php5-xdebug \
php-apc
# Ensure that PHP5 FPM is run as root.
RUN sed -i "s/user = www-data/user = root/" /etc/php5/fpm/pool.d/www.conf
RUN sed -i "s/group = www-data/group = root/" /etc/php5/fpm/pool.d/www.conf
# Pass all docker environment
RUN sed -i '/^;clear_env = no/s/^;//' /etc/php5/fpm/pool.d/www.conf
# Add configuration files
COPY config/nginx.conf /etc/nginx/
COPY config/default.vhost /etc/nginx/conf.d
COPY config/supervisord.conf /etc/supervisor/conf.d/
COPY config/php.ini /etc/php5/fpm/conf.d/40-custom.ini
VOLUME ["/var/www", "/var/log"]
EXPOSE 80 443 9000
ENTRYPOINT ["/usr/bin/supervisord"]
My question: is it possible to link the Web container and the App container via a socket?
The main reason for this is to use the App container to deploy updated code to the remote Docker host.
Using volumes/named volumes to share code between containers is not a good idea, but sockets can help.
Thank you very much for your help and support!
If both containers run on the same host, it's possible to share a socket between the two, since sockets are plain files.
You can create a local Docker volume and mount that volume in both containers, then configure your program(s) to use that path.
docker volume create --name=phpfpm
docker run -v phpfpm:/var/phpfpm web
docker run -v phpfpm:/var/phpfpm app
If the socket can be created on the host, you can mount the file into both containers. This is the method used to let a Docker container control the host's Docker daemon.
docker run -v /var/container/some.sock:/var/run/some.sock web
docker run -v /var/container/some.sock:/var/run/some.sock app
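For example, with the named-volume method, PHP-FPM in the app container could listen on a socket under the shared path (in its pool config: listen = /var/phpfpm/php-fpm.sock), and the Nginx vhost in the web container would point FastCGI at the same file. A sketch, where the socket filename is an assumption:

```
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/phpfpm/php-fpm.sock;
}
```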