I am attempting to use JHipster to generate a set of apps in a microservices architecture. From within the jhipster-devbox, I do the following:
$ mkdir mygateway && cd mygateway
$ yo jhipster - select gateway, answer all questions
$ ./gradlew bootRepackage -Pdev buildDocker (I want to make sure this all runs locally before I try to move it to AWS)
$ cd .. && mkdir myapi && cd myapi
$ yo jhipster - select microservices app (same package name as gateway...don't know if that matters, but not for this question)
$ ./gradlew bootRepackage -Pdev buildDocker
$ cd .. && mkdir docker-compose && cd docker-compose
$ yo jhipster:docker-compose (all items have run successfully to this point)
$ docker-compose up -d returns:
ERROR: Conflict. The name "/jhipster-registry" is already in use by
container
a785f619b5dd985b3ff30a8ed6e41066795eb8b5e108d2549cd4a9d5dc27710a. You
have to remove (or rename) that container to be able to reuse that
name.
It would appear that "jhipster-registry" is defined inside both the gateway and API apps I just created... I tried commenting it out of the app.yml file, with no success.
I had the same problem, but it's that you already have a Docker container named "jhipster-registry". I imagine it's because you had already created at least one other JHipster microservices stack with Docker before. If you remove the jhipster-registry container (i.e. docker rm jhipster-registry) and then run docker-compose up -d again, you should be fine, as it will recreate the container. I'm not sure why the jhipster-registry container doesn't get prefixed by the directory it's in (as the other containers in the stack do); I think it's because the jhipster-registry.yml file specifically names the container "jhipster-registry".
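For example:
# see which container currently owns the name
docker ps -a --filter name=jhipster-registry
# remove it (add -f if it is still running), then bring the stack back up
docker rm jhipster-registry
docker-compose up -d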
For years we have built base PHP-FPM container images locally with code like this to include Oracle DB support:
ARG PHP_VERSION=7.4
ARG PHP_TYPE=fpm
FROM php:${PHP_VERSION}-${PHP_TYPE}
# OCI8_VERSION must be declared after FROM to be usable in the RUN below
ARG OCI8_VERSION
ENV LD_LIBRARY_PATH /usr/local/instantclient
ENV ORACLE_BASE /usr/local/instantclient
ENV ORACLE_HOME /usr/local/instantclient
ENV TNS_ADMIN /etc/oracle
COPY oracle /etc/oracle
RUN echo 'instantclient,/usr/local/instantclient' | pecl install oci8-${OCI8_VERSION} \
&& docker-php-ext-configure oci8 --with-oci8=instantclient,/usr/local/instantclient \
&& docker-php-ext-install oci8 \
&& docker-php-ext-configure pdo_oci --with-pdo-oci=instantclient,/usr/local/instantclient \
&& docker-php-ext-install pdo_oci \
&& rm -rf /tmp/pear
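We then build that base image locally with something along these lines (the tag and OCI8 version here are only examples):
# local build of the base image; tag and version values are illustrative
docker build \
  --build-arg PHP_VERSION=7.4 \
  --build-arg PHP_TYPE=fpm \
  --build-arg OCI8_VERSION=2.2.0 \
  -t php-oracle-base:7.4-fpm .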
From this image we build application-specific images that are deployed to a Kubernetes cluster, and the TNS_ADMIN variable and its value have persisted without issue.
We recently changed how the images are built (using Kaniko and GitLab CI instead of building them locally) and found that when the image is deployed to the Kubernetes cluster (via Helm), the TNS_ADMIN variable is missing (not just a blank value, the entire variable). Another change was how the Oracle pieces are installed (using docker-php-extension-installer), so the pertinent Dockerfile code now looks like this:
ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
RUN chmod +x /usr/local/bin/install-php-extensions && \
install-php-extensions oci8 pdo_oci
# Oracle client config
ENV TNS_ADMIN=/etc/oracle
COPY php.cli/oracle /etc/oracle
And here is the GitLab CI Kaniko-related code that builds the application-specific images (only $PHP_TYPE applies to the image in question):
- |
LOCAL_REPOSITORY=${CI_REGISTRY}/<internal namespace path>/$REPOSITORY
# Build config.json for credentials
echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
/kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/$DOCKER_FILE_PATH/Dockerfile --build-arg PHP_VERSION=$PHP_VERSION --build-arg PHP_TYPE=$PHP_TYPE --build-arg PHPUNIT_VERSION=$PHPUNIT_VERSION --build-arg PHPCS_VERSION=$PHPCS_VERSION --build-arg PHPCSFIXER_VERSION=$PHPCSFIXER_VERSION --destination $LOCAL_REPOSITORY:$PHP_VERSION-$TAG_NAME
Thinking this was possibly due to how Kaniko works, or to the changes in the Oracle install process, we pulled the base image and the application image separately and ran them with a bash shell. When pulled locally, the TNS_ADMIN variable is present. This suggests that whatever is occurring happens once Helm deploys the image to the cluster.
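For reference, the local check was along these lines (image name and tag are placeholders):
# pull the freshly built image and inspect its environment directly
docker pull registry.example.com/php-base:7.4-fpm
docker run --rm --entrypoint env registry.example.com/php-base:7.4-fpm | grep TNS_ADMIN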
What is vexing is that, on the surface, neither of the changes we made should affect the setting of an environment variable in the image, yet those were the only changes that coincide with the issue arising. So the issue seems to occur when deploying the image to our cluster. That process itself has not changed at all, and the Helm chart has not changed either, which suggests it is not part of the issue; that said, the issue only shows up when Helm deploys the chart that uses the image.
Has anyone else seen something like this, or have any ideas where to center our search for answers?
Well, our issue is probably one familiar to many people running applications in Kubernetes: the image pull policy for the Helm deployment was set to IfNotPresent, and a cached image without the ENV value set was being used (that image had been built from a Dockerfile that did not set TNS_ADMIN). We have a lot of moving parts in our process and made multiple changes that went unnoticed because of this.
I am of course chastened by this explanation, so I will offer the advice to always make sure you are pulling a fresh image as the first step when troubleshooting issues with Kubernetes/Helm deployments.
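As a concrete sketch (names are placeholders, and the values key assumes your chart exposes the pull policy as image.pullPolicy):
# confirm what the running pod actually sees
kubectl exec deploy/my-app -- env | grep TNS_ADMIN
# make sure nodes pull the image instead of reusing a cached copy
helm upgrade my-app ./chart --set image.pullPolicy=Always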
I am starting to use Kubernetes/Minikube to deploy my application, which currently runs in Docker containers.
Docker version:19.03.7
Minikube version: v1.25.2
From what I read I gather that first of all I need to build my frontend/backend images inside minikube.
The image is available on the server and I can see it using:
$ docker image ls
The first step, as far as I understand, is to use the "docker build" command:
$ docker build -t my-image .
However, the dot at the end, as I understand it, means it looks for a Dockerfile in the current directory, and indeed I get an error:
unable to evaluate symlinks in Dockerfile path: lstat
/home/dep/k8s-config/Dockerfile: no such file or directory
So, where do I get this Dockerfile for "docker build" to succeed?
Thanks
My misunderstanding...
I have the Dockerfile now, so I can put it in a directory of my choice and run docker build from there.
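If the goal is to make the image visible to Minikube without pushing it to a registry, one common approach is to build it against Minikube's Docker daemon, roughly:
# point the docker CLI at Minikube's Docker daemon, then build there
eval $(minikube docker-env)
docker build -t my-image /path/to/dir/with/Dockerfile   # or cd there and use .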
I have an ASP.NET Core 2.0 application whose Docker image runs fine locally, but when that same image is deployed to an AKS cluster, the pods have a status of CrashLoopBackOff and the pod log shows:
Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409.
And since you can't SSH into an AKS cluster, it's pretty difficult to figure this out.
Dockerfile:
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY . .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapi.dll"]
It turned out that our build system wasn't putting the app code into the container as we thought. Since the container wasn't runnable, I didn't know how to inspect its contents until I found this command, which is a lifesaver for these kinds of situations:
docker run --rm -it --entrypoint=/bin/bash [image_id]
... at which point you can freely inspect/verify the contents of the container.
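For example, once inside that shell you can quickly check whether the published output actually made it into the image (paths taken from the Dockerfile above):
ls -la /app          # should contain the published output
ls /app/myapi.dll    # the DLL the ENTRYPOINT tries to run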
I just ran into the same issue, and it's because I was missing a key piece of the puzzle.
docker-compose -f docker-compose.ci.build.yml run ci-build
VS2017 Docker Tools creates that docker-compose.ci.build.yml file. After that command runs, the publish folder is populated and docker build -t <tag> builds a populated image (rather than one with an empty /app folder).
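So the working sequence is roughly (the tag is an example):
# populate the publish output first
docker-compose -f docker-compose.ci.build.yml run ci-build
# then build the runtime image, which now copies a populated publish folder
docker build -t myregistry/myapi:latest .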
I have 2 Docker containers: App & Web.
App — a simple container with the PHP application code. It is used only to store and deliver the code to the remote Docker host.
App image Dockerfile:
FROM debian:jessie
COPY . /var/www/app/
VOLUME ["/var/www/app"]
CMD ["true"]
Web — the web service container, consisting of PHP-FPM + Nginx.
Web image Dockerfile:
FROM nginx
# Remove default nginx configs.
RUN rm -f /etc/nginx/conf.d/*
# Install packages
RUN apt-get update && apt-get install -my \
supervisor \
curl \
wget \
php5-cli \
php5-curl \
php5-fpm \
php5-gd \
php5-memcached \
php5-mysql \
php5-mcrypt \
php5-sqlite \
php5-xdebug \
php-apc
# Ensure that PHP5 FPM is run as root.
RUN sed -i "s/user = www-data/user = root/" /etc/php5/fpm/pool.d/www.conf
RUN sed -i "s/group = www-data/group = root/" /etc/php5/fpm/pool.d/www.conf
# Pass all docker environment
RUN sed -i '/^;clear_env = no/s/^;//' /etc/php5/fpm/pool.d/www.conf
# Add configuration files
COPY config/nginx.conf /etc/nginx/
COPY config/default.vhost /etc/nginx/conf.d
COPY config/supervisord.conf /etc/supervisor/conf.d/
COPY config/php.ini /etc/php5/fpm/conf.d/40-custom.ini
VOLUME ["/var/www", "/var/log"]
EXPOSE 80 443 9000
ENTRYPOINT ["/usr/bin/supervisord"]
My question: is it possible to link the Web container and the App container via a socket?
The main reason for this is to use the App container to deploy updated code to the remote Docker host.
Using volumes/named volumes to share code between containers is not a good idea, but sockets could help.
Thank you very much for your help and support!
If both containers run on the same host, it's possible to share a socket between the two, since sockets are just files.
You can create a local Docker volume and mount that volume in both containers, then configure your program(s) to use that path.
docker volume create --name=phpfpm
docker run -v phpfpm:/var/phpfpm web
docker run -v phpfpm:/var/phpfpm app
If the socket can be generated on the host, you can mount the file into both containers. This is the method used to let a Docker container control the host's Docker daemon.
docker run -v /var/container/some.sock:/var/run/some.sock web
docker run -v /var/container/some.sock:/var/run/some.sock app
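Put together, a minimal sketch of the named-volume variant (the socket path and the config lines in the comments are assumptions, not something the images above set up for you):
docker volume create --name=phpfpm
# PHP-FPM must be told to listen on the shared path, e.g.
#   listen = /var/phpfpm/php-fpm.sock          (in www.conf)
# and nginx must point at the same file, e.g.
#   fastcgi_pass unix:/var/phpfpm/php-fpm.sock;
docker run -d --name app -v phpfpm:/var/phpfpm app
docker run -d --name web -p 80:80 -v phpfpm:/var/phpfpm web
# whatever one container writes under /var/phpfpm is visible to the other
docker exec web ls -l /var/phpfpm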
I see some weird behaviour when working with the postgres image.
I've created a script which is mounted into /docker-entrypoint-initdb.d and uses single-user mode to initialise the database.
However, when I update the script, run
docker stop postgres_db
docker rm postgres_db
and then start a new container, the database is the old one. It doesn't change.
I'm using vagrant box-cutter/ubuntu1404-docker and when I run
vagrant destroy -f
vagrant up
and then recreate the Docker container, the changes are applied.
Why doesn't it work when I just remove the old container and start a new one? Where does Docker keep its cache that I could purge to really get a new image?
Update:
The exact Dockerfile I'm using is a small extension of the original one. It just installs the orafce plugin; here are its contents:
FROM postgres:9.4
RUN apt-get update && \
apt-get install -y postgresql-9.4-orafce && \
rm -rf /var/lib/apt/lists/*
the command I use to start a new container is:
docker run --name="postgres_db" \
--restart="always" \
-e POSTGRES_PASSWORD=PostgresPassword \
-p 5432:5432 \
-v /data:/var/lib/postgresql/data \
-v /vagrant/container_data/init_scripts:/docker-entrypoint-initdb.d \
-v /vagrant/container_data/tmp:/tmp \
-d grmanit/postgres
the init_scripts folder contains one script which creates a new database and a new user.
Clarifications:
When I change the script in my init_scripts folder, for instance modifying the database name, and then stop+remove the old container and start a new one, it doesn't have any effect.
When I change the password of postgres, which is set as an environment variable in the docker run command (in this example it's PostgresPassword), and again stop+remove the old container and then start a new one with the new environment variable, it again has no effect. I still need to use the old password to connect to the db; the new one doesn't work.
But when I destroy the VM and start it up again, I get the configuration I defined before. However, changing it again requires me to destroy and restart the VM, which takes much longer than just removing a Docker container and starting a new one.
You do get a new container, but -v /data:/var/lib/postgresql/data mounts the host's /data directory into the container, making the database persistent across containers. Because the entrypoint only runs the scripts in /docker-entrypoint-initdb.d when the data directory is empty, your updated script (and the new POSTGRES_PASSWORD) has no effect as long as the old data is still there. If you omit that mount, the database is isolated inside the container and disappears as soon as you delete the container.
See Managing data in containers (Docker documentation) for details.
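If you want the init scripts to run again, the data directory has to be empty first. A quick (and destructive) way to do that during development, using the paths from the docker run command above:
docker stop postgres_db && docker rm postgres_db
sudo rm -rf /data/*     # wipes the old cluster on the host; irreversible
# re-run the original docker run command; the entrypoint now sees an empty
# data directory and executes the scripts in /docker-entrypoint-initdb.d again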