I am attempting to use Cassandra with the latest release of JHipster (3.0.0) in a microservices architecture.
Here are the steps I've followed so far:
npm install -g generator-jhipster
mkdir C:\users\jd\dev\sample && cd $_
mkdir sample-gateway && cd $_
yo jhipster... (Create a gateway application w/ Cassandra)
I've installed the latest beta release of the docker toolbox. From the console, I can see the following:
c:\Users\jd\dev\sample\sample-gateway>docker -v
Docker version 1.10.3, build 20f81dd
c:\Users\jd\dev\sample\sample-gateway>docker-machine -v
docker-machine version 0.6.0, build e27fb87
I am able to successfully start my default machine using:
C:\Users\jd>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
C:\Users\jd>docker-machine start default
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
C:\Users\jd>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://192.168.99.100:2376 v1.10.3
When I run the build step for Cassandra I receive the following error:
C:\Users\jd\dev\sample\sample-gateway>docker-compose -f src\main\docker\cassandra.yml build
Building curatorial-cassandra
ERROR: Couldn't connect to Docker daemon. You might need to install Docker:
https://docs.docker.com/engine/installation/
Any ideas why I might be receiving this error?
I may have found the solution...
It looks like I needed to apply the output of docker-machine env default to my shell.
After running FOR /f "tokens=*" %i IN ('docker-machine env default') DO %i, I can see this in my console:
C:\Users\jd\dev\sample\sample-gateway>docker-compose -f src\main\docker\cassandra.yml build
Building sample-cassandra
ERROR: Couldn't connect to Docker daemon. You might need to install Docker:
https://docs.docker.com/engine/installation/
C:\Users\jd\dev\sample\sample-gateway>FOR /f "tokens=*" %i IN ('docker-machine env default') DO %i
C:\Users\jd\dev\sample\sample-gateway>SET DOCKER_TLS_VERIFY=1
C:\Users\jd\dev\sample\sample-gateway>SET DOCKER_HOST=tcp://192.168.99.100:2376
C:\Users\jd\dev\sample\sample-gateway>SET DOCKER_CERT_PATH=C:\Users\jd\.docker\machine\machines\default
C:\Users\jd\dev\sample\sample-gateway>SET DOCKER_MACHINE_NAME=default
C:\Users\jd\dev\sample\sample-gateway>REM Run this command to configure your shell:
C:\Users\jd\dev\sample\sample-gateway>REM FOR /f "tokens=*" %i IN ('docker-machine env default') DO %i
C:\Users\jd\dev\sample\sample-gateway>docker-compose -f src\main\docker\cassandra.yml build
Building sample-cassandra
Step 1 : FROM cassandra:2.2.5
2.2.5: Pulling from library/cassandra
d7827f33: Pulling fs layer
95caeb02: Pulling fs layer
03976053: Pulling fs layer
44d757b1: Pulling fs layer
8b59ac1b: Pulling fs layer
badb6c0c: Pulling fs layer
72404d3b: Pulling fs layer
d13f7785: Pulling fs layer
4e7f1560: Pulling fs layer
d13f7785: Downloading [========================> ] 61.61 MB/124.8 MB
I will hold off on marking this as the answer in the hopes that someone with a more complete understanding of the docker-compose process can provide an answer.
I will also try to investigate more...
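For reference, here is a minimal sketch of the sequence that made the build work for me in a cmd.exe shell (the machine name default is just my setup; the PowerShell variant is an assumption based on the docker-machine docs, not something shown in my logs):
docker-machine start default
REM apply the environment variables emitted by docker-machine
FOR /f "tokens=*" %i IN ('docker-machine env default') DO %i
docker-compose -f src\main\docker\cassandra.yml build
REM PowerShell equivalent (assumption):
REM docker-machine env default --shell powershell | Invoke-Expression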
Any hints on why Remote - Containers isn't working with podman on Windows?
Installed podman v4.2.0 on Windows 11 via .msi package
Set remote.containers.dockerPath to podman in VS Code Settings
Run podman machine init
Run podman machine start
Open Remote Explorer in VS Code and be presented with the following:
Everything works with podman (pull, run, images, etc.), but the Remote - Containers extension in VS Code doesn't recognize podman.
After running Remote-Containers Developer: Show All Logs... in VS Code:
[2022-08-21T12:55:15.916Z] Start: Run: podman version --format {{.Server.APIVersion}}
[2022-08-21T12:55:16.080Z] Stop (164 ms): Run: podman version --format {{.Server.APIVersion}}
[2022-08-21T12:55:16.080Z] Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman. failed to create sshClient: dial unix \\.\pipe\openssh-ssh-agent: connect: No connection could be made because the target machine actively refused it.
And podman system connection list in a terminal:
Name URI Identity Default
podman-machine-default ssh://user@localhost:62078/run/user/1000/podman/podman.sock C:\Users\Edmundo\.ssh\podman-machine-default true
podman-machine-default-root ssh://root@localhost:62078/run/podman/podman.sock C:\Users\Edmundo\.ssh\podman-machine-default false
Related Issues: #6957, #6747.
Please confirm you are running the latest prerelease build of the extension, v0.236.1 (there are known issues on GitHub with earlier releases that are fixed in this version).
For debugging, try the following in a WSL shell.
First, start the podman (libpod) REST API service; this creates the socket, and -t 5000 keeps it alive for 5000 seconds (set it to zero for "forever"):
podman system service -t 5000 &
Then symlink podman.sock to the location VS Code expects:
sudo ln -s /mnt/wslg/runtime-dir/podman/podman.sock /var/run/docker.sock
If none of that works, would you mind posting the output of:
podman info
HINT: check the podman info YAML output under host > remoteSocket > path and make sure it matches the /mnt/wslg/runtime-dir/podman/podman.sock path used above.
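If it is easier, that single field can presumably also be printed with a Go template instead of scanning the full YAML (a sketch, assuming a reasonably recent podman):
podman info --format '{{.Host.RemoteSocket.Path}}'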
The bug is being tracked on GitHub. One step you should also take is enabling Run in WSL in the VS Code Dev Containers extension settings. Then it will run the podman commands inside the podman-machine-default WSL instance.
I want to execute a Popper workflow on a Linux HPC (High-performance computing) cluster. I don’t have admin/sudo rights. I know that I should use Singularity instead of Docker because Singularity is designed to not need sudo to run.
However, singularity build needs sudo privileges if it is not executed in fakeroot/rootless mode.
This is what I have done in the HPC login node:
I installed Spack (0.15.4) and Singularity (3.6.1):
git clone --depth=1 https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh
spack install singularity
spack load singularity
I installed Popper (2.7.0) in a virtual environment:
python3 -m venv ~/popper
~/popper/bin/pip install popper
I created an example workflow in ~/test/wf.yml:
steps:
- uses: "docker://alpine:3.11"
  args: ["echo", "Hello world!"]
- uses: "./my_image/"
  args: ["Hello number two!"]
With ~/test/my_image/Dockerfile:
FROM alpine:3.11
ENTRYPOINT ["echo"]
I tried to run the two steps of the Popper workflow in the login node:
$ cd ~/test
$ ~/popper/bin/popper run --engine singularity --file wf.yml 1
[1] singularity pull popper_1_4093d631.sif docker://alpine:3.11
[1] singularity run popper_1_4093d631.sif ['echo', 'Hello world!']
ERROR : Failed to create user namespace: user namespace disabled
ERROR: Step '1' failed ('1') !
$ ~/popper/bin/popper run --engine singularity --file wf.yml 2
[2] singularity build popper_2_4093d631.sif /home/bikfh/traylor/test/./my_image/
[sudo] password for traylor:
So both steps fail.
My questions:
For an image from Docker Hub: How do I enable “user namespace”?
For a custom image: How do I build an image without sudo and run the container?
For an image from Docker Hub: How do I enable “user namespace”?
I found that the user namespace feature needs to be already enabled on the host machine. Here are instructions for checking whether it’s enabled.
In the case of the cluster computer I am using (Frankfurt Goethe HLR), user namespaces are only enabled in the computation nodes, not the login node.
That’s why it didn’t work for me.
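As a quick check (a sketch; the exact knob can differ between distros), you can ask the kernel on a node whether unprivileged user namespaces are available:
# non-zero means user namespaces are allowed (on most distros)
cat /proc/sys/user/max_user_namespaces
# or simply try to create one
unshare --user --map-root-user echo "user namespaces work"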
So I need to send the job with SLURM (here only the first step with a container from Docker Hub):
~/popper/bin/popper run --engine singularity --file wf.yml --config popper_config.yml 1
popper_config.yml defines the options for SLURM’s sbatch (compare the Popper docs). They depend on your cluster computer. In my case it looks like this:
resource_manager:
  name: slurm
  options:
    "1": # The default step ID is a number and needs quotes here.
      nodes: 1
      mem-per-cpu: 10 # MB
      ntasks: 1
      partition: test
      time: "00:01:00"
For a custom image: How do I build an image without sudo and run the container?
Trying to apply the same procedure to step 2, which has a custom Dockerfile, fails with this message:
FATAL: could not use fakeroot: no mapping entry found in /etc/subuid
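For context, fakeroot relies on a subordinate UID/GID range being registered for your user, which only an administrator can add; a typical /etc/subuid entry looks roughly like this (the numbers are an example):
traylor:100000:65536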
I tried to create the .sif file (Singularity image) with Popper on another computer and copy it from ~/.cache/popper/singularity/... over to the cluster machine.
Unfortunately, Popper seems to clear that cache folder, so the .sif image doesn’t persist.
I would like to set up my JHipster project on a remote server using docker-compose, as per here.
Am I right in thinking that (for the simplest approach) these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to the remote server like this (see the sketch after this list).
Install this image on the remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
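For the copy/install steps above, one possible way of doing it is a docker save / scp / docker load round-trip (a sketch; the image name sample and the remote host are assumptions):
# on the laptop: export the image and copy it over
docker save -o sample.tar sample
scp sample.tar user@remote-server:/tmp/
# on the remote server: import the image, then bring up the stack
docker load -i /tmp/sample.tar
docker-compose -f dir/on/remote/app.yml up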
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java, and set JAVA_HOME.
There are two approaches:
create the docker image locally and push it to Docker Hub, if you have a Docker Hub account
create the docker image on the server itself
The second approach would be better to reduce confusion.
Clone your repo to server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can omit -DskipTests if you have written tests.
Do
docker-compose -f src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>
I have set up Docker Toolbox on Windows 10. While building the project I encountered the following error: Bind for 0.0.0.0:8081 failed: port is already allocated. The sudo service docker restart command isn't working. Please provide me with a solution.
Generally speaking, you need to stop the currently running container. For that you need to know the current CONTAINER ID:
$ docker container ls
You get something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
97a32e8928ef friendlyhello "python app.py" 51 seconds ago Up 50 seconds 0.0.0.0:4000->80/tcp romantic_tesla
Then you stop the container by:
$ docker stop 97a32e8928ef
Finally, you try to do what you wanted to do, for example:
$ docker run -p 4000:80 friendlyhello
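If you are not sure which container is holding the port, recent Docker versions can presumably filter by published port (a sketch; the port matches the one in the question, and the plain-grep form works on any version):
$ docker ps --filter "publish=8081"
$ docker ps | grep 8081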
When this error happens, I usually restart winnat with these commands:
$ net stop winnat
// build your project
$ net start winnat
If that doesn't help, I restart Docker entirely with these commands:
wsl --unregister docker-desktop
wsl --unregister docker-desktop-data
Then Docker offers to restart the Docker service.
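If the port keeps getting blocked, it may fall inside one of the reserved port ranges managed by winnat; you can presumably inspect those ranges with (a sketch):
netsh interface ipv4 show excludedportrange protocol=tcp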
I found the docker-compose down command on the Docker website, but when I try to use it I get an error.
No such command: down
Commands:
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pulls service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
up Create and start containers
migrate-to-labels Recreate containers to add labels
My docker-compose version is:
docker-compose version: 1.3.1
CPython version: 2.7.10
OpenSSL version: OpenSSL 1.0.2d 9 Jul 2015
Did I do something wrong?
It might be that the docker-compose down command is not available in the version you use; the command was added in version 1.6.0 (see the CHANGELOG here).
So if you really want to use the command, you may have to upgrade to version 1.6.0 or later.
Hope this helps.
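In the meantime, stop followed by rm is a rough equivalent of down on older versions, and upgrading via pip is one option if that is how docker-compose was installed (a sketch):
# rough equivalent of "down" on docker-compose 1.3.x
docker-compose stop
docker-compose rm -f
# one possible upgrade path (pip-installed docker-compose)
pip install --upgrade docker-compose
docker-compose --version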