docker-compose port mapping gives "failed to create endpoint: hnsCall failed in Win32: The specified port already exists"

I have started a new (.NET Core 3.0) project in Visual Studio, with Docker support (Windows).
I have added Docker support (right-click on the project, Add -> Docker support) and added Docker Compose support the same way.
If I just click the play button for Docker Compose, the project starts and everything works well.
But when I run docker-compose up from the solution folder I get
Cannot start service testproj30: failed to create endpoint
testproj30_testproj30_1 on network nat: hnsCall failed in Win32: The
specified port already exists.
(I have closed my VS solution.) If I remove the port mapping in docker-compose.override.yaml I don't get this error message. I have done the most common tricks, restarting the Docker service, the HNS service and so on. Nothing helps.
I don't want to depend on all the VS voodoo from the project file and God knows what other files are involved.
I can run docker run -p 8080:80 -p 443:443 without any port problems.

I fixed a similar problem by removing some terminated containers and then pruning networks.
List terminated containers:
docker ps -a
Remove them (Cygwin syntax):
docker rm $(docker ps -aq)
You will get error messages for the running containers.
Clean your networks:
docker network prune
For me, the main cause was that the way Docker killed the container skipped my application's port-releasing mechanism.
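Putting those steps together, a sketch of the full cleanup (Cygwin/bash syntax, assuming no containers need to be kept):
# Stop anything still running, then remove all containers
docker stop $(docker ps -q)
docker rm $(docker ps -aq)
# Remove unused networks; on Windows this also releases their HNS endpoints
docker network prune -f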

Related

Docker nuget connection timeout

Trying to utilize the official jetbrains/teamcity-agent image on Kubernetes. I've managed to run Docker in Docker there, but trying to build an ASP.NET Core image with the docker build command fails on dotnet restore with
The HTTP request to 'GET https://api.nuget.org/v3/index.json' has timed out after 100000ms.
When I connect to the pod itself and try curling the URL it's super fast, so I assume the network is not the issue. Thanks for any advice.
Update
Running a simple dotnet restore step from the container worked, but not from inside docker build.
Update 2
I've isolated the problem; it has nothing to do with NuGet or TeamCity. It is network related on the Kubernetes host.
Running simple docker build with this Dockerfile:
FROM praqma/network-multitool AS build
RUN route
RUN ping -c 4 google.com
produces output:
Step 1/3 : FROM praqma/network-multitool AS build
---> 3619cb81e582
Step 2/3 : RUN route
---> Running in 80bda13a9860
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      *               255.255.0.0     U     0      0        0 eth0
Removing intermediate container 80bda13a9860
---> d79e864eafaf
Step 3/3 : RUN ping -c 4 google.com
---> Running in 76354a92a413
PING google.com (216.58.201.110) 56(84) bytes of data.
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 53ms
Pods orchestrated by Kubernetes can access the internet normally. I'm using Calico as the network layer.
I fixed this issue by passing the --disable-parallel argument to the restore command, which disables restoring multiple projects in parallel.
RUN dotnet restore --disable-parallel
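In a typical multi-stage Dockerfile the flag goes on the restore step; a minimal sketch, with MyApp.csproj as a placeholder project file:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
# Copy only the project file first so the restore result is cached as a layer
COPY MyApp.csproj .
# Restore sequentially to avoid parallel downloads timing out behind some container networks
RUN dotnet restore MyApp.csproj --disable-parallel
COPY . .
RUN dotnet publish -c Release -o /app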
I have exactly the same behaviour:
I have a solution which contains several NuGet dependencies.
It builds without any issue on my local machine.
It builds without any issue on a Windows build agent.
It builds without any issue on the Docker host machine.
But when I try to build it in a build agent in Docker, I get a lot of messages like the following:
Failed to download package 'System.Threading.4.0.11' from 'https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg'.
The download of 'https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg' timed out because no data was received for 60000ms
I can ping and curl pages from nuget.org normally from the Docker container.
So I think this is some special case. I found some info about MTU but have not tested it.
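For reference, the MTU fix usually amounts to making the Docker daemon's MTU match the overlay network's; a sketch of /etc/docker/daemon.json, assuming a VXLAN overlay MTU of 1450 (check your actual overlay interface with ip link):
{
  "mtu": 1450
}
followed by a daemon restart, e.g. sudo systemctl restart docker.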
UPDATE The initial problem may be connected to k8s - my container runs inside a k8s cluster based on Ubuntu 18.04 with flannel and k8s v1.16.
On my local machine (Windows based) everything works without any issue... which is strange, because I have many services that work in this cluster without any problems (such as Harbor, Graylog, Jaeger etc.)!
UPDATE 2 OK, now I can't understand anything.
I try to execute
curl https://api.nuget.org/v3/index.json
and get the file content without any errors.
After this I try to run
wget https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg
and the package downloads successfully.
But when I run dotnet restore I still receive timeout errors.
UPDATE 3
I try to reproduce the problem not in the k8s cluster but in Docker locally.
I run a container:
docker run -it -v d:/project/test:/mnt/proj teamcity-agent-core3.1 bash
teamcity-agent-core3.1 is my image based on jetbrains/teamcity-agent, which contains the .NET Core 3.1 SDK.
Then I execute this command inside the interactive session:
dotnet restore test.sln
which failed with the following messages:
Failed to download package 'System.Runtime.InteropServices.4.3.0' from 'https://api.nuget.org/v3-flatcontainer/system.runtime.interopservices/4.3.0/system.runtime.interopservices.4.3.0.nupkg'.
Received an unexpected EOF or 0 bytes from the transport stream.
The download of 'https://api.nuget.org/v3-flatcontainer/system.text.encoding.extensions/4.3.0/system.text.encoding.extensions.4.3.0.nupkg' timed out because no data was received for 60000ms.
Exception of type 'System.TimeoutException' was thrown.
In my case the solution was marked out here.
As noted in the comment, "So maybe the issue needs to be fixed by microsoft by changing the default nuget.config inside of mcr.microsoft.com/dotnet/sdk:5.0."
This was my problem: Docker building from sdk:5.0. The solution, which is to add a nuget.config file to the root of the solution, seems to do the job.
Contents of nuget.config (again, from posts in that issue):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key='maxHttpRequestsPerSource' value='10' />
  </config>
</configuration>
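Note that when building in Docker the file also has to be copied into the image before restore runs; a sketch of the relevant Dockerfile lines (MyApp.csproj is a placeholder):
# dotnet restore picks up nuget.config from the working directory
COPY nuget.config .
COPY MyApp.csproj .
RUN dotnet restore MyApp.csproj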
I had a similar issue. My mistake was not specifying the exact dotnet version on the Docker image.
FROM mcr.microsoft.com/dotnet/core/sdk AS build
My project targets dotnet 2.2. What I did not know was that this was pulling the latest .NET SDK, 3.1. So when dotnet restore ran, it was timing out.
So this is what I did.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
I had to specify a specific version. I'm not sure if this is related to your problem, but I hope it sends you in the right direction. Always be explicit with the image version.
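A quick way to see which tag to pin is the TargetFramework in the .csproj; for the 2.2 example above it would read:
<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
</PropertyGroup>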
I had an issue similar to NIMROD MAINA's and Anatoly Kryzhanovsky's when building in a Docker container from gitlab-runner (Docker executor).
When I ran dotnet restore outside the Docker container, everything worked!
In my case it didn't work when nuget.config was inside the project folder.
I put nuget.config in the solution folder (outside the project folder) and it worked again.
For me the solution was setting Docker (Windows) to:
Expose daemon on tcp://localhost:2375 without TLS (true) and
Use Docker Compose V2 (true)
It's a temporary solution, but it works.
Check your DNS settings (A record). Try nslookup yourfeeddomain and make sure the name resolves to a single IP address.

How to connect local Mongo database to docker

I am working on a golang project; recently I read about Docker and tried to use it with my app. I am using MongoDB for the database.
Now the problem is that I created a Dockerfile to install all packages and compile and run the go project.
I am running mongo locally. If I run the go program without Docker it gives output, but if I use Docker for the same project (just installing dependencies and running the project), it compiles successfully but gives no output, with this error:
CreateSession: no reachable servers
My Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
WORKDIR $GOPATH/src/myapp
# Copy the local package files to the container's workspace.
ADD . /go/src/myapp
#Install dependencies
RUN go get ./...
# Build the installation command inside the container.
RUN go install myapp
# Run myapp by default when the container starts.
ENTRYPOINT /go/bin/myapp
# Document that the service listens on port 8080.
EXPOSE 8080
EXPOSE 27017
When you run your application inside Docker, it's running in a virtual environment; it's like another computer, but everything is virtual, including the network.
To let containers reach the host, Docker gives the host a special IP address, reachable from containers via the hostname host.docker.internal.
So, assuming that mongo is running and bound to every interface on the host machine, it can be reached from the container with the connection string:
mongodb://host.docker.internal:27017/database
Simply put, just use host.docker.internal as your mongodb hostname.
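Assuming the project uses mgo (the CreateSession error message suggests it), only the dial address needs to change; a minimal sketch, not the asker's actual code:
package main

import (
	"log"

	mgo "gopkg.in/mgo.v2"
)

func main() {
	// host.docker.internal resolves to the Docker host from inside the container
	session, err := mgo.Dial("host.docker.internal:27017")
	if err != nil {
		log.Fatal("CreateSession: ", err) // "no reachable servers" lands here when mongo is unreachable
	}
	defer session.Close()
	log.Println("connected to MongoDB on the host")
}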
In your golang project, how do you specify the connection to mongodb? localhost:27017?
If you use localhost in your code, the Docker container itself is localhost, and since mongodb is not in the same container, you get the error.
If you are starting your container with docker run ..., add --network="host". If you are using docker-compose, add network_mode: "host" (see the sketch below).
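A sketch of the docker-compose form (service name app is a placeholder; note that host networking only behaves this way on Linux):
version: "2"
services:
  app:
    build: .
    # Share the host's network stack so localhost:27017 reaches the host's mongod
    network_mode: "host"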
Ideally you would set up mongodb in its own container and connect the two via your docker-compose.yml, but that's not what you are asking for, so I won't go into that.
In future questions, please include the relevant Dockerfile and docker-compose.yml to the extent possible. It will help us give a more specific answer.

JHipster - Using docker-compose on remote server

I would like to set up my JHipster project on a remote server utilising docker-compose, as per here.
Am I right in thinking (for the simplest approach), these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to the remote server like this (see the sketch after this list).
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
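For steps 4 and 5, one common way to move the image without a registry is to stream it over SSH with docker save and docker load; a sketch, where myapp and user@remote are placeholders:
# On the laptop: export the image and load it straight into the remote Docker daemon
docker save myapp | ssh user@remote "docker load"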
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java and set JAVA_HOME.
There are two approaches:
Create a docker image and push it to Docker Hub, if you have a Docker Hub account.
Create the docker image on the server itself.
The second approach is better, to reduce confusion.
Clone your repo to server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can drop -DskipTests if you are writing test code.
Do
docker-compose -f src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>

Access docker within container on jenkins slave

my question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container
My goal
to run Jenkins fully dockerized, including dynamic slaves, and to be able to create docker containers within the slaves.
Except for the last part everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial if the Unix-docker-sock is properly exposed to the Jenkins master.
The problem
Unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the UNIX socket.
For the slaves, which are spawned dynamically, this approach does not work.
I tried to forward the access to docker like
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
while building the image. Unfortunately, so far I get Permission denied (socket: /run/docker.sock) when trying to access docker.sock in the slave, which was created like this: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to the docker.sock? Or how could I burn in the --privileged flag so that the permission denied problem would go away?
With Docker 1.10 a new user namespace was introduced, so sharing docker.sock isn't enough, as root inside the container isn't root on the host machine anymore.
I recently played with Jenkins container as well, and I wanted to build containers using the host docker engine.
The steps I did are:
Find group id for docker group:
$ id
..... 999(docker)
Run jenkins container with two volumes - one contains the docker client executable, the other shares the docker unix socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested it and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See more about additional groups here
Another approach would be to use the --privileged flag instead of --group-add, but it's better to avoid it if possible.
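For completeness, the same setup in docker-compose form; a sketch using compose file version 2 syntax (which supports group_add), where 999 is whatever id reports for the docker group on your host:
version: "2"
services:
  jenkins:
    image: jenkins
    ports:
      - "8080:8080"
    # Add the container user to the host's docker group so it can use the socket
    group_add:
      - "999"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /path-to-my-docker-client:/home/jenkins/docker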

How to use the "Remote Systems" view in Eclipse to explore a Docker container file system?

The Eclipse Remote Systems view is a great tool to connect to VMs and explore their file systems; its New Connection wizard offers several system types (Linux, SSH Only, FTP Only, and so on).
First I find out the container IP by running this command:
docker inspect <container> | grep IPAddress | cut -d '"' -f 4
Once I have the IP, I launch the New Connection wizard from the Remote Systems view. I tried selecting Linux, SSH Only and FTP Only, pasted the container IP into the Hostname field and clicked Finish, and the connection seems to be created successfully. But when I try to expand the Files node it prompts for a user and password, and I don't have that info. Does the user/password vary from container to container? How can I get this info?
You can just instantiate a container from that image, but with a shell, so that you can see which usernames are configured in that image.
docker run -it node /bin/bash
You can then configure users and passwords and do a:
docker commit <container-id> my-node:0.1
Then you can instantiate a new container:
docker run -d -p 80:9080 -p 443:9443 my-node:0.1
Is ssh also running in that container? If not, you will have to install it into the container so that you can ssh to it.
A docker container only runs a single parent process at a time (on your host machine that parent process is 'init' which runs a bunch of system services). In the case of your node container, that parent process is a node server.
Eclipse connects to a remote machine by connecting to a listener on that machine using some protocol, SSH or FTP for example. With the docker container, there is no process listening for this connection, so you cannot connect using Eclipse as it is. You have two options...
Use the command line and docker exec to connect to the machine and explore its filesystem (see the sketch after this list). No pretty pictures, but you don't need a lot of knowledge.
Modify your container in some way to connect to it. You have two options here...
A. Modify your image to run an SSH daemon. A simple way to do that is to use the phusion/baseimage container as your parent, and have it spawn both the ssh daemon and the node server. You need to know a good amount about linux sysadmin to get this working (not a lot, but a good amount).
B. Launch a second copy of the container with a different command, such as ssh -d. You can then connect to the second copy. This has the downside that it won't be the same container you're interested in, and you STILL have to modify the image, since I doubt the node image even has an ssh daemon installed... but it requires less knowledge than wrapping your head around runit.
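A sketch of the first option, where <container> is the name or ID from docker ps:
# Open an interactive shell inside the running container
docker exec -it <container> /bin/bash
# If the image has no bash, fall back to sh
docker exec -it <container> sh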