Docker nuget connection timeout - kubernetes

I'm trying to use the official jetbrains/teamcity-agent image on Kubernetes. I've managed to run Docker in Docker there, but building an ASP.NET Core image with the docker build command fails on dotnet restore with:
The HTTP request to 'GET https://api.nuget.org/v3/index.json' has timed out after 100000ms.
When I connect to the pod itself and curl the URL, it responds quickly, so I assume the network is not the issue. Thanks for any advice.
Update
Running a simple dotnet restore step from the container worked, but not from inside the docker build.
Update 2
I've isolated the problem; it has nothing to do with NuGet or TeamCity. It is network related on the Kubernetes host.
Running a simple docker build with this Dockerfile:
FROM praqma/network-multitool AS build
RUN route
RUN ping -c 4 google.com
produces output:
Step 1/3 : FROM praqma/network-multitool AS build
---> 3619cb81e582
Step 2/3 : RUN route
---> Running in 80bda13a9860
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 eth0
Removing intermediate container 80bda13a9860
---> d79e864eafaf
Step 3/3 : RUN ping -c 4 google.com
---> Running in 76354a92a413
PING google.com (216.58.201.110) 56(84) bytes of data.
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 53ms
Pods orchestrated by Kubernetes can access the internet normally. I'm using Calico as the network layer.

I fixed this issue by passing the --disable-parallel argument to the restore command, which disables restoring multiple projects in parallel.
RUN dotnet restore --disable-parallel
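For context, this is roughly where the flag ends up in a typical multi-stage ASP.NET Core Dockerfile (a minimal sketch; the project name and SDK tag are placeholders, not taken from the question):
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
# Copy only the project file first so the restore layer can be cached
COPY MyApp/MyApp.csproj MyApp/
# Restore sequentially to avoid the parallel-download timeouts described above
RUN dotnet restore MyApp/MyApp.csproj --disable-parallel
COPY . .
RUN dotnet publish MyApp/MyApp.csproj -c Release -o /app/publish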

I have exactly the same behaviour:
I have a solution which contains several NuGet dependencies.
It builds without any issue on my local machine.
It builds without any issue on a Windows build agent.
It builds without any issue on the Docker host machine.
But when I try to build it in a build agent running in Docker, I get a lot of messages like the following:
Failed to download package 'System.Threading.4.0.11' from 'https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg'.
The download of 'https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg' timed out because no data was received for 60000ms
I can ping and curl pages from nuget.org normally from the Docker container.
So I think this is some special case. I found some info about MTU but haven't tested it yet (see the sketch at the end of this answer).
UPDATE: the initial problem may be connected to k8s - my container runs inside a k8s cluster based on Ubuntu 18.04 with flannel and k8s v1.16.
On my local machine (Windows based) everything works without any issue... but it is strange, because I have many services that work in this cluster without any problems (such as Harbor, Graylog, Jaeger, etc.)!
UPDATE 2: OK, now I can't understand anything.
I tried to execute
curl https://api.nuget.org/v3/index.json
and got the file content without any errors.
After this I tried to run
wget https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg
and the package downloaded successfully.
But when I run dotnet restore I still receive timeout errors.
UPDATE 3
I tried to reproduce the problem not in the k8s cluster but in Docker locally.
I ran a container:
docker run -it -v d:/project/test:/mnt/proj teamcity-agent-core3.1 bash
teamcity-agent-core3.1 is my image based on jetbrains/teamcity-agent, which contains the .NET Core 3.1 SDK.
I then executed the following command inside the interactive session:
dotnet restore test.sln
which failed with the following messages:
Failed to download package 'System.Runtime.InteropServices.4.3.0' from 'https://api.nuget.org/v3-flatcontainer/system.runtime.interopservices/4.3.0/system.runtime.interopservices.4.3.0.nupkg'.
Received an unexpected EOF or 0 bytes from the transport stream.
The download of 'https://api.nuget.org/v3-flatcontainer/system.text.encoding.extensions/4.3.0/system.text.encoding.extensions.4.3.0.nupkg' timed out because no data was received for 60000ms.
Exception of type 'System.TimeoutException' was thrown.
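If the cause does turn out to be an MTU mismatch (mentioned above but untested), a commonly suggested workaround is to make the Docker daemon hand out an MTU that fits inside the overlay network. A minimal sketch, assuming a flannel/VXLAN overlay MTU of 1450 (the number is an assumption, not from this post; check the real value with ip link on the node). In /etc/docker/daemon.json on the Kubernetes node:
{
  "mtu": 1450
}
then restart the daemon with sudo systemctl restart docker.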

In my case the solution was pointed out here.
As noted in the comment, "So maybe the issue needs to be fixed by microsoft by changing the default nuget.config inside of mcr.microsoft.com/dotnet/sdk:5.0."
This was my problem: a Docker build from sdk:5.0. The solution seems to do the job, which is to add a nuget.config file to the root of the solution.
Contents of nuget.config (again, from posts in that issue):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key='maxHttpRequestsPerSource' value='10' />
  </config>
</configuration>
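For the docker build case, the file also has to be inside the build context and copied in before the restore runs. A minimal sketch (the solution/project names are placeholders):
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
# Copy nuget.config first so maxHttpRequestsPerSource applies to the restore
COPY nuget.config ./
COPY MyApp.sln ./
COPY MyApp/MyApp.csproj MyApp/
RUN dotnet restore MyApp.sln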

I had a similar issue. The mistake I was making was not specifying the exact .NET version in the Docker image.
FROM mcr.microsoft.com/dotnet/core/sdk AS build
My project targets .NET Core 2.2. What I did not know was that this pulls the latest SDK, 3.1. So when dotnet restore ran, it timed out.
So this is what I did.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
I had to specify a specific version. I'm not sure if this relates to your problem, but I hope it sends you in the right direction. Always be explicit with the image version.

I had an issue similar to NIMROD MAINA's and Anatoly Kryzhanovsky's when I was building in a Docker container from gitlab-runner (docker).
When I run dotnet restore outside the Docker container, everything works!

In my case it didn't work when nuget.config was inside the project folder.
I put nuget.config in the solution folder (outside the project folder) and it worked again.
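For reference, the layout that worked looked roughly like this (names are illustrative, not from the original post):
MySolution/
    MySolution.sln
    nuget.config        <-- at the solution level, next to the .sln
    MyProject/
        MyProject.csproj
        Dockerfile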

For me the solution was setting Docker (Windows) to:
Expose daemon on tcp://localhost:2375 without TLS (true) and
Use Docker Compose V2 (true)
It's a temporary solution, but it works.

Check your DNS settings (A record). Try typing nslookup yourfeeddomain. Make sure the IP address resolves and that only one is returned.
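For example, against the default nuget.org feed (the second hostname is just a placeholder for a custom feed):
nslookup api.nuget.org
nslookup mynugetfeed.example.com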

Related

How to resolve DNS lookup error when trying to run example microservice application using minikube

Dear StackOverflow community!
I am trying to run the https://github.com/GoogleCloudPlatform/microservices-demo locally on minikube, so I am following their development guide: https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/docs/development-guide.md
After I successfully set up minikube (using the virtualbox driver, but I also tried hyperkit and the results were the same) and execute skaffold run, after some time it ends up with the following error:
Building [shippingservice]...
Sending build context to Docker daemon 127kB
Step 1/14 : FROM golang:1.15-alpine as builder
---> 6466dd056dc2
Step 2/14 : RUN apk add --no-cache ca-certificates git
---> Running in 0e6d2ab2a615
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: DNS lookup error
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: DNS lookup error
ERROR: unable to select packages:
git (no such package):
required by: world[git]
Building [recommendationservice]...
Building [cartservice]...
Building [emailservice]...
Building [productcatalogservice]...
Building [loadgenerator]...
Building [checkoutservice]...
Building [currencyservice]...
Building [frontend]...
Building [adservice]...
unable to stream build output: The command '/bin/sh -c apk add --no-cache ca-certificates git' returned a non-zero code: 1. Please fix the Dockerfile and try again..
The error message suggests that DNS does not work. I tried to add 8.8.8.8 to /etc/resolv.conf on the minikube VM, but it did not help. I've noticed that after I re-run skaffold run and it fails again, the content of /etc/resolv.conf returns to its original state, containing 10.0.2.3 as the only DNS entry. Reaching the outside internet and pinging 8.8.8.8 from within the minikube VM works.
Could you point me in a direction for how I could fix the problem, and help me learn how DNS inside minikube/Kubernetes works? I've heard that DNS problems inside a Kubernetes cluster are something you run into frequently.
Thanks for your answers!
Best regards,
Richard
Tried it with the docker driver, i.e. minikube start --driver=docker, and it works. Thanks Brian!
Sounds like the issue was resolved for the OP, but if you are using Docker inside minikube, the suggestion below worked for me.
Ref: https://github.com/kubernetes/minikube/issues/10830
minikube ssh
$>sudo vi /etc/docker/daemon.json
# Add "dns": ["8.8.8.8"]
# save and exit
$>sudo systemctl restart docker
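After that edit, the file would contain something like the following (a sketch; keep whatever keys are already present in your daemon.json):
{
  "dns": ["8.8.8.8"]
}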

docker-compose portmapping gives failed to create endpoint hnsCall failed in Win32: The specified port already exists

I have started a new (.NET Core 3.0) project in Visual Studio, with Docker support (Windows).
I have added Docker support (right-click on project Add->Docker support) and in the same way added Docker compose support.
If I just click the "play" button for Docker Compose, the project starts and everything works well.
But when I run docker-compose up from the solution folder I get
Cannot start service testproj30: failed to create endpoint
testproj30_testproj30_1 on network nat: hnsCall failed in Win32: The
specified port already exists.
(I have closed my VS solution.) If I remove the port mapping in docker-compose.override.yaml I don't get this error message. I have done the most common tricks like restarting the docker service, the hns service and so on. Nothing helps.
I don't want to depend on all the VS voodoo from the project file and God knows what other files are involved.
I can run docker run -p 8080:80 -p 443:443 without any port problems.
I fixed a similar problem by removing some terminated containers and then pruning networks.
List terminated containers:
docker ps -a
Remove them (Cygwin syntax):
docker rm $(docker ps -aq)
You will get error messages for running containers.
Clean your networks:
docker network prune
For me, the main cause was that the way Docker killed the process skipped my application's port-releasing mechanism.
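When the error reappears, it can also help to check what is actually holding the port on the Windows host before pruning (a sketch, assuming 8080 is the mapped port; the PID placeholder comes from the netstat output):
netstat -ano | findstr :8080
tasklist /FI "PID eq <PID_FROM_NETSTAT>"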

How to connect local Mongo database to docker

I am working on a Golang project; recently I read about Docker and am trying to use it with my app. I am using MongoDB for the database.
The problem is that I am creating a Dockerfile to install all packages, then compile and run the Go project.
I am running Mongo locally. If I run the Go program without Docker it gives me output, but if I use Docker for the same project (just installing dependencies and running the project with it), it compiles successfully but does not give any output, failing with the error:
CreateSession: no reachable servers
My Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
WORKDIR $GOPATH/src/myapp
# Copy the local package files to the container's workspace.
ADD . /go/src/myapp
#Install dependencies
RUN go get ./...
# Build the installation command inside the container.
RUN go install myapp
# Run the outyet command by default when the container starts.
ENTRYPOINT /go/bin/myapp
# Document that the service listens on port 8080.
EXPOSE 8080
EXPOSE 27017
When you run your application inside Docker, it runs in a virtualized environment; it's just like another computer, but everything is virtual, including the network.
To let your container reach the host, Docker provides a special IP address and maps it to the hostname host.docker.internal.
So, assuming that Mongo is running and bound to every interface on the host machine, it can be reached from the container with the connection string:
mongodb://host.docker.internal:27017/database
Simply put, just use host.docker.internal as your MongoDB hostname.
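For reference, a minimal Go sketch of that connection (assuming the project uses mgo, which the "no reachable servers" error suggests; the database name is a placeholder):
package main

import (
	"log"

	mgo "gopkg.in/mgo.v2"
)

func main() {
	// host.docker.internal resolves to the Docker host from inside the container
	session, err := mgo.Dial("mongodb://host.docker.internal:27017/database")
	if err != nil {
		log.Fatalf("CreateSession: %v", err)
	}
	defer session.Close()
	log.Println("connected to MongoDB on the Docker host")
}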
In your Golang project, how do you specify the connection to MongoDB? localhost:27017?
If you are using localhost in your code, your Docker container itself is the localhost, and since you don't have MongoDB in the same container, you'll get the error.
If you are starting your container from the command line with docker run ..., add --network="host". If you are using docker-compose, add network_mode: "host" (see the sketch below).
Ideally you would set up MongoDB in its own container and connect the two from your docker-compose.yml, but that's not what you are asking for, so I won't go into that.
In future questions, please include the relevant Dockerfile and docker-compose.yml to the extent possible. It will help us give a more specific answer.
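For the docker-compose case, a minimal sketch (service and image names are placeholders):
version: "3"
services:
  myapp:
    image: myapp:latest
    network_mode: "host"   # container shares the host's network, so localhost reaches the host's MongoDB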

JHipster - Using docker-compose on remote server

I would like to set up my JHipster project on a remote server using docker-compose, as described here.
Am I right in thinking (for the simplest approach) that these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to remote server like this.
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java and set JAVA_HOME.
There are two approaches:
create the docker image and push it to Docker Hub, if you have a Docker Hub account
create the docker image on the server
The second approach would be better to reduce confusion.
Clone your repo to the server:
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can drop -DskipTests if you have tests you want to run.
Do
docker-compose -f src/main/docker/app.yml up -d
List running containers:
docker ps -a
View a container's logs:
docker logs <CONTAINER_ID>
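If you prefer the first approach but don't want to push to Docker Hub, the image built on the laptop can also be streamed to the server over SSH; a rough sketch (image name and host are placeholders):
docker save myapp:latest | gzip | ssh user@remote-server "gunzip | docker load"
Then run docker-compose -f src/main/docker/app.yml up -d on the server as above.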

How to start up a Kubernetes cluster using Rocket?

I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me on how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
Try running export CONTAINER_RUNTIME="rocket" and then re-running the script.