Docker dotnet run port not mapping, windows 10 host, linux container - powershell

I'm following a Pluralsight course (https://app.pluralsight.com/library/courses/docker-web-development/table-of-contents) which uses the older microsoft/aspnetcore-build image, but I'm running Core 2.1, so I'm using microsoft/dotnet:2.1-sdk instead.
The command I'm running is:
docker run -it -p 8080:5001 -v ${pwd}:/app -w "/app" microsoft/dotnet:2.1-sdk
and then once inside the TTY I do a dotnet run which gives me the following output:
Using launch settings from /app/Properties/launchSettings.json...
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using '/root/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[58]
      Creating key {5445e854-c1d9-4261-82f4-0fc3a7543e0a} with creation date 2018-12-14 10:41:13Z, activation date 2018-12-14 10:41:13Z, and expiration date 2019-03-14 10:41:13Z.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {5445e854-c1d9-4261-82f4-0fc3a7543e0a} may be persisted to storage in unencrypted form.
info: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[39]
      Writing data to file '/root/.aspnet/DataProtection-Keys/key-5445e854-c1d9-4261-82f4-0fc3a7543e0a.xml'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to https://localhost:5001 on the IPv6 loopback interface: 'Cannot assign requested address'.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
      Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
Hosting environment: Development
Content root path: /app
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Then, when I open a browser on my host and navigate to http://localhost:8080, I get "This page isn't working" / "localhost didn't send any data" / ERR_EMPTY_RESPONSE.
I've tried a couple of different port combinations too, with the same result.
Can anyone spot where I went wrong? Or have any ideas / suggestions?

Not sure if this question is still relevant for you, but I also encountered this issue, so I'm leaving my solution here for others. I used PowerShell with the following docker command (almost the same as your command, except that I used internal port 90 instead of 5000 and added the --rm switch, which automatically removes the container when it exits):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" microsoft/dotnet /bin/bash
After that, I got the interactive bash shell, and when I typed dotnet run I got the same output as you and could not reach my site in the container via localhost:8080.
I resolved it by using the UseUrls method or the --urls command-line argument. Both indicate the IP addresses or host addresses, with ports and protocols, that the server should listen on for requests. Below are the solutions that worked for me.
Edit the CreateWebHostBuilder method in Program.cs like below:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseUrls("http://+:90") // for your case you should use 5000 instead of 90
        .UseStartup<Startup>();
You can specify several ports if needed using the following syntax: .UseUrls("http://+:90;http://+:5000")
With this approach, you just type dotnet run in the bash shell and your container will then be reachable at localhost:8080.
But with that approach you alter the default behavior of your source code, which you might forget about and then have to debug and fix in the future. So I prefer the second approach, which doesn't change the source code. After typing the docker command and getting an interactive bash shell, instead of a plain dotnet run, type it with the --urls argument like below (in your case use port 5000 instead of 90):
dotnet run --urls="http://+:90"
In the documentation there is also a third approach where you can use the ASPNETCORE_URLS environment variable, but this approach didn't work for me. I used the following command (with the -e switch):
docker run --rm -it -p 8080:90 -v ${pwd}:/app -w "/app" -e "ASPNETCORE_URLS=http://+:90" microsoft/dotnet /bin/bash
If you type printenv in bash you will see that the ASPNETCORE_URLS environment variable was passed to the container, but for some reason dotnet run ignores it.
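My guess (based on the "Using launch settings from /app/Properties/launchSettings.json..." line in the output above, and worth verifying) is that dotnet run loads the launch profile from Properties/launchSettings.json, and its applicationUrl takes precedence over ASPNETCORE_URLS. If so, skipping the profile should let the environment variable through:

# inside the container started with -e "ASPNETCORE_URLS=http://+:90":
# --no-launch-profile makes dotnet run ignore launchSettings.json,
# so the ASPNETCORE_URLS value is used for the listen address
dotnet run --no-launch-profile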

Related

Docker Container - Unable to access on the browser

I am new to docker and I've been trying to run pgadmin through docker. I ran the following command:
docker run -p 5555:80 --name pgadmin -e PGADMIN_DEFAULT_EMAIL="user#domain.com" -e PGADMIN_DEFAULT_PASSWORD="***" dpage/pgadmin4
The container is currently running but I'm not able to access it through the browser (localhost:5555). It keeps loading and then gives me the error "Secure connection failed". Where am I making a mistake?
PS: Please do let me know if any further information is needed to answer/understand my question.
I can replicate your issue if I use HTTPS with port 80.
At the time of writing this answer, the dpage/pgadmin4 with the latest tag (which is the one you're using) exposes ports 80 and 443 - try using the other, secure one instead.
Your line should be:
docker run -p 5555:443 --name pgadmin -e PGADMIN_DEFAULT_EMAIL="user#domain.com" -e PGADMIN_DEFAULT_PASSWORD="***" dpage/pgadmin4
My guess is you're using something like HTTPS Everywhere, which forces HTTPS on an unsecured port, thus giving you this warning.
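If you want to double-check which ports an image exposes before choosing a mapping, docker inspect can print them from the image metadata (image name taken from the question):

# prints the ports recorded by EXPOSE, e.g. map[443/tcp:{} 80/tcp:{}]
docker inspect --format '{{.Config.ExposedPorts}}' dpage/pgadmin4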

docker-compose portmapping gives failed to create endpoint hnsCall failed in Win32: The specified port already exists

I have started a new (.NET Core 3.0) project in Visual Studio, with Docker support (Windows).
I have added Docker support (right-click on the project, Add -> Docker support) and added Docker Compose support the same way.
If I just click the "play" button for Docker Compose, the project starts and everything works well.
But when I run docker-compose up from the solution folder I get
Cannot start service testproj30: failed to create endpoint testproj30_testproj30_1 on network nat: hnsCall failed in Win32: The specified port already exists.
(I have closed my VS solution). If I remove the port mapping in docker-compose.override.yaml I don't get this error message. I have done the most common tricks, like restarting the Docker service, the HNS service, and so on. Nothing helps.
I don't want to depend on all the VS voodoo from the project file and God knows what other files are involved.
I can run docker run -p 8080:80 -p 443:443 without any port problems.
I fixed a similar problem by removing some terminated containers and then pruning networks.
List terminated containers:
docker ps -a
Remove them (Cygwin syntax):
docker rm $(docker ps -aq)
You will get error messages for any containers that are still running.
Clean up your networks:
docker network prune
For me, the main cause was that the Docker kill process had skipped the port-releasing mechanism of my application.
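If pruning doesn't clear it on a Windows host, it may also be worth checking whether the OS itself has reserved the port range (a guess based on similar HNS reports, not something from the original answer); from an elevated prompt:

# shows TCP port ranges reserved by Windows/Hyper-V - if your mapped port
# falls inside one of these, pick a port outside the excluded ranges
netsh interface ipv4 show excludedportrange protocol=tcp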

How to give port number for docker container at run time?

I have written a Scala and Akka HTTP based REST API and created a Dockerfile to build a Docker image for it. My Dockerfile is as follows:
FROM maven:3.6.0-jdk-8-alpine AS MAVEN_TOOL_CHAIN
COPY pom.xml /tmp/parent/
COPY data-catalogue/pom.xml /tmp/parent/data-catalogue/
COPY data-catalogue/src /tmp/parent/data-catalogue/src/
WORKDIR /tmp/parent/data-catalogue/
RUN mvn package
FROM java:openjdk-8
COPY --from=MAVEN_TOOL_CHAIN /tmp/parent/data-catalogue/target/data-catalogue-1.0-SNAPSHOT.jar /opt/data-catalogue.jar
COPY data-catalogue/src/main/resources/logback.xml /opt/logback.xml
ENTRYPOINT ["java", "-Dlogging.config=/opt/logback.xml", "-jar", "/opt/data-catalogue.jar", "prod"]
CMD ["8080"]
Everything is good so far. I can run one container using this image.
Now the requirement is to run two containers from this image on the same Docker host. I have modified the REST API's main class so that it takes the port number it has to run on as a command-line argument. If no command-line argument is provided, it listens for requests on port 8080.
I would like to know how to provide the command-line parameter to my REST API when starting the container.
For example:
The first instance of the REST API should start/run on port 5555, so the argument 5555 should reach the main class of the REST API
The second instance of the REST API should start/run on port 1111, so the argument 1111 should reach the main class of the REST API
I have tried to use ENTRYPOINT and CMD for this, but my command-line argument simply does not reach the main class and the REST API starts on port 8080 only.
Docker port mapping is your answer.
Dockerizing your API and then handing it a different port to run on every time is exactly backwards - that's precisely what you don't want to do when going the Docker way.
Your API should attend to requests on whichever port you decide to EXPOSE in your Docker image; then, at run time, you just map any port you wish on your host to that inner port (inside its container, the API always listens on the same "internal" port).
So... what does that look like?
docker run -d --name api-1 -p 5555:8080 my/api
and then...
docker run -d --name api-2 -p 1111:8080 my/api
Now both instances are running on your host, and you're able to hit both of them, each on a different host port (even though internally they're using the same port number).
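That said, if you still want a run-time argument to reach the main class as described in the question: with an exec-form ENTRYPOINT like the one in this Dockerfile, anything appended after the image name in docker run replaces the CMD default (["8080"]) and is passed along to the entrypoint. This is standard docker run behavior, so a sketch like the following should work (image name reused from above):

# "5555" replaces CMD ["8080"] and is appended after the entrypoint's fixed arguments
docker run -d --name api-3 -p 5555:5555 my/api 5555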
Through Environment Variables
In Dockerfile:
ENV PORT 8080
You can override the above env on the command line by passing -e. Note that -e must come before the image name, otherwise Docker treats it as a command argument, e.g.:
docker run -d -e "PORT=5555" my_image
Consume the env in your application code.
So, for example, if you don't provide the env on the command line, your application code will receive 8080 as the PORT value; if you do override it, your application code will receive 5555 as the PORT value.
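One hedged way to consume it without changing the application is a small entrypoint script that expands the variable and passes it as the port argument the app already accepts (the script name entrypoint.sh is illustrative; the jar path is borrowed from the question's Dockerfile):

#!/bin/sh
# entrypoint.sh - hand $PORT (default 8080) to the app as its port argument
exec java -Dlogging.config=/opt/logback.xml -jar /opt/data-catalogue.jar prod "${PORT:-8080}"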
You can also set an ARG for your image, but two pitfalls apply: an ARG exists only at build time, so it must be copied into an ENV to be visible inside the running container, and the exec form CMD [ $MYPORT ] performs no variable expansion, so the shell form is needed (here in place of the ENTRYPOINT/CMD pair above):
ARG MYPORT=8080
ENV MYPORT=${MYPORT}
CMD java -Dlogging.config=/opt/logback.xml -jar /opt/data-catalogue.jar prod ${MYPORT}
Then build the image with the port baked in:
docker build --build-arg MYPORT=5000 .
(Exporting MYPORT on the host before docker run does not pass it into the container; use docker run -e MYPORT=5000 if you want to override the ENV at run time instead.)

Kubernetes Container Command

I'm working with Neo4j in Kubernetes.
For a showcase, I want to fill the Neo4j in the pods with initial data, which I can do with a Cypher file I have in the /bin folder, using the cypher-shell.
So basically I start the container and run cat bin/initialData.cypher | bin/cypher-shell.
I've validated that this works by running it in a bash session opened with kubectl exec -it <pod> /bin/bash.
However, no matter how I try to map this to spec.containers.command, it fails.
Currently my best guess is:
spec:
  containers:
    command:
      - /bin/bash
      - -c
      - |
        cd bin
        ls
        cat initialData.cypher | cypher-shell
which does not work. It displays the ls correctly but throws a "connection refused" afterwards, and I have no idea where it's coming from.
edit: Updated
Your spec is valid, but the syntax was wrong.
Try it like this:
spec:
  containers:
    command: ["/bin/bash"]
    args: ["-c", "cat import/initialData.cypher | bin/cypher-shell"]
Update:
In your neo4j.conf you have to uncomment the lines related to using the neo4j-shell
# Enable a remote shell server which Neo4j Shell clients can log in to.
dbms.shell.enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces).
dbms.shell.host=127.0.0.1
# The port the shell will listen on, default is 1337.
dbms.shell.port=1337
Exec seems like the better way to handle this but you wouldn’t usually use both command and args. In this case, probably just put the whole thing in command.
I've found out what my problem was.
I did not realize that the command is not tied to the initialisation lifecycle, meaning it was executed before Neo4j had started inside the container.
Basically, using the command was the wrong approach for me.
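For anyone hitting the same ordering problem, one hedged alternative (not from the original thread) is to leave the container command alone and seed the data from outside once the pod is ready:

# wait for the pod to become Ready, then pipe the Cypher file into cypher-shell
kubectl wait --for=condition=ready pod/<pod>
kubectl exec -it <pod> -- /bin/bash -c 'cat bin/initialData.cypher | bin/cypher-shell'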

systemctl from inside docker container fails with D-Bus connection error

I have set up a Docker container based on OpenSuse 12, installed some additional files and copied some installer binaries into the container. So far, everything is fine.
From inside a running image of the container I now need to run the aforementioned setup program, but this needs uuid.socket up and running - uuid.socket in turn needs systemctl to work correctly, and that fails with an error like this:
hxehost:/usr/sap/SRCFiles # systemctl
Failed to get D-Bus connection: Unknown error -1
I started the docker container like this:
docker run -h hxehost -i -t f3096b0aa964 /bin/bash
This, according to some postings, should start a machine container as opposed to an application container.
Can anyone tell me what I'm doing wrong here??? How do I get systemctl to work inside a docker container?
I tried to start the container with this command, which according to the linked hints should do the trick, but to no avail:
docker run --privileged --rm -ti -e 'container=docker' -h hxehost --network="bridge" --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro siliconchris/hxe:v0.0.2 /bin/bash
If I do this, systemctl still gives the exact same error.
If I start /sbin/init instead of /bin/bash, I can see that quite a lot of services are started (some, like wicked, login and module, fail). In the end, the container presents me with a login. After login, I can now execute systemctl and it shows all services with their respective states.
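Concretely, that means running the same command as before, only with /sbin/init as the process:

docker run --privileged --rm -ti -e 'container=docker' -h hxehost --network="bridge" --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro siliconchris/hxe:v0.0.2 /sbin/init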
Now my next question is: IS THIS APPROACH FEASIBLE AT ALL???
Best regards,
Chris
You may find the repo for this image at SAP HANA Express Edition inside docker.
Most current Linux systems depend on systemd running, and systemctl sends its requests to it. However, most applications installed easily when I replaced the systemctl binary with a script that just interprets start/stop/status/enable commands. As another benefit, the resulting image no longer needs those complicated startup commands to get systemd mapped into the container. Maybe that would help you? Please have a look at the docker-systemctl-replacement.
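A minimal sketch of that swap, assuming the interpreter script (named systemctl3.py in the docker-systemctl-replacement repo, but verify against the repo) has been copied into the image or build context:

# replace the real systemctl with the interpreter script
cp systemctl3.py /usr/bin/systemctl
chmod +x /usr/bin/systemctl
# services can now be driven without a running systemd or D-Bus;
# uuidd is a guess at the service behind uuid.socket from the question
systemctl enable uuidd
systemctl start uuidd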