dotnet core API app does not keep running on Kubernetes

I'm deploying a dotnet core app into a Kubernetes cluster and I'm getting the error "Unable to start Kestrel".
The Dockerfile works fine on my local machine.
Unable to start Kestrel.
System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found.
To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.
   at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, Action`1 configureOptions)
   at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`2 createBinding)
   at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
Unhandled Exception: System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found.
To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.
   at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, Action`1 configureOptions)
   at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions)
My Dockerfile:
[...build step]
FROM microsoft/dotnet:2.1-aspnetcore-runtime
COPY --from=build-env /app/out ./app
ENV PORT=5000
ENV ASPNETCORE_URLS=http://+:${PORT}
WORKDIR /app
EXPOSE $PORT
ENTRYPOINT [ "dotnet", "Gateway.dll" ]
I expected the application to start successfully, but I'm getting this "Unable to start Kestrel" error.
[UPDATE]
I've removed the HTTPS port from the app and tried again without HTTPS, but now the application just starts and stops without any error or warning. Container log below:
Running locally using dotnet run, or building the image and running it in a container, everything works. The application just shuts down in Kubernetes.
I am using dotnet core 2.2.
[UPDATE]
I've generated a cert, added it to the project, and set it up in Kestrel, and I got the same result. On localhost using the Docker image it works, but in Kubernetes (Google Cloud) it just shuts down immediately after starting.
localhost:
$ docker run --rm -it -p 5000:5000/tcp -p 5001:5001/tcp juooo:latest
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {f7808ac5-0a0d-47d0-86cb-c605c2db84a3} may be persisted to storage in unencrypted form.
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Overriding address(es) 'https://+:5001, http://+:5000'. Binding to endpoints defined in UseKestrel() instead.
Hosting environment: Production
Content root path: /app
Now listening on: https://0.0.0.0:5001
Application started. Press Ctrl+C to shut down.

I found an event log with a Kubernetes error saying that Kubernetes was unable to hit (:5000/). So I tried creating a controller that handles the application root (because it's an API, it doesn't have a root route like a web app does), and it worked.
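An alternative to adding a root controller is to point the Kubernetes probes at a path the API actually serves, so the health check no longer hits "/". The fragment below is only a sketch with assumed names; the container name, port, and /api/health path are illustrations, not details from the question:
containers:
  - name: gateway
    ports:
      - containerPort: 5000
    readinessProbe:
      httpGet:
        path: /api/health
        port: 5000
    livenessProbe:
      httpGet:
        path: /api/health
        port: 5000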

The problem seems to be that the SSL certificate is not configured correctly when the Docker image is created. On a dev machine it will use the developer certificate, but on other machines the certificate has to be stored somewhere and Kestrel has to be told where to find it. Check this
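One way to do that, sketched here with assumed file names and an assumed password rather than anything from this thread, is to copy a PFX into the image and point Kestrel's default HTTPS certificate at it through configuration environment variables (ASP.NET Core 2.1+ reads the Kestrel:Certificates:Default configuration section for this):
# Hypothetical additions to the Dockerfile; certificate path and password are placeholders.
COPY ./certs/gateway.pfx /https/gateway.pfx
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/https/gateway.pfx
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=changeit
In a cluster, the PFX and its password would more sensibly come from a Kubernetes Secret mounted into the pod than be baked into the image.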

I am pretty sure you need to open up a firewall rule to run on any port other than 80, something like:
gcloud compute firewall-rules create test-node-port --allow tcp:5000
Taken from the Kubernetes how-to located here: https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps

Related

GitLab Runner won't connect to other ec2, via scp ERROR: debug1: read_passphrase: can't open /dev/tty: No such device or address

Currently, I'm trying to use a Docker image in my GitLab CI file in order to connect to my production server and overwrite my code in production on deployment. While I can ssh from my local machine with the private key given, whenever I try to copy the private key into a variable and connect, I consistently get this error:
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
...
debug1: read_passphrase: can't open /dev/tty: No such device or address
Host key verification failed.
lost connection
I've verified that the /dev/tty address exists on both machines and that my .pem can be read correctly on the GitLab runner after copying. I've set appropriate permissions with chmod and have also tried multiple permutations of calling the scp script. I'm currently running my connection within the before_script of my gitlab.yml file to avoid the delay of building the Docker images within my file; the relevant portion is included below.
EDIT: /dev/tty also has the correct permissions. I've viewed the previous Stack Overflow posts related to this issue, and they either weren't relevant to the problem or weren't the solution.
image: docker:19.03.5
services:
- docker:19.03.1-dind
before_script:
- docker info
- apk update
- apk add --no-cache openssh
- touch $SSH_KEY_NAME
- echo "$SSH_KEY" > "./$SSH_KEY_NAME"
- chmod 700 $SSH_KEY_NAME
- ls -la /dev/tty
- scp -v -P 22 $SSH_KEY_NAME -i $SSH_KEY_NAME $PROD_USER@$SERVER_URL:.
Apologies if this feels dumb, but I have little experience with the technical details of setting up a private key from another machine. Currently I'm unsure whether I need to link the private key within my GitLab runner in a specific way, or whether the echo isn't saving the .pem as a valid private key. My inbound rule for the AWS instance allows all traffic on port 22, and copying this key and connecting from my PC works fine. It's just the runner that has problems. Thanks for your help!
The best solution I found for this is to either run an Ubuntu GitLab image and manually call Docker inside it with a volume tied to SSH, or use an AWS instance with a password stored in GitLab secrets if you're set on the dind image in GitLab. Neither is truly optimal, but because containerization is so effective at isolation, you have to resolve it with one of these two methods.
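As a rough illustration of the first option (an Ubuntu-based job that relies on SSH material mounted into the job container by the runner), the job could look something like the fragment below; the image tag, file names, and the volume path mentioned in the comment are assumptions, not details from this thread:
# .gitlab-ci.yml sketch (hypothetical)
deploy:
  image: ubuntu:20.04
  before_script:
    - apt-get update && apt-get install -y openssh-client
  script:
    # /root/.ssh is expected to be provided by the runner, e.g. via
    # volumes = ["/home/gitlab-runner/.ssh:/root/.ssh:ro"] in the runner's config.toml
    - scp -i /root/.ssh/id_rsa -o StrictHostKeyChecking=accept-new ./deploy.tar.gz "$PROD_USER@$SERVER_URL:."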

docker-compose 'ports' mapping in development vs travisCI

I have a development environment using docker-compose, it has 5 services:
db (postgresql)
redis
celery
celery-beat
web (a django web app - development is occurring here)
In development, I run the top four in containers with
docker-compose up db redis celery celery-beat
These four containers can connect to each other no problem.
However, while I'm working on the web app, I need to run it locally so I can get live updates and debug. Running locally, though, the web app can't connect to the containers, so I need to map the ports on the containers, e.g.:
db:
ports:
- 5432:5432
so that my locally running web app can connect with them.
However, if I then push my code to github, TravisCI fails it with this error:
ERROR: for db Cannot start service db: b'driver failed programming external connectivity on endpoint hackerspace_db_1 (493e7fb9e53f551b3b1eea35f9e2baf5725e9077fc642d8121891cab31b34373): Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use'
ERROR: Encountered errors while bringing up the project.
The command "docker-compose run web sh -c "python src/manage.py test src/"" exited with 1.
TravisCI passes without the port mapping, but I have to develop with port mapping.
How can I get around this so that they both work with the same settings? I'm willing to try different workflows, as I'm new to docker and containers and trying to find my way around.
I've tried:
Developing in a container with Visual Studio Code's Remote - Containers extension, but there doesn't seem to be a way to view the debug log/output
Finding a parameter to add to the docker-compose up ... that would map the ports when I run them, but there doesn't seem to be an option for this.
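One pattern that is sometimes used for exactly this split, offered here only as an untested sketch rather than something from the thread, is to keep the host port mappings in a docker-compose.override.yml that exists only on the development machine (e.g. git-ignored): docker-compose merges it with docker-compose.yml automatically, while CI, which never has the file, keeps the unmapped configuration.
# docker-compose.override.yml (local only, not committed) -- hypothetical sketch
version: "3"
services:
  db:
    ports:
      - "5432:5432"
  redis:
    ports:
      - "6379:6379"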

exposing api via secure gateway

I want to expose one blue zone API to external customers via Secure Gateway. I am using Docker as the client, but I always get the errors below (the API server is in the DST environment). Can anyone help me with this? I have added the host name and port to the ACL file, and I also tried adding --allow when I run Docker, which disables 'deny all'.
[INFO] (Client ID d83dty5MIJA_rVI) Connection #2 is being established to ralbz001234.cloud.dst.ibm.com:8888
[2017-09-06 20:59:19.210] [ERROR] (Client ID d83dty5MIJA_rVI) Connection #1 to destination ralbz001234.cloud.dst.ibm.com:8888 had error: EHOSTUNREACH
When I add the Secure Gateway, for the 'resource located' field I chose On-Premises. Is this correct?
EHOSTUNREACH is an issue with the underlying system not being able to find a route to the host you've provided. From the machine hosting the docker client, are you able to access the resource located at ralbz001234.cloud.dst.ibm.com:8888? If the host is able to connect, then you could try adding --net=host to the docker run command:
docker run --net=host -it ibmcom/secure-gateway-client <gatewayID> -t <security_token> --allow
If the host is unable to connect as well, then this post may shed more light on routing.
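To answer the "is the host able to reach it" part, a quick check from the machine running the client (and then from inside the container) could look like this; the host and port are taken from the log above, and the tools themselves are just suggestions:
nc -vz ralbz001234.cloud.dst.ibm.com 8888
curl -v telnet://ralbz001234.cloud.dst.ibm.com:8888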

Install certificate on Centos 7 for docker registry access

We currently have a Docker registry set up with security enabled. Normally, in order to access it from a developer's perspective, I have to log in with docker login --username=someuser --password=somepassword --email user@domain.com https://docker-registry.domain.com.
However, since I am currently trying to do an automated deployment of a Docker container in the cloud, one of the operations, the docker pull command, fails because the login was not performed (it works if I add the login to the template, but that's bad).
It was suggested that I use the certificate (.crt file) to allow the pull to be done. I tried installing the certificate using the steps explained here: https://www.linode.com/docs/security/ssl/ssl-apache2-centos
But it does not seem to work; I still have to log in manually in order to be able to perform my docker pull from the registry.
Is there a way I can replace the login command by the use of the certificate?
As I see it, that's the wrong URL for SSL authentication between the Docker server and a private registry server.
You can follow this:
Running a domain registry
While running on localhost has its uses, most people want their registry to be more widely available. To do so, the Docker engine requires you to secure it using TLS, which is conceptually very similar to configuring your web server with SSL.
Get a certificate
Assuming that you own the domain myregistrydomain.com, and that its DNS record points to the host where you are running your registry, you first need to get a certificate from a CA.
Create a certs directory:
mkdir -p certs
Then move and/or rename your crt file to: certs/domain.crt, and your key file to: certs/domain.key.
Make sure you stopped your registry from the previous steps, then start your registry again with TLS enabled:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
You should now be able to access your registry from another docker host:
docker pull ubuntu
docker tag ubuntu myregistrydomain.com:5000/ubuntu
docker push myregistrydomain.com:5000/ubuntu
docker pull myregistrydomain.com:5000/ubuntu
Gotcha
A certificate issuer may supply you with an intermediate certificate. In this case, you must combine your certificate with the intermediate's to form a certificate bundle. You can do this using the cat command:
cat domain.crt intermediate-certificates.pem > certs/domain.crt
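For the CentOS 7 part of the title, the usual way to make the Docker daemon trust the registry's certificate is to drop it into /etc/docker/certs.d under the registry's host name; note that this handles TLS trust only and does not by itself replace docker login authentication. Roughly (using the registry host from the question):
sudo mkdir -p /etc/docker/certs.d/docker-registry.domain.com
sudo cp domain.crt /etc/docker/certs.d/docker-registry.domain.com/ca.crt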

GlassFish WAR file deployment to a non-default port

I am attempting to deploy a WAR file (Oracle's APEX Listener) to a GlassFish 3.1.2.2 server installed on an RHEL server (I am also seeing the same issues on an Ubuntu server at home).
I used the following command to create the domain:
$GLASSFISH_HOME/bin/asadmin create-domain --portbase 8100 myDomain
[I am also creating multiple domains on the same GlassFish server (one GF instance with multiple domains) using values of 8200, 8300, and 8400 for the portbase value and using different domain names.]
I then start the domain using:
$GLASSFISH_HOME/bin/asadmin start-domain myDomain
Next, I attempt to deploy the APEX.WAR file using:
$GLASSFISH_HOME/bin/asadmin deploy --contextroot apex apex.war
But, I get the following error:
Remote server does not listen for requests on [localhost:4848]. Is the server up?
Unable to get remote commands. Closest matching local command(s):
    help
Command deploy failed.
I have also used the following commands with the same result:
$GLASSFISH_HOME/bin/asadmin deploy apex.war
$GLASSFISH_HOME/bin/asadmin deploy --target myDomain apex.war
$GLASSFISH_HOME/bin/asadmin deploy --target domain apex.war
And I get the same error each time.
I can deploy the file using the admin gui, but this is for a customer installation and I would really like to do as much as possible from the bash shell script I have created.
I am also installing the Java 1.7.0_45 JDK and modifying the $GLASSFISH_HOME/config/asenv.conf file to include AS_JAVA=
The error is actually correct, because the admin port for this domain is 8148 (portbase 8100 + 48). But how do I get asadmin to "know" to use 8148 instead of 4848?
I have also tried this by:
$GLASSFISH_HOME/bin/asadmin create-domain --adminport 8148
--domainproperties http.ssl.port=8152
but this gets the same results as above.
Thanks for reading this tome of a post and any info on how to fix this would be greatly appreciated!
/dave
You can specify the port to which asadmin should connect as a parameter like this:
asadmin --port 4949 start-domain
If this isn't enough you can even specify the hostname with --host.
Have a look at the official documentation to see all possible parameters.
I got the same error; you should do it like this:
$GLASSFISH_HOME/bin/asadmin --port 8148 deploy apex.war
Then enter the username and password; the default user is admin and the default password is adminadmin.
Good luck!