I am having the following issue.
I am new to GCP/Cloud. I have created a cluster in GKE, deployed our application there, and installed nginx as a pod in the cluster. Our company has an authorized SSL certificate, which I have uploaded to Certificates in GCP.
In the DNS service, I have created an A record that matches the IP of the Ingress.
When I call the URL in the browser, it still shows that the website is insecure, with the message "Kubernetes Ingress controller fake certificate".
I used the following guide: https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#console_1
However, I am not able to execute step 3, "Associate an SSL certificate with a target proxy", because it asks for "URL Maps" and I am not able to find that in the GCP Console.
Has anybody gone through the same issue? If anybody could help me out, that would be great.
Thanks and regards,
I was able to fix this problem by adding an extra argument to the ingress-nginx-controller deployment.
For context: my TLS secret was in the default namespace and was named letsencrypt-secret-prod, so I wanted to add it as the default SSL certificate for the Nginx controller.
My first solution was to edit the deployment.yaml of the Nginx controller and add, at the end of the containers[0].args list, the following line:
- '--default-ssl-certificate=default/letsencrypt-secret-prod'
Which made that section of the yaml look like this:
containers:
  - name: controller
    image: >-
      k8s.gcr.io/ingress-nginx/controller:v1.2.0-beta.0@sha256:92115f5062568ebbcd450cd2cf9bffdef8df9fc61e7d5868ba8a7c9d773e0961
    args:
      - /nginx-ingress-controller
      - '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller'
      - '--election-id=ingress-controller-leader'
      - '--controller-class=k8s.io/ingress-nginx'
      - '--ingress-class=nginx'
      - '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller'
      - '--validating-webhook=:8443'
      - '--validating-webhook-certificate=/usr/local/certificates/cert'
      - '--validating-webhook-key=/usr/local/certificates/key'
      - '--default-ssl-certificate=default/letsencrypt-secret-prod'
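For reference, that edit can be made directly against the live Deployment; a minimal sketch, assuming the controller Deployment is named ingress-nginx-controller and lives in the ingress-nginx namespace:
kubectl edit deployment ingress-nginx-controller -n ingress-nginx
# append the --default-ssl-certificate flag to the container args as shown above, then save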
But I was using the helm chart: ingress-nginx/ingress-nginx, so I wanted this config to be in the values.yaml file of that chart so that I could upgrade it later if necessary.
So, reading the values file, I replaced the controller.extraArgs attribute, which looked like this:
extraArgs: {}
with this:
extraArgs:
  default-ssl-certificate: default/letsencrypt-secret-prod
This restarted the deployment with the argument in the correct place.
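To roll that change out with Helm instead of editing the live Deployment, the values override can be applied with an upgrade; a sketch, assuming the release is named ingress-nginx, was installed from the ingress-nginx/ingress-nginx chart into the ingress-nginx namespace, and that the override above lives in values.yaml:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f values.yaml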
Now I can use ingresses without specifying the tls.secretName for each of them, which is awesome.
Here's an example ingress that is working for me with HTTPS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-ingress-name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
  - http:
      paths:
      - path: /some-prefix
        pathType: Prefix
        backend:
          service:
            name: some-service-name
            port:
              number: 80
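Once the default certificate is in place, you can check from outside the cluster that the controller no longer serves the "Kubernetes Ingress Controller Fake Certificate"; a quick sketch with openssl (the hostname is a placeholder for your own domain):
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer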
You can save your SSL/TLS certificate in a Kubernetes Secret and attach it to the Ingress.
You need to configure the tls block in the Ingress; don't forget to add the ingress.class details to the Ingress as well.
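The Secret itself is typically created from the certificate and key files; a minimal sketch, assuming the files are named tls.crt and tls.key and that the Secret should live in the same namespace as the Ingress:
kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
The Ingress below then references that secret in its tls block: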
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /
  tls:
  - hosts:
    - mydomain.com
    secretName: my-tls-secret
You can read more at : https://medium.com/avmconsulting-blog/how-to-secure-applications-on-kubernetes-ssl-tls-certificates-8f7f5751d788
You might be seeing something like this in the browser:
That comes from the ingress controller: either the wrong certificate is attached to the ingress, or the ingress controller is serving its default fake certificate.
In my case, I had upgraded to the new TLS certificate in the cattle-system namespace but not in my own namespace, so the ingress somehow kept serving the Kubernetes fake certificate.
Solution: upgrade all the old certificates to the new one (only the ingress/user certificates; WARNING - replacing system certificates may damage your system).
Also, don't forget to check that the Subject Alternative Name actually contains the same value as the CN. If it does not, the certificate is not valid, because the industry is moving away from CN. I just learned this now.
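If you want to verify this, the SAN entries can be read straight off the certificate; a sketch, assuming the certificate is in a local file called tls.crt:
openssl x509 -in tls.crt -noout -text | grep -A1 'Subject Alternative Name'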
Related
I am fairly new to Kubernetes and have just deployed my first cluster to IBM Cloud. When I created the cluster, I got a dedicated ingress subdomain, which I will refer to as <long-k8subdomain>.cloud for the scope of this post. This subdomain works for my app; for example, <long-k8subdomain>.cloud/ping works from my browser/curl just fine and I get the expected JSON response back.
But if I add this subdomain to a CNAME record in my domain provider's DNS settings (I have used Bluehost and IBM Cloud Internet Services), I get a 404 response back from all routes. However, this response is the default nginx 404 (it says "nginx" under "404 Not Found"), so I believe the ingress load balancer is being reached but the request does not get routed correctly.
I am using Kubernetes version 1.20.12_1561 on VPC gen 2, and this is my ingress-config.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Host: <long-k8subdomain>.cloud";
spec:
  rules:
  - host: <long-k8subdomain>.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
I am pretty sure this problem is due to the annotations. Maybe I am using the wrong ones, or I do not have enough. Ideally, I would like something like api.<mydomain>.com/ to route correctly. I have also read a little bit about default backends, but I have not dived too deep into that just yet. Any help would be greatly appreciated, as I have spent multiple hours trying to fix this.
Some sources I have used:
https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning
https://cloud.ibm.com/docs/containers?topic=containers-ingress-types
https://cloud.ibm.com/docs/containers?topic=containers-comm-ingress-annotations#annotations
Note: the reason I have the second annotation is that, for some reason, requests without that header were not being routed correctly. It was part of my debugging process; I am not sure whether that annotation actually solves anything, so I just left it in for now.
For the NGINX ingress controller to route requests for your own domain's CNAME record to the service instead of the IBM Cloud one, you need a rule in the ingress where the host identifies your domain.
For instance, if your domain's DNS entry is api.example.com, then change the resource YAML to:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-resource
  annotations:
    kubernetes.io/ingress.class: "public-iks-k8s-nginx"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
You should not need the second annotation for this to work.
If you want both of the hosts to work, then you could add a second rule instead of replacing host in the existing one.
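A sketch of what that two-rule variant could look like (the IKS subdomain stays a placeholder, and the service name is the one from the question):
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80
  - host: <long-k8subdomain>.cloud
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-name
            port:
              number: 80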
I'm struggling to expose a service in an AWS cluster to the outside and access it via a browser. Since my previous question hasn't drawn any answers, I decided to simplify the issue in several aspects.
First, I've created a deployment which should work without any configuration. Based on this article, I did:
kubectl create namespace tests
created the file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
created ingress.yaml and applied it (kubectl apply -f .\probes\ingress.yaml -n tests)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
spec:
  rules:
  - host: test.projectname.org
    http:
      paths:
      - pathType: Prefix
        path: "/test"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
  - host: test2.projectname.org
    http:
      paths:
      - pathType: Prefix
        path: "/test2"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
  ingressClassName: nginx
Second, I can see that DNS actually points to the cluster and the ingress rules are applied:
if I open http://test.projectname.org/test or any irrelevant path (http://test.projectname.org/test3), I'm shown NET::ERR_CERT_AUTHORITY_INVALID, but
if I use "open anyway" in the browser, irrelevant paths give ERR_TOO_MANY_REDIRECTS while http://test.projectname.org/test gives Cannot GET /test
Now, TLS issues aside (those deserve a separate question), why do I get Cannot GET /test? It looks like the ingress controller (ingress-nginx) got the rules (otherwise it wouldn't discriminate between paths; that's why I don't show the DNS settings, although they are described in the previous question), but instead of showing the simple hello-kubernetes page at /test it returns this plain 404 message. Why is that? What could possibly go wrong? How do I debug this?
Some debug info:
kubectl version --short tells Kubernetes Client Version is v1.21.5 and Server Version is v1.20.7-eks-d88609
kubectl get ingress -n tests shows that hello-kubernetes-ingress exists indeed, with nginx class, 2 expected hosts, address equal to that shown for load balancer in AWS console
kubectl get all -n tests shows
NAME                                          READY   STATUS    RESTARTS   AGE
pod/hello-kubernetes-first-6f77d8ff99-gjw5d   1/1     Running   0          5h4m
pod/hello-kubernetes-first-6f77d8ff99-ptwsn   1/1     Running   0          5h4m
pod/hello-kubernetes-first-6f77d8ff99-x8w87   1/1     Running   0          5h4m

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/hello-kubernetes-first   ClusterIP   10.100.18.189   <none>        80/TCP    5h4m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-kubernetes-first   3/3     3            3           5h4m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-kubernetes-first-6f77d8ff99   3         3         3       5h4m
ingress-nginx was installed before me via the following chart:
apiVersion: v2
name: nginx
description: A Helm chart for Kubernetes
type: application
version: 4.0.6
appVersion: "1.0.4"
dependencies:
- name: ingress-nginx
  version: 4.0.6
  repository: https://kubernetes.github.io/ingress-nginx
and the values overrides applied with the chart differ from the original ones mostly (well, those got updated since the installation) in that extraArgs: default-ssl-certificate: "nginx-ingress/dragon-family-com" is uncommented
PS To answer Andrew, I indeed tried to set up HTTPS, but it seemingly didn't help, so I hadn't included what I tried in the initial question. Yet, here's what I did:
installed cert-manager, currently without a custom chart: kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
based on cert-manager's tutorial and an SO question, created a ClusterIssuer with the following config:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-backoffice
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # use https://acme-v02.api.letsencrypt.org/directory after everything is fixed and works
    privateKeySecretRef: # this secret will be created in the namespace of cert-manager
      name: letsencrypt-backoffice-private-key
    # email: <will be used for urgent alerts about expiration etc>
    solvers:
    # TODO: add for each domain/second-level domain/*.projectname.org
    - selector:
        dnsZones:
        - test.projectname.org
        - test2.projectname.org
      # haven't made it to work yet, so switched to the simpler to configure http01 challenge
      # dns01:
      #   route53:
      #     region: ... # that of load balancer (but we also have ...)
      #     accessKeyID: <of IAM user with access to Route53>
      #     secretAccessKeySecretRef: # created that
      #       name: route53-credentials-secret
      #       key: secret-access-key
      #     role: arn:aws:iam::645730347045:role/cert-manager
      http01:
        ingress:
          class: nginx
and applied it via kubectl apply -f issuer.yaml
created 2 certificates in the same file and applied it again:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate
spec:
  secretName: tls-secret
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test.projectname.org
  dnsNames:
  - test.projectname.org
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate-2
spec:
  secretName: tls-secret-2
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test2.projectname.org
  dnsNames:
  - test2.projectname.org
made sure that the certificates are issued correctly (skipping the pain part, the result is: kubectl get certificates shows that both certificates have READY = true and both tls secrets are created)
figured that my ingress is in another namespace and TLS secrets referenced in an ingress spec can only live in the same namespace (haven't tried the wildcard certificate and the --default-ssl-certificate option yet), so I copied each of them to the tests namespace:
opened the existing secret, e.g. kubectl edit secret tls-secret-2, and copied the data and annotations
created an empty (Opaque) secret in tests: kubectl create secret generic tls-secret-2-copy -n tests
opened it (kubectl edit secret tls-secret-2-copy -n tests) and inserted the data and annotations (a one-liner sketch of this copy step is shown after the tls block below)
in ingress spec, added the tls bit:
tls:
- hosts:
  - test.projectname.org
  secretName: tls-secret-copy
- hosts:
  - test2.projectname.org
  secretName: tls-secret-2-copy
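As an aside, the manual per-secret copy described above (edit, copy data, recreate) can usually be collapsed into a single pipeline; a rough sketch, assuming the secret was issued in the default namespace and should land in tests (the fields stripped by sed are ones that shouldn't be re-applied):
kubectl get secret tls-secret-2 -n default -o yaml \
  | sed -e 's/namespace: default/namespace: tests/' \
        -e '/resourceVersion:/d' -e '/uid:/d' -e '/creationTimestamp:/d' \
  | kubectl apply -f -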
I hoped that this would help, but actually it made no difference (I get ERR_TOO_MANY_REDIRECTS for irrelevant paths, a redirect from http to https, NET::ERR_CERT_AUTHORITY_INVALID at https, and Cannot GET /test if I insist on getting to the page).
Since you've used your own answer to complement the question, I'll kind of answer all the things you asked, while providing a divide-and-conquer strategy for troubleshooting Kubernetes networking.
At the end I'll give you some nginx and IP answers.
This is correct
- host: test3.projectname.org
  http:
    paths:
    - pathType: Prefix
      path: "/"
      backend:
        service:
          name: hello-kubernetes-first
          port:
            number: 80
Breaking down troubleshooting with Ingress
DNS
Ingress
Service
Pod
Certificate
1. DNS
you can use the command dig to query the DNS
dig google.com
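For this case, the same check against the actual host, compared with the address the ingress reports, would look roughly like this (names as used in the question):
dig +short test.projectname.org
kubectl get ingress hello-kubernetes-ingress -n tests
# the dig result should point at the same load balancer address shown by the ingress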
Ingress
The ingress controller doesn't route based on the IP; it routes based on the Host header.
You can force a host using any tool that lets you set headers, like curl:
curl --header 'Host: test3.projectname.com' http://123.123.123.123 (your public IP)
Service
You can make sure that your service is working by creating an ubuntu/centos pod, using kubectl exec -it podname -- bash, and trying to curl your service from within the cluster.
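A minimal sketch of that in-cluster check, assuming the service and namespace from the question (the temporary pod name is arbitrary):
kubectl run tmp-shell --rm -it -n tests --image=ubuntu -- bash
# inside the pod:
apt-get update && apt-get install -y curl
curl http://hello-kubernetes-first.tests.svc.cluster.local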
Pod
You're getting this
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
This part, GET /test2, means that the request got the address from the DNS, went all the way from the internet, found your cluster, found your ingress controller, got through the service and reached your pod. Congrats! Your ingress is working!
But why is it returning 404?
The path that was passed to the service and from the service to the pod is /test2
Do you have a file called test2 that nginx can serve? Do you have an upstream config in nginx that has a test2 prefix?
That's why you're getting a 404 from nginx, not from the ingress controller.
Those IPs are internal; remember, the internet traffic ended at the cluster border, and now you're in an internal network. Here's a rough sketch of what's happening.
Let's say that you're accessing it from your laptop. Your laptop has the IP 192.168.123.123, but your home has the address 7.8.9.1, so when your request hits the cluster, the cluster sees 7.8.9.1 requesting test3.projectname.com.
The cluster looks for the ingress controller, which finds a suitable configuration and passes the request down to the service, which passes the request down to the pod.
So,
your router can see your private IP (192.168.123.123)
Your cluster(ingress) can see your router's IP (7.8.9.1)
Your service can see the ingress's IP (192.168.?.?)
Your pod can see the service's IP (192.168.14.57)
It's a game of pass around.
If you want to see the public IP in your nginx logs, you need to customize them to log the X-Real-IP header, which is usually where load balancers/ingresses/ambassadors/proxies put the actual requester's public IP.
Well, I haven't figured this out for ArgoCD yet (edit: I have now, but the solution is ArgoCD-specific), but for this test service it seems that path resolution is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it via an A entry to the load balancer (as an "alias" in the AWS console), and then added to the ingress a new rule with the / path, like this:
- host: test3.projectname.org
  http:
    paths:
    - pathType: Prefix
      path: "/"
      backend:
        service:
          name: hello-kubernetes-first
          port:
            number: 80
I've finally got the hello kubernetes thing on http://test3.projectname.org. I have succeeded with TLS after a number of attempts/research and some help in a separate question.
But I haven't succeeded with actual debugging: looking at kubectl logs -n nginx <pod name, see kubectl get pod -n nginx> doesn't really help me understand what path was passed to the service, and it is rather difficult to read (I can't even figure out where those IPs come from: they are not mine, not the LB's, and not the cluster IP of the service; nor do I understand what tests-hello-kubernetes-first-80 stands for – it's just a concatenation of namespace, service name and port; no object has such a name, including the ingress):
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
[tests-hello-kubernetes-first-80] [] 192.168.49.95:8080 144 0.000 404 <some hash>
Any more pointers on debugging will be helpful; also, suggestions regarding correct path rewriting for nginx-ingress are welcome.
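On the rewriting part, the usual nginx-ingress approach is the rewrite-target annotation with a capture group in the path; a sketch (not from the original setup), assuming the /test2 prefix should be stripped before the request reaches the pod:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: test2.projectname.org
    http:
      paths:
      - path: /test2(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
With this, a request for /test2/foo would reach the pod as /foo.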
I am trying to deploy 2 applications (behind 2 separate Deployment objects). I have 1 Service per Deployment, of type NodePort.
application1_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: application1-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: application1
  type: NodePort
application2_service.yaml is the exact same (except for name and run)
I use an Ingress to make the 2 services available,
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    networking.gke.io/managed-certificates: "my-certificate"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: "my.host.com"
    http:
      paths:
      - path: /*
        backend:
          serviceName: application1-service
          servicePort: 80
      - path: /application2/*
        backend:
          serviceName: application2-service
          servicePort: 80
I also create a ManagedCertificate object, to be able to handle HTTPS requests.
managed_certificate.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: my-certificate
spec:
  domains:
  - my.host.com
The weird thing here is that curl https://my.host.com/ works fine and I can access my service, but when I try curl https://my.host.com/application2/, I keep getting 404 Not Found.
Why is the root path working and not the other?
Additional info:
The ManagedCertificate is valid and works fine with /.
application1 and application2 are exactly the same app, and if I swap them in the ingress, the output is the same.
Thanks for your help!
EDIT:
Here is the 404 I get when I try to access application2
Don't know if it can help but here is also the part of the Ingress access logs showing the 404
I think you can't use the same port for 2 different applications, because this port is used on every node to route to one app.
From the docs:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in your case one app is already using port 80; you could try to use a different one for application 2.
I have replicated the problem without a domain and a managed certificate. For me it's working fine using the Load Balancer IP address.
(ingress-316204)$ curl http://<LB IP address>/v2/
Hello, world!
Version: 2.0.0
Hostname: web2-XXX
(ingress-316204)$ curl http://<LB IP address>/
Hello, world!
Version: 1.0.0
Hostname: web-XXX
The Ingress configuration seems to be correct as per the doc. Check whether curl https://<LB IP address>/application2/ is working or not; if it's working, then there might be some issue with the host name.
Check if you have updated the hosts file (/etc/hosts) with the line <LB IP address> my.host.com.
Check if the host, path and backend are configured correctly in the Load Balancer configuration.
If you are still having the problem, check that the port, nodePort and targetPort are configured correctly, or else share the output of 'kubectl describe ing my-ingress' for further investigation.
Answering my own question:
After searching for days, I ultimately found the reason for the problem.
Everything was fine with the cluster and the configs; my problem was in my Flask API.
All the URLs were like this one:
@app.route("/my_function")
So it was working fine on the root path with my.host.com/my_function, but when I was typing my.host.com/application1/my_function it wasn't working...
I just changed my app to:
@app.route("/application1/my_function")
Everything works fine now :) Hope it will help!
I have a K8s setup with traefik being exposed like this
kubernetes:
  ingressClass: traefik
service:
  nodePorts:
    http: 32080
serviceType: NodePort
Behind it, I forward some requests to different services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000
When the DNS is set to resolve directly to my IP, the application works fine over HTTP:
http://my-host.com:32080/my-first-path
But when someone adds SSL through AWS ALB / API Gateway, the application fails to be reached, with a 404 Not Found error.
The route is like this:
https://my-host.com/my-first-path
On the AWS side, they configured something like this:
https://my-host.com => SSL termination => forward everything to 43.43.43.43:32080
I think this fails because traefik is expecting http://my-host.com but not https://my-host.com, which leads to its failure to find the matching route? Or maybe at SSL termination time the hostname is lost, so traefik cannot find a route?
What should I do in this situation?
I am not very familiar with ALB, but what is probably happening is that the requests received by the load balancer contain the header Host: my-host.com, and when they get forwarded to your ingress controller, the header is replaced by Host: 43.43.43.43. If this is the case, I see 3 solutions:
ALB might be able to pass the original Host header to the target (you will have to check in the docs whether that's possible).
If the application behind your ingress doesn't check the Host header, you can write an ingress that doesn't match a specific host, i.e. one where the host field is simply not specified (see the sketch after this list).
If the name resolution works internally, you can define a name for your target, use this name in your ALB and in your ingress.
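A sketch of option 2, mirroring the manifest from the question but with the host field omitted, so the rule matches whatever Host header the ALB forwards (path and service are the ones from the question):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /my-first-path
        backend:
          serviceName: my-nodeJs-services
          servicePort: 3000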
Kubernetes dedicated cockroachdb node - accessing admin ui via traefik ingress controller fails - page isn't redirecting properly
I have a dedicated Kubernetes node running cockroachdb. The pods get scheduled and everything is set up. I want to access the Admin UI from a subdomain like so: cockroachdb.hostname.com. I have done this with the traefik dashboard and the ceph dashboard, so I know my ingress setup is working. I even have cert-manager running to have HTTPS enabled. I get the error from the browser that the page is not redirecting properly.
Do I have to specify the host name somewhere special?
I have tried adding this with no success: --http-host cockroachdb.hostname.com
This dedicated node has its own public ip which is not mapped to hostname.com. I think I need to change a setting in cockroachdb, but I don't know which because I am new to it.
Does anyone know how to publish admin UI via an ingress?
EDIT01: Added ingress and service config files
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-public
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: http01
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/ssl-temporary-redirect: "true"
    ingress.kubernetes.io/ssl-host: "cockroachdb.hostname.com"
    traefik.frontend.rule: "Host:cockroachdb.hostname.com,www.cockroachdb.hostname.com"
    traefik.frontend.redirect.regex: "^https://www.cockroachdb.hostname.com(.*)"
    traefik.frontend.redirect.replacement: "https://cockroachdb.hostname.com/$1"
spec:
  rules:
  - host: cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  - host: www.cockroachdb.hostname.com
    http:
      paths:
      - path: /
        backend:
          serviceName: cockroachdb-public
          servicePort: http
  tls:
  - hosts:
    - cockroachdb.hostname.com
    - www.cockroachdb.hostname.com
    secretName: cockroachdb-secret
Service:
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: grpc
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
EDIT02:
I can access the Admin UI page now, but only by going to the external IP address of the server on port 8080. I think I need to tell my server that its IP address is mapped to the correct subdomain?
EDIT03:
On both scheduled traefik-ingress pods the following logs are created:
time="2019-04-29T04:31:42Z" level=error msg="Service not found for default/cockroachdb-public"
Your referencing looks good on the ingress side. You are using quite a few redirects; unless you really know what each one is accomplishing, don't use them, as you might end up in an infinite loop of redirects.
You can take a look at the following logs and methods to debug:
Run kubectl logs <traefik pod> and see the last batch of logs.
Run kubectl get service, and from what I hear, this is likely your main issue. Make sure your service exists in the default namespace.
Run kubectl port-forward svc/cockroachdb-public 8080:8080 and try connecting to it through localhost:8080 and see terminal for potential error messages.
Run kubectl describe ingress cockroachdb-public and look at the events, this should give you something to work with.
Try accessing the service from another pod you have running: ping cockroachdb-public.default.svc.cluster.local and see if it resolves the IP address.
Take a look at your clusterrolebindings and serviceaccount, it might be limited and not have permission to list services in the default namespace: kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default