Ingress doesn't redirect to my service after AppID on IKS - ibm-cloud

I have an IKS cluster with App ID bound to it. I have a problem with routing a Node.js app through the ingress. All other apps work with both App ID and the ingress, but this one returns a 500 Internal Server Error when redirecting back from App ID. The service works fine when exposed as a NodePort and accessed on the server address and that nodePort.
When describing the ingress, I only get successful results back:
Events:
  Type    Reason   Age  From                                                              Message
  ----    ------   ---  ----                                                              -------
  Normal  Success  58m  public-cr9c603a564cff4a27adb020dd40ceb65e-alb1-59fb8fc894-s59nd   Successfully applied ingress resource.
My ingress looks like:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.bluemix.net/appid-auth: bindSecret=binding-raven3-app-id namespace=default
        requestType=web serviceName=r3-ui
      ingress.bluemix.net/rewrite-path: serviceName=r3-ui rewrite=/;
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.bluemix.net/appid-auth":"bindSecret=binding-raven3-app-id namespace=default requestType=api serviceName=r3-ui [idToken=false]"},"name":"myingress","namespace":"default"},"spec":{"rules":[{"host":"*host*","http":{"paths":[{"backend":{"serviceName":"r3-ui","servicePort":3000},"path":"/"}]}}],"tls":[{"hosts":["mydomain"],"secretName":"mytlssecret"}]}}
    creationTimestamp: "2019-06-20T10:31:57Z"
    generation: 21
    name: myingress
    namespace: default
    resourceVersion: "24140"
    selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myingress
    uid: a15f74aa-9346-11e9-a9bf-f63d33811ba6
  spec:
    rules:
    - host: *host*
      http:
        paths:
        - backend:
            serviceName: r3-ui
            servicePort: 3000
          path: /
    tls:
    - hosts:
      - *host*
      secretName: raven3
  status:
    loadBalancer:
      ingress:
      - ip: 169.51.71.141
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
and my service looks like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-06-20T09:25:30Z"
  labels:
    app: r3-ui
    chart: r3-ui-0.1.0
    heritage: Tiller
    release: r3-ui
  name: r3-ui
  namespace: default
  resourceVersion: "23940"
  selfLink: /api/v1/namespaces/default/services/r3-ui
  uid: 58ff6604-933d-11e9-a9bf-f63d33811ba6
spec:
  clusterIP: 172.21.180.240
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app: r3-ui
    release: r3-ui
  sessionAffinity: None
  type: ClusterIP
status:
What's weird is that I'm getting different results on port 80 and on port 443: on port 80 I get HTTP error 500, and on port 443 I get Invalid Host header.

Are you using HTTPS to access your application? For security reasons, App ID authentication only supports back ends with TLS/SSL enabled.
If you are using SSL and still having trouble, could you kindly share your Ingress and application logs so we can figure out what went wrong?
Thanks.
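For reference, a minimal sketch of an Ingress that satisfies both requirements (HTTPS termination plus the appid-auth annotation), assembled entirely from the manifest in the question; the host, secret, and service names are the asker's placeholders:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    # App ID auth for the r3-ui backend; values come from the question's manifest
    ingress.bluemix.net/appid-auth: bindSecret=binding-raven3-app-id namespace=default requestType=web serviceName=r3-ui
spec:
  tls:
  - hosts:
    - mydomain
    secretName: mytlssecret
  rules:
  - host: mydomain
    http:
      paths:
      - path: /
        backend:
          serviceName: r3-ui
          servicePort: 3000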

Related

Ingress returns 404

Rancher ingress returns 404 for the service.
Setup: I have 6 VMs: one Rancher server at x.x.x.51 (where the DNS name domain.company points, with TLS) and 5 cluster VMs (one master and 4 workers, x.x.x.52-56).
My service, gvm-gsad, running in the gvm namespace:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    meta.helm.sh/release-name: gvm
    meta.helm.sh/release-namespace: gvm
  creationTimestamp: "2021-11-15T21:14:21Z"
  labels:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gvm-gsad
    app.kubernetes.io/version: "21.04"
    helm.sh/chart: gvm-1.3.0
  name: gvm-gsad
  namespace: gvm
  resourceVersion: "3488107"
  uid: c1ddfdfa-3799-4945-841d-b6aa9a89f93a
spec:
  clusterIP: 10.43.195.239
  clusterIPs:
  - 10.43.195.239
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: gsad
    port: 80
    protocol: TCP
    targetPort: gsad
  selector:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/name: gvm-gsad
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
My ingress configuration (the ingress controller is the default one from Rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.16.12.53"],"port":443,"protocol":"HTTPS","serviceName":"gvm:gvm-gsad","ingressName":"gvm:gvm","hostname":"dtl.miproad.ad","path":"/gvm","allNodes":true}]'
  creationTimestamp: "2021-11-16T19:22:45Z"
  generation: 10
  name: gvm
  namespace: gvm
  resourceVersion: "3508472"
  uid: e99271a8-8553-45c8-b027-b259a453793c
spec:
  rules:
  - host: domain.company
    http:
      paths:
      - backend:
          service:
            name: gvm-gsad
            port:
              number: 80
        path: /gvm
        pathType: Prefix
  tls:
  - hosts:
    - domain.company
status:
  loadBalancer:
    ingress:
    - ip: x.x.x.53
    - ip: x.x.x.54
    - ip: x.x.x.55
    - ip: x.x.x.56
When I access it with https://domain.company/gvm, I get a 404.
However, when I change the service to NodePort, I can access it at x.x.x.52:PORT normally, meaning the deployment runs fine and it's just a configuration issue in the ingress.
I checked this one: rancher 2.x thru ingress controller returns 404, but it did not help.
Thank you in advance!
Figured out the solution.
domain.company points to Rancher (x.x.x.51), while the ingress runs on x.x.x.53, .54, .55, .56.
So the solution is to create a new DNS record, gvm.domain.company, pointing to any of the ingress nodes (x.x.x.53-.56); you can put a load balancer here or use round-robin DNS.
Then set the host in the ingress definition to gvm.domain.company and the path to "/", as in the sketch below.
Hope it helps others!
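A minimal sketch of the adjusted Ingress (gvm.domain.company is the new DNS record described above; the service name and port come from the original manifest):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gvm
  namespace: gvm
spec:
  tls:
  - hosts:
    - gvm.domain.company
  rules:
  - host: gvm.domain.company    # new DNS record pointing at the ingress nodes
    http:
      paths:
      - path: /                 # root path instead of /gvm
        pathType: Prefix
        backend:
          service:
            name: gvm-gsad
            port:
              number: 80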

503 Service Temporarily Unavailable - nginx, minikube, k8s

Hello, I am new to DevOps.
Problem: unable to access ticketing.dev from the browser (configured using nginx).
I am using nginx and minikube (running everything locally).
These are my deployment and service files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: arshad/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  type: NodePort
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
This is my ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-serv
            port:
              number: 3000
and this is the output of kubectl get ingress:
NAME              CLASS    HOSTS           ADDRESS          PORTS   AGE
ingress-service   <none>   ticketing.dev   192.168.99.101   80      45m
I also added an entry to /etc/hosts like 192.168.99.101 ticketing.dev, but I am still getting 503 Service Temporarily Unavailable.
Can anyone please help?
Hi, you have a typo; that's why. Your service name is auth-srv, but in the ingress you reference the service name auth-serv. Change it in the ingress from auth-serv to auth-srv.
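In other words, the backend reference in the ingress must match the Service's metadata.name exactly:
backend:
  service:
    name: auth-srv   # was auth-serv; must match the Service's metadata.name
    port:
      number: 3000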

Managed Certificate in Ingress, Domain Status is FailedNotVisible

I'm simply following the tutorial here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate
Everything works fine until I deploy my certificate and wait 20 minutes for it to show up as:
Status:
  Certificate Name:    daojnfiwlefielwrfn
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
That domain clearly works, so what am I missing?
EDIT:
Here's the Cert:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: moviedecisionengine
spec:
  domains:
  - moviedecisionengine.com
The Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/backends: '{"k8s-be-31721--1cd1f38313af9089":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-target-proxy: k8s-tps-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/ssl-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/target-proxy: k8s-tp-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/url-map: k8s-um-default-showcase-mde-ingress--1cd1f38313af9089
    kubernetes.io/ingress.global-static-ip-name: 34.107.208.110
    networking.gke.io/managed-certificates: moviedecisionengine
  creationTimestamp: "2020-01-16T19:44:13Z"
  generation: 4
  name: showcase-mde-ingress
  namespace: default
  resourceVersion: "1039270"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/showcase-mde-ingress
  uid: 92a2f91f-3898-11ea-b820-42010a800045
spec:
  backend:
    serviceName: showcase-mde
    servicePort: 80
  rules:
  - host: moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
  - host: www.moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 34.107.208.110
And lastly, the load balancer:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-01-13T22:41:27Z"
  labels:
    app: showcase-mde
  name: showcase-mde
  namespace: default
  resourceVersion: "2298"
  selfLink: /api/v1/namespaces/default/services/showcase-mde
  uid: d5a77d7b-3655-11ea-af7f-42010a800157
spec:
  clusterIP: 10.31.251.46
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31721
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: showcase-mde
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.232.156.172
For the full output of kubectl describe managedcertificate moviedecisionengine:
Name:         moviedecisionengine
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"moviedecisionengine","namespace...
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-01-17T16:47:19Z
  Generation:          3
  Resource Version:    1042869
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/moviedecisionengine
  UID:                 06c97b69-3949-11ea-b820-42010a800045
Spec:
  Domains:
    moviedecisionengine.com
Status:
  Certificate Name:    mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
Events:  <none>
I was successful in using ManagedCertificate with a GKE Ingress resource.
Let me elaborate on that:
Steps to reproduce:
Create IP address with gcloud
Update the DNS entry
Create a deployment
Create a service
Create a certificate
Create a Ingress resource
Create IP address with gcloud
Invoke the command below to create a static IP address:
$ gcloud compute addresses create example-address --global
Check the newly created IP address with the command below:
$ gcloud compute addresses describe example-address --global
Update the DNS entry
Go to GCP -> Network Services -> Cloud DNS.
Edit your zone, adding an A record with the address created above.
Wait for it to propagate.
Check with $ nslookup DOMAIN.NAME that the entry points to the appropriate address.
Create a deployment
Below is an example deployment which will respond to traffic:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
Apply it with the command $ kubectl apply -f FILE_NAME.yaml
You can change this deployment to suit your application, but be aware of the ports your application responds on.
Create a service
Use a NodePort Service, the same as in the provided link:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
Apply it with the command $ kubectl apply -f FILE_NAME.yaml
Create a certificate
As shown in the guide, you can use the example below to create a ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
  - DOMAIN.NAME
Apply it with the command $ kubectl apply -f FILE_NAME.yaml
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
-- Google Cloud documentation
Creation of this certificate is affected by the DNS entry you configured earlier; you can cross-check it as shown below.
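A quick way to cross-check, using the names from the steps above (example-address and DOMAIN.NAME are the same placeholders used throughout):
$ gcloud compute addresses describe example-address --global --format='value(address)'
$ nslookup DOMAIN.NAME
The two addresses must match before the certificate can leave the Provisioning state.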
Create a Ingress resource
Below is an example Ingress resource which uses the ManagedCertificate:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: example-address
    networking.gke.io/managed-certificates: example-certificate
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
Apply it with the command $ kubectl apply -f FILE_NAME.yaml
It took about 20-25 minutes for it to fully work.

GKE Managed Certificate not serving over HTTPS

I'm trying to spin up a Kubernetes cluster that I can access securely, and I can't seem to get that last part working. I am following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
Here are the .yaml files I'm using for my Ingress, NodePort Service, and ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: client-v1-cert
spec:
  domains:
  - api.mydomain.com
---
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport-service
spec:
  selector:
    app: myApp
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  backend:
    serviceName: client-nodeport-service
    servicePort: 80
There are no errors that I can see in the GCP console. I can also access my API at http://api.mydomain.com/, but it won't work when I try HTTPS. I've been banging my head on this for a few days and am just wondering if there's some little thing I'm missing.
--- UPDATE ---
Output of kubectl describe managedcertificate
Name:         client-v1-cert
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-07-01T17:42:43Z
  Generation:          3
  Resource Version:    1136504
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/client-v1-cert
  UID:                 b9b7bec1-9c27-33c9-a309-42284a800179
Spec:
  Domains:
    api.mydomain.com
Status:
  Certificate Name:    mcrt-286cdab3-b995-40cc-9b3a-28439285e694
  Certificate Status:  Active
  Domain Status:
    Domain:  api.mydomain.com
    Status:  Active
  Expire Time:  2019-09-29T09:55:12.000-07:00
Events:  <none>
I figured out a solution to this problem. I went into the GCP console, located the load balancer associated with the Ingress, and noticed that there was only one frontend protocol: HTTP serving on port 80. So I manually added another frontend for HTTPS, selected the managed certificate from the list, waited about 5 minutes, and everything worked.
I have no idea why my ingress.yaml didn't do that automatically, though. So although the problem is fixed, if anyone out there knows why, I would love to hear it.
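For anyone who wants to verify this from the CLI instead of the console, the load balancer's frontends can be listed with gcloud (a sketch, assuming the SDK is pointed at the same project):
$ gcloud compute forwarding-rules list
$ gcloud compute target-https-proxies list
An HTTPS forwarding rule and target proxy should appear alongside the HTTP ones once the certificate is attached.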

Istio: Ingress for ACME-challenge not working (503)

We are running Istio 1.1.3 on 1.12.5-gke.10 cluster nodes.
We use cert-manager to manage our Let's Encrypt certificates.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certs.ourdomain.nl
  namespace: istio-system
spec:
  secretName: certs.ourdomain.nl
  renewBefore: 360h # 15d
  commonName: operations.ourdomain.nl
  dnsNames:
  - operations.ourdomain.nl
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  acme:
    config:
    - http01:
        ingressClass: istio
      domains:
      - operations.ourdomain.nl
Next we see the ACME solver backend deployed, along with its NodePort service and ingress. The auto-generated ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  generateName: cm-acme-http-solver-
  generation: 1
  labels:
    certmanager.k8s.io/acme-http-domain: "1734084804"
    certmanager.k8s.io/acme-http-token: "1476005735"
  name: cm-acme-http-solver-69vzw
  namespace: istio-system
  ownerReferences:
  - apiVersion: certmanager.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Certificate
    name: certs.ourdomain.nl
    uid: 751011d2-4fc8-11e9-b20e-42010aa40101
spec:
  rules:
  - host: operations.ourdomain.nl
    http:
      paths:
      - backend:
          serviceName: cm-acme-http-solver-fzk8q
          servicePort: 8089
        path: /.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck
status:
  loadBalancer: {}
However, when we try to access the URL operations.ourdomain.nl/.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck, we get a 404.
We do have a load balancer (Istio Gateway) in front:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  labels:
    app: istio-ingress
    chart: gateways-1.1.0
    heritage: Tiller
    istio: ingress
    release: istio
  name: istio-ingress
  namespace: istio-system
spec:
  selector:
    app: istio-ingress
  servers:
  - hosts:
    - operations.ourdomain.nl
    #port:
    #  name: http
    #  number: 80
    #  protocol: HTTP
    #tls:
    #  httpsRedirect: true
  - hosts:
    - operations.ourdomain.nl
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: certs.ourdomain.nl
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
This interesting article gives good insight into how the ACME challenge is supposed to work. For testing purposes we removed port 80 and the redirect to HTTPS from our custom gateway, and added the auto-generated k8s gateway, which listens only on port 80.
Istio is supposed to create a VirtualService for the ACME challenge. This seems to be happening, because now when we request the ACME challenge URL we get a 503: upstream connect error or disconnect/reset before headers. I believe this means the request reaches the gateway and is matched by a VirtualService, but there is no service / healthy pod to route the traffic to.
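One way to verify that hypothesis is to check whether the solver service actually has endpoints (the service name comes from the auto-generated ingress above; assuming the solver pods carry the same acme-http-token label as that ingress):
$ kubectl get endpoints cm-acme-http-solver-fzk8q -n istio-system
$ kubectl get pods -n istio-system -l certmanager.k8s.io/acme-http-token=1476005735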
We do see some possibly interesting logging in the istio-pilot:
"ProxyStatus": {"endpoint_no_pod":
  {"cm-acme-http-solver-l5j2g.istio-system.svc.cluster.local":
    {"message": "10.16.57.248"}
I have double-checked, and the service mentioned above does have a pod behind it, so I am not sure whether this line is relevant to this issue.
The acme-challenge pods do not have an Istio sidecar. Could this be the issue? If so, why does it apparently work for others?
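A quick check for the sidecar theory: list the containers in a solver pod (the pod name below is illustrative; take a real one from kubectl get pods -n istio-system). If only the solver container shows up and there is no istio-proxy, the pod is outside the mesh:
$ kubectl get pod cm-acme-http-solver-xxxxx -n istio-system -o jsonpath='{.spec.containers[*].name}'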