getting 502 bad gateway in openshift route - kubernetes

Hi, today when I try to expose my service using a route I'm getting a 502 Bad Gateway... My OpenShift cluster version is 3.11. I used oc expose my-service to create the route. I have described the route below.
Name:         hello-world
Namespace:    uvindu-k8soperator
Labels:       app=hello-world
Annotations:  openshift.io/host.generated: true
API Version:  route.openshift.io/v1
Kind:         Route
Metadata:
  Creation Timestamp:  2020-03-31T05:45:05Z
  Resource Version:    15860504
  Self Link:           /apis/route.openshift.io/v1/namespaces/uvindu-k8soperator/routes/hello-world
  UID:                 c5e6e8cc-7312-11ea-b6ad-fa163e41f92e
Spec:
  Host:  hello-world-uvindu-k8soperator.apps.novalocal
  Port:
    Target Port:  port-9095
  To:
    Kind:    Service
    Name:    hello-world
    Weight:  100
  Wildcard Policy:  None
Status:
  Ingress:
    Conditions:
      Last Transition Time:  2020-03-31T05:45:05Z
      Status:                True
      Type:                  Admitted
    Host:             hello-world-uvindu-k8soperator.apps.novalocal
    Router Name:      router
    Wildcard Policy:  None
Events:  <none>
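A sanity check worth running first (a sketch, using the service and route names above): confirm that the service actually exposes a port named port-9095 and that it has live endpoints:
$ oc describe svc hello-world
$ oc get endpoints hello-world
A 502 from the router usually means the route's targetPort doesn't match any port on the service, or the pods behind the service aren't answering on that port.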


Managed Certificate in Ingress, Domain Status is FailedNotVisible

I'm simply following the tutorial here: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_managed_certificate
Everything works fine until I deploy my certificate and wait 20 minutes for it to show up as:
Status:
  Certificate Name:    daojnfiwlefielwrfn
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
That domain clearly works, so what am I missing?
EDIT:
Here's the Cert:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: moviedecisionengine
spec:
  domains:
  - moviedecisionengine.com
The Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/backends: '{"k8s-be-31721--1cd1f38313af9089":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/https-target-proxy: k8s-tps-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/ssl-cert: mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
    ingress.kubernetes.io/target-proxy: k8s-tp-default-showcase-mde-ingress--1cd1f38313af9089
    ingress.kubernetes.io/url-map: k8s-um-default-showcase-mde-ingress--1cd1f38313af9089
    kubernetes.io/ingress.global-static-ip-name: 34.107.208.110
    networking.gke.io/managed-certificates: moviedecisionengine
  creationTimestamp: "2020-01-16T19:44:13Z"
  generation: 4
  name: showcase-mde-ingress
  namespace: default
  resourceVersion: "1039270"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/showcase-mde-ingress
  uid: 92a2f91f-3898-11ea-b820-42010a800045
spec:
  backend:
    serviceName: showcase-mde
    servicePort: 80
  rules:
  - host: moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
  - host: www.moviedecisionengine.com
    http:
      paths:
      - backend:
          serviceName: showcase-mde
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 34.107.208.110
And lastly, the load balancer:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-01-13T22:41:27Z"
  labels:
    app: showcase-mde
  name: showcase-mde
  namespace: default
  resourceVersion: "2298"
  selfLink: /api/v1/namespaces/default/services/showcase-mde
  uid: d5a77d7b-3655-11ea-af7f-42010a800157
spec:
  clusterIP: 10.31.251.46
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31721
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: showcase-mde
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.232.156.172
Here is the full output of kubectl describe managedcertificate moviedecisionengine:
Name:         moviedecisionengine
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"moviedecisionengine","namespace...
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-01-17T16:47:19Z
  Generation:          3
  Resource Version:    1042869
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/moviedecisionengine
  UID:                 06c97b69-3949-11ea-b820-42010a800045
Spec:
  Domains:
    moviedecisionengine.com
Status:
  Certificate Name:    mcrt-14cb8169-25ba-4712-bca5-cb612562a00b
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  moviedecisionengine.com
    Status:  FailedNotVisible
Events:  <none>
I was able to get a ManagedCertificate working with a GKE Ingress resource.
Let me elaborate on that:
Steps to reproduce:
Create an IP address with gcloud
Update the DNS entry
Create a deployment
Create a service
Create a certificate
Create an Ingress resource
Create an IP address with gcloud
Invoke the command below to create a static IP address:
$ gcloud compute addresses create example-address --global
Check the newly created IP address with the command below:
$ gcloud compute addresses describe example-address --global
Update the DNS entry
Go to GCP -> Network Services -> Cloud DNS.
Edit your zone, adding an A record with the address that was created above.
Wait for it to apply.
Check with $ nslookup DOMAIN.NAME that the entry points to the appropriate address.
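If you prefer to script the DNS change, the record can also be added with gcloud's record-sets transaction flow (a sketch; ZONE_NAME and STATIC_IP are placeholders for your zone and the address created above):
$ gcloud dns record-sets transaction start --zone=ZONE_NAME
$ gcloud dns record-sets transaction add --zone=ZONE_NAME --name=DOMAIN.NAME. --type=A --ttl=300 "STATIC_IP"
$ gcloud dns record-sets transaction execute --zone=ZONE_NAME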
Create a deployment
Below is an example deployment that will respond to traffic:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
Apply it with $ kubectl apply -f FILE_NAME.yaml
You can change this deployment to suit your application, but be aware of the ports your application will respond to.
Create a service
Use a NodePort Service, as in the provided link:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
Apply it with $ kubectl apply -f FILE_NAME.yaml
Create a certificate
As shown in the guide, you can use the example below to create a ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
spec:
  domains:
  - DOMAIN.NAME
Apply it with $ kubectl apply -f FILE_NAME.yaml
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
-- Google Cloud documentation
Provisioning of this certificate depends on the DNS entry that you created earlier.
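To watch the certificate converge, you can poll its status (the name matches the example above):
$ kubectl describe managedcertificate example-certificate
Domain Status should move from Provisioning to Active once the domain resolves to the load balancer's address.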
Create an Ingress resource
Below is an example Ingress resource that will use the ManagedCertificate:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: example-address
    networking.gke.io/managed-certificates: example-certificate
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
Apply it with $ kubectl apply -f FILE_NAME.yaml
It took about 20-25 minutes for it to fully work.
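Once the certificate reports Active, a plain curl call makes a quick end-to-end check (DOMAIN.NAME is the same placeholder as above):
$ curl -v https://DOMAIN.NAME/
The verbose output includes the TLS handshake, so you can confirm the Google-managed certificate is the one being served.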

Openshift re-encrypt TLS termination route does not work. Application is not available

Can anyone please help me with OpenShift Routes?
I have set up a Route with Reencrypt TLS termination. Calls made to the service endpoint (https://openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com) do not seem to reach the pods; the route returns a 503 Application not available error. The Liberty application is running fine on port 8543, and the application logs look clean.
I am unable to identify the root cause; requests made to the external HTTPS URL never make it to the application pod. Any suggestions on how to get the endpoint URL working?
Thanks for your help in advance!
Openshift version 4.2
Liberty version 19
Route.yaml
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: openshift-pmi-dev-reencrypt
  namespace: default
  selfLink: >-
    /apis/route.openshift.io/v1/namespaces/default/routes/openshift-pmi-dev-reencrypt
  uid: 5de29e0d-16b6-11ea-a1ab-0a580afe00ab
  resourceVersion: '7059134'
  creationTimestamp: '2019-12-04T16:51:50Z'
  labels:
    app: apm-pm-api
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com
  subdomain: ''
  path: /ibm/pmi/service
  to:
    kind: Service
    name: apm-pm-api-service
    weight: 100
  port:
    targetPort: https
  tls:
    termination: reencrypt
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None
status:
  ingress:
    - host: openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com
      routerName: default
      conditions:
        - type: Admitted
          status: 'True'
          lastTransitionTime: '2019-12-04T16:51:50Z'
      wildcardPolicy: None
      routerCanonicalHostname: apps.vapidly.os.fyre.ibm.com
Service.yaml
kind: Service
apiVersion: v1
metadata:
  name: apm-pm-api-service
  namespace: default
  selfLink: /api/v1/namespaces/default/services/apm-pm-api-service
  uid: 989040ed-166c-11ea-b792-00000a1003d7
  resourceVersion: '7062857'
  creationTimestamp: '2019-12-04T08:03:46Z'
  labels:
    app: apm-pm-api
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8543
  selector:
    app: apm-pm-api
  clusterIP: 172.30.122.233
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
Looking at the snapshot, the browser is stating "Not Secure" for the connection. Is this an attempt to access the application over HTTP, not HTTPS?
Having spec.tls.insecureEdgeTerminationPolicy: None means that traffic on insecure schemes (HTTP) is disabled - see the "Re-encryption Termination" section in this doc.
I'd also suggest using that documentation to determine whether you need to configure spec.tls.destinationCACertificate.
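For reference, a minimal sketch of that stanza (the certificate body is a placeholder, not taken from the route above):
tls:
  termination: reencrypt
  insecureEdgeTerminationPolicy: None
  destinationCACertificate: |-
    -----BEGIN CERTIFICATE-----
    [...]
    -----END CERTIFICATE-----
destinationCACertificate is what lets the router validate the certificate the Liberty pod serves on 8543; if that validation fails, the router drops the backend connection and returns 503.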

Ingress doesn't redirect to my service after AppID on IKS

I have an IKS cluster with AppID tied to it. I have a problem with redirecting a Node.js app with the ingress. All other apps work with both AppID and ingress, but this one gives a 500 Internal Server error when redirecting back from AppID. The service works fine when used as a NodePort and accessed on the server address and that nodePort.
When describing the ingress, I only get successful results back:
Events:
  Type    Reason   Age  From                                                             Message
  ----    ------   ---  ----                                                             -------
  Normal  Success  58m  public-cr9c603a564cff4a27adb020dd40ceb65e-alb1-59fb8fc894-s59nd  Successfully applied ingress resource.
My ingress looks like:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      ingress.bluemix.net/appid-auth: bindSecret=binding-raven3-app-id namespace=default
        requestType=web serviceName=r3-ui
      ingress.bluemix.net/rewrite-path: serviceName=r3-ui rewrite=/;
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.bluemix.net/appid-auth":"bindSecret=binding-raven3-app-id namespace=default requestType=api serviceName=r3-ui [idToken=false]"},"name":"myingress","namespace":"default"},"spec":{"rules":[{"host":"*host*","http":{"paths":[{"backend":{"serviceName":"r3-ui","servicePort":3000},"path":"/"}]}}],"tls":[{"hosts":["mydomain"],"secretName":"mytlssecret"}]}}
    creationTimestamp: "2019-06-20T10:31:57Z"
    generation: 21
    name: myingress
    namespace: default
    resourceVersion: "24140"
    selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myingress
    uid: a15f74aa-9346-11e9-a9bf-f63d33811ba6
  spec:
    rules:
    - host: *host*
      http:
        paths:
        - backend:
            serviceName: r3-ui
            servicePort: 3000
          path: /
    tls:
    - hosts:
      - *host*
      secretName: raven3
  status:
    loadBalancer:
      ingress:
      - ip: 169.51.71.141
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
and my service looks like:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-06-20T09:25:30Z"
  labels:
    app: r3-ui
    chart: r3-ui-0.1.0
    heritage: Tiller
    release: r3-ui
  name: r3-ui
  namespace: default
  resourceVersion: "23940"
  selfLink: /api/v1/namespaces/default/services/r3-ui
  uid: 58ff6604-933d-11e9-a9bf-f63d33811ba6
spec:
  clusterIP: 172.21.180.240
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app: r3-ui
    release: r3-ui
  sessionAffinity: None
  type: ClusterIP
status:
What's weird is that I'm getting different results on port 80 and on port 443: on port 80 I get an HTTP 500 error, and on port 443 I get "Invalid Host header".
Are you using HTTPS to access your application? For security reasons, App ID authentication only supports back ends with TLS/SSL enabled.
If you are using SSL and still having trouble, can you kindly share your Ingress and application logs so we can figure out what went wrong? A quick curl sanity check is sketched below.
Thanks.
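A sketch of that check (HOST is a placeholder for the redacted host; the IP is the ALB address from the Ingress status above):
$ curl -v https://HOST/
$ curl -kv -H "Host: HOST" https://169.51.71.141/
The second call talks to the ALB IP directly, which helps separate DNS problems from routing problems.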

GKE Managed Certificate not serving over HTTPS

I'm trying to spin up a Kubernetes cluster that I can access securely and can't seem to get that last part. I am following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
Here are the .yaml files I'm using for my Ingress, NodePort Service, and ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: client-v1-cert
spec:
  domains:
  - api.mydomain.com
---
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport-service
spec:
  selector:
    app: myApp
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  backend:
    serviceName: client-nodeport-service
    servicePort: 80
There are no errors that I can see in the GCP console. I can also access my API at http://api.mydomain.com/, but it won't work when I try HTTPS. I've been banging my head on this for a few days and am wondering if there's some little thing I'm missing.
--- UPDATE ---
Output of kubectl describe managedcertificate
Name:         client-v1-cert
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-07-01T17:42:43Z
  Generation:          3
  Resource Version:    1136504
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/client-v1-cert
  UID:                 b9b7bec1-9c27-33c9-a309-42284a800179
Spec:
  Domains:
    api.mydomain.com
Status:
  Certificate Name:    mcrt-286cdab3-b995-40cc-9b3a-28439285e694
  Certificate Status:  Active
  Domain Status:
    Domain:  api.mydomain.com
    Status:  Active
  Expire Time:  2019-09-29T09:55:12.000-07:00
Events:  <none>
I figured out a solution to this problem. I ended up going into my GCP console, locating the load balancer associated with the Ingress, and then I noticed that there was only one frontend protocol, and it was HTTP serving over port 80. So I manually added another frontend protocol for HTTPS, selected the managed certificate from the list, and waited about 5 minutes and everything worked.
I have no idea why my ingress.yaml didn't do that automatically, though. The problem is fixed, but if anyone out there knows why, I would love to know.
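For the record, the frontends can also be checked from the CLI; these are standard gcloud listings, and the Kubernetes-created resources carry the k8s-fws-*/k8s-tps-* prefixes you can see in Ingress annotations:
$ gcloud compute forwarding-rules list
$ gcloud compute target-https-proxies list
If no k8s-fws-* forwarding rule or target HTTPS proxy shows up, the Ingress controller never created the HTTPS frontend, which matches what I saw in the console.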

Why is my ISTIO policy configuration not applied?

I am using Istio 1.0.6 to implement authentication/authorization. I am attempting to use JSON Web Tokens (JWT). I followed most of the examples from the documentation, but I am not getting the expected outcome. Here are my settings:
Service
kubectl describe services hello
Name:              hello
Namespace:         agud
Selector:          app=hello
Type:              ClusterIP
IP:                10.247.173.177
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         172.16.0.193:8080
Session Affinity:  None
Gateway
kubectl describe gateway
Name:         hello-gateway
Namespace:    agud
Kind:         Gateway
Metadata:
  Cluster Name:
  Creation Timestamp:  2019-03-15T13:40:43Z
  Resource Version:    1374497
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/agud/gateways/hello-gateway
  UID:                 ee483065-4727-11e9-a712-fa163ee249a9
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Virtual Service
kubectl describe virtualservices
Name:         hello
Namespace:    agud
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Metadata:
  Cluster Name:
  Creation Timestamp:  2019-03-18T07:38:52Z
  Generation:          0
  Resource Version:    2329507
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/agud/virtualservices/hello
  UID:                 e099b560-4950-11e9-82a1-fa163ee249a9
Spec:
  Gateways:
    hello-gateway
  Hosts:
    *
  Http:
    Match:
      Uri:
        Exact:  /hello
      Uri:
        Exact:  /secured
    Route:
      Destination:
        Host:  hello.agud.svc.cluster.local
        Port:
          Number:  8080
Policy
kubectl describe policies
Name:         jwt-hello
Namespace:    agud
API Version:  authentication.istio.io/v1alpha1
Kind:         Policy
Metadata:
  Cluster Name:
  Creation Timestamp:  2019-03-18T07:45:33Z
  Generation:          0
  Resource Version:    2331381
  Self Link:           /apis/authentication.istio.io/v1alpha1/namespaces/agud/policies/jwt-hello
  UID:                 cf9ed2aa-4951-11e9-9f64-fa163e804eca
Spec:
  Origins:
    Jwt:
      Audiences:
        hello
      Issuer:    testing@secure.istio.io
      Jwks Uri:  https://raw.githubusercontent.com/istio/istio/release-1.0/security/tools/jwt/samples/jwks.json
  Principal Binding:  USE_ORIGIN
  Targets:
    Name:  hello.agud.svc.cluster.local
RESULT
I am expecting to get a 401 error but I am getting a 200. What is wrong with my configuration and how do I fix this?
curl $INGRESS_HOST/hello -s -o /dev/null -w "%{http_code}\n"
200
You have:
Port: <unset> 8080/TCP
For Istio routing and security, you must set the port name to http or http-<something>.
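For example, a minimal sketch of the corrected Service (values taken from the describe output above, with only the port name added):
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: agud
spec:
  selector:
    app: hello
  ports:
  - name: http
    port: 8080
    targetPort: 8080
With the port named http, the sidecar treats the traffic as HTTP and the JWT policy should then be enforced.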
I tried with Istio 1.1. I got a 503 rather than a 401.