No matches for kind "Gateway" and "VirtualService" - Kubernetes

I am using Docker Desktop version 3.6.0 which has Kubernetes 1.21.3.
I am following this tutorial to get started on Istio
https://istio.io/latest/docs/setup/getting-started/
Istio is properly installed as per the instructions.
Now, whenever I try to apply the Istio configuration
by issuing the command kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml,
I get the following errors:
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "VirtualService" in version "networking.istio.io/v1alpha3"
I searched the internet and found that the Gateway and VirtualService resources are missing.
If I run kubectl get crd, I get "No resources found".
Content of bookinfo-gateway.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080

The CRDs for Istio are installed as part of the istioctl install process; I'd recommend re-running the install if you don't have them available.
>>> ~/.istioctl/bin/istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
The output of kubectl get po -n istio-system should look like:
>>> kubectl get po -n istio-system
NAME                                   READY   STATUS    RESTARTS   AGE
istio-egressgateway-7ddb45fcdf-ctnp5   1/1     Running   0          3m20s
istio-ingressgateway-f7cdcd7dc-zdqhg   1/1     Running   0          3m20s
istiod-788ff675dd-9p75l                1/1     Running   0          3m32s
Otherwise your initial install has gone wrong somewhere.

Alternatively, you can apply the CRDs to your cluster without running istioctl install by downloading https://github.com/istio/istio/blob/master/manifests/charts/base/crds/crd-all.gen.yaml and applying it with:
kubectl apply -f ./crd-all.gen.yaml
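To confirm the definitions registered, a quick check could look like this (a sketch; the grep simply filters for the Istio networking CRDs, whose exact list varies by Istio version):
# after the apply, gateways.networking.istio.io and
# virtualservices.networking.istio.io should show up here
kubectl get crd | grep networking.istio.io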

Make istio-ingress work with MetalLB on a bare-metal Kubernetes cluster

Update 14-03-2021
The MetalLB LoadBalancer IP 192.168.0.21 is accessible from the cluster (master/nodes) only.
root@C271-KUBE-NODE-0-04:~# curl -s -I -HHost:httpbin.example.com "http://192.168.0.21:80/status/200"
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 14 Mar 2021 17:32:36 GMT
content-type: text/html; charset=utf-8
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 0
x-envoy-upstream-service-time: 2
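Since curl succeeds from the nodes but not from other hosts, it may be worth checking the MetalLB speaker logs for layer-2 announcements of that address (a sketch; the daemonset name assumes the bitnami chart release "metallb" used below):
# look for announcements of 192.168.0.21 from the layer-2 speaker
kubectl logs -n metallb-system ds/metallb-speaker | grep 192.168.0.21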
Issue
Trying to get Istio working with MetalLB on VMware ESXi.
Installed MetalLB with helm install metallb bitnami/metallb -n metallb-system -f metallb-config.yaml
configInline:
  address-pools:
  - name: prod-k8s-pool
    protocol: layer2
    addresses:
    - 192.168.0.21
Used https://istio.io/latest/docs/setup/install/helm/ to install Istio:
helm install istio-base manifests/charts/base --set global.jwtPolicy=first-party-jwt -n istio-system
helm install istiod manifests/charts/istio-control/istio-discovery --set global.jwtPolicy=first-party-jwt -n istio-system
helm install istio-ingress manifests/charts/gateways/istio-ingress --set global.jwtPolicy=first-party-jwt -n istio-system
helm install istio-egress manifests/charts/gateways/istio-egress --set global.jwtPolicy=first-party-jwt -n istio-system
❯ k get svc
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                                                      AGE
httpbin                LoadBalancer   10.104.32.168    <none>         8000:32483/TCP                                                               16m
istio-egressgateway    ClusterIP      10.107.11.137    <none>         80/TCP,443/TCP,15443/TCP                                                     20m
istio-ingressgateway   LoadBalancer   10.109.199.203   192.168.0.21   15021:32150/TCP,80:31977/TCP,443:30960/TCP,15012:30927/TCP,15443:31439/TCP   31m
istiod                 ClusterIP      10.96.10.193     <none>         15010/TCP,15012/TCP,443/TCP,15014/TCP                                        33m
At the same time, the MetalLB controller logs say it allocated the IP:
metallb-system/metallb-controller-64c58bc7c6-bks6m[metallb-controller]: {"caller":"service.go:114","event":"ipAllocated","ip":"192.168.0.21","msg":"IP address assigned by controller","service":"istio-system/istio-ingressgateway","ts":"2021-03-14T09:20:12.906308842Z"}
I am trying to install the simple httpbin sample using https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/:
kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml)
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
EOF
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
EOF
But the IP 192.168.0.21 never responds from other machines in the same network:
curl -s -I -HHost:httpbin.example.com "http://192.168.0.21:80/status/200"
I tried an NGINX Ingress installation with
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.21
and that works fine. Can anybody guide me on how to get Istio working with bare-metal MetalLB?
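One way to narrow this down (a sketch; 31977 is the HTTP nodePort shown in the kubectl get svc output above, and <node-ip> stands in for any node's address) is to bypass MetalLB and hit the ingress gateway through its NodePort from one of the other machines:
# if this works while 192.168.0.21 does not, Istio's routing is fine and the
# problem is MetalLB's layer-2 announcement not reaching that host
curl -s -I -HHost:httpbin.example.com "http://<node-ip>:31977/status/200"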

Grafana with Kubeflow

I am trying to integrate Grafana with my Kubeflow deployment in order to monitor my model.
I have no clue where to start, as I am not able to find anything in the documentation.
Can someone help?
To run Grafana with Kubeflow, follow these steps:
Create the namespace:
kubectl create namespace knative-monitoring
Set up the monitoring components:
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.13.0/monitoring-metrics-prometheus.yaml
Launch the Grafana board via port forwarding:
kubectl port-forward --namespace knative-monitoring $(kubectl get pod --namespace knative-monitoring --selector="app=grafana" --output jsonpath='{.items[0].metadata.name}') 8080:3000
Access the Grafana dashboard at http://localhost:8080.
It depends on your configuration. I had a MiniKF instance running on an EC2 VM and needed to specify --address 0.0.0.0 for the port-forwarding method to work.
kubectl port-forward --namespace knative-monitoring \
$(kubectl get pod --namespace knative-monitoring \
--selector="app=grafana" --output jsonpath='{.items[0].metadata.name}') \
--address 0.0.0.0 8080:3000
Then you should be able to access the Grafana dashboard at http://{your-kf-ip}:8080.
You can also expose it via Istio, using this VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: kubeflow
spec:
  gateways:
  - kubeflow-gateway
  hosts:
  - '*'
  http:
  - match:
    - method:
        regex: GET|POST
      uri:
        prefix: /istio/grafana/
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000
So if you usually visit your Kubeflow dashboard via https://kubeflow.example.com, exposing Grafana through kubeflow-gateway will let you access it via https://kubeflow.example.com/istio/grafana/
If you're not using Istio's Grafana but Knative's, you can change the destination accordingly.
You might also need to change Grafana's root URL via an environment variable in its Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: istio-system
spec:
  template:
    spec:
      containers:
      - env:
        - name: GF_SERVER_ROOT_URL
          value: https://kubeflow.example.com/istio/grafana
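If editing the Deployment by hand is inconvenient, the same change can be applied in place (a sketch; it assumes the Grafana Deployment is named grafana in istio-system, as above):
# sets GF_SERVER_ROOT_URL on the grafana container and triggers a rollout
kubectl set env deployment/grafana -n istio-system \
  GF_SERVER_ROOT_URL=https://kubeflow.example.com/istio/grafana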

Istio JWT authentication passes traffic without a token

Background:
There was a similar question here, but it didn't offer a solution to my issue.
I have deployed an application to my Istio cluster and it is working as expected. I wanted to enable JWT authentication, so I adapted the instructions here to my use case.
ingressgateway:
I first applied the following policy to the istio-ingressgateway. This worked, and any traffic sent without a JWT was blocked.
kubectl apply -n istio-system -f mypolicy.yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: core-api-policy
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
    ports:
    - number: 80
  origins:
  - jwt:
      issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
      jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
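A quick way to confirm the gateway policy was being enforced (illustrative; <ingress-ip> stands in for the ingress gateway's external address and $TOKEN for a valid Cognito JWT):
# expect 401 without a token and 200 with a valid one
curl -s -o /dev/null -w '%{http_code}\n' http://<ingress-ip>/
curl -s -o /dev/null -w '%{http_code}\n' -H "Authorization: Bearer $TOKEN" http://<ingress-ip>/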
Once that worked I deleted this policy and installed a new policy for my service.
kubectl delete -n istio-system -f mypolicy.yaml
service/core-api-service:
After editing the above policy, changing the namespace and target as below, I reapplied the policy to the correct namespace.
Policy:
kubectl apply -n solarmori -f mypolicy.yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: core-api-policy
  namespace: solarmori
spec:
  targets:
  - name: core-api-service
    ports:
    - number: 80
  origins:
  - jwt:
      issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
      jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
Service:
apiVersion: v1
kind: Service
metadata:
  name: core-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: api-svc-port
    targetPort: api-app-port
  selector:
    app: core-api-app
The outcome of this action didn't appear to change anything in the processing of traffic: I was still able to reach my service even though I did not provide a JWT.
I checked the istio-proxy of my service deployment, and there was no local_jwks created in the logs as described here.
[procyclinsur@P-428 istio]$ kubectl logs -n solarmori core-api-app-5dd9666777-qhf5v -c istio-proxy | grep local_jwks
[procyclinsur@P-428 istio]$
If anyone knows where I am going wrong, I would greatly appreciate any help.
For a Service to be part of Istio's service mesh, it needs to fulfill some requirements, as shown in the official docs.
In your case, the service port name needs to be updated to the form
<protocol>[-<suffix>], with <protocol> being one of:
grpc
http
http2
https
mongo
mysql
redis
tcp
tls
udp
At that point, requests forwarded to the service will go through the service mesh; currently they are handled by plain Kubernetes networking.
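For example, renaming the port in the Service above to use an http- prefix (a sketch of the same manifest with only the port name changed) lets the sidecar detect the protocol:
apiVersion: v1
kind: Service
metadata:
  name: core-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http-api-svc-port   # <protocol>[-<suffix>] naming so Istio treats this as HTTP
    targetPort: api-app-port
  selector:
    app: core-api-app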

Find out why I am getting a 502 Bad Gateway error on Kubernetes

I am using Kubernetes. I have an Ingress that talks to my container's Service. We have exposed a web API, which works fine, but we keep getting 502 Bad Gateway errors. I am new to Kubernetes and have no clue how to go about debugging this issue. The server is a Node.js server connected to a database. Is there anything wrong with the configuration?
My Deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-pod
        image: my-image
        ports:
        - name: "http"
          containerPort: 8086
        resources:
          limits:
            memory: 2048Mi
            cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
My Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
In your case:
I think you are getting this 502 Bad Gateway error because you don't have the Ingress controller configured correctly.
Please try doing it with an installed Ingress controller, as in the example below. It will handle everything automatically.
NGINX Ingress step by step:
1) Install Helm
2) Install the NGINX controller using Helm
$ helm install stable/nginx-ingress --name nginx-ingress
It will create two services. You can get their details via
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.39.240.1     <none>        443/TCP                      29d
nginx-ingress-controller        LoadBalancer   10.39.243.140   35.X.X.15     80:32324/TCP,443:31425/TCP   19m
nginx-ingress-default-backend   ClusterIP      10.39.252.175   <none>        80/TCP                       19m
nginx-ingress-controller - in short, it handles requests coming in through the Ingress and directs them to the right service
nginx-ingress-default-backend - in short, the default backend is a service which handles all URL paths and hosts that the nginx controller doesn't understand
3) Create 2 deployments (or use yours)
$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
4) Connect to one of the pods
$ kubectl exec -ti my-pod-675799d7b-95gph bash
And add an additional line to its index page so we can tell which pod we reach later.
$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
5) Expose deployments.
$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
This will automatically create a Service, which will look like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-pod
  name: my-pod
  selfLink: /api/v1/namespaces/default/services/my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
6) Now it's time to create Ingress.yaml and deploy it. Each rule in the Ingress needs to be specified. Here I have two services; each service's specification starts with - host under the rules parameter.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-svc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /pod
        backend:
          serviceName: my-pod
          servicePort: 80
  - host: nginx.test.svc
    http:
      paths:
      - path: /abc
        backend:
          serviceName: nginx1
          servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
7) You can check the Ingress and its hosts
$ kubectl get ingress
NAME              HOSTS                       ADDRESS        PORTS   AGE
two-svc-ingress   my.pod.svc,nginx.test.svc   35.228.230.6   80      57m
8) Explanation of why I installed the Ingress controller.
Connect to the ingress controller pod
$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
Because I installed the NGINX ingress controller earlier, after deploying Ingress.yaml the nginx-ingress-controller picked up the changes and automatically added the necessary configuration.
In this file you should be able to find the whole configuration for the two services. I will not copy the configuration, only the headers:
start server my.pod.svc
start server nginx.test.svc
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
9) Test
Run kubectl get svc to get your nginx-ingress-controller external IP, then:
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
Please keep in mind that the Ingress needs to be in the same namespace as the services. If you have services in several namespaces, you need to create an Ingress for each namespace.
I would need to set up a cluster in order to test your YAML files.
Just to help you debug, follow these steps (a command sketch follows below):
1- Get the logs of the my-pod container using kubectl logs my-pod-container-name and make sure everything is working.
2- Use port-forward to expose your container and test it.
3- Make sure the Service is working properly; change its type to LoadBalancer so you can reach it from outside the cluster.
If those three things work, the problem is in your Ingress configuration.
I am not sure if I explained it in a detailed way; let me know if something is not clear.
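A sketch of those three checks against the manifests in the question (resource names taken from the YAML above; adjust if yours differ):
# 1) check the container logs
kubectl logs deploy/my-pod
# 2) port-forward the Service and hit the API directly
kubectl port-forward svc/my-pod-serv 8086:80
curl -i http://localhost:8086/
# 3) temporarily expose the Service outside the cluster
kubectl patch svc my-pod-serv -p '{"spec":{"type":"LoadBalancer"}}'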

Istio Gateway and traffic routing do not work (deployed via Jenkins X/jx)

So we have an environment-staging repo which was created by Jenkins X. In it we commit the following YAMLs to the env/templates folder. The Kubernetes cluster is in AWS EKS.
apiVersion: v1
kind: Namespace
metadata:
  name: global-gateway
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
  namespace: global-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-hosts
  namespace: jx-staging
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway.global-gateway.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: test-app
        port:
          number: 80
The above YAMLs work perfectly, and I can access the service when they are applied via kubectl apply -f .
However, instead of creating them manually, we commit and push them to the repo, which triggers a JX job that runs successfully. Afterwards we can see that the Gateway and VirtualService have been deployed correctly, i.e. if we run kubectl get gateway we can see our gateway.
However, the URL does not work and does not route to the microservice after being applied from Jenkins.
The command that Jenkins seems to run is
helm upgrade --namespace jx-staging --install --wait --force --timeout 600 --values values.yaml jx-staging .
To try to diagnose the problem, I deployed with both kubectl and Jenkins and diffed the output of kubectl describe Gateway/VirtualService <name>.
The Jenkins/Helm deployment showed Annotations: <none>, while the kubectl deployment showed
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.istio.
The resource version numbers were also different, but I assume that is expected and alright?
EDIT:
The Helm chart is as follows:
description: GitOps Environment for this Environment
icon: https://www.cloudbees.com/sites/default/files/Jenkins_8.png
maintainers:
- name: Team
name: env
version: "39"
Please advise on how to get the Istio gateway running with jx/helm.