Issue running Knative with MicroK8s on Multipass - kubernetes

I'm trying to get Knative to create services on my Multipass VM, with macOS as the host OS and MicroK8s as the Kubernetes distribution. I have DNS enabled and I am using MetalLB as my ingress controller. I have also changed Multipass to use hyperkit instead of VirtualBox. I don't know what hasn't been configured or is misconfigured. The error I get when I try to create a new service is pasted below:
ubuntu@uncommon-javelin:~/sandbox/sessions/serverless_k8s/yaml$ kn service create nginx --image nginx --port 80
Error: Internal error occurred: failed calling webhook "webhook.serving.knative.dev": failed to call webhook: Post "https://webhook.knative-serving.svc:443/defaulting?timeout=10s": dial tcp 10.152.183.167:443: connect: connection refused
Run 'kn --help' for usage
When I ping that IP, it times out, so it seems like that IP address is either locked down or doesn't exist. Port 443 is configured in my ingress-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
And here is a test service I have configured to use the MetalLB address pool:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    metallb.universe.tf/address-pool: custom-addresspool
spec:
  selector:
    name: nginx
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
And here's the address-pool.yaml I have configured for my cluster. I'm pretty sure that I either have some networking misconfigured or I'm missing a configuration somewhere.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: custom-addresspool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.1-192.168.1.100

Knative uses admission webhooks (defaulting and validating) to ensure that the resources in the cluster are valid. It seems like the Knative webhook is not running on your cluster, but the webhook configurations have been created, as has the Service in front of the webhook (the IP in the error message is the ClusterIP of a Kubernetes Service on your cluster).
I'd look at the webhook pods in the knative-serving namespace for more details.
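Something like this should show what's going on (a sketch; the namespace and the webhook deployment/service names assume a default Knative Serving install):
# List the Serving control-plane pods; the webhook pod should be Running and Ready
kubectl get pods -n knative-serving
# Check whether the webhook Service actually has endpoints behind it
kubectl get endpoints webhook -n knative-serving
# If the webhook pod is failing, its logs and events usually say why
kubectl logs -n knative-serving deployment/webhook
kubectl describe pod -n knative-serving -l app=webhook
If the endpoints list is empty, the connection-refused error from the API server is expected, and the pod logs/events are the place to look next.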

Related

Kubernetes local using kind, can't reach service

I am following a very simple tutorial that spawns a pod with an HTTP endpoint and a service to expose that app using Kubernetes.
The setup is very simple:
app-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: web
spec:
  containers:
    - name: web-ctr
      image: nigelpoulton/getting-started-k8s:1.0
      ports:
        - containerPort: 8080
And the nodeport service:
apiVersion: v1
kind: Service
metadata:
  name: ps-nodeport
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 31111
      protocol: TCP
  selector:
    app: web
The service and pod seem to be healthy, but I can't reach the running app:
localhost:31111
gives a "This site can't be reached" message.
I am new to this stuff, so any help will be appreciated.
In a kind cluster, by default, a NodePort is not necessarily bound to localhost. Please check the following resources:
https://kind.sigs.k8s.io/docs/user/quick-start/#mapping-ports-to-the-host-machine
How to use NodePort with kind?
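For reference, a minimal kind cluster config with the port mapping described there might look roughly like this (a sketch; the numbers match the 31111 NodePort above, and the cluster has to be recreated with kind create cluster --config kind-config.yaml for it to take effect):
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 31111   # the NodePort inside the kind node
        hostPort: 31111        # the port exposed on localhost
        protocol: TCP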
The simplest way to access the service from localhost (like you are trying to do) would be to use
kubectl port-forward
e.g. the following command would work in your case; it forwards traffic from localhost to the ps-nodeport service:
kubectl port-forward service/ps-nodeport 31111:31111

Routing traffic from GCP VM to VPC Native Cloud DNS GKE Cluster

I'm trying to achieve the following scenario:
My VM should be able to connect over a ClusterIP Service to the Pods behind it. The new beta feature here makes it look like this should be possible, but maybe I'm doing something wrong or have misunderstood something...
The full documentation is here:
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns?hl=de#vpc_scope_dns
The DNS is working, I get a service IP, a route to the default network is available.
I can connect via the pod IP, but it seems the service IP is not routable from outside the cluster. I know that normally a ClusterIP is not reachable from outside the cluster, but then I don't understand why this feature exists, and it also does not match the diagram provided in the docs, since the feature seems to provide cross-cluster/VM communication via services.
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
Am I misunderstanding the feature, or am I missing a configuration?
Traffic Check:
This only works with headless Kubernetes services, so you'll need to modify your service spec to include clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
  namespace: default
spec:
  clusterIP: None
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
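Once the service is headless, a quick way to verify from the VM is to resolve the record Cloud DNS creates and connect to the pods directly on their target port (a sketch; replace cluster.local with the cluster DNS domain you configured for VPC scope):
# The headless service should resolve to the pod IPs
nslookup my-cip-service.default.svc.cluster.local
# With no ClusterIP there is no proxying, so connect on the pods' targetPort (8080), not the service port
curl http://my-cip-service.default.svc.cluster.local:8080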

Unable to open Istio ingress-gateway for gRPC

This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.
On the kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a kubernetes service and expose 8081. I define an istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally I define an istio virtualservice with a route for anything on port 8081.
On the client side: I have a Go client which can send gRPC requests to the service.
It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
When I close the port forward and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081, my client fails to connect. (That URL is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
Logs:
I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
I do see the bookinfo connections I made earlier when going over the istio bookinfo tutorial. That tutorial worked, I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress-gateway works, it's just that I don't know how to configure it for a new gRPC endpoint.
Istio's Ingress-Gateway
kubectl describe service -n istio-system istio-ingressgateway
outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them (comments on how to close ports I don't use would be welcome but aren't the reason for this SO question).
Name: istio-ingressgateway
Namespace: istio-system
Labels: [redacted]
Annotations: [redacted]
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: [redacted]
LoadBalancer Ingress: [redacted]
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31125/TCP
Endpoints: 192.168.101.136:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30717/TCP
Endpoints: 192.168.101.136:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31317/TCP
Endpoints: 192.168.101.136:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 31102/TCP
Endpoints: 192.168.101.136:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30206/TCP
Endpoints: 192.168.101.136:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So I think I did not properly open port 8081 for gRPC. What other logs or tests can I run to help identify where this is coming from?
Here is the relevant YAML:
Kubernetes Istio VirtualService, whose intent is to route anything on port 8081 to myservice:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
    - "*"
  gateways:
    - myservice
  http:
    - match:
        - port: 8081
      route:
        - destination:
            host: myservice
Kubernetes Istio Gateway, whose intent is to open port 8081 for gRPC:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
    - name: myservice-plaintext
      port:
        number: 8081
        name: grpc-svc-plaintext
        protocol: GRPC
      hosts:
        - "*"
Kubernetes Service, showing that port 8081 is exposed at the service level (which I confirmed with the port-forward test mentioned earlier):
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      name: grpc-svc-plaintext
Kubernetes Deployment, showing that port 8081 is exposed at the container level (which I confirmed with the port-forward test mentioned earlier):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
        - name: myservice
          image: [redacted]
          ports:
            - containerPort: 8081
Re-checking that DNS works on the client:
getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com
outputs 3 IPs; I'm assuming that's good.
[IP_1 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
Checking Istio Ingressgateway's routes:
istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]
returns
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)
NOTE: This output only contains routes loaded via RDS.
NAME         DOMAINS     MATCH                  VIRTUAL SERVICE
http.8081    *           /*                     myservice.mynamespace
             *           /healthz/ready*
             *           /stats/prometheus*
Port 8081 is routed to myservice.mynamespace, seems good to me.
UPDATE 1:
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case.
The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case. The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
I'm not sure exactly how you tried to add a custom port for the ingress gateway, but it's possible.
As far as I checked, it's possible to do it in 3 ways; here are the options, with links to examples provided by @A_Suh, @Ryota and @peppered.
Kubectl edit
Helm
Istio Operator
Additional resources:
How to create custom istio ingress gateway controller?
How to configure ingress gateway in istio?
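As an illustration of the Istio Operator route, a sketch of an overlay that adds the extra port to the default istio-ingressgateway Service might look like this (the port list replaces the defaults, so any default ports you still need have to be repeated; 8081 is the gRPC port from the question):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              # keep the defaults you still need...
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: http2
                port: 80
                targetPort: 8080
              - name: https
                port: 443
                targetPort: 8443
              # ...then add the custom gRPC port
              - name: grpc-plaintext
                port: 8081
                targetPort: 8081
Applied with istioctl install -f <file>, the new port should then show up in kubectl describe service -n istio-system istio-ingressgateway.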
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I see you have already created a new question here, so let's just move there.
I have added the port to the ingress gateway successfully, but I still couldn't get the client connected to the server. For me too, port-forwarding works, but when I try to connect through the ingress I get the error below. Here the Istio ingressgateway is on GKE, so it's using a global HTTPS load balancer.
Jun 14, 2021 8:28:08 PM com.manning.mss.ch12.grpc.sample01.InventoryClient updateInventory
WARNING: RPC failed: Status{code=INTERNAL, description=http2 exception, cause=io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

Allow traffic to rabbitMQ service from Istio

I've set up a K8s cluster in GKE and installed RabbitMQ (from the marketplace) and Istio (via Helm). I can access RabbitMQ from pods until I enable the Envoy proxy to be injected into these pods; after that the traffic no longer reaches RabbitMQ, and I can't figure out how to enable traffic to the rabbitmq service.
There is a service rabbitmq-rabbitmq-svc (in the rabbitmq namespace) that is of type LoadBalancer.
I've tried a simple busybox pod when I don't have Envoy running, and then I have no trouble telnetting to rabbitmq (port 5672), but as soon as I try with automatic Envoy injection, Envoy blocks the traffic.
I tried unsuccessfully to add a DestinationRule. (I've added a rule but it makes no difference)
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq.rabbitmq.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
It seems like it should be a simple solution, but I can't figure it out... :/
UPDATE
Turns out it was a simple error in the hostname; I ended up using this and it works:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-rabbitmq-svc
spec:
  host: rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local
Turns out it was a simple error in the hostname; the correct one was rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local.
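If you're unsure of the right FQDN, listing the services spells out the name and namespace that make up the host (the host is <service-name>.<namespace>.svc.cluster.local; the namespace here comes from the question):
kubectl get svc -n rabbitmq
# the entry named rabbitmq-rabbitmq-svc in the rabbitmq namespace gives
# rabbitmq-rabbitmq-svc.rabbitmq.svc.cluster.local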
The only thing I needed to do to get RabbitMQ clusters to work within Istio was to annotate the RabbitMQ pods as follows:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
spec:
  override:
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              # annotate rabbitMQ pods to only redirect traffic on ports 15672 and 5672 to the Envoy proxy sidecars
              traffic.sidecar.istio.io/includeInboundPorts: "15672, 5672"
              traffic.sidecar.istio.io/includeOutboundPorts: "15672, 5672"
For some reason the exclude-port annotations weren't working, so I just flipped it by using include-port annotations. In my case, the global Istio config is controlled by another team in the company, so perhaps there's a clash when trying to use the exclude-port annotations.
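To confirm the override actually reached the pods, you can check the annotations on a running RabbitMQ pod (a sketch; the pod name and namespace are placeholders):
kubectl describe pod <rabbitmq-pod> -n <rabbitmq-namespace> | grep traffic.sidecar.istio.io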
I may have run into the same problem as you before. My app could connect to RabbitMQ through Envoy after declaring epmd (port 4369) in the rabbitmq service:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  type: ClusterIP
  ports:
    - port: 5672
      targetPort: 5672
      name: message
    - port: 4369
      targetPort: 4369
      name: epmd
    - port: 15672
      targetPort: 15672
      name: management
  selector:
    app: rabbitmq

Kubernetes local port for deployment in Minikube

I'm trying to expose my Deployment on a port that I can access from my local computer via Minikube.
I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: http://pastebin.com/gL5ZBZg7
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app: bot
II: http://pastebin.com/sSuyhzC5
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app: bot
The deployment and the docker container image both expose port 8000, and the Pod is tagged with app:bot.
The first results in a service whose provisioning never finishes: the external IP never gets assigned.
The second results in a port of bot:8000 TCP, bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I run "kubectl expose service bot".
I am on Mac OS X.
How can I set this up properly?
A LoadBalancer service is meant for cloud providers and is not really relevant for minikube.
From the documentation:
On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.
Using a Service of type NodePort (see documentation) as mentioned in the Networking part of the minikube documentation is the intended way of exposing services on minikube.
So your configuration should look like this:
apiVersion: v1
kind: Service
metadata:
  name: bot
  labels:
    app: bot
spec:
  type: NodePort
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30356
      protocol: TCP
  selector:
    app: bot
And access your application through:
> IP=$(minikube ip)
> curl "http://$IP:30356"
Hope that helps.
Minikube now has the service command to access a service.
Use minikube service <myservice>.
That will give you a URL which you can use to talk to the service.
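For example, with the bot service from the question (a sketch):
# prints the URL instead of opening it in a browser
minikube service bot --url
# e.g. http://<minikube-ip>:<nodePort>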