I am working on setting up istio in my kubernetes cluster.
I downloaded istio-1.4.2, installed the demo profile, and did manual sidecar injection.
But when I check sidecar pod logs, I am getting the below error.
2019-12-26T08:54:17.694727Z error k8s.io/client-go@v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:98: Failed to list *v1beta1.MutatingWebhookConfiguration: Get https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
It seems to be a networking issue, but could you please let me know what it is trying to do exactly?
Is there a way to get more logs than just 'connection refused'?
How do we verify networking issues between istio pods? It seems I cannot run 'wget', 'curl', 'tcpdump', 'netstat', etc. within the istio sidecar pod to debug further.
All the pods in kube-system namespace are working fine.
Check which port your API server is serving HTTPS traffic on (controlled by the --secure-port flag, default 6443). It may be 6443 instead of 443.
Check the value of server in your kubeconfig, and whether you are able to connect to your Kubernetes cluster via kubectl using that kubeconfig.
Another thing to check is whether you have a network policy attached to the namespace that blocks egress traffic.
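For example (a rough sketch; the namespace is a placeholder and the names assume a default cluster), each of those can be checked with:
# Port the in-cluster kubernetes service forwards to on the API server
kubectl get svc kubernetes -n default -o jsonpath='{.spec.ports[0].targetPort}'
# Server URL your kubeconfig points at, and whether it answers at all
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
kubectl version
# Any NetworkPolicy in the application namespace that could block egress
kubectl get networkpolicy -n <your-namespace>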
And you could use an ephemeral container to debug issues with the sidecar:
https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
https://github.com/aylei/kubectl-debug
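For example, with kubectl 1.20+ and ephemeral containers enabled on the cluster (newer than the setup in this question, so treat it as a sketch; the debug image and names are placeholders), you can get curl/netstat into the pod's network namespace:
# Attach a throwaway container with networking tools next to the sidecar
kubectl debug -it <pod-name> -n <namespace> --image=nicolaka/netshoot --target=istio-proxy
# Then, from inside the debug container, test reachability of the API server
curl -vk https://10.96.0.1:443/version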
Related
Error while getting config map appconfig
Get "https://xxx.xx.x.x:443/api/v1/namespaces/app/configmaps/appconfig": dial tcp xxx.xx.x.x:443: connect: connection refused"
But when the istio sidecar is not injected, there is no error.
Try this:
oc patch deploy <deployment-name> -p '{"spec":{"template":{"metadata":{"annotations":{"traffic.sidecar.istio.io/excludeOutboundIPRanges": "'$(oc get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}')/32'"}}}}}'
Not sure if it is a bug or not, but apparently the istio sidecar proxy does not allow application containers to communicate with the Kubernetes API server when the data plane is in strict mTLS mode.
The above patch adds an IP range in which the Kubernetes API server resides and lets connections to those addresses bypass the sidecar proxy, thus avoiding the network rules it enforces.
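On plain Kubernetes the same idea can be expressed with kubectl; this is a sketch, with <deployment-name> as a placeholder and the cluster IP lookup mirroring the oc version:
kubectl patch deploy <deployment-name> -p '{"spec":{"template":{"metadata":{"annotations":{"traffic.sidecar.istio.io/excludeOutboundIPRanges": "'$(kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}')/32'"}}}}}'
Since the patch touches the pod template, it triggers a rollout, and the annotation only takes effect on the newly created pods.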
I have made my deployment work with the istio ingressgateway before, and I am not aware of any changes made on the istio or k8s side.
When I try to deploy, I see an error on the replicaset side, which is why it cannot create a new pod.
Error creating: Internal error occurred: failed calling webhook
"namespace.sidecar-injector.istio.io": Post
"https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp
10.104.136.116:443: connect: no route to host
When I try to go inside the api-server and ping 10.104.136.116 (the istiod service IP), it just hangs.
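For reference, this is roughly how I compared the service with the pods behind it (names as created by the default istio install; the app=istiod label is an assumption):
# Service ClusterIP the webhook is calling
kubectl -n istio-system get svc istiod
# Pod IPs that should be backing that service
kubectl -n istio-system get endpoints istiod
kubectl -n istio-system get pods -l app=istiod -o wide
# Exercise the webhook path through the API server itself
kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4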
What I have tried so far:
Deleted all coredns pods
Deleted all istiod pods
Deleted all weave pods
Reinstalled istio via istioctl x uninstall --purge
Turned off the firewall on all of the VMs:
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
Restarted all of the nodes
Manual istio sidecar injection
Setup
k8s version: 1.21.2
istio: 1.10.3
HA setup
CNI: weave
CRI: containerd
In my case this was related to the firewall. More info can be found here.
The gist of it is that, on GKE at least, you need to open port 15017 in addition to 10250 and 443. This is to allow communication from your master node(s) to your VPC.
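On GKE that can be checked and fixed with something along these lines (a sketch; the firewall rule name pattern is an assumption, so verify it with the list command first):
# Find the auto-created rule that lets the control plane reach the nodes
gcloud compute firewall-rules list --filter="name~gke-.*-master"
# Add 15017 (sidecar injection webhook) alongside 10250 and 443
gcloud compute firewall-rules update <rule-name> --allow tcp:10250,tcp:443,tcp:15017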
I don't have a definite answer as to why this is happening, but kube-apiserver cannot access istiod via the service IP, whereas it can connect when I use the istiod pod IP.
Since I don't have control over the VMs and the lower networking layers, I am not sure whether something was changed there (because it was working before).
I made this work by changing my CNI from weave to flannel.
In my case it was due to the firewall. Following this Istio debug guide, I identified that the kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4 command was timing out while all other cluster-internal calls were OK.
The best way to diagnose this is to temporarily open the AWS Security Groups involved to 0.0.0.0/0 for port 15017 and then try again.
If the error doesn't show up again, you know this is the part that needs fixing.
I am using EKS with Amazon VPC CNI v1.12.2-eksbuild.1
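For the record, the temporary opening can be done with something like this (the security group ID is a placeholder, and the rule should be removed again once the diagnosis is done):
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 15017 --cidr 0.0.0.0/0
# and afterwards
aws ec2 revoke-security-group-ingress --group-id <sg-id> --protocol tcp --port 15017 --cidr 0.0.0.0/0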
I have a Kubernetes cluster which doesn't need to expose ports to the public. I am installing monitoring and logging (Prometheus & Loki or Elastic) for in-house use and would like to use their GUIs. I could provision an HTTPS ingress and limit IP access, but port forwarding seems to work.
How does port forwarding work, under the hood?
Is port forwarding as secure as my kubectl connection?
Is the connection as fast as an ingress/load-balancer-based HTTPS connection?
In the Kubernetes documentation you can find that the port-forward command allows you to access and interact with internal Kubernetes cluster processes from your localhost. It's also one of the best debugging tools.
Forward one or more local ports to a pod. This command requires the node to have 'socat' installed.
Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.
If there are multiple pods matching the criteria, a pod will be selected automatically. The forwarding session ends when the selected pod terminates, and rerun of the command is needed to resume forwarding.
1. How does port forwarding work, under the hood?
This information can be found in How Does Kubernetes Port Forwarding Work? article.
The whole process is simplified by the fact that kubectl already has a built-in port forwarding functionality.
A user interacts with Kubernetes using the kubectl command-line on their local machine.
The port-forward command specifies the cluster resource name and defines the port number to port-forward to.
As a result, the Kubernetes API server establishes a single HTTP connection between your localhost and the resource running on your cluster.
The user is now able to engage that specific pod directly, either to diagnose an issue or debug if necessary.
Port forwarding is a work-intensive method. However, in some cases, it is the only way to access internal cluster resources.
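As a concrete sketch for the setup in the question, assuming Prometheus is installed in a monitoring namespace behind a service called prometheus-server (names will differ per install), the GUI can be reached locally with:
kubectl -n monitoring port-forward svc/prometheus-server 9090:80
# then open http://localhost:9090 in your browser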
2. Is port forwarding as secure as my kubectl connection?
For this question, you can find the answer in Is kubectl port-forward encrypted?. As pointed out by @iomv:
As far as I know when you port-forward the port of choice to your machine kubectl connects to one of the masters of your cluster so yes, normally communication is encrypted. How your master communicate to the pod though is dependent on how you set up internal comms.
or @neokyle:
kubectl port-forward uses socat to make an encrypted TLS tunnel with port forwarding capabilities. The tunnel goes from you to the kube api-server to the pod so it may actually be 2 tunnels with the kube api-server acting as a pseudo router.
Kubectl port-forward is encrypted.
3. Is the connection as fast as an ingress/load-balancer-based HTTPS connection?
As the connection stays inside the cluster, it should be faster than a connection from outside the cluster.
In addition, there was a similar Stack Overflow thread about kubectl port-forward.
I have a question regarding how to use go-micro with Kubernetes. AFAIK, Kubernetes already has kube-dns for service discovery and kube-proxy with Service abstraction to expose the pods.
Is it possible to use go-micro but skip the go-micro Kubernetes plugin registering itself with the Kubernetes API server?
I am not sure why it is necessary in the first place. The kubelet already does this for us automatically (via the livenessProbe and readinessProbe checks it can determine whether a pod is healthy or not), and only healthy pods are included in a service's endpoints.
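That behaviour is easy to see directly: the endpoints object behind a service only lists pods whose readiness checks pass, for example (the service name is a placeholder):
kubectl get endpoints <service-name>
kubectl describe svc <service-name>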
I am asking because we're also using istio-proxy. We get micro-service errors whenever a pod is starting, because istio-proxy is not yet ready to route traffic (even traffic to the kube API, since it intercepts the egress traffic of our main container, which uses the go-micro Kubernetes plugin).
2018/10/17 04:37:55 Can't create server! reason: Patch
https://10.32.64.1:443/api/v1/namespaces/data-cdp/pods/cdp-booking-context-svc-stable-864645684b-xd2tb:
dial tcp 10.32.64.1:443: connect: connection refused
This then puts the main container (the go-micro kube plugin app) into CrashLoopBackOff multiple times until istio-proxy is ready. It is not a big issue, but it makes me wonder about the motivation behind the registration.
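If only the startup ordering is the problem, newer Istio releases (1.7 and later, so likely newer than the one in use here) can hold the application container until the proxy is ready; a hedged sketch of enabling that per deployment via an annotation:
kubectl patch deploy <deployment-name> -p '{"spec":{"template":{"metadata":{"annotations":{"proxy.istio.io/config": "holdApplicationUntilProxyStarts: true"}}}}}'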
I've been running my Kubernetes masters separately from my Kubernetes nodes. So I have kube-apiserver, kube-scheduler and kube-controller-manager running on a server without kubelet, kube-proxy or flannel.
So far this has worked perfectly. However, today I attempted to set up the Web UI and access it through the API server. I got the following error when accessing http://kube-master-0:8080/ui:
Error: 'dial tcp 172.16.72.12:9090: getsockopt: connection timed out'
Trying to reach: 'http://172.16.72.12:9090/'
This suggests to me that the API server is trying to connect to the pod IP; since we don't have flannel or kube-proxy running on this host, the 172.16.72.12 IP will not be routed.
Am I expected to run kube-proxy and flannel on my API servers? Is there another way to let the API server proxy the UI?
It's not required, but it will certainly make your life easier.
The reason this isn't working is that kube-proxy isn't directing traffic to the service. Try kube-node:8080/ui (assuming you have exposed it with a NodePort configuration).
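To double-check how the UI service is actually exposed (the service name and namespace are the usual dashboard defaults, yours may differ):
kubectl -n kube-system get svc kubernetes-dashboard
kubectl -n kube-system get endpoints kubernetes-dashboard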
In theory, kube-apiserver does not expect the presence of kube-proxy.
This means kube-apiserver will run correctly, receive requests, and handle them (mostly reads from and writes to etcd).
But if you want the whole cluster working, you will need other components running, for example:
if you want pods or deployments to be scheduled, kube-scheduler should be running
if you want pods and containers to be running on nodes, kubelet has to be running
if you want replication to be guarded, controller-manager should be running
As for kube-proxy and flannel, they are critical parts of making sure networking works. Load balancing, Services, cross-host pod communication, etc. all depend on them.
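A quick sketch for checking which of those pieces are actually up on a given cluster (componentstatuses is deprecated in recent releases, but it worked in the versions of this era):
kubectl get componentstatuses
kubectl get nodes -o wide
kubectl -n kube-system get pods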