OpenShift v3.11 with Maistra (Istio) Install - kubernetes

I am attempting to install Maistra on top of an Origin (OKD) v3.11 cluster. Following this guide:
https://medium.com/@jakub.jozwicki/ocp-part-3-installing-istio-1d9f37665d3b
I have attempted installs via v0.7 and v0.12 of the Maistra origin-ansible branch and continue to get an error on my istio-operator pod: "Failed create pod sandbox.." / "NetworkPlugin cni failed to setup pod, network: OpenShift SDN network process is not (yet?) available". Any ideas how to resolve this error? I have done extensive searching, but what I find seems to be outdated or not applicable.
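For reference, a quick way to check whether the SDN pods on the affected node are actually ready (a diagnostic sketch, assuming the default OpenShift 3.11 openshift-sdn namespace; the placeholders stand in for the real pod and namespace names):

oc get pods -n openshift-sdn -o wide                           # sdn-* and ovs-* pods should be Running on every node
oc describe pod <istio-operator-pod> -n <operator-namespace>   # shows the full CNI/sandbox event history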

Related

How to install multiple istio control plane on same kubernetes cluster

We want to install multiple Istio control planes on the same Kubernetes cluster.
We installed Istio like this:
istioctl install -f istioOperator.yaml
istioOperator.yaml is based on
istioctl profile dump minimal
And it is further modified by changing istioNamespace and metadata/namespace, and by restricting the namespaces in the mesh with discoverySelectors, roughly as sketched below.
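A minimal sketch of what the modified istioOperator.yaml might look like (the resource name and the mesh: la label are assumed example values; istio-system-la is the namespace mentioned below):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: minimal-la
  namespace: istio-system-la          # metadata/namespace changed for the second install
spec:
  profile: minimal
  values:
    global:
      istioNamespace: istio-system-la # istioNamespace changed to match
  meshConfig:
    discoverySelectors:               # restrict which namespaces belong to this mesh
      - matchLabels:
          mesh: la                    # assumed example label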
When installing the second Istio in the same way, an error like the one below occurred (istio-system-la is the second Istio's namespace).
✔ Istio core installed
- Processing resources for Istiod.
2022-07-13T05:32:17.577423Z error installer failed to update resource with server-side apply for obj EnvoyFilter/istio-system-la/stats-filter-1.11: Internal error occurred: failed calling webhook "rev.validation.istio.io": failed to call webhook: Post "https://istiod.istio-system-la.svc:443/validate?timeout=10s": service "istiod" not found
...
How can we avoid this error and get the two Istio installations to coexist?

Kubectl connection refused existing cluster

Hope someone can help me.
To describe the situation in short, I have a self-managed k8s cluster running on 3 machines (1 master, 2 worker nodes). In order to make it HA, I attempted to add a second master to the cluster.
After some failed attempts, I found out that I needed to add a controlPlaneEndpoint entry to the kubeadm-config ConfigMap. So I did, setting it to masternodeHostname:6443.
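For reference, that change is typically made along these lines (a sketch; masternodeHostname stands in for the real hostname):

kubectl -n kube-system edit configmap kubeadm-config
# inside the ClusterConfiguration section, add:
#   controlPlaneEndpoint: "masternodeHostname:6443"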
I generated the certificate and join command for the second master, and after running it on the second master machine, it failed with
error execution phase control-plane-join/etcd: error creating local etcd static pod manifest file: timeout waiting for etcd cluster to be available
Checking the first master now, I get connection refused for the IP on port 6443. So I cannot run any kubectl commands.
I tried recreating the .kube folder, with all the config copied there, no luck.
I restarted kubelet and docker.
The containers running on the cluster seem OK, but I am locked out of any cluster configuration (the dashboard is down, kubectl commands are not working).
Is there any way to make it work again, without losing any of the configuration or the deployments already present?
Thanks! Sorry if it’s a noob question.
Cluster information:
Kubernetes version: 1.15.3
Cloud being used: bare-metal
Installation method: kubeadm
Host OS: RHEL 7
CNI and version: weave 0.3.0
CRI and version: containerd 1.2.6
This is an old, known problem with Kubernetes 1.15 [1,2].
It is caused by a short etcd timeout period. As far as I'm aware, it is a hard-coded value in the source and cannot be changed (a feature request to make it configurable is open for version 1.22).
Your best bet would be to upgrade to a newer version and recreate your cluster.
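Once on a newer release, joining an additional control-plane node looks roughly like this (a sketch; the token, hash, and certificate key are placeholders printed by the commands run on the existing control-plane node):

# On the existing control-plane node:
sudo kubeadm init phase upload-certs --upload-certs    # prints a certificate key
sudo kubeadm token create --print-join-command         # prints the base join command
# On the new control-plane node, combine the two:
sudo kubeadm join masternodeHostname:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>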

Error creating: Internal error occurred: failed calling webhook "validator.trow.io" installing Ceph with Helm on Kubernetes

I'm trying to install Ceph using Helm on Kubernetes, following this tutorial:
install ceph
The problem is probably that I installed the Trow registry beforehand, because as soon as I run the Helm step
helm install --name=ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml
I get this error in the ceph namespace:
Error creating: Internal error occurred: failed calling webhook "validator.trow.io": Post https://trow.kube-public.svc:443/validate-image?timeout=30s: dial tcp 10.102.137.73:443: connect: connection refused
How can I solve this?
Apparently you are right in your presumption; I have a few observations about this issue.
The Trow registry manager controls which images run in the cluster by implementing admission webhooks that validate every request before an image is pulled, and as far as I can see, Docker Hub images are not accepted by default.
The default policy will allow all images local to the Trow registry to
be used, plus Kubernetes system images and the Trow images themselves.
All other images are denied by default, including Docker Hub images.
Because the Trow installation procedure may require you to distribute and approve a certificate in order to establish a secure HTTPS connection from the target node to the Trow server, I would suggest checking that the certificate is present on the node where you run the ceph-helm chart, as described in the Trow documentation.
The other option is to run the Trow registry manager over plain HTTP with TLS disabled, as described in the installation instructions.
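If you want to confirm which webhook is intercepting the requests, you can inspect the registered validating webhooks first (a diagnostic sketch; the exact configuration name on your cluster may differ):

kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfiguration <trow-config-name> -o yaml   # look for validator.trow.io, its namespaceSelector and failurePolicy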
This command should help clean it up:
kubectl delete ValidatingWebhookConfiguration -n rook-ceph rook-ceph-webhook

Nginx Ingress Controller Installation Error, "dial tcp 10.96.0.1:443: i/o timeout"

I'm trying to set up a Kubernetes cluster with kubeadm and Vagrant. I faced an error while installing the NGINX ingress controller: a timeout when the pod tries to retrieve a ConfigMap through the Kubernetes API. I have looked around and tried the suggested solutions, with no luck so far, which is why I am writing this post.
Environment:
I'm using Vagrant to set up 2 nodes with the ubuntu/xenial image.
kmaster
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.71
kworker1
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.72
I followed the kubeadm guide to set up the cluster:
[Setup kubernetes with kubeadm]
My cluster init command is as below:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.2.71
and applied the Calico network plugin manifests:
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
(Calico is the plugin I currently have successfully installed; I will write another post about the flannel plugin, where the pods were unable to access the service.)
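A quick way to confirm Calico came up before installing anything else (a sketch, assuming the hosted manifests above install into kube-system with the standard labels):

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide   # one calico-node pod per node, all Running
kubectl get nodes                                                # nodes should report Ready once the CNI is up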
I'm using Helm to install the ingress controller, following the tutorial:
https://kubernetes.github.io/ingress-nginx/deploy/
This is the error that occurred once I applied the Helm deploy command and described the pod:
I'd appreciate it if someone could help. I know the reason is that the pod is unable to access the Kubernetes API, but shouldn't that already be enabled by Kubernetes by default?
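For what it's worth, API reachability from inside the cluster can be checked directly with a throwaway pod (a sketch; curlimages/curl is just a convenient image choice):

kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- \
    curl -k -m 5 https://kubernetes.default.svc:443/version   # times out if pods cannot reach 10.96.0.1:443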
My kube-system pod status is as below:
Other solutions suggested on the official Kubernetes website:
1) Install kube-proxy with a sidecar. I'm still new to Kubernetes and am looking for an example of how to install kube-proxy with a sidecar; I'd appreciate it if someone could provide one.
2) Use client-go. I'm very confused by this one; it seems to use the go command to pull a Go script, and I have no clue how that works with Kubernetes pods.
You guys are right. I tested with a DigitalOcean droplet and it works as expected; I hit a different error, "forbidden, user service account not permitted", so it looks like the pod is able to access the Kubernetes API now. I also tested installing Istio, which had hit the same issue before, and it now works on the DigitalOcean droplet.
Thank you, guys.

Kubernetes 1.11 could not find heapster for metrics

I'm using Kubernetes 1.11 on DigitalOcean. When I try to use kubectl top node, I get this error:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
but as stated in the docs, Heapster is deprecated and no longer required as of Kubernetes 1.10.
If you are running a newer version of Kubernetes and still receiving this error, there is probably a problem with your installation.
Please note that to install the metrics server on Kubernetes, you should first clone it:
git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
Then you should install it, WITHOUT GOING INTO THE CREATED FOLDER AND WITHOUT MENTIONING A SPECIFIC YAML FILE, only via:
kubectl create -f kubernetes-metrics-server/
In this way all services and components are installed correctly and you can run:
kubectl top nodes
or
kubectl top pods
and get the correct result.
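To confirm the metrics pipeline is actually up before running kubectl top (a sketch, assuming the manifests create the usual metrics-server Deployment in kube-system):

kubectl -n kube-system get deployment metrics-server   # should show READY 1/1
kubectl get apiservice v1beta1.metrics.k8s.io          # Available should be True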
For kubectl top node/pod to work, you need either Heapster or the metrics server installed on your cluster.
As the warning says, Heapster is being deprecated, so the recommended choice now is the metrics server.
So follow the directions here to install the metrics server.