Why does the logstash pod continually fail for the elastic-stack Helm chart?

I would like to run a full Elastic Stack instance on a production-grade Kubernetes cluster managed by GKE. I'm pretty new to Kubernetes in general, and especially to Helm.
I have tried running the following command with both versions 1.6.0 and 1.5.1 of the chart:
helm install stable/elastic-stack --version 1.5.1 --name test-elastic
Most of the pods spin up fine, except for the StatefulSet test-elastic-logstash, whose pod fails with a reproducible CrashLoopBackOff error. The readiness probe continually fails with connection refused, which isn't especially informative on its own.
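For reference, these are the commands I've been using to inspect the failing pod (a sketch; I'm assuming the StatefulSet's first replica is named test-elastic-logstash-0, following the usual StatefulSet naming):
kubectl describe pod test-elastic-logstash-0
kubectl logs test-elastic-logstash-0 --previous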
Could anyone please offer any thoughts on what's required to get this working?

Related

Using Colima I am getting `error: Metrics API not available` when running `kubectl top nodes` on Mac

I am using Colima version 0.4.6 on macOS 12.6.1. I am trying to create a test Ubuntu pod for training purposes, following How can I keep a container running on Kubernetes?. After creating that pod and running kubectl get pods, the pod hangs in the Pending state, and running kubectl top nodes gives the error error: Metrics API not available.
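For context, the pod spec I'm using is roughly the following (a sketch based on the linked answer; the pod name and image tag are my own choices):
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-test
spec:
  containers:
  - name: ubuntu
    image: ubuntu:22.04
    command: ["sleep", "infinity"]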
Any help or pointers in the right direction would be appreciated.
I am hoping to get pods up and running so I can launch a little test environment.
Please let me know if more info is needed.

Kubectl connection refused existing cluster

Hope someone can help me.
To describe the situation in short, I have a self-managed k8s cluster running on 3 machines (1 master, 2 worker nodes). In order to make it HA, I attempted to add a second master to the cluster.
After some failed attempts, I found out that I needed to add a controlPlaneEndpoint setting to the kubeadm-config ConfigMap. So I did, with masternodeHostname:6443.
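For reference, the change I made was roughly the following (a sketch of the standard edit; masternodeHostname stands in for the real hostname of the first master):
kubectl -n kube-system edit configmap kubeadm-config
and then, under the ClusterConfiguration section, adding:
controlPlaneEndpoint: "masternodeHostname:6443"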
I generated the certificate and join command for the second master, and after running it on the second master machine, it failed with
error execution phase control-plane-join/etcd: error creating local etcd static pod manifest file: timeout waiting for etcd cluster to be available
Checking the first master now, I get connection refused for its IP on port 6443, so I cannot run any kubectl commands.
I tried recreating the .kube folder, with all the config copied there; no luck.
I restarted kubelet and docker.
The containers running on the cluster seem OK, but I am locked out of any cluster configuration (the dashboard is down and kubectl commands are not working).
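For what it's worth, the control-plane containers can still be inspected directly on the first master with something like (a sketch; fill in the container ID from the first command's output):
docker ps | grep kube-apiserver
docker logs <apiserver-container-id>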
Is there any way to make it work again, without losing any of the configuration or the deployments already present?
Thanks! Sorry if it’s a noob question.
Cluster information:
Kubernetes version: 1.15.3
Cloud being used: (put bare-metal if not on a public cloud) bare-metal
Installation method: kubeadm
Host OS: RHEL 7
CNI and version: weave 0.3.0
CRI and version: containerd 1.2.6
This is an old, known problem with Kubernetes 1.15 [1,2].
It is caused by a short etcd timeout period. As far as I'm aware it is a hard-coded value in the source and cannot be changed (a feature request to make it configurable is open for version 1.22).
Your best bet would be to upgrade to a newer version and recreate your cluster.
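A minimal sketch of that recreate path, after installing newer kubeadm packages and assuming you have backups of your manifests (both flags are standard kubeadm options; substitute your real control-plane endpoint):
kubeadm reset
kubeadm init --control-plane-endpoint "masternodeHostname:6443" --upload-certs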

Nginx Ingress Controller Installation Error, "dial tcp 10.96.0.1:443: i/o timeout"

I'm trying to set up a Kubernetes cluster with kubeadm and Vagrant. While installing the nginx ingress controller I hit a timeout when the pod tried to retrieve its ConfigMap through the Kubernetes API. I have looked around and tried the solutions I found, still with no luck, which is why I'm writing this post.
Environment:
I'm using Vagrant to set up 2 nodes with the ubuntu/xenial image.
kmaster
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.71
kworker1
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.72
I followed the kubeadm guide to set up the cluster:
[Setup kubernetes with kubeadm]
My cluster init command is below:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.2.71
and applied the Calico network plugin manifests:
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
(Calico is the plugin I currently have installed successfully; I will write another post about the flannel plugin, with which pods were unable to access the service.)
I used Helm to install the ingress controller, following this tutorial:
https://kubernetes.github.io/ingress-nginx/deploy/
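For reference, the install commands from that guide look roughly like this (a sketch of the guide's Helm 3 syntax; an older Helm 2 install would pass --name instead of a positional release name):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace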
This is the error that occurred once I ran the Helm install command and described the pod (the "dial tcp 10.96.0.1:443: i/o timeout" from the title):
I'd appreciate any help. I know the cause is that the pod is unable to access the Kubernetes API, but shouldn't that already be enabled by Kubernetes by default?
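One quick way to test raw connectivity from a pod to the API server address in the error (a sketch; curlimages/curl is just a convenient image that ships curl, and -k skips certificate verification):
kubectl run apitest -it --rm --restart=Never --image=curlimages/curl --command -- curl -k https://10.96.0.1:443/version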
My kube-system pod status is as below:
The official Kubernetes documentation suggests two other approaches:
1) Install kube-proxy as a sidecar. I'm still new to Kubernetes and am looking for an example of how to do this; I'd appreciate it if someone could provide one.
2) Use client-go. I was very confused when I read that page; it seems to use the go command to pull a Go script, and I have no clue how that works with Kubernetes pods.
You guys are right. I tested on a DigitalOcean droplet and it works as expected. I then hit another error, "forbidden, user service account not permitted", which suggests the pods are now able to access the Kubernetes API. I also tested installing Istio, where I had encountered the same issue before, and it now works on the DigitalOcean droplet.
Thank you guys.

Kubernetes 1.11 could not find heapster for metrics

I'm using Kubernetes 1.11 on DigitalOcean. When I try to use kubectl top node I get this error:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
but as stated in the docs, heapster is deprecated and no longer required as of Kubernetes 1.10:
If you are running a newer version of Kubernetes and still receiving this error, there is probably a problem with your installation.
Please note that to install the metrics server on Kubernetes, you should first clone it by typing:
git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
then you should install it, WITHOUT GOING INTO THE CREATED FOLDER AND WITHOUT MENTIONING A SPECIFIC YAML FILE, only via:
kubectl create -f kubernetes-metrics-server/
In this way all services and components are installed correctly and you can run:
kubectl top nodes
or
kubectl top pods
and get the correct result.
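To confirm the install worked, you can check that the deployment is up and the metrics APIService registered (a sketch; these names match the standard metrics-server manifests):
kubectl get deployment metrics-server -n kube-system
kubectl get apiservice v1beta1.metrics.k8s.io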
For kubectl top node/pod to work, you need either heapster or the metrics server installed on your cluster.
As the warning says, heapster is being deprecated, so the recommended choice now is the metrics server.
So follow the directions here to install the metrics server.
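On recent clusters the metrics server can usually be installed straight from its release manifest, for example (a sketch; the URL points at the project's latest released components.yaml):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml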

How to install influxdb and grafana?

I tried to use the instructions from this link https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md but I was not able to install it. Specifically, I don't know what this instruction means: "Ensure that kubecfg.sh is exported." I don't even know where to find this; I ran sudo find / -name "kubecfg.sh" and found no results.
Moving on to the next step, kubectl create -f deploy/kube-config/influxdb/, when I ran this it said kube-system not found. I am using the latest version of Kubernetes, version 1.0.1.
These instructions are broken; can anyone provide working instructions on how to install this? I have a Kubernetes cluster up and running, I am able to create and delete pods and so on, and default is the only namespace I have when I do kubectl get pods,svc,rc --all-namespaces.
Changing kube-system to default in the yaml files gets me one step further, but I am still unable to access the UI. Installing into kube-system makes more sense, but I don't know how to do it, so any instructions for installing influxdb and grafana and getting them up and running would be very helpful.
... I am using latest version of kubernetes version 1.0.1
FYI, the latest version is v1.2.3.
... it says kube-system not found
You can create the kube-system namespace by running
kubectl create namespace kube-system.
Hopefully once you've created the kube-system namespace the rest of the instructions will work.
We had the same issue deploying grafana/influxdb, so we dug into it:
Per https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md, since we don't have an external load balancer, we changed the type on the grafana service to NodePort, which made it accessible at port 30397.
Then we looked at the controller configuration at https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/influxdb-grafana-controller.yaml and noticed the comment about using the api-server proxy, which we wouldn't be doing by exposing the NodePort, so we deleted the GF_SERVER_ROOT_URL environment variable from the config. At that point Grafana at least seemed to be running, but it looked like it was having trouble reaching influxdb.
We then changed the datasource to use localhost instead of monitoring-influxdb and were able to connect. We're getting data on cluster usage now, though individual pod data doesn't seem to be working.
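For anyone following along, the service change can be applied with something like this (a sketch; monitoring-grafana is the service name used by the heapster manifests, which deploy into the kube-system namespace):
kubectl -n kube-system patch service monitoring-grafana -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get service monitoring-grafana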