I am new to Kubernetes and InfluxDB, so there is a good chance I have misunderstood something.
I managed to deploy InfluxDB and Telegraf to my EKS (AWS) cluster, and now I am trying to edit the Telegraf configuration so that it reads input data from Kafka and stores it in InfluxDB. My problem is that I cannot edit any file in the container or create a new configuration file.
For example, when I try to edit the config file
vi /etc/telegraf/telegraf.conf
I get a "Read-only file system" error.
I read that the filesystems are read-only in newer Kubernetes versions, but there has to be some workaround for this.
My Kubernetes version is:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
and I obviously run everything with admin rights. Any help would be appreciated.
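For anyone hitting the same wall: the usual workaround is not to edit the file inside the container at all, but to supply telegraf.conf from a ConfigMap and recreate the pods. If Telegraf was installed from a Helm chart, there is most likely already such a ConfigMap that can be changed with kubectl edit configmap. A minimal sketch of the manual route, where the ConfigMap name, pod label, Kafka broker, topic and InfluxDB URL are all placeholders:
# write the desired Telegraf config locally (broker, topic and URLs are placeholders)
cat > telegraf.conf <<'EOF'
[[inputs.kafka_consumer]]
  brokers = ["kafka:9092"]
  topics = ["metrics"]
  data_format = "influx"

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "telegraf"
EOF
# create or update a ConfigMap that holds the file
kubectl create configmap telegraf-config --from-file=telegraf.conf --dry-run -o yaml | kubectl apply -f -
# mount the ConfigMap at /etc/telegraf in the Telegraf Deployment, then delete the
# pods so the Deployment recreates them with the new config (the label is a guess)
kubectl delete pod -l app=telegraf
The read-only error is expected if /etc/telegraf is already mounted from a ConfigMap, since ConfigMap volumes are mounted read-only; that is another hint that the ConfigMap is the place to make the change.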
Related
After upgrading kubectl, I am unable to log in to my OpenShift cluster.
When I try to log in with the oc login command, it returns the error below:
Error: unknown command "login" for "kubectl"
Did you mean this?
logs
plugin
Run 'kubectl --help' for usage.
Below are some version details from the machine where I have the login issue:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Below are some version details from a machine where I have no issue; I can log in there and did not upgrade kubectl:
# kubectl version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v0.0.0-master+$Format:%h$", GitCommit:"$Format:%H$", GitTreeState:"", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.2+f2384e2", GitCommit:"f2384e2", GitTreeState:"clean", BuildDate:"2020-06-16T03:21:27Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Addition: I have tried downgrading kubectl, but I get the same error on the machine where I am unable to log in using oc login.
It looks like you have symlinked oc to kubectl somehow.
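One quick way to confirm that is to look at what the oc on the PATH actually resolves to (the paths you see will differ; these are just illustrations):
# show whether oc is a symlink and where it points
ls -l "$(which oc)"
# compare what the two binaries report about themselves
oc version
kubectl version --client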
As you note, kubectl does not have a login command; you need to use the actual oc CLI tool to log in to your OpenShift cluster. This obtains the proper tokens that you need to talk to the OpenShift API.
Alternatively, you can get the necessary token via the OpenShift Web Console (top right, "Copy Login Command" or something like that).
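With a real oc binary in place, the login is then something along these lines, where the API URL and token are placeholders; the web console's "Copy Login Command" gives you the exact line to paste:
oc login https://api.my-cluster.example.com:6443 --token=<token-from-web-console>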
Minikube not starting with several error messages.
kubectl version gives the following port-related message:
iqbal#ThinkPad:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
You didn't give many details, but I ran into some minikube issues with Kubernetes 1.12 that I solved a few days ago.
The compatibility matrix between Kubernetes and Docker recommends running Docker 18.06 with Kubernetes 1.12 (Docker 18.09 is not supported yet).
So make sure your Docker version is NOT above 18.06, then run the following:
# clean up
minikube delete
minikube start --vm-driver="none"
kubectl get nodes
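To double-check which Docker engine version is actually installed before doing this, something like the following works:
# only the server (engine) version matters for the compatibility matrix
docker version --format '{{.Server.Version}}'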
If you are still encountering issues, please give more details, in particular the output of minikube logs.
If you want to change the VM driver, add the appropriate --vm-driver=xxx flag to minikube start. Minikube supports the following drivers:
virtualbox
vmwarefusion
kvm2
kvm (deprecated in favor of kvm2)
hyperkit
xhyve
hyperv
none (Linux only) - this driver runs the Kubernetes cluster components on the host instead of in a VM. This can be useful for CI workloads which do not support nested virtualization. For example, if your VM driver is virtualbox, use:
$ minikube delete
$ minikube start --vm-driver=virtualbox
Here are the versions I'm using
Docker-ce
Client:
Version: 17.06.1-ce
Server:
Engine:
Version: 17.06.1-ce
minikube:
Kubectl:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Kubeadm:
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
VirtualBox:
Version 5.2.18 r124319 (Qt5.6.2)
I happen to need to specify the following:
kubeadm reset
kubeadm init --pod-network-cidr=192.168.0.0/16
However, when I then start minikube, it always fails with the following:
kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong CA cert
The workaround I've been able to find is to delete all .conf files in /etc/kubernetes
cd /etc/kubernetes/
sudo rm *.conf
cd
sudo minikube delete # may also need rm -rf ~/.minikube
sudo minikube start --vm-driver=none
However, new config files are generated, and so are the .yaml files under /etc/kubernetes/manifests, thus erasing all the additional attributes of the configuration.
Up to that point, running kubeadm config view would show the kubeadm init pod-network-cidr parameter, but not after deleting the .conf files and starting minikube again.
First: is this "wrong CA cert" error a bug in minikube?
Second: is there an alternative workaround that would preserve the extra parameters passed during kubeadm init?
I've also tried to pass the three attributes that get cleared from the kube-controller-manager.yaml file as --extra-config parameters on the minikube start command.
The three missing attributes associated with --pod-network-cidr=192.168.0.0/16 that I've been able to ascertain are:
--allocate-node-cidrs=true
--cluster-cidr=192.168.0.0/16
--node-cidr-mask-size=24
My minikube start command looks like this:
sudo minikube start --vm-driver=none --extra-config=controller-manager.allocate-node-cidrs=true, controller-manager.cluster-cidr=192.168.0.0/16, controller-manager.node-cidr-mask-size=24
But I get a further error when trying this.
Any suggestions?
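One thing to check before anything else: --extra-config is normally given once per setting rather than as a single comma-separated list, and the spaces in the command above would already break flag parsing in the shell. A sketch of the repeated-flag form, reusing the same values, would be:
sudo minikube start --vm-driver=none \
  --extra-config=controller-manager.allocate-node-cidrs=true \
  --extra-config=controller-manager.cluster-cidr=192.168.0.0/16 \
  --extra-config=controller-manager.node-cidr-mask-size=24
Whether that also resolves the "wrong CA cert" problem is a separate question.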
You generally use minikube to set up a small Kubernetes cluster of its own, usually on your local machine.
You generally use kubeadm to set up a full-blown cluster of its own.
You don't normally use both of them together.
Hope it helps!
Is it possible to run "kubeadm init" without Internet access?
When executing kubeadm init on an isolated network where the host is not allowed to make external connections, it fails on what appears to be some sort of stable-version check, as it tries to retrieve https://storage.googleapis.com/kubernetes-release/release/stable-1.6.txt.
# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
unable to get URL "https://storage.googleapis.com/kubernetes-release/release/stable-1.6.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.6.txt: dial tcp 216.58.204.80:443: i/o timeout
Why is this check needed? The contents of that URL currently appear to be "v1.6.4", which is the version that is installed:
# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
This seems to be behavior introduced after 1.6.0. I have looked at the documentation, flags, and configuration options, but have not found a way to execute kubeadm init without this (not even with --skip-preflight-checks).
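(For reference, on a machine that does have outbound access, that URL serves nothing more than a plain version string, which can be checked with:)
curl -s https://storage.googleapis.com/kubernetes-release/release/stable-1.6.txt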
Resolved by using the following command:
kubeadm init --kubernetes-version=v1.6.4
(Note the "v" in the version number.)
I want to use the kubectl set image command to upgrade my deployment, as detailed in the Kubernetes Hello World walkthrough, but whenever I do, I get the following error:
Error: unknown command "set" for "kubectl"
My kubernetes version (obtained by running kubectl version) is the following:
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.5", GitCommit:"25eb53b54e08877d3789455964b3e97bdd3f3bce", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean"}
Any help would be really appreciated.
kubectl set was added in 1.3, and your client is 1.2.5, so you will need to upgrade your kubectl client to use it.
Furthermore, you can use this answer to familiarize yourself with the syntax of the set image command:
https://stackoverflow.com/a/71849740/14486701
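For a rough idea of what the command looks like once the client is new enough, the general form is something like this; the deployment, container and image names below are placeholders:
# update the image of one container in a deployment
kubectl set image deployment/hello-world hello-world=gcr.io/my-project/hello-app:2.0
# watch the rollout progress
kubectl rollout status deployment/hello-world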