error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable - kubernetes

I have a kubeconfig file inside my ~/.kube/ folder but I am still facing this issue.
I have also tried:
kubectl config set-context ~/.kube/kubeconfig.yml
kubectl config use-context ~/.kube/kubeconfig.yml
No luck, still the same error.

Answering my own question:
Initially I lost a lot of time on this error, but later I found that my kubeconfig did not have the correct context.
I tried the same steps:
kubectl config set-context ~/.kube/kubeconfig1.yml
kubectl config use-context ~/.kube/kubeconfig1.yml
or add a line with the kubeconfig to your ~/.bashrc file:
export KUBECONFIG=~/.kube/<kubeconfig_env>.yml
Also, if you want to use multiple kubeconfig files, add them to your ~/.bashrc like this:
export KUBECONFIG=~/.kube/kubeconfig_dev.yml:~/.kube/kubeconfig_dev1.yml:~/.kube/kubeconfig_dev2.yml
with different kube contexts, and it worked well.
Ideally the error message would be more specific and point to the kubecontext problem, but after some digging it worked.
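Once KUBECONFIG is exported, a quick way to confirm which contexts were picked up and to switch between them (the context name below is just a placeholder):
kubectl config get-contexts
kubectl config use-context <context-name-from-the-list>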

If you are running k3s, you can put the following line in your ~/.zshrc / ~/.bashrc:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

By default, kubectl looks for a file named config in the $HOME/.kube directory, so you have to rename your config file or set its location properly.
Look here for a more detailed explanation.
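For example, assuming the kubeconfig.yml file from the question, you can either rename it into place or point KUBECONFIG at it:
mv ~/.kube/kubeconfig.yml ~/.kube/config
# or
export KUBECONFIG=~/.kube/kubeconfig.yml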

I had the same problem because I use the Windows Subsystem for Linux.
If you run kubectl in the subsystem, you must copy your config file into the Linux file system.
For example, if your username on Windows is jammy, and your username is also jammy in the Linux subsystem, you must copy the config file
from:
/mnt/c/Users/jammy/.kube/config
to:
/home/jammy/.kube/
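A minimal way to do that from inside the subsystem, using the paths above:
mkdir -p /home/jammy/.kube
cp /mnt/c/Users/jammy/.kube/config /home/jammy/.kube/config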

Related

Cannot change ceph configuration

I deployed a ceph cluster with cephadm on 5 nodes. I am trying to change bluestore_cache_size with this command:
sudo ceph config-key set bluestore_cache_size 200221225472
but when I run this command:
sudo ceph-conf --show-config | grep bluestore_cache
it always shows bluestore_cache_size = 0. How can I change this configuration? Any help would be appreciated.
Try this:
ceph tell osd.* injectargs --bluestore_cache_size=200221225472
You might also need to know about the different config commands.
The following CLI commands are used to configure the cluster:
ceph config dump will dump the entire configuration database for the cluster.
ceph config get <who> will dump the configuration for a specific daemon or client (e.g., mds.a), as stored in the monitors’ configuration database.
ceph config set <who> <option> <value> will set a configuration option in the monitors’ configuration database.
ceph config show <who> will show the reported running configuration for a running daemon. These settings may differ from those stored by the monitors if there are also local configuration files in use or options have been overridden on the command line or at run time. The source of the option values is reported as part of the output.
ceph config assimilate-conf -i <input file> -o <output file> will ingest a configuration file from input file and move any valid options into the monitors’ configuration database. Any settings that are unrecognized, invalid, or cannot be controlled by the monitor will be returned in an abbreviated config file stored in output file. This command is useful for transitioning from legacy configuration files to centralized monitor-based configuration.
source: https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/
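Going by the commands quoted above, the likely fix for the original question is to store the value in the monitors' configuration database instead of config-key (a sketch; osd is assumed to be the intended target here):
ceph config set osd bluestore_cache_size 200221225472
ceph config dump | grep bluestore_cache_size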

kubeadm init --config issue

I'm trying to init a kubernetes cluster using kubeadm. I followed the instructions found here for the stacked control plane/etcd nodes. For the container runtime installation, it is recommended that the runtime (in my case, containerd) and kubelet use the same cgroup driver (in my case, systemd).
Still following the instructions, I added
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
to /etc/containerd/config.toml and then restarted containerd using sudo systemctl restart containerd
So far so good, everything works as expected. Then I get to the part where I have to configure the cgroup driver used by kubelet. It says I have to call kubeadm init with a config file, something like: sudo kubeadm init --config config.yaml. Here is my problem: I found very little information about what that config file should look like. The documentation says kubeadm config print init-defaults should print the default config, but the default config has nothing to do with my setup (i.e., it uses docker instead of containerd). I found an example of a file here, but so far I've not managed to adapt it to my setup, and I've found very little documentation for doing so. There has to be a simpler way than rewriting an entire config file just for one attribute change, right? How come I can't print the literal config file used by kubeadm init?
My solution was to restart containerd with default config (no systemd cgroup), then run kubeadm init as I would normally. Once the cluster was started, I printed the config to a file with: kubeadm config view. I then modified that file to add the required parameters to set systemd cgroup. Finally, I configured containerd to use systemd and ran kubeadm init with the newly created config file. Everything worked.
Warning: The command kubeadm config view says that the "view" command is deprecated and to use kubectl get cm -o yaml -n kube-system kubeadm-config instead, but the output of that command doesn't create a valid kubeadm config file, so I don't recommend doing that.
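For reference, the piece of the config file that changes the kubelet cgroup driver is small. A minimal sketch (the apiVersion values depend on your kubeadm/kubelet versions, and kubeadm-config.yaml is just a placeholder file name):
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
Then run: sudo kubeadm init --config kubeadm-config.yaml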

Kubernetes : do we need to set http_proxy and no_proxy in apiserver manifest?

My cluster is behind a corporate proxy, and I have manually set http_proxy=myproxy, https_proxy=myproxy and no_proxy=10.96.0.0/16,10.244.0.0/16,<nodes-ip-range> in the three kubernetes core manifests (kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml). Now, I want to upgrade kubernetes with kubeadm. But I know kubeadm will regenerate these manifests from the kubeadm-config configmap when upgrading, so without these environment variables. I can't find an extraEnvs key in kubeadm-config configmap (like extraArgs and extraVolumes).
Do I really need to set these variables in all kubernetes manifests? If not, I think kubeadm will throw a warning because all communications will use the proxy (and I don't want that).
How can I pass these variables to kubeadm when upgrading ?
There are no such flags available for kubeadm at the moment. You may want to open a GitHub feature request for that.
You can use the way described here or here and export variables:
$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
And then use sudo -E bash so the current environment variables are preserved:
$ sudo -E bash -c "kubeadm init... "
An alternative way would be to reference those variables in the command, as shown here:
NO_PROXY=master-ip,node-ip,127.0.0.1 HTTPS_PROXY=http://proxy-ip:port/ sudo kubeadm init --pod-network-cidr=192.168.0.0/16...

K8s getting error after changing gcloud SDK location on macOS

I had just finished connecting to my Kubernetes cluster.
But after that I needed to move the gcloud SDK from the /Downloads folder to the /usr folder.
Next, I ran the install.sh file to update the new location in .bash_profile.
Then I checked the gcloud command; it was working well.
But when I run kubectl get pod, this error shows:
Unable to connect to the server: error executing access token command "/Users/panda/Downloads/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=fork/exec /Users/panda/Downloads/google-cloud-sdk/bin/gcloud: no such file or directory output= stderr=
How can I update the location of the gcloud SDK to solve this problem?
Thanks for your help.
In your Kubeconfig file (probably in ~/.kube/config) you’ll see it has the old path to your gcloud CLI. Update that file with the new path.
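For example, the entry to look for usually sits under the users section of ~/.kube/config and looks roughly like this (the user name below is a placeholder; the cmd-path value matches the error message above):
users:
- name: gke_my-project_my-zone_my-cluster
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /Users/panda/Downloads/google-cloud-sdk/bin/gcloud
Change cmd-path to the new location of the gcloud binary, then run kubectl get pod again.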

How to save generated kube config to .kube/config

I'm using rke to generate a Kubernetes cluster in a private cloud. It produces a kube_config_cluster.yml file. Is there a way to add this config to my $HOME/.kube/config file?
Without having $HOME/.kube/config set up, when using kubectl I have to pass the argument:
kubectl --kubeconfig kube_config_cluster.yml <command>
Or set the KUBECONFIG environment variable.
export KUBECONFIG=kube_config_cluster.yml
A kubectl config merge command is not yet available, but you can achieve a config merge by running:
Command format:
KUBECONFIG=config1:config2 kubectl config view --flatten
Example:
Merge a config to ~/.kube/config and write back to ~/.kube/config-new.yaml.
Do not pipe directly to the config file! Otherwise, it will delete all your old content!
KUBECONFIG=~/.kube/config:/path/to/another/config.yml kubectl config view --flatten > ~/.kube/config-new.yaml
cp ~/.kube/config-new.yaml ~/.kube/config
If kubectl can read that as a valid config file, you can just use it as your kubeconfig. So cp kube_config_cluster.yml $HOME/.kube/config should work fine. From there it'll read that config file by default and you won't have to specify it.
I generally use the commands below to see and change contexts; they are not too cluttered and easy to fire off:
kubectl config current-context #show the current context in use
kubectl config use-context context-name-you-want-to-use