Getting error while trying to set up Kubernetes on Debian using Helm

While running helm init I was getting an error:
Error: error installing: the server could not find the requested resource (post deployments.extensions)
But I solved it by running:
helm init --client-only
But when I run:
helm upgrade --install --namespace demo demo-databases-ephemeral charts/databases-ephemeral --wait
I'm getting:
Error: serializer for text/html; charset=utf-8 doesn't exist
I found no convincing solution and I'm not able to proceed with the setup.
Any help would be appreciated.

Check if your ~/.kube/config exists and is properly set up. If not, run the following command:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
Now check if kubectl is properly set up using:
kubectl version
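Since this serializer error means the client received text/html instead of a JSON API response, it can also help to confirm which endpoint kubectl is actually talking to:
kubectl cluster-info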
This answer is specific to the issue you are getting. If it does not resolve the issue, please provide more of the error log.

Apparently, your kube-dns pod is not able to find the API server, so it returns text/html rather than JSON.
1) Check for errors in the DNS container apart from Error: serializer for text/html; charset=utf-8 doesn't exist:
kubectl logs <kube-dns-pod> -n kube-system kubedns
2) Update your DNS pod config with the following flags:
--kubecfg-file=~/.kube/config <-- path to your kubeconfig file
--kube-master-url=https://0.0.0.0:3000 <-- address of your master node
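A minimal sketch of where those flags go, assuming a standard kube-dns Deployment (the values in angle brackets are placeholders for your own paths and master address):
kubectl -n kube-system edit deployment kube-dns
# in the kubedns container's args section, add:
#   --kubecfg-file=<path-to-kubeconfig-visible-inside-the-pod>
#   --kube-master-url=https://<master-ip>:<port>
Note that the kubeconfig path must exist inside the pod, for example via a mounted Secret or hostPath volume.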

Related

Kubernetes apply command produces 'wrong encoding error'

I'm trying to execute:
microk8s kubectl apply -f deployment.yaml
and I am always getting:
error: string field contains invalid UTF-8
No matter which file, or even which arbitrary string, I pass as the file path parameter. Even if I execute:
microk8s kubectl apply -f blablabla
Result is the same.
UPD: I resolved the problem by restarting the microk8s service. After the restart everything is fine, but I still have no idea what caused it.
I have posted a Community Wiki answer for better visibility.
As the OP mentioned in the question, they resolved the problem by restarting the microk8s service:
I resolved the problem by restarting microk8s service. After restart everything is fine.
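For completeness, a minimal way to perform that restart, assuming a standard snap-based MicroK8s install:
microk8s stop
microk8s start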
This is not a formatting problem in the manifest; instead, it's a corrupted cache in $HOME/.kube/.
Try deleting the cache:
rm -rf $HOME/.kube/http-cache
rm -rf $HOME/.kube/cache

Kubernetes Control Plane - All kubectl commands fail with 403 Forbidden

OS: Red Hat 7.9
Docker and Kubernetes (kubectl, kubelet, kubeadm) installed as per the documentation.
Kubernetes cluster initialized using:
sudo kubeadm init
After all this, checking 'docker ps' shows all the services up.
But all kubectl commands except 'kubectl config view' fail with the error:
'Unable to connect to the server: Forbidden'
The issue was with the corporate proxy. I had to set 'no_proxy' as an environment variable and also in the Docker proxy configuration, and the issue was resolved.
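A hedged sketch of both settings, assuming a systemd-managed Docker daemon; the addresses are placeholders, and CIDR support in no_proxy varies by tool:
# shell environment (e.g. in /etc/profile.d/proxy.sh)
export no_proxy=localhost,127.0.0.1,<master-ip>,.svc,.cluster.local
# Docker: create /etc/systemd/system/docker.service.d/http-proxy.conf containing:
#   [Service]
#   Environment="NO_PROXY=localhost,127.0.0.1,<master-ip>"
# then reload and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker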

Internal certificate used when installing Helm Tiller on Kubernetes

The error below is triggered when executing kubectl -n gitlab-managed-apps logs install-helm.
I've tried regenerating the certificates, and bypassing the certificate check. Somehow it is using my internal certificate instead of the certificate of the source.
root@dev # kubectl -n gitlab-managed-apps logs install-helm
+ helm init --tiller-tls --tiller-tls-verify --tls-ca-cert /data/helm/helm/config/ca.pem --tiller-tls-cert /data/helm/helm/config/cert.pem --tiller-tls-key /data/helm/helm/config/key.pem
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: x509: certificate is valid for *.tdebv.nl, not kubernetes-charts.storage.googleapis.com
What might be the issue here? The error GitLab is giving me (screenshot omitted) doesn't contain much information either.
After having the same issue I finally found the solution:
In the /etc/resolv.conf file on your master and worker nodes, find and remove the search XYZ.com entry.
If you are using Jelastic you have to remove this entry every time after a restart. It gets added by Jelastic automatically. I already contacted them so maybe they will fix it soon.
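A blunt sketch for removing that line (back up the file first; resolv.conf is often regenerated by a DHCP client or resolver manager, which matches the re-adding behavior described above):
sudo cp /etc/resolv.conf /etc/resolv.conf.bak
sudo sed -i '/^search /d' /etc/resolv.conf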
Creating "~/.helm/repository/repositories.yaml" with the following content solved the problem.
cat << EOF > ~/.helm/repository/repositories.yaml
apiVersion: v1
repositories:
- caFile: ""
cache: ~/.helm/repository/cache/stable-index.yaml
certFile: ""
keyFile: ""
name: stable
password: ""
url: https://kubernetes-charts.storage.googleapis.com
username: ""
- caFile: ""
cache: ~/.helm/repository/cache/local-index.yaml
certFile: ""
keyFile: ""
name: local
password: ""
url: http://127.0.0.1:8879/charts
username: ""
EOF
helm init
I experienced the same issue on Kubernetes with the Calico network stack under Debian Buster.
After checking a lot of configs and parameters, I got it to work by changing the policy of the FORWARD chain to ACCEPT. This made it clear that the issue was somewhere around the firewall.
Running iptables -L gave me the following revealing warning: # Warning: iptables-legacy tables present, use iptables-legacy to see them
The output of the list command did not contain any Calico rules, while running iptables-legacy -L showed them, so it was obvious why it didn't work: Calico uses the legacy iptables interface.
The issue is Debian's switch to iptables-nft in the alternatives system; you can check the current setting via:
ls -l /etc/alternatives | grep iptables
Doing the following:
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
Now it all works fine! Thanks to Long on the Kubernetes Slack channel for pointing the way to solving it.
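To confirm the switch took effect, a quick sanity check (Calico's chains are typically prefixed with cali-):
update-alternatives --display iptables
iptables -L -n | grep -i cali | head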

Determine what resource was not found from "Error from server (NotFound): the server could not find the requested resource"

I'm running kubectl create -f notRelevantToThisQuestion.yml
The response I get is:
Error from server (NotFound): the server could not find the requested resource
Is there any way to determine which resource was requested that was not found?
kubectl get ns returns:
NAME          STATUS   AGE
default       Active   243d
kube-public   Active   243d
kube-system   Active   243d
This is not a cron job.
Client version 1.9
Server version 1.6
This is very similar to https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1 but my k8s cluster has been deployed correctly (everything's been working for almost a year, I'm adding a new pod now).
To solve this, downgrade the client or upgrade the server. In my case I had upgraded the server (new minikube) but forgot to upgrade the client (kubectl), and ended up with these versions:
$ kubectl version --short
Client Version: v1.9.0
Server Version: v1.14.1
When I upgraded the client (in this case to v1.14.2), everything started to work again.
Instructions for installing (in your case, upgrading) the client are here: https://kubernetes.io/docs/tasks/tools/install-kubectl
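One commonly used way to do this on Linux (check the linked page for your platform; the version here matches the example above, and should be within one minor release of your server):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.2/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl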
I had the same error when trying to do CD with Jenkins and Kubernetes. In the pipeline I execute kubectl create -f app-deployment.yml -v=8; the verbose output shows more information about the error.
The cause of the problem was the version skew. From the documentation:
a client should be skewed no more than one minor version from the master, but may lead the master by up to one minor version. For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and should work with v1.2, v1.3, and v1.4 clients.
From http://words.yuvi.in/post/kubectl-rbac/:
Running kubectl create -f notRelevantToThisQuestion.yml -v=8 will print all the HTTP traffic (requests and responses!) in an easy-to-read way. This way, one can identify which resource is not available from the HTTP responses.
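For example, a rough way to filter for the failing request (illustrative only; the exact log format varies by kubectl version, and the verbose output goes to stderr):
kubectl create -f notRelevantToThisQuestion.yml -v=8 2>&1 | grep -iE 'POST|GET|404'
# the request URL that comes back 404 names the API group/resource the server does not know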
Apply these and then try again:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This solution is particularly for macOS users.
Step 1: Update the Kubernetes CLI:
brew upgrade kubernetes-cli
Step 2: Overwrite the existing link:
brew link --overwrite kubernetes-cli
For OpenShift, I was using an old oc CLI version; updating to the latest oc CLI solved my issue.
I stumbled upon this question when creating a resource from the Dashboard.
The resource was namespaced and I had no namespace selected. Selecting a namespace fixed the server could not find the requested resource error.
In my case, I hadn't enabled Kubernetes in Docker Desktop.
Enabling it fixed the issue.

unable to pull public images with kubernetes using kubectl

I run the following commands, and when I check whether the pods are running I get the following errors:
kubectl run tomcat --image=tomcat --port 8080
Failed to pull image "tomcat": rpc error: code = Unknown desc = no matching manifest for linux/amd64 in the manifest list entries
and
kubectl run nginx3 --image ngnix --port 80
Failed to pull image "ngnix": rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login'
I've seen a post on GitHub about how to handle this when private repos cause the issue, but not public ones. Has anyone run into this before?
First Problem
From a GitHub issue:
Sometimes, we'll have non-amd64 image build jobs finish before their amd64 counterparts, and due to the way we push the manifest list objects to the library namespace on the Docker Hub, that results in amd64-using folks (our primary target users) getting errors of the form "no supported platform found in manifest list" or "no matching manifest for XXX in the manifest list entries"
The Docker Hub manifest list is not up to date with the amd64 build for tomcat:latest.
Try another tag:
kubectl run tomcat --image=tomcat:9.0 --port 8080
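One way to check which platforms a tag actually provides (a sketch; docker manifest inspect may require experimental CLI features to be enabled):
docker manifest inspect tomcat | grep architecture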
Second Problem
Use nginx, not ngnix. It's a typo.
$ kubectl run nginx3 --image nginx --port 80