I have just installed Rancher for test purposes. First I installed Docker, then kubectl and Helm, and then Rancher itself. When I try to create a new Kubernetes cluster, I get the error below. From what I found while searching, I thought it was a certificate error.
Failed to create fleet-default/aefis-test cluster.x-k8s.io/v1beta1, Kind=Cluster for rke-cluster fleet-default/aefis-test: Internal error occurred: failed calling webhook "default.cluster.cluster.x-k8s.io": failed to call webhook: Post "https://webhook-service.cattle-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-cluster?timeout=10s": service "webhook-service" not found
I used this command to install Rancher:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:latest --no-cacerts
I hope somebody has a good idea or solution for this error. Thanks.
If I try to delete the webhook secret to trigger the creation of a new one, it throws this error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
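The "connection to the server localhost:8080 was refused" message usually just means kubectl on the host has no kubeconfig pointing at Rancher's embedded cluster. A minimal sketch of running kubectl inside the Rancher container instead, assuming the single-node Docker install above (the container ID is a placeholder, and the kubeconfig path of the embedded k3s is an assumption):
docker ps                                   # note the rancher/rancher container ID, e.g. abc123
docker exec -it abc123 kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml -n cattle-system get pods
docker exec -it abc123 kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml -n cattle-system get svc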
Related
I'm trying to build a website in AWS using Docker and Kubernetes, but I'm getting the error The connection to the server localhost:8080 was refused - did you specify the right host or port?
I don't have a specific file for Kubernetes, but a folder with all of them, so I'm building this way:
docker build .
kubectl apply -f ./helm/cognos-proxy-login-chart --recursive
The Docker command completed successfully.
Am I building in the right way? What should I do?
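For reference, a typical flow is to tag the image, push it to a registry the cluster can pull from, and only then apply the manifests; a rough sketch, where the registry, image name, and region are placeholders:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/cognos-proxy-login:latest .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/cognos-proxy-login:latest
kubectl apply -f ./helm/cognos-proxy-login-chart --recursive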
I got these error messages in my pod, using this command:
kubectl create deploy fastapi-helloworld --image=juanb3r/fastapi-multi:latest
I don't know why the container can't be created.
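A usual first step for finding out why the container can't be created is to look at the pod's events and logs; the label below matches the deployment name from the create command above:
kubectl get pods
kubectl describe pod -l app=fastapi-helloworld
kubectl logs -l app=fastapi-helloworld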
I just needed to install this:
zypper install apparmor-parser
on my Vagrant box.
OS: Red Hat 7.9
Docker and Kubernetes (kubectl, kubelet, kubeadm) installed as per the documentation.
Kubernetes cluster initialized using
sudo kubeadm init
After all this, checking 'docker ps' shows all the services up.
But all kubectl commands except 'kubectl config view' fail with the error
'Unable to connect to the server: Forbidden'
The issue was with the corporate proxy. I had to set 'no_proxy' as an environment variable and also in the Docker proxy configuration, and then the issue was resolved.
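For reference, a minimal sketch of those two pieces; the proxy address, the service CIDR, and the node IP are placeholders for your environment:
# shell environment, e.g. in /etc/profile.d/proxy.sh
export no_proxy=localhost,127.0.0.1,10.96.0.0/12,<node-ip>
# Docker systemd drop-in: /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,<node-ip>"
# reload and restart Docker afterwards
sudo systemctl daemon-reload
sudo systemctl restart docker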
I install the Bitnami Helm chart, using the example shown in the README:
helm install my-db \
--namespace dar \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
Then, following the instructions in the blurb that prints after a successful installation, I forward the port to port 5432 and try to connect:
PGPASSWORD="secretpassword" psql --host 127.0.0.1 -U postgres -d my-database -p 5432
But I get the following error:
psql: error: could not connect to server: FATAL: password authentication failed for user "postgres"
How can this be? Is the Helm chart buggy?
Buried deep in the stable/postgresql issue tracker is the source of this very-hard-to-debug problem.
When you run helm uninstall ... it errs on the side of caution and doesn't delete the storage associated with the database you got when you first ran helm install ....
This means that once you've installed Postgres once via Helm, the secrets will always be the same in subsequent installs, regardless of what the post-installation blurb tells you.
To fix this, you have to manually remove the persistent volume claim (PVC) which will free up the database storage.
kubectl delete pvc data-my-db-postgresql-0
(Or whatever the PVC associated with your initial Helm install was named.)
Now a subsequent helm install ... will create a brand-new PVC and login can proceed as expected.
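Alternatively, instead of deleting the PVC, you can read the password the chart is actually using from the retained secret; a sketch assuming the release and namespace above (the key is postgresql-password in older chart versions and postgres-password in newer ones):
kubectl get secret --namespace dar my-db-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode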
I run the following commands, and when I check whether the pods are running I get the following errors:
Failed to pull image "tomcat": rpc error: code = Unknown desc = no
matching manifest for linux/amd64 in the manifest list entries
kubectl run tomcat --image=tomcat --port 8080
and
Failed to pull image "ngnix": rpc error: code = Unknown desc
= Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login'
kubectl run nginx3 --image ngnix --port 80
I've seen a post on GitHub about how to handle this when private repos cause an issue, but not public ones. Has anyone run into this before?
First Problem
From github issue
Sometimes, we'll have non-amd64 image build jobs finish before their amd64 counterparts, and due to the way we push the manifest list objects to the library namespace on the Docker Hub, that results in amd64-using folks (our primary target users) getting errors of the form "no supported platform found in manifest list" or "no matching manifest for XXX in the manifest list entries"
The Docker Hub manifest list is not up to date with the amd64 build for tomcat:latest.
Try another tag:
kubectl run tomcat --image=tomcat:9.0 --port 8080
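If you want to confirm which platforms a given tag actually provides before pulling it, you can inspect its manifest list (on older Docker versions this may require enabling experimental CLI features):
docker manifest inspect tomcat:9.0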
Second Problem
Use nginx, not ngnix. It's a typo.
$ kubectl run nginx3 --image nginx --port 80