I'm trying to build a website on AWS using Docker and Kubernetes, but I'm getting the error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
I don't have a single Kubernetes manifest file, but a folder containing all of them, so I'm building this way:
docker build .
kubectl apply -f ./helm/cognos-proxy-login-chart --recursive
The Docker command completed successfully.
Am I building this the right way? What should I do?
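For reference, a rough sketch of one typical flow, assuming the image is pushed to a registry such as ECR before the manifests reference it (the account ID, region, repository name and tag below are placeholders):

docker build -t <account-id>.dkr.ecr.<region>.amazonaws.com/cognos-proxy-login:1.0 .
# log in to ECR and push the tagged image (placeholder registry URL)
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/cognos-proxy-login:1.0
# apply every manifest under the chart folder
kubectl apply -f ./helm/cognos-proxy-login-chart --recursive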
OS: Red Hat 7.9
Docker and Kubernetes (kubectl, kubelet, kubeadm) installed as per the documentation.
Kubernetes cluster initialized using
sudo kubeadm init
After all this, running 'docker ps' shows all the services up.
But all kubectl commands except 'kubectl config view' fail with the error
'Unable to connect to the server: Forbidden'
The issue was with the corporate proxy. I had to set 'no_proxy' as an environment variable and also in the Docker daemon's proxy configuration, and the issue was resolved.
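For anyone hitting the same thing, roughly what that looks like (the proxy address and master node IP are placeholders; 10.96.0.0/12 is kubeadm's default service CIDR, and you should add your pod network CIDR as well):

# shell environment for kubectl/kubeadm
export NO_PROXY=localhost,127.0.0.1,<master-node-ip>,10.96.0.0/12
export no_proxy=$NO_PROXY

# Docker daemon proxy drop-in: /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,<master-node-ip>,10.96.0.0/12"

# reload and restart Docker afterwards
sudo systemctl daemon-reload
sudo systemctl restart docker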
I installed Kubernetes in VirtualBox. Previously it was working properly, but now it is showing:
The connection to the server 192.168.42.141:6443 was refused - did you specify the right host or port?
Judging by the issue, the kube-apiserver may not be in a running state. To check the apiserver status, run the following command:
$ docker ps
# If the above is not showing the apiserver container, then it is stopped. To see stopped containers, run
$ docker ps -a
P.S.: From the comments there is also a version mismatch. To update kubectl, follow the official installation documentation.
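A quick way to spot the mismatch before upgrading (output formats vary by version):

$ kubectl version        # compare the Client Version and Server Version lines
$ kubeadm version
$ kubelet --version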
kubectl on any machine reads the current context from the kubeconfig file, located at $HOME/.kube/config.
Clusters are configured inside this file, along with the IP or domain name of each cluster. If the IP is invalid or unreachable, or the domain name cannot be resolved, or the config file is corrupted or empty, then this error occurs.
In brief, you need to check your config file. It will save you a lot of effort.
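For example, a quick sanity check of that file (the API server address below is a placeholder for whatever your config contains):

$ kubectl config current-context
$ kubectl config view --minify      # shows only the current context's cluster, user and server URL
# verify the server URL from the config actually responds; even a 401/403 reply
# means the address is reachable, whereas "connection refused" points at the cluster itself
$ curl -k https://<api-server-address>:6443/version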
I have started a new (.NET Core 3.0) project in Visual Studio, with Docker support (Windows).
I have added Docker support (right-click on the project -> Add -> Docker Support) and added Docker Compose support in the same way.
If I just click the "play" button for Docker Compose, the project starts and everything works well.
But when I run docker-compose up from the solution folder, I get:
Cannot start service testproj30: failed to create endpoint
testproj30_testproj30_1 on network nat: hnsCall failed in Win32: The
specified port already exists.
(I have closed my VS solution.) If I remove the port mapping in docker-compose.override.yaml I don't get this error message. I have tried the most common tricks, such as restarting the Docker service, the HNS service and so on. Nothing helps.
I don't want to depend on all the VS voodoo from the project file and God knows what other files are involved.
I can run docker run -p 8080:80 -p 443:443 without any port problems.
I fixed a similar problem by removing some terminated containers and then pruning the networks.
List terminated containers:
docker ps -a
Remove them (Cygwin syntax):
docker rm $(docker ps -aq)
You will get error messages for any running containers.
Clean up your networks:
docker network prune
In my case, the main cause was that the Docker kill process skipped my application's port-releasing mechanism.
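If the prune alone doesn't release the port, one extra check that sometimes narrows it down is seeing whether anything on the host is still holding the mapped host port (port 443 here is only an illustration; use whichever host port the override file maps):

netstat -ano | findstr :443
tasklist /fi "PID eq 1234"    # 1234 = the PID reported by netstat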
I'm using Minikube on a Windows-based machine. On the same machine, I also have docker-machine set up.
I've pointed the docker client towards Minikube's Docker environment. This way, I can see the Docker environment inside Kubernetes.
I can build Docker images and run Docker containers from the Minikube VM without issues. However, when I try to start any Docker container via kubectl (from PowerShell), it fails to start, apparently because kubectl can't find the Docker image, giving the following error:
Failed to pull image "image name": rpc error: code = Unknown desc =
Error response from daemon: repository "image-repo-name" not found:
does not exist or no pull access Error syncing pod
I don't know what's missing. If "docker run" can access the image, why can't "kubectl"?
Here is my Dockerfile:
FROM node:4.4
EXPOSE 9002
COPY server.js .
CMD node server.js
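For what it's worth, a minimal sketch of building that Dockerfile against Minikube's Docker daemon with an explicit tag (the image name node-server and the tag 1.0 are made up):

# docker client already pointed at Minikube's daemon, as described above
docker build -t node-server:1.0 .
docker images | findstr node-server    # confirm the tagged image exists locally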
Make sure the image path in your YAML is correct. The image should exist on your local machine, and it should be tagged with a version number, not "latest".
Have this in your deployment YAML:
image: redis:1.0.48
 docker images"">
Run "docker images" to see the list of images on your machine.
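 docker images"">
As a sketch, the matching container spec for the hypothetical node-server:1.0 image built from the Dockerfile above could look like this; a non-"latest" tag together with imagePullPolicy: IfNotPresent (or Never) keeps the kubelet from trying to pull a locally built image from a remote registry:

    containers:
    - name: node-server
      image: node-server:1.0
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 9002    # matches EXPOSE in the Dockerfile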
When I define multiple containers in a pod/pod template, such as one container running an agent and another running php-fpm, how can they access each other? I need the agent container to connect to php-fpm via a shell and execute a few steps interactively through the agent container.
Based on my understanding, we can package kubectl into the agent container and use kubectl exec -it <pod> -c <container> -- sh to connect to the container. But I don't want the agent container to have more privileges than needed to connect to the target container, which is php-fpm.
Is there a better way for the agent container to connect to php-fpm via a shell and execute commands interactively?
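Note that containers in the same Pod share a network namespace, so an agent container can usually reach php-fpm on localhost without any kubectl access at all; a minimal sketch (container names, images and the FastCGI port are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  shareProcessNamespace: true     # optional: lets the containers see each other's processes
  containers:
  - name: php-fpm
    image: php:7-fpm
    ports:
    - containerPort: 9000         # php-fpm's default FastCGI port
  - name: agent
    image: agent-image:1.0        # hypothetical agent image
    # this container can reach php-fpm at localhost:9000 because both
    # containers share the Pod's network namespace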
Also, I wasn't successful in running kubectl from a container when using Minikube, due to the following errors:
docker run -it -v ~/.kube:/root/.kube lachlanevenson/k8s-kubectl get nodes
Error in configuration:
* unable to read client-cert /Users/user/.minikube/apiserver.crt for minikube due to open /Users/user/.minikube/apiserver.crt: no such file or directory
* unable to read client-key /Users/user/.minikube/apiserver.key for minikube due to open /Users/user/.minikube/apiserver.key: no such file or directory
* unable to read certificate-authority /Users/user/.minikube/ca.crt for minikube due to open /Users/user/.minikube/ca.crt: no such file or directory
First off, every Pod within a k8s cluster has its own k8s credentials provided by /var/run/secrets/kubernetes.io/serviceaccount/token, so there is absolutely no need to attempt to volume-mount your home directory into a docker container.
The reason you are getting the error about the client-cert is that the contents of ~/.kube/config are merely strings pointing to the externally stored SSL key, SSL certificate, and SSL CA certificate -- but I won't go further into fixing that, since there is no good reason to be using that approach.
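If the agent genuinely needs kubectl exec rather than plain network access, a least-privilege sketch using the in-cluster service account might look roughly like this (all names and the default namespace are assumptions):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: agent-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: agent-pod-exec
  namespace: default
subjects:
- kind: ServiceAccount
  name: agent-sa
  namespace: default
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io

With serviceAccountName: agent-sa set on the Pod, kubectl inside the agent container picks up the token from /var/run/secrets/kubernetes.io/serviceaccount automatically, so no volume mount of ~/.kube is needed.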