I initially ran Jenkins in a Docker container from my macOS terminal successfully: running docker-compose up generated the long admin password cipher. However, after I restarted my machine, the setup vanished. Now each time I run docker-compose up, with the Jenkins UI port 8080 exposed on host port 8082 and the agent port 50000 exposed on host port 20000 (having tried other external ports previously), I keep getting the error below:
Creating jenkins ... error
ERROR: for jenkins Cannot start service jenkins: driver failed programming external connectivity on endpoint jenkins (****************************************************): Bind for 0.0.0.0:20000 failed: port is already allocated
I have stopped, killed and removed all containers, removed all images and pruned all networks, but nothing seems to work.
What's a way around this and how do I free up allocated ports?
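For reference, the port mapping described above looks roughly like this in my docker-compose.yml (a sketch with the image name assumed, not the exact file):
services:
  jenkins:
    image: jenkins/jenkins:lts   # assumed image tag
    ports:
      - "8082:8080"    # host 8082 -> Jenkins UI on container 8080
      - "20000:50000"  # host 20000 -> agent port 50000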
You can find the process that is running on port 20000 using:
lsof:
lsof -nP -iTCP -sTCP:LISTEN | grep <port-number>
or
netstat:
netstat -anv | grep <port-number>
It is probably just a stale process still holding the port. Kill that process (you can use kill -9 <pid>) and try the same operation again.
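For example, a session might look like this (the PID 1234 is hypothetical; use whatever lsof reports). If the listener turns out to be Docker itself, running docker-compose down on the old project releases the binding instead:
lsof -nP -iTCP -sTCP:LISTEN | grep 20000    # find the listener on port 20000
kill -9 1234                                # hypothetical PID taken from the lsof output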
I am trying to deploy an app in a Code Engine project. The container image is pretty standard: docker.io/library/httpd. All I did in the configuration wizard was change the port from the Code Engine default of 8080 to port 80.
Code Engine comes back with:
Revision failed to start with "exit code 1". Check your image and configuration.
In the logs I found these two lines:
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
Why?
On Linux, ports below 1024 are privileged: only root (or a process with the CAP_NET_BIND_SERVICE capability) can bind them. IBM Code Engine runs containers as a non-root user, so httpd cannot bind port 80 there, which is exactly the Permission denied in your logs. Locally, httpd typically starts as root, which is why port 80 worked on your machine; in Code Engine I had to change to 8080.
This is how I managed to get it running:
I edited the httpd.conf as this post implies:
"There is a hint on how to do this at the DockerHub page. An alternative config file must be obtained and added to the container via the Dockerfile.
First get a copy of the config file:
docker run --rm httpd:2.4 cat /usr/local/apache2/conf/httpd.conf > my-httpd.conf
Then edit the my-httpd.conf file and modify the port:
Listen 8080
Finally add to the Dockerfile the instruction to copy it:
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf "
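Putting it together, the complete Dockerfile is just two lines (assuming my-httpd.conf sits next to the Dockerfile):
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf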
I have been trying to take a backup using Cassy, but I could only get a metadata backup.
There seem to be no error logs from Cassy, and the status is "STARTED" in Cassy's BackupList.
Below are the steps I tried for deployment. Is any step missing, or should something be corrected?
First, I created Scalar DL, Cassandra, and Envoy by cloning the repository below.
git clone https://github.com/scalar-labs/scalar-samples.git
I've checked that it works and that I can execute a contract correctly.
Then I added a Cassy container as follows:
Add SSH access to the Cassandra nodes.
Change commitlog_sync from periodic to batch.
Clone the repository below:
git clone https://github.com/scalar-labs/cassy
Edit the cassy.properties file to add S3 information and other paths.
Create the container using the cloned Dockerfile (cassy/build/docker/Dockerfile), as sketched below.
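For reference, those build steps condense to something like this (a sketch; ./gradlew docker is the Gradle target mentioned later in this thread):
git clone https://github.com/scalar-labs/cassy
cd cassy
# edit cassy.properties here: S3 credentials, bucket, and local paths
./gradlew docker    # builds the Cassy image from build/docker/Dockerfile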
I'm not fully sure about the environment you are working on.
Is the Cassy master on your localhost while other components like Cassandra are in Docker?
If that is the case, then I doubt that Cassy can connect to Cassandra via JMX.
BTW, are there any logs in /var/log/scalar?
I deployed all nodes on a single EC2 instance running Docker.
Cassandra and Cassy can connect over the local Docker network.
I also checked that Cassandra is listening on the JMX port (7199).
172.21.0.2 is the Cassandra container's Docker network IP.
# netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:9042 0.0.0.0:* LISTEN
tcp 0 0 172.21.0.2:7000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:46715 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:7199 0.0.0.0:* LISTEN
If I deploy Cassy using "./gradlew installDist", I get /var/log/scalar/cassy.log.
It shows only the single line below.
2020-07-22 05:35:21.641 [pool-3-thread-1] INFO c.s.c.transferer.AwsS3FileUploader - Uploading /tmp/cassy.db.dump
But if I deploy Cassy using "./gradlew docker" instead, there is no such log file in the Cassy container.
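One generic Docker point worth checking (an assumption on my part, not Cassy-specific): containerized processes often log to stdout rather than to a file, so the container's output may hold what cassy.log would have shown:
docker ps                          # find the Cassy container ID
docker logs <cassy-container-id>   # check stdout/stderr for the missing log lines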
I recently installed Kubernetes on VMware and configured a few pods. While configuring those pods, it automatically used the IP of the VM. I was able to access the application at that time, but then I rebooted the VM and the machine that hosts it. During this, the IP of the VM apparently changed, and now I get the error below when running kubectl get pod -n <namespaceName>:
userX@ubuntu:~$ kubectl get pod -n NameSpaceX
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
userX@ubuntu:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
kubectl cluster-info and other related commands give the same output.
In the VMware Workstation settings, we are using the network adapter that shares the host's IP address. We are not sure if that has any impact.
We also tried adding the entries below to /etc/hosts, but it did not work:
127.0.0.1 localhost
192.168.214.136 localhost
127.0.1.1 ubuntu
I expect to get the pods running again so I can access the application. Instead of reinstalling all pods, which is time consuming, we are looking for a quick workaround to get the pods back to a running state.
If you use minikube, sometimes all you need is to restart minikube.
Run:
minikube start
I encountered the same issue - the problem was that the master node didn't expose port 6443 outside.
Below are the steps I took to fix it.
1 ) Check the IP of the api-server.
This can be verified via the .kube/config file (under server field) or with: kubectl describe pod/kube-apiserver-<master-node-name> -n kube-system.
2 ) Run curl https://<kube-apiserver-IP>:6443 and see if port 6443 is open.
3 ) If port 6443 is open, you should get something related to the certificate, like:
curl: (60) SSL certificate problem: unable to get local issuer certificate
4 ) If port 6443 is not open:
4.A ) SSH into master node.
4.B ) Run sudo firewall-cmd --add-port=6443/tcp --permanent (I'm assuming firewalld is installed).
4.C ) Run sudo firewall-cmd --reload.
4.D ) Run sudo firewall-cmd --list-all and you should see port 6443 is updated:
public
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcpv6-client ssh
  ports: 6443/tcp <---- Here
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
The common practice is to copy the config file to the home directory:
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also, make sure that the api-server address is valid:
server: https://<master-node-ip>:6443
If not, you can manually edit it using any text editor.
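The relevant part of ~/.kube/config has the standard kubeconfig layout (the IP shown is the one from the question):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <...>
    server: https://192.168.214.136:6443   # update this if the master node IP changed
  name: kubernetes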
You need to export the admin.conf file as the kubeconfig before running kubectl commands. You may put this in your environment variables:
export KUBECONFIG=<path>/admin.conf
After this you should be able to run kubectl commands. I am hoping that your K8s cluster setup is otherwise proper.
Last night I had the exact same error installing Kubernetes using this puppet module: https://forge.puppet.com/puppetlabs/kubernetes
Turns out that it was an incorrect iptables setting on the master that blocked all non-local requests towards the API.
The way I solved it (a brute-force solution) was by:
completely removing all installed k8s-related software (also all config files, etcd data, docker images, mounted tmpfs filesystems, ...)
wiping iptables completely: https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules
reinstalling
This is what solved the problem in my case.
There is probably a much nicer and cleaner way to do this (i.e. simply change the iptables rules to allow access).
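For reference, the flush sequence from the linked Server Fault answer looks like this (run as root; note that it opens all traffic, so only use it where that is acceptable):
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X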
If you are getting the error below, you should also check the token validity.
Unable to connect to the server: dial tcp 192.168.93.10:6443: connect: no route to host
Check your token validity using the command kubeadm token list. If your token has expired, reset the cluster using kubeadm reset and then initialize it again using kubeadm init --token-ttl 0.
Then check the token status again using kubeadm token list. Note that the TTL value will now be <forever> and the Expires value will be <never>.
Example:
[root@master1 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                 EXTRA GROUPS
nh48tb.d79ysdsaj8bchms9   <forever>   <never>   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[Screenshot: list of kubectl contexts on Ubuntu 22.04 LTS]
Select docker-desktop and run your command again, e.g. kubectl apply -f <myimage.yaml>
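From the command line, the equivalent is (standard kubectl context commands):
kubectl config get-contexts                 # list the available contexts
kubectl config use-context docker-desktop   # switch to the docker-desktop context
kubectl apply -f <myimage.yaml>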
Run the minikube start command.
The reason behind the error is that your minikube cluster with the docker driver stopped when you shut down the system.
To all those who are trying to learn and experiment with Kubernetes using Ubuntu on Oracle VM:
The IP address assigned to the guest OS/VM depends on the network adapter selection. Based on your adapter selection, you need to configure the settings in the Oracle VM network section or in your router settings.
See this link for the most common Oracle VM network adapters:
https://www.nakivo.com/blog/virtualbox-network-setting-guide/
I was using a bridged adapter, which puts the VM and the host OS on the same network. So my router was randomly assigning an IP to my VM after every restart, my cluster stopped working, and I got the exact error message posted in the question.
> k get pods -A
> Unable to connect to the server: dial tcp 192.168.214.136:6443: connect: no route to host
> systemctl status kubelet
> ........
> ........ "Error getting node" err="node \"node\" not found"
The cluster started working again after reserving a static IP address for my VM in the router settings. (If you are using a NAT adapter, you should configure this in the VM network settings.)
When you reserve an IP address for your VM, make sure to assign the same old IP address that was used when configuring the kubelet.
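Alternatively, the address can be pinned from inside the guest. Here is a minimal netplan sketch for Ubuntu, assuming the interface name enp0s3 and the subnet from the question (adjust both to your network, then run sudo netplan apply):
# /etc/netplan/01-static.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: false
      addresses: [192.168.214.136/24]
      routes:
        - to: default
          via: 192.168.214.1   # assumed gateway
      nameservers:
        addresses: [192.168.214.1]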
I got this error when starting minikube. Can anybody help me? Thanks in advance.
I1105 12:57:36.987582 15567 cluster.go:77] Machine state: Running
Waiting for SSH to be available...
Getting to WaitForSSH function...
Using SSH client type: native
&{{{ 0 [] [] []} docker [0x83b300] 0x83b2b0 [] 0s} 127.0.0.1 22 }
About to run SSH command:
exit 0
Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
To get a local, cluster-like Kubernetes installation, there is the minikube utility.
This tool does all the magic of the installation process automatically.
In short, deployment consists of downloading runtime images, spinning up containers, and configuring the necessary elements of Kubernetes without any user interaction required. It works like a charm.
You may consider just dropping the current installation and starting over.
Looks like
minikube delete && minikube start
may help.
Removing the ~/.minikube directory and starting minikube again helps:
rm -rf ~/.minikube && minikube start