kubeadm init kubelet complains default bind address already in use - kubernetes

kubeadm version 1.12.2
$ sudo kubeadm init --config kubeadm_new.config --ignore-preflight-errors=all
/var/log/syslog shows:
Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438374 5101 server.go:1013] Started kubelet
Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438406 5101 server.go:133] Starting to listen on 0.0.0.0:10250
Nov 15 08:44:13 khteh-T580 kubelet[5101]: E1115 08:44:13.438446 5101 kubelet.go:1287] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Nov 15 08:44:13 khteh-T580 kubelet[5101]: E1115 08:44:13.438492 5101 server.go:753] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use
Nov 15 08:44:13 khteh-T580 kubelet[5101]: I1115 08:44:13.438968 5101 server.go:318] Adding debug handlers to kubelet server.
Nov 15 08:44:13 khteh-T580 kubelet[5101]: F1115 08:44:13.439455 5101 server.go:145] listen tcp 0.0.0.0:10250: bind: address already in use
I have tried sudo systemctl stop kubelet and manually killing the kubelet process, but to no avail. Any advice and insights are appreciated.

Here is what you can do:
Try the following command to find out which process is holding port 10250:
[root@master admin]# ss -lntp | grep 10250
LISTEN 0 128 :::10250 :::* users:(("kubelet",pid=23373,fd=20))
It will give you the PID and name of that process. If an unwanted process is holding the port, you can always kill it so the port becomes available for the kubelet.
After killing the process, run the above command again; it should return no output.
Just to be on the safe side, run kubeadm reset and then kubeadm init again, and it should go through.
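A minimal sketch of that sequence, assuming the PID reported by ss above and the same config file used in the question:
# Stop the unit first so systemd does not immediately respawn the kubelet
sudo systemctl stop kubelet
# Kill whatever is still holding 10250 (PID taken from the ss output above)
sudo kill -9 23373
# Confirm the port is free; this should print nothing
sudo ss -lntp | grep 10250
# Wipe the half-finished init and start over
sudo kubeadm reset -f
sudo kubeadm init --config kubeadm_new.config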

Have you tried using netstat to see what other process is running that has already bound to that port?
sudo netstat -tulpn | grep 10250

For me, I later discovered that there were 2 extra core-dns-xxxxx pods stuck in "Terminating" in my cluster.
Force-deleting them solved the problem for me:
kubectl delete pod core-dns-xxxx -n kube-system --force --grace-period=0
Thanks.
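A hedged sketch for spotting such stuck pods before force-deleting them; the pod name and namespace below are placeholders to be taken from the listing:
# List pods stuck in Terminating across all namespaces
kubectl get pods --all-namespaces | grep Terminating
# Force-delete each one, using the namespace and name from that listing
kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0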

I ditched kubeadm and switched to microk8s.

Related

kubelet won't start after kubernetes/manifest update

This is sort of strange behavior in our K8s cluster.
When we try to deploy a new version of our applications, we get:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "<container-id>" network for pod "application-6647b7cbdb-4tp2v": networkPlugin cni failed to set up pod "application-6647b7cbdb-4tp2v_default" network: Get "https://[10.233.0.1]:443/api/v1/namespaces/default": dial tcp 10.233.0.1:443: connect: connection refused
I used kubectl get cs and found the controller-manager and scheduler in an Unhealthy state.
As described here, I updated /etc/kubernetes/manifests/kube-scheduler.yaml and
/etc/kubernetes/manifests/kube-controller-manager.yaml by commenting out --port=0, roughly as sketched below.
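For reference, a hedged sketch of that edit; the sed pattern assumes the flag appears as a plain list item under command: in the manifests, so back them up and double-check the result:
# Back up, then comment out the --port=0 line in both static pod manifests
sudo cp /etc/kubernetes/manifests/kube-scheduler.yaml /root/kube-scheduler.yaml.bak
sudo cp /etc/kubernetes/manifests/kube-controller-manager.yaml /root/kube-controller-manager.yaml.bak
sudo sed -i 's|^\(\s*\)- --port=0|\1# - --port=0|' /etc/kubernetes/manifests/kube-scheduler.yaml
sudo sed -i 's|^\(\s*\)- --port=0|\1# - --port=0|' /etc/kubernetes/manifests/kube-controller-manager.yaml
# The kubelet watches this directory and restarts the static pods on its own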
When I checked systemctl status kubelet, it was working:
Active: active (running) since Mon 2020-10-26 13:18:46 +0530; 1 years 0 months ago
I restarted the kubelet service, and the controller-manager and scheduler were shown as healthy.
But systemctl status kubelet now shows (soon after restarting the kubelet it briefly showed a running state):
Active: activating (auto-restart) (Result: exit-code) since Thu 2021-11-11 10:50:49 +0530; 3s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 21234 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET
I tried adding Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false" to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as described here, but it is still not working properly.
I also reverted the --port=0 change in the above-mentioned manifests and tried restarting; still the same result.
Edit: This issue was due to the kubelet certificate having expired and was fixed by following these steps. If someone faces this issue, make sure the /var/lib/kubelet/pki/kubelet-client-current.pem certificate and key values are base64 encoded when placing them in /etc/kubernetes/kubelet.conf.
Many others suggested running kubeadm init again, but this cluster was created using Kubespray, with no manually added nodes.
We have bare-metal K8s running on Ubuntu 18.04.
K8s: v1.18.8
We would like any debugging and fixing suggestions.
PS:
When we telnet 10.233.0.1 443 from any node, the first attempt fails and the second succeeds.
Edit: Found this in the kubelet service logs:
Nov 10 17:35:05 node1 kubelet[1951]: W1110 17:35:05.380982 1951 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "app-7b54557dd4-bzjd9_default": unexpected command output nsenter: cannot open /proc/12311/ns/net: No such file or directory
Posting the comment as a community wiki answer for better visibility:
This issue was due to the kubelet certificate having expired and was fixed by following these steps. If someone faces this issue, make sure the /var/lib/kubelet/pki/kubelet-client-current.pem certificate and key values are base64 encoded when placing them in /etc/kubernetes/kubelet.conf.
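A hedged sketch of how to check that expiry and produce the base64 values; kubelet-client-current.pem normally holds both the certificate and the key, and the kubeconfig fields they go into are client-certificate-data and client-key-data:
# Check when the current kubelet client certificate expires
sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate
# Base64-encode just the certificate block (single line) for client-certificate-data
sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem | base64 -w0; echo
# Base64-encode just the private key block (single line) for client-key-data
sudo openssl pkey -in /var/lib/kubelet/pki/kubelet-client-current.pem | base64 -w0; echo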

The connection to the server x.x.x.:6443 was refused - did you specify the right host or port? Kubernetes

I've installed Docker, kubectl, and kubeadm.
I want to create my device model and device CRDs (I'm following this guide).
So, when I run the command:
kubectl create -f devices_v1alpha1_devicemodel.yaml
as a user, I get the following output:
The connection to the server 10.0.0.68:6443 was refused - did you specify the right host or port?
(I have added permission for the user to access the .kube folder.)
With netstat, I get:
ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ sudo netstat -atunp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address            Foreign Address          State        PID/Program name
tcp        0      0 0.0.0.0:22               0.0.0.0:*                LISTEN       1298/sshd
tcp        0    224 10.0.0.68:22             160.98.31.160:52503      ESTABLISHED  2061/sshd: ubuntu [
tcp6       0      0 :::22                    :::*                     LISTEN       1298/sshd
udp        0      0 0.0.0.0:68               0.0.0.0:*                             910/dhclient
udp        0      0 10.0.0.68:123            0.0.0.0:*                             1241/ntpd
udp        0      0 127.0.0.1:123            0.0.0.0:*                             1241/ntpd
udp        0      0 0.0.0.0:123              0.0.0.0:*                             1241/ntpd
udp6       0      0 fe80::f816:3eff:fe0:123  :::*                                  1241/ntpd
udp6       0      0 2001:620:5ca1:2f0:f:123  :::*                                  1241/ntpd
udp6       0      0 ::1:123                  :::*                                  1241/ntpd
udp6       0      0 :::123                   :::*                                  1241/ntpd
With lsof -i:
ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ sudo lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dhclient 910 root 6u IPv4 12765 0t0 UDP *:bootpc
ntpd 1241 ntp 16u IPv6 15340 0t0 UDP *:ntp
ntpd 1241 ntp 17u IPv4 15343 0t0 UDP *:ntp
ntpd 1241 ntp 18u IPv4 15347 0t0 UDP localhost:ntp
ntpd 1241 ntp 19u IPv4 15349 0t0 UDP 10.0.0.68:ntp
ntpd 1241 ntp 20u IPv6 15351 0t0 UDP ip6-localhost:ntp
ntpd 1241 ntp 21u IPv6 15353 0t0 UDP [2001:620:5ca1:2f0:f816:3eff:fe0a:874a]:ntp
ntpd 1241 ntp 22u IPv6 15355 0t0 UDP [fe80::f816:3eff:fe0a:874a]:ntp
sshd 1298 root 3u IPv4 18821 0t0 TCP *:ssh (LISTEN)
sshd 1298 root 4u IPv6 18830 0t0 TCP *:ssh (LISTEN)
sshd 2061 root 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh->160.98.31.160:52503 (ESTABLISHED)
sshd 2124 ubuntu 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh->160.98.31.160:52503 (ESTABLISHED)
I've already tried this,
and: sudo swapoff -a
Please perform the below steps on the master node; it works like a charm. (A sketch for making the swap change persistent follows the list.)
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
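swapoff -a only disables swap until the next reboot. A hedged sketch for making it persistent, assuming swap is configured in /etc/fstab; the sed pattern is an assumption about how the entry is written, so double-check the file afterwards:
# Disable swap now
sudo swapoff -a
# Comment out any swap entries so they do not come back after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# Verify: the Swap line should show 0B used / 0B total
free -h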
I was facing a similar problem, with the following error while deploying the pod network into the cluster using Flannel:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server 192.168.1.101:6443 was refused - did you specify the right host or port?
I performed the below steps to solve the issue:
$ sudo systemctl stop kubelet
$ sudo systemctl start kubelet
$ strace -eopenat kubectl version
then apply the yml file
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
The kubelet must be down. You need to check the kubelet logs on the master and ensure the API server is running and online. Only then should you be able to deploy.
I'll add another reason for this error that was the issue in my case.
I had exported the wrong kubeconfig file to my shell, and the error message was actually very accurate in that case: the endpoint for the API server was wrong (and of course other fields like the cluster name and the certificates as well, but the server endpoint is the first step in the chain).
I've encountered this problem too, and swapoff -a worked for me:
sudo -i
swapoff -a
exit
strace -eopenat kubectl version
This is because docker is down. Start docker on your machine.
I have tried many ways but couldn't get it to work, then accidentally found the solution to my own situation:
In ~/.kube/ I have:
drwxr-x--- 4 staff 128 25 Jul 22:31 cache
-rw------- 1 staff 8781 25 Jul 22:46 config
drwxr-xr-x 8 staff 256 25 Jul 22:46 configs
-rw-r--r-- 1 staff 14 25 Jul 22:31 kubectx
drwxr-xr-x 4 staff 128 29 Jun 16:59 kubens
My assumption was that something was messed up in the .kube configuration, but I couldn't figure out which files, so I removed most of the directories/files, including cache and config. (If you don't want to keep all the configs, maybe you should remove all of them.)
Then, from the Docker dashboard, re-enable Kubernetes to get all the files installed back.
Re-configure docker-desktop with kubectl config use-context docker-desktop.
Finally, my 6443 responded.
Another suggestion is to restart your container runtime: sudo systemctl restart containerd. In my situation I'm working with containerd and not a typical Docker setup.
After doing this I think it should fix the issue for you, but if it doesn't work then try sudo swapoff -a.
Assuming that's all the output from your netstat command and that you ran it on the master node (the one you installed Kubernetes via kubeadm on), it looks to me like the installation did not complete correctly, as none of the usual ports you would expect to see on a Kubernetes master node are present.
Usually on a Kubernetes master node you'd expect to see kube-apiserver, kube-scheduler, kube-controller-manager, the kubelet, and possibly etcd all listening on the network.
What was the output of your kubeadm init command?
I ran into this issue as well. I tried the solutions noted above but they did not work for me. Here is what worked for me:
FIX:
kubeadm init --apiserver-advertise-address=10.139.0.42 --ignore-preflight-errors all --pod-network-cidr=172.17.0.1/16 --token-ttl 0
source:
https://www.c-sharpcorner.com/article/kubernetes-installation-in-redhat-and-centos/
I faced this issue recently due to expired certificates for my K8s cluster.
I followed this blog link to renew the certificates and also replaced the kubeconfig file that I was using.
Note: it is important to replace the kubeconfig after renewing the certificates, or else you will end up getting the following error message from kubectl:
error: You must be logged in to the server (Unauthorized)
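The blog link above is not reproduced here, but for clusters built with kubeadm the renewal usually boils down to something like the following hedged sketch (assuming kubeadm 1.20 or newer, where the certs subcommand is no longer under alpha):
# See which control-plane certificates have expired
sudo kubeadm certs check-expiration
# Renew all of them
sudo kubeadm certs renew all
# The control-plane static pods must be restarted to pick up the new certs;
# moving the manifests out of /etc/kubernetes/manifests and back is one way.
# Then replace the kubeconfig you use with the freshly regenerated admin one:
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config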
I faced the same issue with the same error. You need to check that your container runtime (Docker/containerd) is active and running:
systemctl status containerd
systemctl restart containerd
systemctl restart kubelet
Then, if you check its status, it is supposed to be up and running, and you should be able to create K8s objects again.
In my case KUBECONFIG was the problem causing the same error.
This solved it:
export KUBECONFIG=/home/$(whoami)/.kube/config
I solved this exact problem by making sure that in the /etc/hosts file the IP address and host name were set correctly:
192.168.10.11 kube-01.testing kube-01
instead of:
127.0.1.1 kube-01.testing kube-01
(or 127.0.0.1).
This happens on Ubuntu and Debian as far as I know.
If you did all the above steps (sudo swapoff -a, kubeconfig file permissions, kubelet and containerd status, etc.) and nothing works for you, it is a good idea to take a look at the kubelet logs:
journalctl -xeu kubelet
In my case, I realized that the kubelet was trying to download images but failing.
So I turned on the VPN on the node, and after a couple of seconds, everything worked!
I have faced the same issue: "The connection to the server {IP}:6443 was refused - did you specify the right host or port?"
The reason was that since Kubernetes 1.24+, dockershim has been removed.
So, when installing a Kubernetes cluster with kubeadm and using Docker as the container runtime, cri-dockerd must be installed as well (otherwise I got the above error).
As mentioned in the Kubernetes documentation:
On each of your nodes, install Docker for your Linux distribution as per Install Docker Engine.
Install cri-dockerd, following the instructions in that source code repository.
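Once cri-dockerd is installed, kubeadm also has to be pointed at its socket, since with both Docker and containerd present there are two candidate CRI endpoints. A hedged sketch, where the socket path is cri-dockerd's default:
# Make sure the cri-dockerd socket is enabled and running
sudo systemctl enable --now cri-docker.socket
# Tell kubeadm which CRI endpoint to use
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock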
As per the error message, it clearly says that the connection to port 6443 is refused.
This means either:
the port is blocked by a firewall, or
if there is no firewall, nothing is listening on port 6443 on the specified host. You can
cross-verify using the below command:
netstat -tulpn | grep -i 6443
Solution:
6443 is the kube-apiserver port in K8s. If nothing is listening there, make sure kube-apiserver is running properly. I faced the same problem and fixed it by setting the correct arguments for kube-apiserver, which serves on that port:
/usr/local/bin/kube-apiserver \
--advertise-address=${INTERNAL_IP} \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.crt \
--enable-admission-plugins=NodeRestriction,ServiceAccount \
--enable-swagger-ui=true \
--enable-bootstrap-token-auth=true \
--etcd-cafile=/var/lib/kubernetes/ca.crt \
--etcd-certfile=/var/lib/kubernetes/etcd-server.crt \
--etcd-keyfile=/var/lib/kubernetes/etcd-server.key \
--etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \
--event-ttl=1h \
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file=/var/lib/kubernetes/service-account.crt \
--service-cluster-ip-range=10.96.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
--v=2

kubelet saying node "master01" not found

I am trying to stand up my kubeadm cluster with three masters. I receive this problem from my init command:
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
But I do not use cgroupfs; I use systemd.
And my kubelet complains about not knowing its node name:
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.251885 5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.352932 5620 kubelet.go:2266] node "master01" not found
Jan 23 14:54:12 master01 kubelet[5620]: E0123 14:54:12.453895 5620 kubelet.go:2266] node "master01" not found
Please let me know where the issue is.
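Since the question contrasts cgroupfs with systemd, a hedged first check is whether the container runtime and the kubelet actually agree on the cgroup driver; the paths below assume a kubeadm install with Docker as the runtime, and the config file location can differ:
# Cgroup driver the runtime is using
docker info 2>/dev/null | grep -i "cgroup driver"
# Cgroup driver the kubelet was configured with by kubeadm
sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
# Recent kubelet errors often name the mismatch explicitly
sudo journalctl -u kubelet --no-pager -n 50 | grep -iE "cgroup|not found"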
The issue can be because of the Docker version, as only Docker versions up to 18.06 are supported by the latest Kubernetes version, i.e. v1.13.x.
I also got the same issue, and it was resolved after downgrading Docker from 18.09 to 18.06.
If the problem is not related to Docker, it might be because the kubelet service failed to establish a connection to the API server.
I would first of all check the status of the kubelet with systemctl status kubelet and consider restarting it with systemctl restart kubelet.
If this doesn't help, try re-installing kubeadm or running kubeadm init with another version (use the --kubernetes-version=X.Y.Z flag).
In my case, my K8s version was 1.21.1 and my Docker version was 19.03. I solved this bug by upgrading Docker to version 20.7.

could not bind IPv4 socket: Permission denied

I am trying to set up a new instance of PostgreSQL 9.6 on a machine. I have tested the process on another machine and it's working fine there. But the same process is not working on the new machine. Below are the steps I am using.
Created a new data directory with the below command:
/opt/rh/rh-postgresql96/root/bin/initdb -D /var/lib/pgsql/9.6/data/
Created a service file /etc/systemd/system/rh-postgresql96-inst2.service with the below content:
.include /lib/systemd/system/rh-postgresql96-postgresql.service
[Service]
Environment=PGDATA=/var/lib/pgsql/9.6/data/
Environment=PGPORT=5433
User=postgres
Group=root
Registered the service using the command systemctl enable rh-postgresql96-inst2.
Now using the command systemctl start rh-postgresql96-inst2 to start the service.
All these steps work fine on one machine but not on the 2nd one.
I am getting the below error while starting the service on the 2nd machine:
rh-postgresql96-inst2.service - PostgreSQL database server
Loaded: loaded (/etc/systemd/system/rh-postgresql96-inst2.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2018-06-18 09:59:01 UTC; 10s ago
Process: 7552 ExecStart=/opt/rh/rh-postgresql96/root/usr/libexec/postgresql-ctl start -D ${PGDATA} -s -w -t ${PGSTARTTIMEOUT} (code=exited, status=1/FAILURE)
Process: 7550 ExecStartPre=/opt/rh/rh-postgresql96/root/usr/libexec/postgresql-check-db-dir %N (code=exited, status=0/SUCCESS)
HINT: Is another postmaster already running on port 5433? If not, wait a few seconds and retry.
LOG: could not bind IPv4 socket: Permission denied
HINT: Is another postmaster already running on port 5433? If not, wait a few seconds and retry.
WARNING: could not create listen socket for "localhost"
FATAL: could not create any TCP/IP sockets
LOG: database system is shut down
systemd[1]: rh-postgresql96-inst2.service: control process exited, code=exited status=1
systemd[1]: Failed to start PostgreSQL database server.
systemd[1]: Unit rh-postgresql96-inst2.service entered failed state.
systemd[1]: rh-postgresql96-inst2.service failed.
However, I am able to start the service using pg_ctl.
Also, I have checked with netstat and lsof whether any other PostgreSQL instance is running on port 5433, but that's not the case.
In fact, I tried ports 5431 and 5434 as well, but the server does not start.
Instead of turning off SELinux, you should allow postgres to bind to port 5433 in SELinux.
There is a port type, postgresql_port_t, which by default contains ports 5432 and 9898.
semanage port -l | grep post
postgresql_port_t tcp 5432, 9898
What you can do is simply add port 5433 to this list:
semanage port -a -t postgresql_port_t 5433 -p tcp
semanage port -l | grep post
postgresql_port_t tcp 5433, 5432, 9898
After that you can start your postgres server listening on port 5433
systemctl enable rh-postgresql96-postgresql
systemctl start rh-postgresql96-postgresql
netstat -tulpn
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 2847/postgres
tcp 0 0 127.0.0.1:5433 0.0.0.0:* LISTEN 2775/postgres
There is also a handy tool called audit2allow to help debug SELinux problems:
audit2allow -m whatiswrong < /var/log/audit/audit.log > /root/showme.te
The file showme.te shows you why SELinux is not allowing the service to do what you need.
You should not turn off SELinux just because it's hard to understand or because you don't know how it works. Instead you should study it :)
I recommend this lecture from the Red Hat Summit: https://www.redhat.com/en/about/videos/summit-2018-security-enhanced-linux-mere-mortals
This issue was related to SELinux.
When I ran the command sestatus on both machines, the output was a little bit different.
One server had Current mode: permissive and the 2nd one had Current mode: enforcing.
So I changed the current mode to permissive on the 2nd machine using setenforce 0,
and it resolved the permission-related issue. Now I am able to start the 2nd instance.
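Note that setenforce 0 only lasts until the next reboot; the SELinux port rule above is the cleaner fix. If you do want permissive mode to persist, a hedged sketch assuming the stock /etc/selinux/config layout:
# Switch to permissive now
sudo setenforce 0
# Make it survive reboots by editing the persistent config
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Confirm
sestatus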

Failed to start puppetserver Service

While trying to run a Puppet update from a node:
sudo /opt/puppetlabs/bin/puppet agent -t
I get an error:
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Connection refused - connect(2) for "puppet" port 8140
Posts elsewhere indicate this is likely a problem with the puppetserver service and suggest rebooting the server. Restarting didn't help, and when I try to restart the service I get a failure:
~$ sudo service puppetserver restart
Job for puppetserver.service failed because the control process exited with error code. See "systemctl status puppetserver.service" and "journalctl -xe" for details.
I've looked at these logs, and as a puppet/linux noob, I'm not sure what to do next.
systemctl status puppetserver.service
● puppetserver.service - puppetserver Service
Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; vendor preset: enabled)
Active: activating (start-post) since Fri 2016-09-02 15:54:26 PDT; 2s ago
Process: 22301 ExecStartPre=/usr/bin/install --directory --owner=puppet --group=puppet --mode=775 /var/run/puppetlabs/puppetserver (code=exited
Main PID: 22306 (java); : 22307 (bash)
Tasks: 17
Memory: 335.7M
CPU: 5.535s
CGroup: /system.slice/puppetserver.service
├─22306 /usr/bin/java -Xms6g -Xmx6g -XX:MaxPermSize=256m -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -cp /opt/p
└─control
├─22307 /bin/bash /opt/puppetlabs/server/apps/puppetserver/ezbake-functions.sh wait_for_app
└─22331 sleep 1
Sep 02 15:54:26 puppet systemd[1]: Starting puppetserver Service...
Sep 02 15:54:26 puppet java[22306]: OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
puppet version 4.6.1
The puppet master communicates with the other nodes on port 8140.
I don't think a restart will help, since this looks like a connection issue between the server and the node.
Please try the following:
First, make sure that the puppet master is actually listening on port 8140. Run the following command on the puppet master:
netstat -ntlp | grep 8140
This command should return something like this:
tcp 0 0 0.0.0.0:8140 0.0.0.0:* LISTEN 1783/puppetmaster
If you don't get the same output, your puppet master is not listening and therefore cannot compile catalogs for the node.
Try checking the puppet master log at /var/log/puppetmaster.log.
Check that the node can communicate with the puppet master on the relevant port. You can check this quickly with the telnet command. Run this on your node:
telnet <puppetmaster IP address / DNS name> 8140
You should get something like:
Connected to <puppet-master-IP/DNS-name>
Escape character is '^]'.
If you don't get this output, something is blocking you from accessing the puppet master. Try opening the port in your firewall, as sketched below.
If you're still stuck, try using the --debug flag for verbose output and edit your question.
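A hedged sketch of opening that port, assuming a firewalld-based distro on the puppet master (on ufw-based systems, sudo ufw allow 8140/tcp is the equivalent):
# Open the Puppet Server port and make it permanent
sudo firewall-cmd --permanent --add-port=8140/tcp
sudo firewall-cmd --reload
# Confirm the rule is in place
sudo firewall-cmd --list-ports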
It could be 2 things: (1) in puppet.conf you have configured more memory than you have on your machine, or (2) you installed both apt-get install puppetserver and apt-get install puppet.
If you get a "Failed to start puppet.service: Unit not found." error on the slave machine while connecting to Puppet,
close the PuTTY session and then open and connect it again. The issue won't appear when starting PuTTY on the slave again.
The error occurs because there is not enough RAM. To fix it, open the Puppet Server configuration file:
sudo nano /etc/sysconfig/puppetserver
And reduce the amount of RAM allocated to the Puppet server (for example, I specified 512m instead of 2g):
JAVA_ARGS="-Xms512m -Xmx512m"
Now let's start the Puppet server:
sudo systemctl start puppetserver
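A quick hedged check that the smaller heap took effect and that the service came up listening on 8140:
# Service state and the JAVA_ARGS actually in use
systemctl status puppetserver --no-pager | grep -E "Active|Xmx"
# Confirm something is listening on the Puppet port
sudo ss -ntlp | grep 8140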