Can't restart Puppet Server

I am trying to restart Puppet Server after using it for some time, and below is the error.
Any idea how to overcome it?
[root@chakriin56 user]# free -m
              total        used        free      shared  buff/cache   available
Mem:           7820        2889        4080          86         851        4579
Swap:          7812           0        7812
[root@chakriin56 user]# puppet status
{
  "is_alive": true,
  "version": "4.5.3"
}
[root@chakriin56 user]# systemctl status puppetserver.service
Unit puppetserver.service could not be found.
[root@chakriin56 user]# service puppetserver status
Redirecting to /bin/systemctl status puppetserver.service
Unit puppetserver.service could not be found.
[root@chakriin56 user]# systemctl restart puppetserver
Failed to restart puppetserver.service: Unit not found.
[root@chakriin56 user]#

You can enumerate the available services with the command puppet resource service. In that list, find the correct name of the Puppet Server service as it is installed on your machine, and use that name to restart it.
If no such service is listed, your Puppet Server installation is damaged or non-standard. Assuming the former, more likely case, you could try reinstalling. You may or may not be able to do that without first killing the running Puppet Server instance. Be sure to back up your configuration, data, and manifests before attempting a reinstallation.
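A minimal sketch of that approach (the grep filter is only a convenience for narrowing the output; use whichever service name actually appears in the listing):
puppet resource service | grep -i puppet
systemctl restart puppetserver    # substitute the exact name found above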

Related

Unable to initiate kubeadm on the controller node; it says a port is in use

sudo kubeadm init
I0609 02:20:26.963781 3600 version.go:252] remote version is much newer: v1.21.1; falling back to: stable-1.18
W0609 02:20:27.069495    3600 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.19
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-10259]: Port 10259 is in use
    [ERROR Port-10257]: Port 10257 is in use
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR Port-2379]: Port 2379 is in use
    [ERROR Port-2380]: Port 2380 is in use
    [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Hi and welcome to Stack Overflow.
"Port in use" means that there's a process running that uses that port. So you need to stop that process. Since you already ran kubeadm init once, it must have already changed a number of things.
First run kubeadm reset to undo all of the changes from the first time you ran it.
Then run systemctl restart kubelet.
Finally, when you run kubeadm init you should no longer get the error.
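Putting those steps together, the recovery sequence looks like this (run as root or with sudo):
sudo kubeadm reset
sudo systemctl restart kubelet
sudo kubeadm init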
If, even after following the above steps, you still get this error:
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Then, remove the etcd folder (/var/lib/etcd) before you run kubeadm init.
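Concretely (note that this deletes any etcd data left over from the earlier attempt, so only do it on a node you intend to re-initialise):
sudo rm -rf /var/lib/etcd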
Note:
This solution worked for other users.
The warning itself is not an issue; it just says that kubeadm no longer validates the KubeletConfiguration and KubeProxyConfiguration that it feeds to the kubelet and kube-proxy components.
I also got this issue; additionally, I had to kill the kubelet manually, using
$ pkill kubelet
kubeadm init worked without issues after this.

Proxmox LXC: add linux.kernel_modules

I am trying to set up an LXC container (Debian) as a Kubernetes node.
I have gotten far enough that the only thing in the way is the kubeadm init script...
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.4.44-2-pve/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/5.4.44-2-pve\n", err: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
After some research I figured out that I probably need to add the following: linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
But adding this to /etc/pve/lxc/107.conf doesn't do anything.
Does anybody have a clue how to add the Linux kernel modules?
To allow modprobe to load arbitrary modules inside a privileged Proxmox LXC container, you need to add these options to the container config:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw
lxc.mount.entry: /lib/modules lib/modules none bind 0 0
Before that, you must first create the /lib/modules directory inside the container.
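For example, from a shell inside the container:
mkdir -p /lib/modules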
I'm not sure what guide you are following, but assuming that you have the required kernel modules on the host, this would do it:
lxc config set my-container linux.kernel_modules overlay
You can follow this guide from K3s too. Basically:
lxc config edit k3s-lxc
and
config:
linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
raw.lxc: lxc.mount.auto=proc:rw sys:rw
security.privileged: "true"
security.nesting: "true"
✌️
To fix ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file, run from the host:
pct set $VMID --mp0 /usr/lib/modules/$(uname -r),mp=/lib/modules/$(uname -r),ro=1,backup=0
To fix [ERROR SystemVerification]: failed to parse kernel config, run from the host:
pct push $VMID /boot/config-$(uname -r) /boot/config-$(uname -r)
Where $VMID is your container id.

Failed to infer CIDR network for mon ip

I followed the instructions to bootstrap a new Ceph cluster (I'm new to Ceph).
I got the following error:
sudo cephadm bootstrap --mon-ip <mon-ip>
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/podman) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: e08484be-72c1-11ea-a13e-0050563f093a
INFO:cephadm:Verifying IP *<mon-ip>* port 3300 ...
INFO:cephadm:Verifying IP *<mon-ip>* port 6789 ...
ERROR: Failed to infer CIDR network for mon ip *<mon-ip>*; pass --skip-mon-network to configure it later
What does it mean? How do I fix it?
cephadm is still fairly new. I tracked this issue a few days ago in:
https://tracker.ceph.com/issues/44828
Please run
ceph config set mon public_network <mon_network>
after the bootstrap has finished.
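For example, assuming the monitor IP is 192.168.1.10 on a /24 network (both values are illustrative), the bootstrap and follow-up would look like:
sudo cephadm bootstrap --mon-ip 192.168.1.10 --skip-mon-network
ceph config set mon public_network 192.168.1.0/24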
Is this the exact command you ran?
sudo cephadm bootstrap --mon-ip *<mon-ip>*
If so, you actually need to replace *<mon-ip>* with the actual IP address that you want the monitor daemon to listen on.
For future reference, on that page, any command you see that has a variable surrounded by asterisks is something you would need to replace with an address/host/hostname etc. that applies to your environment.
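For example, with 10.0.0.5 as an illustrative monitor address:
sudo cephadm bootstrap --mon-ip 10.0.0.5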

Context broker is not started: "su: user orion does not exist"

I'm trying to deploy contextBroker using the command /etc/init.d/contextBroker and I get the following error:
Starting...
contextBroker is stopped
Starting contextBroker... su: user orion does not exist
cat: /var/log/contextBroker/contextBroker.pid: No such file or directory
pidfile not found [FAILED]
Using the following command I can start contextBroker:
/usr/bin/contextBroker -port 10026 -logDir /var/log/contextBroker
-pidpath /var/log/contextBroker/contextBroker.pid -dbhost localhost -db orion
What could be the cause of the problem?
There was a bug in the Orion RPM, fixed in 0.16.0, that caused the removal of the "orion" user when updating the RPM package. The "orion" user is the one used by default by the /etc/init.d/contextBroker script, hence the error message su: user orion does not exist.
Note that although the bug has been fixed in 0.16.0, updating from 0.15.0 (for instance) to 0.16.0 will still be problematic, as the version being updated (0.15.0) is itself "buggy". Updating from 0.16.0 to any newer version (e.g. the upcoming 0.17.0) should work without problems.
Fortunately, the problem has an easy solution: instead of updating the package, remove it and install it again, typically with:
yum remove contextBroker
yum install contextBroker
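A quick sanity check afterwards (assuming the fresh install recreates the service account, which is what this fix relies on):
id orion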

Issues with running two instances of searchd

I have just updated our Sphinx server from 1.10-beta to 2.0.6-release, and now I have run into some issues with searchd. Previously we were able to run two instances of searchd next to each other by specifying two different config files, i.e.:
searchd --config /etc/sphinx/sphinx.conf
searchd --config /etc/sphinx/sphinx.staging.conf
sphinx.conf listens on 9306:mysql41 and 9312, while sphinx.staging.conf listens on 9307:mysql41 and 9313.
After we updated to 2.0.6, however, the second instance never starts. Or rather, the output makes it seem like it starts, a pid-file is created, etc., but for some reason only the first searchd instance keeps running, and the second seems to shut down right away. So running searchd --config /etc/sphinx/sphinx.conf a second time (if that was the first one started) complains that the pid-file is in use, while running searchd --config /etc/sphinx/sphinx.staging.conf (if that was the second instance started) "starts" the daemon again and again, only no new process is ever created.
Note that if I switch these commands around when first creating the processes, then sphinx.conf is the instance that is not really started.
I have checked, and rechecked, that these ports are only used by searchd.
Does anyone have any idea what I can do or try next? I've installed it from source on Ubuntu 10.04 LTS with:
./configure --prefix /etc/sphinx --with-mysql --enable-id64 --with-libstemmer
make -j4 install
Note to self: Check the logs!
RT indices use binary logs to enable crash recovery. Since my old config files did not specify a path where these should be stored, both instances of searchd tried to write to the same binary logs. The instance started last was of course not permitted to manipulate these files, and thus exited with a fatal error:
[Fri Nov 2 17:13:32.262 2012] [ 5346] FATAL: failed to lock
'/etc/sphinx/var/data/binlog.lock': 11 'Resource temporarily unavailable'
[Fri Nov 2 17:13:32.264 2012] [ 5345] Child process 5346 has been finished,
exit code 1. Watchdog finishes also. Good bye!
The solution was simple: make sure to specify a binlog_path inside the searchd configuration section of each configuration file:
searchd
{
    [...]
    binlog_path = /path/to/writable/directory
    [...]
}
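For instance, the two config files from above would each point at their own directory (paths are illustrative):
searchd
{
    binlog_path = /var/data/sphinx/production/binlog
}
in sphinx.conf, and
searchd
{
    binlog_path = /var/data/sphinx/staging/binlog
}
in sphinx.staging.conf.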