k3s multi-master with embedded etcd is failing to form a cluster - kubernetes

I have two fresh Ubuntu VMs:
VM-1 (65.0.54.158)
VM-2 (65.2.136.2)
I am trying to set up an HA k3s cluster with embedded etcd, following the official documentation.
Here is what I executed on VM-1:
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --cluster-init
Here is the response from VM-1:
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Additionally, I checked:
sudo kubectl get nodes
and this worked perfectly:
NAME              STATUS   ROLES                       AGE   VERSION
ip-172-31-41-34   Ready    control-plane,etcd,master   18m   v1.24.4+k3s1
Now I SSH into VM-2 and try to make it join the server running on VM-1:
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --server https://65.0.54.158:6443
The response:
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details
Here are the contents of /var/log/syslog:
Sep 6 19:10:00 ip-172-31-46-114 systemd[1]: Starting Lightweight Kubernetes...
Sep 6 19:10:00 ip-172-31-46-114 sh[9516]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Sep 6 19:10:00 ip-172-31-46-114 sh[9517]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Sep 6 19:10:00 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:00Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Sep 6 19:10:00 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:00Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2"
Sep 6 19:10:02 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:02Z" level=info msg="Starting k3s v1.24.4+k3s1 (c3f830e9)"
Sep 6 19:10:22 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:22Z" level=fatal msg="starting kubernetes: preparing server: failed to get CA certs: Get \"https://65.0.54.158:6443/cacerts\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: k3s.service: Failed with result 'exit-code'.
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: Failed to start Lightweight Kubernetes.
I have been stuck on this for two days. I would really appreciate some help. Thank you.
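For anyone debugging the same failure: the fatal syslog line shows that VM-2 timed out fetching https://65.0.54.158:6443/cacerts, i.e. it never received any response from VM-1 on port 6443. The ip-172-31-* hostnames suggest cloud VMs behind a security group or firewall, which is the usual cause. A quick diagnostic sketch from VM-2, reusing the IP from the question:

# Fetch the CA bundle that the join step needs; a timeout here confirms
# that port 6443 is blocked on the network path, not a k3s problem.
curl -vk --max-time 10 https://65.0.54.158:6443/cacerts

Note that for HA servers with embedded etcd, ports 2379/tcp and 2380/tcp must also be reachable between the server nodes, in addition to 6443/tcp.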

Related

Having a problem when using the k3sup join command

I generated an SSH key on the client and copied it to the master and worker nodes; the path is ~/.ssh/id_rsa. I get the following error, and using sudo -S doesn't fix it either.
k3sup join --ip $WORKER_IP --user $WORKER_USER --server-ip $MASTER_IP --server-user $MASTER_USER --k3s-extra-args "--node-external-ip $WORKER_IP --node-ip $WORKER_IP" --k3s-channel stable --print-command
Running: k3sup join
ssh: sudo cat /var/lib/rancher/k3s/server/node-token
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
sudo: a password is required
Error: unable to get join-token from server: Process exited with status 1
However, I expected to get the following output:
$ k3sup join --ip $WORKER_IP --user $WORKER_USER --server-ip $MASTER_IP --server-user $MASTER_USER --k3s-extra-args "--node-external-ip $WORKER_IP --node-ip $WORKER_IP" --k3s-channel stable --print-command
Running: k3sup join
ssh: sudo cat /var/lib/rancher/k3s/server/node-token
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx::server:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ssh: curl -sfL https://get.k3s.io | K3S_URL='https://10.1.1.1:6443' K3S_TOKEN='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx::server:xxxxxxxxxxxxxxxxxxxxxxxxx' INSTALL_K3S_CHANNEL='stable' sh -s - --node-external-ip 10.1.1.2 --node-ip 10.1.1.2
[INFO] Finding release for channel stable
[INFO] Using v1.20.0+k3s2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
Logs: Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
Output: [INFO] Finding release for channel stable
[INFO] Using v1.20.0+k3s2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.20.0+k3s2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
[INFO] systemd: Starting k3s-agent
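For the record, the failure above is not about SSH keys: k3sup connects fine, but then runs sudo cat /var/lib/rancher/k3s/server/node-token non-interactively, and sudo on the server insists on a password when there is no terminal. A common workaround (a sketch, not part of the original post; "ubuntu" below is illustrative, substitute your actual $MASTER_USER / $WORKER_USER) is to grant that user passwordless sudo:

# On each node k3sup logs into: allow the SSH user to run sudo without a
# password prompt, so non-interactive "sudo cat ..." calls succeed.
echo 'ubuntu ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/k3sup
sudo chmod 0440 /etc/sudoers.d/k3sup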

Could not access file "pglogical" while trying to install pglogical

I'm following the instructions from https://github.com/2ndQuadrant/pglogical to install pglogical on PostgreSQL 12 on CentOS 8. The install seems to be successful:
yum -y install postgresql12-pglogical
Last metadata expiration check: 0:21:30 ago on Wed 30 Sep 2020 09:32:13 PM CDT.
Dependencies resolved.
================================================================================================
 Package                 Architecture  Version           Repository                          Size
================================================================================================
Installing:
 postgresql12-pglogical  x86_64        2.3.2-1.el8       2ndquadrant-dl-default-release-pg12 145 k
Installing dependencies:
 postgresql12            x86_64        12.4-1PGDG.rhel8  pgdg12                              1.6 M
 postgresql12-server     x86_64        12.4-1PGDG.rhel8  pgdg12                              5.2 M

Transaction Summary
================================================================================================
Install  3 Packages

Total download size: 7.0 M
Installed size: 29 M
Downloading Packages:
(1/3): postgresql12-12.4-1PGDG.rhel8.x86_64.rpm           1.5 MB/s | 1.6 MB  00:01
(2/3): postgresql12-pglogical-2.3.2-1.el8.x86_64.rpm      117 kB/s | 145 kB  00:01
(3/3): postgresql12-server-12.4-1PGDG.rhel8.x86_64.rpm    4.0 MB/s | 5.2 MB  00:01
------------------------------------------------------------------------------------------------
Total                                                     5.3 MB/s | 7.0 MB  00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : postgresql12-12.4-1PGDG.rhel8.x86_64 1/3
Running scriptlet: postgresql12-12.4-1PGDG.rhel8.x86_64 1/3
failed to link /usr/bin/psql -> /etc/alternatives/pgsql-psql: /usr/bin/psql exists and it is not a symlink
failed to link /usr/bin/clusterdb -> /etc/alternatives/pgsql-clusterdb: /usr/bin/clusterdb exists and it is not a symlink
failed to link /usr/bin/createdb -> /etc/alternatives/pgsql-createdb: /usr/bin/createdb exists and it is not a symlink
failed to link /usr/bin/createuser -> /etc/alternatives/pgsql-createuser: /usr/bin/createuser exists and it is not a symlink
failed to link /usr/bin/dropdb -> /etc/alternatives/pgsql-dropdb: /usr/bin/dropdb exists and it is not a symlink
failed to link /usr/bin/dropuser -> /etc/alternatives/pgsql-dropuser: /usr/bin/dropuser exists and it is not a symlink
failed to link /usr/bin/pg_basebackup -> /etc/alternatives/pgsql-pg_basebackup: /usr/bin/pg_basebackup exists and it is not a symlink
failed to link /usr/bin/pg_dump -> /etc/alternatives/pgsql-pg_dump: /usr/bin/pg_dump exists and it is not a symlink
failed to link /usr/bin/pg_dumpall -> /etc/alternatives/pgsql-pg_dumpall: /usr/bin/pg_dumpall exists and it is not a symlink
failed to link /usr/bin/pg_restore -> /etc/alternatives/pgsql-pg_restore: /usr/bin/pg_restore exists and it is not a symlink
failed to link /usr/bin/reindexdb -> /etc/alternatives/pgsql-reindexdb: /usr/bin/reindexdb exists and it is not a symlink
failed to link /usr/bin/vacuumdb -> /etc/alternatives/pgsql-vacuumdb: /usr/bin/vacuumdb exists and it is not a symlink
Running scriptlet: postgresql12-server-12.4-1PGDG.rhel8.x86_64 2/3
Installing : postgresql12-server-12.4-1PGDG.rhel8.x86_64 2/3
Running scriptlet: postgresql12-server-12.4-1PGDG.rhel8.x86_64 2/3
Installing : postgresql12-pglogical-2.3.2-1.el8.x86_64 3/3
Running scriptlet: postgresql12-pglogical-2.3.2-1.el8.x86_64 3/3
Verifying : postgresql12-pglogical-2.3.2-1.el8.x86_64 1/3
Verifying : postgresql12-12.4-1PGDG.rhel8.x86_64 2/3
Verifying : postgresql12-server-12.4-1PGDG.rhel8.x86_64 3/3
Installed:
postgresql12-12.4-1PGDG.rhel8.x86_64 postgresql12-pglogical-2.3.2-1.el8.x86_64 postgresql12-server-12.4-1PGDG.rhel8.x86_64
Complete!
But when I try to restart Postgres, I get this error:
systemctl restart postgresql
Job for postgresql.service failed because the control process exited with error code.
See "systemctl status postgresql.service" and "journalctl -xe" for details.
Relevant portions of the journalctl -xe output:
-- Unit postgresql.service has begun starting up.
Sep 30 21:54:59 aba postmaster[305963]: 2020-10-01 02:54:59.825 UTC [305963] FATAL: could not access file "pglogical": No such file or directory
Sep 30 21:54:59 aba postmaster[305963]: 2020-10-01 02:54:59.825 UTC [305963] LOG: database system is shut down
Sep 30 21:54:59 aba systemd[1]: postgresql.service: Main process exited, code=exited, status=1/FAILURE
Sep 30 21:54:59 aba systemd[1]: postgresql.service: Failed with result 'exit-code'.
Sep 30 21:54:59 aba systemd[1]: Failed to start PostgreSQL database server.
-- Subject: Unit postgresql.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit postgresql.service has failed.
--
-- The result is failed.
I am lost!
Your session log shows that the server was installed as a prerequisite, but the "failed to link" messages indicate that an incompatible client version was already in place. Probably you had installed PostgreSQL from the CentOS packages, and the pglogical RPMs pulled in the PGDG packages.
The error message probably means that shared_preload_libraries contains pglogical, but pglogical.so could not be found in the lib directory.
Presumably the installation process edited the configuration in your old server installation, but installed the shared object in the new one.
Upshot: you cannot use those pglogical binaries with your installation. Either switch to the PGDG RPMs or build pglogical from source.
You see that there is a certain amount of conjecture in my deductions, but that should help you solve the problem.
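To verify this conjecture on your own system, here is a diagnostic sketch, assuming the standard PGDG paths for PostgreSQL 12 on CentOS 8 (/var/lib/pgsql/12/data for the PGDG data directory, /var/lib/pgsql/data for the stock CentOS one, /usr/pgsql-12/bin/pg_config for the PGDG tooling):

# 1) Find which postgresql.conf asks for pglogical; the server that fails
#    to start is the one whose config contains this setting.
grep -H shared_preload_libraries /var/lib/pgsql/12/data/postgresql.conf /var/lib/pgsql/data/postgresql.conf 2>/dev/null
# 2) Check whether pglogical.so landed in the PGDG lib directory.
ls "$(/usr/pgsql-12/bin/pg_config --pkglibdir)" | grep pglogical

If the config that sets shared_preload_libraries = 'pglogical' belongs to one installation while pglogical.so sits in the other installation's lib directory, you have exactly the mismatch described above.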

Adding removed etcd member in Kubernetes master

I was following Kelsey Hightower's kubernetes-the-hard-way repo and successfully created a cluster with 3 master nodes and 3 worker nodes. Here are the problems I encountered when removing one of the etcd members and then adding it back, along with all the steps used:
3 master nodes:
10.240.0.10 controller-0
10.240.0.11 controller-1
10.240.0.12 controller-2
Step 1:
isaac@controller-0:~$ sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
Result:
b28b52253c9d447e, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
Step 2 (Remove etcd member of controller-2):
isaac@controller-0:~$ sudo ETCDCTL_API=3 etcdctl member remove b28b52253c9d447e --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
Step 3 (Add the member back):
isaac@controller-0:~$ sudo ETCDCTL_API=3 etcdctl member add controller-2 --peer-urls=https://10.240.0.12:2380 --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
Result:
Member 66d450d03498eb5c added to cluster 3e7cc799faffb625
ETCD_NAME="controller-2"
ETCD_INITIAL_CLUSTER="controller-2=https://10.240.0.12:2380,controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.240.0.12:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Step 4 (run member list command):
isaac@controller-0:~$ sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
Result:
66d450d03498eb5c, unstarted, , https://10.240.0.12:2380,
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
Step 5 (Run the command to start etcd in controller-2):
isaac@controller-2:~$ sudo etcd --name controller-2 --listen-client-urls https://10.240.0.12:2379,http://127.0.0.1:2379 --advertise-client-urls https://10.240.0.12:2379 --listen-peer-urls https://10.240.0.12:2380 --initial-advertise-peer-urls https://10.240.0.12:2380 --initial-cluster-state existing --initial-cluster controller-0=http://10.240.0.10:2380,controller-1=http://10.240.0.11:2380,controller-2=http://10.240.0.12:2380 --ca-file /etc/etcd/ca.pem --cert-file /etc/etcd/kubernetes.pem --key-file /etc/etcd/kubernetes-key.pem
Result:
2019-06-09 13:10:14.958799 I | etcdmain: etcd Version: 3.3.9
2019-06-09 13:10:14.959022 I | etcdmain: Git SHA: fca8add78
2019-06-09 13:10:14.959106 I | etcdmain: Go Version: go1.10.3
2019-06-09 13:10:14.959177 I | etcdmain: Go OS/Arch: linux/amd64
2019-06-09 13:10:14.959237 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2019-06-09 13:10:14.959312 W | etcdmain: no data-dir provided, using default data-dir ./controller-2.etcd
2019-06-09 13:10:14.959435 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-06-09 13:10:14.959575 C | etcdmain: cannot listen on TLS for 10.240.0.12:2380: KeyFile and CertFile are not presented
Clearly, the etcd service did not start as expected, so I troubleshot it as follows:
isaac@controller-2:~$ sudo systemctl status etcd
Result:
● etcd.service - etcd
   Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Sun 2019-06-09 13:06:55 UTC; 29min ago
     Docs: https://github.com/coreos
  Process: 1876 ExecStart=/usr/local/bin/etcd --name controller-2 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file=/etc/etcd/kubernetes.pem --peer-key-file=/etc/etcd/kube
 Main PID: 1876 (code=exited, status=0/SUCCESS)

Jun 09 13:06:55 controller-2 etcd[1876]: stopped peer f98dc20bce6225a0
Jun 09 13:06:55 controller-2 etcd[1876]: stopping peer ffed16798470cab5...
Jun 09 13:06:55 controller-2 etcd[1876]: stopped streaming with peer ffed16798470cab5 (writer)
Jun 09 13:06:55 controller-2 etcd[1876]: stopped streaming with peer ffed16798470cab5 (writer)
Jun 09 13:06:55 controller-2 etcd[1876]: stopped HTTP pipelining with peer ffed16798470cab5
Jun 09 13:06:55 controller-2 etcd[1876]: stopped streaming with peer ffed16798470cab5 (stream MsgApp v2 reader)
Jun 09 13:06:55 controller-2 etcd[1876]: stopped streaming with peer ffed16798470cab5 (stream Message reader)
Jun 09 13:06:55 controller-2 etcd[1876]: stopped peer ffed16798470cab5
Jun 09 13:06:55 controller-2 etcd[1876]: failed to find member f98dc20bce6225a0 in cluster 3e7cc799faffb625
Jun 09 13:06:55 controller-2 etcd[1876]: forgot to set Type=notify in systemd service file?
I tried starting the etcd member with several different commands, but the etcd on controller-2 still seems stuck in the unstarted state. May I know the reason for that? Any pointers would be highly appreciated! Thanks.
Turned out I solved the problem as follows (credit to Matthew):
Delete the etcd data directory with the following command:
rm -rf /var/lib/etcd/*
To fix the message cannot listen on TLS for 10.240.0.12:2380: KeyFile and CertFile are not presented, I revised the command to start the etcd as follows:
sudo etcd --name controller-2 --listen-client-urls https://10.240.0.12:2379,http://127.0.0.1:2379 --advertise-client-urls https://10.240.0.12:2379 --listen-peer-urls https://10.240.0.12:2380 --initial-advertise-peer-urls https://10.240.0.12:2380 --initial-cluster-state existing --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 --peer-trusted-ca-file /etc/etcd/ca.pem --cert-file /etc/etcd/kubernetes.pem --key-file /etc/etcd/kubernetes-key.pem --peer-cert-file /etc/etcd/kubernetes.pem --peer-key-file /etc/etcd/kubernetes-key.pem --data-dir /var/lib/etcd
A few points to note here:
The newly added arguments --cert-file and --key-file present the required key and certificate of controller-2.
The --peer-trusted-ca-file argument is also presented so that the x509 certificates presented by controller-0 and controller-1 can be checked against a known CA. If it is not presented, the error etcdserver: could not get cluster response from https://10.240.0.11:2380: Get https://10.240.0.11:2380/members: x509: certificate signed by unknown authority may be encountered.
The value presented for the argument --initial-cluster needs to be in-line with that shown in the systemd unit file.
If you are re-adding a member, the easier solution is the following:
rm -rf /var/lib/etcd/*
kubeadm join phase control-plane-join etcd --control-plane
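Either way, it is worth confirming that the re-added member actually leaves the unstarted state. A verification sketch reusing the same etcdctl flags as in the question:

# All three members should now report "started", and controller-2 should
# list both its peer URL (2380) and its client URL (2379).
sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
# Check the health of every endpoint in the cluster.
sudo ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem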

Node-RED on Raspberry Pi is not starting any longer

I am not quite sure which node is causing this behaviour, and there are too many flows to reinstall from scratch; unfortunately, I do not have a backup of them.
This morning I realized that I can no longer access the HTTP GUI of my Node-RED instance on my Raspberry Pi Zero, even though I had only edited some flows, nothing really serious.
When I try to start Node-RED, no GUI or UI comes up to access the instance, and I don't know how to troubleshoot this. Here is what I am doing:
pi@nodered-pi:~/.node-red $ node-red-start
Start Node-RED
Once Node-RED has started, point a browser at http://192.168.1.42:1880
On Pi Node-RED works better with the Firefox or Chrome browser
Use node-red-stop to stop Node-RED
Use node-red-start to start Node-RED again
Use node-red-log to view the recent log output
Use sudo systemctl enable nodered.service to autostart Node-RED at every boot
Use sudo systemctl disable nodered.service to disable autostart on boot
To find more nodes and example flows - go to http://flows.nodered.org
Starting as a systemd service.
Started Node-RED graphical event wiring tool.
19 Aug 15:13:55 - [info]
Welcome to Node-RED
===================
19 Aug 15:13:55 - [info] Node-RED version: v0.18.7
19 Aug 15:13:55 - [info] Node.js version: v8.11.1
19 Aug 15:13:55 - [info] Linux 4.14.52+ arm LE
19 Aug 15:14:06 - [info] Loading palette nodes
19 Aug 15:14:37 - [info] Dashboard version 2.9.6 started at /ui
19 Aug 15:14:49 - [warn] ------------------------------------------------------
19 Aug 15:14:49 - [warn] [node-red-contrib-delta-timed/delta-time] 'delta' already registered by module node-red-contrib-change-detect
19 Aug 15:14:49 - [warn] ------------------------------------------------------
19 Aug 15:14:49 - [info] Settings file : /home/pi/.node-red/settings.js
19 Aug 15:14:49 - [info] User directory : /home/pi/.node-red
19 Aug 15:14:49 - [warn] Projects disabled : set editorTheme.projects.enabled=true to enable
19 Aug 15:14:49 - [info] Flows file : /home/pi/.node-red/flows_nodered-pi.json
19 Aug 15:14:50 - [info] Server now running at http://127.0.0.1:1880/
19 Aug 15:14:50 - [warn]
---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.
If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.
You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------
19 Aug 15:14:50 - [warn] Error loading credentials: SyntaxError: Unexpected token T in JSON at position 0
19 Aug 15:14:50 - [warn] Error loading flows: Error: Failed to decrypt credentials
19 Aug 15:14:51 - [info] Starting flows
19 Aug 15:15:01 - [warn] [telegram receiver:Telegram Receiver] bot not initialized
19 Aug 15:15:01 - [warn] [telegram sender:Temperatur Wetterstation] bot not initialized.
19 Aug 15:15:01 - [error] [function:Versorge mit Information] SyntaxError: Invalid or unexpected token
19 Aug 15:15:01 - [info] Started flows
19 Aug 15:15:02 - [info] [sonoff-server:166ef3ba.0029bc] SONOFF Server Started On Port 1080
19 Aug 15:15:02 - [red] Uncaught Exception:
19 Aug 15:15:02 - Error: listen EACCES 0.0.0.0:443
at Object._errnoException (util.js:1022:11)
at _exceptionWithHostPort (util.js:1044:20)
nodered.service: Main process exited, code=exited, status=1/FAILURE
nodered.service: Unit entered failed state.
nodered.service: Failed with result 'exit-code'.
nodered.service: Service hold-off time over, scheduling restart.
Stopped Node-RED graphical event wiring tool.
Started Node-RED graphical event wiring tool.
19 Aug 15:15:20 - [info]
Welcome to Node-RED
===================
19 Aug 15:15:20 - [info] Node-RED version: v0.18.7
19 Aug 15:15:02 - Error: listen EACCES 0.0.0.0:443
at Object._errnoException (util.js:1022:11)
at _exceptionWithHostPort (util.js:1044:20)
This error means the process could not bind to port 443. EACCES is a permission error: on Linux, ports below 1024 can normally only be bound by root, so Node-RED running as an unprivileged user will fail exactly like this. (If the port were merely occupied by another process, such as an existing copy of Node-RED, you would usually see EADDRINUSE instead.) You can check what, if anything, is listening on port 443 with the following command:
lsof -i :443
This will list what is listening on port 443
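Note that lsof only reports sockets belonging to processes you can see, so run it as root; ss from iproute2 works too if lsof is not installed:

# Run as root so sockets of other users' processes are visible.
sudo lsof -i :443
# Alternative using ss.
sudo ss -tlnp 'sport = :443'

If nothing shows up, the fix is a permissions one: either move the flow's HTTPS listener to an unprivileged port (e.g. 8443), or grant the Node.js binary the bind capability. A sketch of the latter, assuming node lives at /usr/bin/node on your system:

# Allow the node binary to bind privileged ports without running as root.
sudo setcap 'cap_net_bind_service=+ep' /usr/bin/node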

Node-red looping on start raspberry pi

I was using Node-RED on my Raspberry Pi normally until I got the brilliant idea to install new nodes.
Now I am unable to start it; Node-RED loops during startup.
Start Node-RED

Once Node-RED has started, point a browser at http://192.168.0.113:1880
On Pi Node-RED works better with the Firefox or Chrome browser
Use node-red-stop to stop Node-RED
Use node-red-start to start Node-RED again
Use node-red-log to view the recent log output
Use sudo systemctl enable nodered.service to autostart Node-RED at every boot
Use sudo systemctl disable nodered.service to disable autostart on boot
To find more nodes and example flows - go to http://flows.nodered.org
Starting as a systemd service.
Started Node-RED graphical event wiring tool..
10 Jan 00:29:50 - [info] Welcome to Node-RED
===================
10 Jan 00:29:50 - [info] Node-RED version: v0.17.5
10 Jan 00:29:50 - [info] Node.js version: v6.12.3
10 Jan 00:29:50 - [info] Linux 4.9.35-v7+ arm LE
10 Jan 00:29:51 - [info] Loading palette nodes
[../deps/mpg123/src/output/alsa.c:165] error: cannot open device default
node: pcm_params.c:2286: snd_pcm_hw_refine: Assertion `pcm && params' failed.
nodered.service: main process exited, code=killed, status=6/ABRT
Unit nodered.service entered failed state.
nodered.service holdoff time over, scheduling restart.
Stopping Node-RED graphical event wiring tool....
Starting Node-RED graphical event wiring tool....
Started Node-RED graphical event wiring tool..
10 Jan 00:29:57 - [info] Welcome to Node-RED
===================
10 Jan 00:29:57 - [info] Node-RED version: v0.17.5
10 Jan 00:29:57 - [info] Node.js version: v6.12.3
10 Jan 00:29:57 - [info] Linux 4.9.35-v7+ arm LE
10 Jan 00:29:58 - [info] Loading palette nodes
[../deps/mpg123/src/output/alsa.c:165] error: cannot open device default
node: pcm_params.c:2286: snd_pcm_hw_refine: Assertion `pcm && params' failed.
nodered.service: main process exited, code=killed, status=6/ABRT
Unit nodered.service entered failed state.
nodered.service holdoff time over, scheduling restart.
Stopping Node-RED graphical event wiring tool....
Starting Node-RED graphical event wiring tool....
Started Node-RED graphical event wiring tool..
10 Jan 00:30:04 - [info] Welcome to Node-RED
===================
10 Jan 00:30:04 - [info] Node-RED version: v0.17.5
10 Jan 00:30:04 - [info] Node.js version: v6.12.3
10 Jan 00:30:04 - [info] Linux 4.9.35-v7+ arm LE
10 Jan 00:30:05 - [info] Loading palette nodes
Go to the directory /home/pi/.node-red and open the package.json file.
It should contain a dependencies section that lists the extra nodes you installed (assuming you did so via the Palette Manager in the editor). Identify any nodes related to playing audio - the crashing error is related to trying to use the mpg123 command line tool for playing audio files.
Then on the command line, run npm remove NAME-OF-MODULE --save.
Node-RED should then restart cleanly.
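A minimal sketch of that workflow (the module name below is purely illustrative; use whatever audio-related package your package.json actually lists):

# Work in the Node-RED user directory, where package.json lives.
cd /home/pi/.node-red
# Inspect the extra nodes that were installed.
cat package.json
# Remove the suspect audio node (illustrative name) and save the change.
npm remove node-red-contrib-play-audio --save
# Start Node-RED again.
node-red-start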