I installed OpenNebula stable (v5.10) on Ubuntu Server 18.04 LTS.
The installation completed, but the opennebula-sunstone service doesn't start.
sunstone-server.conf:
# VNC Configuration
:vnc_proxy_base_port: 0
:novnc_path: /usr/share/opennebula/websockify/websocketproxy.py
# Default language setting
:lang: en_US
occi-server.conf:
:vnc_enable: yes
:vnc_proxy_port: 0
:vnc_proxy_path: /usr/share/opennebula/websockify/websocketproxy.py
:vnc_proxy_support_wss: yes
:vnc_proxy_cert:
:vnc_proxy_key:
# dpkg -l | grep novnc
novnc 1:0.4+dfsg+1+20131010+gitf68af8af3d-7 all HTML5 VNC client - daemon and programs
python-novnc 1:0.4+dfsg+1+20131010+gitf68af8af3d-7 all HTML5 VNC client - libraries
The error:
Failed to start OpenNebula noVNC Server.
I fixed the problem by reinstalling the Sunstone gems with OpenNebula's bundled installer (the script also pulls in the Ruby development libraries it needs):
sudo /usr/share/one/install_gems sunstone
sudo systemctl start opennebula-novnc.service
sudo systemctl status opennebula-novnc.service
Loaded: loaded (/lib/systemd/system/opennebula-novnc.service; disabled; vendor preset: enabled)
Active: active (running) since Sun 2020-03-01 18:08:08 +0330; 1 day 17h ago
Main PID: 959 (python)
Tasks: 1
Memory: 13.4M
CPU: 15.896s
CGroup: /system.slice/opennebula-novnc.service
└─959 python /usr/share/one/websockify/run --target-config=/var/lib/one/sunstone_vnc_tokens 29876
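For completeness: the Loaded: line above shows the unit as disabled, so it won't come back after a reboot. A minimal sketch, using the unit names from the output above, to enable both services at boot:
sudo systemctl enable --now opennebula-novnc.service
sudo systemctl enable --now opennebula-sunstone.service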
I installed the latest supervisord:
pip install supervisor==4.2.1
and then I copied the supervisord init script to /etc/init.d/supervisord:
sudo cp /path/to/init_script.sh /etc/init.d/supervisord
sudo update-rc.d supervisord defaults
and then I ran:
sudo service supervisord start && sudo service supervisord status
and got:
● supervisord.service - LSB: Starts supervisord - see http://supervisord.org
Loaded: loaded (/etc/init.d/supervisord; generated)
Active: active (exited) since Sat 2020-08-29 13:03:57 UTC; 19min ago
Docs: man:systemd-sysv-generator(8)
Tasks: 0 (limit: 1074)
Memory: 0B
CGroup: /system.slice/supervisord.service
Aug 29 13:03:57 vagrant systemd[1]: Starting LSB: Starts supervisord - see http://supervisord.org...
Aug 29 13:03:57 vagrant systemd[1]: Started LSB: Starts supervisord - see http://supervisord.org.
and then I checked:
supervisorctl
and got:
unix:///tmp/supervisor.sock no such file
supervisor>
However, if I run supervisord manually:
supervisord
it starts:
redis STARTING
supervisor>
Any ideas on how I can make it run automatically at boot? And why doesn't sudo service supervisord start start it?
To answer my own question, it is because the init script I was using has the following:
NAME=supervisord
DAEMON=/usr/bin/$NAME
SUPERVISORCTL=/usr/bin/supervisorctl
However, supervisord is installed in /usr/local/bin/, so I changed the paths, and everything works now.
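For reference, the corrected variables in the init script would look like this (assuming pip installed the binaries to /usr/local/bin; which supervisord confirms the actual location):
NAME=supervisord
DAEMON=/usr/local/bin/$NAME
SUPERVISORCTL=/usr/local/bin/supervisorctl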
I want to remove Tomcat from my CentOS 7 machine. I already removed it with yum, but it's still there.
sudo service tomcat status
Redirecting to /bin/systemctl status tomcat.service
tomcat.service - Tomcat 9 servlet container
Loaded: loaded (/etc/systemd/system/tomcat.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2020-06-02 16:57:54 CEST; 6s ago
Process: 24809 ExecStart=/opt/tomcat/apache-tomcat-9.0.30/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 24816 (java)
CGroup: /system.slice/tomcat.service
└─24816 /usr/lib/jvm/jre/bin/java -Djava.util.logging.config.file=/opt/tomcat/apache-tomcat-9.0.30/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.security.egd=file:///dev/urandom -Djdk.tls....
[root@localhost share]$ sudo rpm -qa | grep tomcat
tomcat-lib-7.0.76-11.el7_7.noarch
tomcat-javadoc-7.0.76-11.el7_7.noarch
tomcat-servlet-3.0-api-7.0.76-11.el7_7.noarch
tomcat-el-2.2-api-7.0.76-11.el7_7.noarch
tomcat-jsp-2.2-api-7.0.76-11.el7_7.noarch
Hope you can help me.
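The status output shows that the running instance is a manual install under /opt/tomcat (started by /etc/systemd/system/tomcat.service), not one of the yum packages, which is why yum didn't remove it. A plausible cleanup sketch, using only the paths and package names from the output above:
sudo systemctl stop tomcat
sudo systemctl disable tomcat
sudo rm /etc/systemd/system/tomcat.service
sudo systemctl daemon-reload
sudo rm -rf /opt/tomcat
sudo yum remove tomcat-lib tomcat-javadoc tomcat-servlet-3.0-api tomcat-el-2.2-api tomcat-jsp-2.2-api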
I am installing ELK on my Ubuntu 16.04 VM and I am now facing some issues after running the command, even after having made all the necessary changes in the elasticsearch.yml file. Please help me resolve this issue.
Below is the error after running the command service elasticsearch status:
service elasticsearch status
* elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2019-11-06 07:36:45 GMT; 1h 37min ago
Docs: http://www.elastic.co
Main PID: 73248 (code=exited, status=1/FAILURE)
Check the configuration changes made to Elasticsearch; the service is not able to start because the configuration is invalid.
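The status output alone doesn't show the underlying error, so the logs are the place to look. A quick sketch (the log file name assumes the default cluster name, elasticsearch):
sudo journalctl -u elasticsearch.service --no-pager -n 50
sudo tail -n 50 /var/log/elasticsearch/elasticsearch.log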
While trying to run a Puppet update from a node:
sudo /opt/puppetlabs/bin/puppet agent -t
I get an error:
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Connection refused - connect(2) for "puppet" port 8140
Answers elsewhere indicate this is likely a problem with the puppetserver service and suggest rebooting the server. Rebooting didn't help, and when I try to restart the service I get a failure:
~$ sudo service puppetserver restart
Job for puppetserver.service failed because the control process exited with error code. See "systemctl status puppetserver.service" and "journalctl -xe" for details.
I've looked at these logs, and as a puppet/linux noob, I'm not sure what to do next.
systemctl status puppetserver.service
● puppetserver.service - puppetserver Service
Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; vendor preset: enabled)
Active: activating (start-post) since Fri 2016-09-02 15:54:26 PDT; 2s ago
Process: 22301 ExecStartPre=/usr/bin/install --directory --owner=puppet --group=puppet --mode=775 /var/run/puppetlabs/puppetserver (code=exited
Main PID: 22306 (java); : 22307 (bash)
Tasks: 17
Memory: 335.7M
CPU: 5.535s
CGroup: /system.slice/puppetserver.service
├─22306 /usr/bin/java -Xms6g -Xmx6g -XX:MaxPermSize=256m -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -cp /opt/p
└─control
├─22307 /bin/bash /opt/puppetlabs/server/apps/puppetserver/ezbake-functions.sh wait_for_app
└─22331 sleep 1
Sep 02 15:54:26 puppet systemd[1]: Starting puppetserver Service...
Sep 02 15:54:26 puppet java[22306]: OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
puppet version 4.6.1
The puppet master communicates with the other node using port number 8140.
I don't think a restart will help, since this looks like a connection issue between the server and the node.
Please try the following:
First, make sure that the puppet master is actually listening on port 8140. Run the following command on the puppetmaster:
netstat -ntlp | grep 8140
This command should return something like this:
tcp 0 0 0.0.0.0:8140 0.0.0.0:* LISTEN 1783/puppetmaster
If you don't get similar output, your puppetmaster is not listening and therefore cannot compile catalogs for the node. Try checking the puppet master log at /var/log/puppetmaster.log.
Next, check that the node can communicate with the puppetmaster on the relevant port. You can check this quickly with the telnet command. Run this on your node:
telnet <puppetmaster-ip-or-dns-name> 8140
You should get something like:
Connected to <puppet-master-IP/DNS-name>
Escape character is '^]'.
If you don't get this output, something is blocking you from accessing the puppetmaster. Try opening the port in your firewall, for example as shown below.
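With ufw (an assumption here; use your distribution's firewall tool if it differs):
sudo ufw allow 8140/tcp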
If you're still stuck, try running the agent with the --debug flag for verbose output and edit your question.
It could be one of two things: (1) in puppet.conf you have configured more memory than the machine actually has, or (2) you installed both apt-get install puppetserver and apt-get install puppet.
If you get a "Failed to start puppet.service: Unit not found." error on the agent machine while connecting to the puppet master: close PuTTY, then open it again and reconnect. The issue won't appear when starting PuTTY on the agent again.
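As a quick sanity check, you can also confirm that the puppet unit actually exists on the agent:
systemctl list-unit-files | grep -i puppet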
The error occurs because there is not enough RAM. To fix it, open the Puppet Server configuration file:
sudo nano /etc/sysconfig/puppetserver
and reduce the amount of RAM allocated to the Puppet server (for example, I specified 512m instead of 2g):
JAVA_ARGS="-Xms512m -Xmx512m"
Now let’s start the Puppet server:
sudo systemctl start puppetserver
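To confirm the new heap actually fits, it may help to compare it with the memory available on the machine and then check the service state:
free -h
sudo systemctl status puppetserver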
I'm trying to set up a systemd service configuration that restarts the service on watchdog failure. If my application does not call sd_notify() in time, systemd spawns a new instance.
However, the previous instance is not killed. After some time, I have many instances of my application running.
$ systemctl status my-daemon.service
Loaded: loaded (/lib/systemd/system/my-daemon.service; disabled)
Active: active (running) since Tue, 26 Aug 2014 10:27:46 +0000; 7s ago
Main PID: 1433 (attendance-syst)
CGroup: name=systemd:/system/my-daemon.service
├ 1281 /usr/local/bin/my-daemon
├ 1384 /usr/local/bin/my-daemon
├ 1407 /usr/local/bin/my-daemon
└ 1433 /usr/local/bin/my-daemon
...
This is part of my service file:
[Service]
ExecStart=/usr/local/bin/my-daemon
TimeoutStopSec=5
WatchdogSec=10
Restart=on-failure
How can I configure systemd to kill instances that fail the watchdog?
I have already read the manual page, but it didn't help me.
I thought Restart=on-failure would restart a hung process by default...
It's a bug, and it's already fixed in newer versions of systemd.
In systemd 208 (available for Debian jessie) it works correctly.
In systemd 204 (available for Debian wheezy via backports) it's still broken.
I haven't found the exact release where they fixed it.
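For reference, on a systemd new enough to have the fix, a unit along these lines both kills the hung instance and starts a fresh one. This is a sketch based on the unit above; Type=notify is my addition, so that systemd accepts the sd_notify() messages from the daemon:
[Service]
Type=notify
ExecStart=/usr/local/bin/my-daemon
# kill the main process if no WATCHDOG=1 ping arrives within 10 s
WatchdogSec=10
# on stop, send SIGTERM and escalate to SIGKILL after 5 s
TimeoutStopSec=5
# a watchdog timeout counts as a failure, so the service is restarted
Restart=on-failure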