Remove Tomcat completely from CentOS 7

I want to remove Tomcat from my CentOS 7 machine. I already removed it with yum, but it's still there:
sudo service tomcat status
Redirecting to /bin/systemctl status tomcat.service
tomcat.service - Tomcat 9 servlet container
Loaded: loaded (/etc/systemd/system/tomcat.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2020-06-02 16:57:54 CEST; 6s ago
Process: 24809 ExecStart=/opt/tomcat/apache-tomcat-9.0.30/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 24816 (java)
CGroup: /system.slice/tomcat.service
└─24816 /usr/lib/jvm/jre/bin/java -Djava.util.logging.config.file=/opt/tomcat/apache-tomcat-9.0.30/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.security.egd=file:///dev/urandom -Djdk.tls....
[root@localhost share]$ sudo rpm -qa | grep tomcat
tomcat-lib-7.0.76-11.el7_7.noarch
tomcat-javadoc-7.0.76-11.el7_7.noarch
tomcat-servlet-3.0-api-7.0.76-11.el7_7.noarch
tomcat-el-2.2-api-7.0.76-11.el7_7.noarch
tomcat-jsp-2.2-api-7.0.76-11.el7_7.noarch
Hope you can help me.
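One likely cause, judging from the output above: the yum packages listed are all Tomcat 7 libraries, while the running service is a Tomcat 9 installed manually under /opt/tomcat with a hand-written systemd unit, which yum knows nothing about. A cleanup sketch, assuming the unit and install paths shown in the status output:

```shell
# Stop and disable the manually installed Tomcat 9 service
sudo systemctl stop tomcat
sudo systemctl disable tomcat

# Remove the hand-written unit file (path taken from the status output above)
sudo rm /etc/systemd/system/tomcat.service
sudo systemctl daemon-reload

# Remove the manual installation directory
sudo rm -rf /opt/tomcat

# Remove the remaining yum-managed Tomcat 7 packages
sudo yum remove tomcat-lib tomcat-javadoc tomcat-servlet-3.0-api \
    tomcat-el-2.2-api tomcat-jsp-2.2-api
```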


MongoDB Compass doesn't work - how to make it work?

MongoDB Compass doesn't work. I installed it from the official site. Before that there was also a problem, and I had to reinstall Ubuntu (the Russian-language version).
[23114:0923/070140.222706:FATAL:gpu_data_manager_impl_private.cc(894)] The display compositor is frequently crashing. Goodbye.
Additional Information
sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor>
Active: active (running) since Thu 2021-09-23 06:44:41 EEST; 18s ago
Docs: https://docs.mongodb.org/manual
Main PID: 18125 (mongod)
Memory: 60.7M
CPU: 685ms
CGroup: /system.slice/mongod.service
└─18125 /usr/bin/mongod --config /etc/mongod.conf
Sep 23 06:44:41 d991 systemd[1]: Started MongoDB Database Server.
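The mongod service itself is running fine; the FATAL error above comes from Compass's embedded Chromium (Electron) renderer, not from MongoDB. A commonly suggested workaround for the "display compositor is frequently crashing" error (an assumption, not confirmed in this thread) is to launch Compass with GPU compositing disabled:

```shell
# Standard Chromium/Electron flag to bypass GPU compositor crashes
mongodb-compass --disable-gpu
```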

supervisord not starting automatically on Ubuntu 20.04

I installed the latest supervisord:
pip install supervisor==4.2.1
and then I copied the supervisord init script to /etc/init.d/supervisord:
sudo cp /path/to/init_script.sh /etc/init.d/supervisord
sudo update-rc.d supervisord defaults
and then I run:
sudo service supervisord start && sudo service supervisord status
got:
● supervisord.service - LSB: Starts supervisord - see http://supervisord.org
Loaded: loaded (/etc/init.d/supervisord; generated)
Active: active (exited) since Sat 2020-08-29 13:03:57 UTC; 19min ago
Docs: man:systemd-sysv-generator(8)
Tasks: 0 (limit: 1074)
Memory: 0B
CGroup: /system.slice/supervisord.service
Aug 29 13:03:57 vagrant systemd[1]: Starting LSB: Starts supervisord - see http://supervisord.org...
Aug 29 13:03:57 vagrant systemd[1]: Started LSB: Starts supervisord - see http://supervisord.org.
and then I check
supervisorctl
got
unix:///tmp/supervisor.sock no such file
supervisor>
However, if I manually run
supervisord
it will start:
redis STARTING
supervisor>
Any ideas on how I can make it run automatically at boot? And why does sudo service supervisord start not actually start it?
To answer my own question, it is because the init script I was using has the following:
NAME=supervisord
DAEMON=/usr/bin/$NAME
SUPERVISORCTL=/usr/bin/supervisorctl
however, supervisord is installed in /usr/local/bin/, so I changed the path, and everything works now.
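For reference, the corrected variables in the init script, pointing at the pip install location instead of /usr/bin:

```shell
NAME=supervisord
DAEMON=/usr/local/bin/$NAME
SUPERVISORCTL=/usr/local/bin/supervisorctl
```

This explains the earlier symptom: the init script "succeeded" (active (exited)) because the LSB wrapper ran, but the daemon binary it pointed at did not exist, so no supervisor.sock was ever created.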

opennebula-sunstone service failed

I installed OpenNebula stable (v5.10) on Ubuntu Server 18.04 LTS. After the installation completed, the opennebula-sunstone service doesn't start.
Sunstone-server.conf:
# VNC Configuration
:vnc_proxy_base_port: 0
:novnc_path: /usr/share/opennebula/websockify/websocketproxy.py
# Default language setting
:lang: en_US
Occi-server.config:
:vnc_enable: yes
:vnc_proxy_port: 0
:vnc_proxy_path: /usr/share/opennebula/websockify/websocketproxy.py
:vnc_proxy_support_wss: yes
:vnc_proxy_cert:
:vnc_proxy_key:
# dpkg -l | grep novnc
novnc 1:0.4+dfsg+1+20131010+gitf68af8af3d-7 all HTML5 VNC client - daemon and programs
python-novnc 1:0.4+dfsg+1+20131010+gitf68af8af3d-7 all HTML5 VNC client - libraries
The error:
Failed to start OpenNebula noVNC Server.
I fixed the problem by reinstalling the Sunstone and Ruby gems:
sudo /usr/share/one/install_gems sunstone rubydevelopmentlibrary
systemctl start opennebula-novnc.service
Loaded: loaded (/lib/systemd/system/opennebula-novnc.service; disabled; vendor preset: enabled)
Active: active (running) since Sun 2020-03-01 18:08:08 +0330; 1 day 17h ago
Main PID: 959 (python)
Tasks: 1
Memory: 13.4M
CPU: 15.896s
CGroup: /system.slice/opennebula-novnc.service
└─959 python /usr/share/one/websockify/run --target-config=/var/lib/one/sunstone_vnc_tokens 29876

Redis fails to start with error: redis-server.service: Failed at step NAMESPACE spawning /usr/bin/redis-server: Stale file handle

After upgrading Debian, the system has an issue starting redis-server.service.
In the output of journalctl -xe I see the following:
redis-server.service: Failed at step NAMESPACE spawning /usr/bin/redis-server: Stale file handle.
I can't start the redis-server.service and in the output of the systemctl start redis-server I have:
Job for redis-server.service failed because the control process exited with error code.
See "systemctl status redis-server.service" and "journalctl -xe" for details.
In the output of systemctl status redis-server I have:
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2018-01-29 10:29:08 MSK; 58s ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 11701 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=226/NAMESPACE)
Process: 11720 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=226/NAMESPACE)
Main PID: 10193 (code=exited, status=0/SUCCESS)
Jan 29 10:29:08 xxx systemd[1]: redis-server.service: Service hold-off time over, scheduling restart.
Jan 29 10:29:08 xxx systemd[1]: redis-server.service: Scheduled restart job, restart counter is at 5.
Jan 29 10:29:08 xxx systemd[1]: Stopped Advanced key-value store.
My question is: how do I fix this issue and start redis-server.service?
Found a workaround:
I played around with /lib/systemd/system/redis-server.service, editing the service file as root and commenting out different fields to find where the failure happens, restarting systemd each time (via systemctl daemon-reload, systemctl stop redis-server, systemctl start redis-server).
For me the issue was the following line in the redis-server.service file:
ReadOnlyDirectories=/
which I commented out, and that allowed redis-server to start successfully.
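The edit/reload cycle described above, as concrete commands (a sketch, assuming the stock unit file path):

```shell
# Comment out the offending line in the unit file
sudo sed -i 's/^ReadOnlyDirectories=\/$/#&/' /lib/systemd/system/redis-server.service

# Make systemd re-read the unit, then restart and check
sudo systemctl daemon-reload
sudo systemctl restart redis-server
systemctl status redis-server
```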
So my current /lib/systemd/system/redis-server.service is:
[Unit]
Description=Advanced key-value store
After=network.target
Documentation=http://redis.io/documentation, man:redis-server(1)
[Service]
Type=forking
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStop=/bin/kill -s TERM $MAINPID
PIDFile=/var/run/redis/redis-server.pid
TimeoutStopSec=0
Restart=always
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=2755
UMask=007
PrivateTmp=yes
LimitNOFILE=65535
PrivateDevices=yes
ProtectHome=yes
#Modified 20180129 to avoid issue to start redis
#redis-server.service: Failed at step NAMESPACE spawning /usr/bin/redis-server: Stale file handle
#ReadOnlyDirectories=/
ReadWriteDirectories=-/var/lib/redis
ReadWriteDirectories=-/var/log/redis
ReadWriteDirectories=-/var/run/redis
NoNewPrivileges=true
CapabilityBoundingSet=CAP_SETGID CAP_SETUID CAP_SYS_RESOURCE
MemoryDenyWriteExecute=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
# redis-server can write to its own config file when in cluster mode so we
# permit writing there by default. If you are not using this feature, it is
# recommended that you replace the following lines with "ProtectSystem=full".
ProtectSystem=true
ReadWriteDirectories=-/etc/redis
[Install]
WantedBy=multi-user.target
Alias=redis.service
I've just encountered this issue after running sudo apt -y dist-upgrade and got this slightly different error in /var/log/syslog
redis-server.service: Failed at step NAMESPACE spawning /usr/bin/redis-server: Invalid argument
The solution, from Launchpad Bug 1638410, is to create an override:
sudo systemctl edit redis-server
and add:
[Service]
ProtectHome=no
Save and exit the editor and complete the upgrade:
sudo apt install -f
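systemctl edit writes the override into a drop-in file; a quick way to confirm it is in place before restarting (general systemd usage, not spelled out in the thread):

```shell
# Show the unit together with its drop-ins; the override should appear at the end
systemctl cat redis-server | tail -n 5

# Restart so the new ProtectHome setting takes effect
sudo systemctl restart redis-server
```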
cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
$ apt policy redis-server
redis-server:
Installed: 5:5.0.0-3chl1~xenial1
Candidate: 5:5.0.0-3chl1~xenial1
Version table:
*** 5:5.0.0-3chl1~xenial1 500
500 http://ppa.launchpad.net/chris-lea/redis-server/ubuntu xenial/main amd64 Packages
100 /var/lib/dpkg/status
2:3.0.6-1 500
500 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
If you're using Ubuntu, in
/etc/redis/redis.conf
you should have:
supervised systemd

Kestrel failed to start on Ubuntu 16.04

I have the following config:
[Unit]
Description=Example .NET Web API Application running on CentOS 7
[Service]
WorkingDirectory=/var/www/FEEDER
ExecStart=/usr/bin/dotnet /var/www/FEEDER/FeedService.MVC.dll
SyslogIdentifier=dotnet-example
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Development
[Install]
WantedBy=multi-user.target
When I start the program I don't get any error.
The status shows this:
sudo systemctl status kestrel-hellomvc.service
● kestrel-hellomvc.service - Example .NET Web API Application running on CentOS 7
Loaded: loaded (/etc/systemd/system/kestrel-hellomvc.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2017-10-31 09:26:20 IST; 54min ago
Main PID: 20077 (code=exited, status=131/n/a)
Oct 31 09:26:20 avi-VirtualBox systemd[1]: Stopped Example .NET Web API Application running on CentOS 7.
Oct 31 09:26:20 avi-VirtualBox systemd[1]: Started Example .NET Web API Application running on CentOS 7.
Oct 31 09:26:20 avi-VirtualBox systemd[1]: kestrel-hellomvc.service: Main process exited, code=exited, status=131/n/a
Oct 31 09:26:20 avi-VirtualBox systemd[1]: kestrel-hellomvc.service: Unit entered failed state.
Oct 31 09:26:20 avi-VirtualBox systemd[1]: kestrel-hellomvc.service: Failed with result 'exit-code'.
Warning: kestrel-hellomvc.service changed on disk. Run 'systemctl daemon-reload' to reload units.
the owner of the folder is www-data and the permissions are 0755
What might be the problem?
Thanks
You should deploy your app myapp with:
sudo cp -a /home/user/myapp/bin/Debug/netcoreapp2.0/publish/* /var/aspnetcore/myapp
i.e. copy all files from the publish folder to the deployment location (*.json etc.).
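Exit status 131 in the journal gives little to go on. A general troubleshooting step (not from this thread) is to run the app interactively as the service user, so the actual startup error prints to the terminal instead of being swallowed by systemd:

```shell
# Run the app exactly as the unit does, but in the foreground,
# using the WorkingDirectory and ExecStart values from the unit file
cd /var/www/FEEDER
sudo -u www-data /usr/bin/dotnet /var/www/FEEDER/FeedService.MVC.dll
```

A missing *.deps.json or appsettings file in the deployment directory (hence the deploy-from-publish advice above) is a common cause of this kind of immediate exit.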