Add a site on Caddy Web Server without restart - webserver

I'm setting up a static HTML web server where users can upload their own files and create their own websites.
I'm using Caddy as the web server, and as far as I understand, every time a new host is added to the Caddyfile, Caddy has to be restarted to start serving the new site.
I wonder if there's a way around that, so that the other sites won't be affected, or some other way to pick up the new site without restarting Caddy entirely.

I got an answer from Matt Holt, the creator of Caddy:
You could signal Caddy with USR1, which does a zero-downtime reload.
Caddy can easily be reloaded. From the terminal, run the following commands:
1. Get the PID of the running Caddy instance:
ps -C caddy
  PID TTY          TIME CMD
 1392 pts/0    00:00:00 caddy
2. Send the kill command with the USR1 signal:
kill -s USR1 1392
And that's it. Caddy will be reloaded without affecting any other site.
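If you'd rather not look up the PID by hand, the two steps can be combined. A minimal one-liner sketch, assuming a single running caddy process and that pgrep is available:
# Send USR1 to the running caddy process for a zero-downtime reload
kill -s USR1 "$(pgrep -x caddy)"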


Which config files could disable the automatically starting ssh server, so a headless connect becomes impossible?
I need to know which config files could prevent the ssh server from starting normally at boot.
I believe you are looking for the following commands (assuming you are running the latest version of Raspbian):
sudo systemctl stop sshd
sudo systemctl disable sshd
sudo systemctl mask sshd
stop halts the service immediately; disable prevents the service from starting at boot; additionally, mask makes it impossible to load the service at all.
Digging deeper into what each command does: on modern Linux distributions, each service has a configuration file called a unit file, usually stored in /usr/lib/systemd/system. These are essentially the evolution of the old init scripts for starting services.
The stop command calls the sshd.service unit file with a stop parameter in order to shut down the server.
The disable (or enable) command removes (or creates) a symlink to the unit file in the directory systemd looks in when booting services (usually /etc/systemd/system).
systemctl mask creates a symlink to /dev/null instead of the unit file, so the service can't be loaded.
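You can see the effect directly. A quick check, assuming the service name sshd as above:
# disable removes the boot-time symlink; mask points the unit at /dev/null
sudo systemctl mask sshd
ls -l /etc/systemd/system/sshd.service   # -> /dev/null after masking
systemctl is-enabled sshd                # prints "masked"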

How to configure telnet service for yocto image

telnet is necessary in order to maintain compatibility with older software in this case. I'm working with the Yocto Rocko 2.4.2 distribution. When I try to telnet to the board, I get the oh-so-detailed message "connection refused".
Using the method here and the options here, I modified the busybox configuration as suggested. When the board is booted up and logged in, executing telnet spits out usage info, and a quick directory check shows that telnet is installed at /usr/bin/telnet. My guess is that the telnet client is installed but the telnet server is not running?
I need to get telnetd to start manually at least, so I know it will work once an init script is in place. The second reference link suggests that 'telnetd will not be started automatically though...' and that there will need to be an init script. How can I start telnetd manually for testing?
systemctl enable telnetd
returns: Unit telnetd.service could not be found
UPDATE
telnetd is located in /usr/sbin/telnetd. I was able to manually start the telnetd service for testing from there. After manually starting the service, telnet login now works. I'm looking into writing a systemd init script to auto-start the telnetd service, so I suppose this issue is closed, unless anyone would like to offer up detailed telnet busybox configuration and setup steps as an answer to 'How to configure telnet service for yocto image'.
UPDATE
Perhaps there is something more? I created a unit file that looks like this:
[Unit]
Description=auto start telnetd
[Service]
ExecStart=/usr/sbin/telnetd
[Install]
WantedBy=multi-user.target
On reboot, systemd indicates the process executed and succeeded:
systemctl status telnetd
...
Process: 466 ExecStart=/usr/sbin/telnetd (code=exited, status=0/SUCCESS)
...
The service is not running, however: netstat -l does not list it, and telnet login fails. Something I'm missing?
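The code=exited, status=0/SUCCESS line suggests one possibility: the BusyBox telnetd applet forks into the background by default, so under systemd's default Type=simple the unit is considered finished as soon as the parent process exits. A minimal sketch of a unit that keeps it in the foreground, assuming the BusyBox applet and its -F (foreground) flag:
[Unit]
Description=BusyBox telnet server
After=network.target

[Service]
# -F keeps telnetd in the foreground so systemd can supervise it
ExecStart=/usr/sbin/telnetd -F

[Install]
WantedBy=multi-user.target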
Last update... I think
So, following this post, I managed to get the telnet.socket service to start up on reboot.
systemctl status telnet.socket
shows that it is running and listening on port 23. Now, however, when I try to remote in with telnet, I get
Connection closed by foreign host
Everything I've read so far talks about the xinetd service (which I do not have...). What is confusing is that if I just navigate to /usr/sbin/ and execute telnetd, the server is up and running and I can telnet into the board, so I do not believe I'm missing any utilities or services (like the above-mentioned xinetd), but something is still not being configured correctly. Any ideas?

Getting Access to PID in install4j / Mirth Connect

I am using Mirth Connect, which uses install4j to launch the program.
I am using the mcservice program and would like to get the PID of the launched application so that I can monitor it. How do I do this? Right now the service only has the standard start, status, etc. commands.
Back in 2011 there seemed to be some indication that pid monitoring would be coming soon: http://www.mirthcorp.com/community/forums/showthread.php?t=5509
If you need to get the PID of the process, you can do it with a terminal command:
$ pgrep mcservice
Also, if the PID is stored in a file, you can run the following command:
$ kill -9 `cat /path/to/file.pid`
Choose whichever solution fits your case. If you need something more complex, you can view this link: Simple Process Checker To Find Out If A Service Is Running or Not
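For simple monitoring, pgrep's exit status is enough to tell whether the service is alive. A minimal sketch, assuming the process is named mcservice as above:
#!/bin/sh
# pgrep exits 0 if at least one matching process exists
if pgrep -x mcservice > /dev/null; then
    echo "mcservice is running (PID $(pgrep -x mcservice | head -n1))"
else
    echo "mcservice is NOT running" >&2
    exit 1
fi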

What is veewee waiting for when it's waiting for ssh login?

When veewee displays the following message, what exactly is it waiting on? Waiting for ssh login on 127.0.0.1 with user veewee to sshd on port => 7222 to work, timeout=10000 sec
As far as I can tell, veewee has put up an SSH server on port 7222 on the host and is waiting on that. This would mean that something in the guest is going to connect back to it. However, I can't figure out what that thing might be, and thus I can't debug further.
Further details
I'm trying to build a virtualbox image for vagrant with the CentOS-6.3-x86_64-minimal template. My steps:
bundle exec veewee vbox define 'ejs-centos6.3-1' 'CentOS-6.3-x86_64-minimal'
wget http://mirror.symnds.com/distributions/CentOS-vault/6.3/isos/x86_64/CentOS-6.3-x86_64-minimal.iso
bundle exec veewee vbox build 'ejs-centos6.3-1'
The CentOS install appeared to run without error, but it's stuck waiting for the ssh login.
You're right that there's an SSH server listening on port 7222, but it's on the guest (VM), not the host.
The host (veewee) is waiting to connect to it. This SSH service is supposed to become available when the VM install process finishes; that's one of the cues veewee uses to decide that the setup went fine and the VM is ready.
If veewee blocks and never gets this SSH connection, there could be multiple reasons:
The VM setup went wrong and something prevents it from finishing successfully. Check the veewee output and the VirtualBox graphical console that should have opened when you launched veewee vbox build.
Something at the network level is preventing your host from connecting to the VM.
The VM image doesn't have sshd installed, and/or the veewee box definition files (in veewee/definitions/ejs-centos6.3-1/) are missing the instructions to install the ssh package.
You should try to log in to the VM using the VirtualBox console window and check whether there's an ssh package installed (rpm -qa | grep openssh-server) and a process named sshd running.
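A quick pair of checks, from inside the guest and from the host, assuming the forwarded port 7222 from the message above:
# Inside the guest (VirtualBox console):
rpm -qa | grep openssh-server    # is the SSH server package installed?
ps aux | grep [s]shd             # is the daemon actually running?
# From the host: does the forwarded port answer at all?
nc -vz 127.0.0.1 7222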
I've run veewee against CentOS 7 built with the GUI on, and it got stuck on Anaconda asking for the source of packages. I checked ks.cfg and it was pointing to a dead resource (404). After pointing it to a valid URL, it went through.

Avoid a GUID refresh in OBIEE

Every time I deploy the OBIEE metadata repository using Enterprise Manager on Linux, I am unable to log in to Answers (/analytics) afterwards. It works again after I refresh the GUIDs. Is there a way to avoid refreshing the GUIDs?
Open the RPD offline before deployment and go to Manage -> Identity -> Users.
Check if your users are there in the RPD; if so, remove them. Now deploy your RPD on your target instance. This should go fine, and you won't have to reset GUIDs.
Yes, there is:
1. Stop the BI Server:
opmnctl stopproc ias-component=coreapplication_obis1
2. Back up the original repository:
cp repository1.rpd repository2.rpd
3. Modify repository1.rpd on a Windows machine and copy it back to the Linux machine running OBIEE.
4. Start the BI Server:
opmnctl startproc ias-component=coreapplication_obis1
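For repeated deployments, the same sequence can be scripted. A minimal sketch, assuming opmnctl is on the PATH and that the edited RPD has been staged at a hypothetical path of your choosing:
#!/bin/sh
# Stop the BI Server, back up and swap in the edited RPD, then restart
opmnctl stopproc ias-component=coreapplication_obis1
cp repository1.rpd repository2.rpd                  # keep a backup of the original
cp /path/to/edited/repository1.rpd repository1.rpd  # hypothetical staging path
opmnctl startproc ias-component=coreapplication_obis1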
Stop Services in Linux:
1. Stop opmnctl:
Navigate to /instances/instance1/bin
./opmnctl stopall
2. Stop the Managed Server (bi_server1):
Navigate to /user_projects/domains/bifoundation_domain/bin
./stopManagedWebLogic.sh bi_server1
3. Stop the Admin Server (weblogic):
In the same location as above:
./stopWebLogic.sh
4. Stop the Node Manager:
Just kill the Node Manager process:
ps -ef | grep node    # to find the Node Manager PID
kill -9 <pid>
Note: If the Managed and Admin Servers did not stop properly, you can kill them the same way:
ps -ef | grep weblogic
kill -9 <pid>
Start Services:
1. Start the Node Manager:
Navigate to /wlserver_10.3/server/bin
nohup sh startNodeManager.sh &
2. Start the Admin Server:
Navigate to /user_projects/domains/bifoundation_domain/bin
nohup sh startWebLogic.sh -Dweblogic.management.username=weblogic -Dweblogic.management.password=weblogic123 > admin_server.log &
Tip: you can watch the log with the tail command (Ctrl-C to exit):
tail -f admin_server.log
3. Start the Managed Server (bi_server1):
In the same location as above:
nohup sh startManagedWebLogic.sh bi_server1 http://<admin_host>:<admin_port> > managed_server.log &
4. Start opmnctl:
Navigate to /instances/instance1/bin
./opmnctl startall
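Put together, the start sequence might look like the sketch below; the directories and the <admin_host>:<admin_port> placeholder are taken from the steps above and must be adjusted to your installation:
#!/bin/sh
# Start the OBIEE stack in order: Node Manager, Admin Server,
# Managed Server, then the opmn-managed components.
cd /wlserver_10.3/server/bin && nohup sh startNodeManager.sh &
cd /user_projects/domains/bifoundation_domain/bin
nohup sh startWebLogic.sh -Dweblogic.management.username=weblogic -Dweblogic.management.password=weblogic123 > admin_server.log 2>&1 &
# In practice, wait for the Admin Server to finish booting before this step
nohup sh startManagedWebLogic.sh bi_server1 http://<admin_host>:<admin_port> > managed_server.log 2>&1 &
cd /instances/instance1/bin && ./opmnctl startall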