How can I split the Dancer error log by day? - perl

I am trying to use Dancer and Starman for my website, and I have succeeded in directing the error log to a file. Of course, I could run a script to move the error log every day, but I would like to know whether there is an existing method or CPAN module that solves this problem.
Thanks~

Do not reinvent the wheel; you will repeat errors of the past that have already been fixed.
Use logrotate. It is a Unix tool designed for exactly this kind of task.
To rotate your logs, you would usually create a logrotate config for your task in /etc/logrotate.d/.
For example, to rotate daily and keep your logs for 14 days:
# /etc/logrotate.d/dancer-error-log
/path/to/my/dancer-error.log {
    # rotate once per day
    daily
    # keep 14 rotated files before discarding the oldest
    rotate 14
    # recreate the log file with these permissions and ownership
    create 0660 mydanceruser mydancergroup
}
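One caveat: after logrotate renames the file, a long-running Starman process will usually keep writing to the old, renamed file through its open file handle. If you hit that, the simplest option is logrotate's copytruncate directive, which copies the log and truncates the original in place, so the server never has to reopen it. A variant of the config above (the small copy-then-truncate window means a few lines can be lost, which is usually acceptable for an error log):

# /etc/logrotate.d/dancer-error-log (copytruncate variant)
/path/to/my/dancer-error.log {
    daily
    rotate 14
    copytruncate
}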

Related

Robot Framework: Is there a way of checking the report.html even though the run paused?

Situation: Visual Studio Code (Browser library) runs a couple of .robot files (manually started).
Then it pauses because of an error...
At that point the process breaks and there is no final report.html.
If you stop the run, it doesn't generate a report.html, and that's not what you want. You actually want the results up to that point (or, better described: you still want the usual output.xml, log.html and report.html files).
You should be able to generate log.html and report.html using the rebot command; however, you need output.xml for this. output.xml is created when you run the tests, so when you break the run you will probably not have all the resources you need.
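For reference, a minimal rebot invocation looks like this (assuming an output.xml sits in the current directory):

rebot --log log.html --report report.html output.xml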
I would suggest assigning a test timeout to the test that causes the pause. When the timeout is reached, the test will be stopped automatically and you should have all the reports. You can also set it globally for all tests, e.g.:
*** Settings ***
Test Timeout    2 minutes

How to restart an exe when it exits in Windows 10?

I have a process in Windows which I run at startup. If that process somehow gets killed or stopped, I need to restart it again on Windows 10.
Is there any way? The process is an HTTP server which, if it somehow stops, I need to restart. I have tried writing a PowerShell script that checks the tasklist status of the process and restarts it if it is not found, but that is not a good way. Please suggest a better way to do it.
I have a Go exe; under a particular scenario my process gets killed or stopped, and I need to start it up again automatically, immediately after the exe is killed. What is the best way to achieve this?
I will give you a brief rundown. You can enable Audit Process Termination in the Local Group Policy of the machine. In your case, success audits would be enough. (The screenshot that accompanied the original answer showed Windows 7; the location of the setting may change with the OS.)
Now every time a process gets terminated, a success event will be generated and written to the Security event log.
This allows you to create a Task Scheduler task that triggers on the generation of this event and calls a script that runs the process again. Simple, right?
Well, you might have some trouble setting that task up, especially when you want to pass details about the generating event to the script. This should help you get through that.
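As a rough sketch of the wiring (the task name, executable path, and XPath filter are assumptions; event ID 4689 is the "a process has exited" success audit):

schtasks /Create /TN "RestartMyServer" /TR "C:\srv\myserver.exe" ^
    /SC ONEVENT /EC Security ^
    /MO "*[System[(EventID=4689)]] and *[EventData[Data[@Name='ProcessName']='C:\srv\myserver.exe']]"

This creates a task that fires whenever the Security log records the termination of that executable, and simply starts it again.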
You can use Task Scheduler for this purpose. There is an option, "restart on failure", which can be selected so that whenever your process fails it will be restarted.
Reference: https://social.technet.microsoft.com/Forums/windowsserver/en-US/4545361c-cc1f-4505-a0a1-c2dcc094109a/restarting-scheduled-task-that-has-failed?forum=winserverManagement

Best way to send email when PHP process dies

I wrote a quick PHP page to handle 502 requests. Nginx redirects to this page when a 502 is encountered, and an email is fired off.
The problem is, most of the time a 502 is encountered it is because PHP has died, so writing to the DB and sending an email using PHP is no longer possible. Tweaks to PHP-FPM settings have done a lot to help (restarting PHP, etc.), but I'd still like a fallback.
There are numerous ways to send an email outside of PHP, but I am curious what others out there are doing with good success. I'd like to keep it simple, for configuration (i.e. not having yet another complex dependency to worry about on the servers) and reliability reasons.
Googling and searching SO didn't turn up much, probably because "dies" and "fail" bring back a lot of false positives for my scenario.
What about using a cronjob (bash-based) to parse the error_log file periodically (every x hours) and send an email (mutt/mail) when it finds something like "resuming normal operations" logged within the last period (x hours)? I think it is simple and effective...
[Thu Dec 27 14:37:52 2012] [notice] caught SIGTERM, shutting down
[Thu Dec 27 14:37:53 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.6-2~precise+1 configured -- resuming normal operations
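A minimal sketch of that idea (the log path, match string, and address are assumptions; it remembers how far it has read so each hit is mailed only once):

#!/bin/sh
# Hypothetical watchdog: mail any restart notices appended since the last run.
LOG=/var/log/apache2/error.log
STATE=/var/tmp/php-watchdog.offset

LAST=$(cat "$STATE" 2>/dev/null || echo 0)
TOTAL=$(wc -l < "$LOG")
if [ "$TOTAL" -gt "$LAST" ]; then
    NEW=$(tail -n +"$((LAST + 1))" "$LOG" | grep "resuming normal operations")
    [ -n "$NEW" ] && echo "$NEW" | mail -s "PHP restarted" ops@example.com
fi
echo "$TOTAL" > "$STATE"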
UPDATE:
@Brian As @takeshin says, cronjobs can run as often as every minute if you want, but some sysadmins could bite you... :|
Here is what I've ended up doing. I've not rolled it out to our prod servers yet, but all testing thus far looks good.
Nginx does not support CGI natively, so you need another means to do it. thttpd fits the bill nicely. There is a good write-up on the nginx wiki showing how to use it.
I configured thttpd with the following:
dir=/var/www/htdocs
user=thttpd
logfile=/var/log/thttpd.log
pidfile=/var/run/thttpd.pid
port=8000
cgipat=**.cgi
And added this to my nginx config:
error_page 502 @thttpd;
location @thttpd {
    include proxy.include;
    proxy_pass http://127.0.0.1:8000;
}
Finally, I created a basic CGI script that calls PHP on the command line, passing in my already-written PHP script. This was an ideal solution for me because that script was already set up to log to our alerts table and fire off an email. It is also real-time, as the script executes as soon as nginx returns a 502 (and subsequent 502s will not hammer me with emails, per the logic of the script).
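The wrapper can be as small as this sketch (the file names and PHP binary path are assumptions):

#!/bin/sh
# alert502.cgi - hypothetical wrapper: emit a CGI response header,
# then run the existing PHP alert script via the CLI interpreter.
echo "Content-type: text/html"
echo ""
/usr/bin/php /var/www/htdocs/alert502.php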
I was able to run some simulation tests by forcing nginx to return a 502 (see more here).
I'm going to continue tweaking this, but I'm pretty happy with the relative ease of deploying it and that I could re-use existing code.
We have a dual solution.
We use a shell script to send out email notifications if PHP dies. In the script, we check whether the PHP service is running with a shell command; if it is not running, we fire off a shell command to send an email.
This is all a few lines of shell script. Not too hard.
Of course, set it up in cron.
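A sketch of that check (the process name and address are assumptions; on some systems the master process is named php-fpm7.x or php5-fpm):

#!/bin/sh
# Hypothetical cron job: alert if no php-fpm process is found.
if ! pgrep -x php-fpm >/dev/null 2>&1; then
    echo "php-fpm is not running on $(hostname)" \
        | mail -s "PHP down on $(hostname)" ops@example.com
fi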

Get Chef to execute a mongodb script after mongodb has started

We're currently using Chef to provision our servers, and we want our recipe/cookbook to automatically add some data to the Mongo database once it is installed and running.
This is where we start to run into problems. We were using an execute resource to run the mongo script like this:
execute "install-mongodb-config" do
command "mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
action :run
end
This part of the recipe always failed no matter what we tried! I won't get into the details of everything we tried here (unless I need to), but let's just say I've exhausted all the possibilities of subscribes and notifies (I think).
The problem originates from the fact that we are using mongodb::10gen_repo to install MongoDB. The recipe exits when apt-get installs the package, and then Chef continues on to execute more resources.
We have tried executing the above resource directly after mongodb::10gen_repo, but it seems that mongodb is not yet available, so the mongo shell cannot connect and run the script. The error we see looks like this:
MongoDB shell version: 2.0.2
Thu Sep 6 18:40:45 ReferenceError: setTimeout is not defined mongotest.js:2
failed to load: mongoAddConfig.js
Nothing we tried could get around this in a nice Chef way. What we resorted to was replacing the execute resource with the following:
execute "install-mongodb-config" do
command "sleep 60; mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
action :run
end
This just makes the command sleep for 60 seconds before the mongo script is run. I know this isn't the right way to do it, but it works for now.
Can anyone suggest the right way to do this? I have a feeling I will need to talk to the people who created the mongodb cookbook and request a feature!
First of all, remove the sleep 60. This can be done by Chef: all resources have common attributes, and retries and retry_delay are among them. So the easiest way would be:
execute "install-mongodb-config" do
command "mongo some_command"
action :run
retries 6
retry_delay 10
end
If you have more than 2-3 places where you have to run some command against the mongo database, consider creating an LWRP similar to the one in this mongodb cookbook (particularly, check the libraries/mongodb.rb file). You can hide the logic that waits for the server to respond there.
Is it important that the same Chef run that installs the software also injects the initial configuration? The 'chefly' method of constructing cookbooks and recipes is to make them idempotent, guarding each resource so that they can be run over and over again without producing unintended results.
In this particular case, I would limit the first recipe to just installing and starting up MongoDB; it would do nothing if it saw that MongoDB was already running on the host. Then I'd have another recipe that runs only if it sees that Mongo has been set up and is running. It would query MongoDB to see whether the initial configuration had been done. If so, it would simply return; if not, it would run your configuration routine, as sketched below.
In this way, these recipes could run all the time, anytime, on your machine. Even if someone uninstalled MongoDB, Chef would get around to ensuring that it was set back up again and pristine.
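A sketch of that guard using not_if (the settings collection and the marker query are assumptions; the point is only that the guard command exits 0 once the data is already present, so the resource is skipped on later runs):

execute "install-mongodb-config" do
  command "mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
  # Skip the resource when the marker data already exists:
  # quit(0) means "already configured", which not_if treats as true.
  not_if "mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} --quiet --eval 'quit(db.settings.count() > 0 ? 0 : 1)'"
end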
So, I don't know much at all about Chef, but your problem seems to be that you try to connect immediately after bringing the server up.
Servers are not immediately available when you bring them up, since there is a bit of overhead that goes into electing a primary, getting all the server statuses, etc.
You can recreate this without Chef by trying to bring up a replica set and immediately connecting to it in a simple script, so it's not Chef-specific.
Not sure if there is a way around the server startup lag since bringing up a primary is expected to be a relatively infrequent occurrence compared to just adding nodes to a set.
The only cleaner potential solution I see is configuring a longer timeout for the connection to be formed. You can find out how to do this in the MongoDB documentation here: http://www.mongodb.org/display/DOCS/Connections
The flag of interest for you is likely connectTimeoutMS
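For example, a connection string carrying a longer timeout might look like this (host, database, and value are placeholders):

mongodb://db1.example.net:27017/mydb?connectTimeoutMS=30000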

Where can I find application runtime errors using Nginx, Starman, Plack and Catalyst?

I have managed to successfully serve my Catalyst app on my development machine using Plack + Starman, with a daemon script I based on one found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application's STDERR is supposed to be logging. It isn't reaching nginx (I suppose that makes sense), but I can't find much documentation on where Starman may be logging it, if anywhere. I did have a look at Plack's middleware modules, but only saw options for access logs.
Can someone help me?
It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
1. Replace Catalyst::Log with something like Catalyst::Log::Log4perl, or simply a subclass of Catalyst::Log with an overridden _send_to_log; either one will allow you to send the logging output somewhere other than STDERR.
2. Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this; it wasn't very pleasant. Logfiles are harder than they look.
3. Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the init script that is sending STDERR to /dev/null; it is Starman.
If you look at the source code for Starman, you will discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT and STDIN and redirects them to /dev/null, AND that it honours the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as the names of files to use instead.
So if you start your catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
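For example, a sketch (the log path is an assumption, and your daemon script may set the variable in its own way):

MX_DAEMON_STDERR=/var/log/myapp/starman.stderr starman --background myapp.psgi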
Today, Starman has an --error-log command-line option which allows you to redirect error messages to a file.
See the starman documentation:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
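A minimal invocation combining the two (the paths are placeholders):

starman --daemonize --error-log /var/log/myapp/starman-error.log myapp.psgi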