Necessity of cloning classes for background processes running through rake?

I have a resque worker class which works with ActionMailer and another that works with Mail directly. Here's a short example:
class NotificationWorker
  def self.perform(id)
    Mailer.delivery_method.settings = {
      # custom settings here
    }
    # Working with Mailer to deliver mails
  end
end
Assuming that there may be two workers running NotificationWorker at the same time, I am not sure whether they interfere with each other. From my understanding, working directly on the Mail class would break functionality, because both mailers would end up using the same settings instead of their assigned ones. A solution would be to create a clone of such a class (which works with ActionMailer, but not with Mail, AFAIK).
According to the Resque docs:
Resque workers are rake tasks that run forever. They basically do
this:
start
loop do
  if job = reserve
    job.process
  else
    sleep 5 # Polling frequency = 5
  end
end
shutdown
I am not familiar with rake beyond its basic usage in Rails apps, so can anyone enlighten me?

I'm not quite sure what you're trying to achieve here. I have a Resque system which queues and delivers automated emails. I have it set up like this:
1) env.rb
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {...}
2) notification_job.rb # it's the job, not the worker, that needs creating.
class NotificationWorker
  def self.perform(id)
    # Working with Mailer to deliver mails
  end
end
If you really need to work with Mailer directly and each worker needs different settings, then you may need to create a YAML file which maps to a variable that you give the worker on startup.
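A minimal sketch of that idea, assuming a hypothetical config/mailer_profiles.yml and a MAILER_PROFILE environment variable set when the worker is started (the file name, variable name and keys are made up, not from your setup):

# config/mailer_profiles.yml (hypothetical)
#   newsletter:
#     address: smtp-bulk.example.com
#     port: 587
#   transactional:
#     address: smtp.example.com
#     port: 587

require 'yaml'

class NotificationWorker
  def self.perform(id)
    profile  = ENV.fetch('MAILER_PROFILE', 'transactional')
    settings = YAML.load_file('config/mailer_profiles.yml').fetch(profile)

    # Apply this worker's settings before delivering
    Mailer.delivery_method.settings = settings.symbolize_keys
    # Working with Mailer to deliver mails
  end
end

Each worker process is then started with its own profile, e.g. MAILER_PROFILE=newsletter QUEUE=notifications rake resque:work.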

(Laravel 5) Monitor and optionally cancel an ALREADY RUNNING job on queue

I need the ability to monitor, and to cancel, an ALREADY RUNNING job on the queue.
There are a lot of answers about deleting QUEUED jobs, but none about an already running one.
This is the situation: I have a "job", which consists of HUNDREDS OF THOUSANDS of rows in a database that need to be queried ONE BY ONE against a web service.
Every row needs to be picked up, queried against a web service, the response stored, and its status updated.
I already had that working as a Command (launching from / outputting to the console), but now I need to implement queues in order to allow piling up more jobs from more users.
So far I've seen Horizon (which doesn't run on Windows due to missing process control libs). However, judging from some demos around, it lacks (I believe) a couple of things I need:
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
Ability to CANCEL an ALREADY RUNNING job.
I also considered the option of generating EACH REQUEST as a new job, instead of treating a "job" as the whole collection of rows (this would get around the timeout issue), but that would give me a Horizon "pending jobs" list of hundreds of thousands of records per job, and that would kill the browser (I know Redis can handle this without breaking a sweat). Furthermore, I guess it is not possible to cancel "all jobs belonging to X tag".
I've been thinking about hitting an API route, firing the job and decoupling it from the app, but I'm seeing that this requires forking processes.
For the ability to cancel, I would keep a database record per job_id, and when the user hits an API endpoint to cancel a job, I'd mark it as "halted". On every loop the job would check its status, and if it finds "halted", kill itself.
If I've missed any aspect, just holler and I'll add it or clarify it.
So I'm asking for advice here since I'm new to Laravel: how could I achieve this?
So I finally came up with this (a bit clunky) solution:
In Controller:
public function cancelJob()
{
    $jobs = DB::table('jobs')->get();
    # I could use a specific ID and user owner filter, etc.
    foreach ($jobs as $job) {
        DB::table('jobs')->delete($job->id);
    }
    # This is a file that... well, it's self explaining
    touch(base_path(config('files.halt_process_signal')));
    return "Job cancelled - It will stop soon";
}
In the job class (inside the Model::chunk() callback):
# CHECK FOR HALT SIGNAL AND [OPTIONALLY] STOP THE PROCESS
if ($this->service->shouldHaltProcess()) {
    # build stats, do some cleanup, log, etc...
    $this->halted = true;
    $this->service->stopProcess();
    # This FALSE is what makes the chunk() method stop looping
    return false;
}
In service class:
/**
 * Checks the existence of the 'Halt Process Signal' file
 *
 * @return bool
 */
public function shouldHaltProcess() : bool
{
    return file_exists($this->config['files.halt_process_signal']);
}

/**
 * Stop the batch process
 *
 * @return void
 */
public function stopProcess() : void
{
    logger()->info("=== HALT PROCESS SIGNAL FOUND - STOPPING THE PROCESS ===");
    $this->deleteHaltProcessSignalFile();
    return;
}
It doesn't look very elegant, but it works.
I've surfed the whole web, and many go for Horizon or other tools that don't fit my case.
If anyone has a better way to achieve this, feel free to share.
Laravel queues have three important config options:
1. retry_after
2. timeout
3. tries
See more: https://laravel.com/docs/5.8/queues
Dynamically configurable timeout (the whole job may take more than 12
hours, depending on the number of rows to process on the selected job)
I think you can configure timeout + retry_after to around 24 hours.
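A rough sketch of where those settings live in a Laravel 5.x app (the ProcessRows class name and all values are only illustrative; keep retry_after larger than the longest expected run so the worker does not re-dispatch a job that is still processing):

// config/queue.php (database connection shown; the same idea applies to redis)
'database' => [
    'driver'      => 'database',
    'table'       => 'jobs',
    'queue'       => 'default',
    'retry_after' => 90000,   // ~25 hours, must exceed the job's timeout
],

// app/Jobs/ProcessRows.php (illustrative job class)
use Illuminate\Contracts\Queue\ShouldQueue;

class ProcessRows implements ShouldQueue
{
    public $timeout = 86400;  // allow up to 24 hours of processing
    public $tries   = 1;      // do not retry a half-finished batch automatically

    public function handle()
    {
        // chunked row processing as in the solution above
    }
}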
Ability to CANCEL an ALREADY RUNNING job.
Delete the job from the jobs table.
Kill the process by its process id on your server.
Hope it helps you :)

How to run AutoHotKey scripts on several PCs at once, controlled from one place?

For some load testing simulations, I'm looking at scripting with AHK 1.1. The issue is we have a client-server setup with multiple workstations so I'd really like to be able to trigger the same script (or even variations) to run on multiple PCs at once, to accurately simulate multiple users all hammering the system.
Even more useful would be to make sure the same test happens at exactly (within some tolerance) the same time, to check this doesn't cause problems.
What would be the best way to do this? Do it from within AHK itself, or use some separate remote-control tool to let me fire off scripts on PCs of my choosing?
With AHK you will need scripts acting as server and clients, so both need to be running no matter the method used...
As to TCP/IP, you can do this; you just need to find out whether you have any usable/open ports your scripts can use...
I just helped an Australian guy the other day set up a nicely working set of server/client scripts
using the Socket class by Bentschi, looking something like this:
Server:
;Server
#include Socket.ahk
myTcp := new SocketTCP()
myTcp.bind("addr_any", 54321)
myTcp.listen()
myTcp.onAccept := Func("OnTCPAccept")
return

OnTCPAccept(this)
{
    newTcp := this.accept()
    newTcp.onRecv := func("OnTCPRecv")
    newTcp.sendText("Connected")
}

OnTCPRecv(this)
{
    msgbox % this.recvText()
}
Client:
;Client
#include Socket.ahk
myTcp := new SocketTCP()
myTcp.connect("your server's A_IPAddress1", 54321) ; local
myTcp.onRecv := Func("OnTcpRecv")
return

OnTcpRecv(this)
{
    ToolTip % this.RecvText()
}
But to use or set something like this up, you may need to know which ports are usable on the network, or have the ability to change settings as needed.
The speed of the TCP/IP scripts is in the low milliseconds (under 20 ms on my network), so there is no real tolerance to speak of.
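To trigger every machine at the same moment with this setup, the server script could keep each accepted socket and broadcast an agreed-upon command to all of them in one go, and the clients launch the test when they receive it. A rough sketch on top of the snippets above (the "RUN" string, the F5 hotkey and the StartLoadTest label are made up for illustration):

; --- server side (sketch) ---
; clients := [] must run in the auto-execute section, i.e. before the script's first return
clients := []

OnTCPAccept(this)
{
    global clients
    newTcp := this.accept()
    newTcp.onRecv := Func("OnTCPRecv")
    newTcp.sendText("Connected")
    clients.Push(newTcp)          ; remember every connected workstation
}

F5::                              ; press F5 on the controlling PC to start the test everywhere
for each, client in clients
    client.sendText("RUN")
return

; --- client side (sketch) ---
OnTcpRecv(this)
{
    if (this.recvText() = "RUN")
        Gosub, StartLoadTest      ; placeholder label for your actual test routine
}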
Hope it helps
As Sidola wrote, you can check a shared folder for some file or folder. You can use the
IfExist command for it. Here is an example:
Loop
{
    IfExist, c:\a.txt
    {
        break
    }
}
;code to execute if c:\a.txt exists comes below.
MsgBox, 1
You can also add a Sleep command to put less stress on the HDD, as in the code below:
Loop
{
    IfExist, c:\a.txt
    {
        break
    }
    Sleep, 1000
}
;code to execute if c:\a.txt exists comes below.
MsgBox, 1
Also, always use AutoHotkey and its documentation from http://ahkscript.org/ (the current, up-to-date version and new official website)! AutoHotkey and its documentation from autohotkey.com are outdated and you may have some problems using them!
I can't provide actual code, but there is a way to inject a service into your LAN machines through the Admin$ share and control it remotely. That way AHK wouldn't need to run on the LAN computers all the time.
I don't know how exactly this could be done, but PsShutdown does it to hibernate LAN PCs which is normally impossible.
In case you actually manage to do it, it would be great if you could share it.

Forwarding AnyEvent::Log messages to a callback if certain requirements are met

I am working on a project that uses AnyEvent::Log in the main program as well as in several dependent modules/packages. I currently have each module writing to its own context, and all contexts are added to the main program's context as slaves. This project is part of a much larger project, and in addition to writing out a local log file, there are certain messages that I would like to send to a remote program which will then be responsible for presenting the messages to users.
The problem is that in order to send to the remote program, I have to have a piece of information that is only available from the main program, so it's not feasible to just implement a method at the package level to send messages. The piece of information I need is more or less a transaction id, and the log messages are interesting events from a particular transaction.
The main program has two contexts (main, secondary). The messages I am interested in will either come from the secondary ctx OR one of the package/module contexts. I am interested in only sending info to crit level messages to users, but ONLY WHEN the txID exists in the main program. I ALWAYS want messages to be written to my local log file, regardless of whether or not a deployment is running. I would like this to be something that I set up in the main program rather than in a module, because the modules are tasked to do certain things and shouldn't even be aware of the fact that there is an ID associated with the task at hand.
Here is a quick breakdown of the log configuration specific code in the main program.
# Immediately after Proc::Daemon::Init
my $logger = AnyEvent::Log::ctx "desman";

# configure is done before daemonization to allow for --nodaemon
sub configure {
    my ( $level, $file ) = @_;
    $AnyEvent::Log::FILTER->level($level);
    $AnyEvent::Log::LOG->log_to_file($file);
}

sub log_event {
    ... logic to send messages as tx event ...
}

sub worker_init {
    threads->create(sub {
        $logger->attach( my $worklog = AnyEvent::Log::ctx "worker" );
        ... more stuff for worker specifics ...
    });
}
Ideally, I would be able to use one or both of log_cb and fmt_cb to handle the formatting and sending of messages to the remote program using the log_event sub. I have tried a few different things, and so far I'm stuck.
# doesn't seem to do anything
$logger->fmt_cb( sub { ... } );
$logger->log_cb( sub { ... } );

# broke everything
$AnyEvent::Log::COLLECT->attach( my $evtlog = new AnyEvent::Log::Ctx
    fmt_cb => \&event_formatter,
    log_cb => \&log_event
);
$evtlog->levels('crit', 'warning', 'notice', 'info');
I've been searching around for more examples than what's in the docs, but haven't found much yet. Not much of a surprise there since AE::log is pretty much awesome as it is, but anything to help will be greatly appreciated.

How to setup email notification alert in IIS 6.0 web server when a file is uploaded via any ftp client?

I'm trying to setup email notification alerts in IIS 6 when a file is uploaded via any FTP client. Does anyone know how to accomplish this?
I found something similar but don't understand how to implement it:
http://forums.iis.net/t/1196793.aspx/1?How+to+add+email+notification+service+in+IIS+6+0+when+a+file+is+uploaded+via+FTP+
Does anyone have any insight on this?
function countFolders(strPath)
    dim objShell
    dim objFolder
    dim folderCount

    set objShell = CreateObject("shell.application")
    set objFolder = objShell.NameSpace(strPath)

    if (not objFolder is nothing) then
        dim objFolderItems
        set objFolderItems = objFolder.Items
        if (not objFolderItems Is Nothing) then
            folderCount = objFolderItems.Count
        end if
        set objFolderItems = nothing
    end if

    set objFolder = nothing
    set objShell = nothing
    countFolders = folderCount
end function
The post you're citing basically suggests this:
Create a script which checks the number of files in a folder (or folders), as you have.
Keep a running total of the number of files. You could save this value in a database or in another txt file.
If the number of files differs from the last time the check was run, send the email.
It suggests using Scheduled Tasks. This means an email is not sent exactly when the FTP is updated, but only when your script is executed. The good thing about Windows Tasks is that you can run them as often as you like. So, assuming you don't need an immediate notification, you could set your script to run once a minute, once every 10 minutes, or similar.
The problem with the above, though, is that if people are removing files as well, you'll probably get missed notifications. E.g., assuming you don't want to be notified when a file is removed: if my current count of files is 10, 3 are removed and 1 is added, then the next time the script runs I have 8 files, and there is no way to know that files were removed and added. In that case, you want to take note of the file names and paths and record them, so you can compare the existing paths to the previous paths!
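As a rough sketch of the count-compare-notify approach (paths, addresses and the SMTP server are placeholders; it reuses countFolders from above and sends mail through the standard CDO.Message object):

dim fso, countFile, prevCount, currCount
set fso = CreateObject("Scripting.FileSystemObject")
countFile = "D:\scripts\lastcount.txt"          ' placeholder path for the stored total

' read the count recorded on the previous run (0 if this is the first run)
prevCount = 0
if fso.FileExists(countFile) then
    prevCount = CLng(fso.OpenTextFile(countFile, 1).ReadLine)
end if

currCount = countFolders("D:\ftproot\uploads")  ' placeholder FTP upload folder

if currCount <> prevCount then
    dim objMessage
    set objMessage = CreateObject("CDO.Message")
    objMessage.Subject = "FTP upload folder changed"
    objMessage.From = "ftp-monitor@example.com"
    objMessage.To = "admin@example.com"
    objMessage.TextBody = "File count changed from " & prevCount & " to " & currCount
    objMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
    objMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "mail.example.com"
    objMessage.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
    objMessage.Configuration.Fields.Update
    objMessage.Send
end if

' remember the current count for the next scheduled run
fso.OpenTextFile(countFile, 2, true).WriteLine currCount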
I have just completed a very similar task, but I had an extra luxury: I wrote the FTP client which had to be installed on all client machines to send files to my FTP. This meant that in my FTP program I had an extra bit of code which did: OnUploadCompleted -> Send Notification Email.
You could create a service which uses the FileSystemWatcher class.
FileSystemWatcher listens to file system change notifications. The provided link has a good example of how to use the class.

Perl IPC - FIFO and daemons & CPU Usage

I have a mail parser perl script which is called every time a mail arrives for a user (using .qmail). It extracts a calendar attachment out of the mail and places the "path" of the file in a FIFO queue implemented using the Directory::Queue module.
Another perl script which reads the path of the calendar attachment and performs certain file operations on the local system as well as on the remote CalDAV server, is being run as a daemon, as explained here. So basically this script looks like:
my $declarations

sub foo {
    .
    .
}

sub bar {
    .
    .
}

while ($keep_running) {
    for (keep-checking-the-queue-for-new-entries) {
        sub caldav_logic1 {
            .
            .
        }
        sub caldav_logic2 {
            .
            .
        }
    }
}
I am using Proc::Daemon for running the script as a daemon. Now the problem is that this process has almost 100% CPU usage. What are the suggested ways to implement the daemon in a more standard, safer way? I am using pretty much the same code as mentioned in the linked example for Proc::Daemon usage.
I bet it is your for loop and checking for new queue entries.
There are ways to watch a directory for file changes. These ways are OS dependent but there might be a Perl module that wraps them up for you. Use that instead of busy looping. Even with a sleep delay, the looping is inefficient when you can have your program told exactly when to wake up by an OS event.
File::ChangeNotify looks promising.
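A minimal sketch of that approach, assuming the attachment paths land in a single spool directory (the directory, the .ics filter and process_attachment() are placeholders, not from the original code):

use File::ChangeNotify;

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['/var/spool/calendar-queue'],   # placeholder queue directory
    filter      => qr/\.ics$/,
);

# blocks until something happens instead of busy-looping
while ( my @events = $watcher->wait_for_events ) {
    for my $event (@events) {
        process_attachment( $event->path ) if $event->type eq 'create';
    }
}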
Maybe you don't want truly continuous polling. Is keep-checking-the-queue-for-new-entries a CPU-intensive part of the code, even when the queue is empty? That would explain why your processor is always busy.
Try putting a sleep 1 statement at the very top (or very bottom) of the while loop to let the processor rest between queue checks. If that doesn't degrade the program performance too much (i.e., if everyone can tolerate waiting an extra second before the company calendars get updated) and if the CPU usage still seems high, try sleep 2, sleep 5, etc.
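In the structure from the question, that would look roughly like this:

while ($keep_running) {
    # ... check the Directory::Queue for new entries and run the CalDAV logic ...
    sleep 1;   # pause between iterations so an empty queue does not spin the CPU
}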
cpan Linux::Inotify2
The kernel knows when files change and sends this information to your program, which then runs the sub. This should be better, because the program will run the sub only when a file actually changes.
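A minimal sketch of that, again assuming a single spool directory (the path and process_attachment() are placeholders):

use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# watch the queue directory and run the handler when a new file is fully written or moved in
$inotify->watch('/var/spool/calendar-queue', IN_CLOSE_WRITE | IN_MOVED_TO, sub {
    my $event = shift;
    process_attachment( $event->fullname );
});

1 while $inotify->poll;   # blocks in the kernel until something changes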