I've been honing the performance of a large, decades-old codebase I use for projects over the last few weeks, and it was suggested to me here that I should look at something like FastCGI or HTTP::Engine. I've found it impressively straightforward to make use of FastCGI, but there's one nagging question I've found mixed answers on.
Some documents I've read say you should never call exit on a script being run through FastCGI, since that harms the whole concept of keeping it loaded persistently. Others say it doesn’t matter. My code uses exit in a lot of places where it is important to make sure nothing keeps executing. For example, I have restricted access components that call an authorization check:
use MyCode::Authorization;

our $authorization = MyCode::Authorization->new();

sub administration {
    $authorization->checkCredentials();
    # ...Do restricted access stuff.
}
To make it as hard as possible for a coding error to let someone access those functions when they shouldn't, checkCredentials ends the process with exit() after generating a user-friendly response (a login page) if the user does not have the appropriate credentials. E.g.:
sub checkCredentials {
    # Logic to check credentials.
    if ($validCredential) {
        return 1;
    }
    else {
        # Build web response.
        # Then:
        exit;
    }
}
I've done it this way so that I don't accidentally overlook a code path that keeps executing and opens a security hole. At present, the calling routine can safely assume it gets control back from checkCredentials only if the right credentials were provided.
However, I’m wondering if I need to remove those calls to make good use of FastCGI. Is FCGI's $req->Finish() (or the equivalent in PSGI for HTTP::Engine) an adequate replacement?
Some documents I've read say you should never call exit on a script being run through FastCGI
You don't want the process to exit since the point of using FastCGI is to use a single process to handle multiple requests (to avoid load times, etc).
So what you want to do is override exit so that it ends your request-specific code, but not the FastCGI request loop.
You can override exit, but you must do so at compile-time. So use a flag to signal whether the override is active or not.
our $override_exit = 0;

BEGIN {
    *CORE::GLOBAL::exit = sub(;$) {
        die "EXIT_OVERRIDE\n" if $override_exit;
        CORE::exit($_[0] // 0);
    };
}
while (get_request()) {
    # Other setup...

    eval {
        local $override_exit = 1;
        handle_request();
    };

    my $exit_was_called = $@ eq "EXIT_OVERRIDE\n";
    log_error($@) if $@ && !$exit_was_called;
    log_error("Exit called\n") if $exit_was_called;

    # Other cleanup...
}
But that creates an exception that might be caught unintentionally. So let's use last instead.
our $override_exit = 0;

BEGIN {
    *CORE::GLOBAL::exit = sub(;$) {
        no warnings qw( exiting );
        last EXIT_OVERRIDE if $override_exit;
        CORE::exit($_[0] // 0);
    };
}
while (get_request()) {
    # Other setup...

    my $exit_was_called = 1;
    EXIT_OVERRIDE: {
        local $override_exit = 1;
        eval { handle_request() };
        log_error($@) if $@;
        $exit_was_called = 0;
    }

    log_error("Exit called\n") if $exit_was_called;

    # Other cleanup...
}
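For completeness, here is a hedged sketch of how the placeholder get_request()/handle_request() loop might be wired up with the FCGI module itself. handle_request() and log_error() are still the same placeholders as above, and the flag plus BEGIN block from the previous snippet are assumed to be in place:

use strict;
use warnings;
use FCGI;

my $req = FCGI::Request();

# Accept() blocks until the web server hands over the next request and
# returns a negative value when the process should shut down.
while ($req->Accept() >= 0) {
    my $exit_was_called = 1;
    EXIT_OVERRIDE: {
        local $override_exit = 1;
        eval { handle_request() };   # your per-request code, e.g. administration()
        log_error($@) if $@;
        $exit_was_called = 0;
    }
    log_error("Exit called\n") if $exit_was_called;
    $req->Finish();   # optional; the next Accept() call finishes the request too
}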
I'm writing a server application in D, which should be able to manage n connections simultaneously.
To achieve this I am using std.socket.Socket.select. This works fine, but I can't bind session-specific data to a socket, and I don't see any way to do so, because Socket does not allow me to attach a handle to user-specific data. After
Socket.select(socketSet, null, null);
I'm able to get all affected sockets, but I can't map these sockets to my user-specific session data. What's my mistake? Is it possible to reach my goal this way, or should I choose another approach for my requirements?
My relevant code:
ushort port = 5010;
stoprequest = false;
auto listener = new TcpSocket();
assert(listener.isAlive);
listener.blocking = false;
listener.bind(new InternetAddress(port));
listener.listen(10);
enum MAX_CONNECTIONS = 100;
auto socketSet = new SocketSet(MAX_CONNECTIONS + 1);
Socket[] reads;
Session[] sessions;

while (true)
{
    socketSet.add(listener);
    foreach (session; sessions)
        socketSet.add(session.socket);
    Socket.select(socketSet, null, null);

    for (size_t i = 0; i < reads.length; i++)
    {
        if (socketSet.isSet(reads[i]))
        {
            // Now I should access the session-related data, but how?
            char[1024] buf;
            auto datLength = reads[i].receive(buf[]);
            if (datLength == Socket.ERROR)
                writeln("Connection error.");
            else if (datLength != 0)
            {
                writefln("Received %d bytes from %s: \"%s\"", datLength, reads[i].remoteAddress().toString(), buf[0..datLength]);
                continue;
            }
            else
            {
                // Error handling. Shortened, since unimportant for the example.
            }
            reads[i].close();
            reads = reads.remove(i);
            i--;
        }
    }

    if (socketSet.isSet(listener))
    {
        Socket sn = null;
        sn = listener.accept();
        if (reads.length < MAX_CONNECTIONS)
        {
            Session session = new Session();
            session.socket = sn;
            sessions ~= session;
        }
        else
        {
            // Error handling for too many connections. Shortened, since unimportant for the example.
        }
    }
    socketSet.reset();
}
The hint to use poll() was helpful. After reading https://daniel.haxx.se/docs/poll-vs-select.html, I think both variants work and neither of them is the one true solution. For real efficiency I should probably use something like libev; fortunately, efficiency is not a problem in this particular project. For that reason I will use select(), because I found out that accessing handle gives me a unique number which can be used as the key of my own lookup table. This lets me assign session data to a socket, so I prefer to stick with the encapsulated functionality of std.socket.Socket rather than work around it.
My concrete question can therefore be answered with: use Socket.handle to identify the socket and manage session-related data.
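For illustration, here is a minimal sketch of that handle-keyed lookup table (the Session layout and the name sessionsByHandle are assumptions):

import std.socket;

class Session
{
    Socket socket;
    // ...whatever per-connection state you need
}

// Map the OS-level socket handle to the session owning that socket.
Session[socket_t] sessionsByHandle;

void addSession(Socket sock)
{
    auto session = new Session();
    session.socket = sock;
    sessionsByHandle[sock.handle] = session;
}

// After Socket.select(), recover the session for a ready socket via its handle.
Session lookup(Socket sock)
{
    return sessionsByHandle[sock.handle];
}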
A few other alternatives you can consider:
1) use a subclass of Socket. You can make your own class that inherits from it and adds more stuff.
2) The poll function is found in core.sys.posix.poll (import core.sys.posix.poll;), and you can pass socket.handle to it as well. But note it will not work on Windows without modification; a short sketch follows below.
or indeed 3) do your own lookup table, that works too.
Note that std.socket.Socket is a very thin wrapper around the BSD socket API; internally it just conveniently handles the slight differences between Windows and POSIX. It is still pretty easy to adapt code that uses the other APIs to it (or to port C-language tutorials to D), since it is all basically the same thing, and literally the same functions if you import the core.sys modules.
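As a rough sketch of option 2 (POSIX only; the helper name, the POLLIN-only interest set, and the timeout parameter are assumptions), you can hand the raw handles straight to poll():

import core.sys.posix.poll;
import std.socket;

// Returns true if at least one of the given sockets became readable.
bool anyReadable(Socket[] socks, int timeoutMs)
{
    auto fds = new pollfd[socks.length];
    foreach (i, s; socks)
    {
        fds[i].fd = s.handle;    // the raw OS descriptor behind the Socket
        fds[i].events = POLLIN;  // only interested in readability here
    }
    return poll(fds.ptr, cast(nfds_t) fds.length, timeoutMs) > 0;
}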
Update: I'm attempting to use PDFCreator to convert PDF files into TXT files via PowerShell, but it still doesn't seem to be working.
Any help is appreciated!
$PDFCreator = New-Object -ComObject PDFCreator.JobQueue
$PDF = 'C:\Users\userName\Downloads\SampleACORD.pdf'
$TXT = 'C:\Users\userName\Downloads\SampleACORD.txt'

try {
    $PDFCreator.initialize()

    if ($PDFCreator.WaitForJob(5)) {
        $PJ = $PDFCreator.NextJob
    }

    if ($PJ) {
        $PJ.PrintFile($PDF)
        $PJ.ConvertTo($TXT)
    }
} catch {
    $_
    Break
}
finally {
    if ($PDFCreator) {
        $PDFCreator.ReleaseCom()
    }
}
You are getting that because $PJ is $null. NextJob isn't returning anything.
To guard against this, note that WaitForJob(int) returns a bool: $true if a job arrived and $false if not. So after WaitForJob completes you know whether there is a job to get:
if ($PDFCreator.WaitForJob(5)) {
    $PJ = $PDFCreator.NextJob
    $PJ.allowDefaultPrinterSwitch('C:\Users\userName\Downloads\SampleACORD.txt', $true)
    $PJ.ConvertTo($TXT)
} else {
    # Handle the no-jobs case here
}
You could also do a null check against $PJ before trying to call $PJ.allowDefaultPrinterSwitch:
if ($PJ) {
    $PJ.allowDefaultPrinterSwitch('C:\Users\userName\Downloads\SampleACORD.txt', $true)
    $PJ.ConvertTo($TXT)
}
Here is some more information on the PDFCreator.JobQueue API, which you may find useful.
To address your issue in the comments, where the file is not being produced, this page of the documentation explains the logical flow of how the conversion process should work:
Call the Initialize() method with your COM Object.
Call WaitForJob(timeOut) if you are waiting for one print job to get in the queue. The parameter timeOut specifies the maximum time the queue waits for the print job to arrive.
Now you are able to get the next job in the queue by calling the property NextJob.
Setup the profile of the job with the method SetProfileByGuid(guid). The guid parameter is used to assign the appropriate conversion profile.
Start the conversion on your print job with ConvertTo(path). The path parameter includes the full path to the location where the converted file should be saved and its full name.
The property IsFinished informs about the conversion state. If the print job is done, IsFinished returns true.
If you want to know whether the job was successfully done, consider the property IsSuccessful. It returns true if the job was converted successfully otherwise false.
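Putting those documented steps together end to end, a rough sketch might look like this (the 'DefaultGuid' profile identifier and the timeout are assumptions; adjust them for your setup):

$Queue = New-Object -ComObject PDFCreator.JobQueue
try {
    $Queue.Initialize()

    # Wait up to 10 seconds for a print job to arrive in the queue.
    if ($Queue.WaitForJob(10)) {
        $Job = $Queue.NextJob
        $Job.SetProfileByGuid('DefaultGuid')   # assumed GUID of the conversion profile
        $Job.ConvertTo($TXT)
        # ...then wait on IsFinished and check IsSuccessful, as shown below.
    } else {
        Write-Warning 'No print job arrived within the timeout.'
    }
} finally {
    if ($Queue) {
        $Queue.ReleaseCom()
    }
}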
In your case, I'm not sure how essential the profile would be, but it does look like your code fails to wait for completion. The following code will wait for the conversion job to finish (and check for success if you need to):
# Wait for completion
while (-Not $PJ.IsFinished) {
    Start-Sleep -Seconds 5
}

# Check for success
if ($PJ.IsSuccessful) {
    # success case
} else {
    # failure case
}
Unrelated, but as good practice, wrap your code in a try/finally block and put your COM release in the finally block. This way your COM connection closes cleanly even in the event of a terminating error:
$PDFCreator = New-Object -ComObject PDFCreator.JobQueue

try {
    # Handle PDF creator calls
} finally {
    if ($PDFCreator) {
        $PDFCreator.ReleaseCom()
    }
}
The finally block is guaranteed to execute before returning to a parent scope, so whether the code succeeds or fails, the finally block will be run.
I'm wondering if it is possible to set a condition for a call being answered/picked up in an onreply_route, something like this:
onreply_route {
    if (call picked up) {
        xlog("ON AIR");
    }
}
There are quite a few ways in which you can achieve this. For your case, I would use the tm module's t_check_status() function:
onreply_route {
    if (t_check_status("2[0-9][0-9]")) {
        xlog("ON AIR");
    }
}
However, note that this will not work if your SIP proxy is stateless (i.e. if you don't use tm at all)! In this case, we would need to access the information in a more low-level way, by reading it straight off the received message using the $rs variable (SIP reply status):
onreply_route {
    if ($rs == 200) {  # or ($rs =~ "2[0-9][0-9]")
        xlog("ON AIR");
    }
}
I know I might be facing an impossible mission. What I want is for radiusd to write down every MAC received in an Access-Request, so that I can later deny access to those MACs.
I know the policies file is written in unlang; the bad news is that radiusd does not have write permissions on any of the conf files...
Anyway, has anyone managed to WRITE to a file during the POLICY PROCESSING of FreeRADIUS?
What I want to achieve would be something like this:
raddb/sites-available/default
authorize {
    rewrite_calling_station_id
    unauthorized_macs
    if (ok) {
        reject
    }
    else {
        update control {
            Auth-Type := Accept
        }
        GET MAC FROM CALLING_STATION_ID ATTRIBUTE
        WRITE THIS F***ING MAC TO unauthorized_macs FILE
    }
}
Thanks to Arran, I could solve this the following way:
authorize {
    rewrite_calling_station_id
    authMac
    if (ok) {
        reject
    }
    else {
        linelog
        update control {
            Auth-Type := Accept
        }
    }
}
Where linelog is configured as follows:
raddb/mods-enabled/linelog
linelog {
    filename = /path/to/hell/authMac
    format = "%{Calling-Station-ID}"
}
update request {
    Tmp-String-0 := `echo "%{Calling-Station-ID}" >> "/path/to/f___ing_unauthorized_macs_file"`
}
There's also the linelog module which would be better in >= v3.0.x as it implements internal locking (in addition to flock) to prevent line interleaving.
See /etc/raddb/mods-available/linelog for examples.
I'm using Event's var watcher to implement an internal queue. When the producer thread adds something to the queue (just an array), it changes the value of the watched variable to signal that an element was added.
How can you do the same with AnyEvent? It doesn't seem to support variable watching. Do I have to use pipes and an IO watcher (i.e. the producer writes a byte to one end of the pipe when it has added an element)?
I'd also be interested to know how to do this with Coro.
It sounds as if you are using variable watching as a means of transferring control back to the consumer. In AnyEvent, this can be done with condition variables, by calling $cv->send() from the producer and $cv->recv() in the consumer. You could consider send()ing the item that you'd otherwise have put in the queue, but calling send without parameters is also a perfectly acceptable way of notifying the consumer.
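For illustration, a minimal sketch of that condvar handshake might look like the following (the queue and subroutine names are assumptions):

use strict;
use warnings;
use AnyEvent;

my @queue;
my $cv = AnyEvent->condvar;

# Producer: enqueue the item, then wake the consumer.
sub produce {
    my ($item) = @_;
    push @queue, $item;
    $cv->send;
}

# Consumer: wait until signalled, then drain the queue.
sub consume_all {
    $cv->recv;
    $cv = AnyEvent->condvar;   # condvars are one-shot, so re-arm for next time
    while (@queue) {
        my $item = shift @queue;
        # ...process $item...
    }
}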
I figured out the paradigm to use:
my @queue;

my $queue_watcher;

sub add_item {
    push(@queue, $_[0]);
    $queue_watcher ||= AnyEvent->timer(after => 0, cb => \&process_queue);
}

sub process_queue {
    ...;   # remove zero or more elements from @queue

    if (@queue) {
        $queue_watcher = AnyEvent->timer(after => 0, cb => \&process_queue);
    } else {
        undef $queue_watcher;
    }
}
Basically, $queue_watcher is defined and active if and only if @queue is not empty.
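As a usage sketch under the same assumptions (the STDIN watcher here is just an example producer), add_item can then be called from any other watcher while the main program sits in the event loop:

use AnyEvent;

# add_item() and process_queue() are assumed to be the subs defined above.
my $stdin_watcher = AnyEvent->io(
    fh   => \*STDIN,
    poll => 'r',
    cb   => sub {
        my $line = <STDIN>;
        return unless defined $line;   # EOF
        chomp $line;
        add_item($line);               # enqueue; process_queue runs on the next loop turn
    },
);

AnyEvent->condvar->recv;               # enter the event loop and never return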