I am running a security scan with OWASP, and it reports a remote operating-system command injection vulnerability through the GET method, showing me a URL like this:
www.mysite.com/create.php?id=167%27%3Bstart-sleep+-s+15&name=11
I understand that this payload should force the server to sleep for 15 seconds. The problem is that if the thread really were put to sleep, the page should take roughly 15 extra seconds to load, yet it loads instantly. Even if I replace the 15 with 60 or 120, the page still loads instantly.
Also, the OWASP report doesn't show me any text in the evidence field.
Any ideas?
Is it possible that OWASP is raising a false positive?
Or is the sleep command simply not putting the thread to sleep?
Why is the remote command not running?
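For what it's worth, a time-based payload like start-sleep only delays the response if the id parameter actually reaches a shell on the server. A minimal sketch of the kind of handler such a check targets (this is hypothetical code, not your actual create.php) would be:

<?php
// Hypothetical create.php handler, for illustration only (not your actual code).
$id = $_GET['id'] ?? '';          // e.g. 167';start-sleep -s 15

// VULNERABLE pattern: the raw parameter is handed to a shell, so on a Windows
// host the injected start-sleep could actually run and delay the response by
// roughly 15 seconds, which is the delay the scanner is trying to measure.
$output = shell_exec('powershell.exe -Command "Get-Item ' . $id . '"');
echo htmlspecialchars((string) $output);

// If the parameter never reaches exec()/shell_exec()/system(), the payload is
// inert and the page returns instantly, which matches what you are seeing.

If create.php never passes the parameter to a shell, an instant response regardless of the sleep value, together with the empty evidence field, would be consistent with the false-positive possibility you mention.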
When a user registers, an email should be sent to the user after 20 seconds. Is it possible to code this with sleep() in Moodle?
sleep(20);
if (!send_confirmation_email($user)) {
    print_error('noemail', 'core_email');
}
Using sleep will be fairly poor UX and will block the session. An adhoc task is the right way to go, as DerKanzler said.
As of https://tracker.moodle.org/browse/MDL-66925 you can run adhoc tasks in keep-alive mode, and they will process continuously as a pseudo-daemon:
php admin/cli/adhoc_task.php --keep-alive=60 --execute
If you still want the email to be sent roughly 20 seconds in the future, you can set the future time at which the task should run when you queue it via the Task API:
https://docs.moodle.org/dev/Task_API#Set_a_task_to_run_at_a_future_time
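As a rough sketch (the class name local_myplugin\task\send_confirmation_email_task is hypothetical, while set_next_run_time() and queue_adhoc_task() are the documented Task API calls), queuing the email for about 20 seconds in the future would look something like this:

// Somewhere in your registration flow, instead of sleep(20).
// $user comes from the surrounding registration code, as in the question.
$task = new \local_myplugin\task\send_confirmation_email_task();
$task->set_custom_data(['userid' => $user->id]);

// Ask the Task API not to run this task before 20 seconds from now.
$task->set_next_run_time(time() + 20);

\core\task\manager::queue_adhoc_task($task);

The task then runs from cron (or from the keep-alive adhoc task runner above), so the user's request returns immediately instead of blocking.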
Either way, this sounds like a terrible idea: you force the user either to wait in their browser for 20 seconds, or to wait and keep reloading their email client for 20 seconds. I'd strongly recommend against it.
If you want to do it the 'hacky' way and patch some core files, sleep(seconds) is indeed the easiest way to go.
If you are writing a plugin, you can have a look at the Task API, especially at defining adhoc tasks. However, the task will not execute until your cron job fires, so you would have to shorten your cron interval. Besides that, there is currently no option to do this with the Moodle API.
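For reference, a minimal sketch of what such an adhoc task definition could look like (the plugin name local_myplugin and the class name are hypothetical; send_confirmation_email() is the core helper the question already uses):

<?php
// Hypothetical file: local/myplugin/classes/task/send_confirmation_email_task.php
namespace local_myplugin\task;

defined('MOODLE_INTERNAL') || die();

class send_confirmation_email_task extends \core\task\adhoc_task {
    public function execute() {
        global $DB;

        // Custom data set when the task was queued.
        $data = $this->get_custom_data();
        $user = $DB->get_record('user', ['id' => $data->userid], '*', MUST_EXIST);

        if (!send_confirmation_email($user)) {
            // This runs under cron, not in a web request, so log instead of print_error().
            mtrace('Failed to send confirmation email to user ' . $user->id);
        }
    }
}

Cron (or the keep-alive adhoc runner) then picks the task up and executes it outside the user's request.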
I am using WinHTTP 5.1 in one of my Delphi FMX applications, which connects to REST servers in parallel.
There are a lot of GET and POST requests, which are issued continuously by the client.
I have observed that after, say, 200-300 GET requests, the method below becomes extremely slow:
WinHTTPRequest.Open(FURL, FDocument, True)
The credentials are set properly, and I can see in Wireshark that the Open method takes approximately a minute to execute, since immediately after Open I call Send:
WinHTTPRequest.Send
Any advice?
Is there any way to increase the maximum time that a plugin can execute for?
It's 2 minutes by default. I found that here.
The limit is there to help protect the performance of the server, so the correct approach here is to re-engineer your solution (e.g. move your intensive logic out into a workflow or a web service and call it asynchronously).
I'm not aware of any setting, flag or registry entry that will extend the two-minute timeout, though if you must persevere, you may find it possible to fudge a solution by wrapping your logic in a try/catch block, catching System.TimeoutException and continuing your code. Maybe (untested).
I'd like to add that it seems that the time limit only applies when a plugin is registered in a sandbox / partial trust mode.
We had this kind of issue and solved it by registering the plugin in fully trusted (non-sandbox) mode. I verified this by using Thread.Sleep to wait 2 minutes before even starting to execute any plugin logic. In total almost 4 minutes were spent, but the plugin still completed fine in non-sandbox mode. In sandbox mode it threw the 2-minute timeout exception.
According to e-learning material from Microsoft, sandboxed plugins in CRM 2013 have only a 30-second limit instead of 120 seconds. I haven't tested that yet.
Currently I am following a thread to check whether my internet connection is active in my application, but because the check takes time to return a response, it freezes my UI.
So is there any way to implement it without freezing the UI (e.g. using NSOperation)?
If the internet connection is indeed down, the check takes time; it is a limitation of Apple's API. We have to live with it, or add a timer to cancel the operation after 30 seconds or so. But if a genuine response, especially over GPRS, takes more than 30 seconds, you will be cancelling that too if you add the timer condition.
Alternatively, you could check the internet status asynchronously and display an ActivityIndicator or similar in the main thread. This means that you create a new thread which runs in parallel with your main thread (in your case, the GUI thread that is freezing).
This is a shared hosting environment. I control the server, but not necessarily the content. I've got a client with a Perl script that seems to run out of control every now and then and suck down 50% of the processor until the process is killed.
With ASP scripts, I'm able to restrict the amount of time the script can run, and IIS will simply shut it down after, say, 90 seconds. This doesn't work for Perl scripts, since it's running as a cgi process (and actually launches an external process to execute the script).
Similarly, techniques that look for excess resource consumption in a worker process will likely not see this, since the resource that's being consumed (the processor) is being chewed up by a child process rather than the WP itself.
Is there a way to make IIS abort a Perl script (or other cgi-type process) that's running too long? How??
On a UNIX-style system, I would use a signal handler trapping ALRM events, then use the alarm function to start a timer before starting an action that I expect might time out. If the action completes, I'd use alarm(0) to turn off the alarm and exit normally; otherwise the signal handler should pick it up and close everything up gracefully.
I have not worked with Perl on Windows in a while, and while Windows is somewhat POSIXy, I cannot guarantee this will work; you'll have to check the Perl documentation to see whether, or to what extent, signals are supported on your platform.
More detailed information on signal handling and this sort of self-destruct programming using alarm() can be found in the Perl Cookbook. Here's a brief example lifted from another post and modified a little:
eval {
    # Create the signal handler and make it local so it falls out of scope
    # outside the eval block.
    local $SIG{ALRM} = sub {
        print "Print this if we time out, then die.\n";
        die "alarm\n";
    };

    # Set the alarm, take your chance running the routine, and turn off
    # the alarm if it completes.
    alarm(90);
    routine_that_might_take_a_while();
    alarm(0);
};
if ($@) {
    # Propagate any unexpected error; if we get here with "alarm\n",
    # the routine timed out and we can clean up gracefully.
    die $@ unless $@ eq "alarm\n";
}
The ASP script timeout applies to all scripting languages. If the script is running in an ASP page, the script timeout will close the offending page.
An update on this one...
It turns out that this particular script apparently is a little buggy, and that the Googlebot has the uncanny ability to "press its buttons" and drive it crazy. The script is an older, commercial application that does calendaring. Apparently, it displays links for "next month" and "previous month", and if you follow the "next month" link too many times, you fall off a cliff. The resulting page, however, still includes a "next month" link. Googlebot would continuously beat the script to death and chew up the processor.
Curiously, adding a robots.txt with Disallow: / didn't solve the problem. Either the Googlebot had already gotten ahold of the script and wouldn't let loose, or else it simply was disregarding the robots.txt.
Anyway, Microsoft's Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) was a huge help, as it allowed me to see the environment for the perl.exe process in more detail, and I was able to determine from it that it was the Googlebot causing my problems.
Once I knew that (and determined that robots.txt wouldn't solve the problem), I was able to use IIS directly to block all traffic to this site from *.googlebot.com, which worked well in this case, since we don't care if Google indexes this content.
Thanks much for the other ideas that everyone posted!
Eric Longman
Googling for "iis cpu limit" gives these hits:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/38fb0130-b14b-48d5-a0a2-05ca131cf4f2.mspx?mfr=true
"The CPU monitoring feature monitors and automatically shuts down worker processes that consume large amounts of CPU time. CPU monitoring is enabled for individual application pools."
http://technet.microsoft.com/en-us/library/cc728189.aspx
"By using CPU monitoring, you can monitor worker processes for CPU usage and optionally shut down the worker processes that consume large amounts of CPU time. CPU monitoring is only available in worker process isolation mode."