SQLMap takes a long time when I choose level=5 and risk=3 - sql-injection

When I set risk=3 and level=5, sqlmap takes a long time, about an hour, to finish the task. It's really tedious. I am a newbie to penetration testing, so kindly help me.
python sqlmap.py -u "Application url" --dbms=MySQL --threads=10 --time-sec=10 --level=3 --risk=2 --cookie="PHPSESSID:1r4fk7m10s3sj0nkbgdu1a48v0" --batch -v2 -o --keep-alive --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" --no-cast --no-escape --tamper="space2comment,between"

Sometimes SQL injection is simply slow and tedious (for example, when it requires time-based blind SQLi).
In your case, it looks like you are using --time-sec=10, which according to the help output means: "Seconds to delay the DBMS response". So you appear to be artificially delaying the response by 10 seconds per request. If that is necessary, of course it will take forever. If it isn't necessary, then take it out and it should finish (at least) twice as fast.
Also, keep in mind that SQLi can be very noisy, and could get you caught on a pentest. Often, slow and steady is preferred.

Setting higher risk and level values increases the number of attempts, with additional attack payloads tried against each parameter. You should start with moderate values, e.g. level=2 and risk=1, and then increase them if that doesn't find anything. As the other user pointed out, sqlmap is very noisy. In some databases it will actually leave the attempted payloads behind as entries in table rows.
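As a sketch only (the target URL here is a placeholder, and this assumes the 10-second time delay is not actually needed), a lighter first pass could look like this, raising --level and --risk one step at a time only if nothing is found:
python sqlmap.py -u "http://target.example/page.php?id=1" --dbms=MySQL --threads=10 --level=2 --risk=1 --cookie="PHPSESSID=1r4fk7m10s3sj0nkbgdu1a48v0" --batch -v2 -o --keep-alive --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" --no-cast --no-escape --tamper="space2comment,between"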

Related

Powershell download page in 0 milliseconds

I have a strange problem with my PowerShell script: I want to measure the average time it takes to download a page. The script runs frequently, but sometimes it returns 0, which would mean it downloaded the site in 0 ms. If I modify the script to save the whole site to a file whenever the download time is about 0 ms, it doesn't save anything. I'm wondering whether I'm doing something wrong, or whether the PowerShell function just isn't accurate enough to measure such small times.
P.S. The other, "good" results are around 4-9 ms.
Here is the part of my script responsible for measuring the download time:
$Request = New-Object System.Net.WebClient   # DownloadString implies $Request is a System.Net.WebClient
$StartTime = Get-Date
$PageDownload = $Request.DownloadString("http://mypage.com")
$TimeTaken = ((Get-Date) - $StartTime).TotalMilliseconds
Get-Date should be as precise as the system clock is.
There could be web caching going on. Unfortunately, disabling caching for WebClient is not possible, from what I see elsewhere. The "do it right" method is to construct your own HTTP request with the TcpClient class, but that's also pretty complex.
One easy way to make sure you're not being served from a cache is to append an arbitrary value as a GET parameter. It's a hack, but it is often enough to fool a cache. So, instead of:
"http://mypage.com"
You use:
"http://mypage.com?someUnusedValueName=$([System.Environment]::TickCount)"

How can I get exchange rates after google retired iGoogle?

I used this link to fetch the pound to euro exchange rate on a daily (nightly) basis:
http://www.google.com/ig/calculator?hl=en&q=1pound=?euro
This returned an array, which I then stripped down to the data I needed.
Since the first of November they have retired iGoogle, resulting in the URL forwarding to: https://support.google.com/websearch/answer/2664197
Does anyone know an alternative URL that won't require me to rewrite the whole function? I'm sure Google didn't stop providing this service entirely.
I started getting cronjob errors today on this very issue, so I fell back to a prior URL I was using before I switched to the faster, more reliable iGoogle.
URL to hit programmatically (USD to EUR):
http://www.webservicex.net/CurrencyConvertor.asmx/ConversionRate?FromCurrency=USD&ToCurrency=EUR
Details about it:
http://www.webservicex.net/ws/WSDetails.aspx?CATID=2&WSID=10
It works for now, but it's prone to be slow at times and used to respond with an "Out of space" error at random. Just be sure to code in a way that handles that, and maybe run the cron four times a day instead of once. I run ours every hour.
Example code to get the rate out of the return (there is probably a more elegant way):
$accessurl = "http://www.webservicex.net/CurrencyConvertor.asmx/ConversionRate?FromCurrency=USD&ToCurrency=EUR";
$ci = curl_init($accessurl);
curl_setopt($ci, CURLOPT_HTTPGET, 1);
curl_setopt($ci, CURLOPT_RETURNTRANSFER, 1);  // return the response as a string instead of printing it
$rawreturn = curl_exec($ci);
curl_close($ci);
// Pull the text content of the <double> element out of the XML response
$rate = trim(preg_replace("/.*<double[^>]*>([^<]*)<\/double[^>]*>.*/is", "$1", $rawreturn));
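A possibly more robust way to pull the value out, sketched here on the assumption that the service returns a single <double> element and that PHP's SimpleXML extension is available:
$xml = simplexml_load_string($rawreturn);   // parse the <double>...</double> response
if ($xml !== false) {
    $rate = (float) $xml;                   // the element's text content is the rate
}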

How does Solaris SMF determine if something is to be in maintenance or to be restarted?

I have a daemon process that I wrote, which is executed by SMF. The problem is that when an error occurs I have a failure code, and the daemon then needs to restart from scratch. Right now it is calling sys.exit(0) (Python), but SMF keeps throwing it into maintenance mode.
I've worked with SMF enough to know that it sometimes auto-restarts certain services (and lets others fail and have you deal with them like this). How do I classify this process as one that needs to auto-restart? Is it an SMF setting, a method of failing, what?
Manpage
Solaris uses a combination of startd/critical_failure_count and startd/critical_failure_period as described in the svc.startd manpage:
startd/critical_failure_count
startd/critical_failure_period
The critical_failure_count and critical_failure_period properties together specify the maximum number of service failures allowed in a given time interval before svc.startd transitions the service to maintenance. If the number of failures exceeds critical_failure_count in any period of critical_failure_period seconds, svc.startd will transition the service to maintenance.
Defaults in the source code
The defaults can be found in the source; the value depends on whether the service is "wait style":
if (instance_is_wait_style(inst))
        critical_failure_period = RINST_WT_SVC_FAILURE_RATE_NS;
else
        critical_failure_period = RINST_FAILURE_RATE_NS;
The defaults are either 5 failures/10 minutes or 5 failures/second:
#define RINST_START_TIMES 5 /* failures to consider */
#define RINST_FAILURE_RATE_NS 600000000000LL /* 1 failure/10 minutes */
#define RINST_WT_SVC_FAILURE_RATE_NS NANOSEC /* 1 failure/second */
These variables can be set as properties in the SMF manifest:
<service_bundle type="manifest" name="npm2es">
  <service name="site/npm2es" type="service" version="1">
    ...
    <property_group name="startd" type="framework">
      <propval name="critical_failure_count" type="integer" value="10"/>
      <propval name="critical_failure_period" type="integer" value="30"/>
      <propval name="ignore_error" type="astring" value="core,signal"/>
    </property_group>
    ...
  </service>
</service_bundle>
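If the instance is already installed and the startd property group exists (as in the manifest above), the same thresholds can also be adjusted on the live instance; a sketch, using the site/npm2es FMRI from that manifest:
svccfg -s site/npm2es setprop startd/critical_failure_count = integer: 10
svccfg -s site/npm2es setprop startd/critical_failure_period = integer: 30
svcadm refresh site/npm2es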
TL;DR
After checking against the startd values: if the service is "wait style", it will be throttled to a maximum restart rate of one per second until it no longer exits with a non-configuration error. If the service is not "wait style", it will be put into maintenance mode.
Presuming a normal service manifest, I would suspect that you're dropping into maintenance because SMF is restarting you "too quickly" (which is defined a bit arbitrarily). svcs -xv should tell you if that is the case. If it is, SMF is restarting you, you're exiting again rapidly, and it has decided to give up until the problem is fixed (and you've manually svcadm clear'd it).
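For example (the FMRI here is hypothetical; substitute your own service):
svcs -xv svc:/site/mydaemon:default      # explains why the instance is in maintenance
svcadm clear svc:/site/mydaemon:default  # clear the maintenance state once the underlying problem is fixed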
I'd wondered if exiting 0 (and indicating success) may cause further confusion, but it doesn't appear that it will.
I don't think Oracle Solaris allows you to tune what SMF considers "too quickly".
You have to create a service manifest. This is more complicated than not. The document below has example manifests and documents the manifest structure:
http://www.oracle.com/technetwork/server-storage/solaris/solaris-smf-manifest-wp-167902.pdf
As it turns out, I had two pkills in a row to make sure everything was terminated correctly. The second one, naturally, was exiting with something other than 0. Changing this to include an exit 0 at the end of the script solved the problem.

How many iterations are saved by JAGS/BUGS when burnin and thinning are specified?

I have a quick question about the details of running a model in JAGS and BUGS.
Say I run a model with n.burnin=5000, n.iter=5000 and thin=2. Does this mean that the program will:
Run 5,000 iterations, and discard results; and then
Run another 10,000 iterations, only keeping every second result?
If I save these simulations as a CODA object, are all 10,000 saved, or only the thinned 5,000? I'm just trying to understand which set of iterations is used to make the ACF plot.
With JAGS, n.burnin=5000, n.iter=5000 and thin=2 means you keep nothing. You run 5000 iterations, discard the first 5000 of those 5000, and then keep only half of whatever values remain in the chain (keep one value, discard the next, and so on).
Use, for example, n.burnin=2000, n.iter=7000, thin=50, n.chains=5: then you have (7000-2000)/50 * 5 = 500 saved values.
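The same arithmetic, as a quick R sketch with those hypothetical settings:
# saved samples = (n.iter - n.burnin) / n.thin * n.chains
n.iter   <- 7000
n.burnin <- 2000
n.thin   <- 50
n.chains <- 5
(n.iter - n.burnin) / n.thin * n.chains   # 500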
Could you be more specific about which software you're talking about? It looks like you're referring to the arguments of the bugs() function in the R2WinBUGS package (except that the argument is called n.thin, not thin). Looking at help(bugs), it just says n.burnin is the "number of iterations to discard at the beginning". That doesn't specifically answer your question, but looking at the source of bugs.script() in that package suggests to me that it would run 5000 burn-in iterations, as you suspected. You could send a suggestion to the maintainers of that package to clarify their documentation.
In your example, bugs() would then run 0 further iterations after the burn-in. Here the documentation is clearer - n.iter is the total number of iterations, including the burn-in.
For your second question, the CODA output from WinBUGS (and any software which calls WinBUGS or OpenBUGS) will only include the thinned sample.

Perl CGI gets parameters from a different request to the current URL

This is a weird one. :)
I have a script running under Apache 1.3, with the Apache::PerlRun option of mod_perl. It uses the standard CGI.pm module. It's a regularly used script on a busy server, accessed over https.
The URL is typically something like...
/script.pl?action=edit&id=47049
Which is then brought into Perl the usual way...
my $action = $cgi->param("action");
my $id = $cgi->param("id");
This has been working successfully for a couple of years. However we started getting support requests this week from our customers who were accessing this script and getting blank pages. We already had a line like the following that put the current URL into a form we use for customers to report an issue about a page...
$cgi->url(-query => 1);
And when we view source of the page, the result of that command is the same URL, but with an entirely different query string.
/script.pl?action=login&user=foo&password=bar
A query string that we recognise as being from a totally different script elsewhere on our system.
However crazy it sounds, it seems that when users are accessing a URL with a query string, the query string that the script is seeing is one from a previous request on another script. Of course the script can't handle that action and outputs nothing.
We have some automated test scripts running to see how often this happens, and it's not every time. To throw some extra confusion into the mix, after an Apache restart, the problem seems to initially disappear completely only to come back later. So whatever is causing it is somehow relieved by a restart, but we can't see how Apache can possibly take the request from one user and mix it up with another.
This, it appears, is an interesting combination of Apache 1.3, mod_perl 1.31, CGI.pm and Apache::GTopLimit.
A bug was logged against CGI.pm in May last year: RT #57184
Which also references CGI.pm params not being cleared?
CGI.pm registers a cleanup handler in order to clean up all of its cached state (line 360):
$r->register_cleanup(\&CGI::_reset_globals);
Apache::GTopLimit (like Apache::SizeLimit mentioned in the bug report) also has a handler like this:
$r->post_connection(\&exit_if_too_big) if $r->is_main;
In mod_perl before 1.31, post_connection and register_cleanup appear to push onto the handler stack, while in 1.31 it appears that the GTopLimit handler clobbers the CGI.pm entry. So if your GTopLimit function fires because the Apache process has grown too large, CGI.pm won't be cleaned up, leaving it open to returning the same parameters the next time you use it.
The solution seems to be to change line 360 of CGI.pm to:
$r->push_handlers( 'PerlCleanupHandler', \&CGI::_reset_globals);
Which explicitly pushes the handler onto the list.
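As a stopgap until CGI.pm itself is patched, resetting its globals at the top of the script should have a similar effect; this is only a sketch and assumes your CGI.pm version still provides initialize_globals():
use CGI;
CGI::initialize_globals();   # assumed available: clears CGI.pm's per-request globals before params are read
my $cgi = CGI->new;
my $action = $cgi->param("action");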
Our restart of Apache temporarily resolved the problem because it reduced the size of all the processes and gave GTopLimit no reason to fire.
And we assume it has appeared over the past few weeks because we have increased the size of the Apache processes through new developments that included something that wasn't there before.
All tests so far point to this being the issue, so fingers crossed it is!