I'm a student learning PHP & MySQL development. I have set up a private lab (a VM) on my computer to test and learn how SQL injection works. When things get harder I use sqlmap to exploit, and later study the requests it made to my test app using verbose mode and by capturing packets with Wireshark. I ran into a small problem: how to tell sqlmap which parameter of a URL to test.
http://localhost/vuln/test.php?feature=music&song=1
I want sqlmap to scan the song parameter, so I tried these:
-u http://localhost/vuln/test.php?feature=music&song=1 --skip feature
-u http://localhost/vuln/test.php? --data="feature=music&song=1" -p song
I tried different variations, adding and removing quotes and equals signs; none worked. I even set --risk and --level to their maximums, but it still fails to pick up the last parameter.
I will be very thankful if an expert can help me out with this.
Thank you.
The -p option can be used in the following way:
-u "http://localhost/vuln/test.php?feature=music&song=1" -p song
I also noticed that you can scan multiple parameters using this:
-u "http://localhost/vuln/test.php?feature=music&song=1" -p 'song,feature'
This will scan the song parameter, then the feature parameter.
If sqlmap finds a vulnerable parameter, it will ask you whether you want to continue with the others.
You can simply add a * after the value of the parameter you want to scan. Did you try that?
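For example, just to illustrate the asterisk marker with the URL from the question:
-u "http://localhost/vuln/test.php?feature=music&song=1*"
sqlmap should then treat the position of the * as a custom injection point and test only there.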
I have this problem too. I think sqlmap injects the first parameter. If you type:
-u http://localhost/vuln/test.php?feature=music&song=1
sqlmap will inject the 'feature' parameter. To make it inject the 'song' parameter, you need to reorder the parameters as follows:
-u http://localhost/vuln/test.php?song=1&feature=music
Don't forget to add '&' between each parameter. It worked for me.
I have run into this type of problem before. You can simply skip the 'feature' parameter, e.g. -u "http://localhost/vuln/test.php?feature=music&song=1" --skip=feature, and then it will certainly start testing the 'song' parameter.
I'm working on a routing application using OSM data in pgRouting. I'm using the Overpass API to access the data for a specific bounding box. However, after downloading the data, some tag_keys seem to be missing.
When inspecting the data using PostGIS or QGIS, certain tag_keys are there, like "highway", "oneway" or "maxspeed". However, others seem to be missing. In particular, the tag keys "bicycle" (with possible values like "yes" or "no") and "access" are not included in the data. These tag keys are available on OSM online, however.
The following code is used to retrieve the data from OSM through the Overpass API and load it into pgRouting:
CITY="Utrecht_west"
BBOX="4.9926,52.0698,5.0772,52.1172"
wget --progress=dot:mega -O "$CITY.osm" "http://www.overpass-api.de/api/xapi?*[bbox=${BBOX}][@meta]"
# osm2pgrouting converter
cd ~/Desktop/Utrecht
osm2pgrouting \
-f Utrecht_west.osm \
-d utrecht_west \
-U user
I expect these lines to download all data in the bounding box, but some tag keys seem to be missing. What am I doing wrong here?
Edit: it seems to be a similar issue to this post; however, I cannot find another answer to a similar issue.
I'm not familiar with osm2pgrouting, but it looks like mapconfig.xml doesn't include the "bicycle" and "access" tags. You either need to add them there or create your own config file. If you want osm2pgrouting to consider these tags during routing, this might not be enough, though.
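I can't verify the exact schema for your osm2pgrouting version, so treat this as a rough sketch rather than something to paste in as-is: the default mapconfig.xml lists the tags that get imported, so you could copy it, add entries for the missing keys (the id values below are placeholders), and point osm2pgrouting at your copy with -c:
<tag_name name="bicycle" id="8">
    <tag_value name="yes" id="801" priority="1.0" />
    <tag_value name="no" id="802" priority="2.0" />
</tag_name>
osm2pgrouting -f Utrecht_west.osm -d utrecht_west -U user -c my_mapconfig.xml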
I can't seem to find a good enough solution to my problem. Is there a good way of grouping variables in some kind of file so that multiple scripts can access them?
I've been doing some work with Desired State Configuration, but the work that needs to be done cannot be implemented efficiently that way. The point is to install the Azure Build Agent on a server and then configure it. There are some values, like the Personal Access Token, that really should not just be copy-pasted inside a script file. I want to be able to change them easily without having to go into every script that uses them. In DSC you can just make a .psd1 file and access the variables like, for example, AllNodes.NodeName. The config file invocation and parameters look like this:
.\config.cmd --unattended --url $myUrl --auth PAT --token $myToken --pool default --agent "$env:COMPUTERNAME" --acceptTeeEula --work $workDir
I want to make the variable $myToken accessible from an outside file, both for better security and to have a centralized place from which I can change values. Access to $myUrl is also important, since it changes with new updates to the Build Agent.
Thank you in advance for your effort. If anything is not clear please let me know.
I have two very different answers to your question, although either one of them may miss your point.
First, it's possible to define variables inside your profile script. Most people only use the profile script to define a library of functions or classes, but a variable can be made global the same way.
I have a variable named $myps that identifies the folder where I keep my PS scripts (in subfolders).
When I start a session I generally switch to this directory (oops, I called it a folder above).
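A minimal sketch of what that looks like (the variable name and path are just my own):
# in your PowerShell profile, i.e. the file $PROFILE points to
$global:myps = 'C:\Users\me\Documents\PSScripts'   # root folder for my scripts
Set-Location $global:myps                           # jump there at the start of a session
Anything defined this way is then visible to every script you run from that session.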
The second way involves storing the values of the variables in a CSV file, with the names stored in the CSV header. I then have a quick little cmdlet that steps through the CSV file, record by record, generating a different expansion of a template each time through.
These values are not quite global, but they can be used in more than one context.
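Roughly like this (the file path and column names are made up for the example):
# values.csv has a header row such as: Name,Url,Token
foreach ($row in Import-Csv 'C:\Configuration\values.csv') {
    # expand the template once per record
    ".\config.cmd --unattended --url $($row.Url) --auth PAT --token $($row.Token)"
}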
Thank you for the help. Those are very useful solutions in some cases, but I dug a bit deeper and found a solution that suits my purpose. Basically, if you have a psd1 file suited for DSC use, you can also access its content from a normal ps1 file. For example:
NonNodeData = @{
    Pat = 'somePAT'
}
Let's say this section of a psd1 file called ENV.psd1 is on your local machine in C:\Configuration.
To access the content of this file you have to make a variable inside your script and use Import-PowerShellDataFile like so:
$configData = Import-PowerShellDataFile -Path "C:\Configuration\ENV.psd1"
And now you are free to use anything stored inside ENV.psd1. For example, if I want to extract my PAT from the config file and store it in a variable in the script:
$myPat = $configData.NonNodeData.Pat
Thanks to that I can just pass $myPat as a parameter when invoking config.cmd like so:
.\config.cmd --unattended --auth PAT --token $myPat
This keeps my code cleaner and easier to update in the future.
I need to create an entry in the Windows Event Log (e.g. the Application log). I know how to do all of it except filling in the user who performed the action.
Example:
I need to create a script, that writes some message into application log. I used this tutorial, which worked fine: http://blogs.technet.com/b/heyscriptingguy/archive/2013/06/20/how-to-use-powershell-to-write-to-event-logs.aspx
But I am not able to influence the "User" field. When adding an entry to the Windows log, it always shows "User: N/A".
Any idea how to pass a "user" argument to the Write-EventLog cmdlet?
Thank you for your help.
Even though (as far as I'm aware) Write-EventLog does not provide an option to write directly to the "User" field, you have two workarounds:
Use the built-in standalone executable EventCreate.exe (run eventcreate /? to see its manual).
This one does support providing the username field. I'm not sure, but it may require a password for that user too.
The second workaround is to include $env:USERNAME in the "Message" field of Write-EventLog. This way you still capture the environment's current user.
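Hedged examples of both (the source name, event ID and message are placeholders; I haven't confirmed whether eventcreate accepts /U without /S, so check eventcreate /? first, and register the source for Write-EventLog with New-EventLog if it doesn't exist yet):
eventcreate /L APPLICATION /SO MyScript /T INFORMATION /ID 100 /U MYDOMAIN\someuser /P SomePassword /D "Build agent configured"
Write-EventLog -LogName Application -Source MyScript -EventId 100 -EntryType Information -Message "Build agent configured by $env:USERDOMAIN\$env:USERNAME"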
I hope that helped.
I've researched and found that the way to export our Active Directory information for our application is like this:
csvde -d OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Now, I've read that to do an import from that file, you just have to add the -i option to the line like this:
csvde -i -d OU=MyAppsOU-New,DC=dot,DC=newdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Obviously, I'm very scared to try this as I don't want to blow away anything. My questions are:
Does specifying the OU=MyAppsOU-New create the new OU structure with that specific name? (I'm just trying to be 100% positive)
Does specifying the different domain name (newdmz) just update all of the data in the file to contain the new domain's name?
or
Do I need to modify the exported csv file to change the domain name (testdmz) to what the new domain name will be (newdmz)?
Is there a different way I should be doing this?
I just want to re-create the OU structure without groups, roles (which are groups) and users. I will probably do those in a different process because we have different usernames for test and production.
Wow! Lots of questions here, but in my opinion not enough.
Beginning with the end: CSVDE.EXE is really not the tool I would use. As a directory developer I prefer LDIFDE.EXE, because it generates LDIF (LDAP Data Interchange Format), which is more standard and more readable. You can also have a look at tools like ADAMSync.EXE, which can synchronize two directories in the AD world (but that's a big hammer for what you want to do here).
Choosing LDIFDE.EXE, you will see that the LDIF format is almost importable as is, but you need to remove operational attributes (system attributes) from the file. The best way is to exclude them during the export, so use the -l option to export only the attributes you need, or the -o option to omit attributes.
To import into another domain, you will use the -c option to replace the original domain part (DC=dot,DC=testdmz,DC=lan) with the new domain part.
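A rough sketch of the export/import pair (the DNs and file path come from the question; the -l attribute list is only an example, adjust it to your needs):
ldifde -f C:\temp\addump_ou.ldf -d "OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan" -r "(objectClass=organizationalUnit)" -l "ou,description"
ldifde -i -f C:\temp\addump_ou.ldf -c "DC=dot,DC=testdmz,DC=lan" "DC=dot,DC=newdmz,DC=lan"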
Try it in a virtual machine first.
How do you create a cron job in Kohana? I set up a regular controller which extends Controller_Base, and I ran this command line:
/usr/bin/wget http://domain/controller/custom_cron
But I can't get it to work. It just doesn't execute: no error, nothing. I didn't put any special code in my controller, just what I need to run my program. So if there is a special command to call a cron job, I didn't add it (because I don't know what it would be).
Also, I need it to make MySQL calls, so I would need to include the DB info and connection and whatnot (if it doesn't do that automatically). And I work off a custom model; how would I include that (if it doesn't do it automatically)?
Thank you.
php /path/to/index.php --uri=controller/action/etc/etc
Calling it like this pretty much makes it act exactly like in a web environment. The only difference is the protocol for requests is 'cli'. You'll need to keep that in mind if you are generating links.
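For example, a crontab entry along these lines (the path and schedule are placeholders; the controller/action is the one from the question):
*/15 * * * * /usr/bin/php /var/www/domain/index.php --uri=controller/custom_cron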
"So if there is a special command to call a cron job, I didn't add it (because I don't know what it would be)"
Daft question - have you added that wget command to crontab or similar?
If on the other hand you're looking to make a "poor man's cron", you could try creating a hook that runs on every page load and checks the last time the job was run, perhaps storing the last timestamp in a file or database.
I had to use cURL as my fire-this-script command in the crontab.
Ex:
30 18 * * * curl "http://domain.com/controller/method"
PHP and wget didn't work for me, even when calling index.php and adding the URI as suggested above.
Also, FYI: the most transparent way to test this was just running the line manually over SSH to see what the results were. Once I confirmed it was working there, I put it in the cron.