I have a list of files on a page, and next to each file there is a link that says "delete". When the user clicks the delete link, it should pass the file name to a function in the same script, which then deletes the file from the server and stays on the same page. Any ideas?
# some other stuff goes here, such as the list of files
print "<TD><a onclick='deleteFile()' href='#'>delete</a> </td>";
sub deleteFile()
{
    unlink($file);
}
I also tried pure CGI Perl. When I click the delete link it prints "Internal Error", but when I look to see whether the file has been deleted, the file actually has been deleted, so there is no permission issue here; otherwise it wouldn't unlink the file. Here is what I changed it to:
print "<a href='../cgi-bin/deleteFile.cgi?param1=$dir¶m2=$file'>delete</a>";
Here is what I have in deleteFile.cgi: I get both param1 and param2 and call unlink as below.
unlink($location);
You really haven't tried hard enough to find your own solution here. I will give you some pointers ...
The onclick attribute in the HTML will trigger Javascript to be run in the browser (there are better ways to make a click event run Javascript code).
None of the Perl code in your CGI script will run unless the browser sends a request to the CGI script on the server. Things that could generate a request include:
the user clicking a link with an href that points to the CGI script (perhaps with the file pathname in a querystring parameter)
the user clicking a submit button in a form with an action that points to the CGI script (perhaps with the file pathname in a hidden form field)
some Javascript code in the browser that issues an AJAX request to the CGI script (with the file pathname as a POST parameter)
Clicking a link results in a GET request, and it is generally considered bad practice to run code that changes the state of the server (e.g. deleting a file) in response to a GET request.
A form submission or an AJAX request can cause a POST request. You could even explicitly use a DELETE request via AJAX. These are more appropriate request methods to use for mutating server state.
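As a rough illustration only (the directory, parameter name and target page are made up, not your actual setup), a POST-driven delete handler could look something like this:
#!/usr/bin/perl
# deleteFile.cgi - hypothetical handler for a small form that POSTs the file name
# in a hidden field named "file". Names and paths here are placeholders.
use strict;
use warnings;
use CGI qw(:standard);
use File::Spec;

my $base_dir = '/var/www/data/uploads';    # a directory the web server can write to

if (request_method() eq 'POST') {
    my $name = param('file') // '';

    # Refuse empty names and anything containing a directory separator,
    # so the script cannot be tricked into deleting files outside $base_dir.
    if ($name ne '' && $name !~ m{[/\\]}) {
        my $path = File::Spec->catfile($base_dir, $name);
        unlink $path or warn "could not unlink $path: $!";
    }
}

# Send the user back to the listing page so they stay where they were.
print redirect('list_files.cgi');
Each delete link in the listing would then become a tiny form with a hidden "file" field and a submit button whose action points at this script.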
Even when you get your code working, it will only be able to delete files in directories that the web server has write access to. Web servers are not generally configured with write access to any directories by default.
The problem was that after deleting there was no redirect; after adding a redirect it worked like a charm.
unlink glob ($file);
print redirect(-url=>'http://main.cgi');
thanks
I want to save a whole page to a file in CodeIgniter 3, just before the page is sent to the browser. But I don't know how to do it and, most importantly, where in the CodeIgniter 3 code to process the page.
Oh, I was lucky to find it somehow in my code:
https://codeigniter.com/userguide3/general/hooks.html?highlight=display_override
display_override: Overrides the _display() method, used to send the finalized page to the web browser at the end of system execution. This permits you to use your own display methodology. Note that you will need to reference the CI superobject with $this->CI =& get_instance() and then the finalized data will be available by calling $this->CI->output->get_output().
I am setting up Duo two-factor authentication in a Perl web application (using Mojolicious). I am new to Perl, so this may have a simple answer.
I have everything set up except I need to verify the response by doing the following:
"After the user authenticates (e.g. via phone call, SMS passcode, etc.) the IFRAME will generate a signed response called sig_response and POST it back to the post_action URL. Your server-side code should then call verify_response() to verify that the signed response is legitimate."
In Perl, how can you get sig_response? Is there a module? Below is an example using Python:
sig_response = self.get_argument("sig_response") # for example (if using Tornado: http://www.tornadoweb.org/en/stable/documentation.html)
Duo Web: https://duo.com/docs/duoweb
It looks like this sig_response is just a value that's POSTed to your response handler. When you created the URL to show in the iframe, it had a post_action parameter. That's the endpoint in your application that handles the user coming back from the iframe.
You need to build a route in Mojo for that. There, you need to look at the parameters you are receiving in the POST body. I don't know if it's form-data or something else, like JSON. It doesn't really say in the documentation. I suggest you dump the parameters, and if that doesn't show it, dump the whole request body.
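For example, during development you could have the post_action route simply dump whatever Duo sends it (the route name here is just a placeholder for whatever your post_action points at):
post '/duo_handler' => sub {
    my $c = shift;

    # Log the raw POST body and the parsed parameters to see how sig_response arrives.
    $c->app->log->debug($c->req->body);
    $c->app->log->debug($c->dumper($c->req->params->to_hash));

    $c->render(text => 'ok');
};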
Once you have that sig_response parameter, you need to call the verify_response function that duo's library provides and look at the return value.
If you have not done it yet, get the SDK at https://github.com/duosecurity/duo_perl. It's not a full distribution. You can either clone the whole thing or just download the .pm file from their GitHub and put it in the lib directory of your application. Then use it like you would any other module. Since it doesn't export anything, you need to use the fully qualified name to call the verify_response function.
The whole thing might look something like this untested code:
post '/duo_handler' => sub {
    my $c = shift;

    # The signed response that the Duo IFRAME POSTs back to this route.
    my $sig_response = $c->param('sig_response');

    # $ikey, $skey and $akey are the integration key, secret key and
    # application secret used when the sig_request was generated.
    my $user = DuoWeb::verify_response($ikey, $skey, $akey, $sig_response);

    if ($user) {
        # logged in as $user
    } else {
        # not logged in
    }
};
Disclaimer: I don't know this service. I have only quickly read the documentation you linked, and taken a look at their Perl SDK, which they should really put on CPAN.
I am using WWW::Mechanize::Firefox along with the MozRepl plugin in Firefox. The code works properly to fetch content from sites by sending them an HTTP GET request.
I am going through a list of URLs and sending an HTTP GET Request to each of them.
However, if the request hangs on a particular URL, it keeps waiting.
Please note that I am referring to cases where a part of the web page content is loaded while some of the content is still pending. It happens in cases where a web page loads a lot of content from third party sites and if one of the resources (an image for instance) could not be loaded, the browser keeps waiting for it.
I want the request to timeout after 'n' seconds so that I can read the next URL from the list and continue with the code execution.
In the WWW::Mechanize Perl module, the constructor supports a timeout option, as shown below:
$mech=WWW::Mechanize->new(timeout => 10);
However, I could not find a similar option in the documentation for the Perl module WWW::Mechanize::Firefox, here:
http://metacpan.org/pod/WWW::Mechanize::Firefox
I tried this:
$mech=WWW::Mechanize::Firefox->new(timeout => 10);
But I think it does not work as there are still some sites for which the request hangs.
WWW::Mechanize::Firefox uses MozRepl to connect to the Firefox browser, so you don't need to pass a timeout parameter; Firefox itself will wait for the page to load.
If you want to check whether the site is really fully loaded, you should check that the element you want (e.g. a div) is present:
while (!$mech->is_visible( xpath => '//div[@class="myDivClassAtHtml"]' )) {
    sleep 1;
}
# do something with your page
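If you also want the n-second timeout from the question, you can bound that polling loop yourself. A rough sketch, where the XPath and the 10-second limit are only examples:
my $timeout = 10;    # seconds to wait before giving up
my $loaded  = 0;

for my $elapsed (1 .. $timeout) {
    if ($mech->is_visible( xpath => '//div[@class="myDivClassAtHtml"]' )) {
        $loaded = 1;
        last;
    }
    sleep 1;
}

if ($loaded) {
    # do something with the page
}
else {
    # give up and move on to the next URL in the list
}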
I am working on existing code that uses CGI::FormBuilder, and I've gone through all of the documentation to see how this might work, and I'm not 100% convinced that it will. The code has several free-form fields and 3 buttons: Update, Cancel and Test. The test button sends an email using settings entered into the fields.
In the JS for the form, I use an AJAX call when "Test" is clicked so that the Perl code in the form executes. The Update and Cancel buttons return like the form is supposed to when it is submitted. The reason for this is that when the test email is sent, I don't want the user to be taken to a returned page, but to remain on the form with the values intact, so that if the values are correct, the user does not have to re-enter them when they want to update the actual values (which updates the values in my DB). Apparently, since the form isn't being "submitted", the values it attempts to use for this "test" are the values loaded into the form when the page opens; it isn't using the values the user entered before hitting the Test button. Is there a way to make this happen?
Long question short: with CGI::FormBuilder, can I get the values currently in the fields via Perl without submitting the page? Thanks!
Short answer: yes.
Medium answer: Yes. You can use javascript in the page to send information to your server side application.
Long answer:
You seem to have some confusion about how server- and client-side code interact with web pages. This is pretty common. Many people expect there to be some kind of communication between the rendered page and the program that generated it. AJAX and related technologies blur the lines here and make things more confusing.
Here's a timeline of a simple, old-school CGI form:
1. Client requests page.
2. Server receives page request.
3. Server dispatches to CGI script.
4. Server executes CGI script.
5. Server sends result of CGI script to client.
6. Client renders script results.
7. User fills out form.
8. User clicks "Submit". Client requests page with parameter information (details vary with the type of request and form configuration).
9. Server receives page request.
10. Server dispatches to CGI script.
11. Server executes CGI script.
12. Server sends result of CGI script to client.
13. Client renders script results.
Each message from the Client is handled separately.
AJAX lets you send messages to the server and get the response without clearing the currently loaded page.
So, just throw some JavaScript code into the HTML and set up a handler (for example on a change or click event) that will make an AJAX request and pass data back to the server. The AJAX request is just another HTTP request, like those above, but it runs in the background. All you need to do is catch the submitted data and respond. Your JavaScript needs to catch the response and do something with it.
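On the server side, the script behind that AJAX URL just reads the POSTed values like any other request. A minimal sketch, with invented field names, might be:
#!/usr/bin/perl
# Hypothetical endpoint for the "Test" AJAX request; field names are examples only.
use strict;
use warnings;
use CGI qw(param header);

my $smtp_host = param('smtp_host') // '';
my $recipient = param('recipient') // '';

# ... use the submitted values to send the test email here ...

# Reply with something the JavaScript callback can inspect.
print header('application/json');
print $smtp_host && $recipient
    ? '{"status":"ok"}'
    : '{"status":"error","message":"missing fields"}';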
Answer to the short question is "No".
Answer to the long question is "Yes".
All you need is two submit buttons: "Submit" and "Test".
Submitting via "Test" will send the form to the CGI, and the CGI will only validate the field values and render the same form back with the same values, plus a message if there is an error in any field.
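A rough sketch of that idea with CGI::FormBuilder (the field names are invented; CGI::FormBuilder keeps submitted values sticky when the form is re-rendered):
use strict;
use warnings;
use CGI::FormBuilder;

my $form = CGI::FormBuilder->new(
    header => 1,
    fields => [qw(smtp_host recipient)],
    submit => ['Update', 'Test'],
);

if (my $button = $form->submitted) {
    if ($button eq 'Test') {
        # Send a test email with the values the user just typed in,
        # then show the same form again with those values still filled in.
        my $host = $form->field('smtp_host');
        my $to   = $form->field('recipient');
        # ... send the test email here ...
        print $form->render;
    }
    elsif ($button eq 'Update' && $form->validate) {
        # Save the submitted values to the database, then render a result page.
    }
}
else {
    print $form->render;
}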
Well, say I have a number of HTML pages on my site. Sometimes I make changes to the directory structure, so when anybody tries to access a given URL, it's possible that the URL no longer exists. The file names don't change, but the paths do.
As far as I know, the server takes the user to a "404" page that can be customized. Is it possible to customize the page in this way?
The user tries oneweb.com/oldpath/page.html, which does not exist.
A customized 404 page is shown.
The 404 page runs a script. IS THIS POSSIBLE?
The script is given the name of the file. WHERE IS THAT NAME STORED?
The script searches the entire directory structure to find page.html. HOW DO I ACCESS THE STRUCTURE?
The file is found and the new URL is stored: oneweb.com/newpath/page.html
A link appears showing the new URL.
Maybe this process is relatively common and I can find some related code or a tutorial?
Are you using Apache? Linux?
Add a 404 handler
ErrorDocument 404 /404.php
Then use 404.php to parse the URL. This simple example just grabs everything after the last / in the URI, so http://example.com/foo/bar/page.html would put page.html in $url:
$parts = explode('/', $_SERVER['REQUEST_URI']);
$url = end($parts);
Then use one of the comment example functions in http://php.net/manual/en/function.readdir.php to search your directory and find the file.
Then do a header 301 redirect
header('HTTP/1.1 301 Moved Permanently');
header('Location: http://example.com/' . $file_path);
exit;