Debugging Facebook Payments Callback

I can't determine the best way to debug my Facebook Payments Callback PHP file. The script isn't requested on the client side, so I'm not sure how to pass any "authentic" values to the script to run it locally. The topic doesn't seem to be covered in any of the Facebook documentation, nor was I able to find it on Google (with the exception of this previously asked question, which was seeking to debug the script with no internet connection at all), so hopefully the answer is just that obvious and simple. So far my only two 'solutions' have been to make the script output a text file containing any debug output, or blindly hack away at the code until the vague client-side API errors go away.

There is another option. Mail the output to yourself. I find it more convenient than reading a text file.
ob_start();
//the contents of your file
$output = ob_get_clean();
mail(
    'youremail@email.com',
    'fb_payments',
    $output,
    'From: noreply@misite.com' . "\r\n" . 'X-Mailer: PHP/' . phpversion()
);
echo $output;
Additionally, you can capture the $_POST data you get from Facebook and send it to the script yourself through a form or JavaScript, locally. Then you don't have to have Facebook ping it, and you can see the errors in your browser.
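As a rough sketch of that replay idea (the file path, form handling and callback file name below are just placeholders, not anything Facebook-specific):
// In the live callback, capture one real request from Facebook:
file_put_contents('/tmp/fb_payments_post.json', json_encode($_POST));

// replay.php (run locally): rebuild the captured values as a form you can
// submit to a local copy of the callback, so errors show up in your browser.
$captured = json_decode(file_get_contents('/tmp/fb_payments_post.json'), true);
echo "<form method='POST' action='http://localhost/payments_callback.php'>";
foreach ($captured as $name => $value) {
    $name  = htmlspecialchars($name, ENT_QUOTES);
    $value = htmlspecialchars($value, ENT_QUOTES);
    echo "<input type='hidden' name='$name' value='$value'>";
}
echo "<input type='submit' value='Replay captured request'>";
echo "</form>";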

Related

How to use onclick with href in CGI Perl

I have a list of files on a page, and next to each file there is a link that says "delete". When the user clicks the delete link, it should pass the file name to a function in the same script, which deletes the file from the server and stays on the same page. Any ideas?
#some other stuff goes here such list of files
print "<TD><a onclick='deleteFile()' href='#'>delete</a> </td>";
sub deleteFile()
{
unlink ($file);
}
I also tried pure CGI Perl: when I click the delete link it prints an "Internal Error", but when I check whether the file has been deleted, it actually has been, so there is no permission issue here (otherwise it wouldn't unlink the file). Here is what I changed it to:
print "<a href='../cgi-bin/deleteFile.cgi?param1=$dir&param2=$file'>delete</a>";
Here is what I have in deleteFile.cgi; I get both param1 and param2 and use unlink as below:
unlink($location);
You really haven't tried hard enough to find your own solution here. I will give you some pointers ...
The onclick attribute in the HTML will trigger Javascript to be run in the browser (there are better ways to make a click event run Javascript code).
None of the Perl code in your CGI script will run unless the browser sends a request to the CGI script on the server. Things that could generate a request include:
the user clicking a link with an href that points to the CGI script (perhaps with the file pathname in a querystring parameter)
the user clicking a submit button in a form with an action that points to the CGI script (perhaps with the file pathname in a hidden form field)
some Javascript code in the browser that issues an AJAX request to the CGI script (with the file pathname as a POST parameter)
Clicking a link would result in a GET request - it is generally considered bad practice to run code that changes the state of the server (e.g.: deleting a file) in response to a GET request.
A form submission or an AJAX request can cause a POST request. You could even explicitly use a DELETE request via AJAX. These are more appropriate request methods to use for mutating server state.
Even when you get your code working, it will only be able to delete files in directories that the web server has write access to. Web servers are not generally configured with write access to any directories by default.
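To make the second option concrete, here is a rough sketch of a form-based delete; the script name, parameter name and upload directory are assumptions you would adapt to your setup. In the page that lists the files, print one small POST form per file:
print "<td><form method='POST' action='/cgi-bin/deleteFile.cgi'>";
print "<input type='hidden' name='file' value='$file'>";
print "<input type='submit' value='delete'>";
print "</form></td>";
And in deleteFile.cgi itself, do the unlink and then redirect back to the listing:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use File::Basename;

my $q    = CGI->new;
my $file = basename($q->param('file'));       # strip any path the client sent
my $path = "/var/www/uploads/$file";          # assumed upload directory

unlink $path or die "Could not delete $path: $!";
print $q->redirect(-url => 'list_files.cgi'); # back to the file listing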
The problem was that after deleting there was no redirect; after adding a redirect it worked like a charm.
unlink glob ($file);
print redirect(-url=>'http://main.cgi');
thanks

Issue in fetching Yahoo contact list in iOS

I have a problem with the yos-social-objc-master project I found on GitHub. After logging in with my Yahoo credentials, I always get a page with a code xxxx and the lines below:
"To complete sharing of Yahoo! info with xxxx, enter code xxxx into xxxx"
So I am not sure where I should enter this code, or how it will redirect back to my application so that I can get the contact list. I have done R&D on it but didn't find any appropriate solution. Please help me resolve this issue.
I have found a solution, though with a little overhead.
The steps are: 1> Create a PHP script on your own server (say, named YRedirect.php).
2> Paste the following code into it:
<?php
$query = $_SERVER['QUERY_STRING'];
header("Location: YOUR_APP_ID_OR_BUNDLE_ID://oauth-response?" . $query);
?>
3> Add a URL scheme to your Info.plist file with YOUR_APP_ID_OR_BUNDLE_ID as the scheme.
That's it and you are DONE with the authentication problem.
And then use YQL to fetch contacts.
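For step 3, the URL scheme entry in Info.plist typically looks something like the following, where the scheme string is whatever you used as YOUR_APP_ID_OR_BUNDLE_ID in the PHP redirect above:
<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>YOUR_APP_ID_OR_BUNDLE_ID</string>
        </array>
    </dict>
</array>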

How to create an HTTP header in Perl?

I am new to Perl. I want to write a Perl program that sends a request to a website and downloads the data. I have read HTTP::Headers and HTTP::Request.
I wish to use HTTP::Request->new( 'POST', $URL, $Header, $PostData ).
My question is: how can I determine the header values and the POST data?
Thank you
I was writing a similar script some time back. I think first you should capture the HTTP request in the browser.
1) Add the "HTTPFox" add-on to Firefox. It is very helpful.
2) Open HTTPFox in a new window and press the Start button. This will start capturing requests.
3) Open the website and go through the steps to download the data; HTTPFox will then show you the exact headers and POST data the browser sent, which you can reuse in your Perl script.
Please ask if anything is not clear.
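Once you can see the request in HTTPFox, you can reproduce it in Perl. A minimal sketch, where the URL, header values and POST body are placeholders you replace with the captured values:
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Headers;
use HTTP::Request;

# Placeholder values -- copy the real ones from the HTTPFox capture.
my $URL    = 'http://www.example.com/download';
my $Header = HTTP::Headers->new(
    'User-Agent'   => 'Mozilla/5.0',
    'Content-Type' => 'application/x-www-form-urlencoded',
    'Referer'      => 'http://www.example.com/form',
);
my $PostData = 'username=me&format=csv';   # the raw POST body shown in HTTPFox

my $request  = HTTP::Request->new('POST', $URL, $Header, $PostData);
my $response = LWP::UserAgent->new->request($request);
die 'Request failed: ' . $response->status_line unless $response->is_success;
print $response->decoded_content;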

Facebook debugger: Clear whole site cache

I am aware that Facebook caches the Like data for specific pages on your site once they're visited for the first time, and that entering the url into the debugger page clears the cache. However, we've now improved our Facebook descriptions/images/etc and we need to flush the cache for the entire site (about 300 pages).
Is there a simple way to do this, or if we need to write a routine to correct them one by one, what would be the best way to achieve this?
Is there a simple way to do this,
Not as simple as a button that clears the cache for a whole domain, no.
or if we need to write a routine to correct them one by one, what would be the best way to achieve this?
You can get an Open Graph URL re-scraped by making a POST request to:
https://graph.facebook.com/?id=<URL>&scrape=true&access_token=<app_access_token>
So you’ll have to do that in a loop for your 300 objects. But don’t do it too fast, otherwise you might hit your app rate limit; try to leave a few seconds between the requests. According to a recent discussion in the FB developers group, that should work fine. (And don’t forget to URL-encode the <URL> value properly before inserting it into the API request URL.)
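As an illustration, one way to make that POST in PHP with cURL (the page URL and token below are placeholders):
$pageUrl     = 'http://www.example.com/some-page/';
$accessToken = 'APP_ID|APP_SECRET';   // or any other valid app access token

$ch = curl_init('https://graph.facebook.com/');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'id'           => $pageUrl,      // http_build_query URL-encodes the value
    'scrape'       => 'true',
    'access_token' => $accessToken,
]));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);

echo $result; // JSON describing the freshly scraped Open Graph object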
A simple solution in WordPress: go to Permalinks and switch to a custom permalink structure; in my case I just added an underscore, like this:
/_%postname%/
Facebook then has no info on the (now) new URLs, so it scrapes them all fresh.
I was looking for this same answer, and all the existing answers were super complicated for me as a non-coder.
It turned out there is a very simple answer, and I came up with it all by myself :) .
I have a WordPress website where, using a variety of plugins, I bulk-uploaded over 4,000 images, which created 4,000 posts.
The problem was that I uploaded them and then tried setting up the Facebook share plugins before sorting out the og:meta tag issue, so all 4,000 posts were scraped by FB with no og:meta, and when I then added the tags it made no difference. The FB debugger could not be used as I had over 4k posts.
I must admit I'm a bit excited; for many years I have gotten helpful answers from Google searches sending me to this forum. Often the suggestions I found were well over my head, as I'm not a coder, I'm a "copy paster".
I'm so happy to be able to give back to this great forum and help someone else out :)
Well, I also had the same scenario and used a hack, and it works. But obviously, as @Cbroe mentioned in his answer, the API call has rate limiting, so you should take care of that; in my case I only had 100 URLs to re-scrape.
So here is the solution:
$xml = file_get_contents('http://example.com/post-sitemap.xml'); // <-- I have a WordPress site, which has a sitemap
$xml = simplexml_load_string($xml); // Load it as XML
$applicationAccessToken = 'YourToken'; // Application access token; you can get it from https://developers.facebook.com/tools/explorer/
$urls = [];
foreach ($xml->url as $url) {
    $urls[] = (string) $url->loc; // Get the URLs from the sitemap into our new array
}
$file = fopen("response.data", "a+"); // Write the API responses to another file so we can debug them later
foreach ($urls as $url) {
    echo "Sending URL for scrape: $url\n";
    // Send the re-scrape as a POST request, as the Graph API expects
    $context = stream_context_create(['http' => ['method' => 'POST']]);
    $data = file_get_contents('https://graph.facebook.com/?id=' . urlencode($url) . '&scrape=true&access_token=' . $applicationAccessToken, false, $context);
    fwrite($file, $data . "\n"); // Put the response in the file
    sleep(5); // Sleep for 5 seconds to stay within the rate limit
}
fclose($file); // Close the file once all the URLs are scraped
echo "Bingo, it's completed!";

Perl / CGI: redirect to a different page after downloading a file

this is quite a newbie question and I've searched on this topic for a while, but nothing I've found seems to work as described. I have this piece of code, for providing a file download to the user, which works perfectly:
open(DOC, "<$file_name") or die "$!";
@textFile = <DOC>;
close DOC;
print "Content-Type:application/x-download\n";
print "Content-Disposition:attachment;filename=" . $basename . "\n\n";
print @textFile;
My problem is that after the file-download has started, I would like to redirect the user to a different page. The script above is actually being submitted from a form by another script where I have:
<form action="/cgi-bin/download.pl">
<p> some msg </p>
<p><input type="submit" value="Download" name="Download"></p>
</form>
I've tried putting some javascript statements in the input-Tag like:
onclick="javascript:window.document.location.href=\'http://www.mynewpage.com'
as well as printing at the end (of download.pl) something like:
print "Location: http://www.mynewpage.com";
It doesn't work.
If someone could give me a hint, I'd really appreciate that!
Thanks in advance!
Alex
You cannot do this with HTTP (see ADW's response).
What most sites do is to redirect you to a page which has some javascript which then starts the download. It also provides a link in case that doesn't work.
For example: http://sourceforge.net/projects/sevenzip/files/7-Zip/9.22/7z922.exe/download
See this SO question about the javascript: starting file download with JavaScript
You can't do it using straight http/html.
The response to the query is a file, you're not allowed to send an additional response (i.e. a redirect) as well.
As Dave says, normally it's done with JavaScript.
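For completeness, a rough sketch of that pattern in Perl/CGI; the script name thanks.pl and the paths are placeholders. Point the form at this landing page instead of download.pl, and let the landing page trigger the actual download:
#!/usr/bin/perl
# thanks.pl -- hypothetical landing page shown after the user clicks Download.
use strict;
use warnings;

print "Content-Type: text/html\n\n";
print <<'HTML';
<html>
  <head>
    <!-- Kick off the real download shortly after this page renders -->
    <meta http-equiv="refresh" content="1;url=/cgi-bin/download.pl">
  </head>
  <body>
    <p>Your download should begin shortly.</p>
    <p>If it does not, <a href="/cgi-bin/download.pl">click here</a>.</p>
  </body>
</html>
HTML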