Marketing API Export Async Report ads_insights V2.5 - Facebook

I am having problems exporting a report of ad statistics from code written in .NET Framework when I use ads_insights (version 2.5). Before, when I used reportstats with version 2.3, I could download the report successfully.
My request is //www.facebook.com/ads/ads_insights/export_report?report_run_id=0000000&format=xls&access_token=token
When I execute the request in a browser I can download the report successfully (the .xls file is complete), but when I execute the same request from .NET Framework (C#) code, the downloaded .xls file is incomplete.
The steps to get the report are (using .NET C# code):
1. Request with method POST:
graph.facebook.com/v2.5/act_countNumbrer/insights?level=ad&time_range=%7B%27since%27%3A%272015-11-02%27%2C%27until%27%3A%272015-11-02%27%7D&actions_group_by=%5B%27action_type%27%5D&fields=campaign_name%2Cad_name%2Cad_id%2Creach%2Cfrequency%2Cimpressions%2Ccpm%2Ccpp%2Cspend%2Csocial_clicks%2Cunique_clicks%2Cctr%2Cunique_ctr%2Caccount_name%2Cactions%2Ctotal_actions%2Cwebsite_clicks&time_increment=1&access_token=token
Result: successful -> I get a report_run_id
2. Request with method GET:
graph.facebook.com/v2.5/id_report?access_token=token
Result: successful -> I get:
{
  "id": "xxxx",
  "account_id": "xxx",
  "time_ref": 1447171267,
  "time_completed": 1447171269,
  "async_status": "Job Completed",
  "async_percent_completion": 100
}
3. When "async_status" is "Job Completed", I execute the request:
www.facebook.com/ads/ads_insights/export_report?report_run_id=xxxx&format=xls&access_token=token
Result: the downloaded .xls file is incomplete. If you paste the same query (URL) into a browser, the report downloads successfully (the .xls file is complete).
If I execute the request from .NET Framework (C#) code and save the response as a string, the response says I "should update your browser".
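For reference, steps 1 and 2 can be sketched in C# roughly like this (a sketch only: the field list is shortened, the account id and token are placeholders, and a real implementation would parse the JSON instead of string matching):

using System;
using System.Net;
using System.Threading;

class CreateAndPollReport
{
    static void Main()
    {
        string token = "token";              // placeholder
        string account = "act_accountId";    // placeholder ad account id

        using (var wc = new WebClient())
        {
            // Step 1: POST to /insights creates the async job and returns a report_run_id.
            wc.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            string created = wc.UploadString(
                "https://graph.facebook.com/v2.5/" + account + "/insights",
                "level=ad&fields=campaign_name,ad_name,spend&access_token=" + token);
            Console.WriteLine(created); // e.g. {"report_run_id":"xxxx"}

            // Step 2: GET the report run until async_status reaches "Job Completed".
            string reportRunId = "xxxx"; // parse from the step 1 response
            string status;
            do
            {
                Thread.Sleep(1000); // simple poll interval
                status = wc.DownloadString(
                    "https://graph.facebook.com/v2.5/" + reportRunId + "?access_token=" + token);
            } while (!status.Contains("Job Completed"));
        }
    }
}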
Why can't I download the report from code?
Thank you
The code I use to download the XLS report:
using System;
using System.IO;
using System.Net;

namespace test
{
    class Program
    {
        static void Main(string[] args)
        {
            string token = "token";
            string report_run_id = "report_number";
            string url = "https://www.facebook.com/ads/ads_insights/export_report?report_run_id=" + report_run_id + "format=xls&access_token" + token;

            // option 1
            string reportDownloadUrl = "repo" + DateTime.Now.Ticks + ".xls";
            Stream responseStream = null;
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "GET";
                //request.UserAgent = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)";
                var response = (HttpWebResponse)request.GetResponse();
                responseStream = response.GetResponseStream(); // fill the stream from the response
                using (var fileStream = new FileStream(reportDownloadUrl, FileMode.Create, FileAccess.Write))
                {
                    responseStream.CopyTo(fileStream);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
            finally
            {
                if (responseStream != null) responseStream.Close();
            }
            Console.WriteLine("File Download " + reportDownloadUrl);

            /* // option 2
            using (WebClient wc = new WebClient())
            {
                wc.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
                wc.Headers["User-Agent"] = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)";
                wc.DownloadFile(url, "repo.xls");
            }
            Console.WriteLine("File Download");
            */
            Console.ReadKey();
        }
    }
}

I had the same issue, trying to download the report using the request npm package in
Node.js.
Adding the User-Agent header solved my problem.
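For the C# code in the question, the equivalent fix is to uncomment the UserAgent line from option 1, e.g. (a minimal sketch, assuming the same url variable and the System.Net / System.IO namespaces from the question's program):

var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = "GET";
// Without a User-Agent header, Facebook answers with an "update your browser"
// HTML page instead of the XLS file, which is why the saved file looks incomplete.
request.UserAgent = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)";
using (var response = (HttpWebResponse)request.GetResponse())
using (var responseStream = response.GetResponseStream())
using (var fileStream = new FileStream("report.xls", FileMode.Create, FileAccess.Write))
{
    responseStream.CopyTo(fileStream);
}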

This works for me. You have some issues in your URL string: it is missing the "&" before format and the "=" after access_token.
Here is an updated URL string:
string url = "https://www.facebook.com/ads/ads_insights/export_report?report_run_id=" + report_id + "&format=csv&access_token=" + accessToken;
This method works.
using (WebClient wc = new WebClient())
{
    wc.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
    wc.Headers["User-Agent"] = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)";
    wc.DownloadFile(url, "page1.csv");
}
The report gets saved to:
C:\your\path\to\project\FacebookReportPuller\bin\Debug

Related

Login via HttpURLConnection returns status code 200 while it should redirect

I tried to log in to the website using a username and password. When I make an HttpURLConnection and post it, the status code is 200 but it doesn't actually log in. When I checked the login process with the Chrome DevTools console, I found that after pressing the login button, the parameters are sent to the address I used and it returns 302 as a status code. Even when I add this line to my code, the result doesn't change:
connection2.setInstanceFollowRedirects(true);
Here is my code:
String loginPageURL = "https://AAAAAAAAAA";
CookieManager cookieManager = new CookieManager();
cookieManager.setCookiePolicy(CookiePolicy.ACCEPT_ALL);
cookies.forEach(cookie -> cookieManager.getCookieStore().add(null, cookie));
URL url2 = new URL(loginPageURL);
HttpURLConnection connection2 = (HttpURLConnection) url2.openConnection();
connection2.setRequestProperty("Cookie",
StringUtils.join(cookieManager.getCookieStore().getCookies(), ";"));
connection2.setInstanceFollowRedirects(true);
String loginPayload ="mypayload";
connection2.setRequestMethod("POST");
connection2.setDoOutput(true);
connection2.setRequestProperty("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9");
connection2.setRequestProperty("Accept-Encoding", "deflate, br");
connection2.setRequestProperty("Accept-Language", "en-US,en;q=0.9,fa;q=0.8");
connection2.setRequestProperty("Cache-Control", "max-age=0");
connection2.setRequestProperty("Connection", "keep-alive");
connection2.setRequestProperty("Content-Length", String.valueOf(loginPayload.length()));
connection2.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
connection2.setRequestProperty("Host", "https://BBBBBBBBBB");
connection2.setRequestProperty("Origin", "https://BBBBBBBBBB");
connection2.setRequestProperty("Referer", "https://AAAAAAAAAA");
connection2.setRequestProperty("sec-ch-ua", " Not A;Brand;v=99, Chromium;v=100, Google Chrome;v=100");
connection2.setRequestProperty("sec-ch-ua-mobile", "?0");
connection2.setRequestProperty("sec-ch-ua-platform", "Windows");
connection2.setRequestProperty("Sec-Fetch-Dest", "document");
connection2.setRequestProperty("Sec-Fetch-Mode", "navigate");
connection2.setRequestProperty("Sec-Fetch-Site", "same-origin");
connection2.setRequestProperty("Sec-Fetch-User", "?1");
connection2.setRequestProperty("Upgrade-Insecure-Requests", "1");
connection2.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36");
DataOutputStream out = new DataOutputStream(connection2.getOutputStream());
out.writeBytes(loginPayload);
System.out.println("login connection status code: "+connection2.getResponseCode());
System.out.println("content length "+loginPayload.length());
out.close();
System.out.println("*************************************************************");
int status = connection2.getResponseCode();
if (status == HttpURLConnection.HTTP_OK) {
    String header = connection2.getHeaderField("Location");
    System.out.println(header);
}
Can anybody help me figure out where the problem is?
Thanks in advance.

403 response with HttpClient but not with browser

I’m having a problem with the HttpClient library in Java.
The target web site is on SSL (https://www.betcris.com), and I can load the index page from that site just fine.
However, the different pages showing odds for the different sports return a 403 response code with HttpClient, while loading the same pages in a browser works just fine.
Here is such a page : https://www.betcris.com/en/live-lines/soccer.
I started troubleshooting this page with the information gathered by HttpFox (a Firefox add-on that resembles LiveHttpHeaders), making sure I had all the correct request headers and cookies, but I couldn’t get it to load using HttpClient. I also determined that cookies have nothing to do with the problem, as I can remove all cookies for that web site within my browser, and then hit the page directly and it will load.
I confirmed that there’s something special going on with these pages by using the online tool at http://www.therightapi.com/test. This tool allows you to input the url of a page along with any Request header you want, and shows you the response you get from the target web site. Using that tool, I can load https://www.google.com just fine, but I get the same 403 error when trying to load https://www.betcris.com/en/live-lines/soccer.
Does anyone know what's going on here?
Thanks.
EDIT: I've created a test project. Here's the Java code, followed by the Maven dependency you should have in your pom:
package com.yourpackage;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.HttpClientBuilder;

public class TestHttpClient {

    public static void main(String[] args) {
        String url = "https://www.betcris.com/en/live-lines/soccer";

        HttpClient client = HttpClientBuilder.create().build();
        HttpGet request = new HttpGet(url);

        // add request header
        request.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0");

        try {
            HttpResponse response = client.execute(request);
            System.out.println("Response Code : "
                    + response.getStatusLine().getStatusCode());

            BufferedReader rd = new BufferedReader(
                    new InputStreamReader(response.getEntity().getContent()));

            StringBuffer result = new StringBuffer();
            String line = "";
            while ((line = rd.readLine()) != null) {
                result.append(line);
            }
        } catch (ClientProtocolException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
<!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.3</version>
</dependency>
I solved this problem (avoiding the 403) by setting the User-Agent property when making the request, as follows:
If you use HttpClient
HttpGet httpGet = new HttpGet(URL_HERE);
httpGet.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11");
If you use HttpURLConnection
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11");
I use the following code to consume HTTPS URLs:
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContextBuilder;
...
SSLContext sslContext =
        new SSLContextBuilder().loadTrustMaterial(null, (certificate, authType) -> true).build();

try (CloseableHttpClient httpClient = HttpClients.custom().setSSLContext(sslContext)
        .setSSLHostnameVerifier(new NoopHostnameVerifier()).build()) {
    HttpGet httpGet = new HttpGet("YOUR_HTTPS_URL");
    httpGet.setHeader("Accept", "application/xml");
    httpGet.setHeader("User-Agent",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11");
    HttpResponse response = httpClient.execute(httpGet);
    logger.info("Response: " + response);
}
pom.xml:
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.3</version>
</dependency>
In my case, the web server does not use a proxy to communicate with APIs.
I just disabled the defaultProxy under system.net in web.config:
<system.net>
    <defaultProxy enabled="false" />
</system.net>
403 Forbidden is used to signal an authentication requirement. In fact, the full 403 response should tell you exactly that. Luckily, HttpClient can do authentication.

The underlying connection was closed; but works in browsers

I have a simple web server running on a WiFi chip (an ESP8266, written in Lua, running on NodeMCU). When I query the website in any browser, I get the response as expected, displayed in the browser (about 45 characters long). When I query it from C# or PowerShell, I get:
"The underlying connection was closed: The connection was closed
unexpectedly."
I have tried numerous options suggested across many forums, but none of them seem to have worked.
Is there any way to make a web request the same way IE or Chrome does? I'm not sure what extra steps browsers are doing internally such that they are able to get the response without issue. Why is this an issue in .NET?
My script is below. I am considering just using C# to fire off PhantomJS (a headless browser), using JavaScript to tell it to open the website, and then passing back the response. Or alternatively, opening a connection with raw sockets rather than relying on the .NET wrappers.
# Set the useUnsafeHeaderParsing property to true
$netAssembly = [Reflection.Assembly]::GetAssembly([System.Net.Configuration.SettingsSection])
$bindingFlags = [Reflection.BindingFlags] "Static,GetProperty,NonPublic"
$settingsType = $netAssembly.GetType("System.Net.Configuration.SettingsSectionInternal")
$instance = $settingsType.InvokeMember("Section", $bindingFlags, $null, $null, @())
if ($instance)
{
    $bindingFlags = "NonPublic","Instance"
    $useUnsafeHeaderParsingField = $settingsType.GetField("useUnsafeHeaderParsing", $bindingFlags)
    if ($useUnsafeHeaderParsingField)
    {
        $useUnsafeHeaderParsingField.SetValue($instance, $true)
    }
}

# Try setting the certificate policy to a custom child class that always returns true
add-type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(
        ServicePoint srvPoint, X509Certificate certificate,
        WebRequest request, int certificateProblem) {
        return true;
    }
}
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy

# Try setting other attributes on the ServicePointManager class
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls -bxor [System.Net.SecurityProtocolType]::Ssl3
[System.Net.ServicePointManager]::Expect100Continue = $false;

# Initiate the web request
$r = [System.Net.WebRequest]::Create("http://192.168.1.7/GetStatusAsJson")

# Try long timeouts, with KeepAlive set to false; also try giving it a user agent string etc.
$r.Timeout = 5000
$r.ReadWriteTimeout = 5000
$r.KeepAlive = $false
$r.Method = "GET"
$r.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36";
$r.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8";
$resp = $r.GetResponse()
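As an alternative to a headless browser, the raw-socket route mentioned above can be sketched in C# like this (a sketch only; the host, path, and buffer size are placeholders taken from the script above). Hand-writing the request bypasses WebRequest's strict response parsing, which is a plausible culprit when a minimal embedded server sends slightly non-compliant headers:

using System;
using System.Net.Sockets;
using System.Text;

class RawHttpGet
{
    static void Main()
    {
        // Placeholder host/path from the question; adjust as needed.
        using (var client = new TcpClient("192.168.1.7", 80))
        using (var stream = client.GetStream())
        {
            // Minimal HTTP/1.1 request; Connection: close so the server ends the stream.
            string request = "GET /GetStatusAsJson HTTP/1.1\r\n" +
                             "Host: 192.168.1.7\r\n" +
                             "Connection: close\r\n\r\n";
            byte[] requestBytes = Encoding.ASCII.GetBytes(request);
            stream.Write(requestBytes, 0, requestBytes.Length);

            // Read everything the server sends, malformed headers and all.
            var response = new StringBuilder();
            var buffer = new byte[4096];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                response.Append(Encoding.ASCII.GetString(buffer, 0, read));
            }
            Console.WriteLine(response.ToString());
        }
    }
}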
I ended up using MSXML2.XMLHTTP.3.0 with VBScript, and executed the VBScript from PowerShell.
on error resume next

Dim oXMLHTTP
Dim oStream

Set oXMLHTTP = CreateObject("MSXML2.XMLHTTP.3.0")
oXMLHTTP.Open "GET", WScript.Arguments.Item(2) & "/OpenDoor", False
oXMLHTTP.Send

If oXMLHTTP.Status = 200 Then
    Set oStream = CreateObject("ADODB.Stream")
    oStream.Open
    oStream.Type = 1
    oStream.Write oXMLHTTP.responseBody
    oStream.SaveToFile WScript.Arguments.Item(0) & "\OpenDoor.html"
    oStream.Close
End If
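If you would rather stay in .NET than shell out to VBScript, the same MSXML2.XMLHTTP.3.0 COM object can be driven from C# via late binding (a sketch; it assumes Windows with MSXML installed, a .NET Framework project, and a placeholder URL):

using System;
using System.IO;

class MsXmlDownload
{
    static void Main()
    {
        // Late-bound COM call to the same MSXML2.XMLHTTP.3.0 object the VBScript uses.
        Type xmlHttpType = Type.GetTypeFromProgID("MSXML2.XMLHTTP.3.0");
        dynamic xmlHttp = Activator.CreateInstance(xmlHttpType);

        xmlHttp.open("GET", "http://192.168.1.7/GetStatusAsJson", false); // synchronous GET; placeholder URL
        xmlHttp.send();

        if (xmlHttp.status == 200)
        {
            // responseText is the body as a string (responseBody would be the raw bytes).
            File.WriteAllText("OpenDoor.html", (string)xmlHttp.responseText);
        }
    }
}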

Authorisation issue while accessing a page from repository in CQ5.

I'm trying to hit a page which contains an XML structure. For that I'm using this code:
@Reference
private SlingRepository repository;

adminSession = repository.loginAdministrative(repository.getDefaultWorkspace());

String pageUrl = "http://localhost:4504" + page + ".abc.htm";
conn = (HttpURLConnection) new URL(pageUrl).openConnection();
conn.setRequestProperty("Accept-Charset", charset);
conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401"); // Pretend to be Firefox 3.6.3
urlResponse = new BufferedInputStream(conn.getInputStream());
BufferedReader reader = new BufferedReader(new InputStreamReader(urlResponse));
While accessing the page I'm getting this issue:
org.apache.sling.auth.core.impl.SlingAuthenticator getAnonymousResolver: `Anonymous access not allowed by configuration - requesting credentials`
I'm logged in as an admin, and whenever I hit this URL directly from the browser it works properly, but when accessing it through my code I get this error.
Any suggestions?
If you are trying to call a URL on an author instance, the following method, which I use in one of my projects, might help (using Apache Commons HttpClient):
private InputStream getContent(final String url) {
    HttpClient httpClient = new HttpClient();
    httpClient.getParams().setAuthenticationPreemptive(true);
    httpClient.getState().setCredentials(new AuthScope(null, -1, null),
            new UsernamePasswordCredentials("admin", "admin"));
    try {
        GetMethod get = new GetMethod(url);
        httpClient.executeMethod(get);
        if (get.getStatusCode() == HttpStatus.SC_OK) {
            return get.getResponseBodyAsStream();
        } else {
            LOGGER.error("HTTP Error: {}", get.getStatusCode());
        }
    } catch (HttpException e) {
        LOGGER.error("HttpException: ", e);
    } catch (IOException e) {
        LOGGER.error("IOException: ", e);
    }
    return null;
}
Though as it uses admin:admin this only works on a local dev instance; in a production environment I wouldn't put the admin password in plaintext, even though it is only code...
You are mixing up Sling credentials and HTTP credentials. While you are logged in at the Sling repository, the HTTP session is not aware of any authentication information!

handle StatusCode==302 when using HttpWebrequest - c#

I have a Windows app that I am trying to build to simulate an upload to a web app. The project is in C# 3.0.
When using Fiddler, I can see the following:
/login page - 200 code
enter pwd/uname
/home page - 302 code
/home page - 200 code
/upload page [This page has simple multi-part form post where user can select at max 2 files to upload] - 200 code
/fileprocessed page - 302 code
/fileprocessed page - 200 code
When I use HttpWebRequest with a WebResponse object, I get:
/login page - 200 code
enter pwd/uname
/home page - 200 code
/upload page [This page has simple multi-part form post where user can select at max 2 files to upload] - 200 code
/fileprocessed page - 302 code
I do have AllowAutoRedirect set to true.
My code is:
// "request", "bytes" (the multipart header for the file part), "str2" (the
// multipart boundary), and "filePath" are defined earlier in the method.
Stream requestStream = request.GetRequestStream();
requestStream.Write(bytes, 0, bytes.Length);

// Copy the file into the request body.
using (FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    byte[] buffer = new byte[0x1000];
    int count = 0;
    while ((count = fileStream.Read(buffer, 0, buffer.Length)) != 0)
    {
        requestStream.Write(buffer, 0, count);
    }
}

// Write the closing multipart boundary and finish the request body.
bytes = Encoding.ASCII.GetBytes("\r\n--" + str2 + "--\r\n");
requestStream.Write(bytes, 0, bytes.Length);
requestStream.Close();

HttpWebResponse response = request.GetResponse() as HttpWebResponse;
if (response.StatusCode == HttpStatusCode.Found)
{
    string newURL = response.Headers["Location"];
}
How do I avoid the 302 response, or how do I account for it so that I can do a successful form post to simulate the upload?
I had a similar case, where I received 302 status codes when I expected 200 to be returned (as seen in Fiddler or Firebug's Net tab). I was able to solve the problem by giving a user agent specification to the HttpWebRequest:
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0";
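If you do need to handle the 302 yourself (for example to carry cookies across the hop), here is a minimal sketch, assuming a hypothetical upload URL; the key detail is reusing one CookieContainer for both requests so the session survives the redirect:

var cookies = new CookieContainer();

var request = (HttpWebRequest)WebRequest.Create("https://example.com/upload"); // hypothetical URL
request.CookieContainer = cookies;   // share cookies across the redirect hop
request.AllowAutoRedirect = false;   // surface the 302 instead of following it
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0";

using (var response = (HttpWebResponse)request.GetResponse())
{
    if (response.StatusCode == HttpStatusCode.Found)
    {
        // Resolve the Location header against the original URI and re-issue a GET.
        var location = new Uri(request.RequestUri, response.Headers["Location"]);
        var followUp = (HttpWebRequest)WebRequest.Create(location);
        followUp.CookieContainer = cookies;
        using (var finalResponse = (HttpWebResponse)followUp.GetResponse())
        {
            Console.WriteLine((int)finalResponse.StatusCode);
        }
    }
}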