iTunes Connect Autoingestion.class doing nothing

I'm trying to get the iTunes Connect report for an app.
To do so, I use the Autoingestion.class that Apple provides and set my username/password in autoingestion.properties. Apple's documentation isn't up to date about the properties file, which is now required.
My problem is that when I execute the command line, no error is shown and nothing happens.
My command line looks like this:
java Autoingestion autoingestion.properties 8****** Sales Daily Summary 20130701
autoingestion.properties contains :
userID = xxxx#XXX.com
password = PaSsWoRd
What am I missing?
My output (nothing):
$C:\autoingestion>java Autoingestion autoingestion.properties 8****** Sales Daily Summary 20130701
$C:\autoingestion>
EDIT:
OK, so I came back to work this morning, ran the EXACT same command line, and now it works... My guess is iTunes Connect was having some trouble...

I have the same error, and the same code is working in my local environment.
Autoingestion retrieves data from this iTunes URL: https://reportingitc.apple.com/autoingestion.tft.
On my local machine I can download the data correctly. Here you can see the headers received:
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< X-UA-Compatible: IE=EmulateIE8
< Set-Cookie: JSESSIONID=C661B770C05C723FB06CFD0223D46976; Path=/
< Content-Encoding: agzip
< Content-Disposition: attachment;filename=S_D_85242578_20130923.txt.gz
< filename: S_D_85242578_20130923.txt.gz
< Content-Type: application/a-gzip
< Transfer-Encoding: chunked
< Date: Thu, 03 Oct 2013 13:47:48 GMT
In my prod environment I get these headers:
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< X-UA-Compatible: IE=EmulateIE8
< Content-Length: 0
< Date: Thu, 03 Oct 2013 13:52:04 GMT
<
So nothing else is returned in the response. (I'm using the same version of curl.)
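The difference between the two responses can be checked programmatically. Below is a minimal Python sketch (the function name and the header heuristics are mine, not part of Apple's tooling) that classifies a response by its headers: a real report comes back as a gzipped attachment, while the failure case is a 200 with Content-Length: 0.

```python
def is_report_response(headers):
    """Heuristic: the reporting endpoint returns the report as a
    gzipped attachment; a 200 with an empty body means no data."""
    # Normalize header names for a case-insensitive lookup.
    h = {k.lower(): v for k, v in headers.items()}
    if h.get("content-length") == "0":
        return False
    disposition = h.get("content-disposition", "")
    content_type = h.get("content-type", "")
    return "attachment" in disposition or "a-gzip" in content_type

# The two header sets from the post, abbreviated:
good = {
    "Content-Disposition": "attachment;filename=S_D_85242578_20130923.txt.gz",
    "Content-Type": "application/a-gzip",
}
bad = {"Content-Length": "0"}
```

Running this check against the two captures above flags the prod response as empty and the local one as a real report.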

Installing OpenJDK fixed this for me.

Related

Uploading a file with the Google Cloud API with a PUT at the root of the server?

I have a server using the Google Drive API. I tried a curl PUT request to upload a simple file (test.txt) to http://myserver/test.txt. As you can see, I did the PUT request at the root of my server. The response I get is the following:
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UqANa4Bj6ilL7z5HZH0wlQi_ufxDiHPtb2zq1Gzcx7IxAEcOt-AOlWsbX1q_lsZUwWt_hyKOA3weAeVpQvPQTwbQhLhIA
ETag: "6e809cbda0732ac4845916a59016f954"
x-goog-generation: 1548877817413782
x-goog-metageneration: 1
x-goog-hash: crc32c=jwfJwA==
x-goog-hash: md5=boCcvaBzKsSEWRalkBb5VA==
x-goog-stored-content-length: 6
x-goog-stored-content-encoding: identity
Content-Type: text/html; charset=UTF-8
Accept-Ranges: bytes
Via: 1.1 varnish
Content-Length: 0
Accept-Ranges: bytes
Date: Wed, 30 Jan 2019 19:50:17 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-bwi5139-BWI, cache-cdg20732-CDG
X-Cache: MISS, MISS
X-Cache-Hits: 0, 0
X-Timer: S1548877817.232336,VS0,VE241
Vary: Origin
Access-Control-Allow-Methods: POST,PUT,PATCH,GET,DELETE,OPTIONS
Access-Control-Allow-Headers: Cache-Control,X-Requested-With,Authorization,Content-Type,Location,Range
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 300
I know you're not supposed to use the API that way; I did it for testing purposes. I understand every header returned, but I can't figure out whether my file has been uploaded because I don't have enough knowledge of this API.
My question is very simple:
Just by looking at the response, can you tell me if my file has been uploaded?
If yes, can I retrieve it, and how?
The HTTP status code traditionally indicates, for any given request, whether it was successful. The status code in the response is always on the first line:
HTTP/1.1 200 OK
Status codes in the 2xx range mean success. You should take some time to familiarize yourself with HTTP status codes if you intend to work with HTTP APIs.
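As a small illustration (the helper names are mine, not part of any API), the check boils down to parsing the status line and testing for the 2xx range:

```python
def status_code(status_line):
    """Extract the numeric code from a status line,
    e.g. "HTTP/1.1 200 OK" -> 200."""
    return int(status_line.split()[1])

def is_success(code):
    # Per the HTTP spec, every code in the 2xx range means success.
    return 200 <= code < 300
```

So for the response above, `is_success(status_code("HTTP/1.1 200 OK"))` is true: the upload was accepted.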

Structured Data Testing Tool reports "URL was not found", but the URL does exist

When using the Structured Data Testing Tool to test the page titled Perfect Chicken Fajitas on my Mom's recipe site, I get the following:
ERROR
The URL was not found. Make sure the domain name is correct and the server is responding with a 200 status code.
However, if I curl the same URL, I can see that a 200 results:
$ curl -I http://www.lindysez.com/recipe/perfect-chicken-fajitas/
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Server: Microsoft-IIS/7.5
Set-Cookie: bb2_screener_=1457484500+172.4.33.122; path=/
X-UA-Compatible: IE=edge
Link: <http://www.lindysez.com/wp-json/>; rel="https://api.w.org/"
X-Powered-By: ASP.NET
Date: Wed, 09 Mar 2016 00:48:21 GMT
What could be the problem?
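One thing that stands out in the curl output is Content-Length: 0 for a page that should contain HTML. curl -I issues a HEAD request; if the server answers HEAD differently from GET, a testing tool that performs a full GET may see something other than what curl shows. A small sketch (the heuristic and helper name are mine) that flags such suspicious HEAD responses:

```python
def suspicious_head_response(status, headers):
    """Flag a HEAD response that claims 200 OK but advertises an
    empty body for a content type that should have one."""
    h = {k.lower(): v for k, v in headers.items()}
    return (status == 200
            and h.get("content-length") == "0"
            and h.get("content-type", "").startswith("text/html"))

# Abbreviated headers from the curl -I output above:
head_headers = {
    "Content-Length": "0",
    "Content-Type": "text/html; charset=UTF-8",
}
```

The response in the question trips this check, which suggests comparing the server's HEAD and GET behavior (and whether bot-filtering such as the bb2_screener cookie treats the testing tool differently).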

Elisp `url-copy-file` sometimes downloads garbage?

I am writing some Elisp code that downloads files using url-copy-file. Most of the time it works fine, but sometimes the contents of the file end up being the HTTP headers; e.g., the downloaded file has the following contents:
HTTP/1.1 200 OK
Server: GitHub.com
Date: Thu, 14 Nov 2013 20:54:41 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Status: 200 OK
Strict-Transport-Security: max-age=31536000
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-UA-Compatible: chrome=1
Access-Control-Allow-Origin: https://render.github.com
Or sometimes the following is appended to the end of an otherwise correctly downloaded file:
0
- Peer has closed the GnuTLS connection
However, when these things occur, the function seems to return just fine, so there is no way for me to verify that the file has really been downloaded. Is there any more reliable way to download a file in Elisp (without shelling out to wget/curl/whatever)?
As recommended, I have reported this as a bug: lists.gnu.org/archive/html/bug-gnu-emacs/2013-11/msg00758.html
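Whatever download mechanism you end up with, the corruption described above is at least detectable after the fact. Here is a language-neutral sketch of the check, written in Python for brevity (the function name is mine): a corrupted file either starts with a raw HTTP status line, contains the GnuTLS close message, or ends with a leftover chunked-transfer trailer.

```python
def looks_corrupted(data: bytes) -> bool:
    """Detect the failure modes described above: raw HTTP headers at
    the start of the file, a GnuTLS close message leaked into the
    body, or a stray chunked-transfer trailer (a lone "0" line)."""
    if data.startswith(b"HTTP/1."):
        return True
    if b"Peer has closed the GnuTLS connection" in data:
        return True
    # A leftover chunked trailer leaves a lone "0" as the last line.
    tail = data.rstrip(b"\r\n").rsplit(b"\n", 1)[-1].strip()
    return tail == b"0"
```

This is a heuristic (a legitimate file could end with a bare "0" line), but it catches both failure modes shown in the question; the same logic is straightforward to port to Elisp.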

How to request only for a web page header using netcat?

I am not sure whether netcat has a command for requesting only the HTTP header, or whether I should use sed to filter the result. If sed is used in this case, do I just have to extract everything before the first "<" occurs?
The command line tool curl has the -I option which only gets the headers:
-I/--head
(HTTP/FTP/FILE) Fetch the HTTP-header only! HTTP-servers feature the command HEAD which this uses to get nothing but the header of a document. When used on a FTP or FILE file, curl displays the file size and last modification time only.
Demo:
$ curl -I stackoverflow.co.uk
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 503
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Thu, 26 Sep 2013 21:06:15 GMT
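If you do want to use netcat itself, you send a raw HEAD request by hand; the server then returns only the status line and headers, with no body to filter out. A small Python sketch (the helper name is mine) that builds the request bytes you would pipe into `nc example.com 80`:

```python
def head_request(host: str, path: str = "/") -> bytes:
    """Build a raw HTTP/1.1 HEAD request suitable for piping into
    netcat. HTTP/1.1 requires the Host header; Connection: close
    makes the server hang up after responding."""
    return (f"HEAD {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")
```

The equivalent shell one-liner would be `printf 'HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80`, so no sed filtering is needed.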

iOS (iPhone/iPad): downloading a big PDF via Safari doesn't work

I have a small site designed to sell an HTTP-downloadable, ~300 MB, no-DRM PDF e-book of page-scanned images (download the test copy here: http://test.magicmedicine.eu/get/ac123457965d0d4b4d17557a73cf2fe8).
It works flawlessly on PC, Mac, and Android, but I'm experiencing issues with iOS: when the customer opens the download URL in Safari (I tried via broadband Wi-Fi+DSL), the page loads for ~45 seconds (the page is blank but the activity indicator rotates), then Safari exits with no error message at all.
I tried creating the PDF with the "Fast Web View" (= progressive download) attribute, and I also lowered the compatibility to the minimum (PDF version 1.3), with no results.
Application-side, the download is sent from Apache+PHP via mod_xsendfile (https://tn123.org/mod_xsendfile/) to the client with the following headers (my intent is to avoid the PDF-in-the-browser-via-plugin view):
HTTP/1.1 200 OK
Date: Wed, 23 May 2012 09:50:13 GMT
Server: Apache/2.2.15 (CentOS)
X-Powered-By: PHP/5.3.13
Expires: Thu, 24 May 2012 11:50:13 +0200
Cache-Control: must-revalidate, post-check=0, pre-check=0
Pragma: public
Content-Disposition: attachment; filename="book.pdf"
Last-Modified: Sun, 20 May 2012 11:26:54 GMT
ETag: "2e01b4-dde8a9b-4c07610070008"
Content-Length: 232688283
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: application/octet-stream
Any ideas?
Note: I asked this on SuperUser a couple of days ago and it was closed as "off topic". I hope it's OK to repost it here.
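One detail worth checking: the headers above don't advertise Accept-Ranges, so a client cannot resume an interrupted transfer, and for a ~232 MB file on a mobile browser that may matter. This is a diagnostic sketch, not a confirmed cause (the heuristic, threshold, and helper name are mine): it flags large downloads served without byte-range support.

```python
def resumable_large_download(headers, threshold=50 * 1024 * 1024):
    """Return False when a large response lacks byte-range support,
    which prevents clients from resuming an interrupted download."""
    h = {k.lower(): v for k, v in headers.items()}
    size = int(h.get("content-length", "0"))
    if size < threshold:
        return True  # small enough that resuming matters less
    return h.get("accept-ranges", "").lower() == "bytes"

# Abbreviated headers from the response above:
pdf_headers = {
    "Content-Length": "232688283",
    "Content-Type": "application/octet-stream",
}
```

If the check fails, enabling Range support in Apache (and verifying mod_xsendfile preserves it) would be a cheap experiment before digging further.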