Large Azure DevOps (and Azure DevOps Server 2019) changesets fail with "Request Entity Too Large" - azure-devops

We seem to be hitting a limit when trying to add a large binary file to Azure DevOps using the REST APIs. The same file check-in works fine using the old SOAP APIs and also works using the TFVC CLI (tf.exe). But we have a use case where we occasionally need to check in large files programmatically from machines that don't have VS installed. We're trying to migrate our application from the old SOAP APIs to the REST API because we're moving to .NET Core, where the SOAP API is not supported.
A POST to the /_apis/tfvc/changesets (Create Changeset) API with large (> about 19 MB) files results in:
HTTP 400: Bad Request
This issue has been reported a couple of times on the Azure DevOps .NET Samples GitHub repo, but that isn't the right forum for this question, so it hasn't been answered there.
https://github.com/microsoft/azure-devops-dotnet-samples/issues/176
https://github.com/microsoft/azure-devops-dotnet-samples/issues/104
How do we create large files in TFVC using the REST API?
Update 2020-01-13
Here's a sample console app we've used to demonstrate the problem:
using Microsoft.TeamFoundation.SourceControl.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApp1
{
    internal class Program
    {
        internal static async Task Main(string[] args)
        {
            var orgUrl = new Uri(args[0]);
            string serverPath = args[1];
            string localPath = args[2];
            string contentType = args[3];
            string pat = args[4];

            // A single "Add" change whose content is the local file, base64-encoded.
            var changes = new List<TfvcChange>()
            {
                new TfvcChange()
                {
                    ChangeType = VersionControlChangeType.Add,
                    Item = new TfvcItem()
                    {
                        Path = serverPath,
                        ContentMetadata = new FileContentMetadata()
                        {
                            Encoding = Encoding.UTF8.WindowsCodePage,
                            ContentType = contentType,
                        }
                    },
                    NewContent = new ItemContent()
                    {
                        Content = Convert.ToBase64String(File.ReadAllBytes(localPath)),
                        ContentType = ItemContentType.Base64Encoded
                    }
                }
            };

            var changeset = new TfvcChangeset()
            {
                Changes = changes,
                Comment = $"Added {serverPath} from {localPath}"
            };

            // Authenticate with a personal access token and create the changeset.
            var connection = new VssConnection(orgUrl, new VssBasicCredential(string.Empty, pat));
            var tfvcClient = connection.GetClient<TfvcHttpClient>();
            await tfvcClient.CreateChangesetAsync(changeset);
        }
    }
}
Running this console app with a ~50MB zip file against an Azure DevOps TFVC repository results in a VssServiceResponseException (with inner ArgumentException) with a message of: The maximum request size of 26214400 bytes was exceeded.
Update 2020-01-14
Added display of file size and base64 content length before sending TFVC request in my sample code.
Corrected my observations about the file size limit.
Corrected the error code returned by Azure DevOps (400, not 413).
Originally I stated that the limit was around 13 MB. That was based on the failures I saw with files > 20 MB, successes with files < about 10 MB, and the limits described in the linked GitHub issues.
I have since run more tests of my own and narrowed the actual limit down to about 19,200 KB. This is consistent with the 26,214,400-byte limit in the error message: base64 encoding inflates content by a factor of 4/3, and 19,200 KB = 19,660,800 bytes, which multiplied by 4/3 is exactly 26,214,400 bytes.
I also noticed in my most recent tests, while monitoring with Fiddler, that Azure DevOps returns a 400 status code (not 413). My notes indicated that a 413 Request Entity Too Large was observed at some point in the past; perhaps that was with an older version of Azure DevOps Server. In any case, the error we see now is 400 Bad Request.
Here is what Fiddler shows for the request headers:
POST /<...REMOVED...>/_apis/tfvc/changesets HTTP/1.1
Host: dev.azure.com
Accept: application/json; api-version=5.1
User-Agent: VSServices/16.153.29226.1 (NetStandard; Microsoft Windows 10.0.18363)
X-VSS-E2EID: 6444f0b5-57e0-45da-bd86-a4c62d8a1794
Accept-Language: en-US
X-TFS-FedAuthRedirect: Suppress
X-TFS-Session: 9f8e8272-db48-4e93-b9b0-717937244aff
Expect: 100-continue
Authorization: Basic <...REMOVED...>
Accept-Encoding: gzip
Content-Type: application/json; charset=utf-8; api-version=5.1
Content-Length: 26324162
And the raw response:
HTTP/1.1 400 Bad Request
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 206
Content-Type: application/json; charset=utf-8
Expires: -1
P3P: CP="CAO DSP COR ADMa DEV CONo TELo CUR PSA PSD TAI IVDo OUR SAMi BUS DEM NAV STA UNI COM INT PHY ONL FIN PUR LOC CNT"
X-TFS-ProcessId: 7611d69f-e722-4108-8050-e55a61b1cbb4
Strict-Transport-Security: max-age=31536000; includeSubDomains
ActivityId: 15e046e5-3788-4fdb-896a-7d0482121ddd
X-TFS-Session: 9f8e8272-db48-4e93-b9b0-717937244aff
X-VSS-E2EID: 6444f0b5-57e0-45da-bd86-a4c62d8a1794
X-VSS-UserData: <...REMOVED...>
X-FRAME-OPTIONS: SAMEORIGIN
Request-Context: appId=cid-v1:e3d45cd2-3b08-46bc-b297-cda72fdc1dc1
Access-Control-Expose-Headers: Request-Context
X-Content-Type-Options: nosniff
X-MSEdge-Ref: Ref A: 7E9E95F5497946AC87D75EF3AAD06676 Ref B: CHGEDGE1521 Ref C: 2020-01-14T14:01:59Z
Date: Tue, 14 Jan 2020 14:02:00 GMT
{"$id":"1","innerException":null,"message":"The maximum request size of 26214400 bytes was exceeded.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}

So far, the most practical workaround is to give IIS a sufficiently large buffer by modifying the configuration (this applies to Azure DevOps Server, where you control IIS).
On the server machine, open IIS Manager:
(1) Select the site of your collection
(2) Double-click "Configuration Editor"
(3) Enter system.webServer/serverRuntime in the Section field
(4) Increase the uploadReadAheadSize value to fit your scenario
Then click Apply to apply the changes.
I know this can be costly for large request bodies, but as far as I know it is the usual approach.
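If you prefer to script that change rather than click through IIS Manager, here is a sketch using the Microsoft.Web.Administration API (run it elevated on the server; "Azure DevOps Server" is a placeholder for your collection's actual site name, and the size is just an example):
using Microsoft.Web.Administration;

using (var serverManager = new ServerManager())
{
    // Edits applicationHost.config for the named site.
    var config = serverManager.GetApplicationHostConfiguration();
    var serverRuntime = config.GetSection("system.webServer/serverRuntime", "Azure DevOps Server");

    // uploadReadAheadSize is in bytes; pick a value covering your largest request body.
    serverRuntime["uploadReadAheadSize"] = 104857600; // 100 MB, for example

    serverManager.CommitChanges();
}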

I opened a support ticket with Microsoft. The support engineer indicated that this was a known limitation of the Azure DevOps service. He suggested that I create a feature request. I've done that. If this limitation is problematic for you, please upvote the feature request.
https://developercommunity.visualstudio.com/idea/1130401/allow-creating-large-tfvc-changesets-via-the-api.html

Related

How can I find out why ISAPI is returning a 302 status for a specific file?

I have a website served by IIS 10 on a Windows Server 2019 running Plesk. The site is mainly Classic ASP. I have a staging subdomain at staging.example.com, with the production site at www.example.com.
The two are fairly strictly separated, except that I don't store image files, PDFs and such things on the staging server; I have a URL rewrite directive that redirects to the production site with a 302 status when the URL does not match the following regex:
\.(php|asp|js|css|csv|json|htm|html|svg|svgz)(\?.+)?$
This generally works well: ASP pages are served from the staging site when the staging URL is called, but images on the page are pulled from the production site.
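To double-check the pattern itself outside IIS, here is a quick sketch (C#; .NET's regex flavor is close enough to the ECMAScript flavor the URL Rewrite module uses for a pattern like this, and the paths are made-up examples):
using System;
using System.Text.RegularExpressions;

class PatternCheck
{
    static void Main()
    {
        // The "don't redirect" pattern from the rewrite rule above.
        var keepLocal = new Regex(@"\.(php|asp|js|css|csv|json|htm|html|svg|svgz)(\?.+)?$",
                                  RegexOptions.IgnoreCase);

        foreach (var path in new[] { "/path/to/file.asp", "/images/logo.png", "/path/to/file.asp?x=1" })
        {
            // Matching URLs are served from staging; everything else 302s to production.
            Console.WriteLine($"{path} -> {(keepLocal.IsMatch(path) ? "serve from staging" : "302 to www")}");
        }
    }
}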
Except that there’s one ASP file which – for some reason – gives a 302 and redirects to the production site no matter what I do. The file exists in both locations. I’ve tested the URL in the pattern tester provided in the IIS URL-rewrite section, and it matches the pattern (meaning it shouldn’t redirect).
When I trace the request (that is, the initial request to the staging URL) in Firefox’s browser console, I get the following response headers (redacted):
HTTP/2 302 Found
cache-control: no-cache
content-type: text/html
location: https://www.example.com/path/to/file.asp
server: Microsoft-IIS/10.0
set-cookie: ASPSESSION****=********; secure; path=/
x-powered-by: ASP.NET
x-powered-by-plesk: PleskWin
date: Sun, 19 Dec 2021 18:52:05 GMT
content-length: 201
And the request headers:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5
Authorization: Basic *************
Connection: keep-alive
Cookie: [cookies]
Host: staging.example.com
Referer: https://staging.example.com/path/to/file.asp
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: same-origin
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:96.0) Gecko/20100101 Firefox/96.0
I’ve painstakingly gone through the entire file and all the file includes within it, and I can’t find any kind of Response.Redirect in any of them that might be responsible.
So it seems it’s IIS that’s redirecting with a 302… despite the fact that there doesn’t seem to be a directive that tells it to do this.
Is there a way to trace exactly what on the server is causing this 302 for one specific file? Some sort of tracing mechanism that tells me where the request gets passed on to before the 302 response is returned?
Update 26 Dec
Based on samwu’s comment, I’ve enabled Failed Request Tracing for the page, and looking through the resulting .frb file, it’s clear that none of the rewrite conditions are met – they all have succeed: false. It seems the redirect is not happening in the WWW Server at all, in fact, but in the ISAPI extension. This is the only place that the production site URL is mentioned at all in the request trace (except of course in the GENERAL_RESPONSE_HEADER section at the very end):
ISAPI_START
MODULE_SET_RESPONSE_SUCCESS_STATUS ModuleName="IsapiModule", Notification="EXECUTE_REQUEST_HANDLER", HttpStatus="302", HttpReason="Object moved"
GENERAL_SET_RESPONSE_HEADER HeaderName="Location", HeaderValue="https://www.example.com/path/to/file.asp", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Content-Length", HeaderValue="201", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Content-Type", HeaderValue="text/html", Replace="false"
GENERAL_SET_RESPONSE_HEADER HeaderName="Cache-control", HeaderValue="no-cache", Replace="false"
NOTIFY_MODULE_COMPLETION ModuleName="IsapiModule", Notification="EXECUTE_REQUEST_HANDLER", fIsPostNotificationEvent="false", CompletionBytes="0", ErrorCode="The operation completed successfully. (0x0)"
ISAPI_END
In the ISAPI Filters section in IIS Manager, there are four filters: a 32-bit and a 64-bit version for ASP.Net 2.0 and the same for ASP.Net 4.0, all called aspnet_filter.dll. I’m guessing these are standard filters – I know for certain, at least, that we haven’t mucked about with any ISAPI filters at all.
As should be obvious by now, I’m not really a server admin, and ISAPI filters are definitely above my level of knowledge.
So how do I proceed from here? How do I figure out why ISAPI is redirecting?

How to upload an image using AWS API Gateway Proxy Integration with S3

After setting up my API to upload files, I realised there is a special case: you want to upload a picture (jpg), you have defined binary support at the API, but you get the following error:
The request signature we calculated does not match the signature you
provided. Check your AWS Secret Access Key and signing method.
Consult the service documentation for details.
The Canonical String for this request should have been
'PUT /test/vi-dummy-bucket/testImg2.jpg
content-type:application/x-www-form-urlencoded
host:qhweyos7z2.execute-api.us-west-1.amazonaws.com
x-amz-date:20170808T154441Z
x-amz-security-token: // security token string no quotes
content-type;host;x-amz-date;x-amz-security-token 5fa90f0 ...'
The String-to-Sign should have been
'AWS4-HMAC-SHA256\n20170808T154441Z
20170808/us-west-1/execute-api/aws4_request
f7a38fa ...'
The strange thing is that uploading simple text files works with the exact same API call; the only thing I have to change is
Content-Type 'text/plain'
and write a text in the raw portion of the request.
Not sure if this is a Content-Type issue or a request body issue: if I leave everything in the working state (text/plain & text in the body) and just change the body to binary and set the image, I get the above error.
My API gateway is in us-west-1 region
My S3 bucket is in us-east-1 region
And the request I am using is:
PUT /test/vi-dummy-bucket/testImg2.jpg HTTP/1.1
Host: qhwe7z2.execute-api.us-west-1.amazonaws.com
Content-Type: application/x-www-form-urlencoded
X-Amz-Security-Token: FQoDYX ...
X-Amz-Date: 20170808T154441Z
Authorization: AWS4-HMAC-SHA256 Credential=ASIAJICO6JFTJWN7A/20170808/us-west-1/execute-api/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-security-token, Signature=6a792 ...
Cache-Control: no-cache
Postman-Token: e9d1f730-f50b-7e27-70cc-c15a138d8cc6
(Binary Image)
This is another version of the request (same error):
PUT /test/vi-dummy-bucket/testImg2.jpg HTTP/1.1
Content-Type: image/jpeg
x-amz-security-token: FQoDY ...
x-amz-date: 20170808T190134Z
Authorization: AWS4-HMAC-SHA256 Credential=ASIAIZSP5YKVLJ3GVVQA/20170808/us-west-1/execute-api/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-security-token, Signature=b2324 ...
Host: qhos7z2.execute-api.us-west-1.amazonaws.com
Connection: close
User-Agent: Paw/3.1.2 (Macintosh; OS X/10.12.6) GCDHTTPRequest
Content-Length: 823236
--- UPDATE ---
After implementing the SigV4 signing manually using the generated SDK, the signature is no longer an issue.
The only problem left is that the generated SDK only accepts a string as the "body", so I have to convert the file to a binary string. It then passes correctly and a file is created in S3, but the size is now double and it's not viewable, as if the binary string wasn't converted back to a binary file. So frustrating...
BTW, I've already tried PASSTHROUGH and CONVERT_TO_BINARY.
Updated: It looks like this may be related to a known error in Postman. For reference here is a related SO question: AWS Signature Error using Postman to access the AWS API Gateway when posting a binary
and here is the bug report for Postman: https://github.com/postmanlabs/postman-app-support/issues/3232
Does the request work if you use an alternate rest client and/or a command line utility like curl or httpie?
If you configured the binary support you should probably set the Content-Type to match the binary content you're sending.
From what you've posted, you're sending the binary content with Content-Type application/x-www-form-urlencoded, but if the body is actually a binary jpeg file, I'd expect you to send Content-Type image/jpeg.
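To illustrate the point, here is a minimal sketch in C# (SigV4 signing omitted; the URL and file name are taken from the question): the body goes out as raw bytes with a matching Content-Type, never as a decoded/re-encoded string.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class BinaryPut
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Send the file as raw bytes; round-tripping through a string is what
        // doubles the size and corrupts the image.
        byte[] imageBytes = File.ReadAllBytes("testImg2.jpg");
        var body = new ByteArrayContent(imageBytes);
        body.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

        // SigV4 headers (Authorization, X-Amz-Date, ...) are omitted here; whatever
        // signs the request must sign the same Content-Type value sent below.
        var response = await http.PutAsync(
            "https://qhweyos7z2.execute-api.us-west-1.amazonaws.com/test/vi-dummy-bucket/testImg2.jpg",
            body);

        Console.WriteLine(response.StatusCode);
    }
}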

API Gateway Mock Integration Fails with 500

I have an API Gateway integration for a method/resource which works when I test-invoke it, but not when I actually call the deployed API:
$ aws apigateway test-invoke-method --rest-api-id $REST_API_ID \
--resource-id $RESOURCE_ID --http-method GET | jq -r .log,.body
This works out fine and I get the following output:
Tue May 16 17:46:42 UTC 2017 : Starting execution for request: test-invoke-request
Tue May 16 17:46:42 UTC 2017 : HTTP Method: GET, Resource Path: /status.json
Tue May 16 17:46:42 UTC 2017 : Method request path: {}
Tue May 16 17:46:42 UTC 2017 : Method request query string: {}
Tue May 16 17:46:42 UTC 2017 : Method request headers: {}
Tue May 16 17:46:42 UTC 2017 : Method request body before transformations:
Tue May 16 17:46:42 UTC 2017 : Endpoint response body before transformations:
Tue May 16 17:46:42 UTC 2017 : Endpoint response headers: {}
Tue May 16 17:46:42 UTC 2017 : Method response body after transformations: { "statusCode": 200 }
Tue May 16 17:46:42 UTC 2017 : Method response headers: {Content-Type=application/json}
Tue May 16 17:46:42 UTC 2017 : Successfully completed execution
Tue May 16 17:46:42 UTC 2017 : Method completed with status: 200
{ "statusCode": 200 }
However, I cannot access this at my URL, which is api.naftuli.wtf/v1/status.json. I have stages defined at glhf, stable, and v1, so by replacing that segment you will see different responses. I simply want a dummy response that returns a 200 JSON blob.
My Terraform for the resources is here as a Gist. Hopefully this fully shows my API Gateway configuration.
If I test invoke this from the CLI or from the web console, I get back what is expected. However, if I curl this from my deployed API at api.naftuli.wtf, I don't get anything nice:
$ for stage in glhf stable v1 ; do
>   url="https://api.naftuli.wtf/${stage}/status.json"
>   echo "${url}:"
>   curl -i -H 'Content-Type: application/json' \
>     "https://api.naftuli.wtf/${stage}/status.json"
>   echo -e '\n'
> done
https://api.naftuli.wtf/glhf/status.json:
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 36
Connection: keep-alive
Date: Tue, 16 May 2017 21:41:38 GMT
x-amzn-RequestId: 712ba52b-3a80-11e7-9fec-b79b62d3bf7f
X-Cache: Error from cloudfront
Via: 1.1 da7a5d0ed7f424609000879e43743066.cloudfront.net (CloudFront)
X-Amz-Cf-Id: hBwlbPCP9n2rlz53I-Qb9KoffHB_FoxUCZUaJYNnU3XhCWuMpQTP1Q==
{"message": "Internal server error"}
https://api.naftuli.wtf/stable/status.json:
HTTP/1.1 403 Forbidden
Content-Type: application/json
Content-Length: 23
Connection: keep-alive
Date: Tue, 16 May 2017 21:41:38 GMT
x-amzn-RequestId: 71561066-3a80-11e7-9b00-6700be628328
x-amzn-ErrorType: ForbiddenException
X-Cache: Error from cloudfront
Via: 1.1 0c146399837c7d36c1f0f9d2636f8cf8.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ITX765xD8s4sNuOdXaJ2kPvqPo-w_dsQK3Sq_No130FAHxFuoVhO8w==
{"message":"Forbidden"}
https://api.naftuli.wtf/v1/status.json:
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 36
Connection: keep-alive
Date: Tue, 16 May 2017 21:41:39 GMT
x-amzn-RequestId: 7185fa99-3a80-11e7-a3b1-2f9e659fc361
X-Cache: Error from cloudfront
Via: 1.1 586f1a150b4ba39f3a668b8055d4d5ea.cloudfront.net (CloudFront)
X-Amz-Cf-Id: dvnOa1s-YlwLSNzBfVyx5tSL6XrjFJM4_fES7MyTofykB3ReU5R1fg==
{"message": "Internal server error"}
My understanding of stages was that they were additional path prefixes under which all API resources were available. If I had a stage called v1 with a path of /v1, I'd expect an API Gateway resource for status.json to basically be mapped under /v1, yielding /v1/status.json.
I may be misunderstanding how API Gateway base path mappings and stages work, but CloudWatch tells me that the call is at least happening, though failing for some obscure reason:
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Verifying Usage Plan for request: c5be3842-6af4-4725-a34f-d6eea8042d17. API Key: API Stage: tcips69qx2/prod_v1
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) API Key authorized because method 'GET /status.json' does not require API Key. Request will not contribute to throttle or quota limits
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Usage Plan check succeeded for API Key and API Stage tcips69qx2/prod_v1
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Starting execution for request: c5be3842-6af4-4725-a34f-d6eea8042d17
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) HTTP Method: GET, Resource Path: /v1/status.json
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Execution failed due to configuration error: statusCode should be an integer which defined in request template
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Method completed with status: 500
Apparently only traffic across the V1 stage is getting through to CloudWatch logs. I have a misconfiguration somewhere and I can't seem to find it.
Can you try and change your request template in the integration request setup to this:
{
    "statusCode": 200
}
API Gateway looks for the status code to return in the response in your integration request template. The response is generated by the mapping template in the integration response. I can see from your Terraform setup that you are loading the output JSON file in the integration request template. This is content that API Gateway does not expect.
With Mock Integration in Amazon API Gateway, there are two common reasons for a 500 Internal Server error.
1. Check the mapping template in the Integration Request and ensure that you are passing statusCode as an integer to the MOCK integration endpoint.
{
    "statusCode": <Integer_Status_code>
}
Note: Make sure that the status code is passed as an integer, not a string.
Correct: 200. Incorrect: "200"
2. Mock integrations do not support binary content. If the API has binary support enabled and has application/json or */* set as binaryMediaTypes, MOCK integration endpoints will throw a 500 Internal Server error when trying to transform the content.
A workaround is to update the contentHandling property of the MOCK integration to CONVERT_TO_TEXT.
Read more here: https://cloudnamaste.com/500-internal-server-error-mock-integration/
In my case, when I deploy the Lambda with the Serverless Framework, the OPTIONS method returns 200 when called. However, when I configure it manually in the AWS API Gateway console, it returns 500 Internal Server Error.
When I check the execution log of API Gateway, it says:
(ee0a42d9-2cfc-4788-8679-00fbd7938cf1) Method request body before transformations: [Binary Data]
(ee0a42d9-2cfc-4788-8679-00fbd7938cf1) Execution failed due to configuration error: Unable to transform request
(ee0a42d9-2cfc-4788-8679-00fbd7938cf1) Gateway response body:
{
"message": "Internal server error"
}
After you create the resource and its OPTIONS method, select the OPTIONS method, then:
At Method Execution --> Integration Request --> expand Mapping Templates, choose "When no template matches the request Content-Type header". Add or select "application/json" as the Content-Type. Click "application/json", and in the Generate template box, paste
{statusCode: 200} without any double quotes. Note: if {"statusCode": 200} exists with double quotes, edit it to match the version above. Then save it.
At Method Execution --> Integration Response --> expand the 200 response status --> Mapping Template --> Add or select "application/json" in the Content-Type. Make sure that the Generate template box is empty. Then Save it
At Method Execution --> Method Response --> expand the 200 response. In Response Headers for 200, add three headers: Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Origin. Leave the Response Body for 200 to be empty
Action --> Enable CORS for the Resource
Action --> Deploy API
You have at least two distinct problems with your configuration.
First, one of your three base path mappings doesn't match the way you're trying to invoke your API. Note that the base paths don't have to be the same as your stage names, but they can be if you desire. Since your base path mappings include base paths and stage names, API Gateway is expecting the invoke path to include a base path mapping and not a stage, so it is interpreting the [glhf stable v1] portion of your path as a base path and looking for the corresponding base path mapping entry to determine the API and stage to use. This works fine for the v1 and glhf base paths which return 500 (indicating a different problem). The stable base path (in https://api.naftuli.wtf/stable/status.json) returns a 403 Forbidden because there is no base path of "stable" defined for the domain name api.naftuli.wtf. The stable stage is mapped to the "latest" base path, so calling https://api.naftuli.wtf/latest/status.json should be the way to call the stable stage. This doesn't currently work, and I don't know why. If you tell me what region your running this in, I can look-up the config and do more digging.
The second problem is indicated by the following entry from your CloudWatch logs:
Execution failed due to configuration error: statusCode should be an integer which defined in request template
Can you check that your integration request template (in the file you reference in "${file("${path.module}/files/status.json")}") contains statusCode: 200 as a top-level attribute?
I also found it surprising that you're using the same file for a request template and a response template.
Having had identical errors, I found that what helped me solve this issue was to delete my OPTIONS request definition in the AWS Console. I then followed the console's "Enable CORS" form, which created a new OPTIONS method.
I subsequently ran terraform plan and looked at the diff between my OPTIONS definition and theirs. Given that the OPTIONS method created by the AWS Console worked, I applied the changes.
Using Terraform 0.12 or greater makes this possible, as the terraform plan output detail is more fine-grained.
I was doing this in CloudFormation.
It took me a while to get it working, and the accepted answer here was extremely helpful but a little vague, so I'm adding some more info.
Stefano Buliani's answer, in CloudFormation YAML, looks like:
RequestTemplates:
  application/json: |
    { statusCode: 200 }
What was especially weird here: apparently the fix was simply to create a deployment using the AWS CLI for each of the stages. Terraform was not updating or re-kicking deployments on changes, so my changes never got out.
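For anyone who would rather do that redeploy from code than from the CLI, here is a sketch using the AWS SDK for .NET (assumes the Amazon.APIGateway package; the REST API id is the one from the CloudWatch log above, and the stage names come from the question):
using System;
using System.Threading.Tasks;
using Amazon.APIGateway;
using Amazon.APIGateway.Model;

class RedeployStages
{
    static async Task Main()
    {
        var client = new AmazonAPIGatewayClient();

        // Create a fresh deployment per stage so template changes actually go out.
        foreach (var stage in new[] { "glhf", "stable", "v1" })
        {
            var deployment = await client.CreateDeploymentAsync(new CreateDeploymentRequest
            {
                RestApiId = "tcips69qx2",
                StageName = stage,
                Description = $"Manual redeploy of {stage}"
            });
            Console.WriteLine($"{stage}: deployment {deployment.Id}");
        }
    }
}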
I had a similar problem and eventually figured out that my client was using a different content type than I expected. I had foolishly assumed it would use application/json, but it was some custom json thing. In my setup, API Gateway is logging to cloudwatch, which is where I found the content type it received from the client. Once I updated the content type in the request template of the mock integration, things worked as expected.

Uber API issue with CORS

First time asking a question here. I'm a beginner at this, but I'm truly stumped by the problem I'm facing.
Browsers in use:
Safari and Firefox (both on Mac OS Sierra)
Firefox (Linux - Ubuntu 16.04.2)
I am registered as an Uber Developer and have registered an App in the Dashboard. I'm only using the Server Token for authentication at the moment. In the Dashboard, I have set the following entries in the "Authorizations" tab of the App for CORS (Optional URI for CORS Support):
http://localhost:8000 <-- web server in my PC
https://subdomain.mydomain.com <--- remote web server
A few months ago I created a web app using HTML, CSS and JS (with jQuery v2.2.4) to play around with the Ride Estimates API, and I was able to get it to report data for many locations in my area successfully. Somehow it no longer works. I'm trying to fix that and improve the functionality. However, I just can't get past the initial query to the API because of CORS issues that did not exist before.
My API URL is:
https://api.uber.com/v1/estimates/price?start_latitude=8.969145&start_longitude=-79.5177675&end_latitude=8.984104&end_longitude=-79.517467&server_token={*********SERVER*TOKEN**********}
When I paste that into the address bar of the browser, I get valid JSON:
{"prices":[{"localized_display_name":"uberX","distance":1.58,"display_name":"uberX","product_id":"811c3224-5554-4d29-98ae-c4366882011f","high_estimate":3,"surge_multiplier":1.0,"minimum":2,"low_estimate":2,"duration":420,"estimate":"2-3\u00a0$","currency_code":"USD"},{"localized_display_name":"X English","distance":1.58,"display_name":"X English","product_id":"8fe2c122-a4f0-43cc-97e0-ca5ef8b57fbc","high_estimate":4,"surge_multiplier":1.0,"minimum":3,"low_estimate":3,"duration":420,"estimate":"3-4\u00a0$","currency_code":"USD"},{"localized_display_name":"uberXL","distance":1.58,"display_name":"uberXL","product_id":"eb454d82-dcef-4d56-97ca-04cb11844ff2","high_estimate":4,"surge_multiplier":1.0,"minimum":3,"low_estimate":3,"duration":420,"estimate":"3-4\u00a0$","currency_code":"USD"},{"localized_display_name":"Uber Black","distance":1.58,"display_name":"Uber Black","product_id":"ba49000c-3b04-4f54-8d50-f7ae0e20e867","high_estimate":6,"surge_multiplier":1.0,"minimum":4,"low_estimate":4,"duration":420,"estimate":"4-6\u00a0$","currency_code":"USD"},{"localized_display_name":"Uber SUV","distance":1.58,"display_name":"Uber SUV","product_id":"65aaf0c2-655a-437d-bf72-5d935cf95ec9","high_estimate":7,"surge_multiplier":1.0,"minimum":5,"low_estimate":5,"duration":420,"estimate":"5-7\u00a0$","currency_code":"USD"}]}
I then proceed to set up the JS (w/ jQuery) code in the webpage...
var url = "https://api.uber.com/v1/estimates/price?start_latitude=8.969145&start_longitude=-79.5177675&end_latitude=8.984104&end_longitude=-79.517467&server_token={*********SERVER*TOKEN**********}";
$.getJSON(url, function(result){
    console.log(result);
});
Uploading the HTML and JS to my remote web server and then loading the webpage in any of my browsers yields a 200 status from Uber API. However, the console log shows CORS blocking my request (PROBLEM #1):
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://api.uber.com/v1/estimates/price?start_latitude=8.969145&start_longitude=-79.5177675&end_latitude=8.984104&end_longitude=-79.517467&server_token={*********SERVER*TOKEN**********}. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
Then, in the Inspector view of both Mac browsers, under the Network / Resources areas, I see the 200 status message from the GET request, along with this Response message (PROBLEM #2):
SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data
The Request Headers are:
GET /v1/estimates/price?start_latitude=8.969145&start_longitude=-79.5177675&end_latitude=8.984104&end_longitude=-79.517467&server_token={*********SERVER*TOKEN**********} HTTP/1.1
Host: api.uber.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:52.0) Gecko/20100101 Firefox/52.0
Accept: application/json, text/javascript, */*; q=0.01
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: https://subdomain.domain.com/Uber/index.html
Origin: https://subdomain.domain.com
Connection: keep-alive
The Response Headers are:
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 19 Mar 2017 22:26:31 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Content-Geo-System: wgs-84
Content-Language: en
X-Rate-Limit-Limit: 2000
X-Rate-Limit-Remaining: 1998
X-Rate-Limit-Reset: 1489964400
X-Uber-App: uberex-nonsandbox, optimus, migrator-uberex-optimus
Strict-Transport-Security: max-age=604800
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Content-Encoding: gzip
In Firefox for Linux I sometimes don't get the Syntax Error; I always seem to get it in the Mac browsers. In Linux, when I do get the error, clicking the "Edit and Resend" button (resending the headers without actually editing them) makes the Syntax Error disappear, and the Response text actually shows the Uber API object that is supposed to be there... but I still get the CORS Blocked message in the console log. I really don't understand why this is; it seems contradictory. In the end, I am unable to use the API data that, using the same method months ago, I could get for several dozen locations.
I have looked for answers in similar questions, but so far I have found none that apply to my case. Any help will be greatly appreciated; I'm getting really frustrated and am truly stuck here.
This issue was caused by the API not including the header correctly. The issue is resolved and the API is now working as expected. Note also that the allow-origin header will only be returned in a response if an origin is specified in the request.
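Since CORS is enforced by the browser, one way to confirm that last behavior is to send an Origin header from a plain HTTP client and inspect the response directly; here is a sketch in C# (the origin and server token are placeholders):
using System;
using System.Net.Http;
using System.Threading.Tasks;

class CorsProbe
{
    static async Task Main()
    {
        using var http = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://api.uber.com/v1/estimates/price?start_latitude=8.969145&start_longitude=-79.5177675" +
            "&end_latitude=8.984104&end_longitude=-79.517467&server_token=YOUR_SERVER_TOKEN");

        // Per the answer above, the allow-origin header is only emitted
        // when the request carries an Origin header.
        request.Headers.Add("Origin", "https://subdomain.mydomain.com");

        var response = await http.SendAsync(request);
        Console.WriteLine($"Status: {(int)response.StatusCode}");
        Console.WriteLine(response.Headers.TryGetValues("Access-Control-Allow-Origin", out var values)
            ? $"Access-Control-Allow-Origin: {string.Join(", ", values)}"
            : "No Access-Control-Allow-Origin header in response");
    }
}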

The MAC signature found in the HTTP request '...' is not the same as any computed signature

I'm sending the following request in Postman to retrieve a simple .jpg from Azure Blob storage at this URL https://steamo.blob.core.windows.net/testcontainer/dog.jpg
GET /testcontainer/dog.jpg HTTP/1.1
Host: steamo.blob.core.windows.net
Authorization: SharedKey steamo:<my access key>
x-ms-date: Tue, 26 May 2015 17:35:00 GMT
x-ms-version: 2014-02-14
Cache-Control: no-cache
Postman-Token: b1134f8a-1a03-152c-2810-9cb351efb9ce
If you're unfamiliar with Postman it is just a REST client - the Postman-Token header can probably be ignored.
My access key is copied from my Azure Management Portal.
I get this error:
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:2482503d-0001-0033-60da-9708ed000000 Time:2015-05-26T17:35:41.4577821Z
With this AuthenticationErrorDetail:
The MAC signature found in the HTTP request '<my access key>' is not the same as any computed signature. Server used following string to sign: 'GET x-ms-date:Tue, 26 May 2015 17:35:00 GMT x-ms-version:2014-02-14 /steamo/testcontainer/dog.jpg'.
How do I fix this? Let me know if you need any more info from me.
Authentication for Azure Storage is not simply a matter of providing the access key (that would not be very secure). You need to create a signature string that represents the given request, sign that string with the HMAC-SHA256 algorithm (using your storage key to sign), and encode the result in base 64. See https://msdn.microsoft.com/en-us/library/azure/dd179428.aspx for full details, including how to construct the signature string.
Just got this working, here's my code:
string signWithAccountKey(string stringToSign, string accountKey)
{
    var hmacsha = new System.Security.Cryptography.HMACSHA256();
    hmacsha.Key = Convert.FromBase64String(accountKey);
    var signature = hmacsha.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
    return Convert.ToBase64String(signature);
}
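And a sketch of how it plugs in, using the string the server said it used to sign (the error detail above collapses the newlines between fields; the Shared Key scheme also requires a series of standard-header fields, empty here, between the verb and the x-ms-* headers, so see the MSDN link for the exact list and order):
// Hypothetical usage; accountKey is your base64 storage account key.
string stringToSign = string.Join("\n",
    "GET",
    // ...the (empty) standard-header fields required by the spec go here...
    "x-ms-date:Tue, 26 May 2015 17:35:00 GMT",
    "x-ms-version:2014-02-14",
    "/steamo/testcontainer/dog.jpg");

string signature = signWithAccountKey(stringToSign, accountKey);

// The Authorization header carries the computed signature, not the raw key.
var request = new HttpRequestMessage(HttpMethod.Get,
    "https://steamo.blob.core.windows.net/testcontainer/dog.jpg");
request.Headers.TryAddWithoutValidation("x-ms-date", "Tue, 26 May 2015 17:35:00 GMT");
request.Headers.Add("x-ms-version", "2014-02-14");
request.Headers.TryAddWithoutValidation("Authorization", $"SharedKey steamo:{signature}");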