I used this tutorial and created a "put" endpoint successfully.
https://sanderknape.com/2017/10/creating-a-serverless-api-using-aws-api-gateway-and-dynamodb/
When I follow this advice, I get an "authorization required" error.
Using your favorite REST client, try to PUT an item into DynamoDB
using your API Gateway URL.
Python is my favorite client:
import requests
api_url = "https://0pg2858koj.execute-api.us-east-1.amazonaws.com/tds"
PARAMS = {"name": "test", "favorite_movie":"asdsf"}
r = requests.put(url=api_url, params=PARAMS)
The response is 403.
My test from the console is successful, but I am not able to put a record from Python.
The first step you can take to resolve the problem is to investigate the information returned by AWS in the 403 response. It will include an x-amzn-ErrorType header and an error message with information about the concrete error. You can inspect them with curl in verbose mode (-v) or from your Python code. Please review the relevant documentation for a detailed enumeration of all the possible error reasons.
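For example, with requests you can print the status code, the x-amzn-ErrorType header, and the body of the failed call; a minimal diagnostic sketch reusing the api_url and PARAMS from the question:

import requests

api_url = "https://0pg2858koj.execute-api.us-east-1.amazonaws.com/tds"
PARAMS = {"name": "test", "favorite_movie": "asdsf"}

r = requests.put(url=api_url, params=PARAMS)
print(r.status_code)                        # 403
print(r.headers.get("x-amzn-ErrorType"))    # e.g. MissingAuthenticationTokenException
print(r.text)                               # body with the error message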
In any case, looking at your code, it is very likely that you did not provide the necessary authentication or authorization information to AWS.
The kind of information that you must provide depends on which mechanism you configured to access your REST API in API Gateway.
If, for instance, you configured IAM based authentication, you need to set up your Python code to generate an Authorization header with an AWS Signature derived from your user access key ID and associated secret key. The AWS documentation provides an example of use with Postman.
The AWS documentation also provides several examples of how to use Python and requests to perform this kind of authorization.
Consider, for instance, this example for posting information to DynamoDB:
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of the
# License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
# OF ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# AWS Version 4 signing example
# DynamoDB API (CreateTable)
# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a POST request and passes request parameters
# in the body (payload) of the request. Auth information is passed in
# an Authorization header.
import sys, os, base64, datetime, hashlib, hmac
import requests # pip install requests
# ************* REQUEST VALUES *************
method = 'POST'
service = 'dynamodb'
host = 'dynamodb.us-west-2.amazonaws.com'
region = 'us-west-2'
endpoint = 'https://dynamodb.us-west-2.amazonaws.com/'
# POST requests use a content type header. For DynamoDB,
# the content is JSON.
content_type = 'application/x-amz-json-1.0'
# DynamoDB requires an x-amz-target header that has this format:
# DynamoDB_<API version>.<operationName>
amz_target = 'DynamoDB_20120810.CreateTable'
# Request parameters for CreateTable--passed in a JSON block.
request_parameters = '{'
request_parameters += '"KeySchema": [{"KeyType": "HASH","AttributeName": "Id"}],'
request_parameters += '"TableName": "TestTable","AttributeDefinitions": [{"AttributeName": "Id","AttributeType": "S"}],'
request_parameters += '"ProvisionedThroughput": {"WriteCapacityUnits": 5,"ReadCapacityUnits": 5}'
request_parameters += '}'
# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def getSignatureKey(key, date_stamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), date_stamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning
# Read AWS access key from env. variables or configuration file. Best practice is NOT
# to embed credentials in code.
access_key = os.environ.get('AWS_ACCESS_KEY_ID')
secret_key = os.environ.get('AWS_SECRET_ACCESS_KEY')
if access_key is None or secret_key is None:
    print('No access key is available.')
    sys.exit()
# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amz_date = t.strftime('%Y%m%dT%H%M%SZ')
date_stamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope
# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
# Step 1 is to define the verb (GET, POST, etc.)--already done.
# Step 2: Create canonical URI--the part of the URI from domain to query
# string (use '/' if no path)
canonical_uri = '/'
# Step 3: Create the canonical query string. In this example, request
# parameters are passed in the body of the request and the query string
# is blank.
canonical_querystring = ''
# Step 4: Create the canonical headers. Header names must be trimmed
# and lowercase, and sorted in code point order from low to high.
# Note that there is a trailing \n.
canonical_headers = 'content-type:' + content_type + '\n' + 'host:' + host + '\n' + 'x-amz-date:' + amz_date + '\n' + 'x-amz-target:' + amz_target + '\n'
# Step 5: Create the list of signed headers. This lists the headers
# in the canonical_headers list, delimited with ";" and in alpha order.
# Note: The request can include any headers; canonical_headers and
# signed_headers include those that you want to be included in the
# hash of the request. "Host" and "x-amz-date" are always required.
# For DynamoDB, content-type and x-amz-target are also required.
signed_headers = 'content-type;host;x-amz-date;x-amz-target'
# Step 6: Create payload hash. In this example, the payload (body of
# the request) contains the request parameters.
payload_hash = hashlib.sha256(request_parameters.encode('utf-8')).hexdigest()
# Step 7: Combine elements to create canonical request
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash
# ************* TASK 2: CREATE THE STRING TO SIGN*************
# Match the algorithm to the hashing algorithm you use, either SHA-1 or
# SHA-256 (recommended)
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = date_stamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amz_date + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()
# ************* TASK 3: CALCULATE THE SIGNATURE *************
# Create the signing key using the function defined above.
signing_key = getSignatureKey(secret_key, date_stamp, region, service)
# Sign the string_to_sign using the signing_key
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()
# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
# Put the signature information in a header named Authorization.
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature
# For DynamoDB, the request can include any headers, but MUST include "host", "x-amz-date",
# "x-amz-target", "content-type", and "Authorization". Except for the authorization
# header, the headers must be included in the canonical_headers and signed_headers values, as
# noted earlier. Order here is not significant.
# Python note: The 'host' header is added automatically by the Python 'requests' library.
headers = {'Content-Type': content_type,
           'X-Amz-Date': amz_date,
           'X-Amz-Target': amz_target,
           'Authorization': authorization_header}
# ************* SEND THE REQUEST *************
print('\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++')
print('Request URL = ' + endpoint)
r = requests.post(endpoint, data=request_parameters, headers=headers)
print('\nRESPONSE++++++++++++++++++++++++++++++++++++')
print('Response code: %d\n' % r.status_code)
print(r.text)
I think it could be easily adapted to your needs.
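For your specific case, if the method is configured with IAM (AWS_IAM) authorization, a shorter path is to let botocore compute the SigV4 signature for you. This is a minimal sketch, not a verified drop-in: it assumes the boto3/botocore packages, credentials available in the environment, and reuses the URL and parameters from your question:

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

api_url = "https://0pg2858koj.execute-api.us-east-1.amazonaws.com/tds"
PARAMS = {"name": "test", "favorite_movie": "asdsf"}

# API Gateway IAM authorization expects a SigV4 signature for the
# "execute-api" service in the API's region
session = boto3.Session()  # picks up credentials from env/config
request = AWSRequest(method="PUT", url=api_url, params=PARAMS)
SigV4Auth(session.get_credentials(), "execute-api", "us-east-1").add_auth(request)

prepared = request.prepare()  # URL now includes the signed query string
r = requests.put(prepared.url, headers=dict(prepared.headers))
print(r.status_code, r.text)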
In the console, everything works fine because, when you invoke your REST endpoints in API Gateway, you are already authenticated and authorized to access them.
Related
I am trying to deploy an Azure Function by using the REST API and a zip archive of the solution.
It works properly in Postman.
I found advice on how to upload mp3 files and developed a solution for my task.
But when I try to create the payload for the request in AL code for Business Central (the file has been uploaded to an InStream):
CR := 13;
LF := 10;
NewLine += '' + CR + LF;
httpHeader.Clear();
TempBlob.CreateOutStream(PayloadOutStream);
PayloadOutStream.WriteText('--boundary' + NewLine);
PayloadOutStream.WriteText(StrSubstNo('Content-Disposition: form-data; name="file"; filename="%1"', filename) + NewLine);
PayloadOutStream.WriteText('Content-Type: application/zip' + NewLine);
PayloadOutStream.WriteText(NewLine);
CopyStream(PayloadOutStream, InStr);
PayloadOutStream.WriteText(NewLine);
PayloadOutStream.WriteText('--boundary');
PayloadOutStream.WriteText(NewLine);
TempBlob.CreateInStream(PayloadInStream);
Content.WriteFrom(PayloadInStream);
Content.GetHeaders(httpHeader);
if httpHeader.Contains('Content-Type') then httpHeader.Remove('Content-Type');
httpHeader.Add('Content-Type', 'multipart/form-data;boundary=boundary');
httpRequest := CreateHttpRequestMessage(Content, 'Post', RequestURI);
Client.Clear();
Client.DefaultRequestHeaders.Add('Authorization', StrSubstNo('Bearer %1', token));
if Client.Send(httpRequest, httpResponse) then begin
httpResponse.Content().ReadAs(responseText);
Message(responseText);
end
else
Error(RequestErrorMsg);
I received an error in the response message from the deployment process like this:
{"Message":"An error has occurred.","ExceptionMessage":"Number of entries expected in End Of Central Directory does not correspond to number of entries in Central Directory.","ExceptionType":"System.IO.InvalidDataException","StackTrace":" at System.IO.Compression.ZipArchive.ReadCentralDirectory()\r\n at System.IO.Compression.ZipArchive.get_Entries()\r\n at Kudu.Core.Infrastructure.ZipArchiveExtensions.Extract(ZipArchive archive, String directoryName, ITracer tracer, Boolean doNotPreserveFileTime) in C:\\Kudu Files\\Private\\src\\master\\Kudu.Core\\Infrastructure\\ZipArchiveExtensions.cs:line 114\r\n at Kudu.Services.Deployment.PushDeploymentController.<>c__DisplayClass21_0.<LocalZipFetch>b__1() in C:\\Kudu Files\\Private\\src\\master\\Kudu.Services\\Deployment\\PushDeploymentController.cs:line 746\r\n at System.Threading.Tasks.Task.InnerInvoke()\r\n at System.Threading.Tasks.Task.Execute()......
I believe something is wrong with how I build the payload. Could you give me advice on how to build the request body for my case?
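For comparison, this is what a well-formed multipart/form-data body looks like when an HTTP library builds it. Two details are easy to get wrong by hand: each part separator is --boundary followed by CRLF, and the final closing separator must be --boundary-- with two trailing dashes. A sketch of the same zip upload, in Python for brevity; the URL, token, and file name are placeholders, not values from the question:

import requests

request_uri = "https://<app>.scm.azurewebsites.net/api/zipdeploy"  # placeholder
token = "<bearer token>"                                           # placeholder

with open("solution.zip", "rb") as f:
    # requests generates the boundary, the CRLF line endings, and the
    # closing "--<boundary>--" terminator, and leaves the zip bytes untouched
    files = {"file": ("solution.zip", f, "application/zip")}
    r = requests.post(request_uri,
                      headers={"Authorization": "Bearer " + token},
                      files=files)
print(r.status_code, r.text)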
I have a Dockerfile based on Varnish 7.0 Alpine, with a custom VCL file to handle JWT authentication. We pass the JWT as a Bearer token in the header.
I based my setup on this example: https://feryn.eu/blog/validating-json-web-tokens-in-varnish/
set req.http.tmpPayload = regsub(req.http.x-token,"[^\.]+\.([^\.]+)\.[^\.]+$","\1");
set req.http.tmpHeader = regsub(req.http.x-token,"([^\.]+)\.[^\.]+\.[^\.]+","\1");
set req.http.tmpRequestSig = regsub(req.http.x-token,"^[^\.]+\.[^\.]+\.([^\.]+)$","\1");
set req.http.tmpCorrectSig = digest.base64url_nopad_hex(digest.hmac_sha256(std.fileread("/jwt/privateKey.pem"), req.http.tmpHeader + "." + req.http.tmpPayload));
std.log("req sign " + req.http.tmpRequestSig);
std.log("calc sign " + req.http.tmpCorrectSig);
if(req.http.tmpRequestSig != req.http.tmpCorrectSig) {
std.log("invalid signature match");
return(synth(403, "Invalid JWT signature"));
}
My problem is that tmpCorrectSig is empty. I don't know if I can load the key from a file, since my file contains newlines and other characters.
For information, this VMOD does what I want: https://code.uplex.de/uplex-varnish/libvmod-crypto, but I can't install it on my ARM M1 Pro architecture; I have spent so much time trying.
Can I achieve what I want?
I have a valid solution that leverages libvmod-crypto. The VCL supports both HS256 and RS256.
These are the commands I used to generate the certificates:
cd /etc/varnish
ssh-keygen -t rsa -b 4096 -m PEM -f jwtRS256.key
openssl rsa -in jwtRS256.key -pubout -outform PEM -out jwtRS256.key.pub
I use https://jwt.io/ to generate a token, pasting in the values from my certificates to create the signature.
The VCL code
This is the VCL code that will extract the JWT from the token cookie:
vcl 4.1;
import blob;
import digest;
import crypto;
import std;
sub vcl_init {
new v = crypto.verifier(sha256,std.fileread("/etc/varnish/jwtRS256.key.pub"));
}
sub vcl_recv {
call jwt;
}
sub jwt {
if(req.http.cookie ~ "^([^;]+;[ ]*)*token=[^\.]+\.[^\.]+\.[^\.]+([ ]*;[^;]+)*$") {
set req.http.x-token = ";" + req.http.Cookie;
set req.http.x-token = regsuball(req.http.x-token, "; +", ";");
set req.http.x-token = regsuball(req.http.x-token, ";(token)=","; \1=");
set req.http.x-token = regsuball(req.http.x-token, ";[^ ][^;]*", "");
set req.http.x-token = regsuball(req.http.x-token, "^[; ]+|[; ]+$", "");
set req.http.tmpHeader = regsub(req.http.x-token,"token=([^\.]+)\.[^\.]+\.[^\.]+","\1");
set req.http.tmpTyp = regsub(digest.base64url_decode(req.http.tmpHeader),{"^.*?"typ"\s*:\s*"(\w+)".*?$"},"\1");
set req.http.tmpAlg = regsub(digest.base64url_decode(req.http.tmpHeader),{"^.*?"alg"\s*:\s*"(\w+)".*?$"},"\1");
if(req.http.tmpTyp != "JWT") {
return(synth(400, "Token is not a JWT: " + req.http.tmpHeader));
}
if(req.http.tmpAlg != "HS256" && req.http.tmpAlg != "RS256") {
return(synth(400, "Token does not use a HS256 or RS256 algorithm"));
}
set req.http.tmpPayload = regsub(req.http.x-token,"token=[^\.]+\.([^\.]+)\.[^\.]+$","\1");
set req.http.tmpRequestSig = regsub(req.http.x-token,"^[^\.]+\.[^\.]+\.([^\.]+)$","\1");
if(req.http.tmpAlg == "HS256") {
set req.http.tmpCorrectSig = digest.base64url_nopad_hex(digest.hmac_sha256("SlowWebSitesSuck",req.http.tmpHeader + "." + req.http.tmpPayload));
if(req.http.tmpRequestSig != req.http.tmpCorrectSig) {
return(synth(403, "Invalid HS256 JWT signature"));
}
} else {
if (! v.update(req.http.tmpHeader + "." + req.http.tmpPayload)) {
return (synth(500, "vmod_crypto error"));
}
if (! v.valid(blob.decode(decoding=BASE64URLNOPAD, encoded=req.http.tmpRequestSig))) {
return(synth(403, "Invalid RS256 JWT signature"));
}
}
set req.http.tmpPayload = digest.base64url_decode(req.http.tmpPayload);
set req.http.X-Login = regsub(req.http.tmpPayload,{"^.*?"login"\s*:\s*(\w+).*?$"},"\1");
set req.http.X-Username = regsub(req.http.tmpPayload,{"^.*?"sub"\s*:\s*"(\w+)".*?$"},"\1");
unset req.http.tmpHeader;
unset req.http.tmpTyp;
unset req.http.tmpAlg;
unset req.http.tmpPayload;
unset req.http.tmpRequestSig;
unset req.http.tmpCorrectSig;
}
}
Installing libvmod-crypto
libvmod-crypto is required to use RS256, which is not supported by libvmod-digest.
Unfortunately I'm getting an error when running the ./configure script:
./configure: line 12829: syntax error: unexpected newline (expecting ")")
I'll talk to the maintainer of the VMOD and see if we can figure out some way to fix this. If this is an urgent matter, I suggest you use a non-Alpine Docker container for the time being.
Firstly, the configure error was caused by a missing -dev package; see the GitLab issue (the reference is in a comment, but I think it should be more prominent).
The main issue in the original question is that digest.hmac_sha256() cannot be used to verify RS256 signatures. A JWT RS256 signature is a SHA256 hash of the signing input (header and payload) encrypted with an RSA private key; it is verified by decrypting it with the RSA public key and comparing the hash. This is what crypto.verifier(sha256, ...) does.
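To make that concrete, this is what RS256 verification boils down to, shown as a minimal Python sketch using the cryptography package rather than the VMOD (the token is a placeholder):

import base64
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def b64url_decode(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

token = "<header>.<payload>.<signature>"  # placeholder JWT
header_b64, payload_b64, sig_b64 = token.split(".")

with open("jwtRS256.key.pub", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

# Raises InvalidSignature if the RSA-signed SHA256 digest does not match
public_key.verify(
    b64url_decode(sig_b64),
    (header_b64 + "." + payload_b64).encode(),
    padding.PKCS1v15(),
    hashes.SHA256(),
)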
In this regard, Thijs' previous answer is already correct.
Yet the code which is circulating and has been referenced here is nothing I would endorse. Among other issues, a fundamental problem is that regular expressions are used to (pretend to) parse JSON, which is simply not correct.
I have been using a better implementation for a long time, but just did not get around to publishing it. So now is the time, I guess.
I have just added VCL snippets from production code for JWT parsing and validation.
The example is used like so with the jwt directory in vcl_path:
include "jwt/jwt.vcl";
include "jwt/rsa_keys.vcl";
sub vcl_recv {
jwt.set(YOUR_JWT); # replace YOUR_JWT with an actual variable/header/function
call recv_jwt_validate;
# do things with jwt_payload.extract(".scope")
}
Here, the scope claim contains the data that we are actually interested in for further processing. If you want to use other claims, just rename .scope, or add another jwt_payload.expect(CLAIM, ...) and then use jwt_payload.extract(CLAIM).
This example uses some vmods, which we developed and maintain in particular with JWT in mind, though not exclusively:
crypto (use gitlab mirror for issues) for RS signatures (mostly RS256)
frozen (use gitlab mirror for issues) for JSON parsing
Additionally, we use
re2 (use gitlab mirror for issues) to efficiently split the JWT into the three parts (header, payload, signature)
and taskvar from objvar (gitlab) for proper variables.
One could do without these two vmods (re2 could be replaced by the re vmod or even regsub and taskvar with headers), but they make the code more efficient and cleaner.
blobdigest (gitlab) is not contained in the example, but can be used to validate HS signatures (e.g. HS256).
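For completeness, HS256 validation is just an HMAC comparison over the same signing input; a minimal Python sketch (the secret and token parts are placeholders):

import base64, hashlib, hmac

secret = b"<shared secret>"                              # placeholder
header_b64, payload_b64, sig_b64 = "<h>", "<p>", "<s>"   # placeholder JWT parts

expected = base64.urlsafe_b64encode(
    hmac.new(secret, (header_b64 + "." + payload_b64).encode(),
             hashlib.sha256).digest()
).rstrip(b"=")

# Constant-time comparison against the signature from the token
valid = hmac.compare_digest(expected, sig_b64.encode())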
I am trying to generate a signed URL for an object stored on Google Cloud Storage (GCS).
Attempt 1: try the API using the API Explorer
For this, I am trying to sign the blob/object as defined in the following:
GET
<expiration time>
/<bucket name>/<object/blob name>
I first tried Google's serviceAccounts.signBlob API as discussed in the following page:
https://cloud.google.com/iam/reference/rest/v1/projects.serviceAccounts/signBlob
Note that, as mentioned in the API documentation on the above-linked page, bytesToSign must be "a base64-encoded string", so I pass a base64 representation of the blob I want to sign to the API.
The API's response has the following structure where it contains the signedBlob key:
{
"keyId": "...",
"signedBlob": "..."
}
Then I generated a signed URL using the obtained signed blob, as follows:
encoded_signedBlob = base64.b64encode(signedBlob)
signed_url = "https://storage.googleapis.com/{}/{}?" \
"GoogleAccessId={}&" \
"Expires={}&" \
"Signature={}".format(
bucket_name, blob_name,
service_account_email,
expiration,
encoded_signedBlob)
and when I paste that signed URL in the browser to download the blob, I get the following error:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
</Message>
<StringToSign>GET <bucket name> <blob/object name></StringToSign>
</Error>
Attempt 2: try python libraries
Then I tried to implement it in Python as follows, but I am still getting the same error.
# -------------
# Part 1: obtain access token using the authorization flow discussed at:
# https://cloud.google.com/iam/docs/creating-short-lived-service-account-credentials
# -------------
client_service_account = "..."
access_token = build(
serviceName='iamcredentials',
version='v1',
http=http
).projects().serviceAccounts().generateAccessToken(
name="projects/{}/serviceAccounts/{}".format(
'-',
service_account_email),
body=body
).execute()["accessToken"]
credentials = AccessTokenCredentials(access_token, "MyAgent/1.0", None)
# -------------
# Part 2: sign the blob
# -------------
service = discovery.build('iam', 'v1', credentials=credentials)
name = 'projects/.../serviceAccounts/...'
encoded = base64.b64encode(blob)
sign_blob_request_body = {"bytesToSign": encoded}
request = service.projects().serviceAccounts().signBlob(name=name, body=sign_blob_request_body)
response = request.execute()
keyId = response["keyId"]
signedBlob = response["signature"]
# -------------
# Part 3: generate signed URL
# -------------
encoded_signedBlob = base64.b64encode(signedBlob)
signed_url = "https://storage.googleapis.com/{}/{}?" \
"GoogleAccessId={}&" \
"Expires={}&" \
"Signature={}".format(
bucket_name, blob_name,
service_account_email,
expiration,
encoded_signedBlob)
You could take a look at the implementation of this using the client libraries and give it a try. Remember to include google-cloud-storage in your requirements.txt and use a service account, as specified in the code sample for Python.
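A minimal sketch of that approach (the bucket, object, and key-file names are placeholders):

from datetime import timedelta
from google.cloud import storage

client = storage.Client.from_service_account_json("service-account.json")
blob = client.bucket("your-bucket").blob("your-object")

# The client library performs the signing itself, so no manual signBlob
# call or hand-built query string is needed
url = blob.generate_signed_url(expiration=timedelta(minutes=15), method="GET")
print(url)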
I want to fetch all customer accounts from Zuora. Apart from the Exports REST API, is there any API available to fetch all accounts as a paginated list?
This is the format I used to fetch revenue invoices; use this code and change the endpoint.
import json
import time
import requests
# Set the sleep time to 10 seconds
sleep = 10
# Zuora OAUTH token URL
token_url = "https://rest.apisandbox.zuora.com/oauth/token"
# URL for the DataQuery
query_url = "https://rest.apisandbox.zuora.com/query/jobs"
# OAUTH client_id & client_secret
client_id = 'your client id'
client_secret = 'your client secret'
# Set the grant type to client credential
token_data = {'grant_type': 'client_credentials'}
# Send the POST request for the OAUTH token
access_token_resp = requests.post(token_url, data=token_data,
auth=(client_id, client_secret))
# Print the OAUTH token response text
#print(access_token_resp.text)
# Parse the tokens as JSON data from the response
tokens = access_token_resp.json()
#print "access token: " + tokens['access_token']
# Use the access token in future API calls & Add to the headers
query_job_headers = {'Content-Type':'application/json',
'Authorization': 'Bearer ' + tokens['access_token']}
# JSON Data for our DataQuery
json_data = {
"query": "select * from revenuescheduleiteminvoiceitem",
"outputFormat": "JSON",
"compression": "NONE",
"retries": 3,
"output": {
"target": "s3"
}
}
# Parse the JSON output
data = json.dumps(json_data)
# Send the POST request for the dataquery
query_job_resp = requests.post(query_url, data=data,
headers=query_job_headers)
# Print the response text
#print(query_job_resp.text)
# Check the Job Status
# 1) Parse the Query Job Response JSON data
query_job = query_job_resp.json()
# 2) Create the Job URL with the id from the response
query_job_url = query_url+'/'+query_job["data"]["id"]
# 3) Send the GET request to check on the status of the query
query_status_resp = requests.get(query_job_url, headers = query_job_headers)
#print(query_status_resp.text)
# Parse the status from the response
query_status = query_status_resp.json()["data"]["queryStatus"]
#print ('query status:'+query_status)
# Loop until the status == completed
# Exit if there is an error
while (query_status != 'completed'):
    time.sleep(sleep)
    query_status_resp = requests.get(query_job_url, headers=query_job_headers)
    #print(query_status_resp.text)
    query_status = query_status_resp.json()["data"]["queryStatus"]
    if (query_status == 'failed'):
        print("query: " + query_status_resp.json()["data"]["query"] + ' Failed!\n')
        exit(1)
# Query Job has completed
#print ('query status:'+query_status)
# Get the File URL
file_url = query_status_resp.json()["data"]["dataFile"]
print(file_url)
If you don't want to use Data Query or any queue-based solution like that, use ZOQL instead.
Note: you need to list every field you want from the Account object; the asterisk (select *) doesn't work here:
select Id, ParentId, AccountNumber, Name from Account
You may also add custom fields into your selection. You will get up to 200 records per page.
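If it helps, this is roughly how that ZOQL call looks with requests, reusing the OAuth token from the code above. The /v1/action/query endpoint and its done/queryLocator pagination fields follow Zuora's Actions API; treat the details as an assumption to verify against your tenant:

import requests

action_url = "https://rest.apisandbox.zuora.com/v1/action/query"
more_url = "https://rest.apisandbox.zuora.com/v1/action/queryMore"
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + tokens["access_token"]}

accounts = []
resp = requests.post(action_url, headers=headers, json={
    "queryString": "select Id, ParentId, AccountNumber, Name from Account"
}).json()
accounts.extend(resp["records"])

# Page through the result set until Zuora reports it is done
while not resp["done"]:
    resp = requests.post(more_url, headers=headers,
                         json={"queryLocator": resp["queryLocator"]}).json()
    accounts.extend(resp["records"])

print(len(accounts), "accounts fetched")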
I'm trying to use Elixir to access Azure Storage Services via their REST API, but I'm having difficulty getting the authentication header to work. I am able to connect if I use the ex_azure package (a wrapper for erlazure), but not when I try to build the request myself and use HTTPoison.
Most Recent Error Messages
<?xml version=\"1.0\" encoding=\"utf-8\"?>
<Error>
<Code>AuthenticationFailed</Code>
<Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:00000000-0000-0000-0000-000000000000\nTime:2017-08-02T21:46:08.6488342Z</Message>
<AuthenticationErrorDetail>The MAC signature found in the HTTP request '<signature>' is not the same as any computed signature. Server used following string to sign: 'GET\n\n\nWed, 02 Aug 2017 21:46:08
GMT\nx-ms-date-h:Wed, 02 Aug 2017 21:46:08 GMT\nx-ms-version-h:2017-05-10\n/storage_name/container_name?comp=list'.</AuthenticationErrorDetail>
</Error>
After 1st Edit
<?xml version=\"1.0\" encoding=\"utf-8\"?>
<Error>
<Code>AuthenticationFailed</Code>
<Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:00000000-0000-0000-0000-000000000000\nTime:2017-08-03T03:03:57.1385277Z</Message>
<AuthenticationErrorDetail>The MAC signature found in the HTTP request '<signature>' is not the same as any computed signature. Server used following string to sign: 'GET\n\n\n\n\n\n\n\n\n\n\n\nx-ms-date:Thu, 03 Aug
2017 03:03:57 GMT\nx-ms-version:2017-04-17\n/storage_name/container_name\ncomp:list\nrestype:container'.</AuthenticationErrorDetail>
</Error>
Dependencies
# mix.exs
defp deps do
  [
    {:httpoison, "~> 0.12"},
    {:timex, "~> 3.1"}
  ]
end
Code
Am I formatting the Authentication Header (string_to_sign) right?
Am I using encode/decode right?
Am I adding headers correctly to HTTPoison?
Should I be using something else for REST actions instead of HTTPoison?
# account credentials
storage_name = "storage_name"
container_name = "container_name"
storage_key = "storage_key"
storage_service_version = "2017-04-17" # fixed version
request_date =
Timex.now
|> Timex.format!("{RFC1123}") # Wed, 02 Aug 2017 00:52:10 +0000
|> String.replace("+0000", "GMT") # Wed, 02 Aug 2017 00:52:10 GMT
# set canonicalized headers
x_ms_date = "x-ms-date:#{request_date}"
x_ms_version = "x-ms-version:#{storage_service_version}"
# assign values for string_to_sign
verb = "GET\n"
content_encoding = "\n"
content_language = "\n"
content_length = "\n"
content_md5 = "\n"
content_type = "\n"
date = "\n"
if_modified_since = "\n"
if_match = "\n"
if_none_match = "\n"
if_unmodified_since = "\n"
range = "\n"
canonicalized_headers = "#{x_ms_date}\n#{x_ms_version}\n"
canonicalized_resource = "/#{storage_name}/#{container_name}\ncomp:list\nrestype:container" # removed timeout. removed space
# concat string_to_sign
string_to_sign =
verb <>
content_encoding <>
content_language <>
content_length <>
content_md5 <>
content_type <>
date <>
if_modified_since <>
if_match <>
if_none_match <>
if_unmodified_since <>
range <>
canonicalized_headers <>
canonicalized_resource
# decode storage_key
{:ok, decoded_key} =
storage_key
|> Base.decode64
# sign and encode string_to_sign
signature =
:crypto.hmac(:sha256, decoded_key, string_to_sign)
|> Base.encode64
# build authorization header
authorization_header = "SharedKey #{storage_name}:#{signature}"
# build request and use HTTPoison
url = "https://storage_name.blob.core.windows.net/container_name?restype=container&comp=list"
headers = [ # "Date": request_date,
"x-ms-date": request_date, # fixed typo
"x-ms-version": storage_service_version, # fixed typo
# "Accept": "application/json",
"Authorization": authorization_header]
options = [ssl: [{:versions, [:'tlsv1.2']}], recv_timeout: 500]
HTTPoison.get(url, headers, options)
Notes
Some sources I used/tried...
Authentication for the Azure Storage Services
The MAC signature found in the HTTP request is not the same as any computed signature azure integration using php
How to access rest azure blob using cURL
Accessing Azure blob storage using bash, curl
A few issues I noticed:
You included the Date request header in your request but it is not included in your string_to_sign. Either include this header in your string_to_sign or remove it from the request headers.
You included timeout:30 in your canonicalized_resource but it is not included in your request URL. Again, either add timeout=30 to your request query string or remove timeout:30 from canonicalized_resource.
I have not used Elixir as such, so I don't know how request headers work there, but you're naming your request headers x-ms-date-h and x-ms-version-h. Shouldn't they be x-ms-date and x-ms-version respectively?
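For reference, the same SharedKey computation is easy to cross-check from Python; a minimal sketch of the string-to-sign for this List Blobs request, with the account, container, and key taken from placeholders:

import base64, hashlib, hmac, os
from datetime import datetime, timezone

account = "storage_name"      # placeholder
container = "container_name"  # placeholder
key = base64.b64decode(os.environ["AZURE_STORAGE_KEY"])  # base64 account key

now = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
version = "2017-04-17"

# VERB plus 11 empty standard headers, then the canonicalized x-ms-* headers
# and the canonicalized resource (query parameters in name:value form)
string_to_sign = ("GET\n" + "\n" * 11 +
                  "x-ms-date:" + now + "\n" +
                  "x-ms-version:" + version + "\n" +
                  "/" + account + "/" + container + "\ncomp:list\nrestype:container")

signature = base64.b64encode(
    hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
).decode()

headers = {"x-ms-date": now,
           "x-ms-version": version,
           "Authorization": "SharedKey " + account + ":" + signature}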