How to configure Ceph rgw sts key - ceph

I'm trying to use the Ceph (rook-ceph v15.2.5) STS service to create temporary credentials for accessing Ceph bucket resources from Java, following the example steps in: https://docs.ceph.com/en/latest/radosgw/STS/.
When calling assumeRole to get credentials, it fails with a 400. From the rgw log:
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 10 moving my-store.rgw.meta+roles+roles.5c5d7e0e-7492-4b53-8aa2-cd0a316f88af to cache LRU end
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 2 req 451 0.003000119s sts:assume_role verifying op params
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 2 req 451 0.003000119s sts:assume_role pre-executing
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 2 req 451 0.003000119s sts:assume_role executing
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 0 ERROR: Invalid secret key
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 2 req 451 0.003000119s sts:assume_role completing
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 2 req 451 0.003000119s sts:assume_role op status=-22
debug 2020-12-23T02:57:26.656+0000 7f8fd8dd4700 2 req 451 0.003000119s sts:assume_role http status=400
There is an "Invalid secret key" error. Does this mean the rgw sts key is invalid?
I have configured the sts key with 16 characters in /etc/ceph/ceph.conf inside the rgw pod:
[client.radosgw.gateway]
rgw sts key = "abcdefghijklmnop"
rgw s3 auth use sts = true
Does anybody know how to solve this issue? Thanks.

The section heading [client.radosgw.gateway] should not be copied blindly from the Ceph docs: there is a {cluster-name} in there that must be replaced. Putting these entries under [global] also works if you have only one cluster or use the same key for all of them.
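For example, assuming a single cluster, the same two settings can go under [global] in the rgw's /etc/ceph/ceph.conf (the key below is the 16-character placeholder from the question, not a real key; note the Ceph docs show the value unquoted):

```ini
[global]
rgw sts key = abcdefghijklmnop
rgw s3 auth use sts = true
```

After editing, the rgw daemon needs a restart to pick up the change, since ceph.conf is read at startup.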

Related

Hashicorp Vault - Permission denied in API While Success In CLI

I'm running a PoC with HCP Vault.
I created an admin token and logged in from my computer, developed a policy with read permission to a simple KV secret, and generated a token from that policy.
With the same token I manage to get the secret in the CLI, but when I try to fetch the data from the REST API I receive a 403.
Note: when I run Vault in dev mode locally, both methods work.
❯ vault token create -policy=my-spring-boot-app-policy
Key Value
--- -----
token hvs.XXX
token_accessor AAA
token_duration 1h
token_renewable true
token_policies ["default" "my-spring-boot-app-policy"]
identity_policies []
policies ["default" "my-spring-boot-app-policy"]
❯ vault login hvs.XXX
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token hvs.XXX
token_accessor AAA
token_duration 59m44s
token_renewable true
token_policies ["default" "my-spring-boot-app-policy"]
identity_policies []
policies ["default" "my-spring-boot-app-policy"]
❯ curl --header "X-Vault-Token: hvs.XXX" --request GET https://vault-cluster-public-vault-XXX.YYY.z1.hashicorp.cloud:8200/v1/secret/data/my-spring-boot-app | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 60 100 60 0 0 125 0 --:--:-- --:--:-- --:--:-- 127
{
"errors": [
"1 error occurred:\n\t* permission denied\n\n"
]
}
CLI
❯ vault kv get secret/my-spring-boot-app
========= Secret Path =========
secret/data/my-spring-boot-app
======= Metadata =======
Key Value
--- -----
created_time 2022-09-15T14:03:22.327127967Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 5
======= Data =======
Key Value
--- -----
hello.world Bye from Vault
mykey Vault Key
To get a response from HCP Vault you need to add the header X-Vault-Namespace with the value admin.
For example:
❯ curl --header "X-Vault-Token: hvs.XXX" --header "X-Vault-Namespace: admin" --request GET https://vault-cluster-public-vault-AAA.BBB.z1.hashicorp.cloud:8200/v1/secret/data/my-spring-boot-app | jq
I found the reference in https://cloud.spring.io/spring-cloud-vault/reference/html/#vault.config.namespaces
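The same namespaced call can be sketched in Python (the address and token are placeholders; plain urllib is used just to show where the header goes):

```python
import urllib.request

VAULT_ADDR = "https://vault-cluster-public-vault-XXX.YYY.z1.hashicorp.cloud:8200"  # placeholder

def kv_read_request(addr, token, path, namespace="admin"):
    """Build a KV v2 read request. HCP Vault serves everything under the
    'admin' namespace, so the X-Vault-Namespace header is required."""
    return urllib.request.Request(
        f"{addr}/v1/secret/data/{path}",
        headers={
            "X-Vault-Token": token,
            "X-Vault-Namespace": namespace,
        },
    )

# e.g. urllib.request.urlopen(kv_read_request(VAULT_ADDR, "hvs.XXX", "my-spring-boot-app"))
```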

linkerd cli returns "invalid argument" when running "top"

I'm going through the getting-started tutorial for linkerd and I've got stable-2.1.0 installed on kube v1.9.6 and v1.12.3
I've validated that all the pods are running and the mesh is working via the dashboard.
When I try to run linkerd -n linkerd top deploy/linkerd-web in step 4, I get invalid argument back from the controller.
Here's the verbose output:
DEBU[0000] Expecting API to be served over [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/]
DEBU[0000] Making gRPC-over-HTTP call to [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck] []
DEBU[0000] Response from [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck] had headers: map[Content-Type:[application/octet-stream] Date:[Wed, 12 Dec 2018 05:54:06 GMT] Content-Length:[108]]
DEBU[0000] gRPC-over-HTTP call returned status [200 OK] and content length [108]
DEBU[0003] Response from [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/TapByResource] had headers: map[Content-Type:[application/octet-stream] Date:[Wed, 12 Dec 2018 05:54:09 GMT]]
Error: invalid argument
Any advice on what I should try next?
I also created this issue on GitHub
Turns out there are some dependencies (termbox) for drawing the top table that aren't supported on Windows Subsystem for Linux. Here's the issue on GitHub: https://github.com/linkerd/linkerd2/issues/1976

Google cloud: Using gsutil to download data from AWS S3 to GCS

One of our collaborators has made some data available on AWS, and I was trying to get it into our Google Cloud bucket using gsutil (only some of the files are of use to us, so I don't want to use the GUI provided on GCS). The collaborators have provided us with the AWS bucket ID, the AWS access key ID, and the AWS secret access key.
I looked through the documentation on GCE and edited the ~/.boto file so that the access keys are incorporated. I restarted my terminal and tried to do an 'ls' but got the following error:
gsutil ls s3://cccc-ffff-03210/
AccessDeniedException: 403 AccessDenied
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied
Do I need to configure/run something else too?
thanks!
EDITS:
Thanks for the replies!
I installed the Cloud SDK and I can access and run all gsutil commands on my google cloud storage project. My problem is in trying to access (e.g. 'ls' command) the amazon S3 that is being shared with me.
I uncommented two lines in the ~/.boto file and put the access keys:
# To add HMAC aws credentials for "s3://" URIs, edit and uncomment the
# following two lines:
aws_access_key_id = my_access_key
aws_secret_access_key = my_secret_access_key
Output of 'gsutil version -l':
| => gsutil version -l
my_gc_id
gsutil version: 4.27
checksum: 5224e55e2df3a2d37eefde57 (OK)
boto version: 2.47.0
python version: 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)]
OS: Darwin 15.4.0
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): /Users/pc/.boto, /Users/pc/.config/gcloud/legacy_credentials/pc#gmail.com/.boto
gsutil path: /Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
The output with the -DD option is:
=> gsutil -DD ls s3://my_amazon_bucket_id
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): /Users/pc/.boto, /Users/pc/.config/gcloud/legacy_credentials/pc#gmail.com/.boto
gsutil path: /Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
Command being run: /Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=my_gc_id -DD ls s3://my_amazon_bucket_id
config_file_list: ['/Users/pc/.boto', '/Users/pc/.config/gcloud/legacy_credentials/pc#gmail.com/.boto']
config: [('debug', '0'), ('working_dir', '/mnt/pyami'), ('https_validate_certificates', 'True'), ('debug', '0'), ('working_dir', '/mnt/pyami'), ('content_language', 'en'), ('default_api_version', '2'), ('default_project_id', 'my_gc_id')]
DEBUG 1103 08:42:34.664643 provider.py] Using access key found in shared credential file.
DEBUG 1103 08:42:34.664919 provider.py] Using secret key found in shared credential file.
DEBUG 1103 08:42:34.665841 connection.py] path=/
DEBUG 1103 08:42:34.665967 connection.py] auth_path=/my_amazon_bucket_id/
DEBUG 1103 08:42:34.666115 connection.py] path=/?delimiter=/
DEBUG 1103 08:42:34.666200 connection.py] auth_path=/my_amazon_bucket_id/?delimiter=/
DEBUG 1103 08:42:34.666504 connection.py] Method: GET
DEBUG 1103 08:42:34.666589 connection.py] Path: /?delimiter=/
DEBUG 1103 08:42:34.666668 connection.py] Data:
DEBUG 1103 08:42:34.666724 connection.py] Headers: {}
DEBUG 1103 08:42:34.666776 connection.py] Host: my_amazon_bucket_id.s3.amazonaws.com
DEBUG 1103 08:42:34.666831 connection.py] Port: 443
DEBUG 1103 08:42:34.666882 connection.py] Params: {}
DEBUG 1103 08:42:34.666975 connection.py] establishing HTTPS connection: host=my_amazon_bucket_id.s3.amazonaws.com, kwargs={'port': 443, 'timeout': 70}
DEBUG 1103 08:42:34.667128 connection.py] Token: None
DEBUG 1103 08:42:34.667476 auth.py] StringToSign:
GET
Fri, 03 Nov 2017 12:42:34 GMT
/my_amazon_bucket_id/
DEBUG 1103 08:42:34.667600 auth.py] Signature:
AWS RN8=
DEBUG 1103 08:42:34.667705 connection.py] Final headers: {'Date': 'Fri, 03 Nov 2017 12:42:34 GMT', 'Content-Length': '0', 'Authorization': u'AWS AK6GJQ:EFVB8F7rtGN8=', 'User-Agent': 'Boto/2.47.0 Python/2.7.10 Darwin/15.4.0 gsutil/4.27 (darwin) google-cloud-sdk/164.0.0'}
DEBUG 1103 08:42:35.179369 https_connection.py] wrapping ssl socket; CA certificate file=/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/third_party/boto/boto/cacerts/cacerts.txt
DEBUG 1103 08:42:35.247599 https_connection.py] validating server certificate: hostname=my_amazon_bucket_id.s3.amazonaws.com, certificate hosts=['*.s3.amazonaws.com', 's3.amazonaws.com']
send: u'GET /?delimiter=/ HTTP/1.1\r\nHost: my_amazon_bucket_id.s3.amazonaws.com\r\nAccept-Encoding: identity\r\nDate: Fri, 03 Nov 2017 12:42:34 GMT\r\nContent-Length: 0\r\nAuthorization: AWS AN8=\r\nUser-Agent: Boto/2.47.0 Python/2.7.10 Darwin/15.4.0 gsutil/4.27 (darwin) google-cloud-sdk/164.0.0\r\n\r\n'
reply: 'HTTP/1.1 403 Forbidden\r\n'
header: x-amz-bucket-region: us-east-1
header: x-amz-request-id: 60A164AAB3971508
header: x-amz-id-2: +iPxKzrW8MiqDkWZ0E=
header: Content-Type: application/xml
header: Transfer-Encoding: chunked
header: Date: Fri, 03 Nov 2017 12:42:34 GMT
header: Server: AmazonS3
DEBUG 1103 08:42:35.326652 connection.py] Response headers: [('date', 'Fri, 03 Nov 2017 12:42:34 GMT'), ('x-amz-id-2', '+iPxKz1dPdgDxpnWZ0E='), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', '60A164AAB3971508'), ('x-amz-bucket-region', 'us-east-1'), ('content-type', 'application/xml')]
DEBUG 1103 08:42:35.327029 bucket.py] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>6097164508</RequestId><HostId>+iPxKzrWWZ0E=</HostId></Error>
DEBUG: Exception stack trace:
Traceback (most recent call last):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 577, in _RunNamedCommandAndHandleExceptions
collect_analytics=True)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/command_runner.py", line 317, in RunNamedCommand
return_code = command_inst.RunCommand()
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/commands/ls.py", line 548, in RunCommand
exp_dirs, exp_objs, exp_bytes = ls_helper.ExpandUrlAndPrint(storage_url)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/ls_helper.py", line 180, in ExpandUrlAndPrint
print_initial_newline=False)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/ls_helper.py", line 252, in _RecurseExpandUrlAndPrint
bucket_listing_fields=self.bucket_listing_fields):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 476, in IterAll
expand_top_level_buckets=expand_top_level_buckets):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 157, in __iter__
fields=bucket_listing_fields):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 413, in ListObjects
self._TranslateExceptionAndRaise(e, bucket_name=bucket_name)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 1471, in _TranslateExceptionAndRaise
raise translated_exception
AccessDeniedException: AccessDeniedException: 403 AccessDenied
AccessDeniedException: 403 AccessDenied
I'll assume that you are able to set up gcloud credentials using gcloud init and gcloud auth login or gcloud auth activate-service-account, and can list/write objects to GCS successfully.
From there, you need two things: a properly configured AWS IAM policy applied to the AWS user you're using, and a properly configured ~/.boto file.
AWS S3 IAM policy for bucket access
A policy like this must be applied, either by a role granted to your user or an inline policy attached to the user.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::some-s3-bucket/*",
"arn:aws:s3:::some-s3-bucket"
]
}
]
}
The important part is that you have ListBucket and GetObject actions, and the resource scope for these includes at least the bucket (or prefix thereof) that you wish to read from.
.boto file configuration
Interoperation between service providers is always a bit tricky. At the time of this writing, in order to support AWS Signature V4 (the only version supported universally by all AWS regions), you have to add a couple of extra properties to your ~/.boto file beyond just credentials, in an [s3] group.
[Credentials]
aws_access_key_id = [YOUR AKID]
aws_secret_access_key = [YOUR SECRET AK]
[s3]
use-sigv4=True
host=s3.us-east-2.amazonaws.com
The use-sigv4 property cues Boto, via gsutil, to use AWS Signature V4 for requests. Currently, this requires the host be specified in the configuration, unfortunately. It is pretty easy to figure the host name out, as it follows the pattern of s3.[BUCKET REGION].amazonaws.com.
If you have rsync/cp work against multiple S3 regions, you could handle it a few ways. You can set an environment variable like BOTO_CONFIG before running the command to switch between multiple files. Or, you can override the setting on each run using a top-level argument, like:
gsutil -o s3:host=s3.us-east-2.amazonaws.com ls s3://some-s3-bucket
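Since the host follows a fixed pattern, a tiny helper (illustrative names, nothing gsutil-specific) makes the endpoint and the per-run override easy to script:

```python
def s3_endpoint(region):
    # Hosts follow the pattern s3.[BUCKET REGION].amazonaws.com
    return f"s3.{region}.amazonaws.com"

def gsutil_host_override(region):
    # Top-level -o argument overriding the [s3] host setting for a single run
    return ["-o", f"s3:host={s3_endpoint(region)}"]

# e.g. subprocess.run(["gsutil", *gsutil_host_override("us-east-2"), "ls", "s3://some-s3-bucket"])
```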
Edit:
Just want to add... another cool way to do this job is rclone.
1. Generate your GCS credentials
If you download the Cloud SDK, then run gcloud init and gcloud auth login, gcloud should configure the OAuth2 credentials for the account you logged in with, allowing you to access your GCS bucket (it does this by creating a boto file that gets loaded in addition to your ~/.boto file, if it exists).
If you're using standalone gsutil, run gsutil config to generate a config file at ~/.boto.
2. Add your AWS credentials to the file ~/.boto
The [Credentials] section of your ~/.boto file should have these two lines populated and uncommented:
aws_access_key_id = IDHERE
aws_secret_access_key = KEYHERE
If you've done that:
Make sure that you didn't accidentally swap the values for key and id.
Verify you're loading the correct boto file(s): run gsutil version -l and look for the "config path(s):" line.
If you still receive a 403, it's possible that they've given you either the wrong bucket name, or a key and id corresponding to an account that doesn't have permission to list the contents of that bucket.

Unable to hide CONNECT requests in Fiddler

I'm using Fiddler v4.6.20171.26113 on Windows 8.1. I have enabled the Hide CONNECTs option under the Rules menu and even tried putting this script in the custom rules file:
if (oSession.HTTPMethodIs("CONNECT"))
{
oSession["ui-hide"] = "true";
}
However, the CONNECT requests are still shown even though their UI-HIDE: true flag is set.
SESSION STATE: Done.
Response Entity Size: 0 bytes.
== FLAGS ==================
BitFlags: [ResponseGeneratedByFiddler, IsDecryptingTunnel, ProtocolViolationInRequest, RequestBodyDropped] 0x10a100
HTTPS-CLIENT-SESSIONID: empty
HTTPS-CLIENT-SNIHOSTNAME: mtalk.google.com
LOG-DROP-REQUEST-BODY: yes
LOG-DROP-RESPONSE-BODY: yes
UI-BACKCOLOR: LightYellow
UI-HIDE: true
X-CLIENTIP: ::ffff:***.***.**.**
X-CLIENTPORT: 5033
X-EGRESSPORT: 55428
X-HOSTIP: **.***.***.***
X-HTTPPROTOCOL-VIOLATION: [ProtocolViolation] HTTP/1.1 Request was missing the required HOST header.
X-ORIGINAL-HOST:
X-REQUESTBODYFINALLENGTH: 1,384
X-RESPONSEBODYTRANSFERLENGTH: 0
== TIMING INFO ============
ClientConnected: 07:05:03.136
ClientBeginRequest: 07:05:03.339
GotRequestHeaders: 07:05:03.339
ClientDoneRequest: 07:05:03.339
Determine Gateway: 0ms
DNS Lookup: 0ms
TCP/IP Connect: 61ms
HTTPS Handshake: 215ms
ServerConnected: 07:05:03.777
FiddlerBeginRequest: 07:05:03.777
ServerGotRequest: 07:05:03.777
ServerBeginResponse: 00:00:00.000
GotResponseHeaders: 00:00:00.000
ServerDoneResponse: 00:00:00.000
ClientBeginResponse: 07:05:03.777
ClientDoneResponse: 07:05:03.777
Overall Elapsed: 0:00:00.437
The response was buffered before delivery to the client.
== WININET CACHE INFO ============
This URL is not present in the WinINET cache. [Code: 2]
* Note: Data above shows WinINET's current cache state, not the state at the time of the request.
* Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only.
So what should I do now?

What is wrong with my ETrade OAuth get token request?

The server is responding with a less than helpful message.
Unable to get a request token: Request for https://etwssandbox.etrade.com/oauth/sandbox/request_token?oauth_callback=oob&oauth_consumer_key=aaf0812a4bcc6e4c21783af47cf88237&oauth_nonce=3495463522&oauth_signature=ykqRaZc18GwIoqHtYqtxzsMq4xs%3D&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1371092839&oauth_version=1.0 failed,HTTP/1.1 400 Bad Request
Connection: close
Content-Length: 62
Client-Date: Thu, 13 Jun 2013 03:07:19 GMT
Client-Peer: 12.153.224.230:443
Client-Response-Num: 1
Client-SSL-Cert-Issuer: /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
Client-SSL-Cert-Subject: /C=US/ST=New York/L=New York/O=ETRADE FINANCIAL CORPORATION/OU=Global Information Security/CN=etwssandbox.etrade.com
Client-SSL-Cipher: RC4-MD5
<html><body><b>Http/1.1 400 Bad Request</b></body> </html>
OK, I will try with headers. All required parameters are present.
$ wget -d -O- --header='Authorization: OAuth realm="",oauth_callback="oob",oauth_consumer_key="aaf0812a4bcc6e4c21783af47cf88237",oauth_nonce="3495463522",oauth_signature="ykqRaZc18GwIoqHtYqtxzsMq4xs%3D",oauth_signature_method="HMAC-SHA1",oauth_timestamp="1371092839",oauth_version="1.0"' 'https://etwssandbox.etrade.com/oauth/sandbox/request_token'
Setting --output-document (outputdocument) to -
Setting --header (header) to Authorization: OAuth realm="",oauth_callback="oob",oauth_consumer_key="aaf0812a4bcc6e4c21783af47cf88237",oauth_nonce="3495463522",oauth_signature="ykqRaZc18GwIoqHtYqtxzsMq4xs%3D",oauth_signature_method="HMAC-SHA1",oauth_timestamp="1371092839"
DEBUG output created by Wget 1.13.4 on cygwin.
URI encoding = `UTF-8'
--2013-06-12 23:08:33-- https://etwssandbox.etrade.com/oauth/sandbox/request_token
Resolving etwssandbox.etrade.com (etwssandbox.etrade.com)... 12.153.224.230, 198.93.34.230
Caching etwssandbox.etrade.com => 12.153.224.230 198.93.34.230
Connecting to etwssandbox.etrade.com (etwssandbox.etrade.com)|12.153.224.230|:443... connected.
Created socket 3.
Releasing 0x80733128 (new refcount 1).
---request begin---
GET /oauth/sandbox/request_token HTTP/1.1
User-Agent: Wget/1.13.4 (cygwin)
Accept: */*
Host: etwssandbox.etrade.com
Connection: Keep-Alive
Authorization: OAuth realm="",oauth_callback="oob",oauth_consumer_key="aaf0812a4bcc6e4c21783af47cf88237",oauth_nonce="3495463522",oauth_signature="ykqRaZc18GwIoqHtYqtxzsMq4xs%3D",oauth_signature_method="HMAC-SHA1",oauth_timestamp="1371092839"
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 400 Bad Request
Content-Length:62
Connection: close
---response end---
400 Bad Request
2013-06-12 23:08:34 ERROR 400: Bad Request.
That still did not work. Let me verify the signature. Notice my key and secret are correct.
First URL encode all the parameters to form a base string for signing.
$ perl -MURI::Escape -e "print uri_escape('oauth_callback=oob&oauth_consumer_key=aaf0812a4bcc6e4c21783af47cf88237&oauth_nonce=3495463522&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1371092839&oauth_version=1.0')"
oauth_callback%3Doob%26oauth_consumer_key%3Daaf0812a4bcc6e4c21783af47cf88237%26oauth_nonce%3D3495463522%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1371092839%26oauth_version%3D1.0
Now hash with HMAC-SHA1, encode with Base64 (no newline at end), and URL encode the resulting signature.
There is an ampersand at the end of the consumer secret because we don't have a token secret yet (it is empty).
$ perl -MDigest::HMAC_SHA1=hmac_sha1 -MMIME::Base64 -MURI::Escape -e "print uri_escape(encode_base64(hmac_sha1('GET&https%3A%2F%2Fetwssandbox.etrade.com%2Foauth%2Fsandbox%2Frequest_token&oauth_callback%3Doob%26oauth_consumer_key%3Daaf0812a4bcc6e4c21783af47cf88237%26oauth_nonce%3D3495463522%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1371092839%26oauth_version%3D1.0', 'xxxxxxxxxxxxxxxxxxxx&'), ''))"
ykqRaZc18GwIoqHtYqtxzsMq4xs%3D
This signature matches the above.
The specs are here: http://oauth.net/core/1.0a/#signing_process
ETrade specs are here: https://us.etrade.com/ctnt/dev-portal/getDetail?contentUri=V0_Documentation-AuthorizationAPI-GetRequestToken
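The same base-string construction and HMAC-SHA1 signing can be sketched in Python; the parameters are the ones shown above, while the consumer secret is the redacted placeholder:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_base_string(method, url, params):
    # Parameters are sorted by name, percent-encoded, and joined with '&';
    # then the method, URL, and parameter string are each encoded and joined.
    param_str = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
    )
    return "&".join([method.upper(), quote(url, safe=""), quote(param_str, safe="")])

def oauth_signature(base_string, consumer_secret, token_secret=""):
    # The signing key is consumer_secret + '&' + token_secret; the token
    # secret is empty before a request token exists, leaving a trailing '&'.
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return quote(base64.b64encode(digest).decode(), safe="")

params = {
    "oauth_callback": "oob",
    "oauth_consumer_key": "aaf0812a4bcc6e4c21783af47cf88237",
    "oauth_nonce": "3495463522",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1371092839",
    "oauth_version": "1.0",
}
base = oauth_base_string(
    "GET", "https://etwssandbox.etrade.com/oauth/sandbox/request_token", params
)
print(oauth_signature(base, "xxxxxxxxxxxxxxxxxxxx"))  # placeholder secret
```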
ETrade's documentation is broken. They specify that the Sandbox environment uses different hosts and URLs
https://us.etrade.com/ctnt/dev-portal/getContent?contentUri=V0_Documentation-DeveloperGuides-Sandbox
but for OAuth they do not. That part is never mentioned, and I had to look in the source code of one of their SDKs to find out.
| Environment | URL |
| --- | --- |
| Production | https://etws.etrade.com/{module}/rest/{API} |
| Sandbox | https://etwssandbox.etrade.com/{module}/sandbox/rest/{API} |