Google cloud: Using gsutil to download data from AWS S3 to GCS - google-cloud-storage

One of our collaborators has made some data available on AWS, and I was trying to get it into our Google Cloud bucket using gsutil (only some of the files are of use to us, so I don't want to use the GUI provided on GCS). The collaborators have provided us with the AWS bucket ID, the AWS access key ID, and the AWS secret access key.
I looked through the documentation on GCE and edited the ~/.boto file so that the access keys are incorporated. I restarted my terminal and tried to do an 'ls', but got the following error:
gsutil ls s3://cccc-ffff-03210/
AccessDeniedException: 403 AccessDenied
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied
Do I need to configure/run something else too?
thanks!
EDITS:
Thanks for the replies!
I installed the Cloud SDK and I can access and run all gsutil commands on my Google Cloud Storage project. My problem is in trying to access (e.g. with the 'ls' command) the Amazon S3 bucket that is being shared with me.
I uncommented two lines in the ~/.boto file and put the access keys:
# To add HMAC aws credentials for "s3://" URIs, edit and uncomment the
# following two lines:
aws_access_key_id = my_access_key
aws_secret_access_key = my_secret_access_key
Output of 'gsutil version -l':
| => gsutil version -l
my_gc_id
gsutil version: 4.27
checksum: 5224e55e2df3a2d37eefde57 (OK)
boto version: 2.47.0
python version: 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)]
OS: Darwin 15.4.0
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): /Users/pc/.boto, /Users/pc/.config/gcloud/legacy_credentials/pc@gmail.com/.boto
gsutil path: /Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
The output with the -DD option is:
=> gsutil -DD ls s3://my_amazon_bucket_id
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): /Users/pc/.boto, /Users/pc/.config/gcloud/legacy_credentials/pc@gmail.com/.boto
gsutil path: /Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
Command being run: /Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=my_gc_id -DD ls s3://my_amazon_bucket_id
config_file_list: ['/Users/pc/.boto', '/Users/pc/.config/gcloud/legacy_credentials/pc@gmail.com/.boto']
config: [('debug', '0'), ('working_dir', '/mnt/pyami'), ('https_validate_certificates', 'True'), ('debug', '0'), ('working_dir', '/mnt/pyami'), ('content_language', 'en'), ('default_api_version', '2'), ('default_project_id', 'my_gc_id')]
DEBUG 1103 08:42:34.664643 provider.py] Using access key found in shared credential file.
DEBUG 1103 08:42:34.664919 provider.py] Using secret key found in shared credential file.
DEBUG 1103 08:42:34.665841 connection.py] path=/
DEBUG 1103 08:42:34.665967 connection.py] auth_path=/my_amazon_bucket_id/
DEBUG 1103 08:42:34.666115 connection.py] path=/?delimiter=/
DEBUG 1103 08:42:34.666200 connection.py] auth_path=/my_amazon_bucket_id/?delimiter=/
DEBUG 1103 08:42:34.666504 connection.py] Method: GET
DEBUG 1103 08:42:34.666589 connection.py] Path: /?delimiter=/
DEBUG 1103 08:42:34.666668 connection.py] Data:
DEBUG 1103 08:42:34.666724 connection.py] Headers: {}
DEBUG 1103 08:42:34.666776 connection.py] Host: my_amazon_bucket_id.s3.amazonaws.com
DEBUG 1103 08:42:34.666831 connection.py] Port: 443
DEBUG 1103 08:42:34.666882 connection.py] Params: {}
DEBUG 1103 08:42:34.666975 connection.py] establishing HTTPS connection: host=my_amazon_bucket_id.s3.amazonaws.com, kwargs={'port': 443, 'timeout': 70}
DEBUG 1103 08:42:34.667128 connection.py] Token: None
DEBUG 1103 08:42:34.667476 auth.py] StringToSign:
GET
Fri, 03 Nov 2017 12:42:34 GMT
/my_amazon_bucket_id/
DEBUG 1103 08:42:34.667600 auth.py] Signature:
AWS RN8=
DEBUG 1103 08:42:34.667705 connection.py] Final headers: {'Date': 'Fri, 03 Nov 2017 12:42:34 GMT', 'Content-Length': '0', 'Authorization': u'AWS AK6GJQ:EFVB8F7rtGN8=', 'User-Agent': 'Boto/2.47.0 Python/2.7.10 Darwin/15.4.0 gsutil/4.27 (darwin) google-cloud-sdk/164.0.0'}
DEBUG 1103 08:42:35.179369 https_connection.py] wrapping ssl socket; CA certificate file=/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/third_party/boto/boto/cacerts/cacerts.txt
DEBUG 1103 08:42:35.247599 https_connection.py] validating server certificate: hostname=my_amazon_bucket_id.s3.amazonaws.com, certificate hosts=['*.s3.amazonaws.com', 's3.amazonaws.com']
send: u'GET /?delimiter=/ HTTP/1.1\r\nHost: my_amazon_bucket_id.s3.amazonaws.com\r\nAccept-Encoding: identity\r\nDate: Fri, 03 Nov 2017 12:42:34 GMT\r\nContent-Length: 0\r\nAuthorization: AWS AN8=\r\nUser-Agent: Boto/2.47.0 Python/2.7.10 Darwin/15.4.0 gsutil/4.27 (darwin) google-cloud-sdk/164.0.0\r\n\r\n'
reply: 'HTTP/1.1 403 Forbidden\r\n'
header: x-amz-bucket-region: us-east-1
header: x-amz-request-id: 60A164AAB3971508
header: x-amz-id-2: +iPxKzrW8MiqDkWZ0E=
header: Content-Type: application/xml
header: Transfer-Encoding: chunked
header: Date: Fri, 03 Nov 2017 12:42:34 GMT
header: Server: AmazonS3
DEBUG 1103 08:42:35.326652 connection.py] Response headers: [('date', 'Fri, 03 Nov 2017 12:42:34 GMT'), ('x-amz-id-2', '+iPxKz1dPdgDxpnWZ0E='), ('server', 'AmazonS3'), ('transfer-encoding', 'chunked'), ('x-amz-request-id', '60A164AAB3971508'), ('x-amz-bucket-region', 'us-east-1'), ('content-type', 'application/xml')]
DEBUG 1103 08:42:35.327029 bucket.py] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>6097164508</RequestId><HostId>+iPxKzrWWZ0E=</HostId></Error>
DEBUG: Exception stack trace:
Traceback (most recent call last):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 577, in _RunNamedCommandAndHandleExceptions
collect_analytics=True)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/command_runner.py", line 317, in RunNamedCommand
return_code = command_inst.RunCommand()
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/commands/ls.py", line 548, in RunCommand
exp_dirs, exp_objs, exp_bytes = ls_helper.ExpandUrlAndPrint(storage_url)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/ls_helper.py", line 180, in ExpandUrlAndPrint
print_initial_newline=False)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/ls_helper.py", line 252, in _RecurseExpandUrlAndPrint
bucket_listing_fields=self.bucket_listing_fields):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 476, in IterAll
expand_top_level_buckets=expand_top_level_buckets):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 157, in __iter__
fields=bucket_listing_fields):
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 413, in ListObjects
self._TranslateExceptionAndRaise(e, bucket_name=bucket_name)
File "/Users/pc/Documents/programs/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 1471, in _TranslateExceptionAndRaise
raise translated_exception
AccessDeniedException: AccessDeniedException: 403 AccessDenied
AccessDeniedException: 403 AccessDenied

I'll assume that you are able to set up gcloud credentials using gcloud init and gcloud auth login or gcloud auth activate-service-account, and can list/write objects to GCS successfully.
From there, you need two things: a properly configured IAM policy applied to the AWS user you're using, and a properly configured ~/.boto file.
AWS S3 IAM policy for bucket access
A policy like this must be applied, either by a role granted to your user or an inline policy attached to the user.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::some-s3-bucket/*",
        "arn:aws:s3:::some-s3-bucket"
      ]
    }
  ]
}
The important part is that you have ListBucket and GetObject actions, and the resource scope for these includes at least the bucket (or prefix thereof) that you wish to read from.
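If you also manage the AWS side and have the AWS CLI installed, here is a rough sketch of attaching the policy above as an inline policy (the user name, policy name, and policy.json file name are placeholders for illustration):

aws iam put-user-policy \
  --user-name collaborator-reader \
  --policy-name gsutil-s3-read \
  --policy-document file://policy.json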
.boto file configuration
Interoperation between service providers is always a bit tricky. At the time of this writing, in order to support AWS Signature V4 (the only version supported universally by all AWS regions), you have to add a couple of extra properties to your ~/.boto file beyond just credentials, in an [s3] group.
[Credentials]
aws_access_key_id = [YOUR AKID]
aws_secret_access_key = [YOUR SECRET AK]
[s3]
use-sigv4=True
host=s3.us-east-2.amazonaws.com
The use-sigv4 property cues Boto, via gsutil, to use AWS Signature V4 for requests. Currently, this unfortunately requires that the host also be specified in the configuration. It is pretty easy to figure out the host name, as it follows the pattern s3.[BUCKET REGION].amazonaws.com.
If you need rsync/cp to work across multiple S3 regions, you can handle it a few ways. You can set the BOTO_CONFIG environment variable before running the command to switch between multiple config files (sketched below), or you can override the setting on each run using a top-level argument, like:
gsutil -o s3:host=s3.us-east-2.amazonaws.com ls s3://some-s3-bucket
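And here is a rough sketch of the BOTO_CONFIG approach, assuming you keep one .boto file per region (the file and bucket names are made up for illustration):

# each file carries its own [s3] host= setting
BOTO_CONFIG=~/.boto-us-east-2 gsutil ls s3://some-s3-bucket
BOTO_CONFIG=~/.boto-eu-west-1 gsutil rsync -r s3://another-bucket gs://my-gcs-bucket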
Edit:
Just want to add... another cool way to do this job is rclone.
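For example, a minimal sketch assuming you have already defined two remotes named s3 and gcs via rclone config (bucket names and paths are placeholders):

rclone lsd s3:some-s3-bucket
rclone copy s3:some-s3-bucket/some/prefix gcs:my-gcs-bucket/some/prefix --progress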

1. Generate your GCS credentials
If you download the Cloud SDK, then run gcloud init and gcloud auth login, gcloud should configure the OAuth2 credentials for the account you logged in with, allowing you to access your GCS bucket (it does this by creating a boto file that gets loaded in addition to your ~/.boto file, if it exists).
If you're using standalone gsutil, run gsutil config to generate a config file at ~/.boto.
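As a quick sketch of both routes (the commands are interactive and will prompt you to pick an account and project):

# Cloud SDK route
gcloud init
gcloud auth login
gsutil ls          # should list the buckets in your default project

# standalone gsutil route
gsutil config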
2. Add your AWS credentials to the file ~/.boto
The [Credentials] section of your ~/.boto file should have these two lines populated and uncommented:
aws_access_key_id = IDHERE
aws_secret_access_key = KEYHERE
If you've done that:
Make sure that you didn't accidentally swap the values for the key and ID.
Verify you're loading the correct boto file(s); you can do this by running gsutil version -l and looking for the "config path(s):" line (see the sketch below).
If you still receive a 403, it's possible that they've given you either the wrong bucket name, or a key and ID corresponding to an account that doesn't have permission to list the contents of that bucket.
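A quick sketch of both checks (use whatever bucket name your collaborators gave you):

gsutil version -l | grep "config path"
gsutil ls s3://their-bucket-name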

Related

Renew wildcard certificate fails with »None of the preferred challenges are supported by the selected plugin.«

I have a Let's Encrypt wildcard certificate which was obtained with the DNS challenge.
In the meantime I migrated the webapp and the certificate to a new server, where renewing that certificate fails.
$ certbot renew --preferred-challenges dns --standalone
Processing /etc/letsencrypt/renewal/dedacted.de-0001.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert is due for renewal, auto-renewing...
Plugins selected: Authenticator standalone, Installer nginx
Renewing an existing certificate
Performing the following challenges:
Attempting to renew cert (dedacted.de-0001) from /etc/letsencrypt/renewal/dedacted.de-0001.conf produced an unexpected error: None of the preferred challenges are supported by the selected plugin. Skipping.
This is the config file:
# renew_before_expiry = 30 days
version = 0.40.0
archive_dir = /etc/letsencrypt/archive/dedacted.de-0001
cert = /etc/letsencrypt/live/dedacted.de-0001/cert.pem
privkey = /etc/letsencrypt/live/dedacted.de-0001/privkey.pem
chain = /etc/letsencrypt/live/dedacted.de-0001/chain.pem
fullchain = /etc/letsencrypt/live/dedacted.de-0001/fullchain.pem
# Options used in the renewal process
[renewalparams]
account = dedacted
authenticator = nginx
server = https://acme-v02.api.letsencrypt.org/directory
installer = nginx
I tried different options for the renew command, but had no success.
This is the last debug log:
2021-12-14 11:10:00,631:DEBUG:acme.client:Received response:
HTTP 200
Server: nginx
Date: Tue, 14 Dec 2021 10:10:00 GMT
Content-Type: application/json
Content-Length: 384
Connection: keep-alive
Boulder-Requester: 110278569
Cache-Control: public, max-age=0, no-cache
Link: <https://acme-v02.api.letsencrypt.org/directory>;rel="index"
Replay-Nonce: dedacted
X-Frame-Options: DENY
Strict-Transport-Security: max-age=604800
{
"identifier": {
"type": "dns",
"value": "dedacted.de"
},
"status": "pending",
"expires": "2021-12-20T14:57:56Z",
"challenges": [
{
"type": "dns-01",
"status": "pending",
"url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/dedacted/dedacted",
"token": "dedacted"
}
],
"wildcard": true
}
2021-12-14 11:10:00,632:DEBUG:acme.client:Storing nonce: dedacted
2021-12-14 11:10:00,632:INFO:certbot.auth_handler:Performing the following challenges:
2021-12-14 11:10:00,632:WARNING:certbot.renewal:Attempting to renew cert (dedacted.de-0001) from /etc/letsencrypt/renewal/dedacted.de-0001.conf produced an unexpected error: None of the preferred challenges are supported by the selected plugin. Skipping.
2021-12-14 11:10:00,635:DEBUG:certbot.renewal:Traceback was:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/certbot/renewal.py", line 462, in handle_renewal_request
main.renew_cert(lineage_config, plugins, renewal_candidate)
File "/usr/lib/python3/dist-packages/certbot/main.py", line 1208, in renew_cert
renewed_lineage = _get_and_save_cert(le_client, config, lineage=lineage)
File "/usr/lib/python3/dist-packages/certbot/main.py", line 116, in _get_and_save_cert
renewal.renew_cert(config, domains, le_client, lineage)
File "/usr/lib/python3/dist-packages/certbot/renewal.py", line 320, in renew_cert
new_cert, new_chain, new_key, _ = le_client.obtain_certificate(domains, new_key)
File "/usr/lib/python3/dist-packages/certbot/client.py", line 348, in obtain_certificate
orderr = self._get_order_and_authorizations(csr.data, self.config.allow_subset_of_names)
File "/usr/lib/python3/dist-packages/certbot/client.py", line 396, in _get_order_and_authorizations
authzr = self.auth_handler.handle_authorizations(orderr, best_effort)
File "/usr/lib/python3/dist-packages/certbot/auth_handler.py", line 62, in handle_authorizations
achalls = self._choose_challenges(authzrs)
File "/usr/lib/python3/dist-packages/certbot/auth_handler.py", line 206, in _choose_challenges
self._get_chall_pref(authzr.body.identifier.value),
File "/usr/lib/python3/dist-packages/certbot/auth_handler.py", line 229, in _get_chall_pref
raise errors.AuthorizationError(
certbot.errors.AuthorizationError: None of the preferred challenges are supported by the selected plugin
Did I miss installing something needed to support wildcard certificates, or did something change so that it is no longer supported the way it was?
Well, here is what I can tell from what you are doing. You already had a working certificate that was generated using the certbot nginx plugin. Now you've relocated your webapp and config to a new server and want to renew the certificate. My advice is to use the nginx plugin again by adding --nginx to your renewal command; I suspect that because it is not included, certbot raises the error that none of the preferred challenges are supported by the selected plugin.
If that does not work, I suggest removing the old configuration you copied onto the new server and running certbot to create a new certificate with its own config for you.
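As a sketch of both suggestions, using the certificate name from the config above (note that a *.dedacted.de wildcard name would still need a DNS-based challenge):

# retry the renewal with the nginx plugin selected explicitly
certbot renew --cert-name dedacted.de-0001 --nginx

# or drop the copied lineage and issue a fresh certificate
certbot delete --cert-name dedacted.de-0001
certbot --nginx -d dedacted.de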

minio+KMS x509: certificate signed by unknown authority

I am trying to use minio as a local S3 server. I am following this article
I downloaded key and cert files.
I added the env parameters:
set MINIO_KMS_KES_ENDPOINT=https://play.min.io:7373
set MINIO_KMS_KES_KEY_FILE=D:\KMS\root.key
set MINIO_KMS_KES_CERT_FILE=D:\KMS\root.cert
set MINIO_KMS_KES_KEY_NAME=my-minio-key
I started minio server: D:\>minio.exe server D:\Photos
It logs the following after startup:
Endpoint: http://169.254.182.253:9000 http://169.254.47.198:9000 http://172.17.39.193:9000 http://192.168.0.191:9000 http://169.254.103.105:9000 http://169.254.209.102:9000 http://169.254.136.71:9000 http://127.0.0.1:9000
AccessKey: minioadmin
SecretKey: minioadmin
Browser Access:
http://169.254.182.253:9000 http://169.254.47.198:9000 http://172.17.39.193:9000 http://192.168.0.191:9000 http://169.254.103.105:9000 http://169.254.209.102:9000 http://169.254.136.71:9000 http://127.0.0.1:9000
Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
$ mc.exe alias set myminio http://169.254.182.253:9000 minioadmin minioadmin
Object API (Amazon S3 compatible):
Go: https://docs.min.io/docs/golang-client-quickstart-guide
Java: https://docs.min.io/docs/java-client-quickstart-guide
Python: https://docs.min.io/docs/python-client-quickstart-guide
JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
.NET: https://docs.min.io/docs/dotnet-client-quickstart-guide
Detected default credentials 'minioadmin:minioadmin', please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'
I opened the UI in a browser: http://localhost:9000/minio/mybacket/
I tried to upload a jpg file and got an exception:
<?xml version="1.0" encoding="UTF-8"?> <Error><Code>InternalError</Code><Message>We encountered an internal error, please try again.</Message><Key>Completed.jpg</Key><BucketName>mybacket</BucketName><Resource>/minio/upload/mybacket/Completed.jpg</Resource><RequestId>1634A6E5663C9D70</RequestId><HostId>4a46a947-6473-4d53-bbb3-a4f908d444ce</HostId></Error>
And I got this exception in minio console:
Error: Post "https://play.min.io:7373/v1/key/generate/my-minio-key": x509: certificate signed by unknown authority
3: cmd\api-errors.go:1961:cmd.toAPIErrorCode()
2: cmd\api-errors.go:1986:cmd.toAPIError()
1: cmd\web-handlers.go:1116:cmd.(*webAPIHandlers).Upload()
Most probably your OS trust store (containing the Root CA certificates) does not trust Let's Encrypt (the Let's Encrypt Authority X3 CA certificate).
The server https://play.min.io:7373 serves a TLS certificate issued by Let's Encrypt.
See:
openssl s_client -showcerts -servername play.min.io -connect play.min.io:7373
Also, check the root CA store of your Windows machine.
See: https://security.stackexchange.com/questions/48437/how-can-you-check-the-installed-certificate-authority-in-windows-7-8
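If the Let's Encrypt root turns out to be missing from the Windows trust store, here is a rough sketch of adding it manually, assuming you have downloaded the ISRG Root X1 certificate from letsencrypt.org as isrgrootx1.pem (run from an elevated prompt):

certutil -f -addstore Root isrgrootx1.pem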

403 error when making get request on bucket using IBM Cloud Object Storage CLI

I created a Cloud Object Storage service and created a Standard bucket. My goal is to upload files using a service ID in the CLI.
As a first step, I am testing by following this link to run a few commands on the bucket I created: https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-cli-ic-cos-cli
Here are some outputs:
ibmcloud cos config list
Key Value
Last Updated Tuesday, December 17 2019 at 23:31:19
Default Region us-east
Download Location /Users/myname/Downloads
CRN crn:v1:bluemix:public:cloud-object-storage:global:a/784492b2864521d53b6c4590e0f2bf34:f743cac0-6166-404f-abea-2e1d74c6a7ac:: f743cac0-6166-404f-abea-2e1d74c6a7ac
AccessKeyID
SecretAccessKey
Authentication Method IAM
URL Style VHost
ibmcloud cos list-buckets --ibm-service-instance-id crn:v1:bluemix:public:cloud-object-storage:global:a/784492b2864521d53b6c4590e0f2bf34:f743cac0-6166-404f-abea-2e1d74c6a7ac::
OK
2 buckets found in your account:
Name Date Created
hog-bucket2 Dec 18, 2019 at 05:43:28
hog-test-bucket-name Dec 17, 2019 at 16:59:41
ibmcloud cos head-bucket --bucket hog-bucket2
FAILED
Forbidden: Forbidden
status code: 403, request id: 2fba921d-a11c-4f45-b172-3937daeab633, host id:
I tried it on the other bucket and saw the same 403.
I went into the access policies for the bucket and created a policy to set myself as Manager, but it didn't help.
Creating a bucket from the CLI worked fine:
ibmcloud cos create-bucket --bucket hog-cli-bucket-name --ibm-service-instance-id crn:v1:bluemix:public:cloud-object-storage:global:a/784492b2864521d53b6c4590e0f2bf34:f743cac0-6166-404f-abea-2e1d74c6a7ac::
OK
Details about bucket hog-cli-bucket-name:
Region: us-east
Class: Standard
Then I tried to get the list of buckets:
ibmcloud cos list-buckets --ibm-service-instance-id crn:v1:bluemix:public:cloud-object-storage:global:a/784492b2864521d53b6c4590e0f2bf34:f743cac0-6166-404f-abea-2e1d74c6a7ac::
OK
3 buckets found in your account:
Name Date Created
hog-bucket2 Dec 18, 2019 at 05:43:28
hog-cli-bucket-name Dec 18, 2019 at 06:14:03
hog-test-bucket-name Dec 17, 2019 at 16:59:41
which looked good, but trying to retrieve the class for the hog-cli-bucket-name bucket didn't work. It asks me to log in.
ibmcloud cos get-bucket-class --bucket hog-cli-bucket-name
FAILED
Access to your IBM Cloud account was denied. Log in again by typing ibmcloud login --sso.
And after I log in, when I test get-bucket-class it keeps telling me to log in again.
I think your CRN looks wrong. I only used the last part of my CRN:
f743cac0-6166-404f-abea-2e1d74c6a7ac
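In other words, a sketch of the earlier commands re-run with only the GUID portion of the CRN (same instance and bucket names as above):

ibmcloud cos list-buckets --ibm-service-instance-id f743cac0-6166-404f-abea-2e1d74c6a7ac
ibmcloud cos head-bucket --bucket hog-bucket2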

Cannot register a build agent in VSTS

I am trying to register a build agent with VSTS by following the instructions here. I created a PAT token in my Security settings and used it to register the agent. I used the following PS command to register the agent:
PS>.\config.cmd --url 'https://[account].visualstudio.com/' --auth 'PAT' --token '[Token]' --pool '[Pool Name]' --agent '[Agent Name]'
I get the following information on the output:
>> Connect:
Connecting to server ...
>> Register Agent:
Scanning for tool capabilities.
Connecting to the server.
Successfully added the agent
Testing agent connection.
But it then gets stuck on this step and never completes. When I look at the agent pool in VSTS I see the new agent, but its state is 'Offline'.
I looked in the _diag folder and checked the latest log. There is an error there:
[2017-07-11 12:32:10Z WARN VisualStudioServices] Authentication failed with status code 401.
Date: Tue, 11 Jul 2017 12:32:10 GMT
P3P: CP="CAO DSP COR ADMa DEV CONo TELo CUR PSA PSD TAI IVDo OUR SAMi BUS DEM NAV STA UNI COM INT PHY ONL FIN PUR LOC CNT"
Server: Microsoft-IIS/10.0
WWW-Authenticate: Bearer authorization_uri=https://login.microsoftonline.com/b8f712c7-d223-4cfb-b165-6267fc789086, Basic realm="https://tfsprodweu2.app.visualstudio.com/", TFS-Federated
X-TFS-ProcessId: f4c5d148-0e01-488f-ab27-69c753e38911
Strict-Transport-Security: max-age=31536000; includeSubDomains
ActivityId: b85558aa-d41a-4763-a311-f2498ddb2dc0
X-TFS-Session: 8242b35d-d5e9-4242-8f52-b7c1f14c451c
X-VSS-E2EID: 6494b165-d486-4064-a5fd-92ae7f201867
X-FRAME-OPTIONS: SAMEORIGIN
X-TFS-FedAuthRealm: https://tfsprodweu2.app.visualstudio.com/
X-TFS-FedAuthIssuer: https://rr-ffes.visualstudio.com/
X-VSS-ResourceTenant: b8f712c7-d223-4cfb-b165-6267fc789086
X-TFS-SoapException: %3C%3Fxml%20version%3D%221.0%22%20encoding%3D%22utf-8%22%3F%3E%3Csoap%3AEnvelope%20xmlns%3Asoap%3D%22http%3A%2F%2Fwww.w3.org%2F2003%2F05%2Fsoap-envelope%22%3E%3Csoap%3ABody%3E%3Csoap%3AFault%3E%3Csoap%3ACode%3E%3Csoap%3AValue%3Esoap%3AReceiver%3C%2Fsoap%3AValue%3E%3Csoap%3ASubcode%3E%3Csoap%3AValue%3EUnauthorizedRequestException%3C%2Fsoap%3AValue%3E%3C%2Fsoap%3ASubcode%3E%3C%2Fsoap%3ACode%3E%3Csoap%3AReason%3E%3Csoap%3AText%20xml%3Alang%3D%22en%22%3ETF400813%3A%20Resource%20not%20available%20for%20anonymous%20access.%20Client%20authentication%20required.%3C%2Fsoap%3AText%3E%3C%2Fsoap%3AReason%3E%3C%2Fsoap%3AFault%3E%3C%2Fsoap%3ABody%3E%3C%2Fsoap%3AEnvelope%3E
X-TFS-ServiceError: TF400813%3A%20Resource%20not%20available%20for%20anonymous%20access.%20Client%20authentication%20required.
X-VSS-S2STargetService: 00000002-0000-8888-8000-000000000000/visualstudio.com
X-TFS-FedAuthRedirect: https://app.vssps.visualstudio.com/_signin?realm=rr-ffes.visualstudio.com&reply_to=https%3A%2F%2Frr-ffes.visualstudio.com%2F_apis%2FconnectionData%3FconnectOptions%3D1%26lastChangeId%3D-1%26lastChangeId64%3D-1&redirect=1&context=eyJodCI6MiwiaGlkIjoiNDEzNzE0YzItZWQ2OS00MWRkLWJmMTItNzc0ZTI1ZGEzOTdmIiwicXMiOnt9LCJyciI6IiIsInZoIjoiIiwiY3YiOiIiLCJjcyI6IiJ90#ctx=eyJTaWduSW5Db29raWVEb21haW5zIjpbImh0dHBzOi8vbG9naW4ubWljcm9zb2Z0b25saW5lLmNvbSIsImh0dHBzOi8vbG9naW4ubWljcm9zb2Z0b25saW5lLmNvbSJdfQ2
X-Powered-By: ASP.NET
X-Content-Type-Options: nosniff
I am sure I correctly entered the PAT access token, so what is the problem?
I found the answer. The authentication failure was a red herring. The actual problem was that it was waiting for user input, but PowerShell was not writing out the question before waiting for the answer. Here are the lines in the log file:
[2017-07-11 14:03:20Z INFO Terminal] WRITE: Enter work folder (press enter for _work) >
[2017-07-11 14:03:20Z INFO Terminal] READ LINE
To get around this problem, I made sure I specified all the required arguments and used --unattended to ensure it wasn't going to ask me anything else. In the end it looked like this:
PS>.\config.cmd --url $url --auth 'PAT' --token $patKey --pool $poolId --agent $serverId --work '_work' --runasservice --unattended
I just want to share our story: we had the same exception, but my log was different.
[2017-12-04 11:20:37Z WARN VisualStudioServices] Authentication failed
with status code 401.
After struggling for a while, we figured out that the application pools running TFS need administrator-level permissions to be able to access Certificate Management.
To fix the issue, I had to go to Administrative Tools > Services, and change the VSTS Agent service account from Network service to Local system.
The service was then able to start and work as expected.

Can't access resource as OWNER despite the fact I'm the owner

I'm trying to act on a bucket and its resources, but I keep getting an access denied error, e.g.:
```
$ gsutil ls -L gs://images/large
gs://images/large/aa.png:
Creation time: Tue, 25 Nov 2014 20:03:19 GMT
Cache-Control: public, max-age=2592000
Content-Length: 343034
Content-Type: image/png
Generation: 1416945799570000
Metageneration: 2
ACL: ACCESS DENIED. Note: you need OWNER permission
on the object to read its ACL.
```
The same happens when I try to run ACL operations or overwrite a file.
First of all, I'd like to mention that being the bucket owner means that you are always allowed to delete the objects stored in that bucket but you may not have object owner permissions if the default ACLs were overridden. This is different from how popular operating systems work where there is the concept of a super-user.
Did you try to run that command using one of the existing service accounts in your project, listed in the Developers Console under APIs & auth -> Credentials?
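For example, a minimal sketch of switching gsutil to one of those service accounts and retrying (the key file name is a placeholder):

gcloud auth activate-service-account --key-file=my-service-account-key.json
gsutil ls -L gs://images/large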
If you are still getting that error, the object was probably uploaded through App Engine. You can make an App Engine application in Python with the following code, which lists the object ACLs using the JSON API. App Engine has its own service account (<project ID>@appspot.gserviceaccount.com), and it is different from the accounts that appear in the Developers Console.
#!/usr/bin/env python
import webapp2
from google.appengine.api import app_identity
from google.appengine.api import urlfetch

class MainPage(webapp2.RequestHandler):
    def get(self):
        # Get an OAuth2 access token for the App Engine service account.
        scope = "https://www.googleapis.com/auth/devstorage.full_control"
        authorization_token, _ = app_identity.get_access_token(scope)
        # Read the object's ACL through the GCS JSON API.
        acls = urlfetch.fetch(
            "https://www.googleapis.com/storage/v1/b/<bucket>/o/<object>/acl",
            method=urlfetch.GET,
            headers={"Content-Type": "application/json",
                     "Authorization": "OAuth " + authorization_token})
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(acls.content)

application = webapp2.WSGIApplication([
    ('/', MainPage),
], debug=True)