code=DotfulBucketNameNotUnderTld for .name domain - google-cloud-storage

In order to portion off my part of the Google Cloud Storage namespace, I've verified my domain, paul.kishimoto.name, as described here. However, I am still unable to create buckets containing that name. Using the gsutil command-line tool, I am given an error message:
$ gsutil mb -p khaeru-private gs://paul.kishimoto.name-documents
Creating gs://paul.kishimoto.name-documents/...
GSResponseError: status=400, code=DotfulBucketNameNotUnderTld, reason=Bad Request.
A search for the error code DotfulBucketNameNotUnderTld turns up this discussion from the old support group, in which a Google employee said a list from publicsuffix.org was used to check for valid TLDs. The list does not appear to contain the .name TLD.

After verifying that you own the domain paul.kishimoto.name you can create buckets with that name or subdomains of that name, such as paul.kishimoto.name, docs.paul.kishimoto.name, images.paul.kishimoto.name, etc. You can't create a bucket named paul.kishimoto.name-documents because .name-documents is not a currently valid TLD.
Mike Schwartz, Google Cloud Storage team
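For reference, a bucket name that stays within the verified domain should be accepted; a hedged example, where the docs. subdomain is an arbitrary choice:
$ gsutil mb -p khaeru-private gs://docs.paul.kishimoto.name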

Related

Is it possible to have multiple Kerberos tickets on the same machine?

I have a use case where I need to connect to 2 different DBs using 2 different accounts, and I am using Kerberos for authentication.
Is it possible to create multiple Kerberos tickets on the same machine?
kinit account1@DOMAIN.COM (first ticket)
kinit account2@DOMAIN.COM (second ticket)
Whenever I do klist, I only see the most recent ticket created. It doesn't show all the tickets.
Next, I have a job that needs to first use ticket for account1 (for connection to DB1) and then use ticket for account2 (for DB2).
Is that possible? How do I tell in DB connection what ticket to use?
I'm assuming MIT Kerberos and linking to those docs.
Try klist -A to show all tickets in the ticket cache. If there is only one, try switching your ccache type to DIR as described here:
DIR points to the storage location of the collection of the credential caches in FILE: format. It is most useful when dealing with multiple Kerberos realms and KDCs. For release 1.10 the directory must already exist. In post-1.10 releases the requirement is for the parent directory to exist and the current process must have permissions to create the directory if it does not exist. See Collections of caches for details. New in release 1.10. The following residual forms are supported:
DIR:dirname
DIR::dirpath/filename - a single cache within the directory
Switching to a ccache of the latter type causes it to become the primary for the directory.
You do this by specifying the default ccache name as DIR:/path/to/cache in one of the ways described here (a combined sketch follows the list below).
The default credential cache name is determined by the following, in descending order of priority:
The KRB5CCNAME environment variable. For example, KRB5CCNAME=DIR:/mydir/.
The default_ccache_name profile variable in [libdefaults].
The hardcoded default, DEFCCNAME.
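Putting the pieces together, a minimal sketch assuming MIT Kerberos 1.10+; the cache directory path is a placeholder, and the principals are the ones from the question:
mkdir -p /home/me/ccaches                 # the directory (or its parent) must exist
export KRB5CCNAME=DIR:/home/me/ccaches    # use a directory collection as the default ccache
kinit account1@DOMAIN.COM                 # first ticket; its cache becomes the primary
kinit account2@DOMAIN.COM                 # second ticket, stored alongside the first
klist -A                                  # lists every ticket in the collection
kswitch -p account2@DOMAIN.COM            # make account2's cache the primary before connecting to DB2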

Can I search Google Cloud Storage buckets recursively in the console?

We have uploaded files to Google Cloud Storage buckets and plan to grant a number of people access to them. So far we can only filter/search files and folders in the directory we are in. Is it possible to search for files recursively, though?
It seems what you are looking for is the following command for searching within a bucket recursively:
gsutil ls -r gs://bucket/**
Note: "bucket" is the name of the bucket you have set.
In the case you would like to search within a specific directory you can run the following:
gsutil ls -r gs://bucket/dir/**
Note: "dir" would be the directory in which you would like to search
You can find more information regarding searching through "Directory By Directory, Flat, And Recursive" by going to the following link.
Update
If this is not what you meant, I would like to mention another option: you can also retrieve information about the contents of a bucket through the API. The objects list method linked here retrieves a list of objects matching the specified criteria.
Note: In order for this API to work, the user must have "READER" permission or above.
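As a hedged illustration of that API call with curl (the bucket name and prefix are placeholders, and authentication here assumes the gcloud CLI is available):
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o?prefix=dir/&fields=items(name)"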
Please let me know if this is what you were looking for.

gsutil acl set command AccessDeniedException: 403 Forbidden

I am following the steps for setting up Django on Google App Engine, and since Gunicorn does not serve static files, I have to store my static files in Google Cloud Storage.
I am at the line with "Create a Cloud Storage bucket and make it publicly readable." on https://cloud.google.com/python/django/flexible-environment#run_the_app_on_your_local_computer. I ran the following commands as suggested:
$ gsutil mb gs://your-gcs-bucket
$ gsutil defacl set public-read gs://your-gcs-bucket
The first command is supposed to create a new storage bucket, and the second sets its default ACL. When I run them, the second command returns an error.
Setting default object ACL on gs://your-gcs-bucket/...
AccessDeniedException: 403 Forbidden
I also tried other commands that set or get ACLs, but they all return the same error, with no additional information.
I am a newbie with Google Cloud services; could anyone point out what the problem is?
I figured it out myself, and it is kind of silly: I didn't check whether the first command actually succeeded, and apparently it did not.
For a newbie like me, it is important to note that bucket names, like project IDs, are globally unique across the whole service. What happened was that the name I used for the new bucket was already taken by someone else, so no wonder I did not have permission to access that bucket.
A better approach is to choose bucket names carefully, for example by prefixing them with the project and application names.
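For instance, a hedged sketch of that naming convention (the project and app names here are made-up placeholders):
$ gsutil mb gs://myproject-myapp-static
$ gsutil defacl set public-read gs://myproject-myapp-static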

Can't use wildcards for bucket names with gsutil for Google Cloud Storage?

Question: can wildcards be used in GCS bucket names with gsutil?
I want to grab multiple files in GCS using wildcards, where the files are split across buckets. But I consistently run into errors when using wildcards in bucket names with gsutil. I'm using wildcards like this:
gsutil ls gs://myBucket-abcd-*/log/data_*
I want to match all these file names (variations in bucket name AND in object name):
gs://myBucket-abcd-1234/log/data_foo.csv
gs://myBucket-abcd-1234/log/data_bar.csv
gs://myBucket-abcd-5678/log/data_foo.csv
gs://myBucket-abcd-5678/log/data_bar.csv
The documentation on Bucket Wildcards tells me I should be able to use wildcards both in the bucket name and in the object name, but the code sample above always gets "BadRequestException: 400 Invalid argument."
gsutil otherwise works when I use no wildcards or use wildcards in the object name only, but adding a wildcard to the bucket name results in the error. Are there workarounds to make wildcards work in bucket names, or am I misinterpreting the linked documentation?
I found that not being able to use bucket wildcards in this case is working as intended, and is due to differences in permission settings. Google Cloud Storage permissions can be set at both the bucket and the project level.
Though the access token used in this case can access every individual bucket, it does not have reader/editor/owner access to the top-level project (which is shared across many users of the system). Without access to the project, wildcards cannot be used on bucket names.
This can be fixed by having a project owner add the user as a reader/editor/owner on the project.
In this case, for security reasons we can't give an individual token access to all buckets in the project, but it's helpful to understand why the wildcard didn't work. Thanks all for the input, and especially Travis for the contact.
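If project-level access cannot be granted, one hedged workaround is to list the known buckets explicitly and loop over them, keeping the wildcard in the object names only (the bucket names below are the ones from the question):
for b in myBucket-abcd-1234 myBucket-abcd-5678; do
  gsutil ls "gs://${b}/log/data_*"
done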
Some shells (e.g. Zsh) try to expand the * and **, so you need to enclose these in quotation marks, like this:
gsutil ls 'gs://myBucket-abcd-*/log/data_*'
I found this here: gsutil returning "no matches found"

gsutil make bucket command [gsutil mb] is not working

I am trying to create a bucket using gsutil mb command:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs
But I am getting this error message:
Creating gs://some-bucket-to-my-gs/...
BadRequestException: 400 Invalid argument.
I am following the documentation from here.
What is the reason for this type of error?
I got the same error. It was because I used the wrong location.
The location parameter expects a region, without specifying which zone.
E.g.
gsutil mb -p ${TF_ADMIN} -l europe-west1-b gs://${TF_ADMIN}
should have been
gsutil mb -p ${TF_ADMIN} -l europe-west1 gs://${TF_ADMIN}
One reason this error can occur (confirmed in chat with the question author) is that you have an invalid default_project_id configured in your .boto file. Ensure that the ID matches your project ID in the Google Developers Console.
If you can make a bucket successfully using the Google Developers Console, but not using "gsutil mb", this is a good thing to check.
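For example, a hedged way to check (the .boto path assumes the default location on Linux/macOS, and your-project-id is a placeholder), plus passing the project explicitly to rule out the config:
grep default_project_id ~/.boto
gsutil mb -p your-project-id -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs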
I was receiving the same error for the same command while using gsutil as well as the web console. Interestingly enough, changing my bucket name from "google-gatk-test" to "gatk" allowed the request to go through. The original name does not obviously violate the naming conventions, though note that bucket names cannot contain the word "google", which may explain the failure.
Playing with the bucket name is worth trying if anyone else runs into this issue.
I got this error, and adding the default_project_id to the .boto file didn't work.
It took me some time, but in the end I deleted the credentials file from the "Global Config" directory and recreated the account.
I'm using it on Windows, by the way.
This can happen if you are logged into the management console (storage browser); it is possibly a locking/contention issue.
It may be an issue if you add and remove buckets in batch scripts.
In particular, this was happening to me when creating regionally diverse (non-DRA) buckets:
gsutil mb -l EU gs://somebucket
Also watch out for underscores; the abstraction scheme seems to use them to map folders. All objects in the same project are stored at the same level (possibly as blobs in an abstracted database structure).
You can see this when downloading from the browser interface (at the moment, anyway).
An object copied to gs://somebucket/home/crap.txt might be downloaded via a browser (or curl) as home_crap.txt. As an aside (red herring), somefile.tar.gz can come down as somefile.tar.gz.tar, so a little renaming may be required due to the vagaries of the headers returned from the browser interface. The minimum real support level is still $150/month.
I had this same issue when I created my bucket using the following commands:
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=MY_BUCKET_NAME_1
MY_REGION=us-central1
But when I added a dollar sign $ to reference the variable, as in MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1, the error was cleared and I was able to create the bucket.
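In other words, the corrected sequence looks roughly like this (the final mb command is an assumption about how the variables were then used; it is not shown in the original post):
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1
MY_REGION=us-central1
gsutil mb -l $MY_REGION gs://$MY_BUCKET_NAME_2   # assumed usage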
I got this error when I had a capital letter in the bucket name:
$gsutil mb gs://CLIbucket-anu-100000
Creating gs://CLIbucket-anu-100000/...
BadRequestException: 400 Invalid bucket name: 'CLIbucket-anu-100000'
$gsutil mb -l ASIA-SOUTH1 -p single-archive-352211 gs://clibucket-anu-100
Creating gs://clibucket-anu-100/..
$