How to get the 16-digit rgw_sts_key in Ceph storage for the STS API - ceph

I am trying to generate temporary credentials in Ceph storage through the STS API.
I am following the steps mentioned in this link: https://docs.ceph.com/en/latest/radosgw/STS/
One of the steps is to set a 16-digit hex rgw_sts_key.
Does anyone know how to get the rgw_sts_key and where to put it?
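For what it's worth, the linked docs appear to treat rgw_sts_key as nothing more than a 16-character hex secret that RGW uses to encrypt the session token, set in the RGW client section of ceph.conf alongside rgw s3 auth use sts = true. As a hedged sketch only (the section name client.rgw.your-gateway below is a placeholder for your own RGW instance), any random 16-hex-character value should do, e.g. generated in Java:

import java.security.SecureRandom;

public class RgwStsKeyGen {
    public static void main(String[] args) {
        // 8 random bytes -> 16 hex characters, e.g. "b3c0d9f1a2e45678"
        byte[] raw = new byte[8];
        new SecureRandom().nextBytes(raw);
        StringBuilder key = new StringBuilder();
        for (byte b : raw) {
            key.append(String.format("%02x", b));
        }
        // Assumed placement (adjust the section name to your deployment), per the STS docs:
        //   [client.rgw.your-gateway]
        //   rgw sts key = <printed value>
        //   rgw s3 auth use sts = true
        // then restart the radosgw service.
        System.out.println(key);
    }
}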

Related

unexpected "ed25519-nkey" algorithm error when using NAS and NSC of NATS.io

A team I'm working with has created a NAS Docker container. The Dockerfile uses FROM synadia/nats-account-server:0.8.4 and installs NSC using curl -L https://raw.githubusercontent.com/nats-io/nsc/master/install.py | python. When NAS is run in the Docker container, it is given a path to a server.conf file that contains operatorjwtpath: "/nsc/accounts/nats/OperatorName/OperatorName.jwt".
The problem is that when I generate the operator on my PC using nsc add operator -i, run the Docker container on AWS Fargate, and mount the JWT file into the appropriate folder using an AWS EFS filesystem, the container crashes and shows the error unexpected "ed25519-nkey" algorithm.
According to the NATS basics page, the algorithm that should be used is "alg": "ed25519". But when I generated the JWT and decoded it on this site, I see that what's being used is "alg": "ed25519-nkey".
So what is going on here? I can't find any specific info about an algorithm that has "nkey" appended to its name. This is the default JWT that's generated. Why is it different from what the NAS algorithm expects? How do I solve this error?
Extra info: According to this site, it's supposed to be due to a version conflict, but even upgrading to FROM synadia/nats-account-server:1.0.0 didn't solve it.
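Not an answer, but one way to check locally which algorithm a generated operator JWT declares (instead of pasting it into a website) is to base64url-decode its header. A minimal, hedged Java sketch; the argument handling is illustrative only:

import java.util.Base64;

public class JwtHeaderPeek {
    public static void main(String[] args) {
        // Pass the contents of OperatorName.jwt as the first argument.
        String jwt = args[0].trim();
        // A JWT is header.payload.signature; the header is base64url-encoded JSON.
        String headerB64 = jwt.split("\\.")[0];
        String headerJson = new String(Base64.getUrlDecoder().decode(headerB64));
        // For the JWT described in the question, the "alg" field shows "ed25519-nkey";
        // older tooling reportedly emits plain "ed25519".
        System.out.println(headerJson);
    }
}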

Process YAML File step of the Kubernetes plugin throws an error if the image name has a / slash

The image in the deployment YAML is in the following format:
'${DockerRegistry}/${orgName}/${projectName}/${ImageName}:${version}'
There are 3 forward slashes in the image name after the Docker registry name, and this is causing an error. I tried with Kubernetes plugin versions 16, 17, 18 and 19, and the Process YAML step of the Kubernetes plugin throws the error below.
Loading /opt/ibm-ucd/agent/var/work/lr-central-credit-register/common/openshift/dc.yml
The desired versions for existing image components is [:]
Creating ibm-ucd-kubernetes.yaml
Creating component: cbrpoc-loan-requests-cbrpoc-loan-requests/lr-central-credit-register
Caught: java.io.IOException: 400 Error processing command: Name cannot contain the following characters: / \ [ ] %
java.io.IOException: 400 Error processing command: Name cannot contain the following characters: / \ [ ] %
    at com.urbancode.ud.client.UDRestClient.invokeMethod(UDRestClient.java:225)
    at com.urbancode.ud.client.ComponentClient.createComponent(ComponentClient.java:180)
    at processyaml.createComponent(processyaml.groovy:481)
    at processyaml.this$4$createComponent(processyaml.groovy)
    at processyaml$_run_closure6.doCall(processyaml.groovy:362)
    at processyaml.run(processyaml.groovy:325)
According to the official documentation of Docker Registry HTTP API V2
A repository name is broken up into path components. A component of a
repository name must be at least one lowercase, alpha-numeric
characters, optionally separated by periods, dashes or underscores.
More strictly, it must match the regular expression
[a-z0-9]+(?:[._-][a-z0-9]+)*. If a repository name has two or more
path components, they must be separated by a forward slash (“/”). The
total length of a repository name, including slashes, must be less
than 256 characters.
Please make sure you are using Docker Registry HTTP API V2 and follow all the above rules.
While the V1 registry protocol is usable, there are several problems with the architecture that have led to V2.
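If it helps, here is a hedged Java sketch that applies the per-component rule quoted above to an image reference of the shape used in the question (registry.example.com, orgname, projectname and imagename are placeholders, not real values):

import java.util.regex.Pattern;

public class RepoNameCheck {
    // Per-component rule quoted from the Docker Registry HTTP API V2 docs.
    private static final Pattern COMPONENT = Pattern.compile("[a-z0-9]+(?:[._-][a-z0-9]+)*");

    public static void main(String[] args) {
        // Placeholder for ${DockerRegistry}/${orgName}/${projectName}/${ImageName}:${version}
        String image = "registry.example.com/orgname/projectname/imagename:1.0.0";
        // Strip the registry host and the tag, leaving the repository path.
        String repository = image.substring(image.indexOf('/') + 1, image.lastIndexOf(':'));
        if (repository.length() >= 256) {
            System.out.println("repository name too long: " + repository);
        }
        for (String component : repository.split("/")) {
            boolean ok = COMPONENT.matcher(component).matches();
            System.out.println(component + " -> " + (ok ? "valid" : "INVALID"));
        }
    }
}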
Additionally, you can try to use docker tag:
Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
More info with examples is linked above.
Please let me know if that helped.

Google cloud datalab deployment unsuccessful - sort of

This is a different scenario from other questions on this topic. My deployment almost succeeded, and I can see the following lines at the end of my log:
[datalab].../#015Updating module [datalab]...done.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deployed module [datalab] to [https://main-dot-datalab-dot-.appspot.com]
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Step deploy datalab module succeeded.
Jul 25 16:22:36 datalab-deploy-main-20160725-16-19-55 startupscript: Deleting VM instance...
The landing page keeps showing a wait bar indicating the deployment is still in progress. I have tried deploying several times in the last couple of days.
About the additions described on the landing page:
An App Engine "datalab" module is added. - When I click the pop-out URL "https://datalab-dot-.appspot.com/", it throws an error page with "404 page not found".
A "datalab" Compute Engine network is added. - Under "Compute Engine > Operations" I can see a create-instance operation for the datalab deployment with my ID, and a delete-instance operation with the *******-ompute#developer.gserviceaccount.com ID. Not sure what that means.
A datalab branch is added to the git repo. - Yes, with all the components.
I think the deployment is partially successful. When I visit the landing page again, the only option I see is to deploy datalab again, not to start it. Can someone spot the problem? I appreciate the help.
I read the other posts on this topic and tried to verify my deployment using "https://console.developers.google.com/apis/api/source/overview?project=". I get the following message:
The API doesn't exist or you don't have permission to access it
You can try looking at the App Engine dashboard here, to verify that there is a "datalab" service deployed.
If that is missing, then you need to redeploy again (or switch to the new locally-run version).
If that is present, then you should also be able to see a "datalab" network here, and a VM instance named something like "gae-datalab-main-..." here. If either of those are missing, then try going back to the App Engine console, deleting the "datalab" service, and redeploying.

GCS slow upload from pod inside kubernetes GKE

Uploading to GCS from a pod inside GKE takes a really long time. I hoped the upgrade to Kubernetes 1.1 would help, but it didn't. It is faster, but not as fast as it should be. I ran some benchmarks, uploading a single 100 MiB file:
docker 1.7.2 local
took {20m51s240ms}, that's about ~{0.07993605115907274}MB/s
docker 1.8.3 local
took {3m51s193ms}, that's about ~{0.4329004329004329}MB/s
docker 1.9.0 local
took {3m51s424ms}, that's about ~{0.4329004329004329}MB/s
kubernetes 1.0
took {1h10s952ms}, that's about ~{0.027700831024930747}MB/s
kubernetes 1.1.2 (docker 1.8.3)
took {32m11s359ms}, that's about ~{0.05178663904712584}MB/s
As you can see, the throughput doubles with Kubernetes 1.1.2, but it is still really slow. If I want to upload 1 GB I have to wait ~5 hours, which can't be the expected behaviour. GKE runs inside the Google infrastructure, so I expect it to be faster than, or at least as fast as, uploading from local.
I also noted a very high CPU load (70%) while uploading. This was tested with an n1-highmem-4 machine type and a single RC/pod that was doing nothing but the upload.
I'm using the Java client with the GAV coordinates com.google.appengine.tools:appengine-gcs-client:0.5.
The relevant code is as follows:
InputStream inputStream = ...; // 100MB RandomData from RAM
StorageObject so = new StorageObject().setContentType("text/plain").setName(objectName);
AbstractInputStreamContent content = new InputStreamContent("text/plain", inputStream);
Stopwatch watch = Stopwatch.createStarted();
// Resumable (chunked) upload by default -- see the DirectUpload note below.
storage.objects().insert(bucket.getName(), so, content).execute();
watch.stop();
Copying a 100 MB file using a manually installed gcloud with gsutil cp took nearly no time (3 seconds), so it might be an issue with the Java library? The question still remains: how do I improve the upload time using the Java library?
The solution is to enable "DirectUpload", so instead of writing
storage.objects().insert(bucket.getName(), so, content).execute();
you have to write:
Storage.Objects.Insert insert = storage.objects().insert(bucket.getName(), so, content);
insert.getMediaHttpUploader().setDirectUploadEnabled(true);
insert.execute();
The performance I get with this solution:
took {13s515ms}, that's about ~{7.6923076923076925}MB/s
JavaDoc for the setDirectUploadEnabled:
Sets whether direct media upload is enabled or disabled.
If value is set to true then a direct upload will be done where the
whole media content is uploaded in a single request. If value is set
to false then the upload uses the resumable media upload protocol to
upload in data chunks.
Direct upload is recommended if the content size falls below a certain
minimum limit. This is because there's minimum block write size for
some Google APIs, so if the resumable request fails in the space of
that first block, the client will have to restart from the beginning
anyway.
Defaults to false.
The fact that you're seeing high CPU load and that the slowness only affects Java and not the Python gsutil is consistent with the slow AES GCM issue in Java 8. The issue is fixed in Java 9 using appropriate specialized CPU instructions.
If you have control over it, then either using Java 7 or adding jdk.tls.disabledAlgorithms=SSLv3,GCM to a file passed to java -Djava.security.properties should fix the slowness as explained in this answer to the general slow AES GCM question.
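As a hedged alternative when you can't change the JVM command line, the same security property can be set programmatically, as long as it runs before the first TLS connection is made. This is a sketch only; verify the algorithm list against your JVM's default java.security file rather than copying it blindly:

import java.security.Security;

public class DisableGcmForTls {
    public static void main(String[] args) throws Exception {
        // Equivalent in spirit to passing -Djava.security.properties=<file> containing
        // jdk.tls.disabledAlgorithms=SSLv3,GCM, but set from code. It must run before
        // any HTTPS request, otherwise JSSE has already been initialized.
        Security.setProperty("jdk.tls.disabledAlgorithms", "SSLv3, GCM");

        // ... build the Storage client and run the insert() call from the snippet above ...
    }
}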

gsutil make bucket command [gsutil mb] is not working

I am trying to create a bucket using gsutil mb command:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs
But I am getting this error message:
Creating gs://some-bucket-to-my-gs/...
BadRequestException: 400 Invalid argument.
I am following the documentation from here
What is the reason for this type of error?
I got the same error. It was because I used the wrong location.
The location parameter expects a region, without specifying which zone.
E.g.
gsutil mb -p ${TF_ADMIN} -l europe-west1-b gs://${TF_ADMIN}
should have been
gsutil mb -p ${TF_ADMIN} -l europe-west1 gs://${TF_ADMIN}
One reason this error can occur (confirmed in chat with the question author) is that you have an invalid default_project_id configured in your .boto file. Ensure that the ID matches your project ID in the Google Developers Console.
If you can make a bucket successfully using the Google Developers Console, but not using "gsutil mb", this is a good thing to check.
I was receiving the same error for the same command while using gsutil as well as the web console. Interestingly enough, changing my bucket name from "google-gatk-test" to "gatk" allowed the request to go through. The original name does not appear to violate bucket naming conventions.
Playing with the bucket name is worth trying if anyone else is running into this issue.
Got this error, and adding the default_project_id to the .boto file didn't work.
It took me some time, but in the end I deleted the credentials file from the "Global Config" directory and recreated the account.
I'm using it on Windows, by the way.
This can happen if you are logged into the management console (storage browser), possibly a locking/contention issue.
May be an issue if you add and remove buckets in batch scripts.
In particular, this was happening to me when creating regionally diverse (non-DRA) buckets:
gsutil mb -l EU gs://somebucket
Also watch underscores: the abstraction scheme seems to use them to map folders. All objects in the same project are stored at the same level (possibly as blobs in an abstracted database structure).
You can see this when downloading from the browser interface (at the moment anyway).
An object copied to gs://somebucket/home/crap.txt might be downloaded via a browser (or curl) as home_crap.txt. As an aside (red herring), somefile.tar.gz can come down as somefile.tar.gz.tar, so a little bit of renaming may be required due to the vagaries of the headers returned from the browser interface anyway. The minimum real support level is still $150/mth.
I had this same issue when I created my bucket using the following commands
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=MY_BUCKET_NAME_1
MY_REGION=us-central1
But when I added a dollar sign $ to reference the variable MY_BUCKET_NAME_1, as MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1, the error was cleared and I was able to create the bucket.
I got this error when I had a capital letter in the bucket name:
$gsutil mb gs://CLIbucket-anu-100000
Creating gs://CLIbucket-anu-100000/...
BadRequestException: 400 Invalid bucket name: 'CLIbucket-anu-100000'
$gsutil mb -l ASIA-SOUTH1 -p single-archive-352211 gs://clibucket-anu-100
Creating gs://clibucket-anu-100/..
$
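For anyone hitting the naming variants above (the capital letters here, or the "google-gatk-test" case earlier), here is a rough, hedged Java check of the documented GCS bucket naming rules. The 3-63 character limit and the ban on names containing "google" are from the public naming guidelines; treat the regex as an approximation, not the authoritative rule:

import java.util.regex.Pattern;

public class BucketNameCheck {
    // Approximation of the GCS rules: lowercase letters, digits, dashes, underscores
    // and dots only, starting and ending with a letter or digit, 3-63 characters
    // (dotted names may be longer, up to 222 characters).
    private static final Pattern BUCKET = Pattern.compile("[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]");

    public static void main(String[] args) {
        for (String name : new String[] {"CLIbucket-anu-100000", "clibucket-anu-100", "google-gatk-test"}) {
            boolean shapeOk = BUCKET.matcher(name).matches();
            boolean reservedWord = name.contains("google"); // such names are rejected
            System.out.println(name + " -> " + (shapeOk && !reservedWord ? "looks valid" : "INVALID"));
        }
    }
}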