Bluemix - Assigning Space Quota while creating space - ibm-cloud

I am trying to create a space with a specific space quota plan assigned, using a CF CLI command. I created a space quota plan called myPlanTiny and ran the CLI command below.
cf create-space mySpace1 -o my_org1 -q myPlanTiny
The space got created; however, the quota plan assignment did not take effect. Only the default plan got assigned to the newly created space. My Bluemix account is a public Bluemix account.
Requesting your help to understand why the space quota assignment does not take effect.
Thanks.

The assigned quota is set to the default for each newly created space, both from the UI and the CF CLI. In public Bluemix this quota is related to the org, not to the single space. There is also a reserved set of cf commands that only administrators can run, including commands around quotas and creating orgs, which could be the reason why you are not able to change the quota. If you are an admin, you should be able to set a custom quota and use it in a newly created space with the command you provided. After doing this, if cf space <spacename> still shows the default, I suggest you open a support request using one of the following methods in order to engage the Bluemix Project Office Team:
Use the Support Widget. It is available from the user avatar in the upper right corner of the main Bluemix UI. After opening the support widget panel, select Get Help > Get In Touch, select the type of assistance you need, and then fill out the support form.
Use the Support Site 'Get Help' form. This form is hosted on a separate site for ticket submission when you cannot log in to Bluemix and access the Support Widget. Go to http://ibm.biz/bluemixsupport and fill in the support request form.

1) update your CF CLI
2) create quota: cf create-space-quota myQuota -i 256M -m 256M -r 5 -s 5 --allow-paid-service-plans
3) create space and assign your quota: cf create-space personalSpace -q myQuota
4) if the issue is still there open a new ticket.
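Putting those steps together, the whole flow might look like this (a minimal sketch; YOUR_ORG is a placeholder, and the quota/space names are the ones used in the steps above):
cf target -o YOUR_ORG        # YOUR_ORG is a placeholder for your own org
cf create-space-quota myQuota -i 256M -m 256M -r 5 -s 5 --allow-paid-service-plans
cf create-space personalSpace -q myQuota
cf space personalSpace       # verify which space quota got assigned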

Thanks for all your help.
I updated the CF CLI to version 6.16.1+924508c-2016-02-26, created a space quota, and created a space assigned to the new quota.
I verified the space quota set on the space using the command below:
cf space SPACE_NAME
It showed the correct details.
Initially, I was confused because the tile on the dashboard said 1 GB/16 GB Used. I guess that is shown at the org level and is probably working as designed.
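For reference, the two views can also be compared from the CLI (a quick sketch using the names from the question):
cf org my_org1       # org-level quota and usage, which is what the dashboard tile reflects
cf space mySpace1    # space-level quota assignment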
The issue is resolved. Thanks!!!

CodeSandbox: Using terminal in node template

I'm trying to create a sandbox using the node template but I'm running into issues accessing the terminal. I have a sandbox here that I've uploaded using their define API which should be using a node template (defined in my sandbox.config.json) and have a defined start script. It shows a 504 and doesn't give me access to the terminal. What am I doing wrong?
After more research: I now see the sandbox running in a node environment, but with no terminal. However, hovering over the "+" on the upper right of the info/console window gives a tooltip "Fork to add a Terminal". I did so, and the terminal became available. I conclude it's some form of ownership issue - I can't open a terminal in your sandbox, but I can in my forked sandbox.
We can conclude that the define API creates a public template/sandbox - but the terminal is only available in a private sandbox. To use the terminal, you'll have to fork the sandbox after creating it.
(thx to #codesandbox for including the tooltip that led to the conclusion)
That's not a container environment, which is required to have access to a terminal. There are known issues with containers & codesandbox; specifically, you can't convert one sandbox type to another, and sometimes forking from someone else's GitHub repo also does not create the fork as a container.
Best to start with a containerized template.
In case this helps anyone: to enable containers for an existing project you need to create a sandbox.config.json file with the following content before creating the sandbox:
{
  "template": "node"
}
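For example, from a shell you could drop the file into the project root before uploading (a minimal sketch; the heredoc is just one way to create it):
# create sandbox.config.json at the project root so the sandbox is created as a container
cat > sandbox.config.json <<'EOF'
{
  "template": "node"
}
EOF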
I'm not sure if there is a way to change the sandbox once it's created.
Ref: https://github.com/codesandbox/codesandbox-client/issues/1608

Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what to replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the "cloudasset.googleapis.com" gcloud service. For this they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the screenshot attached herewith:
Error in the service enabling command
The next step is to clone the policy library. The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
error on running the directory command
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to put my own credentials; for example, in place of the project ID I wrote my own project ID.
I also don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID into that variable. This is far from ideal because Qwiklabs creates a new temporary project for every lab, so this would only work while you are still on that project. The project ID can be seen on both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid using the argument --project. According to the documentation, that argument is optional, and if it is omitted the command uses the default project from your gcloud configuration. You can get the current project with:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
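Putting it together, a quick way to check and recover in Cloud Shell could be (a sketch; it only reuses the variable names already mentioned above):
echo $DEVSHELL_PROJECT_ID                      # should print the lab's project ID
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID     # set the variable if the echo above worked
gcloud config get-value project                # project used when --project is omitted
gcloud services enable cloudasset.googleapis.com --project $GOOGLE_PROJECT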
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the directory policy-library/samples. There is a trace that it was once there, but it isn't anymore; they should probably update the lab.
About your credentials confusion, you don't have to use your own project ID, just the one given in your lab. If I recall correctly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged in to your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all of this.
Adding this for later versions: in Cloud Shell you can set a temporary variable holding the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
and then use it like
--project ${PROJECT_ID}
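For example (a sketch that reuses the lab's bucket naming; cai-${PROJECT_ID} and the region are only illustrative):
PROJECT_ID="$(gcloud config get-value project)"
gcloud services enable cloudasset.googleapis.com --project ${PROJECT_ID}
gsutil mb -l us-central1 -p ${PROJECT_ID} gs://cai-${PROJECT_ID}   # bucket name follows the lab's cai- convention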

Google Vision API - StatusCode.RESOURCE_EXHAUSTED

I am new to the Google Vision API. I would like to run label detection on approx. 10 images using the vision quickstart.py file. When I do this with only 3 images it is successful, but with more than 3 images I get the error message below. I know that I need to change something in my setup, but I do not know what I should change.
Here is my error message:
google.gax.errors.RetryError: GaxError(Exception occurred in retry method
that was not classified as transient, caused by <_Rendezvous of RPC that
terminated with (StatusCode.RESOURCE_EXHAUSTED, Insufficient tokens for
quota 'DefaultGroup' and limit 'USER-100s' of service
'vision.googleapis.com' for consumer 'project_number: XXX'.)>)
Does anybody know what I need to do?
Any help would be much appreciated
Cheers,
Andi
I ran into the same problem and fixed it with these steps:
Make sure you have the Google Cloud SDK properly installed: https://cloud.google.com/vision/docs/reference/libraries
Set up a Service Account in the Google Cloud backend: https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount
Create a Service Account key and download it as a JSON file to a local folder. You need to keep the key private.
Export the path to the key file as an environment variable, or activate the service account: gcloud auth activate-service-account --key-file path/to/your/keyfile/here
Log out/in of the console.
Make sure the environment variable is properly set with printenv.
Try your py-script again...
Good luck...
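Concretely, steps 4-6 might look like this in a shell (a minimal sketch; the key path is a placeholder, and GOOGLE_APPLICATION_CREDENTIALS is the variable the Google client libraries look for):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/keyfile.json              # placeholder path
gcloud auth activate-service-account --key-file /path/to/your/keyfile.json   # alternative: gcloud-based auth
printenv GOOGLE_APPLICATION_CREDENTIALS   # confirm the variable is set
python quickstart.py                      # re-run the label detection script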
Edit: In addition to steps 1-3 above, you can just do vision_client = vision.Client.from_service_account_json('/path/to/your/keyfile.json') in your script. No need for the env variable then.

Bluemix dashboard: Unable to update route (BXNUI0030E)

This error seems like a Bluemix internal error to me.
I am trying to add another route to one of my Liberty apps using the new Bluemix dashboard, and the error message I get is:
BXNUI0030E: The 'xxxxxx.au-syd.mybluemix.net' route wasn't mapped to the 'xxxxx-arya' app because a problem occurred contacting Cloud Foundry. Try again later. If you see this message again, go to the Bluemix status page to check whether a service or component has an issue. If the problem continues, click the Account and Support icon in the top menu bar, click Get help, and search for help or get support.
The error message is not clear enough to identify the cause of the problem. It just says that the Cloud Foundry command behind the web interface failed to create the route mapping, but not what exactly happened. Please run the commands below to see what happens during the map-route step and fix the problem:
Log in to the space with the cf CLI:
cf login -a api.au-syd.bluemix.net -u USERNAME -p PASSWORD -o ORG -s SPACE
List all routes in the space and make sure the app and the route mapping are there:
cf routes
Create the route mapping with:
cf map-route APPNAME au-syd.mybluemix.net --hostname NEW-HOSTNAME
Check the error message and fix the problem.
*) The most common problem is a duplicate route, i.e. the hostname is already used in another space. Please contact Bluemix Support with your new hostname if you cannot find the hostname in your own organizations.
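To check the duplicate-hostname case up front, the cf CLI also has a dedicated command (a small sketch; NEW-HOSTNAME is the same placeholder as above, and check-route is an extra step that is not part of the original answer):
cf check-route NEW-HOSTNAME au-syd.mybluemix.net   # reports whether the route already exists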

gsutil make bucket command [gsutil mb] is not working

I am trying to create a bucket using gsutil mb command:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs
But I am getting this error message:
Creating gs://some-bucket-to-my-gs/...
BadRequestException: 400 Invalid argument.
I am following the documentation from here
What is the reason for this type of error?
I got the same error. It was because I used the wrong location.
The location parameter expects a region, without specifying a zone.
E.g.
gsutil mb -p ${TF_ADMIN} -l europe-west1-b gs://${TF_ADMIN}
Should have been
gsutil mb -p ${TF_ADMIN} -l europe-west1 gs://${TF_ADMIN}
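If you are not sure which names are regions (as opposed to zones), one way to list them is via Compute Engine (a sketch; the regional bucket locations largely match these names, and there are additional multi-region locations such as US and EU):
gcloud compute regions list --format="value(name)"   # e.g. europe-west1, us-central1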
One reason this error can occur (confirmed in chat with the question author) is that you have an invalid default_project_id configured in your .boto file. Ensure that ID matches your project ID in the Google Developers Console.
If you can make a bucket successfully using the Google Developers Console, but not using "gsutil mb", this is a good thing to check.
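To see which boto config gsutil is reading and which project it falls back to, something like this can help (a sketch; ~/.boto is the default location, yours may differ):
gsutil version -l | grep config     # shows the path(s) to the boto config in use
grep default_project_id ~/.boto     # the project ID gsutil defaults to
gcloud config get-value project     # compare with the project you expect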
I was receiving the same error for the same command while using gsutil as well as the web console. Interestingly enough, changing my bucket name from "google-gatk-test" to "gatk" allowed the request to go through. The original name does not appear to violate bucket naming conventions.
Playing with the bucket name is worth trying if anyone else is running into this issue.
Got this error, and adding the default_project_id to the .boto file didn't work.
It took me some time, but in the end I deleted the credentials file from the "Global Config" directory and recreated the account.
Using it on Windows, btw...
This can happen if you are logged into the management console (storage browser), possibly a locking/contention issue.
May be an issue if you add and remove buckets in batch scripts.
In particular this was happening to me when creating regionally diverse (non-DRA) buckets:
gsutil mb -l EU gs://somebucket
Also watch underscores; the abstraction scheme seems to use them to map folders. All objects in the same project are stored at the same level (possibly as blobs in an abstracted database structure).
You can see this when downloading from the browser interface (at the moment anyway).
An object copied to gs://somebucket/home/crap.txt might be downloaded via a browser (or curl) as home_crap.txt. As an aside (red herring), somefile.tar.gz can come down as somefile.tar.gz.tar, so a little bit of renaming may be required due to the vagaries of the headers returned from the browser interface anyway. The minimum real support level is still $150/mth.
I had this same issue when I created my bucket using the following commands:
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=MY_BUCKET_NAME_1
MY_REGION=us-central1
But when I added the dollar sign $ to reference the variable MY_BUCKET_NAME_1, i.e. MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1, the error was cleared and I was able to create the bucket.
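In other words, the working version of the snippet above would be (same values; the final mb line is an assumed usage of those variables):
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1        # the $ makes the shell expand the value
MY_REGION=us-central1
gsutil mb -l $MY_REGION gs://$MY_BUCKET_NAME_2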
I got this error when I had a capital letter in the bucket name:
$gsutil mb gs://CLIbucket-anu-100000
Creating gs://CLIbucket-anu-100000/...
BadRequestException: 400 Invalid bucket name: 'CLIbucket-anu-100000'
$gsutil mb -l ASIA-SOUTH1 -p single-archive-352211 gs://clibucket-anu-100
Creating gs://clibucket-anu-100/..
$