How to set restrictions on usage of selected services at the space level in Bluemix - ibm-cloud

I need some help with creating space quota plans that allow finer-grained restrictions.
I can create a space quota plan that either allows or disallows the use of paid services. If I create a space quota plan that disallows paid services and assign it to a space, I will not be able to use even some of the most common boilerplates (such as Node.js Cloudant DB Web Starter, Java Cloudant Web Starter, etc.) in that space.
However, I would like to create a space quota plan that allows the use of only free service plans plus selected paid plans, such as the ones used by those boilerplates.
Please share your thoughts.

This is not possible. You can't select specific services or boilerplates in a space quota definition.
You can specify:
* number of routes
* number of service instances
* total memory
* instance memory
* whether all services or only free services are shown (i.e., whether paid service plans are allowed)
You can 'create' your own boilerplate by combining a specific runtime and/or services.
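For reference, here is a rough sketch with the cf CLI (the quota and space names are placeholders). The flags map to the settings listed above, and --allow-paid-service-plans is the only switch that touches paid plans, so it is all-or-nothing:
# hide paid plans by omitting --allow-paid-service-plans
cf create-space-quota free-only-quota -m 4G -i 1G -r 10 -s 20
# assign the quota to a space
cf set-space-quota dev-space free-only-quota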

Related

How to avoid High Download charges when we pull docker images on cloud builds

We are building our stack on Google Cloud Build, and for the builds we are using custom Docker base images which are stored in gcr.io/project-name/image-name.
While using this method we are getting charged for Download Worldwide Destinations (excluding Asia & Australia).
Is there any way we can reduce the high download charges? If we run Cloud Build and pull the Docker images from the same region, i.e., running the build in us-central1 and pulling the image from us-central1-docker.pkg.dev/project-name/image-name, will that reduce the download charges (no charge)?
We found one reference: https://cloud.google.com/storage/pricing#network-buckets
Or is there any other solution?
Just to expand on @John Hanley's comment, according to this documentation on location considerations:
A good location balances latency, availability, and bandwidth costs for data consumers.
Choosing the same (or the closest) region helps optimize latency and network bandwidth. It is also convenient to choose the region that contains the majority of your data consumers.
There are also Cloud Storage Always Free usage limits, under which 1 GB of network egress per month from North America to each GCP egress destination (excluding Australia and China) is free; starting October 1, 2022, this limit is raised to 100 GB. You can check the full documentation on Changes to Always Free usage limits.
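As a rough sketch (project, repository, and image names are placeholders), keeping the base images in an Artifact Registry repository in the same region as the build avoids cross-region egress:
# create a Docker repository in the region where Cloud Build runs
gcloud artifacts repositories create build-images --repository-format=docker --location=us-central1
# push the base image there and reference the regional path from the build
docker tag base-image us-central1-docker.pkg.dev/project-name/build-images/base-image
docker push us-central1-docker.pkg.dev/project-name/build-images/base-image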

How to apply upload limit for google storage bucket per day/month/etc

Is there a way to apply an upload limit to a Google Storage bucket per day/month/year?
Is there a way to apply a limit on the amount of network traffic?
Is there a way to apply a limit on Class A operations?
Is there a way to apply a limit on Class B operations?
I found only "Queries per 100 seconds per user" and "Queries per day" using the
https://cloud.google.com/docs/quota instructions, but these are JSON API quotas
(I am not even sure what kind of API is used inside the StorageClient C# client class).
To define quotas (and, by the way, SLOs), you need an SLI: a service level indicator. That means having metrics on what you want to observe.
That is not the case here. Cloud Storage has no indicator for the volume of data uploaded per day. Thus, you have no built-in indicator, no metrics... and no quotas.
If you want this, you have to build something on your own: wrap all the Cloud Storage calls in a service that counts the volume of blobs per day, and then you will be able to apply your own rules to this personal indicator.
Of course, to prevent any bypass, you have to deny direct access to the buckets and grant only your "indicator service" access to them. The same goes for bucket creation, so that new buckets are registered in your service.
Not an easy task...
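As a minimal sketch of the idea, assuming uploads go through gsutil and a self-imposed daily byte limit (the script name, limit, and counter path are made up for illustration):
#!/usr/bin/env bash
# quota-upload.sh LOCAL_FILE gs://BUCKET/OBJECT
# Counts bytes uploaded per calendar day and refuses the copy once the limit is reached.
LIMIT_BYTES=$((50 * 1024 * 1024 * 1024))   # self-imposed limit: 50 GiB per day
COUNTER="/var/tmp/gcs-bytes-$(date +%F)"   # one counter file per day

FILE="$1"; DEST="$2"
SIZE=$(wc -c < "$FILE")
USED=$(cat "$COUNTER" 2>/dev/null || echo 0)

if [ $((USED + SIZE)) -gt "$LIMIT_BYTES" ]; then
  echo "daily upload quota exceeded" >&2
  exit 1
fi

gsutil cp "$FILE" "$DEST" && echo $((USED + SIZE)) > "$COUNTER"
In practice this logic would live in the fronting "indicator service" described above (with direct bucket access denied), not in a local script.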

Multiple pods and nodes management in Kubernetes

I've been digging through the Kubernetes documentation to try to figure out what the recommended approach is for this case.
I have a private movie API with the following microservices (pods):
- summary
- reviews
- popularity
Also I have accounts that can access these services.
How do I restrict access to services per account, e.g., account A can access all the services but account B can only access summary?
Account A could be making 100x more requests than account B. Is it possible to scale services for specific accounts?
Should I setup the accounts as Nodes?
I feel like I'm missing something basic here.
Any thoughts or animated gifs are very welcome.
It sounds like this level of control should be implemented at the application level.
Access to particular parts of your application, in this case the services, should probably be controlled via user permissions. A similar line of thought applies to scaling out the services: allow everything to scale, but rate limit up front, e.g., account B can get 10 requests per second and account A 100x that. Designating accounts to nodes might also be possible, but should be avoided. You don't want to end up micromanaging the orchestration layer :)

What is maximum memory limit for an app in PCF during the scaling or push?

I have an application which requires more than 30GB of memory and more than 4GB of disk space.
Can I run the app in any of the Cloud Foundry environments (PCF or Bluemix - enterprise account)?
Please help me with this query.
The Bluemix default quota plan does not meet your requirement, since the default plan allows only 8GB per instance (512GB max). You would need to open a ticket to change the quota plan of your organization.
Either way, to check the quota plan being used by your organization, go to Manage > Account > Organization > Select Organization > Edit Org.
In the quota section, look at the quota plan, then log in with the cf CLI and list the quota details:
cf login
cf quota QUOTA_PLAN
This link can give you a little more help.
This depends entirely on the Cloud Foundry provider that you're using and the limits that they put in place.
Behind the scenes, it also depends on the types of VMs being used for Diego Cells in the platform. The Cells are where your application code will run and there must be enough space on a Cell to run your app instance. As an example, if you have a total of 16G of RAM on your Diego Cells then it wouldn't be possible for a provider to support your use case of 30G for one application instance since there would be no Cells with that much free space. If you had Cells with 32G of RAM, that might work, depending on overhead and placement of other apps, but you'd probably need something even larger like 64G per Cell.
I mention all this because at the end of the day, if you're willing to run your own Cloud Foundry installation you can pretty much do whatever you want, so running with 30G or 100G isn't a problem as long as you configure and scale your platform accordingly.
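For example, on a foundation you control (or with admin access), the org and space quotas could be raised with something along these lines (quota names are placeholders; cf update-quota is admin-only, and the Diego Cell VM size itself is set in the platform's deployment configuration, not via the cf CLI):
cf update-quota my-org-quota -m 512G -i 32G
cf update-space-quota my-space-quota -m 512G -i 32G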
Hope that helps!

Is it possible to view and change google cloud persistent disks' stripe size?

I would like to better match the block size on some disks to the average file size, but it will be useless if it doesn't match the stripe size.
I couldn't find a way to even view the stripe size in the documentation.
Google Cloud Storage operates at a far higher level of abstraction than you appear to desire -- at that level, there's even no such thing as "block size", forget details such as "stripe size".
If you do want to work at very low levels of abstraction, therefore, you'll absolutely have to forget GCS and start thinking in terms of GCE instances (AKA VMs) with persistent disks, possibly of the SSD varieties (shareable or local). Even then, given the way Google's virtualization is architected, you may get frustrated with the still-too-high-for-you level of abstraction.
I'll admit I'm curious about the implied request for very-low-level access and I'd be glad to relay the details to my friends and colleagues in the Storage line -- or, you could of course open a detailed feature request in our public issue tracker, my group (cloud tech support) monitors that as well as Stackexchange sites and relevant Google Groups, and we're always glad to be a bridge between our customers and our Eng/PM colleagues.