What is the maximum memory limit for an app in PCF during scaling or push? - ibm-cloud

I have an application which requires more than 30GB of memory and more than 4GB of disk space.
Can I run the app in any of the Cloud Foundry environments (PCF or Bluemix - enterprise account)?
Please help me with this query.

The Bluemix default quota plan does not meet your requirement, since the default plan allows only 8GB per instance (512GB max). You would need to open a ticket to change your organization's quota plan.
Either way, to check which quota plan your organization is using, go to Manage > Account > Organization > Select Organization > Edit Org.
In the quota section, note the quota plan name, then log in with the cf CLI and list the quota details:
cf login
cf quota QUOTA_PLAN
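If you don't know the plan name up front, you can also list all the quota definitions visible to your account first (a small sketch; cf quotas is a standard cf CLI command):
cf quotas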
This link can give you a little more help.

This depends entirely on the Cloud Foundry provider that you're using and the limits that they put in place.
Behind the scenes, it also depends on the types of VMs being used for the Diego Cells in the platform. The Cells are where your application code runs, and there must be enough free space on a Cell to run your app instance. As an example, if your Diego Cells have a total of 16G of RAM each, it wouldn't be possible for a provider to support your use case of 30G for one application instance, since no Cell would have that much free space. Cells with 32G of RAM might work, depending on overhead and the placement of other apps, but you'd probably need something even larger, like 64G per Cell.
I mention all this because, at the end of the day, if you're willing to run your own Cloud Foundry installation you can do pretty much whatever you want, so running with 30G or 100G isn't a problem as long as you configure and scale your platform accordingly.
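As a rough sketch of what that looks like from the app side (the app name is hypothetical, and it assumes both the org quota and the operator-configured maximum per-app disk quota allow these values), you would request the memory and disk explicitly when pushing or scaling:
# push requesting 30G of memory and 4G of disk per instance
cf push big-app -m 30G -k 4G
# or adjust an app that is already running
cf scale big-app -m 30G -k 4G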
Hope that helps!

Related

How to avoid High Download charges when we pull docker images on cloud builds

We are building our stack with Google Cloud Build, and for the builds we are using custom Docker base images stored in gcr.io/project-name/image-name.
While using this method we are getting charged under Download Worldwide Destinations (excluding Asia & Australia).
Is there any way we can reduce these high download charges? If we run Cloud Build and pull the Docker images from the same region, i.e. running the build in us-central1 and pulling the image from us-central1-docker.pkg.dev/project-name/image-name, will that reduce the download charges (no charge)?
We found this reference: https://cloud.google.com/storage/pricing#network-buckets
Or is there any other solution?
Just to expand on @John Hanley's comment, according to this documentation on location considerations:
A good location balances latency, availability, and bandwidth costs for data consumers.
Choosing the closest region, and keeping the build and the images in the same region, helps optimize latency and network bandwidth. It is also convenient to choose the region that contains the majority of your data users.
There are Cloud Storage Always Free usage limits, wherein 1GB of network egress is free from North America to each GCP egress destination (excluding Australia and China); starting October 1, 2022, this is upgraded to 100GB. You can check the full documentation on Changes to Always Free usage limits.
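To illustrate the same-region idea, here is a minimal sketch (the repository and image names are hypothetical, and it assumes the base images are moved from gcr.io into Artifact Registry):
# create a Docker repository in the region where the builds run
gcloud artifacts repositories create base-images --repository-format=docker --location=us-central1
# run the build in that same region; the Dockerfile's FROM line would then
# reference us-central1-docker.pkg.dev/project-name/base-images/image-name
gcloud builds submit --region=us-central1 --tag us-central1-docker.pkg.dev/project-name/base-images/app-image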

Google Cloud Storage quota hit - how?

When my app is trying to access files in a bucket using a SignedURL, a 429 response is received:
<Error>
<Code>InsufficientQuota</Code>
<Message>
The App Engine application does not have enough quota.
</Message>
<Details>App s~[myappname] not have enough quota</Details>
</Error>
This error continues until the end of the day, when the quota is apparently reset, then I can use storage again. It's only a small app and does not have much usage. The project that contains the storage is set up to use billing. The files are being accessed from another project, which is also set up to use billing.
I'm not aware that Google Cloud Storage has any quotas that could be hit in this fashion. The only ones I know of are the ones here: https://cloud.google.com/storage/quotas but as far as I am aware, none of them apply.
Buckets are not being created or destroyed.
Updates are not being made to buckets.
There are only a couple of IAM identities.
There are no Pub/Sub notifications.
Objects stored in the buckets are small.
Is there any way I can find out why the quota is being exceeded?
It turns out it was because of a spending limit I had set on App Engine. I thought those spending limits no longer applied, but that is only true for new projects. Spending limits that were already set on existing projects are still effective, and I can personally attest that they do work!
Thanks for the comments @KevinQuinzel and @gso_gabriel.
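For anyone debugging a similar 429, a small sketch for reproducing the error outside the app can help confirm whether it is the App Engine quota rather than Cloud Storage itself (the bucket, object, and key file names are hypothetical):
# generate a short-lived signed URL with a service account key
gsutil signurl -d 10m service-account-key.json gs://my-bucket/myfile.txt
# fetch it and inspect the status and XML body; an InsufficientQuota message
# that mentions the App Engine application points at the app's quota or
# spending limit, not at a Cloud Storage limit
curl -i "SIGNED_URL_FROM_ABOVE"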

How to set restrictions on usage of selected services at space level in bluemix

I need some help on creating space quota plans with higher granularity in the restrictions.
I can create a space quota plan that either allows or disallows the use of paid services. If I create a space quota plan that disallows paid services and assign it to a space, I will not be able to use even some of the most common boilerplates (such as Node.js Cloudant DB Web Starter, Java Cloudant Web Starter, etc.) in that space.
However, I would like to create a space quota plan that allows the use of only free service plans plus selected paid plans, such as those boilerplates.
Please share your thoughts.
This is not possible. You can't select specific services or boilerplates in space quota definition.
You can specify:
* number of routes
* number of services
* memory
* instance memory
* show all services or only free services
You can 'create' a boilerplate by combining a specific runtime and/or services.
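For reference, those knobs map onto cf CLI flags for space quotas; a sketch with hypothetical quota and space names looks like this:
# create a space quota that only shows free service plans
cf create-space-quota free-only-quota -m 4G -i 1G -r 20 -s 10
# (adding --allow-paid-service-plans would show paid plans as well)
# assign the quota to a space
cf set-space-quota dev free-only-quota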

Is it possible to view and change google cloud persistent disks' stripe size?

I would like to better fit block size on some disks to the average file size, but it will be useless if it doesn't fit the stripe size.
I couldn't find a way to even view the stripe size in the documentation.
Google Cloud Storage operates at a far higher level of abstraction than you appear to desire -- at that level there's no such thing as a "block size", let alone a "stripe size".
If you do want to work at very low levels of abstraction, therefore, you'll have to forget GCS and start thinking in terms of GCE instances (AKA VMs) with persistent disks, possibly of the SSD variety (shareable or local). Even then, given the way Google's virtualization is architected, you may get frustrated with the still-too-high-for-you level of abstraction.
I'll admit I'm curious about the implied request for very-low-level access, and I'd be glad to relay the details to my friends and colleagues in the Storage line. Or you could of course open a detailed feature request in our public issue tracker; my group (cloud tech support) monitors that as well as Stack Exchange sites and relevant Google Groups, and we're always glad to be a bridge between our customers and our Eng/PM colleagues.
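If it helps, what you can control at the GCE level is the disk itself and the filesystem you format it with; a rough sketch, with hypothetical disk, VM, and zone names (the device path depends on how the disk is attached):
# create and attach a persistent SSD disk
gcloud compute disks create data-disk --size=200GB --type=pd-ssd --zone=us-central1-a
gcloud compute instances attach-disk my-vm --disk=data-disk --zone=us-central1-a
# inside the VM, pick the filesystem block size when formatting (ext4 allows
# 1024/2048/4096); the underlying striping is still managed by Google and not exposed
sudo mkfs.ext4 -b 4096 /dev/sdb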

Are there clusters available to rent?

I am wondering if there are clusters available to rent.
Scenario:
We have a program that we estimate will take a week to run (after optimization) on a given file. Quite possibly longer. Unfortunately, we also need to process approximately 300+ different files, resulting in approximately 300 weeks of compute time (roughly 6 wall-clock years of continuously running the job). For a research job that should be done - at the latest - by December, that's simply unacceptable. While we are exploring other options, I am investigating the option of simply renting a Beowulf cluster. The job is academic and will lead towards the completion of a PhD.
Ideally, we would send the source and job files to a company and receive the result files a week or two later. Voila!
Quick googling doesn't turn up anything terribly promising.
Suggested Solutions?
Cloud computing sounds like what you need. Amazon, Microsoft and Google rent computing resources on a pay-for-what-you-use basis.
Amazon's service is the most mature, and there are already several questions about it, e.g. here and here.
Amazon EC2 (Elastic Compute Cloud) sounds like exactly what you're looking for. You can sign up for one or more virtual machines (up to 20 automatically, more if you request permission), starting at $0.10 an hour per VM, plus bandwidth costs (free between EC2 machines and Amazon's other web services). You can choose between several operating systems (various Linux distributions, OpenSolaris, Windows if you pay extra), and you can use pre-existing machine images or create your own. If you're using all open-source software and don't have high bandwidth costs, it sounds like it would cost you around $5000 to run your job (assuming that your 6 years of compute time was for something comparable to their small instances, with a single virtual CPU).
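As a rough back-of-the-envelope check of that figure (assuming the $0.10/hour small-instance rate and ignoring bandwidth):
300 weeks x 168 hours/week = 50,400 instance-hours
50,400 hours x $0.10/hour ≈ $5,040, i.e. roughly $5,000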
Once you sign up for the service and get their tools set up, it's pretty easy to get new virtual machines launched. I've even spent the $0.10 to launch a machine for a few minutes just to verify an answer I was giving someone here on StackOverflow; I wanted to check something on Solaris, so I just booted up an instance and had a Solaris VM at my disposal within 5 minutes.
I don't know where you are doing your PhD... Most Asian, European, and North American universities have some clusters. You can:
meet directly with the people at the lab that is in charge of the cluster, or
ask your PhD director to arrange it. Maybe he/she has some friends who can handle that.
Also, the classic trick is to use the idle time of the computers in your lab/university... Basically, each computer runs a client application that crunches numbers when the computer is not in use. See http://boinc.berkeley.edu/
This lead may prove helpful:
http://lcic.org/vendors.html
And this is a fantastic resource site on the matter:
http://www.hpcwire.com
The thread has been replete with pointers to Amazon's EC2 - and correctly so. They are the most mature in this area. Recently, they've released their Elastic MapReduce platform, which sounds similar (although not identical) to what you are trying to do. Google is not an option for you, as their compute model doesn't support the generic computation you need.
For academic/scientific use, there are several public centers offering HPC capability. In Europe, there is DEISA (http://www.deisa.eu/) and the DEISA members. There must be similar possibilities in the US, probably through the NSF.
For commercial use, check IBM Deep Computing On Demand offerings.
http://www-03.ibm.com/systems/deepcomputing/cod/
There are several ways to get time on clusters.
Purchase time on Amazon's Elastic Compute Cloud. Depending on how familiar you are with their service, it may take time to get it configured the way you want it.
Approach a university and see if they have a commercial program to rent out time to companies. I know several do; one that I know of specifically is the private-sector program at NCSA at UIUC. Depending on the institution, they may also offer porting and optimization services for your code.
Or you could rent CPU time from a private provider.
I'm from Slovenia and, for example, here we have a great private provider called Arctur. The guys were helpful and responsive when I contacted them.
You can find them here: hpc.arctur.net
One option is to rent the virtual equivalent of however many PCs you need and set them up as a cluster using the Amazon Elastic Compute Cloud.
Setting up a Beowulf cluster of those is entirely possible.
Check out this link, which provides resources and software to do exactly that.
Go to: http://www.extremefactory.com/index.php
It's a true HPC cluster, with up to 200 TFlops.