Google Storage Object Versioning with Cloud CDN Caching - google-cloud-storage

Google Cloud CDN recommends using versioned URLs for static objects.
If I enable Object Versioning on Google Cloud Storage, will Cloud CDN fetch the fresh object instead of the cached one (prior to its normal expiration time) after I update an object in Storage?

By definition, a cache (in the CDN or elsewhere) exists to avoid extra calls to the backend until the cached entry expires, so it will never ask the backend for a fresher copy before the expiration time. In addition, Cloud Storage isn't aware that an additional layer has captured its data and is storing it for a period of time.
So by design the answer is no: Object Versioning changes nothing about how the CDN manages its cache. If an update has to be visible before the cache expires, you either change the URL (versioned URLs) or invalidate the cached entry.
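The versioned-URL approach the Cloud CDN documentation recommends simply means changing the URL whenever the content changes, for example by embedding a content hash in the object name so every update lands under a new cache key. A rough sketch with the Python client follows; the bucket name, file names, and helper function are hypothetical, not something from the original answer.

import hashlib
from google.cloud import storage

def upload_versioned(bucket_name, local_path, dest_prefix):
    # Read the file and derive a short content hash to embed in the object name.
    data = open(local_path, "rb").read()
    digest = hashlib.md5(data).hexdigest()[:8]
    object_name = f"{dest_prefix}/app.{digest}.js"   # e.g. js/app.1a2b3c4d.js

    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    # A long max-age is safe here because every update produces a brand-new URL.
    blob.cache_control = "public, max-age=31536000, immutable"
    blob.upload_from_string(data, content_type="application/javascript")

    # Reference this URL in your pages; old cached copies stay valid under the old URL.
    return f"https://storage.googleapis.com/{bucket_name}/{object_name}"

The alternative is to explicitly invalidate the cached path on the load balancer after each update, but invalidation is slower and rate-limited compared with simply publishing a new URL.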


Google Cloud Storage quota hit - how?

When my app is trying to access files in a bucket using a SignedURL, a 429 response is received:
<Error>
<Code>InsufficientQuota</Code>
<Message>
The App Engine application does not have enough quota.
</Message>
<Details>App s~[myappname] not have enough quota</Details>
</Error>
This error continues until the end of the day, when the quota is apparently reset, then I can use storage again. It's only a small app and does not have much usage. The project that contains the storage is set up to use billing. The files are being accessed from another project, which is also set up to use billing.
I'm not aware that Google Cloud Storage has any quotas that could be hit in this fashion. The only ones I know of are the ones here: https://cloud.google.com/storage/quotas but as far as I am aware, none of them apply.
Buckets are not being created or destroyed.
Updates are not being made to buckets.
There are only a couple of IAM identities.
There are no Pub/Sub notifications.
Objects stored in the buckets are small.
Is there any way I can find out why the quota is being exceeded?
It turns out it was because of a spending limit I had set on App Engine. I didn't think those spending limits applied any more, but it turns out that's only the case for new projects. Spending limits already set on existing projects are still in effect, and I can personally attest that they do work!
Thanks for the comments @KevinQuinzel and @gso_gabriel.
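For anyone hitting the same 429, the access pattern described above looks roughly like the sketch below (Python client; bucket name, object name, and expiration are hypothetical). The retry loop only helps with transient throttling; a spending limit or daily quota, as in this case, keeps returning 429 until it resets or is raised.

import time
from datetime import timedelta
import requests
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-data-bucket").blob("exports/report.csv")

# V4 signed URL, valid for 15 minutes.
signed_url = blob.generate_signed_url(version="v4", expiration=timedelta(minutes=15), method="GET")

for attempt in range(5):
    resp = requests.get(signed_url)
    if resp.status_code != 429:
        resp.raise_for_status()   # surface any non-quota error
        break
    time.sleep(2 ** attempt)      # brief backoff before retrying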

flutter data storage: local storage vs cloud storage

A question about local and remote storage of user data: is there a best practice for the common situation where a user accesses data from an API and can favourite or otherwise personalise that data?
I have seen tutorials, e.g. a movie browsing app, where the user can make a list of favourite movies, and this personalised data is stored locally (e.g. in sqflite), and other tutorials where this data is stored remotely, e.g. in Firebase. And Firebase has an offline mode, so that data can be synced later. In that case, is it a common use case to set up local storage as well as cloud storage? Is there a common practice for this situation?
Thanks for any insights.
This is not specifically a Flutter question, more of a general app development question. It's very common to have both local and cloud "storage", but I wouldn't think of it that way. If you're interacting with an API backend, I wouldn't consider it the cloud storage for your app. Instead, look at it as a different component within your application's overall architecture: your API/backend component. That way it's not part of your app; it's something your app interacts with.
I assume you know the purpose of your API: it returns the data you want to see and keeps track of user profile information and other sensitive data.
When it comes to local storage, I'd say the most common scenarios are results caching and storing information that the API requires on every session, to make the user experience a bit better. See some examples of both below:
On Instagram they store your "feed watermark", a string value that is linked to a specific set of results, so that when you open the app and request again they return that set of results plus anything new - Local storage
They also "store locally" (better referred to as caching) a small set of your feed posts, a list of user profiles that have stories, and your DMs, for instant and offline access. This way, when the app loads up it has something to show while it fetches the new information. - Caching
They also store your login token, which never expires. - Local storage
tl;dr: Yes. If you need data on every session to use your API, store that locally in a secure way and use it to interact with your "cloud storage".

What does eventual or strong mean in the context of Google Cloud Storage consistency?

From the Consistency section of the documentation:
Google Cloud Storage provides strong global consistency for all read-after-write, read-after-update, and read-after-delete operations, including both data and metadata. When you upload a file (PUT) to Google Cloud Storage, and you receive a success response, the object is immediately available for download (GET) and metadata (HEAD) operations, from any location in Google's global network.
Strong consistency means a write is not acknowledged until it has been fully committed and replicated, so the object can never be read in a stale or partial state. The documentation makes the same point: "When you upload an object, the object is not available until it is completely uploaded." That is also why the latency for writing to a globally consistent, replicated store may be slightly higher than for a non-replicated or non-committed store: a success response is returned only when multiple writes complete, not just one. Eventual consistency, by contrast, would mean a read issued shortly after a write might still return the old data (or a 404) until the change has propagated everywhere. The documentation goes into more detail.
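A minimal sketch of what that guarantee means in practice, using the Python client (bucket and object names are hypothetical): once the upload call returns successfully, an immediate read, from any location, returns the new content rather than a stale copy.

from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-example-bucket").blob("notes/hello.txt")

blob.upload_from_string("version 2")                 # success response means the write is committed
assert blob.download_as_bytes() == b"version 2"      # available immediately, not "eventually"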

Bluemix Session Cache: Trigger to evict cached data

I created a Java web app on IBM Bluemix. This application shares session objects among instances via the Session Cache service.
I understand how to program my application with the session cache, but I could not find any description of what happens when the total amount of cached data exceeds the cache space (e.g. with the starter plan, I can use 1 GB of cache space).
These are my questions.
Q1. Is there any trigger that removes cached data from the cache space?
Q2. Once the cache space is exceeded, which data will be removed? Is there a cache strategy such as Least Recently Used, Least Frequently Used, and so on?
The Session Cache service on IBM Bluemix is based on WebSphere Extreme Scale. Hence a lot of background information is provided in the Knowledge Center of WebSphere Extreme Scale. The standard Liberty profile for the Session Cache uses a Least Recently Used (LRU) algorithm to manage the space. I haven't tried it yet, but the first linked document describes how to monitor the cache and obtain statistics.

Can I use Google Cloud Storage for Apache DocumentRoot?

I was reading the docs and saw the following:
Standard Storage is appropriate for storing data that requires low latency access or data that is frequently accessed ("hot" objects), such as serving website content, interactive workloads, or data supporting mobile and gaming applications.
With that said, I wanted to know how I would go about mounting a gs://bucket? I would prefer to go this route rather than set up NFS/GlusterFS.
You can use gcsfuse to mount a Google Cloud Storage bucket as a filesystem that Apache can read:
gcsfuse is a user-space file system for interacting with Google Cloud Storage.
As of 20 August 2015, the project's README also says:
Current status
Please treat gcsfuse as beta-quality software. Use it for whatever you like, but be aware that bugs may lurk, and that we reserve the right to make small backwards-incompatible changes.
The careful user should be sure to read semantics.md for information on how gcsfuse maps file system operations to GCS operations, and especially on surprising behaviors. The list of open issues may also be of interest.
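For reference, a basic mount looks roughly like the commands below; the bucket name and mount point are placeholders, and the exact flags depend on your gcsfuse version, so check its documentation.

# Mount the bucket at Apache's DocumentRoot (placeholder names).
gcsfuse --implicit-dirs my-static-bucket /var/www/html

# Unmount when finished.
fusermount -u /var/www/html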