Additional buckets are seen in Metrics Explorer compared with Google Cloud Storage buckets - google-cloud-storage

There are a few additional buckets visible in Metrics Explorer when compared with the Cloud Storage bucket browser. These appear to be old, deleted buckets, yet they still show up in Metrics Explorer. Is there any reason why they appear there?
Also, there are some buckets visible in the storage browser but not shown in Metrics Explorer. Please note these were not created within the last 24 hours; they have been there for quite some time.

To answer your first question -
As mentioned in this document -
Cloud Monitoring acquires metric data and holds it in the time series of metric types for a period of time. This period of time varies with the metric type; see Data retention for details.
At the end of that period, Monitoring deletes the expired data points.
When all the points in a time series have expired, Monitoring deletes the time series. Deleted time series do not appear in Monitoring charts or in results from the Monitoring API.
Here you are talking about Cloud Storage metrics, which come under Google Cloud metrics, and as mentioned here the data retention period for Google Cloud metrics is 6 weeks. So if you deleted a bucket within the past 6 weeks, the data for that bucket will still show up in Metrics Explorer.
To answer your second question -
Which buckets are shown in Metrics Explorer depends on the metric you are using. If a bucket doesn't have any relevant data for that metric, it will not show up there. For example, if you have a bucket with no objects inside it, that bucket will not show up in Metrics Explorer when you explore the Object count (storage/object_count) metric. So I suggest you try some other metrics, e.g. Sent bytes, Received bytes, or Log entries, which are mentioned in this document. Also, the metrics have different sampling intervals, so depending on the metric you may need to wait some time for data to appear in Metrics Explorer.
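For example, here is a minimal sketch (assuming the google-cloud-monitoring Python client and a placeholder project ID) that lists which buckets currently report data for a given Cloud Storage metric, so you can compare that list against the buckets you see in the storage browser:

from datetime import datetime, timedelta, timezone

from google.cloud import monitoring_v3

project_id = "my-project"  # placeholder: replace with your project ID
metric_type = "storage.googleapis.com/storage/object_count"

client = monitoring_v3.MetricServiceClient()
now = datetime.now(timezone.utc)
interval = monitoring_v3.TimeInterval(
    start_time=now - timedelta(days=2),  # object_count is sampled daily
    end_time=now,
)

results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": f'metric.type = "{metric_type}"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.HEADERS,
    }
)

# Each time series carries the bucket name as a monitored-resource label.
buckets_with_data = {ts.resource.labels["bucket_name"] for ts in results}
print(sorted(buckets_with_data))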

Related

How to convert analytics views into GB

I would like to have an estimate, or at least an idea of how to convert Google Analytics views into GB.
I have been looking everywhere in the Google Analytics portal, but all I see is the number of views.
For example:
244 views per day, ~855 views per week.
Now I'm trying to calculate the price, or an estimate, with Application Insights, but their price table is per GB. For example:
$2.76 per GB per Day
100GB per day = $220.67
You can check the estimated cost from Azure Monitor Costs
Azure Monitor costing has two phases: first it estimates the cost according to your data, and later it gives you the actual cost after deployment.
Metrics queries and Alerts
Log Analytics
Application Insights
In your case, you can check the Azure Application Insights metrics pricing documentation (MSDOCS) and Azure Monitor Pricing.
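Google Analytics itself does not report data volume, so any conversion from views to GB has to assume an average telemetry payload size per view. A rough back-of-the-envelope sketch in Python (the per-view size and the per-GB rate below are assumptions, not published figures; substitute values that match your setup):

views_per_day = 244
avg_bytes_per_view = 2 * 1024       # assumption: ~2 KB of telemetry per view
price_per_gb_per_day = 2.76         # example rate quoted above

gb_per_day = views_per_day * avg_bytes_per_view / (1024 ** 3)
estimated_cost_per_day = gb_per_day * price_per_gb_per_day

print(f"~{gb_per_day:.6f} GB/day, ~${estimated_cost_per_day:.4f}/day")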

How do I find out which files were downloaded outside my continent (and by whom)?

I have been monitoring Cloud Storage billing daily and saw two unexpected, large spikes in "Download Worldwide Destinations (excluding Asia & Australia)" this month. The cost for this SKU is typically around US$2-4 daily; however, these two daily spikes have been $89 and $15.
I enabled GCS bucket logging soon after the $89 spike, hoping to deduce the cause the next time it happens, but when the $15 spike happened yesterday, I was unable to pinpoint which service or downloaded files caused it.
There is a Log field named Location, but it appears to be linked to the region where a bucket is located, not the location of the downloader (that would contribute to the "Worldwide Destinations" egress).
As far as I know, my services are all in the southamerica-east1 region, but it's possible that there is either a legacy service or a misconfigured one that has been responsible for these spikes.
The bucket that did show up outside my region is in the U.S., but I concluded that it is not responsible for the spikes because the files there are under 30 kB and have only been downloaded 8 times according to the logs.
Is there any way to filter the logs so that it tells me as much information as possible to help me track down what is adding up the "Download Worldwide Destinations" cost? Specifically:
which files were downloaded
if it was one of my Google Cloud services, which one it was
Enable usage logs and export the log data to a new bucket.
Google Cloud Usage logs & storage logs
The logs will contain the client IP address; you will need to use a geolocation service to map IP addresses to a city/country (a parsing sketch follows after the note below).
Note:
Cloud Audit Logs do not track access to public objects.
Google Cloud Audit Logs restrictions
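As a starting point, here is a minimal Python sketch that totals downloaded bytes per object and client IP from the exported usage log CSV files (assumed here to be downloaded locally into usage_logs/). The field names c_ip, cs_method, cs_object and sc_bytes follow the documented usage log format, so verify them against your own files; lookup_country() is a hypothetical stand-in for whatever IP geolocation service or database you use:

import csv
import glob
from collections import defaultdict

def lookup_country(ip: str) -> str:
    # Hypothetical placeholder: replace with a real geolocation lookup.
    return "UNKNOWN"

bytes_by_key = defaultdict(int)  # (country, ip, object) -> total bytes sent

for path in glob.glob("usage_logs/*_usage_*"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("cs_method") != "GET":
                continue  # only count downloads
            key = (lookup_country(row["c_ip"]), row["c_ip"], row["cs_object"])
            bytes_by_key[key] += int(row["sc_bytes"] or 0)

# Largest download volumes first: which files, from where, by which IP.
top = sorted(bytes_by_key.items(), key=lambda kv: kv[1], reverse=True)[:20]
for (country, ip, obj), total in top:
    print(f"{total / 1e9:9.3f} GB  {country:8s} {ip:15s} {obj}")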

Google Cloud Storage maximum access limits

The system I am building currently stores videos in Google Cloud Storage; my server returns the link from Google Cloud Storage, which is used to play the video on mobile platforms. Is there a limit on how many users can access that link at the same time? Thank you!
All of the known limits for Cloud Storage are listed in the documentation. It says:
There is no limit to reads of objects in a bucket, which includes reading object data, reading object metadata, and listing objects. Buckets initially support roughly 5000 object reads per second and then scale as needed.
So, no, there are effectively no limits to the number of concurrent downloads.

How to apply upload limit for google storage bucket per day/month/etc

Is there a way to apply an upload limit to a Google Storage bucket per day/month/year?
Is there a way to apply a limit on the amount of network traffic?
Is there a way to apply a limit on Class A operations?
Is there a way to apply a limit on Class B operations?
I only found "Queries per 100 seconds per user" and "Queries per day" using the
https://cloud.google.com/docs/quota instructions, but those are JSON API quotas
(I am not even sure which API is used inside the StorageClient C# client class).
To define quotas (and, by the way, SLOs) you need SLIs: service level indicators. That means having metrics on what you want to observe.
Here, that's not the case: Cloud Storage has no indicator for the volume of data per day. Thus, you don't have a built-in indicator or metric... and no quota.
If you want one, you have to build something of your own: wrap all the Cloud Storage calls in a service that counts the volume of blobs per day, and then you will be able to apply your own rules to this personal indicator.
Of course, to prevent any bypass, you have to deny direct access to the buckets and only grant your "indicator service" access to them. The same goes for bucket creation, so that new buckets are registered in your service.
Not an easy task...
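For illustration, a minimal Python sketch of that "indicator service" idea, assuming the google-cloud-storage client. Counters are kept in memory here; a real service would persist them (Firestore, Redis, ...) and also track Class A/B operation counts:

from collections import defaultdict
from datetime import date

from google.cloud import storage


class DailyUploadLimiter:
    """Wraps uploads so a self-imposed daily byte limit can be enforced."""

    def __init__(self, daily_limit_bytes: int):
        self.daily_limit_bytes = daily_limit_bytes
        self.client = storage.Client()
        self._uploaded = defaultdict(int)  # (bucket, day) -> bytes uploaded

    def upload(self, bucket_name: str, blob_name: str, data: bytes) -> None:
        key = (bucket_name, date.today())
        if self._uploaded[key] + len(data) > self.daily_limit_bytes:
            raise RuntimeError(f"Daily upload quota exceeded for {bucket_name}")
        self.client.bucket(bucket_name).blob(blob_name).upload_from_string(data)
        self._uploaded[key] += len(data)


# Usage: application code uploads only through the limiter, never directly.
limiter = DailyUploadLimiter(daily_limit_bytes=10 * 1024**3)  # 10 GiB/day
limiter.upload("my-bucket", "videos/clip.mp4", b"...")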

Build internal dashboard analytics on top of mongodb

I want to build an internal dashboard to show the key metrics of a startup.
All data is stored in a MongoDB database on MongoLab (SaaS on top of AWS).
Queries that aggregate data across all documents take 1-10 minutes.
What is the best practice to cache such data and make it immediately available?
Should I run a worker thread once a day and store the result somewhere?
I want to build an internal dashboard to show the key metrics of a startup. All data is stored in a MongoDB database on MongoLab (SaaS on top of AWS). Queries that aggregate data across all documents take 1-10 minutes.
Generally, users aren't happy to wait on the order of minutes to interact with dashboard metrics, so it is common to pre-aggregate into suitable formats to support more real-time interaction.
For example, with time series data you might want to present summaries with different granularity for charts (minute, hour, day, week, ...). The MongoDB manual includes some examples of Pre-Aggregated Reports using time-based metrics from web server access logs.
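As an illustration, a minimal sketch with pymongo and hypothetical collection/field names (an events collection whose documents have a timestamp and a type field): a scheduled job rolls raw events up into a small daily_metrics collection, and the dashboard reads that instead of scanning every document. $merge needs MongoDB 4.2+; on older servers you can iterate the cursor and upsert the results yourself:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumption: connection URI
db = client["analytics"]                           # assumption: database name

db.events.aggregate([
    {
        "$group": {
            "_id": {
                "day": {"$dateToString": {"format": "%Y-%m-%d", "date": "$timestamp"}},
                "type": "$type",
            },
            "count": {"$sum": 1},
        }
    },
    # Upsert the rollups so re-running the job refreshes existing days.
    {"$merge": {"into": "daily_metrics", "whenMatched": "replace"}},
])

# The dashboard then queries the small pre-aggregated collection, e.g.:
#   db.daily_metrics.find({"_id.day": {"$gte": "2015-01-01"}})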
What is the best practice to cache such data and make it immediately available?
Should I run a worker thread once a day and store the result somewhere?
How and when to pre-aggregate will depend on the source and interdependency of your metrics as well as your use case requirements. Since use cases vary wildly, the only general best practice that comes to mind is that a startup probably doesn't want to be investing too much developer time in building their own dashboard tool (unless that's a core part of the product offering).
There are plenty of dashboard frameworks and reporting/BI tools available in both open source and commercial flavours.