I would like an estimate, or at least an idea of how, to convert Google Analytics views into GB.
I have been looking everywhere in the Google Analytics portal, but all I see is the number of views.
For example: ~244 views a day, ~855 views a week.
Now I'm trying to calculate or estimate the price with Application Insights, but their pricing table is in GB, for example:
$2.76 per GB per day
100 GB per day = $220.67
You can check the estimated cost from Azure Monitor Costs.
Azure Monitor has two phases: first it estimates the cost according to your data, and later it gives you the actual cost after deployment.
Metrics queries and Alerts
Log Analytics
Application Insights
In your case, you can check the Azure Application Insights metrics pricing documentation (MS Docs) and Azure Monitor Pricing.
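To give a sense of how such a conversion could be estimated, here is a minimal sketch that turns a view count into an approximate ingested volume. The 2 KB-per-view figure is purely an assumption and should be replaced with a measured average telemetry size for your application; the price is the pay-as-you-go figure quoted above.

```python
# Very rough sketch: convert page views into an estimated ingestion volume.
# AVG_KB_PER_VIEW is a placeholder assumption; measure your own average event size.
AVG_KB_PER_VIEW = 2.0      # assumed average telemetry payload per page view (KB)
PRICE_PER_GB = 2.76        # pay-as-you-go price quoted above ($ per GB per day)

def estimate_daily_cost(views_per_day: float) -> float:
    gb_per_day = views_per_day * AVG_KB_PER_VIEW / (1024 * 1024)  # KB -> GB
    return gb_per_day * PRICE_PER_GB

print(estimate_daily_cost(244))  # ~244 views/day, as in the question
```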
We are building our stack with Google Cloud Build, and for the builds we use custom Docker base images which are stored in gcr.io/project-name/image-name.
While using this method we are getting charged for Download Worldwide Destinations (excluding Asia & Australia).
Is there any way we can reduce these high download charges? If we run Cloud Build and pull the Docker images from the same region, i.e. running docker build in us-central1 and pulling the Docker image from us-central1-docker.pkg.dev/project-name/image-name, will that reduce the download charges (to no charge)?
We found one reference: https://cloud.google.com/storage/pricing#network-buckets
Or is there any other solution?
Just to expand on @John Hanley's comment, according to this documentation on location considerations:
A good location balances latency, availability, and bandwidth costs for data consumers.
Choosing the closest region, and keeping everything in the same region, helps optimize latency and network bandwidth. It is also convenient to choose the region that contains the majority of your data users.
There are Cloud Storage Always Free usage limits wherein 1 GB of network egress is free from North America to each GCP egress destination (excluding Australia and China); starting October 1, 2022, this is being raised to 100 GB. You can check the full documentation on Changes to Always Free usage limits.
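As a quick sanity check, you can list the locations of the buckets in your project and confirm that the bucket backing your registry sits in the same region as your builds (for Container Registry the backing buckets are typically named artifacts.PROJECT-ID.appspot.com or REGION.artifacts.PROJECT-ID.appspot.com). A minimal sketch with the google-cloud-storage client, assuming Application Default Credentials with storage read access:

```python
from google.cloud import storage

# List every bucket in the project together with its location, so you can
# verify that registry/storage buckets are co-located with your Cloud Build region.
client = storage.Client()
for bucket in client.list_buckets():
    print(f"{bucket.name}: {bucket.location}")
```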
There is some additional bucket information available in Metrics Explorer compared with the Cloud Storage bucket browser. This seems to be an old, deleted bucket; however, it still appears in Metrics Explorer. Is there any reason why it is appearing here?
Also, there are some buckets visible in the storage browser but not shown in Metrics Explorer. Please note these were not created in the last 24 hours; they have been there for quite some time.
To answer your first question -
As mentioned in this document -
Cloud Monitoring acquires metric data and holds it in the time series of metric types for a period of time. This period of time varies with the metric type; See Data retention for details.
At the end of that period, Monitoring deletes the expired data points.
When all the points in a time series have expired, Monitoring deletes the time series. Deleted time series do not appear in Monitoring charts or in results from the Monitoring API.
Here you are talking about Cloud Storage metrics, which come under Google Cloud metrics, and as mentioned here the data retention period for Google Cloud metrics is 6 weeks. So if you deleted the bucket within the past 6 weeks, the data for that bucket will still show up in Metrics Explorer.
To answer your second question -
Which buckets show up in Metrics Explorer depends on the metric you are using: if a bucket has no data relevant to that metric, it will not show up there. For example, if you have a bucket with no objects in it, that bucket will not appear when you explore the Object count (storage/object_count) metric. So I suggest you try some other metrics, e.g. Sent bytes, Received bytes, or Log entries, which are mentioned in this document. Also, the metrics have different measurement periods, so depending on the metric you may need to wait some time for the data to appear in Metrics Explorer.
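If you want to check programmatically which buckets currently have data for a given metric, a minimal sketch with the Cloud Monitoring client library could look like the following; the project ID is a placeholder and the lookback window simply mirrors the 6-week retention mentioned above:

```python
import time
from google.cloud import monitoring_v3

project_id = "your-project-id"  # placeholder
client = monitoring_v3.MetricServiceClient()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "start_time": {"seconds": now - 6 * 7 * 24 * 3600},  # ~6-week retention window
        "end_time": {"seconds": now},
    }
)

results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": 'metric.type = "storage.googleapis.com/storage/object_count"',
        "interval": interval,
        # HEADERS returns the series without data points, which is enough to see
        # which buckets have reported this metric within the window.
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.HEADERS,
    }
)

# Each time series carries the bucket name as a resource label.
buckets = {ts.resource.labels["bucket_name"] for ts in results}
print(buckets)
```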
I am very confused about the storage limits of Cloud Storage. I am new to Firebase. Can you help me?
I have a project for sharing documents. Users can upload documents and then others can see them. I use Cloud Storage. From the usage graphs, I understood that an app only gets 5 GB of storage for uploads (this is not per day, but 5 GB in total). If the app exceeds this limit, I pay extra money. Is this true?
I think my app has the potential to exceed this limit. Can you offer me some solutions?
Thanks...
According to Firebase pricing, if you are a free-plan user you are limited to 5 GB of storage per project, which you can't exceed, with a download capacity of 1 GB/day and 50K operations/day.
However, if you want more storage, move to the Blaze (pay-as-you-go) plan and you will pay $0.026/GB after the first 5 GB.
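As a rough illustration of how that storage charge scales, using only the numbers above (the 20 GB input is just an example):

```python
# Back-of-the-envelope estimate of the Blaze plan storage charge, using the
# $0.026/GB rate and the 5 GB free allowance mentioned above.
FREE_GB = 5.0
PRICE_PER_GB = 0.026

def monthly_storage_cost(stored_gb: float) -> float:
    billable = max(0.0, stored_gb - FREE_GB)
    return billable * PRICE_PER_GB

print(monthly_storage_cost(20))  # 20 GB stored -> 15 billable GB -> $0.39/month
```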
My question is: are parallel DevOps jobs charged per second, per minute, or only per month?
Especially when evaluating setups (we're in the process of migrating from a TeamCity rig), we try different configurations of cloud-hosted vs self-hosted agents, and compare performance vs maintenance work and pricing.
So, say we use 15 cloud-hosted build agents for a week, then 15 on-prem ones the next week, and then scale back to 2-3 of each for the rest of the month. Will we be charged a full month for both the 15 on-prem and the 15 cloud agents, or is the charge divided into sub-months, so that we would be charged roughly a quarter of a month for each of the 15 on-prem and cloud-hosted agents?
It's difficult to tell from the documentation. Azure Support told me on Twitter to ask in the MSDN forums, here: https://learn.microsoft.com/en-us/answers/questions/263527/per-month-or-per-minute-billing-of-azure-devops-pi.html
But then I found this answer, which says that DevOps is no longer supported there: https://social.msdn.microsoft.com/Forums/en-US/72cfd507-06a9-4a43-82d5-58b1eb48df56/azure-devops-pipelines-pricing-and-usage-running-on-deployment-groups?forum=AzureStack
We tested this because we need more deployment agents for 1-2 days once per month. We use Microsoft-hosted agents, and they are charged per day. We increased the number of agents in the middle of one day and decreased it back the next day, and the increased charge applied for 2 days, which we verified in the billing costs in Azure:
https://learn.microsoft.com/en-us/azure/devops/organizations/billing/billing-faq?view=azure-devops#q-how-much-am-i-currently-spending-on-azure-devops
If you want to verify this yourself, it shows up in Azure > Cost Management > Cost Analysis, filtered by Service Name: Azure DevOps.
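Based on that per-day behaviour, a back-of-the-envelope way to estimate the charge for temporarily scaling agents could look like the sketch below. The $40/job/month list price is an assumption; check the current Azure DevOps pricing page for your plan and region.

```python
# Rough pro-rating sketch, assuming extra Microsoft-hosted parallel jobs are
# charged by the day as observed above. MONTHLY_PRICE is an assumed list price
# per parallel job; verify against the current pricing page.
MONTHLY_PRICE = 40.0

def prorated_cost(extra_jobs: int, days_used: int, days_in_month: int = 30) -> float:
    return extra_jobs * MONTHLY_PRICE * days_used / days_in_month

print(prorated_cost(extra_jobs=14, days_used=2))  # extra agents kept for ~2 days
```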
I want to build an internal dashboard to show the key metrics of a startup.
All data is stored in a MongoDB database on MongoLab (SaaS on top of AWS).
Queries that aggregate data from all documents take 1-10 minutes.
What is the best practice to cache such data and make it immediately available?
Should I run a worker thread once a day and store the result somewhere?
I want to build an internal dashboard to show the key metrics of a startup. All data is stored in a MongoDB database on MongoLab (SaaS on top of AWS). Queries that aggregate data from all documents take 1-10 minutes.
Generally users aren't happy to wait on the order of minutes to interact with dashboard metrics, so it is common to pre-aggregate using suitable formats to support more realtime interaction.
For example, with time series data you might want to present summaries with different granularity for charts (minute, hour, day, week, ...). The MongoDB manual includes some examples of Pre-Aggregated Reports using time-based metrics from web server access logs.
What is the best practice to cache such data and make it immediately available?
Should I run a worker thread once a day and store the result somewhere?
How and when to pre-aggregate will depend on the source and interdependency of your metrics as well as your use case requirements. Since use cases vary wildly, the only general best practice that comes to mind is that a startup probably doesn't want to be investing too much developer time in building their own dashboard tool (unless that's a core part of the product offering).
There are plenty of dashboard frameworks and reporting/BI tools available in both open source and commercial flavours.
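If you do go the "worker once a day" route, a minimal sketch with pymongo might look like the following. The connection string, database, collection names, and field names are all placeholders; the $out stage simply overwrites a cache collection that the dashboard reads from instantly.

```python
from pymongo import MongoClient

# Daily pre-aggregation job (run from cron or a scheduler).
# Connection string, database, collections, and field names are hypothetical.
client = MongoClient("mongodb://user:pass@host.mongolab.com:27017/mydb")
db = client["mydb"]

pipeline = [
    # Group raw events by calendar day and compute the metrics the dashboard needs.
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$created_at"}},
        "events": {"$sum": 1},
        "unique_users": {"$addToSet": "$user_id"},
    }},
    {"$project": {"events": 1, "unique_users": {"$size": "$unique_users"}}},
    # Replace the cached collection; the dashboard only ever reads daily_metrics.
    {"$out": "daily_metrics"},
]

db.events.aggregate(pipeline)
```

The slow 1-10 minute aggregation then runs once a day in the background, while the dashboard serves simple, fast reads from the pre-computed collection.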