Detailed access logs in Azure CDN

Amazon and Google both provide detailed access logs for HTTP requests served by their CDN platforms. As far as I can see there is no such thing in any of Microsoft's CDN solutions (Microsoft, Akamai, Verizon Standard/Premium). There are diagnostic logs and reports for top resources, but I need the individual requests or at least the total number of requests per URL and day.
I have read Azure CDN file download statistics, but it is very old, so something may have changed.
Is it possible to get the access logs we need from a CDN in Azure using a method I have missed, or is this still a dead end?

It can be done now, at least if using "Azure CDN from Microsoft". You need to create a Log Analytics workspace (if you don't have one already), go to Diagnostic settings on your CDN profile (not the endpoint), and route the raw logs to that workspace.
You can also send the logs to a storage account or feed them into an event hub, but I found Log Analytics the easiest way to build reports - for example, you can chart the hit/miss ratio per CDN point of presence with this:
AzureDiagnostics
| where Category == "AzureCdnAccessLog"
| summarize request_count = count(), totalResponseBytes = sum(toint(responseBytes_s)) by pop_s, cacheStatus_s, sentToOriginShield_b
| order by request_count desc
Read the whole of that second link - endpoints that weren't created recently may need to be re-saved before raw logs start flowing.
As an example, that query is what let me diagnose that misses from Cape Town (and Johannesburg) were being routed through an origin shield PoP in London before the London PoP pulled them from the origin.
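For the original question's need (request counts per URL and day), the same workspace can answer that too. Below is a rough sketch using the azure-monitor-query Python package; the inner KQL can also be pasted straight into the Log Analytics query editor. The requestUri_s column name and the workspace ID are assumptions - check them against your own AzureCdnAccessLog schema.

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Same table as the query above; requestUri_s is assumed to hold the request URL.
QUERY = """
AzureDiagnostics
| where Category == "AzureCdnAccessLog"
| summarize request_count = count() by requestUri_s, bin(TimeGenerated, 1d)
| order by request_count desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",  # placeholder
    query=QUERY,
    timespan=timedelta(days=7),
)

for table in response.tables:
    for row in table.rows:
        print(list(row))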

Related

VSTS Analytics request blocked due to exceeding usage of resource AnalyticsBlockingResource

I have a Power BI mashup that performs 4 queries against different VSTS projects on our tenant via the VSTS Analytics module. I have set up each query as a specific Analytics view. Each view returns < 200 records and is a simple "Get Story Backlog items" query for a single team for today only.
I am frequently getting a message like the following:
An error occurred in the ‘DS BI WorkItems’ query. Error: Request was
blocked due to exceeding usage of resource 'AnalyticsBlockingResource'
in namespace 'User'. For more information on why your request was
blocked, see the topic "Rate limits" on the Microsoft Web site
(https://go.microsoft.com/fwlink/?LinkId=823950). Details:
DataSourceKind=Visual Studio Team Services
ActivityId=a6ac93f3-549c-4eb0-b64e-2b38e18ae7ee
Url=https://vrmobility.analytics.visualstudio.com/_odata/v2.0-preview/WorkItems?$filter=((ProjectSK%20eq%208e25983d-a154-4b53-915f-1394b34e5338)%20and%20((ProjectSK%20eq%208e25983d-a154-4b53-915f-1394b34e5338%20and%20Teams/any(t:t/TeamSK%20eq%2019afa381-35ca-47db-9060-51baa5d0485e))))%20and%20Processes/any(b:(b/BacklogName%20eq%20'Stories')%20and%20((b/ProjectSK%20eq%208e25983d-a154-4b53-915f-1394b34e5338%20and%20(b/TeamSK%20eq%2019afa381-35ca-47db-9060-51baa5d0485e))))&$select=LeadTimeDays,CycleTimeDays,CompletedDate,StateCategory,ParentWorkItemId,ActivatedDate,Activity,VRAgile_ActualCompletionIteration,VRAgile_ActualUatIteration,BusinessValue,VRAgile_ChangeAreaOwnerTeam,ChangedDate,ClosedDate,CompletedWork,VRAgile_CompletionTargetConfidence,CreatedDate,FinishDate,FoundIn,WorkItemId,VRAgile_IncludedinVersion,IntegrationBuild,OriginalEstimate,VRAgile_PlannedCompletionIteration,VRAgile_PlannedUATIteration,Priority,Reason,VRAgile_ReleaseQuality,RemainingWork,vrmobility_VRAgile_RequestedBy,VRAgile_RequestedDept,R...
error=Record
I have checked that page and looked at the Usage page on our VSTS tenant, but during these times my user is not shown as blocked and the VSTS user interface works normally.
The issue goes away after a few minutes, but it then returns after a couple of changes in Power BI (like adding a new column or changing a data type), because each change automatically refreshes all 4 queries again and this seems to trigger the "unacceptable" usage.
It is really frustrating, as I can't continue working on the report and have to go and do something else for 5 minutes, which really impacts my flow.
Any ideas on the cause, or a solution/workaround? It feels to me like an overly sensitive limit on the VSTS Analytics service.
Azure DevOps Services limits the resources individuals can consume and the number of requests they can make to certain commands. When these limits are exceeded, subsequent requests may be either delayed or blocked.
See the below link...
https://learn.microsoft.com/en-us/azure/devops/integrate/concepts/rate-limits?view=azure-devops
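If you end up calling the Analytics OData endpoint directly (outside Power BI), the rate-limits article describes delayed and blocked requests, and throttled responses should tell you how long to wait. Here is a minimal sketch, assuming a personal access token and a Retry-After header on throttled responses - the URL and the retry behaviour are illustrative, not a guaranteed contract. Inside Power BI itself, the practical mitigation is usually to reduce how often all four queries refresh at once (for example by turning off background data previews while editing).

import time

import requests

# Illustrative endpoint; substitute your own organisation and $select/$filter.
ANALYTICS_URL = ("https://{org}.analytics.visualstudio.com/"
                 "_odata/v2.0-preview/WorkItems?$select=WorkItemId,StateCategory")

def get_with_backoff(url, pat, max_attempts=5):
    """Call the Analytics endpoint, honouring Retry-After when throttled."""
    for attempt in range(max_attempts):
        resp = requests.get(url, auth=("", pat))  # PAT as the basic-auth password
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Throttled: wait as instructed, or back off exponentially if no header.
        delay = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"Still throttled after {max_attempts} attempts")

# Example usage (placeholders):
# items = get_with_backoff(ANALYTICS_URL.format(org="your-org"), pat="your-pat")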

Serve file with google cloud storage, google compute engine and external website

I have a question regarding an upload, edit and serve setup.
I have a setup where my Shopify website lets users upload images to a Google Cloud Storage bucket with JavaScript. When a file is uploaded to the bucket, it's sent to a Compute Engine instance which edits it, and the edited file is then uploaded to another bucket. All of this works.
But now I want to serve the file back to the user on my Shopify website, and I can't figure out a way to do this. Is it even possible with my current setup? My problem is how to identify the user session that uploaded the file, so that I can serve the file back to that person.
I hope someone has knowledge about this and is willing to help. Thanks!
Every person that logs into a Shopify store gets a customer ID. You can use this for your uploads: make sure the images are processed and stored with that ID in mind. Then use an App Proxy that sends the same customer ID to your app. Your app can use that ID to find the previously uploaded (and now edited) image and return it to the shop. This is a very common pattern in Shopify apps.
As for getting the customer ID, one way is to output it with Liquid, since you have {{ customer.id }}, or you can dig it out of the cookies Shopify stores for a user session. It exists, but you'll have to hunt for it; I forget its exact name.
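To make the serving step concrete, here is a rough sketch of the app-side App Proxy handler, assuming Flask and the google-cloud-storage client, and assuming the edited file is stored in the second bucket under the customer's ID. Shopify's App Proxy appends a logged_in_customer_id query parameter (plus a signature you should verify - omitted here); the bucket and object names are placeholders.

from datetime import timedelta

from flask import Flask, abort, redirect, request
from google.cloud import storage

app = Flask(__name__)
storage_client = storage.Client()
PROCESSED_BUCKET = "my-processed-images"  # placeholder bucket name

@app.route("/proxy/image")
def serve_processed_image():
    # Shopify's App Proxy forwards the logged-in customer's ID with the request.
    # Always verify the proxy signature before trusting it (not shown here).
    customer_id = request.args.get("logged_in_customer_id")
    if not customer_id:
        abort(403)  # not logged in, or the request didn't come through the proxy

    blob = storage_client.bucket(PROCESSED_BUCKET).blob(f"{customer_id}/latest.png")
    if not blob.exists():
        abort(404)

    # Hand back a short-lived signed URL rather than streaming the bytes through the app.
    return redirect(blob.generate_signed_url(expiration=timedelta(minutes=15)))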

Using dropbox as image hosting for emails

First of all, I'm not sure if this is the correct place to post this question; if it's not, please tell me where I should post it.
My question is whether I can use Dropbox to host images and then send emails that link to those images so they display inline. I don't want to send the image as an attachment, but as an image in the body of the email. Is that possible, or do I have to upload it to a hosting service?
This question was already answered here, and I strongly agree with the accepted response: don't do it in production.
Dropbox imposes bandwidth limits, which you can confirm here and which are quoted below, so I would say it's OK for internal testing only.
Dropbox Basic (free) accounts:
The total amount of traffic that all of your links and file requests
together can generate without getting banned is 20 GB per day. The
total number of downloads that all of your links together can generate
is 100,000 downloads per day.
If you don't have another option or still insist on doing it, just be sure to stay under the limits so that you don't go against their terms of use and avoid being banned.

Using GA Data Export API to Get All UA's

I am using the GA Data Export API to interact with Google Analytics and I'm making a lot of progress. I am initially using this URL endpoint to pull all the profiles under an account:
https://www.google.com/analytics/feeds/accounts/default
This URL retrieves each GA ID (profile) and each UA. One thing I've realized is that one account can contain multiple UAs, and when this happens the request pulls all profiles. We have a client with about 115 profiles under roughly 10 different UAs, and the initial request takes about 30 seconds (after that I believe it must be cached, because it speeds up considerably, but the next day the same thing happens).
Is there a way to get a list of UA's without pulling the profiles? This way I can query the UA specifically for the profiles instead of pulling each one.
Any advice on this would be really helpful!
Thanks
UPDATE: Here's some documentation on the specific call I am using right now:
http://code.google.com/apis/analytics/docs/gdata/gdataReferenceAccountFeed.html
UPDATE 1: I have found some interesting information in the docs
Once your application has verified that the user has Analytics access, its next step is to find out which Analytics accounts the user has access to. Remember, users can have access to many different accounts, and within them, many different profiles. For this reason, your application cannot access any report information without first requesting the list of accounts available to the user. The resulting accounts feed returns that list, but most importantly, the list also contains the account profiles that the user can view.
So this means that you have to use the default accounts call to get these back? Surely, somebody has had this issue before?
So apparently you can query the account if you know the UA ID; however, there is no way to get back a list of only UA IDs.
One way you can do it is to have the user enter their own UA ID instead of having them choose one; not as user-friendly as it could be, but better than making the user wait 30 seconds!
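For what it's worth, since the accounts feed is the only way in, another workaround is to fetch it once, collect the distinct web property (UA) IDs from the profiles it returns, and drive everything else off that cached list. A rough sketch, assuming you already have an auth header for the legacy feed and that the property is exposed as ga:webPropertyId (check the account feed reference above - the exact element names here are assumptions):

import xml.etree.ElementTree as ET

import requests

FEED_URL = "https://www.google.com/analytics/feeds/accounts/default"

def distinct_ua_ids(auth_header_value):
    """Fetch the accounts feed once and return the unique UA (web property) IDs."""
    resp = requests.get(FEED_URL, headers={"Authorization": auth_header_value})
    resp.raise_for_status()

    ua_ids = set()
    # Namespace-agnostic walk, looking for <dxp:property name="ga:webPropertyId" value="UA-..."/>
    for element in ET.fromstring(resp.content).iter():
        if element.tag.endswith("property") and element.get("name") == "ga:webPropertyId":
            ua_ids.add(element.get("value"))
    return sorted(ua_ids)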

screenshot-grabbing email tool

I have a web site with various graphs embedded in it that are generated externally. Occasionally those graphs will fail to generate and I would like to catch that when it happens. These graphs are embedded in multiple pages and I would rather not check each page manually. Is there any kind of tool or perhaps a browser addon that could periodically take screenshots of different URLs and email them in a single email? It would be sufficient to have scaled-down screenshots of full pages emailed maybe once a day to me, allowing me to take a quick glance and see that all the graphs are there and look okay.
I'm a big fan of automation. Rather than have emails generated that you then have to look at, take a look at 'replacing custom missing images in jquery'. This will run a piece of JavaScript for each image that fails to load. Extending that to make a request to a URL that you control, which could also include the broken URL (or just the broken filename), would not be too hard. That URL would then generate an email and store the broken URL so that it doesn't send 5000 emails if there's a flurry of hits to your page; a rough sketch of that endpoint is below.
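Here is a minimal sketch of that reporting endpoint, assuming Flask; the client side is just an onerror handler that POSTs the failing src here, and send_alert_email is a hypothetical helper you'd wire up to smtplib or your mail provider.

import time

from flask import Flask, request

app = Flask(__name__)
reported = {}            # broken URL -> timestamp of the last email we sent about it
REPORT_COOLDOWN = 86400  # at most one email per broken image per day

@app.route("/broken-image", methods=["POST"])
def broken_image():
    broken_url = request.form.get("src", "unknown")
    now = time.time()
    if now - reported.get(broken_url, 0) > REPORT_COOLDOWN:
        reported[broken_url] = now
        send_alert_email(broken_url)
    return "", 204

def send_alert_email(broken_url):
    # Hypothetical helper: replace with smtplib or your transactional mail provider.
    print("Image failed to load:", broken_url)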
Another idea, building on the above, is to effectively change the external 404 from the source site into a local one (e.g. /backend/missing-images/) - the full path need not exist; you are just generating a local 404 record in your Apache logs. Logwatch will then email you a list of 404 pages from the Apache log daily (or more often, if you want).