Azure REST API - Get all VMs from a resource, ordered

I am trying to make a simple ranking (like a top 10) of the VMs that I have under a specific resource, for example the top 10 VMs by the Percentage CPU metric. Right now I collect the metrics of each VM individually and then compare them against the others. I couldn't find anything in the API that would make this less rudimentary. Do you know of any query or filter that approximates what I am asking for?
Thanks!

Today the REST API for metrics can't do any aggregation across multiple resources.
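Until cross-resource aggregation is supported, the per-VM approach described in the question can be scripted: fetch each VM's metric via the Azure Monitor metrics endpoint and sort client-side. A minimal Python sketch, assuming a bearer token and a list of VM resource IDs are available; the helper names and the usage lines are illustrative, not an official API feature:

```python
# Sketch: rank VMs by average "Percentage CPU" by querying each VM's metrics
# individually, since the metrics API aggregates within a single resource only.
# The resource IDs, token, and helper names are hypothetical; the endpoint
# shape follows the Azure Monitor "Metrics - List" REST call.

METRICS_URL = (
    "https://management.azure.com{resource_id}"
    "/providers/Microsoft.Insights/metrics"
    "?api-version=2018-01-01&metricnames=Percentage CPU&aggregation=Average"
)

def fetch_avg_cpu(resource_id, token):
    """Return the most recent average-CPU datapoint for one VM resource."""
    import requests  # imported here so the ranking helper below has no deps
    resp = requests.get(
        METRICS_URL.format(resource_id=resource_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    points = resp.json()["value"][0]["timeseries"][0]["data"]
    return points[-1].get("average", 0.0)

def top_n(cpu_by_vm, n=10):
    """Client-side 'top N': sort a {vm_name: avg_cpu} mapping descending."""
    return sorted(cpu_by_vm.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Usage (hypothetical resource IDs):
#   cpu = {rid: fetch_avg_cpu(rid, token) for rid in vm_resource_ids}
#   for name, value in top_n(cpu):
#       print(name, value)
```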

Related

MongoDB aggregate for Dashboard

I want to show the data in MongoDB on a dashboard. I implemented it with an aggregation pipeline.
I am constantly receiving the "Query Targeting: Scanned Objects / Returned has gone above 1000" alert. How do I solve this alert? The methods I thought of are as follows.
Remove the aggregation from the dashboard: if we need the aggregated data, send a query at that time to obtain it.
Separate the aggregation and issue queries from business logic: split the data obtained at once through the aggregation into multiple queries, then combine the results.
Is there a better or more common way to handle this?
I am constantly receiving the "Query Targeting: Scanned Objects / Returned has gone above 1000" alert. How do I solve this alert?
What, specifically, are you trying to solve here?
The Query Targeting metric (and its associated alert) provides general information about the efficiency of the workload against the cluster. It can help identify potential problems, most notably missing indexes. More information about the metric, and the actions you can take in response, is described here.
That said, the metric itself is not perfect. The fact that the targeting ratio is high enough to trigger an alert does not necessarily mean that there is a problem or that any particular action needs to be taken. Particularly notable here is that aggregation operations can cause misleading targeting ratios depending on what types of transformations the pipeline is applying. So the existence of the alert indicates there may be some improvements that could be pursued, but it does not guarantee that there are. You can certainly take a look at the workload using the strategies described in that documentation to determine if any actions like index creation are needed in your specific situation.
The two approaches you mention in the question could be considered, but neither directly addresses the alert itself. Certainly, if these are heavy aggregations that aren't needed for the application to function, there may be good reason to reduce their frequency. But if they are needed by the application and structured to be reasonably efficient, I would not recommend drastic adjustments just to avoid triggering the alert. Rather, it may be that the default query targeting threshold is too low for your particular use case and workload, and you may consider raising it instead.
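As a concrete illustration of the ratio the alert is based on, and of the usual fix of putting a selective, index-backed $match first, here is a small sketch (the collection and field names are hypothetical):

```python
# Sketch of the ratio behind the alert, and of keeping it low by putting a
# selective, index-backed $match first. Field names are hypothetical.

def targeting_ratio(docs_examined, docs_returned):
    """'Scanned Objects / Returned' as the Atlas alert computes it."""
    return docs_examined / max(docs_returned, 1)

# An aggregation whose first stage filters on an indexed field examines few
# documents, so the ratio stays small:
pipeline = [
    {"$match": {"status": "active"}},                 # backed by an index on status
    {"$group": {"_id": "$region", "n": {"$sum": 1}}},
]

# With pymongo (connection assumed) the supporting index would be created as:
#   db.collection.create_index([("status", 1)])
```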

What are the usage and resource limits for OData queries on Azure DevOps Server 2020 on-premises?

Does Azure DevOps Server 2020 (on-prem) have any limits on the number of OData queries and/or the amount of data returned by OData queries (for the Analytics feature)? I found this documentation, https://learn.microsoft.com/en-us/azure/devops/integrate/concepts/rate-limits?view=azure-devops-2020, but it implicitly refers to Azure DevOps Services by referencing information such as Usage views/settings that are not available on-prem; so I don't believe it is accurate for Azure DevOps on-prem.
The documentation does not mention any limit on the number of OData queries. As for the amount of data returned by OData queries, the document says:
Analytics forces paging when query results exceed 10000 records. In that case, you will get the first page of data and a link to follow to get the next page. The link (@odata.nextLink) can be found at the end of the JSON output. It will look like the original query followed by $skip or $skiptoken.
See here for more information.
So I do not think there is any limit on the number of OData queries or the amount of data they return.
The document describes rate limits that delay requests from individual users when their usage exceeds a threshold consumption of a resource within a (sliding) five-minute window. So the rate limits do not constrain the number of OData queries or the amount of data returned, but rather the frequency at which the OData queries call Azure DevOps Services.
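The forced paging described above can be handled generically by following `@odata.nextLink` until it disappears. A small Python sketch, with the HTTP call abstracted behind a `get_page` callable (a hypothetical stand-in for an authenticated GET returning the parsed JSON body of one page):

```python
# Sketch: drain all pages of an OData query by following @odata.nextLink
# until it is absent. `get_page` is a hypothetical stand-in for an
# authenticated HTTP GET that returns one page's parsed JSON body.

def fetch_all_pages(first_url, get_page):
    items = []
    url = first_url
    while url:
        body = get_page(url)
        items.extend(body.get("value", []))
        url = body.get("@odata.nextLink")  # missing on the last page
    return items
```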

Firestore pagination of multiple queries

In my case, there are 10 fields and all of them need to be searched with "or"; that is why I'm using multiple queries and filtering the common items on the client side with Promise.all().
The problem is that I would like to implement pagination. I don't want to fetch all the results of each query, which costs too many reads. But I can't use .limit() on each query either, because what I want to limit is the final result.
For example, I would like to get the first 50 common results across the 10 queries' results; if I do limit(50) on each query, the final result might contain fewer than 50.
Anyone has ideas about pagination for multiple queries?
I believe that the best way for you to achieve that is using query cursors, so you can better manage the data that you retrieve from your searches.
I would recommend taking a look at the links below for more information, including a question answered by the community that seems similar to your case.
Paginate data with query cursors
Multi query and pagination with Firestore
Let me know if the information helped you!
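For the merge step itself, one way to sketch combining the per-query results client-side with a single final limit (the dict shape stands in for Firestore document snapshots and is hypothetical; for pagination you would remember each query's last consumed document and resume that query with a cursor such as start_after):

```python
# Sketch: merge several query result lists client-side, de-duplicating by
# document id, with one final limit. The dict shape is a hypothetical
# stand-in for Firestore document snapshots.
from itertools import chain

def merge_with_limit(query_results, limit=50):
    seen, merged = set(), []
    for doc in chain.from_iterable(query_results):
        if doc["id"] not in seen:       # drop duplicates found by several queries
            seen.add(doc["id"])
            merged.append(doc)
        if len(merged) == limit:        # one limit over the combined result
            break
    return merged
```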
Not sure if it's relevant, but I think I'm having a similar problem and have come up with a few approaches that might work around it.
Instead of making 10 queries, fetch all the products matching a single selection filter, e.g. category (in my case a customer can only set a single category field), and do all the filtering on the client side. With this approach the app still reads lots of documents at once, but at least it reuses them during the session and filters with more flexibility than Firestore's strict rules allow.
Run multiple queries in a server environment, such as Cloud Functions with Node.js, and return only the first 50 documents that match all the filters. With this approach the client only receives the data it wants, but the server still reads a lot.
This is actually your approach combined with the accepted answer.
Create automatically maintained documents in Firebase with the help of Cloud Functions, e.g. Colors: {red: [product1ID, product2ID, ...], ...}, storing just the document IDs. Depending on the filters, intersect the matching arrays (AND logic) on the server side with Cloud Functions and push the first 50 elements to the client. Knowing which products to display, the client then handles fetching them with the client-side library.
Hope these help. Here is my original post: Firestore multiple `in` queries with `and` logic, query structure.

How to determine what a cluster is about?

I have tweets retrieved using the Twitter API and need to group them into 2 categories. To do the grouping, I used doc2vec to represent the tweets in numerical form and then performed DBSCAN clustering. However, how do I know what category a cluster belongs to? My output is just tweets assigned to different clusters.
For example, I need to know which tweets indicate the needs of people and which tweets indicate that people have help to offer.
How can I make out which cluster has what type of tweets?
Thank you!
Probably neither cluster is either of these two things.
Clustering is unsupervised. You don't get to control what it finds. It could be tweets that contain the f... word vs. tweets that don't.
If you want something specific such as "needs" and "offers", then you absolutely need to train a supervised algorithm from labeled data.
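To make the "train a supervised algorithm from labeled data" point concrete, here is a deliberately tiny pure-Python baseline (bag-of-words plus nearest centroid). The example tweets and labels are invented; in practice you would hand-label real tweets and use a proper library such as scikit-learn:

```python
# Sketch: a minimal supervised text classifier. Each label gets a "centroid"
# of summed word counts from its labeled examples; a new tweet is assigned
# the label whose centroid it overlaps most. Toy illustration only.
from collections import Counter

def bow(text):
    """Bag-of-words: lowercase word counts."""
    return Counter(text.lower().split())

def train(examples):
    """examples: list of (text, label) pairs -> per-label word-count centroids."""
    centroids = {}
    for text, label in examples:
        centroids.setdefault(label, Counter()).update(bow(text))
    return centroids

def classify(text, centroids):
    """Assign the label whose centroid shares the most word mass with text."""
    words = bow(text)
    def overlap(centroid):
        return sum(min(words[w], centroid[w]) for w in words)
    return max(centroids, key=lambda label: overlap(centroids[label]))
```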

Queries with calculated fields and sorting

We have started using queries in VSTS to find the projects with the highest value to facilitate a better planning structure and that works great but we also would like to measure that against a cost field. Ideally we would be calculating an ROI field that measures our value against cost and then sorts by ROI.
Is there no way to work with calculated fields in VSTS queries? We haven't been able to find anything so far. Any other workarounds?
It is not supported.
As a workaround, you can run a query with WIQL through the WIQL REST API and compute the calculated field yourself.
You can update the WIQL according to the stored query (get the WIQL through the Queries REST API).
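A sketch of that workaround in Python: run a query's WIQL through the REST API, then compute and sort ROI on the client. The organization/project, the field names (`value`, `cost`), and the auth token are hypothetical placeholders:

```python
# Sketch: since calculated fields aren't supported in queries, fetch work
# items via the WIQL REST API and compute/sort ROI = value / cost client-side.
# The org/project, field names, and token are hypothetical placeholders.

def run_wiql(org, project, wiql, token):
    """Return the ids of work items matched by a WIQL query."""
    import requests  # imported here so rank_by_roi below has no dependencies
    url = f"https://dev.azure.com/{org}/{project}/_apis/wit/wiql?api-version=6.0"
    resp = requests.post(url, json={"query": wiql},
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return [wi["id"] for wi in resp.json()["workItems"]]

def rank_by_roi(items):
    """items: dicts with 'value' and 'cost' fields; sort by ROI descending."""
    def roi(item):
        return item["value"] / item["cost"] if item["cost"] else 0.0
    return sorted(items, key=roi, reverse=True)
```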