What is the limit of the Bing Distance Matrix API for a basic user? - bing-maps

I'm wondering what the limit of the Bing Distance Matrix API is. The docs here say 2,500 origin-destination pairs. Is that the maximum per single request, or a cumulative monthly limit? Another thing that confuses me is billable transactions. What counts as a billable transaction?
I did try reading the docs and implementing the API. I'm a little afraid to try a 50x50 matrix in case I get charged.

That limit is per request. There is no monthly limit.
Non-billable transactions are tracked, but they don't count towards the free limits (or paid-for transaction credits). Billable transactions eat into the free credits and potentially into purchased transaction credits, if you have any.
Note that with Bing Maps you have to purchase transactions before you can use them. So if you don't have a license today, you can at most max out the free limits. Once you hit the free limits, the service stops working for your account until you decide to get a paid license, or until a new year comes around. So you don't need to worry about a surprise bill in this case.
In Bing Maps, every 4 cells in a matrix (origin/destination pairs) generate 1 transaction. So a 50x50 matrix has 2,500 cells and thus uses 625 transactions. The Bing Maps free limit is 125,000 transactions a year, so you could in theory request a 50x50 matrix every other day and stay under the free yearly limit.
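As a rough sanity check on that arithmetic (the helper below is just illustrative, not an official Bing Maps calculator):

```python
# Rough sketch: estimate billable transactions for one distance matrix request.
def matrix_transactions(origins, destinations, cells_per_transaction=4):
    cells = origins * destinations          # one cell per origin/destination pair
    return cells / cells_per_transaction    # every 4 cells = 1 transaction

per_request = matrix_transactions(50, 50)   # 2,500 cells -> 625 transactions
yearly = per_request * (365 // 2)           # one request every other day
print(per_request, yearly)                  # 625.0  113750.0 (under 125,000)
```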

Related

Athena/DDB to condense millions of data points for plotting them on a graph

I need to plot trend charts in a React app based on user inputs such as timestamps, devices, etc. I have the related time series data in DynamoDB and S3 (which I can query using Athena).
Returning all those millions of data points for a graph seems unreasonable and is super laggy.
I guess one option is "binning", where I decide the number of bins based on how big the time range is and take the average of the readings in each bin. However, I'm concerned about how well that will capture the drops and spikes, which we need to show accurately.
Athena queries and DDB queries (due to the 1 MB limit) both seem fairly slow so far.
Of course the size of the response payload is another concern, as the API and Lambda limit it to 10 MB and 6 MB respectively.
Any ideas?
I can't suggest anything smarter than "binning", but if you are concerned that the bucket interval might become too wide and performance might suffer, you can fix the interval and then create more than one table. For example, the interval can be 1 hour and you can have a new table for each week.
This is what we did when we had to deal with time series in DynamoDB. At some point, we decided to switch to Amazon Timestream.
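For reference, here's a minimal sketch of that kind of fixed-interval binning in Python/pandas (the DataFrame shape and column names are just assumptions about your data). Keeping min/max alongside the mean is one way to preserve the drops and spikes the question worries about:

```python
import pandas as pd

# Minimal sketch of fixed-interval binning (assumes readings have already been
# pulled from Athena/DynamoDB into a DataFrame; column names are made up).
readings = pd.DataFrame({
    "ts": pd.date_range("2023-01-01", periods=100_000, freq="s"),
    "value": range(100_000),
}).set_index("ts")

# One row per hour: keep min/max alongside the mean so drops and spikes
# still show up after binning.
binned = readings["value"].resample("1h").agg(["mean", "min", "max"])
print(binned.head())
```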

Bing Maps REST API Snap to Route Fails Silently

I'm running into an odd problem using the snap to route feature of the Bing Maps REST API. It works great for most of the GPS coordinates I send it -- they were recorded by me on a recent motorcycle trip, so they're "contiguous" -- but fails silently for others.
As in, the returned status code is 200, but rather than returning SnapToRoadResponse objects it returns Route objects, which lack any of the snapped-to coordinates I need.
What's particularly interesting is the problem occurs in the middle of processing the entire route. In other words, it works fine for 6 or so invocations (each with around 100 points), fails for a number of invocations, and then works fine for the remaining invocations.
Is there a rate limit on how frequently you can access the snap-to service? I'm using just a basic Bing Maps account but could program around rate limitations easily enough (e.g., by waiting between invocations). But I couldn't find any reference to one in the documentation I reviewed.
Or maybe Bing Maps just doesn't like the hills east of Santa Rosa and the 101 corridor south from there over the Golden Gate Bridge... :)
Turns out the problem was sending too many points through the request pipeline. The Bing Maps REST API strongly advises against using GET requests with more than 100 geographic points. I assumed the Bing Maps REST Toolkit took care of issuing larger requests as POSTs. It does not, however, appear to do that.
Reducing the number of geographic points to no more than 100 per request solved the problem.
The portion of my route that was causing problems involved high-speed freeway travel, which caused my code to interpolate additional points between observed data points to ensure no two points were more than 2.5 kilometers apart (a Bing Maps hard limit). That pushed the total number of points per request along that stretch of the route over 100, causing the problem I encountered.
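For anyone who hits the same silent failure, a rough sketch of the batching fix in Python (the endpoint URL, query format, and response fields reflect my reading of the SnapToRoad docs, so double-check them; the key is a placeholder):

```python
import requests

# Placeholder key and the SnapToRoad endpoint as I understand it from the docs.
SNAP_URL = "https://dev.virtualearth.net/REST/v1/Routes/SnapToRoad"
BING_KEY = "YOUR_BING_MAPS_KEY"

def chunks(seq, size=100):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def snap_points(points):
    """points: list of (lat, lon) tuples; snapped in batches of <= 100 per request."""
    snapped = []
    for batch in chunks(points):
        params = {
            "points": ";".join(f"{lat},{lon}" for lat, lon in batch),
            "key": BING_KEY,
        }
        resp = requests.get(SNAP_URL, params=params)
        resp.raise_for_status()
        for rs in resp.json().get("resourceSets", []):
            for resource in rs.get("resources", []):
                # Expecting SnapToRoadResponse resources with a snappedPoints list.
                snapped.extend(resource.get("snappedPoints", []))
    return snapped
```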

Reciprocal cost allocation between units servicing each other (typical managerial accounting problem) in T-SQL

I am desperately searching for an efficient way (if there is one) to solve a kind of recursive task in T-SQL. I could successfully model it in Excel and on paper with an iterative solution, as many CMAs would for a small example: re-allocating shares of cost between pairs of support units serving each other, iterating until the unallocated cost left in the balancing unit falls below a reasonably small number.
Now I am trying to find a good, scalable solution (or at least a feasible approach) to achieve the same thing in T-SQL for this typical computational task in managerial accounting: some internal support units service each other (and incur periodic costs, like salaries) while the firm produces, say, 2 or 3 final products, and at the end of the costing exercise each unit's share of internally generated support overhead needs to be reasonably allocated to those products' costs according to some physical distribution base (say, man-hours spent on each).
It would be quite simple if there were no reciprocal services: one support unit provides some service to other support units during the period (so its cost needs to be allocated alongside that service quantity flow), the second and third support units do the same for their peers, and eventually all of their costs get buried in production costs and spread between the products they jointly serviced (not equally across support units; I'm using an activity-based-costing approach here). In a real case there could be many more than the 2-3 units one could solve manually in Excel or on paper. So it really needs a dynamic-parameter algorithm (X support units servicing X-1 peers and Y products in the period, driven by some quantity/percentage square-matrix allocation table) to spread their periodic cost to one unit of each product at the end. Preferably natively in SQL, without external .NET or other assembly references.
A numeric example:
the 3 support units A, B and C incurred $100, $200 and $300 of expenses respectively in the period, and each worked 50 man-hrs
A-unit serviced B-unit for 10 hrs and C-unit for 5 hrs; B-unit serviced A-unit for 5 hrs; C-unit serviced A-unit for 3 hrs and B-unit for 10 hrs
The rest of the support units' work time (A-unit 35 hrs: 30% for P1 and 70% for P2; B-unit 45 hrs: 35% for P1 and 65% for P2; C-unit 37 hrs: 100% for P2) was spent servicing the output of the two products (P1 and P2). This portion of their direct time/effort allocates easily to the products, but because of the reciprocal services between units, some share of the support units' cost needs to be shifted to each product's cost pool in a proportion that differs from the direct time-to-product allocation (it needs an adjusted mix coefficient to capture the second-step effects).
I could solve this in Excel with an iterative algorithm and VBA arrays, using:
(a) vector of period costs by each support unit (to finally reallocate to products and leave 0),
(b) 2dim array/matrix of coefficients of self-service between support units (based on man hrs - one to another),
(c) 2dim array/matrix of direct hrs service for each product by support units,
(d) a minimal tolerable error of $1 (the leftover unallocated cost in a unit at which iteration stops).
For just 2 or 3 elements (while still manually provable on paper) this is a feasible approach, but it becomes impossible to manually verify once I have 10-20+ support units and many products in the matrix; and I want to switch from Excel and VBA to MS SQL Server and T-SQL for other reasons.
Since this business case is not new at all, I was hoping more experienced colleagues could offer advice on how best to solve it; I believe there must already be a solution to this task (without resorting to a pure programming environment or external code).
I am thinking of combining recursive CTEs, table variables and aggregate window functions, but I'm struggling with how exactly to put the puzzle pieces together so that it is truly scalable for my potentially growing unit/product matrix dimensions.
For my current level it's a little mind-blowing, so I'd be grateful for any advice.
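Not the T-SQL you're after, but for reference: the iterative re-allocation described above converges to the classic reciprocal (simultaneous-equations) method, i.e. solve a small linear system for each support unit's grossed-up cost, then push those totals onto the products. A minimal sketch in Python/NumPy using the numbers from the example (a T-SQL version would effectively have to compute the same solve, or iterate until the leftover drops below the $1 tolerance):

```python
import numpy as np

# Reciprocal-services example from above: 3 support units A, B, C,
# each with 50 man-hours and period costs of $100, $200, $300.
own_cost = np.array([100.0, 200.0, 300.0])          # A, B, C

# F[i, j] = fraction of unit j's hours given to unit i.
# A got 5/50 of B and 3/50 of C; B got 10/50 of A and 10/50 of C; C got 5/50 of A.
F = np.array([
    [0.00, 0.10, 0.06],
    [0.20, 0.00, 0.20],
    [0.10, 0.00, 0.00],
])

# Reciprocal method: total cost T satisfies T = own_cost + F @ T,
# i.e. (I - F) @ T = own_cost. This is the fixed point the Excel iteration approaches.
T = np.linalg.solve(np.eye(3) - F, own_cost)

# Fractions of each unit's 50 hours spent directly on products P1 and P2:
# A: 35 hrs split 30/70, B: 45 hrs split 35/65, C: 37 hrs all on P2.
direct = np.array([
    [35/50 * 0.30, 35/50 * 0.70],
    [45/50 * 0.35, 45/50 * 0.65],
    [0.0,          37/50],
])

product_cost = T @ direct
print("Grossed-up support costs:", T.round(2))
print("Allocated to P1, P2:", product_cost.round(2))
print("Sanity check (should be 600):", round(product_cost.sum(), 2))
```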

Qlik Sense Capability API 10,000 limit

We've reached the limit for hypercubes and need to extract more than 10,000 data points (I use this term for lack of a better word for the individual cells the API sends; 10,000 is the maximum when you multiply the width and height of your initial fetch) using the Capability API. Has anyone been able to get the next page for hypercubes? Note that our requirement is for mashups, not extensions.
We did a workaround, but it required us to break up our dataset and it takes a little longer.
It makes you think: since Qlik is a data analytics tool, there should be a way to get all of your data. In an era where we process millions if not billions of records, 10,000 data points (not even records) is minuscule.
I should also mention that the app we are using this for does stock analysis; users want to see trends and need information on individual points as tooltips. With the number of dimensions and measures we pass (7 per stock, times about 20 stocks = 140), we are restricted to only about 70 days (10,000 / 140).
We are using Qlik Sense Server 11.24.4 (Qlik Sense November 2017 Patch 2).
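For what it's worth, the arithmetic of the 10,000-cell budget per fetch can be sketched like this (plain Python, not Qlik code; the qTop/qLeft/qHeight/qWidth keys are only meant to mirror the shape of a hypercube data page, so treat them as an assumption):

```python
CELL_CAP = 10_000   # width * height of a single fetch must stay under this

def page_plan(total_rows, width):
    """Split a width x total_rows cube into fetch-sized pages."""
    height = CELL_CAP // width                     # rows that fit per fetch
    pages = []
    for top in range(0, total_rows, height):
        pages.append({"qTop": top, "qLeft": 0,
                      "qWidth": width,
                      "qHeight": min(height, total_rows - top)})
    return pages

# 7 measures x ~20 stocks = 140 columns -> only 71 rows per fetch,
# so a year of daily rows (365) needs 6 paged fetches instead of one.
plan = page_plan(365, 140)
print(len(plan), plan[0], plan[-1])
```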

iOS Leaderboard: Rank users on overall shortest time

I want to be able to rank users based on how quickly they have completed each level. I want this to be an overall leaderboard, i.e. shortest overall time for all levels.
The problem here is that for each level completed, the total completion time goes up. But I want the leaderboard to take that into account, so that a user who has completed 10 levels ranks more highly than someone with only 1 completed level.
How can I create some kind of score based on this?
Before submitting the time to the leaderboard, you could adjust the total time based on the number of levels completed: for each level completed, reduce the total by a set amount, so that people who complete all levels with a given average time score better than people with the same average time but fewer completed levels.
My preferred method:
Or you could express it as a score value.
Completing a level is worth 1,000 points.
Each level also has a time bonus: the longer you take, the less bonus you get.
E.g.:
I complete the level in 102 secs; the goal time is 120 secs.
I get 1,000 points for completion and 1,500 points for each second
that I beat the goal time by.
This way I get 1,000 + (18 * 1,500) = 28,000 points.
The next guy completes in 100 secs.
He gets 1,000 + (20 * 1,500) = 31,000 points.
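A quick sketch of that bonus formula in Python (the constants are the ones from the example above; the function name is made up):

```python
# Completion bonus plus a per-second bonus for beating the goal time.
COMPLETION_POINTS = 1_000
POINTS_PER_SECOND_UNDER_GOAL = 1_500

def level_score(completion_secs, goal_secs):
    seconds_under = max(0, goal_secs - completion_secs)  # no bonus if slower than goal
    return COMPLETION_POINTS + seconds_under * POINTS_PER_SECOND_UNDER_GOAL

print(level_score(102, 120))  # 1,000 + 18 * 1,500 = 28,000
print(level_score(100, 120))  # 1,000 + 20 * 1,500 = 31,000
```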
I suggest adding a default amount of time to the total for each incomplete level. So, say, if a player beats a new level in 3 minutes, that replaces a 10 minute placeholder time, and they 'save' 7 minutes from the total.
Without that kind of trick, the iPhone has no provision for multi-factor rankings.
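A tiny sketch of that placeholder-time idea (the 10-minute default and the 10-level count are just the example's assumptions):

```python
# Incomplete levels count as a fixed placeholder time, so completing more
# levels always lowers (improves) the submitted total.
PLACEHOLDER_SECS = 10 * 60   # 10-minute default per unfinished level
TOTAL_LEVELS = 10

def leaderboard_total(completed_times_secs):
    missing = TOTAL_LEVELS - len(completed_times_secs)
    return sum(completed_times_secs) + missing * PLACEHOLDER_SECS

# Beating one new level in 3 minutes replaces a 10-minute placeholder,
# 'saving' 7 minutes from the total.
print(leaderboard_total([]))     # 6000 seconds
print(leaderboard_total([180]))  # 5580 seconds
```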
Leaderboard scores in GameKit have to be expressed as a single number (see this section of the GameKit Programming Guide), so that won't be possible.
Your best bet would be to just have a completion time leaderboard for people who have completed all the levels, and maybe another leaderboard (or a few) for people who have completed a smaller number of levels.