Number of possible riders in UberPOOL - uber-api

Am I able to set the maximum number of possible UberPOOL riders in a vehicle to something higher than 4 or 6? I'm working on a request-based shuttle program and would like to see if I could make use of the UberPOOL API for this project. Ideally I would like to bump up the number of riders in a vehicle to something much higher, probably around 10 - 15 per shuttle.

According to the API documentation for v1.2, the maximum seat count allowed for an uberPOOL request is 2, so you will not be able to request more than that. The best alternative is to use uberX or uberXL for your request API call; however, the maximum is 6 occupants for uberXL and 4 for uberX.
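For reference, a minimal sketch of what a v1.2 ride request looks like, assuming Python with the requests package; the token, product id, and coordinates are placeholders:
import requests
resp = requests.post(
    "https://api.uber.com/v1.2/requests",
    headers={"Authorization": "Bearer <OAUTH_TOKEN>"},  # placeholder token
    json={
        "product_id": "<POOL_PRODUCT_ID>",  # placeholder uberPOOL product id
        "start_latitude": 37.7752,
        "start_longitude": -122.4181,
        "end_latitude": 37.7956,
        "end_longitude": -122.4210,
        "seat_count": 2,  # uberPOOL caps this at 2; higher values are rejected
    },
)
print(resp.status_code, resp.json())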

XRPL: How to get the history of the balance of an account?

I would like to query the history of the balance of an XRPL account with the new WebSocket API.
For example, how do I check the balance of an account on a particular day?
I know with the v2 api, there was a possibility to query balance_changes. But this doesn't seem to be part of the new version.
For example:
https://data.ripple.com/v2/accounts/rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn/balance_changes?start=2018-01-01T00:00:00Z
How is this done with the new Websocket API's?
There's no convenient call in the WebSocket API to get this. I assume you want the XRP balance, not token/issued currency balances, which are in a different place.
One way to go about it is to make an account_tx call and then iterate through the metadata. Many, but not all, transactions will have a ModifiedNode entry of type AccountRoot—if that transaction changed the account's XRP balance, you can see the difference in the PreviousFields vs. FinalFields for that entry. The Look Up Transaction Results tutorial has some details on how to parse out metadata this way. There are some kind of tricky edge cases here: for example, if you send a transaction that buys 10 drops of XRP in the exchange but burns 10 drops of XRP as a transaction cost, then the metadata won't show a balance change because the net change was zero (+10, -10).
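To illustrate, here is a rough sketch of that approach, assuming Python with the websockets package and a public server; pagination with markers is omitted for brevity:
import asyncio
import json
import websockets
ACCOUNT = "rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn"
async def xrp_balance_changes():
    async with websockets.connect("wss://s1.ripple.com") as ws:
        await ws.send(json.dumps({
            "command": "account_tx",
            "account": ACCOUNT,
            "ledger_index_min": -1,
            "ledger_index_max": -1,
            "limit": 200,
        }))
        result = json.loads(await ws.recv())["result"]
        for entry in result["transactions"]:
            for node in entry["meta"]["AffectedNodes"]:
                mod = node.get("ModifiedNode", {})
                if mod.get("LedgerEntryType") != "AccountRoot":
                    continue
                final = mod.get("FinalFields", {})
                prev = mod.get("PreviousFields", {})
                if final.get("Account") == ACCOUNT and "Balance" in prev:
                    # Balances are strings of drops (1 XRP = 1,000,000 drops)
                    delta = int(final["Balance"]) - int(prev["Balance"])
                    print(entry["tx"]["hash"], delta, "drops")
asyncio.run(xrp_balance_changes())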
Another approach could be to estimate what ledger_index was most recently closed at a given time, then use account_info to look up the account's balance as of that time. The hard part there is figuring out what the latest ledger index was at a given time. This is one of the places where the Data API was just more convenient than the WebSocket API—there's no way to look up by date in WebSocket so you have to try a ledger index, see what the close time of the ledger was, try another ledger index, see what the date is, etc.
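A sketch of that second approach, binary-searching ledger close times (which are expressed in seconds since the Ripple Epoch, 2000-01-01 UTC); this assumes the server keeps enough history for the date you ask about:
import asyncio
import datetime
import json
import websockets
RIPPLE_EPOCH = 946684800  # 2000-01-01T00:00:00Z as a Unix timestamp
async def call(ws, **request):
    await ws.send(json.dumps(request))
    return json.loads(await ws.recv())["result"]
async def balance_at(account, when):
    async with websockets.connect("wss://s1.ripple.com") as ws:
        target = when.timestamp() - RIPPLE_EPOCH
        lo = 32570  # earliest ledger preserved on mainnet
        hi = (await call(ws, command="ledger", ledger_index="validated"))["ledger_index"]
        while lo < hi:  # find the first ledger closed at or after the target
            mid = (lo + hi) // 2
            ledger = (await call(ws, command="ledger", ledger_index=mid))["ledger"]
            if ledger["close_time"] < target:
                lo = mid + 1
            else:
                hi = mid
        info = await call(ws, command="account_info", account=account, ledger_index=lo)
        return int(info["account_data"]["Balance"])  # XRP balance in drops
when = datetime.datetime(2018, 1, 1, tzinfo=datetime.timezone.utc)
print(asyncio.run(balance_at("rf1BiGeXwwQoi8Z2ueFYTEXSwuJYfV2Jpn", when)))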

Relation between QCI and Bearer in LTE

I was reading through LTE specifications in order to understand the number of bearers that could be created per UE.
Spec 23.203 - Table 6.1.7: Standardized QCI characteristics defines 9 values for QoS (Quality of Service) based on performance characteristics of IP packets, while according to Spec 24.007 - Table 11.5: EPS bearer identity, E-RAB IDs 5-15 (11 bearer IDs) are usable.
I am unable to work out whether the number of bearers can be 9 or 11.
Thanks!
It is possible for 2 EPS bearers to have the same QCI. For example, they could be flowing to different PGWs but have the same QCI. Hence, based on the number of identifiers available, the answer is 11.
QCI 1 to 9 are pre-defined settings and given by 3GPP for every operator and network vendors. This allows for interoperability among vendors.
However, operators can create their own QCIs from 10 to 255 with their own customized settings for a particular service (like maximum allowed delay, priority, GBR or Non-GBR, etc.).
It's worth noting that it is very unlikely that a mobile will use that number of bearers at the same time. Typically you have just one (QCI 9) for pretty much everything you do (including Skype, WhatsApp calls or streaming).
If on top of that you also have voice over LTE, then you get QCI 1 while you're in a call and QCI 5 for IMS signaling.
Cheers
LTE QoS defines priorities for certain customers or services during congestion. In an LTE broadband network, QoS is implemented between the CPE and the PDN Gateway and is applied to a set of bearers. A 'bearer' is basically a virtual concept: a set of network configuration that gives special treatment to a set of traffic, e.g. VoIP packets are prioritized by the network compared to web browser traffic. In LTE, QoS is applied on the radio bearer, S1 bearer and S5/S8 bearer, collectively called the EPS bearer.

How do I gather geolocation data on GitHub users in a week's time, given their API rate limiting constraints?

To make a long story short, I need to gather geolocation data and signup date on ~23 million GitHub users for a historical data visualization project. I need to do this within a week's time.
I'm rate-limited to 5000 API calls/hour while authenticated. This is a very generous rate limit, but unfortunately, I've run into a major issue.
The GitHub API's "get all users" feature is great (it gives me around 30-50 users per API call, paginated), but it doesn't return the complete user information I need.
If you look at the following call (https://api.github.com/users?since=0), and compare it to a call to retrieve one user's information (https://api.github.com/users/mojombo), you'll notice that only the user-specific call retrieves the information I need such as "created_at": "2007-10-20T05:24:19Z" and "location": "San Francisco".
This means that I would have to make 1 API call per user to get the data I need. Processing 23,000,000 users then requires 4,600 hours or a little over half a year. I don't have that much time!
Are there any workarounds to this, or any other ways to get geolocation and sign up timestamp data of all GitHub users? If the paginated get all users API route returned full user info, it'd be smooth sailing.
I don't see anything obvious in the API. It's probably specifically designed to make mass data collection very slow. There are a few things you could do to try to speed this up.
Increase the page size
In your https://api.github.com/users?since=0 call add per_page=100. That will cut the number of calls needed to fetch the whole user list to roughly a third of the default.
Randomly sample the user list.
With a set of 23 million people you don't need to poll every single one to get good data about GitHub signup and location patterns. Since you're sampling the entire population randomly, there's no poll bias to account for.
If user 1200 signed up on 2015-10-10 and user 1235 also signed up on 2015-10-10 then you know users 1201 to 1234 also signed up on 2015-10-10. I'm going to assume you don't need any more granularity than that.
Location can also be randomly sampled. If you randomly sample 1 in 10 users, or even 1 in 100 (one per page), 230,000 out of 23 million is still a great polling sample. Professional national polls in the US have sample sizes of a few thousand people for an even bigger population.
How Much Sampling Can You Do In A Week?
A week gives you 168 hours or 840,000 requests.
You can get 1 user or 1 page of 100 users per request. Getting all the users, at 100 users per request, is 230,000 requests. That leaves you with 610,000 requests for individual users or about 1 in 37 users.
I'd go with 1 in 50 to account for download and debugging time. So you can poll two random users per page or about 460,000 out of 23 million. This is an excellent sample size.
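A sketch of that sampling loop, assuming Python with the requests package; GITHUB_TOKEN is a placeholder for your personal access token, and retry/rate-limit handling is omitted:
import os
import random
import requests
API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}  # placeholder env var
def sample_users(per_page_samples=2):
    since = 0
    while True:
        page = requests.get(f"{API}/users",
                            params={"since": since, "per_page": 100},
                            headers=HEADERS).json()
        if not page:
            break
        # Poll a couple of random users per page of 100 (roughly 1 in 50).
        for user in random.sample(page, min(per_page_samples, len(page))):
            detail = requests.get(f"{API}/users/{user['login']}",
                                  headers=HEADERS).json()
            yield user["id"], detail.get("created_at"), detail.get("location")
        since = page[-1]["id"]  # next page starts after the highest id seen
for uid, created, location in sample_users():
    print(uid, created, location)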

Possible to specify date_preset with insights edge in Facebook Ads API?

For the Marketing API, I know that I'm able to make one call to retrieve all of the adsets from a certain account along with their insights, but am I able to specify the date_preset for the insights edge in that same call?
For example, the following gives me lifetime insights stats:
/v2.4/{accountID}/adcampaigns?fields=insights
To be clear - I know this is possible to retrieve by making separate calls for each adset id (where I know I can specify the date_preset); instead, I'd like to do this via the call where I get a long list of the ad sets plus their insights details in one go.
Yes, this is possible using query expansion; however, you probably should not do it this way anyway.
Using query expansion results in multiple requests being executed in one HTTP call, in this case one to get all the adcampaigns, and then N requests where N is the number of adcampaigns returned. This will in turn affect your rate limiting.
The most efficient way to request all insights for all adcampaigns (ad sets) is instead to request them at the account level, specifying aggregation level:
/v2.4/act_{ADACCOUNT_ID}/insights?date_preset=last_7_days&level=campaign
This requires just one request, or as many requests as there are pages in the result set.
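For example, a minimal sketch of that account-level request, assuming Python with the requests package; the token and ad account id are placeholders:
import requests
ACCESS_TOKEN = "..."         # placeholder access token
AD_ACCOUNT_ID = "123456789"  # placeholder ad account id
resp = requests.get(
    f"https://graph.facebook.com/v2.4/act_{AD_ACCOUNT_ID}/insights",
    params={
        "level": "campaign",           # aggregate per ad set (adcampaign)
        "date_preset": "last_7_days",
        "access_token": ACCESS_TOKEN,
    },
)
for row in resp.json().get("data", []):
    print(row)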
If you really want to achieve this with query expansion, you can do the following for example:
/v2.4/act_{ADACCOUNT_ID}/adcampaigns?fields=insights.date_preset(last_30_days).time_increment(all_days)
Note that the parameters to insights, which would normally be query parameters of the form param_name=param_value, are now in the form param_name(param_value).
To specify the date_preset, here is the correct format. It's important to use insights as the edge to get the date_preset filtering.
/v2.10/act_{ADACCOUNT_ID}/insights?fields=impressions,clicks,ctr,unique_clicks,unique_ctr,spend,cpc&date_preset=last_3d
The above has been tested with the latest Graph API version (2.10) as of now. For more info on the date_preset values, refer to their API docs:
https://developers.facebook.com/docs/marketing-api/insights/parameters

Can response data from core reporting api be grouped?

Explanation:
I am able to query the Google Core Reporting API v3 using the client library to get data on pageviews for specific URLs of a website I am working on. I want to get data (pageviews) for each day within a specified range. So far I am simply looping through the range, sending one request per day to the API and setting the same value for the start date and the end date in each request.
Problem:
Obviously this gets the job done, BUT it is certainly not the best way to go about it, because, assuming I want to get data for the past 3 months (roughly 90 days) for each of about 2000 URIs, I would need about 180,000 requests, and that is well over the limit quota defined by Google.
Potential solution: One way I thought of solving this issue is to send a request setting the start-date and end-date a week apart, but then the API returns a sum of the values rather than the individual daily values.
Main question: So is there a way to insist that these values not be added up and returned as a sum, but rather returned separately (as an associative array or something like that) for each day?
I hope the question is clear and that there is a solution! Thank you!
Very straightforward:
Metric: ga:pageviews, Dimension: ga:date, set a filter for your page path, and set a start-date and end-date.
Example:
https://www.googleapis.com/analytics/v3/data/ga?ids=ga%3Axxyyzz&dimensions=ga%3Adate&metrics=ga%3Apageviews&filters=ga%3Apagepath%3D%3D%2Ffaq.html&start-date=2013-06-27&end-date=2013-07-11&max-results=50
This will return the pageviews for the faq.html page for each day in the time frame.
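For example, a minimal sketch using the Google API Python client for the v3 Core Reporting API; the credentials object and view (profile) id are placeholders you must supply:
from googleapiclient.discovery import build
service = build('analytics', 'v3', credentials=credentials)  # `credentials` assumed to be an authorized OAuth2 object
response = service.data().ga().get(
    ids='ga:XXYYZZ',               # placeholder view (profile) id
    start_date='2013-06-27',
    end_date='2013-07-11',
    metrics='ga:pageviews',
    dimensions='ga:date',          # one row per day instead of a single sum
    filters='ga:pagePath==/faq.html',
    max_results=50,
).execute()
for date, pageviews in response.get('rows', []):  # rows pair YYYYMMDD with pageviews
    print(date, pageviews)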
You should check out the Query Explorer. It's a great tool for working out how to structure queries.