The MS Graph API for Teams can create a team without the resourceProvisioningOptions property set.
But when getting all teams with PowerShell 0.9.5 (Microsoft.TeamsCmdlets.PowerShell.Custom.dll), it issues an HTTP GET to "https://graph.microsoft.com/beta/groups?$filter=resourceProvisioningOptions/Any(x:x eq 'Team')",
so it cannot return all teams.
According to your description, I assume you want to list the teams by using PowerShell.
I have tried this, and it works. First, I ran the following API call, GET https://graph.microsoft.com/beta/groups?$filter=resourceProvisioningOptions/Any(x:x eq 'Team'), in the Microsoft Graph Explorer, and it worked. Then I ran the same call from PowerShell, and it worked too.
According to this document,
If the group was created less than 15 minutes ago, it's possible for the Create team call to fail with a 404 error code due to replication delays.
So that may be the reason you could not get all the teams.
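To make the behavior concrete, here is a minimal Python sketch of the request the cmdlet issues and a loop that follows @odata.nextLink paging. Hedged: the `fetch` callable is a stand-in for any authenticated HTTP client; only the endpoint and $filter are taken from the question.

```python
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/beta"

def teams_url() -> str:
    # Only groups provisioned as a Team match this filter, which is why
    # a team created via Graph without resourceProvisioningOptions set
    # never shows up in the cmdlet's results.
    flt = "resourceProvisioningOptions/Any(x:x eq 'Team')"
    return f"{GRAPH}/groups?$filter={quote(flt, safe='/()')}"

def all_teams(fetch):
    """fetch(url) -> parsed JSON dict; follows @odata.nextLink pages."""
    url, teams = teams_url(), []
    while url:
        page = fetch(url)
        teams.extend(page.get("value", []))
        url = page.get("@odata.nextLink")
    return teams
```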
Normally when I want to page results from a REST API endpoint, I would use $top + $skip.
However, when the endpoint wraps a Generic Inquiry, $top + $skip no longer affect the returned results. ($filter still works)
My goal is to export data from [GLTran], and given the high number of records in this table, I need to be able to page the results.
Is there a way to do this? Or is there a better way to export all columns from [GLTran]?
There is no option available to page GI results via the API. Your best bet is to configure parameters for the GI so that it shows GLTran for a range of GL batches (say from Batch X to Batch Y of Module Z). Then you simply PUT parameter values through the API call and export GLTran records in batches.
I also highly recommend checking the Exporting Records from Acumatica via the REST Contract-Based API topic for examples of how to implement pagination across multiple REST requests. The $skip query option does not skip records during the export from Acumatica; what it actually does is exclude the first N records from the result set returned by the API.
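A sketch of that batch-window approach in Python. Hedged: the parameter names `FromBatch`/`ToBatch` and the `put_gi` callable are hypothetical placeholders for a real PUT against your GI endpoint with your configured parameter names.

```python
def export_in_windows(put_gi, windows):
    """put_gi(params) PUTs the GI with the given parameter values and
    returns the rows it produced; `windows` is a list of (start, end)
    batch-number pairs that together cover the whole table."""
    rows = []
    for start, end in windows:
        rows.extend(put_gi({"FromBatch": start, "ToBatch": end}))
    return rows
```

Each window keeps the result set small enough for a single request, which sidesteps the missing $top/$skip support on GI-backed endpoints.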
I am running a request against the JIRA REST API and I am getting an error:
Response status code does not indicate success: 400 (Bad Request).
I am thinking it has to do with a maximum number of items that can be in the JQL search query.
My query looks something like this:
search?jql=issue+in+('PROJ-741','PROJ-724','PROJ-851','PROJ-854','PROJ-856','PROJ-857','PROJ-980','PROJ-1133','PROJ-1132','PROJ-1071','PROJ-852','PROJ-727','PROJ-725','PROJ-853','PROJ-726','PROJ-434','PROJ-436','PROJ-433','PROJ-734','PROJ-733','PROJ-732','PROJ-182','PROJ-174','PROJ-173','PROJ-133','PROJ-301','PROJ-300','PROJ-281','PROJ-266','PROJ-253','PROJ-293','PROJ-287','PROJ-284','PROJ-276','PROJ-271','PROJ-262','PROJ-248','PROJ-214','PROJ-322','PROJ-323','PROJ-310','PROJ-332','PROJ-399','PROJ-600','PROJ-346','PROJ-389','PROJ-409','PROJ-521','PROJ-505','PROJ-490','PROJ-432','PROJ-486','PROJ-464','PROJ-438','PROJ-566','PROJ-534','PROJ-471','PROJ-178','PROJ-240','PROJ-210','PROJ-205','PROJ-655','PROJ-427','PROJ-419','PROJ-422','PROJ-426','PROJ-441','PROJ-442','PROJ-193','PROJ-194','PROJ-197','PROJ-195','PROJ-196','PROJ-513','PROJ-198','PROJ-514','PROJ-199','PROJ-516','PROJ-515','PROJ-200','PROJ-517','PROJ-201','PROJ-441','PROJ-188','PROJ-190','PROJ-189','PROJ-191','PROJ-192','PROJ-134','PROJ-213','PROJ-217','PROJ-219','PROJ-238','PROJ-237','PROJ-239','PROJ-221','PROJ-330','PROJ-418','PROJ-119','PROJ-463','PROJ-789','PROJ-331','PROJ-837','PROJ-959','PROJ-864','PROJ-957','PROJ-787','PROJ-445','PROJ-476','PROJ-786','PROJ-790','PROJ-791','PROJ-792')&startAt=0&maxResults=900&fields=labels,assignee,components,id,key,created,resolutiondate,customfield_10100,summary,issuetype,status,priority
I could try to batch these up into multiple queries, but I first wanted to see if there was any documented limit (I couldn't find anything mentioned in the documentation).
No restriction on JQL query size or field count is mentioned in the Atlassian documentation.
But if the query is too large to be encoded as a query parameter, you should instead POST to this resource.
Your JQL uses some reserved characters, which is why it returns a 400 error.
I have corrected your query; it now compiles correctly and returns a 200 status.
Here it is:
search?jql=issue%20in%20('PROJ-741'%2C'PROJ-724'%2C'PROJ-851'%2C'PROJ-854'%2C'PROJ-856'%2C'PROJ-857'%2C'PROJ-980'%2C'PROJ-1133'%2C'PROJ-1132'%2C'PROJ-1071'%2C'PROJ-852'%2C'PROJ-727'%2C'PROJ-725'%2C'PROJ-853'%2C'PROJ-726'%2C'PROJ-434'%2C'PROJ-436'%2C'PROJ-433'%2C'PROJ-734'%2C'PROJ-733'%2C'PROJ-732'%2C'PROJ-182'%2C'PROJ-174'%2C'PROJ-173'%2C'PROJ-133'%2C'PROJ-301'%2C'PROJ-300'%2C'PROJ-281'%2C'PROJ-266'%2C'PROJ-253'%2C'PROJ-293'%2C'PROJ-287'%2C'PROJ-284'%2C'PROJ-276'%2C'PROJ-271'%2C'PROJ-262'%2C'PROJ-248'%2C'PROJ-214'%2C'PROJ-322'%2C'PROJ-323'%2C'PROJ-310'%2C'PROJ-332'%2C'PROJ-399'%2C'PROJ-600'%2C'PROJ-346'%2C'PROJ-389'%2C'PROJ-409'%2C'PROJ-521'%2C'PROJ-505'%2C'PROJ-490'%2C'PROJ-432'%2C'PROJ-486'%2C'PROJ-464'%2C'PROJ-438'%2C'PROJ-566'%2C'PROJ-534'%2C'PROJ-471'%2C'PROJ-178'%2C'PROJ-240'%2C'PROJ-210'%2C'PROJ-205'%2C'PROJ-655'%2C'PROJ-427'%2C'PROJ-419'%2C'PROJ-422'%2C'PROJ-426'%2C'PROJ-441'%2C'PROJ-442'%2C'PROJ-193'%2C'PROJ-194'%2C'PROJ-197'%2C'PROJ-195'%2C'PROJ-196'%2C'PROJ-513'%2C'PROJ-198'%2C'PROJ-514'%2C'PROJ-199'%2C'PROJ-516'%2C'PROJ-515'%2C'PROJ-200'%2C'PROJ-517'%2C'PROJ-201'%2C'PROJ-441'%2C'PROJ-188'%2C'PROJ-190'%2C'PROJ-189'%2C'PROJ-191'%2C'PROJ-192'%2C'PROJ-134'%2C'PROJ-213'%2C'PROJ-217'%2C'PROJ-219'%2C'PROJ-238'%2C'PROJ-237'%2C'PROJ-239'%2C'PROJ-221'%2C'PROJ-330'%2C'PROJ-418'%2C'PROJ-119'%2C'PROJ-463'%2C'PROJ-789'%2C'PROJ-331'%2C'PROJ-837'%2C'PROJ-959'%2C'PROJ-864'%2C'PROJ-957'%2C'PROJ-787'%2C'PROJ-445'%2C'PROJ-476'%2C'PROJ-786'%2C'PROJ-790'%2C'PROJ-791'%2C'PROJ-792')&maxResults=900&fields=labels%2Cassignee%2Ccomponents%2Cid%2Ckey%2Ccreated%2Cresolutiondate%2Ccustomfield_10100%2Csummary%2Cissuetype%2Cstatus%2Cpriority
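If the URL still grows too long, Jira's documented alternative is to POST the same search to /rest/api/2/search with a JSON body, which avoids URL-encoding entirely. A sketch of building that body in Python (sending it is left to whatever HTTP client you use):

```python
import json

def search_body(issue_keys, fields, start_at=0, max_results=100):
    # Same query as the GET version, but as a POST payload: reserved
    # characters in the JQL need no escaping at all.
    jql = "issue in (%s)" % ",".join(f"'{k}'" for k in issue_keys)
    return json.dumps({"jql": jql, "startAt": start_at,
                       "maxResults": max_results, "fields": fields})
```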
I would like to list public GitHub repositories with the latest create/update/push timestamps (for me any of these is acceptable). Can I achieve this with the GitHub API?
I have tried the following:
Tried using /repositories endpoint, and use the link header to navigate to the last page. However, the link header I receive only has first and next links, whereas I need a last link.
Tried using /search/repositories endpoint. This will work as long as I have a keyword or filter in the q parameter, but it will not accept an empty q parameter.
I got in touch with GitHub support, and there are two solutions to this:
Use binary search on the since parameter of the /repositories endpoint to find the last page.
Cons: may quickly exhaust the API rate limit.
Use the /search/repositories endpoint with an always-true predicate such as stars>=0.
Cons: likely to cause a query timeout or incomplete results.
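The first suggestion can be sketched as a binary search in Python. Hedged: `page_for(since)` is a stand-in for GET /repositories?since=<id> returning the parsed list of repos (dicts with an "id"), and `hi` is an assumed upper bound on repository ids; each probe costs one API request, hence the rate-limit concern.

```python
def newest_repo_id(page_for, hi=10**10):
    """Largest repository id, found by probing the `since` parameter."""
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        page = page_for(mid)
        if page:                    # repos exist after `mid`:
            lo = page[-1]["id"]     # jump to the last id on this page
        else:                       # overshot: nothing after `mid`
            hi = mid
    return lo
```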
I want to do a query like this:
search?q=KEY_NAME&type=page&fields=id,name,location&limit=500&offset=0
When I run this the first time, I get about 470 results. I then set offset to 471 and repeat the query:
search?q=KEY_NAME&type=page&fields=id,name,location&limit=500&offset=471
and the result is empty.
Why? The keyword is a common word like "fan", and I don't believe there are only 471 matching Facebook Pages!
What is the problem?
Never use a limit that high; as far as I know, a limit of 100 should be the maximum, and anything higher may be buggy. If you use this API call, you get more than 500 results with paging:
/search?pretty=0&fields=id,name,location&q=fan&type=page&limit=100
Don't use "offset"; always use the "next" link in the JSON response to get the next batch of results: https://developers.facebook.com/docs/graph-api/using-graph-api/v2.4#paging
For me, the next 100 entries were available at the following endpoint:
/search?pretty=0&fields=id,name,location&q=fan&type=page&limit=100&after=OTkZD
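That cursor-following approach can be sketched in Python. Hedged: `fetch` is a placeholder for an HTTP GET returning the parsed JSON; the "data"/"paging"/"next" keys follow the Graph API response shape documented at the link above.

```python
def search_all_pages(fetch, first_url):
    """Collect every result by walking the paging.next cursor links
    instead of computing offsets by hand."""
    results, url = [], first_url
    while url:
        page = fetch(url)
        results.extend(page.get("data", []))
        url = page.get("paging", {}).get("next")
    return results
```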
Please refer to the following blog post:
https://developers.facebook.com/blog/post/478/
The gist of it:
As such, when querying the following tables and connections, use time-based paging instead of “offset” to ensure you are getting back as many results as possible with each call. For these Graph API connections, use the “since” and “until” parameters