Python Retrieving all Groups from a domain using OAuth2 - google-groups

Using Python: to get all groups in a domain, OAuth1 had a call like:
groupfeed = api(lambda: GROUPS_SERVICE.RetrieveAllGroups())
In OAuth2, would it be:
allGrps = client.groups().list(customer='my_company').execute()
I am looking for the equivalent code to get ALL groups in a domain. Thanks for your help and attention.

If you have more than 200 groups, the results will be paged and returned over multiple API calls. You need to keep retrieving pages until none are left:
all_groups = []
request = client.groups().list(customer='my_customer')
while True:  # loop until there is no nextPageToken
    this_page = request.execute()
    if 'items' in this_page:
        all_groups += this_page['items']
    if 'nextPageToken' in this_page:
        request = client.groups().list(
            customer='my_customer',
            pageToken=this_page['nextPageToken'])
    else:
        break
Also note that it's my_customer, not my_company.
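If you are on a recent version of the google-api-python-client library, the paging can also be handled with the service's list_next() helper, which builds the follow-up request from the previous response's nextPageToken. A minimal sketch, assuming client is the same authorized Directory service object as above:

```python
def fetch_all_groups(client):
    """Collect every group in the domain across all result pages."""
    all_groups = []
    request = client.groups().list(customer='my_customer')
    while request is not None:
        response = request.execute()
        all_groups.extend(response.get('items', []))
        # list_next() returns None once there is no nextPageToken,
        # so the loop terminates on its own.
        request = client.groups().list_next(request, response)
    return all_groups
```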

Related

Jira Tempo users are pseudonymised

I am trying to obtain the worklogs using the Jira Tempo REST API.
The server I am working with is an on-premises one.
The data extraction is straightforward with one exception: some users are renamed from john.doe to JIRAUSER12345.
I could not find any rule for this and I also couldn't find any way to map the JIRAUSER12345 to the actual username.
Is there any way of getting the real user name? Is it possible that I am missing some access rights (probably at team level) that prevent me from seeing the real user names?
Reading this article gives the reason for the anonymization:
https://tempo-io.atlassian.net/wiki/spaces/KB/pages/1196327022/Why+do+I+see+JIRAUSERxxxx+as+worklog+author
In order to get the correct user id I did something like:
usersCache = {}

def getUserbyKey(key):
    if key not in usersCache:
        query = {
            'key': key
        }
        response = requests.get(f"{JIRA_BASE_URL}/rest/api/latest/user",
                                auth=authorization, headers=headers, params=query)
        j = response.json()
        usersCache[key] = j["displayName"]
    j = usersCache.get(key)
    return j

...

for wl in worklogs:
    user = getUserbyKey(wl["worker"])
    key = wl["issue"]["key"]
    timeSpent = wl["timeSpent"]
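As an aside, the hand-rolled usersCache dict can be replaced with functools.lru_cache, which memoizes one lookup per key. A sketch under the same assumptions as the snippet above; JIRA_BASE_URL, authorization, and headers are placeholder values here, not real ones:

```python
import functools
import requests

# Hypothetical placeholders; in the script above these would be the
# real Jira base URL, auth tuple, and request headers.
JIRA_BASE_URL = "https://jira.example.com"
authorization = ("user", "api-token")
headers = {"Accept": "application/json"}

@functools.lru_cache(maxsize=None)
def get_user_by_key(key):
    # Each distinct key triggers exactly one HTTP request;
    # repeated lookups are served from the cache.
    response = requests.get(
        f"{JIRA_BASE_URL}/rest/api/latest/user",
        auth=authorization, headers=headers, params={"key": key})
    response.raise_for_status()
    return response.json()["displayName"]
```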

Retrieving SendGrid Transactional Templates List

I have been trying to retrieve the list of SendGrid transactional templates using the API. I'm using the correct API key but getting an empty array, while there are about 5 transactional templates in my SendGrid account. Here is the response:
{
"templates": []
}
Any guesses what could be wrong?
Any guesses what could be wrong?
Yep, their documentation could be!
I was also stuck on this problem and finally managed to solve it once I opened the devtools and saw how they request their own API from the UI. Long story short: one has to pass an additional generations=dynamic query parameter. Here is the C# code I use:
var client = new SendGridClient("key");
var response = await client.RequestAsync(
    SendGridClient.Method.GET,
    urlPath: "/templates",
    queryParams: "{\"generations\": \"dynamic\"}");
Using the PHP API 7.3.0:
require("../../sendgrid-php.php");
$apiKey = getenv('SENDGRID_API_KEY');
$sg = new \SendGrid($apiKey);
# Comma-delimited list specifying which generations of templates to return.
# Options are legacy, dynamic, or legacy,dynamic.
$query_params = json_decode('{"generations": "legacy,dynamic"}');
try {
    # $response = $sg->client->templates()->get();
    $response = $sg->client->templates()->get(null, $query_params);
    echo $response->body();
    exit;
} catch (Exception $e) {
    echo '{"error":"Caught exception: ' . $e->getMessage() . '"}';
}
I had the same problem using the python wrapper provided by Sendgrid.
My code was similar to this:
response = SendGridAPIClient(<your api key>).client.templates.get({'generations': 'legacy,dynamic'})
This returned an empty array.
To fix it, you either have to name the parameter or pass None before the dict:
response = SendGridAPIClient(<your api key>).client.templates.get(None, {'generations': 'legacy,dynamic'})
or
response = SendGridAPIClient(<your api key>).client.templates.get(query_params={'generations': 'legacy,dynamic'})
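If you want to rule out wrapper quirks entirely, you can call the v3 endpoint directly with requests and pass generations as an ordinary query parameter. A minimal sketch; reading the API key from the SENDGRID_API_KEY environment variable is an assumption of this example:

```python
import os
import requests

def list_templates(generations="legacy,dynamic"):
    """Fetch templates straight from the SendGrid v3 API."""
    # The generations query parameter is the detail that matters:
    # without it, dynamic templates are silently omitted.
    response = requests.get(
        "https://api.sendgrid.com/v3/templates",
        headers={"Authorization": f"Bearer {os.environ['SENDGRID_API_KEY']}"},
        params={"generations": generations},
    )
    response.raise_for_status()
    return response.json()["templates"]
```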

GitHub API get user's last login

I have a GitHub organization and I am trying to determine the last login dates for all of the users in the organization.
I see that there is a way to get last commits, but there are some users that only do pulls so this would not work.
The /users/:user/events call doesn't return any results for me.
Neither the GitHub Users API nor the Events API includes that information.
I suspect a user's last login time is considered private, meaning you are not supposed to know when (or whether) a user last logged in.
The GitHub privacy statement says that "User Personal Information does not include aggregated, non-personally identifying information"; a user's last login time is likely treated as User Personal Information under that statement.
If you have GitHub Enterprise Cloud, you may be able to use the audit log to determine whether there has been activity in the last 90 days.
Untested but a sample approach:
"""
Script to retrieve all audit events for a user.
Note need GitHub enterprise cloud
"""
import csv
import os
import requests
# Fetch required values from environment variables
github_username = os.environ.get('GITHUB_USERNAME')
github_password = os.environ.get('GITHUB_PASSWORD')
github_base_url = os.environ.get('GITHUB_BASE_URL', 'https://api.github.com')
github_org = os.environ.get('GITHUB_ORG', 'my-org')
per_page = int(os.environ.get('PER_PAGE', 100))
csv_path = os.environ.get('CSV_PATH', 'output/permissions_for_team_repos.csv')
# Check credentials have been supplied
if not github_username or not github_password:
raise ValueError('GITHUB_USERNAME and GITHUB_PASSWORD must be supplied')
# Prepare a requests session to reuse authentication configuration
# auth parameter will use basic auth automatically
session = requests.Session()
session.auth = (github_username, github_password)
session.headers = {'accept': 'application/vnd.github.v3+json'}
def generate_members():
"""
Generator function to paginate through all organisation members (users)
https://docs.github.com/en/rest/reference/orgs#list-organization-members
"""
# Fetch the repositories for the specified team
url = f"{github_base_url}/orgs/{github_org}/members"
# Set pagination counters
page = 1
paginate = True
# Paginate until there are no repos left
while paginate:
response = session.get(url, params={'per_page': per_page, 'page': page})
if response.status_code > 399:
raise Exception(f"GitHub API response code: {response.status_code}")
members = response.json()
records = len(members)
# Check if the current page contains any records, if not continue looping
if records == 0:
paginate = False
# Fetch source repo permissions
for member in members:
yield member
# Increment page counter for next loop
page += 1
def generate_member_audit_events(member_name):
"""
Generator function to fetch organisation audit events for a specific member
https://docs.github.com/en/rest/reference/orgs#get-the-audit-log-for-an-organization
https://docs.github.com/en/organizations/keeping-your-organization-secure/reviewing-the-audit-log-for-your-organization
"""
url = f"{github_base_url}/orgs/{github_org}/audit-log"
# Set pagination counters
page = 1
paginate = True
# Paginate until there are no teams left
while paginate:
response = session.get(url, params={'per_page': per_page, 'page': page, 'phrase': f'actor:{member_name}'})
if response.status_code > 399:
raise Exception(f"GitHub API response code: {response.status_code}")
audit_events = response.json()
records = len(audit_events)
# Check if the current page contains any records, if not continue looping
if records == 0:
paginate = False
# Fetch source repo permissions
for audit_event in audit_events:
yield audit_event
# Increment page counter for next loop
page += 1
with open(csv_path, 'w', newline='') as csvfile:
fieldnames = ['member', 'member_type', 'site_admin', 'action', 'created_at']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for member in generate_members():
row = {'member': member['login'], 'member_type': member['type'], 'site_admin': member['site_admin']}
for audit_event in generate_member_audit_events(member['login']):
row['action'] = audit_event['action']
row['created_at'] = audit_event['created_at']
writer.writerow(row)

Tornado facebook_request() to get email

I'm using Tornado and trying to get a Facebook user's email address from the Graph API. I have the following code (most of which is from the Tornado website):
class FacebookAuth2Handler(BaseHandler, tornado.auth.FacebookGraphMixin):
    @tornado.gen.coroutine
    def get(self):
        if self.get_argument("code", False):
            user = yield self.get_authenticated_user(
                redirect_uri=self.settings["facebook_redirect_uri"],
                client_id=self.settings["facebook_app_id"],
                client_secret=self.settings["facebook_secret"],
                code=self.get_argument("code"))
            ob = yield self.facebook_request("/me/email",
                                            access_token=user["access_token"])
            print(ob)
        else:
            yield self.authorize_redirect(
                redirect_uri=self.settings["facebook_redirect_uri"],
                client_id=self.settings["facebook_app_id"],
                extra_params={"scope": ["email", "public_profile"]})
The problem seems to be fetching /me/email with facebook_request(); this crashes with the following:
tornado.auth.AuthError: Error response HTTP 400: Bad Request fetching https://graph.facebook.com/me/email?access_token=xxxxxxx
Setting the path to "/me/email" is not valid, and setting it to "/me?fields=email" causes the request URL to be sent as "/me?fields=email?access_token=xxxxxxx", which is no good either.
Use the fields parameter instead:
ob = yield self.facebook_request(
    path="/me",
    access_token=user["access_token"],
    fields="email,gender"
)
Or you can really simplify things by adding the extra_fields parameter to get_authenticated_user. Note that it is a Python list, not a comma-separated string as above:
user = yield self.get_authenticated_user(
    redirect_uri=self.settings["facebook_redirect_uri"],
    client_id=self.settings["facebook_app_id"],
    client_secret=self.settings["facebook_secret"],
    code=self.get_argument("code"),
    extra_fields=['email', 'gender']
)
Any missing or unpermitted fields will show as None in the returned user mapping object.

Limit of 100 items for graph statuses?

I'm working on a console application to download statuses and such from my own account -- nothing production or public. I'm finding that I can only get the last 100 statuses, but I was hoping to at least go a couple of years back.
I'm using the C# API, with something like:
dynamic response = Client.Get(string.Format("{0}/statuses", Secrets.FacebookUserName));
while (response.data.Count > 0)
{
    foreach (dynamic status in response.data)
    {
        // do stuff
    }
    response = Client.Get(response.paging.next);
}
This works fine, but stops after 100 records.
I see the same thing when trying to use FQL:
dynamic x = Client.Get("fql", new { q = "select message from status where uid=me() limit 1000" });
Do I need to go down the road of exploring the batch API?