Slack API: Retrieve all member emails from a Slack channel

Given the name of a Slack channel, is there a way to retrieve a list of the emails of all the members in that channel? I looked in the Slack API docs but couldn't find the method I need to make this happen (https://api.slack.com/methods).

Provided you have the necessary scopes, you can retrieve the emails of all members of a channel, starting from the channel name, as follows:
Call channels.list to get the list of all channels and to resolve the channel name to its ID
Call channels.info with that channel ID to get the list of the channel's members
Call users.list to retrieve the list of all Slack users, including their profile information and emails
Match the channel's member list against the user list by user ID to get the corresponding users and emails
Note that this also works for private channels using groups.list and groups.info, but only if the user or bot related to the access token is a member of that private channel.
Update 2019
Would strongly recommend using the newer conversations.* methods instead of channels.* and groups.*, because they are more flexible and there are some cases where the older methods will not work (e.g. converted channels).

Here's a version that works with Python 2 or 3 using up-to-date APIs.
import requests

SLACK_API_TOKEN = 'xoxb-TOKENID'  # Your token here
CHANNEL_NAME = 'general'          # Your channel here

# Resolve the channel name to its ID
channel_list = requests.get(
    'https://slack.com/api/conversations.list?token=%s&types=%s'
    % (SLACK_API_TOKEN, 'public_channel,private_channel,im,mpim')).json()['channels']
for c in channel_list:
    if 'name' in c and c['name'] == CHANNEL_NAME:
        channel = c

# Get the member IDs of that channel
members = requests.get(
    'https://slack.com/api/conversations.members?token=%s&channel=%s'
    % (SLACK_API_TOKEN, channel['id'])).json()['members']

# Get all users and print the emails of those who are channel members
users_list = requests.get(
    'https://slack.com/api/users.list?token=%s' % SLACK_API_TOKEN).json()['members']
for user in users_list:
    if 'email' in user['profile'] and user['id'] in members:
        print(user['profile']['email'])
Note that you'll need to create a Slack App with an OAuth API token and the following scopes authorized for this to work for all of the various types of conversations:
channels:read
groups:read
im:read
mpim:read
users:read
users:read.email
Also, to read from private channels or chats, you'll need to add your app to the Workspace and "/invite appname" for each channel you're interested in.

Note: channels.list, channels.info, and users.list are deprecated and will be retired (cease functioning) on November 25, 2020.
Replace them with conversations.list, conversations.members, and users.info.
You can get the emails this way:
conversations.list - Get the list of Channel Id (public or private)
conversations.members - Get the list of Member Id by Channel Id
users.info - Get the Email by Member Id

Here's the Python code:
import requests

SLACK_API_TOKEN = ""  # get one from https://api.slack.com/docs/oauth-test-tokens
CHANNEL_NAME = ""

# For public channels:
# channel_list = requests.get('https://slack.com/api/channels.list?token=%s' % SLACK_API_TOKEN).json()['channels']
# channel = filter(lambda c: c['name'] == CHANNEL_NAME, channel_list)[0]
# channel_info = requests.get('https://slack.com/api/channels.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['channel']
# members = channel_info['members']

# For private channels (groups):
channel_list = requests.get('https://slack.com/api/groups.list?token=%s' % SLACK_API_TOKEN).json()['groups']
channel = filter(lambda c: c['name'] == CHANNEL_NAME, channel_list)[0]
channel_info = requests.get('https://slack.com/api/groups.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['group']
print channel_info

members = channel_info['members']
users_list = requests.get('https://slack.com/api/users.list?token=%s' % SLACK_API_TOKEN).json()['members']
users = filter(lambda u: u['id'] in members, users_list)
for user in users:
    first_name, last_name = '', ''
    if user['real_name']:
        first_name = user['real_name']
        if ' ' in user['real_name']:
            first_name, last_name = user['real_name'].split(' ', 1)
    # print "%s,%s,%s" % (first_name, last_name, user['profile']['email'])
    print "%s" % (user['profile']['email'])

I just made a small Ruby script that retrieves all members of a Slack channel and returns them in CSV format.
Script: https://github.com/olivernadj/toolbox/tree/master/slack-members
Example:
$ ./membersof.rb -t xoxp-123456789A-BCDEF01234-56789ABCDE-F012345678 -g QWERTYUIO
first_name,last_name,email
John,Doe,john.doe@example.com
Jane,Doe,jane.doe@example.com

Based on the answer by @Lam, I modified it to work with Python 3.
import requests

SLACK_API_TOKEN = ""  # get one from https://api.slack.com/docs/oauth-test-tokens
CHANNEL_NAME = ""

# For public channels:
# channel_list = requests.get('https://slack.com/api/channels.list?token=%s' % SLACK_API_TOKEN).json()['channels']
# channel = filter(lambda c: c['name'] == CHANNEL_NAME, channel_list)[0]
# channel_info = requests.get('https://slack.com/api/channels.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['channel']
# members = channel_info['members']

# For private channels (groups):
channel_list = requests.get('https://slack.com/api/groups.list?token=%s' % SLACK_API_TOKEN).json()['groups']
for c in channel_list:
    if c['name'] == CHANNEL_NAME:
        channel = c

channel_info = requests.get('https://slack.com/api/groups.info?token=%s&channel=%s' % (SLACK_API_TOKEN, channel['id'])).json()['group']
print(channel_info)

members = channel_info['members']
users_list = requests.get('https://slack.com/api/users.list?token=%s' % SLACK_API_TOKEN).json()['members']
for user in users_list:
    # Only print the emails of users who are members of the channel
    if user['id'] in members and 'email' in user['profile']:
        print(user['profile']['email'])

Ruby solution using slack-ruby-client:
Scopes:
channels:read
users.profile:read
users:read.email
users:read
require 'slack-ruby-client'

Slack.configure do |config|
  config.token = ENV['SLACK_TOKEN_IN_BASH_PROFILE']
end

client = Slack::Web::Client.new

CH = '#channel-name'

client.conversations_members(channel: CH).members.each do |user|
  puts client.users_profile_get(user: user).profile.email
end

I'm not sure if these are all outdated but I couldn't get any of them to work. The best way I found to do it was to use the client.conversations_members method to find all user IDs and then get emails for those users.
import os

import slack


def get_channel_emails(channel_id: str) -> list:
    client = slack.WebClient(token=os.getenv("SLACK_TOKEN"))
    result = client.conversations_members(channel=channel_id)
    emails = []
    for user in result['members']:
        info = client.users_info(user=user).data
        if 'email' in info['user']['profile'].keys():
            emails.append(info['user']['profile']['email'])
    return emails
Some notable roadblocks are:
The slack package is actually slackclient so use pip install slackclient instead
The channel_id is not the channel name but the ID Slack gives to the channel. It can be found in the URL path of the browser version and is formatted like CXXXXXXXXXX (a name-to-ID lookup sketch follows below).
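If you only have the channel name, it can be resolved to the ID first with conversations_list. Here is a minimal sketch assuming the same slackclient WebClient and SLACK_TOKEN environment variable as above; it does not handle cursor pagination, which larger workspaces would need:
import os

import slack


def get_channel_id(channel_name: str) -> str:
    """Resolve a channel name (without the leading '#') to its ID."""
    client = slack.WebClient(token=os.getenv("SLACK_TOKEN"))
    response = client.conversations_list(types="public_channel,private_channel")
    for channel in response["channels"]:
        if channel["name"] == channel_name:
            return channel["id"]
    raise ValueError("No channel named {} found".format(channel_name))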

If you need the emails of all users in a Slack channel without coding:
Go to the channel settings; there is an option to "Copy member email address".
With the Slack API:
conversations.list - Get the list of Channel Id (public or private)
conversations.members - Get the list of Member Id by Channel Id
users.info - Get the Email by Member Id
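For illustration, here is a minimal sketch of those three calls using the requests library, with the token sent as a Bearer header; the token and channel name below are placeholders, and cursor pagination is omitted:
import requests

TOKEN = "xoxb-your-token"   # placeholder bot token
CHANNEL_NAME = "general"    # placeholder channel name
HEADERS = {"Authorization": "Bearer " + TOKEN}

# 1) conversations.list: resolve the channel name to its ID
channels = requests.get("https://slack.com/api/conversations.list",
                        headers=HEADERS,
                        params={"types": "public_channel,private_channel"}).json()["channels"]
channel_id = next(c["id"] for c in channels if c["name"] == CHANNEL_NAME)

# 2) conversations.members: list the member IDs of that channel
member_ids = requests.get("https://slack.com/api/conversations.members",
                          headers=HEADERS,
                          params={"channel": channel_id}).json()["members"]

# 3) users.info: look up each member and print their email (if visible to the token)
for member_id in member_ids:
    user = requests.get("https://slack.com/api/users.info",
                        headers=HEADERS,
                        params={"user": member_id}).json()["user"]
    email = user["profile"].get("email")
    if email:
        print(email)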

With Python 3 and the slackclient package:
pip3 install slackclient
import slack


def get_channel_emails(channel_id: str):
    slack_api_bot_token = 'YOUR_BOT_TOKEN'
    ## Required bot token scopes ##
    # channels:read
    # groups:read
    # im:read
    # mpim:read
    # users:read
    client = slack.WebClient(token=slack_api_bot_token)
    result = client.conversations_members(channel=channel_id)
    i = 0
    for user in result['members']:
        # print(user)
        info = client.users_info(user=user).data
        i = i + 1
        # print(info)
        # Fall back to 'null' for any missing or empty field
        member_id = info['user'].get('id') or 'null'
        team_id = info['user'].get('team_id') or 'null'
        display_name = info['user'].get('name') or 'null'
        real_name = info['user'].get('real_name') or 'null'
        phone = info['user']['profile'].get('phone') or 'null'
        email = info['user']['profile'].get('email') or 'null'
        print(f'{i},{real_name},{display_name},{team_id},{member_id},{email},{phone}')


def main():
    # channel id: https://app.slack.com/huddle/TB37ZG064/CB3CF4A7B
    # if the last part of the URL starts with "C", it is a channel ID
    get_channel_emails('CB3CF4A7B')


if __name__ == '__main__':
    main()

Related

Connecting to Facebook's API

I am facing an issue connecting to Facebook's API using the httr package; while testing the 'me' node I came across the following problem.
I was under the impression that the 'me' node does not require special permissions.
Testing in the browser with 'https://graph.facebook.com/me' gave the same result. It would be great if someone could provide an explanation.
library(httr)

# Define keys
app_id = 'my_app_id'
app_secret = 'my_app_secret'

# Define the app
fb_app <- oauth_app(appname = "facebook",
                    key = app_id,
                    secret = app_secret)

# Get OAuth user access token
fb_token <- oauth2.0_token(oauth_endpoints("facebook"),
                           fb_app,
                           scope = 'public_profile',
                           type = "application/x-www-form-urlencoded",
                           cache = TRUE)

response <- GET("https://graph.facebook.com",
                path = "/me",
                config = config(token = fb_token))

# Show content returned
content(response)
$error
$error$message
[1] "An active access token must be used to query information about the current user."
$error$type
[1] "OAuthException"
$error$code
[1] 2500
$error$fbtrace_id
[1] "ARRnb93rZHmWLlXK_MMJlfi"
Note that I have signed in using the app.

Container keeps on crashing while creating a deployment from a docker image in minikube

I have a Docker image containing Python files which should download satellite imagery from the SciHub website. The Docker image is working fine. Now when I want to create the deployment through kubectl so that I can expose it as a service, its container keeps on crashing. That's what the pod description says when seen through kubectl describe pod.
This is how I am trying to deploy it: sudo kubectl run back --image=back:latest --port=8080 --image-pull-policy Never. I also tried changing the port but it did not work. Here are the files within the Docker image.
Dockerfile
FROM python:3.7-stretch
COPY . /code
WORKDIR /code
RUN pip install -r requirements.txt
ENTRYPOINT ["python", "ingestion.py"]
ingestion.py
import os
import shutil
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
logger = logging.getLogger("ingestion")

import requests

import datahub

scihub_username = os.environ["scihub_username"]
scihub_password = os.environ["scihub_password"]
result_url = "http://" + os.environ["CDINRW_BASE_URL"] + "/jobs/" + os.environ["CDINRW_JOB_ID"] + "/results"

logger.info("Searching the Copernicus Open Access Hub")
scenes = datahub.search(username=scihub_username,
                        password=scihub_password,
                        producttype=os.getenv("producttype"),
                        platformname=os.getenv("platformname"),
                        days_back=os.getenv("days_back", 2),
                        footprint=os.getenv("footprint"),
                        max_cloud_cover_percentage=os.getenv("max_cloud_cover_percentage"),
                        start_date=os.getenv("start_date"),
                        end_date=os.getenv("end_date"))

logger.info("Found {} relevant scenes".format(len(scenes)))

job_results = []
for scene in scenes:
    # do not download a scene that has already been ingested
    if os.path.exists(os.path.join("/out_data", scene["title"] + ".SAFE")):
        logger.info("The scene {} already exists in /out_data and will not be downloaded again.".format(scene["title"]))
        filename = scene["title"] + ".SAFE"
    else:
        logger.info("Starting the download of scene {}".format(scene["title"]))
        filename = datahub.download(scene, "/tmp", scihub_username, scihub_password, unpack=True)
        logger.info("The download was successful.")
        shutil.move(filename, "/out_data")
    result_message = {"description": "test",
                      "type": "Raster",
                      "format": "SAFE",
                      "filename": os.path.basename(filename)}
    job_results.append(result_message)

res = requests.put(result_url, json=job_results, timeout=60)
res.raise_for_status()
datahub.py
import logging
import os
import urllib.parse
import zipfile

import requests

# constructing URLs for querying the data hub
_BASE_URL = "https://scihub.copernicus.eu/dhus/"
SITE = {}
SITE["SEARCH"] = _BASE_URL + "search?format=xml&sortedby=beginposition&order=desc&rows=100&start={offset}&q="
_PRODUCT_URL = _BASE_URL + "odata/v1/Products('{uuid}')/"
SITE["CHECKSUM"] = _PRODUCT_URL + "Checksum/Value/$value"
SITE["SAFEZIP"] = _PRODUCT_URL + "$value"

logger = logging.getLogger(__name__)


def _build_search_url(producttype=None, platformname=None, days_back=2, footprint=None,
                      max_cloud_cover_percentage=None, start_date=None, end_date=None):
    search_terms = []
    if producttype:
        search_terms.append("producttype:{}".format(producttype))
    if platformname:
        search_terms.append("platformname:{}".format(platformname))
    if start_date and end_date:
        search_terms.append(
            "beginPosition:[{}+TO+{}]".format(start_date, end_date))
    elif days_back:
        search_terms.append(
            "beginPosition:[NOW-{}DAYS+TO+NOW]".format(days_back))
    if footprint:
        search_terms.append("footprint:%22Intersects({})%22".format(
            footprint.replace(" ", "+")))
    if max_cloud_cover_percentage:
        search_terms.append("cloudcoverpercentage:[0+TO+{}]".format(max_cloud_cover_percentage))
    url = SITE["SEARCH"] + "+AND+".join(search_terms)
    return url


def _unpack(zip_file, directory, remove_after=False):
    with zipfile.ZipFile(zip_file) as zf:
        # This assumes that the zipfile only contains the .SAFE directory at root level
        safe_path = zf.namelist()[0]
        zf.extractall(path=directory)
    if remove_after:
        os.remove(zip_file)
    return os.path.normpath(os.path.join(directory, safe_path))
def search(username, password, producttype=None, platformname=None, days_back=2, footprint=None,
           max_cloud_cover_percentage=None, start_date=None, end_date=None):
    """ Search the Copernicus SciHub

    Parameters
    ----------
    username : str
        user name for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    producttype : str, optional
        product type to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    platformname : str, optional
        platform name to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    days_back : int, optional
        number of days before today that will be searched. Default are the last 2 days. If start and end date are set the days_back parameter is ignored
    footprint : str, optional
        well-known-text representation of the footprint
    max_cloud_cover_percentage: str, optional
        percentage of cloud cover per scene. Can only be used in combination with Sentinel-2 imagery.
        (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    start_date: str, optional
        start point of the search extent; has to be used in combination with end_date
    end_date: str, optional
        end point of the search extent; has to be used in combination with start_date

    Returns
    -------
    list
        a list of scenes that match the search parameters
    """
    import xml.etree.cElementTree as ET
    scenes = []
    search_url = _build_search_url(producttype, platformname, days_back, footprint,
                                   max_cloud_cover_percentage, start_date, end_date)
    logger.info("Search URL: {}".format(search_url))
    offset = 0
    rowsBreak = 5000
    name_space = {"atom": "http://www.w3.org/2005/Atom",
                  "opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
    while offset < rowsBreak:  # Next pagination page:
        response = requests.get(search_url.format(offset=offset), auth=(username, password))
        root = ET.fromstring(response.content)
        if offset == 0:
            rowsBreak = int(
                root.find("opensearch:totalResults", name_space).text)
        for e in root.iterfind("atom:entry", name_space):
            uuid = e.find("atom:id", name_space).text
            title = e.find("atom:title", name_space).text
            begin_position = e.find(
                "atom:date[@name='beginposition']", name_space).text
            end_position = e.find(
                "atom:date[@name='endposition']", name_space).text
            footprint = e.find("atom:str[@name='footprint']", name_space).text
            scenes.append({
                "id": uuid,
                "title": title,
                "begin_position": begin_position,
                "end_position": end_position,
                "footprint": footprint})
        # Ultimate DHuS pagination page size limit (rows per page).
        offset += 100
    return scenes
def download(scene, directory, username, password, unpack=True):
    """ Download a Sentinel scene based on its uuid

    Parameters
    ----------
    scene : dict
        the scene to be downloaded
    directory : str
        the path where the file will be downloaded to
    username : str
        username for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    unpack: boolean, optional
        flag that defines whether the downloaded product should be unpacked after download. defaults to true

    Raises
    ------
    ValueError
        if the size of the downloaded file does not match the Content-Length header
    ValueError
        if the checksum of the downloaded file does not match the checksum provided by the Copernicus SciHub

    Returns
    -------
    str
        path to the downloaded file
    """
    import hashlib
    md5hash = hashlib.md5()
    md5sum = requests.get(SITE["CHECKSUM"].format(
        uuid=scene["id"]), auth=(username, password)).text
    download_path = os.path.join(directory, scene["title"] + ".zip")
    # overwrite if path already exists
    if os.path.exists(download_path):
        os.remove(download_path)
    url = SITE["SAFEZIP"].format(uuid=scene["id"])
    rsp = requests.get(url, auth=(username, password), stream=True)
    cl = rsp.headers.get("Content-Length")
    size = int(cl) if cl else -1
    # Actually fetch now:
    with open(download_path, "wb") as f:  # Do not read as a whole into memory:
        written = 0
        for block in rsp.iter_content(8192):
            f.write(block)
            written += len(block)
            md5hash.update(block)
    written = os.path.getsize(download_path)
    if size > -1 and written != size:
        raise ValueError("{}: size mismatch, {} bytes written but expected {} bytes to write!".format(
            download_path, written, size))
    elif md5sum:
        calculated = md5hash.hexdigest()
        expected = md5sum.lower()
        if calculated != expected:
            raise ValueError("{}: checksum mismatch, calculated {} but expected {}".format(
                download_path, calculated, expected))
    if unpack:
        return _unpack(download_path, directory, remove_after=True)
    return download_path
Pod events:
Events:
  Type     Reason   Age                        From               Message
  ----     ------   ----                       ----               -------
  Warning  BackOff  2m39s (x18636 over 2d19h)  kubelet, minikube  Back-off restarting failed container
The system that wants to use this service already has another main front-end service (which just runs the application) running on port 8081, so maybe I need to expose this on the same port. How can I get the deployment running?

How to fetch collection of Zuora Accounts using REST API

I want to fetch all customer accounts from Zuora. Apart from the Exports REST API, is there any API available to fetch all accounts as a paginated list?
This is the format I used to fetch revenue invoices; use this code and change the query/endpoint as needed.
import json
import sys
import time

import pandas as pd
import requests

# Set the sleep time to 10 seconds
sleep = 10

# Zuora OAUTH token URL
token_url = "https://rest.apisandbox.zuora.com/oauth/token"

# URL for the DataQuery
query_url = "https://rest.apisandbox.zuora.com/query/jobs"

# OAUTH client_id & client_secret
client_id = 'your client id'
client_secret = 'your client secret'

# Set the grant type to client credentials
token_data = {'grant_type': 'client_credentials'}

# Send the POST request for the OAUTH token
access_token_resp = requests.post(token_url, data=token_data,
                                  auth=(client_id, client_secret))

# Print the OAUTH token response text
# print(access_token_resp.text)

# Parse the tokens as JSON data from the response
tokens = access_token_resp.json()
# print("access token: " + tokens['access_token'])

# Use the access token in future API calls & add it to the headers
query_job_headers = {'Content-Type': 'application/json',
                     'Authorization': 'Bearer ' + tokens['access_token']}

# JSON data for our DataQuery
json_data = {
    "query": "select * from revenuescheduleiteminvoiceitem",
    "outputFormat": "JSON",
    "compression": "NONE",
    "retries": 3,
    "output": {
        "target": "s3"
    }
}

# Serialize the JSON payload
data = json.dumps(json_data)

# Send the POST request for the DataQuery
query_job_resp = requests.post(query_url, data=data,
                               headers=query_job_headers)

# Print the response text
# print(query_job_resp.text)

# Check the job status
# 1) Parse the query job response JSON data
query_job = query_job_resp.json()

# 2) Create the job URL with the id from the response
query_job_url = query_url + '/' + query_job["data"]["id"]

# 3) Send the GET request to check on the status of the query
query_status_resp = requests.get(query_job_url, headers=query_job_headers)
# print(query_status_resp.text)

# Parse the status from the response
query_status = query_status_resp.json()["data"]["queryStatus"]
# print('query status:' + query_status)

# Loop until the status == completed
# Exit if there is an error
while query_status != 'completed':
    time.sleep(sleep)
    query_status_resp = requests.get(query_job_url, headers=query_job_headers)
    # print(query_status_resp.text)
    query_status = query_status_resp.json()["data"]["queryStatus"]
    if query_status == 'failed':
        print("query: " + query_status_resp.json()["data"]["query"] + ' Failed!\n')
        sys.exit(1)

# Query job has completed; get the file URL
file_url = query_status_resp.json()["data"]["dataFile"]
print(file_url)
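As a follow-up, and assuming the exported JSON file is newline-delimited (one record per line), the file_url could then be loaded with the pandas import already present, for example:
# Hypothetical follow-up: load the exported file into a DataFrame
# (assumes the DataQuery JSON output is newline-delimited records).
df = pd.read_json(file_url, lines=True)
print(df.head())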
If you don't want to use Data Query or any queue-based solution like that, use ZOQL instead.
Note: you need to list all the fields from the Account object that you want; the asterisk (select *) doesn't work here:
select Id, ParentId, AccountNumber, Name from Account
You may also add custom fields into your selection. You will get up to 200 records per page.
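For illustration, a minimal sketch of running such a ZOQL query over REST is below; it assumes the Actions query endpoint (/v1/action/query) on the sandbox host and reuses a bearer token obtained as in the answer above, so check the endpoint and pagination details against the Zuora docs for your tenant:
import requests

# Hypothetical example: reuse a bearer token obtained as in the answer above.
access_token = "your-oauth-access-token"
headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + access_token}

# ZOQL query: every field must be listed explicitly (select * is not supported).
payload = {"queryString": "select Id, ParentId, AccountNumber, Name from Account"}

resp = requests.post("https://rest.apisandbox.zuora.com/v1/action/query",
                     json=payload, headers=headers)
result = resp.json()

for record in result.get("records", []):
    print(record.get("AccountNumber"), record.get("Name"))

# If 'done' is False, the response includes a queryLocator that can be passed
# to the queryMore action to fetch the next page of records.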

get_process_lines in liquidsoap 1.3.0

I've just updated Liquidsoap to 1.3.0 and now get_process_lines does not return anything.
def get_request() =
  # Get the URI
  lines = get_process_lines("curl http://localhost:3000/api/v1/liquidsoap/next/my-radio")
  log("liquidsoap curl returns #{lines}")
  uri = list.hd(default="",lines)
  log("liquidsoap will try and play #{uri}")
  # Create a request
  request.create(uri)
end
I read in the CHANGELOG:
- Moved get_process_lines and get_process_output to utils.liq, added optional env parameter
Does it mean I have to do something to use utils.liq in my script now?
The full script is as follows
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
def apply_metadata(m) =
title = m["title"]
artist = m["artist"]
log("Now playing: #{title} by #{artist}")
end
# Our custom request function
def get_request() =
# Get the URI
lines = get_process_lines("curl http://localhost:3000/api/v1/liquidsoap/next/my-radio")
log("liquidsoap curl returns #{lines}")
uri = list.hd(default="",lines)
log("liquidsoap will try and play #{uri}")
# Create a request
request.create(uri)
end
def my_safe(s) =
security = sine()
fallback(track_sensitive=false,[s,security])
end
s = request.dynamic(id="s",get_request)
s = on_metadata(apply_metadata,s)
s = crossfade(s)
s = my_safe(s)
# We output the stream to an icecast
# server, in ogg/vorbis format.
log("liquidsoap starting")
output.icecast(
%mp3(id3v2=true,bitrate=128,samplerate=44100),
host = "localhost",
port = 8000,
password = "PASSWORD",
mount = "myradio",
genre="various",
url="http://www.myradio.fr",
description="My Radio",
s
)
Of course the API is working
$ curl http://localhost:3000/api/v1/liquidsoap/next/my-radio
annotate:title="Chamakay",artist="Blood Orange",album="Cupid Deluxe":http://localhost/stream/3.mp3
A simpler example:
lines = get_process_lines("echo hi")
log("lines = #{lines}")
line = list.hd(default="",lines)
log("line = #{line}")
returns the following logs
2017/05/05 15:24:42 [lang:3] lines = []
2017/05/05 15:24:42 [lang:3] line =
Many thanks in advance for your help!
geoffroy
The issue was fixed in Liquidsoap 1.3.1.
Fixed:
Fixed run_process, get_process_lines, get_process_output when compiling with OCaml <= 4.03 (#437, #439)
https://github.com/savonet/liquidsoap/blob/1.3.1/CHANGES#L12

django-rest-framework: How do I create a Feedback / Contact form without a model?

First-time poster. I'm trying to create a simple contact option using Django REST Framework. The contact page would allow users and non-users to send the site admin an email via a form. I've been at this for weeks; my questions and code are below.
1) Is it the viewset that needs some additional work to connect to the form data?
2) Does the DRF API viewer allow for testing this out? Should it be showing the email fields?
# serializers.py
class CommentSerializer(serializers.Serializer):
    email = serializers.EmailField()
    message = serializers.CharField()
    name = serializers.CharField()


# views.py
class CommentViewSet(viewsets.ViewSet):
    def list(self, request):  # , format=None
        comment = CommentSerializer(data=request.data)
        if comment.is_valid():
            form_email = comment.data['email']
            form_message = comment.data['message'] + "email: " + form_email
            form_name = comment.data['name']
            send_mail("New contact form submission",
                      form_message,
                      form_email,
                      ['myemailaddress@gmail.com'],
                      fail_silently=False
                      )
            return Response(comment.data)
            # Not sure how the html connects here:
            # return render('comment.html', {
            #     'form': form_class,
            # })
        return Response(
            {
                "success": False,
                'error-code': 'invalid-data'
            },
        )


# urls.py
router = DefaultRouter()
router.register(r'profiles', views.ProfileViewSet)
router.register(r'users', views.UserViewSet)
router.register(r'comment', views.CommentViewSet, 'Comment')

urlpatterns = [
    url(r'^', include(router.urls)),
]
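As a hedged sketch of one common approach (not necessarily how this project should be wired up): the contact data would normally arrive via POST rather than the viewset's list action, so a plain APIView with a post method and the same serializer can validate the input and send the mail. The view name and recipient address below are placeholders:
# views.py (sketch) - a model-less contact endpoint using APIView
from django.core.mail import send_mail
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from .serializers import CommentSerializer


class ContactView(APIView):
    def post(self, request):
        serializer = CommentSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        data = serializer.validated_data
        send_mail(
            subject="New contact form submission",
            message="{}\nFrom: {} <{}>".format(data['message'], data['name'], data['email']),
            from_email=data['email'],
            recipient_list=['admin@example.com'],  # placeholder recipient address
            fail_silently=False,
        )
        return Response({"success": True}, status=status.HTTP_200_OK)
It can then be mapped directly in urls.py with url(r'^contact/$', ContactView.as_view()) instead of going through the router.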