Bash/curl example of Pocket OAuth login

I'm trying to get a simple bash script working with the Pocket API. All I want to do is authenticate with Pocket and download my list of articles (actually, only the count).
I'm a little confused by the way the OAuth process works.
I've registered my app with the Pocket API and have a consumer key. It's marked as in "development"; I'm not sure if this is important.
The bit that's confusing me is that the OAuth flow, with its redirect URIs, seems to only really work with a GUI (i.e. a browser). Is it possible to do this with a bash script?
Here is what I have below. It works up until I have the TOKEN, but then I'm not sure what to do next.
#!/bin/bash
REDIR="redirect_uri=pocketapp1234:authorizationFinished"
KEY=21004-xxxxxxabcabcabc # you can assume this is the consumer key Pocket issues for my app
CODE=$(curl -X POST --data "consumer_key=$KEY&$REDIR" https://getpocket.com/v3/oauth/request)
echo "OK - code is $CODE"
TOKEN=$(echo "$CODE" | awk -F"=" '{print $2}')
echo "OK - token is $TOKEN"
AUTH="consumer_key=$KEY&$CODE"
# This line seems not to work
curl -v "https://getpocket.com/auth/authorize?request_token=$TOKEN&$REDIR"

Yes, the browser portion is required. At the authorize step, getpocket.com shows a page prompting the user to log in and authorise the bash script to access their Pocket account.
You can refer to Step 3 of the Pocket API Docs.
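For reference, once the user has approved the app in a browser, the remaining two calls can stay in bash: exchange the approved request token for an access token at /v3/oauth/authorize, then call /v3/get. A minimal sketch (assuming KEY and TOKEN are set exactly as in the script above, and that jq is available for counting):
#!/bin/bash
# Sketch: finish the Pocket flow after the user has approved the app in a browser.
# Assumes KEY (consumer key) and TOKEN (request token) are set as in the question.

# Exchange the approved request token for an access token.
# The response looks like: access_token=xxxx&username=yyyy
RESP=$(curl -s -X POST --data "consumer_key=$KEY&code=$TOKEN" \
    https://getpocket.com/v3/oauth/authorize)
ACCESS=$(echo "$RESP" | awk -F'[=&]' '{print $2}')
echo "OK - access token is $ACCESS"

# Fetch the saved items and count them (state=all includes archived items).
curl -s -X POST \
    --data "consumer_key=$KEY&access_token=$ACCESS&state=all&detailType=simple" \
    https://getpocket.com/v3/get | jq '.list | length'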

This is the Python 3.8 script I'm using, and it seems to work.
#!/usr/bin/env python
from os import environ as env

import requests
import webbrowser


def authorize_pocket_app():
    data = {
        "consumer_key": env['POCKET_CONSUMER_KEY'],
        "redirect_uri": env['POCKET_APP_NAME'],
    }
    resp = requests.post(url="https://getpocket.com/v3/oauth/request", data=data)
    code = resp.text.split("=")[1]
    webbrowser.open(f"https://getpocket.com/auth/authorize?request_token={code}"
                    "&redirect_uri=https://duckduckgo.com")
    input("Authorize %s app in the browser, then click enter" % env['POCKET_APP_NAME'])
    get_token(code)


def get_token(code):
    resp = requests.post(
        url="https://getpocket.com/v3/oauth/authorize",
        data={
            "consumer_key": env["POCKET_CONSUMER_KEY"],
            "code": code,
        })
    token = resp.text.split("&")[0].split("=")[1]
    print("Secret token:", token)


if __name__ == "__main__":
    authorize_pocket_app()
To use it as-is, you need to install the external requests library and export POCKET_CONSUMER_KEY and POCKET_APP_NAME in your shell environment, e.g.
pip install requests
export POCKET_CONSUMER_KEY=xxx-yyy-zzz
export POCKET_APP_NAME=my-pocket-app
python <filename>.py
HTH

Related

How to export IBM Watson conversation history?

Before running the code, install the ibm-watson and ibm-cloud-sdk-core packages, and also pip install PyJWT==1.7.1.
I found in the IBM documentation that "For a Python script you can run to export logs and convert them to CSV format, download the export_logs_py.py file from the Watson Assistant GitHub repository."
But I don't really know what and how I should modify in order to connect it to my IBM skill.
There is no demo or instruction about where I can find those arguments.
I only found this information in the skill API details, but it seems it needs more.
Does anyone have an example of how to use the .py they provided?
(I'm a coding beginner and don't really understand every line in the .py.)
The .py shows an error after I run the file without modification:
runfile('C:/export_logs.py', wdir='C:/Users/admin/Downloads')
usage: export_logs.py [-h] [--logtype {ASSISTANT,WORKSPACE,DEPLOYMENT}]
[--language LANGUAGE] [--filetype {CSV,TSV,XLSX,JSON}]
[--url URL] [--version VERSION]
[--totalpages TOTALPAGES] [--pagelimit PAGELIMIT]
[--filter FILTER] [--strip STRIP]
apikey id filename
export_logs.py: error: the following arguments are required: apikey, id, filename
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
The conversation I want to download:
First of all, Workspaces in IBM Watson Assistant are now called Skills.
To understand what arguments (positional and optional) you need to pass to the Python script, run the command below:
python export_logs_py.py -h
Wherever you see workspace, you can replace it with skill.
To export the logs in .csv format, run the command below:
python export_logs_py.py --filetype CSV --url <URL> <API_KEY> <SKILL_ID> output.csv
Replace the placeholders <URL>, <API_KEY> and <SKILL_ID> with the appropriate values mentioned below.
<URL> & <API_KEY> - You can find them on the Manage page of your Watson Assistant service.
<SKILL_ID> - The same as the one in the image you uploaded. Check this StackOverflow answer for more info.
For Assistant logs, add --logtype ASSISTANT. The default is WORKSPACE.
You can also find the logs in the UI, under the Analytics section of your Skill.
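Putting those pieces together, a minimal invocation could look like the sketch below (the three values are placeholders you need to fill in from the Manage page and the skill's API details described above):
#!/bin/bash
# Sketch: export a skill's logs to CSV, with the placeholders pulled out as variables.
URL="<your-watson-assistant-service-url>"   # from the Manage page of the service
API_KEY="<your-api-key>"                    # from the Manage page of the service
SKILL_ID="<your-skill-id>"                  # from the skill's API details

python export_logs_py.py --filetype CSV --url "$URL" "$API_KEY" "$SKILL_ID" output.csv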
As you can see, the script reported an error and said that you have to provide the apikey, the id and the (presumably output) filename as parameters. It also showed that additional parameters can be specified.
usage: export_logs.py [-h] [--logtype {ASSISTANT,WORKSPACE,DEPLOYMENT}]
[--language LANGUAGE] [--filetype {CSV,TSV,XLSX,JSON}]
[--url URL] [--version VERSION]
[--totalpages TOTALPAGES] [--pagelimit PAGELIMIT]
[--filter FILTER] [--strip STRIP]
apikey id filename
Your next step would be to invoke the script again, but provide an API key for Watson Assistant, the skill ID and a filename as additional parameters. Next, I would try something like specifying the output type:
export_logs.py --filetype CSV myapikey skillID output.csv
I am not the author of that script, but that is how I would approach it if I wanted to use it.

Apps Script Execution API 404 error with devMode: true

When requesting POST https://script.googleapis.com/v1/scripts/{script_id}:run with devMode: true I get a 404 error. I can run the script successfully with devMode: false.
Although other people (1, 2) have raised this issue, none of the other solutions work. I keep getting an HTTP 404 Not Found error whenever my request comes with devMode: true.
I have performed the following steps:
created a new Google account
created a Cloud project
set up an OAuth consent screen for the project
authorized the domain for the app (just in case)
created 'Desktop' OAuth2 credentials for this project with OAuth scopes listed below
enabled the Apps Script API on the project
created a standalone Apps Script using Google Drive ("Test 1")
set the Cloud Platform project ID for the script Test 1
deployed the script Test 1 as an API executable, with access set to "Anyone"
obtained a valid access token with the exact same scopes listed below and used for the OAuth consent screen configuration in the Cloud project. The token is for the same account that owns the script and the Cloud project.
After performing the above steps, running with devMode: false was successful, but when switching to devMode: true it failed.
The same happens when I set access to "Only Me".
To make clear the steps that I took, I provide a full flow of screenshots taken along the way (open the image in a new window to zoom in; the flow is top to bottom; the three columns from left to right are: Cloud console project flow; Apps Script flow; OAuth2 flow):
At the request of @ziganotschka I made a simpler copy of my Apps Script function:
function test() {
  return 1;
}
And the appsscript.json manifest is:
{
  "timeZone": "Asia/Jerusalem",
  "dependencies": {
  },
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "V8"
}
The code for obtaining the OAuth2 token and running the script, in Python:
##
# %%
import requests
import urllib
import json

client_id = '...'
client_secret = '...'
script_id = '...'
is_dev_mode = True  # True or False

##
# %% Initiate OAuth2
url = 'https://accounts.google.com/o/oauth2/auth?' + urllib.parse.urlencode({
    'client_id': client_id,
    'redirect_uri': 'urn:ietf:wg:oauth:2.0:oob',
    'response_type': 'code',
    'scope': ' '.join([
        'https://www.googleapis.com/auth/userinfo.email',
        'https://www.googleapis.com/auth/userinfo.profile',
        'openid',
        'https://www.googleapis.com/auth/documents',
        'https://www.googleapis.com/auth/drive',
        'https://www.googleapis.com/auth/drive.scripts',
        'https://www.googleapis.com/auth/script.external_request',
        'https://www.googleapis.com/auth/script.projects',
        'https://www.googleapis.com/auth/script.scriptapp',
        'https://www.googleapis.com/auth/script.container.ui'
    ])
}, doseq=True)
print(url)

##
# %% Exchange authorization code with access and refresh tokens
print('Enter authorization code: ', end='')
authorization_code = input()
authorization_token_response = requests.post('https://accounts.google.com/o/oauth2/token', data={
    'code': authorization_code,
    'client_id': client_id,
    'client_secret': client_secret,
    'redirect_uri': 'urn:ietf:wg:oauth:2.0:oob',
    'grant_type': 'authorization_code'
})
authorization_token_response.raise_for_status()
authorization_data = authorization_token_response.json()
access_token = authorization_data["access_token"]
refresh_token = authorization_data["refresh_token"]

##
# %%
response = requests.post(f'https://script.googleapis.com/v1/scripts/{script_id}:run',
    data=json.dumps({
        "function": "test",
        "parameters": [],
        "devMode": is_dev_mode
    }),
    headers={
        'content-type': 'application/json',
        'authorization': f'Bearer {access_token}'
    }
)
response.raise_for_status()
print(response.content)
I get similar results for a curl call:
$ curl 'https://script.googleapis.com/v1/scripts/x...x:run' -X POST -H 'content-type: application/json' -d '{"function":"test","parameters":[],"devMode":true}' -H 'authorization: Bearer x...x' --silent
{
  "error": {
    "code": 404,
    "message": "Requested entity was not found.",
    "status": "NOT_FOUND"
  }
}
I consider this an issue report, since Google mentions that they use Stack Overflow to field technical questions about the Apps Script API, as well as a beacon to anyone who has been frustrated by this issue.
And my question would be: am I doing anything wrong?
As an aside: what's the difference between substituting script_id with the Current API ID (as suggested in the 'How to Execute a function' guide; this identifier seems to be identical to the script's Project key under File > Project properties) and the script's Drive file ID (suggested everywhere else; this seems to be identical to the Script ID)?

Jenkins HTTP plugin to upload a file using REST

I am trying to upload a file to a REST server from Jenkins using the HTTP plugin. I have a Jenkins pipeline where a step involves uploading a file (type formData)
to a server using REST.
The server-side method uses two parameters:
(@FormDataParam("file") InputStream file, @FormDataParam("fileName") String fileName)
I am using the method below:
def filename = "${WORKSPACE}/Test.txt"
data="""{ \"fileName\" : \"Test.txt\" }"""
resp3 = httpRequest consoleLogResponseBody: true,
    url: "http://<url>",
    contentType: 'APPLICATION_OCTETSTREAM',
    customHeaders: [[name: 'Authorization', value: "Basic ${auth}"]],
    httpMode: 'POST',
    multipartName: 'Test.txt',
    uploadFile: "${filename}",
    requestBody: data,
    validResponseCodes: '200'
But when I run it, the status code is 400, and the server logs say that no file stream and no filename were received, i.e. it is not able to get either argument.
Please let me know where this is going wrong.
Regards
You can try using curl instead of built-in Jenkins methods:
curl -X POST http://<url> -H 'Content-Type: application/octet-stream' -H "Authorization: Basic ${auth}" --data-binary '{"fileName" : "Test.txt"}'
You can debug it first from within shell. Once it's working, wrap it in sh directive:
sh "curl ..."
Since I was running on Windows, bat + curl worked for me. With this workaround I was able to transfer files using Jenkins and REST.
However, using httpRequest from the Jenkins built-in plugin is still not working.

How to extract the list of all repositories in Stash or Bitbucket?

I need to extract the list of all repos under all projects in Bitbucket. Is there a REST API for this? I couldn't find one.
I have both on-premise and cloud Bitbucket.
Clone ALL Projects & Repositories for a given stash url
#!/usr/bin/python
#
# @author Jason LeMonier
#
# Clone ALL Projects & Repositories for a given stash url
#
# Loop through all projects: [P1, P2, ...]
# P1 > for each project make a directory with the key "P1"
#      Then clone every repository inside of directory P1
#      Backup a directory, create P2, ...
#
# Added ACTION_FLAG bit so the same logic can run fetch --all on every repository and/or clone.

import sys
import os
import stashy

ACTION_FLAG = 1  # Bit: +1=Clone, +2=fetch --all

url  = os.environ["STASH_URL"]   # "https://mystash.com/stash"
user = os.environ["STASH_USER"]  # "joedoe"
pwd  = os.environ["STASH_PWD"]   # Yay123

stash = stashy.connect(url, user, pwd)

def mkdir(xdir):
    if not os.path.exists(xdir):
        os.makedirs(xdir)

def run_cmd(cmd):
    print("Directory cwd: %s " % (os.getcwd()))
    print("Running Command: \n %s " % (cmd))
    os.system(cmd)

start_dir = os.getcwd()

for project in stash.projects:
    pk = project_key = project["key"]
    mkdir(pk)
    os.chdir(pk)

    for repo in stash.projects[project_key].repos.list():
        for url in repo["links"]["clone"]:
            href = url["href"]
            repo_dir = href.split("/")[-1].split(".")[0]

            if (url["name"] == "http"):
                print(" url.href: %s" % href)  # https://joedoe@mystash.com/stash/scm/app/ae.git
                print("Directory cwd: %s Project: %s" % (os.getcwd(), pk))

                if ACTION_FLAG & 1 > 0:
                    if not os.path.exists(repo_dir):
                        run_cmd("git clone %s" % url["href"])
                    else:
                        print("Directory: %s/%s exists already. Skipping clone. " % (os.getcwd(), repo_dir))

                if ACTION_FLAG & 2 > 0:
                    # chdir into directory "ae" based on url of this repo, fetch, chdir back
                    cur_dir = os.getcwd()
                    os.chdir(repo_dir)
                    run_cmd("git fetch --all ")
                    os.chdir(cur_dir)
                break

    os.chdir(start_dir)  # avoiding ".." in case of incorrect git directories
Once logged in: on the top right, click on your profile pic and then 'View profile'
Take note of your user (in the example below 'YourEmail@domain.com', but keep in mind it's case sensitive)
Click on profile pic > Manage account > Personal access token > Create a token (choosing 'Read' access type is enough for this functionality)
For all repos in all projects:
Open a CLI and use the command below (remember to fill in your server domain!):
curl -u "YourEmail#domain.com" -X GET https://<my_server_domain>/rest/api/1.0/projects/?limit=1000
It will ask you for your personal access token, you comply and you get a JSON file with all repos requested
For all repos in a given project:
Pick the project you want to get repos from. In my case, the project URL is: <your_server_domain>/projects/TECH/ and therefore my {projectKey} is 'TECH', which you'll need for the command below.
Open a CLI and use this command (remember to fill in your server domain and projectKey!):
curl -u "YourEmail#domain.com" -X GET https://<my_server_domain>/rest/api/1.0/projects/{projectKey}/repos?limit=50
Final touches
(optional) If you want just the titles of the repos requested and you have jq installed (for Windows, downloading the exe and adding it to PATH should be enough, but you need to restart your CLI for that new addition to be detected), you can use the command below:
curl -u $BBUSER -X GET <my_server_domain>/rest/api/1.0/projects/TECH/repos?limit=50 | jq '.values|.[]|.name'
(tested with Data Center/Atlassian Bitbucket v7.9.0 and powershell CLI)
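If a project has more repositories than fit in one response, the Server API pages its results, so a loop over the pages is needed. A sketch with curl and jq (same placeholders as above; Bitbucket Server responses carry isLastPage and nextPageStart fields):
#!/bin/bash
# Sketch: page through /rest/api/1.0/projects/{projectKey}/repos using
# isLastPage / nextPageStart, printing every repo name. Requires jq.
# Pass user:token so curl does not prompt for the token on every page.
BBUSER="YourEmail@domain.com:<personal-access-token>"
BASE="https://<my_server_domain>/rest/api/1.0/projects/TECH/repos"

start=0
while : ; do
    page=$(curl -s -u "$BBUSER" -X GET "${BASE}?limit=100&start=${start}")
    echo "$page" | jq -r '.values[].name'
    [ "$(echo "$page" | jq -r '.isLastPage')" = "true" ] && break
    start=$(echo "$page" | jq -r '.nextPageStart')
done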
For Bitbucket Cloud
You can use their REST API to access and perform queries on your server.
Specifically, you can use this documentation page, provided by Atlassian, to learn how to list your repositories.
For Bitbucket Server
Edit: As of receiving this tweet from Dan Bennett, I've learnt there is an API/plugin system for Bitbucket Server that could possibly cater for your needs. For docs: See here.
Edit2: Found this reference to listing personal repositories that may serve as a solution.
AFAIK there isn't a solution for you unless you build a little API for yourself that interacts with your Bitbucket Server instance.
Atlassian's documentation does indicate that to list all currently configured repositories you can do git remote -v. However, I'm dubious of this, as it isn't normally how git remote -v is used; I think it's more likely that Atlassian's documentation is unclear than that Atlassian built this functionality into Bitbucket Server.
I ended up having to do this myself with an on-prem install of Bitbucket which didn't seem to have the REST APIs discussed above accessible, so I came up with a short script to scrape it out of the web page. This workaround has the advantage that there's nothing you need to install, and you don't need to worry about dependencies, certs or logins other than just logging into your Bitbucket server. You can also set this up as a bookmark if you urlencode the script and prefix it with javascript:.
To use this:
Open your bitbucket server project page, where you should see a list of repos.
Open your browser's devtools console. This is usually F12 or ctrl-shift-i.
Paste the following into the command prompt there.
JSON.stringify(Array.from(document.querySelectorAll('[data-repository-id]')).map(aTag => {
    const href = aTag.getAttribute('href');
    let projName = href.match(/\/projects\/(.+)\/repos/)[1].toLowerCase();
    let repoName = href.match(/\/repos\/(.+)\/browse/)[1];
    repoName = repoName.replace(' ', '-');
    const templ = `https://${location.host}/scm/${projName}/${repoName}.git`;
    return {
        href,
        name: aTag.innerText,
        clone: templ
    };
}));
The result is a JSON string containing an array with the repo's URL, name, and clone URL.
[{
    "href": "/projects/FOO/repos/some-repo-here/browse",
    "name": "some-repo-here",
    "clone": "https://mybitbucket.company.com/scm/foo/some-repo-here.git"
}]
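If you save that JSON to a file, a short loop is enough to clone everything it lists; a sketch assuming the output was pasted into repos.json and jq is installed:
# Sketch: clone every repository listed in the JSON produced by the console snippet.
jq -r '.[].clone' repos.json | while read -r clone_url; do
    git clone "$clone_url"
done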
This Ruby script isn't the greatest code, which makes sense, because I'm not the greatest coder. But it is clear, tested, and it works.
The script filters the output of a Bitbucket API call to create a complete report of all repos on a Bitbucket server. Report is arranged by project, and includes totals and subtotals, a link to each repo, and whether the repos are public or personal. I could have simplified it for general use, but it's pretty useful as it is.
There are no command line arguments. Just run it.
#!/usr/bin/ruby
#
# @author Bill Cernansky
#
# List and count all repos on a Bitbucket server, arranged by project, to STDOUT.
#
require 'json'

bbserver   = 'http(s)://server.domain.com'
bbuser     = 'username'
bbpassword = 'password'
bbmaxrepos = 2000   # Increase if you have more than 2000 repos

reposRaw = JSON.parse(`curl -s -u '#{bbuser}':'#{bbpassword}' -X GET #{bbserver}/rest/api/1.0/repos?limit=#{bbmaxrepos}`)

projects = {}
repoCount = reposRaw['values'].count

reposRaw['values'].each do |r|
  projID = r['project']['key']
  if projects[projID].nil?
    projects[projID] = {}
    projects[projID]['name'] = r['project']['name']
    projects[projID]['repos'] = {}
  end
  repoName = r['name']
  projects[projID]['repos'][repoName] = r['links']['clone'][0]['href']
end

privateProjCount = projects.keys.grep(/^\~/).count
publicProjCount = projects.keys.count - privateProjCount

reportText = ''
privateRepoCount = 0

projects.keys.sort.each do |p|
  # Personal project slugs always start with tilde
  isPrivate = p[0] == '~'
  projRepoCount = projects[p]['repos'].keys.count
  privateRepoCount += projRepoCount if isPrivate
  reportText += "\nProject: #{p} : #{projects[p]['name']}\n  #{projRepoCount} #{isPrivate ? 'PERSONAL' : 'Public'} repositories\n"
  projects[p]['repos'].keys.each do |r|
    reportText += sprintf("    %-30s : %s\n", r, projects[p]['repos'][r])
  end
end

puts "BITBUCKET REPO REPORT\n\n"
puts sprintf("  Total Projects: %5d   Public: %5d   Personal: %5d", projects.keys.count, publicProjCount, privateProjCount)
puts sprintf("  Total Repos:    %5d   Public: %5d   Personal: %5d", repoCount, repoCount - privateRepoCount, privateRepoCount)
puts reportText
The way I solved this issue was to get the HTML page and give it a ridiculously high limit, like this (in Python):
cmd = "curl -s -k --user " + username + " https://URL/projects/<KEY_PROJECT_NAME>/?limit\=10000"
then I parsed it with BeautifulSoup
import re
import subprocess
from bs4 import BeautifulSoup

make_list = str((subprocess.check_output(cmd, shell=True)).rstrip().decode("utf-8"))
html = make_list
parsed_html = BeautifulSoup(html, 'html.parser')
list1 = []
for a in parsed_html.find_all("a", href=re.compile("/projects/<KEY_PROJECT_NAME>/repos/")):
    list1.append(a.string)
print(list1)
To use this, make sure you change the URL and <KEY_PROJECT_NAME> to the Bitbucket project you are targeting. All I am doing is parsing an HTML page.
Here's how I pulled the list of repos from Bitbucket Cloud.
Set up an OAuth consumer
Go to your workspace settings and set up an OAuth consumer; you should be able to go there directly using this link: https://bitbucket.org/{your_workspace}/workspace/settings/api
The only setting that matters is the callback URL which can be anything but I chose http://localhost
Once setup, this will display a key and secret pair for your OAuth consumer, I will refer to these as {oauth_key} and {oauth_secret} below
Authenticate with the API
Go to https://bitbucket.org/site/oauth2/authorize?client_id={oauth_key}&response_type=code ensuring you replace {oauth_key}
This will redirect you to something like http://localhost/?code=xxxxxxxxxxxxxxxxxx, make a note of that code, I'll refer to that as {oauth_code} below
In your terminal, run curl -X POST -u "{oauth_key}:{oauth_secret}" https://bitbucket.org/site/oauth2/access_token -d grant_type=authorization_code -d code={oauth_code}, replacing the placeholders.
This should return JSON including the access_token; I'll refer to that access token as {oauth_token}.
Get the list of repos
You can now run the following to get the list of repos. Bear in mind that your {oauth_token} lasts 2hrs by default.
curl --request GET \
--url 'https://api.bitbucket.org/2.0/repositories/pageant?page=1' \
--header 'Authorization: Bearer {oauth_token}' \
--header 'Accept: application/json'
This response is paginated so you'll need to page through the responses, 10 repositories at a time.
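A sketch of that paging loop with curl and jq, using the same token and workspace as above; each 2.0 API response carries a next URL until the last page, and pagelen can raise the page size:
#!/bin/bash
# Sketch: follow the "next" links of the Bitbucket Cloud 2.0 API until the
# last page, printing every repository slug. Requires jq.
url='https://api.bitbucket.org/2.0/repositories/pageant?pagelen=100'
while [ -n "$url" ] && [ "$url" != "null" ]; do
    page=$(curl -s --request GET \
        --url "$url" \
        --header 'Authorization: Bearer {oauth_token}' \
        --header 'Accept: application/json')
    echo "$page" | jq -r '.values[].slug'
    url=$(echo "$page" | jq -r '.next')   # becomes "null" once there are no more pages
done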

Nagios custom plugin (HTTPS authentication) not working as expected

I am writing a plugin to check authentication to an HTTPS site and then search for text in the response HTML body to confirm a successful login. I have created the following plugin:
#!/bin/bash
add_uri='--no-check-certificate https://'
end_uri='/'
result=$(wget -O- $add_uri$1$end_uri --post-data=$2)
flag=$(echo $result | awk '{print match($0,"QC Domain")}')
echo $flag
echo "Nagios refreshes properly1"
if [[ $flag -gt 0 ]] ; then
    echo 'ALL SEEMS FINE!!'
    exit 0
else
    echo 'Some Problem'
    exit 2
fi
When I execute this plugin directly from the command line:
./check_nhttps <url here> '<very long post data with credential information>'
The plugin works as expected (for both positive and negative test cases) and there seem to be no issues.
But when the plugin runs from Nagios,
check_command check_nhttps! <url here> '<very long post data with credential information>'
It always shows a critical error (and prints the else-branch text "Some Problem" too).
P.S.: I also tried sending the post data with double quotes.
Please help!!!
I'd think it's very probable that your post data contains some characters that confuse Nagios, maybe a space, or even a !. It would be better to put the post data into a file and use --post-file. Also, you might insert echo "$2" > /tmp/this_is_my_post_data_when_executed_by_nagios into your script and check whether the post data is okay.
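For illustration, here is a sketch of the plugin reworked along those lines; it assumes the second argument becomes the path to a file holding the post data, so nothing with spaces or ! ever passes through the Nagios command definition:
#!/bin/bash
# Sketch: same check as above, but the second argument is now a *file*
# containing the post data, which wget reads via --post-file.
# Usage: ./check_nhttps <host> <post-data-file>
host=$1
post_file=$2

result=$(wget -O- --no-check-certificate "https://${host}/" --post-file="$post_file")

if echo "$result" | grep -q "QC Domain"; then
    echo 'ALL SEEMS FINE!!'
    exit 0
else
    echo 'Some Problem'
    exit 2
fi
The check_command definition would then pass the path of that file instead of the raw post data.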