Error with OAuth in the Google Data Protocol Client Libraries

When I was testing the code in "OAuth in the Google Data Protocol Client Libraries" (http://code.google.com/apis/gdata/docs/auth/oauth.html), I always got the following error. Can anyone give me a hint?
Error Code:
File "D:\PROJ\GAE\proj2\proj2.py", line 262, in get
return self.redirect(auth_url)
File "C:\DEV\google_appengine\v1.4.2\google\appengine\ext\webapp\__init__.py", line 380, in redirect
absolute_url = urlparse.urljoin(self.request.uri, uri)
File "C:\DEV\Python\v2.5.4\lib\urlparse.py", line 253, in urljoin
urlparse(url, bscheme, allow_fragments)
File "C:\DEV\Python\v2.5.4\lib\urlparse.py", line 154, in urlparse
tuple = urlsplit(url, scheme, allow_fragments)
File "C:\DEV\Python\v2.5.4\lib\urlparse.py", line 193, in urlsplit
i = url.find(':')
AttributeError: 'Uri' object has no attribute 'find'
Here is the code I use to fetch Google Contacts info:
class Test(webapp.RequestHandler):
    def get(self):
        client = gdata.contacts.client.ContactsClient(source='www.mydomainname.com')
        callback_url = 'http://%s/test2' % self.request.host
        request_token = client.GetOAuthToken(
            ['http://www.google.com/m8/feeds/'],
            callback_url,
            GOOGLE_KEY,
            GOOGLE_SECRET)
        gdata.gauth.AeSave(request_token, 'request_token')
        auth_url = request_token.generate_authorization_url(google_apps_domain=None)
        return self.redirect(auth_url)  # Error?!
Thanks in advance!

The auth_url generated is not a string: generate_authorization_url() returns a Uri object, which webapp's redirect() hands straight to urlparse, and urlparse expects a string (hence the 'Uri' object has no attribute 'find' error).
Just do:
return self.redirect(str(auth_url))
and it will work.
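For clarity, a minimal sketch of the corrected end of the handler, reusing the question's own names:
auth_url = request_token.generate_authorization_url(google_apps_domain=None)
# str() turns the Uri object into the plain string urlparse expects
return self.redirect(str(auth_url))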

Related

Running ApiSpec with Flask and Swagger UI

I am trying to render my APIs in Swagger. I am using the ApiSpec library to generate the OpenAPI spec, which I then add to my Swagger UI. I am using Flask's MethodView with the code below.
from flask.views import MethodView
from flask import Blueprint, after_this_request, make_response

test_data = Blueprint('test', __name__, url_prefix='/testdata')

class TestDataApi(MethodView):
    def get():
        """Get all TestData.
        ---
        description: Get a random data
        security:
          - ApiKeyAuth: []
        responses:
          200:
            description: Return all the TestData
            content:
              application/json:
                schema: TestDataSchema
            headers:
              - $ref: '#/components/headers/X-Total-Items'
              - $ref: '#/components/headers/X-Total-Pages'
        """
        data = TestData.query.all()
        response_data = test_schema.dump(data)
        resp = make_response(json.dumps(response_data), 200)
        return resp
This is my app.py, where I register Swagger and the corresponding views:
api = Blueprint('api', __name__, url_prefix="/api/v0")

spec = APISpec(
    title='Test Backend',
    version='v1',
    openapi_version='3.0.2',
    plugins=[MarshmallowPlugin(), FlaskPlugin()],
)

api.register_blueprint(test_data)
app.register_blueprint(api)

test_view = TestDataApi.as_view('test_api')
app.add_url_rule('/api/v0/testdata/', view_func=test_view, methods=['GET'])

spec.components.schema("TestData", schema=TestDataSchema)
spec.path(view=test_view, operations=dict(get={}))

SWAGGER_URL = '/api/v0/docs'
API_URL = 'swagger.json'
swaggerui_blueprint = get_swaggerui_blueprint(
    SWAGGER_URL,
    API_URL,
    config={
        'app_name': "Backend"
    }
)
app.register_blueprint(swaggerui_blueprint)
But I keep hitting the error below and cannot work out how to fix it.
File "/home/hh/Projects/testdata/src/goodbytz_app.py", line 29, in <module>
app = create_app()
File "/home/hh/Projects/testdata/src/app.py", line 100, in create_app
spec.path(view=additives_view, operations=dict(get={}))
File "/home/hh/Projects/testdata/test_poc/lib/python3.10/site-packages/apispec/core.py", line 516, in path
plugin.operation_helper(path=path, operations=operations, **kwargs)
File "/home/hh/Projects/testdata/test_poc/lib/python3.10/site-packages/apispec/ext/marshmallow/__init__.py", line 218, in operation_helper
self.resolver.resolve_operations(operations)
File "/home/hh/Projects/testdata/test_poc/lib/python3.10/site-packages/apispec/ext/marshmallow/schema_resolver.py", line 34, in resolve_operations
self.resolve_response(response)
File "/home/hh/Projects/testdata/test_poc/lib/python3.10/site-packages/apispec/ext/marshmallow/schema_resolver.py", line 183, in resolve_response
if "headers" in response:
TypeError: argument of type 'NoneType' is not iterable
You can try this one; it's simpler and automatically generates the OpenAPI/Swagger spec: https://pypi.org/project/flask-toolkits/

Unable to set credentials with Google Text-to-Speech API

I was able to set the credentials for Google's Translation API, but with Text-to-Speech I'm having a lot of trouble. First, I couldn't get the credentials as a JSON file; I only got them as a string. I haven't been able to find anyone else whose credentials are strings; everyone else has a JSON file. I converted the string to a JSON file, but I don't think that is helping, because it seems the JSON object has to be a dictionary. In any case, when I try this:
from google.oauth2 import service_account
key1 = 'key.json'
credentials = service_account.Credentials.from_service_account_file(key1)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-c7d030662a35>", line 1, in <module>
credentials = service_account.Credentials.from_service_account_file(key1)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/oauth2/service_account.py", line 209, in from_service_account_file
filename, require=['client_email', 'token_uri'])
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/_service_account_info.py", line 73, in from_filename
return data, from_dict(data, require=require)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/_service_account_info.py", line 46, in from_dict
missing = keys_needed.difference(six.iterkeys(data))
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/six.py", line 575, in iterkeys
return iter(d.keys(**kw))
AttributeError: 'str' object has no attribute 'keys'
When I try this code the following happens:
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'key.json'
client = texttospeech.TextToSpeechClient()
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-14-d30e5cd41087>", line 2, in <module>
client = texttospeech.TextToSpeechClient()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/cloud/texttospeech_v1/gapic/text_to_speech_client.py", line 159, in __init__
address=api_endpoint, channel=channel, credentials=credentials
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/cloud/texttospeech_v1/gapic/transports/text_to_speech_grpc_transport.py", line 61, in __init__
channel = self.create_channel(address=address, credentials=credentials)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/cloud/texttospeech_v1/gapic/transports/text_to_speech_grpc_transport.py", line 91, in create_channel
address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 177, in create_channel
credentials, _ = google.auth.default(scopes=scopes)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/_default.py", line 305, in default
credentials, project_id = checker()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/_default.py", line 165, in _get_explicit_environ_credentials
os.environ[environment_vars.CREDENTIALS])
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/google/auth/_default.py", line 102, in _load_credentials_from_file
credential_type = info.get('type')
AttributeError: 'str' object has no attribute 'get'
I think this is because my JSON object is not a dict but a string. But the key that Google gave me was a string and not a JSON file, so I really don't know what to do here. Plus, their documentation is too hard to understand.
When you say you have a string, are you referring to the key ID? You will still need the JSON file associated with that key.
To create a new JSON file, go to Google Cloud Console -> IAM & Admin -> Service Accounts. Select one of your service accounts and click "Create Key", which will download the key as a JSON file. It can only be downloaded once.
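Once you have that file, you can also pass it to the client explicitly instead of relying on the GOOGLE_APPLICATION_CREDENTIALS environment variable. A minimal sketch, assuming the downloaded key is saved as 'key.json' next to the script:
from google.cloud import texttospeech
from google.oauth2 import service_account

# Load the service-account key downloaded from the Cloud Console
credentials = service_account.Credentials.from_service_account_file('key.json')
# Hand the credentials object straight to the client
client = texttospeech.TextToSpeechClient(credentials=credentials)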

Redirection using Scrapy Spider Middleware (Unhandled error in Deferred)

I've made a spider using Scrapy that first solves a CAPTCHA at a redirected address before accessing the main website I intend to scrape. It reports an HTTP error causing an infinite loop, but I can't find which part of the script is responsible.
In the middleware:
from scrapy.downloadermiddlewares.redirect import RedirectMiddleware

class ProtectRedirectMiddleware(RedirectMiddleware):
    def __init__(self, settings):
        super().__init__(settings)
        self.source = urllib.request.urlopen('http://sampleurlname.com/')
        soup = BeautifulSoup(source, 'lxml')

    def _redirect(self, redirected, request, spider, reason):
        # act normally if this isn't a CAPTCHA redirect
        if not self.is_protected(redirected.url):
            return super()._redirect(redirected, request, spider, reason)
        # if this is a CAPTCHA redirect
        logger.debug(f'The protect URL is triggered for {request.url}')
        request.cookies = self.bypass_protection(redirected.url)
        request.dont_filter = True
        return request

    def is_protected(self, url):
        return 'sampleurlname.com/protect' in url

    def bypass_protection(self, url=None):
        # only navigate if any explicit url is provided
        if url:
            url = url or self.source.geturl(url)
            img = soup.find_all('img')[0]
            imgurl = img['src']
            urllib.request.urlretrieve(imgurl, "captcha.png")
            return self.solve_captcha(imgurl)
        # wait for the redirect and try again
        self.wait_for_redirect()
        return self.bypass_protection()

    def wait_for_redirect(self, url=None, wait=0.1, timeout=10):
        url = self.url
        for i in range(int(timeout // wait)):
            time.sleep(wait)
            if self.response.url() != url:
                return self.response.url()
        logger.error(f'Maybe {self.response.url()} isn\'t a redirect URL')
        raise Exception('Timed out')

    def solve_captcha(self, img, width=150, height=50):
        # open image
        self.img = 'captcha.png'
        img = Image.open("captcha.png")
        # image manipulation - simplified
        # input the captcha text - simplified
        # click the submit button - simplified
        # save the URL
        url = self.response.url()
        # try again if wrong
        if self.is_protected(self.wait_for_redirect(url)):
            return self.bypass_protection()
        # return the cookies as a dict
        cookies = {}
        for cookie_string in self.response.css.cookies():
            if 'domain=sampleurlname.com' in cookie_string:
                key, value = cookie_string.split(';')[0].split('=')
                cookies[key] = value
        return cookies
Then, this is the error I get when I run scrapy crawl on my spider:
Unhandled error in Deferred:
2018-08-06 16:34:33 [twisted] CRITICAL: Unhandled error in Deferred:
2018-08-06 16:34:33 [twisted] CRITICAL:
Traceback (most recent call last):
File "/username/anaconda/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/crawler.py", line 80, in crawl
self.engine = self._create_engine()
File "/username/anaconda/lib/python3.6/site-packages/scrapy/crawler.py", line 105, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "/username/anaconda/lib/python3.6/site-packages/scrapy/core/engine.py", line 69, in __init__
self.downloader = downloader_cls(crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__
self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/middleware.py", line 36, in from_settings
mw = mwcls.from_crawler(crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/downloadermiddlewares/redirect.py", line 26, in from_crawler
return cls(crawler.settings)
File "/username/...../scraper/myscraper/myscraper/middlewares.py", line 27, in __init__
self.source = urllib.request.urlopen('http://sampleurlname.com/')
File "/username/anaconda/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 532, in open
response = meth(req, response)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 564, in error
result = self._call_chain(*args)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 756, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 532, in open
It basically repeats the bottom frames over and over (open, http_response, error, _call_chain, and http_error_302), until this shows at the end:
File "/username/anaconda/lib/python3.6/urllib/request.py", line 746, in http_error_302
self.inf_msg + msg, headers, fp)
urllib.error.HTTPError: HTTP Error 307: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Temporary Redirect
In settings.py:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': None,
    'myscrape.middlewares.ProtectRedirectMiddleware': 600,
}
Your issue has nothing to do with Scrapy itself. You are using a blocking request in your middleware initialization.
This request seems to be stuck in a redirect loop. That usually happens when websites do not act appropriately and require cookies to allow you through:
First you connect and get a 30x redirect response with some Set-Cookie headers.
Then you redirect again, this time sending Cookie headers, and the page lets you through.
Python's urllib doesn't handle cookies by default, so try this:
import logging
import urllib.request
import urllib.error
from http.cookiejar import CookieJar
from scrapy.selector import Selector

def __init__(self):
    try:
        req = urllib.request.Request(url)
        # the CookieJar lets the opener carry cookies across the redirect chain
        cj = CookieJar()
        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
        response = opener.open(req)
        source = response.read().decode('utf8', errors='ignore')
        response.close()
    except urllib.error.HTTPError as e:
        logging.error(f"couldn't initiate middleware: {e}")
        return
    # you should use scrapy selectors instead of beautiful soup here
    # soup = BeautifulSoup(source, 'lxml')
    selector = Selector(text=source)
Alternatively, you could use the requests package, which handles cookies by itself.
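A minimal sketch of that alternative, assuming the requests package is installed and keeping the placeholder URL and logging from the snippet above:
import logging
import requests
from scrapy.selector import Selector

def __init__(self, settings):
    super().__init__(settings)
    try:
        # a Session carries cookies across the whole redirect chain automatically
        session = requests.Session()
        response = session.get('http://sampleurlname.com/', timeout=10)
        response.raise_for_status()
    except requests.RequestException as e:
        logging.error(f"couldn't initiate middleware: {e}")
        return
    self.selector = Selector(text=response.text)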

Access Facebook Graph API in Python, Access Token usage

I'm new to Python, and have been using different tutorials/guides to build a program that gets data from a Facebook page and writes it into an Azure SQL database. On the Facebook Developer page I've set up an app and created the App ID and App Secret.
Here's the relevant part of the code I think the issue is with:
#create authenticated post URL
def create_post_url(graph_url, APP_ID, APP_SECRET):
    post_args = "/posts/?key=value&access_token=" + APP_ID + "|" + APP_SECRET
    post_url = graph_url + post_args
    return post_url

#render graph url call to JSON
def render_to_json(graph_url):
    web_response = urllib2.urlopen(graph_url)
    readable_page = web_response.read()
    json_data = json.loads(readable_page)
    return json_data

def main():
    #define facebook app secret and app id
    APP_SECRET = "xyz"
    APP_ID = "abc"
    #define page username
    pagelist = ["porsche"]
    graph_url = "https://graph.facebook.com/"
In the Facebook Graph API Explorer, running a GET request works great and returns data.
(screenshot: Facebook Graph API Explorer results)
However, if I run my code in the CLI, the resulting error is as follows:
[vagrant@localhost SomeAnalysis]$ python src/collectors/facebook/facebookScaper.py
Traceback (most recent call last):
File "src/collectors/facebook/facebookScaper.py", line 92, in <module>
main()
File "src/collectors/facebook/facebookScaper.py", line 56, in main
json_fbpage = render_to_json(current_page)
File "src/collectors/facebook/facebookScaper.py", line 25, in render_to_json
web_response = urllib2.urlopen(graph_url)
File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 400: Bad Request
I've been checking SO and have found similar posts, but none explain how to use the access token. I've tried changing the URL formation from using APP_ID and APP_SECRET to just the long ACCESS_TOKEN from the Graph API Explorer, but that yields the same 400 error.
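For reference, the variant I tried just swaps the app credentials for the token in create_post_url (ACCESS_TOKEN is the long string copied from the Graph API Explorer):
post_args = "/posts/?key=value&access_token=" + ACCESS_TOKEN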
Any help is appreciated.
/K

download file from mongo gridfs with python

I have uploaded files to Mongo. But when I try to download them from Mongo via an HTTP response in the web browser, it does not work.
Here is the views.py:
if filename is not None:
    file_ = db.fs.files.find_one({
        'filename': filename
    })
    file_id = file_['_id']
    wrapper = fs.get(file_id).read()
    response = StreamingHttpResponse(FileWrapper(wrapper), content_type=file_['contentType'])
    response['Content-Disposition'] = 'attachment; filename=%s' % str(filename)
    response['Content-Length'] = file_['length']
    return response
I got this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 86, in run
self.finish_response()
File "/usr/lib/python2.7/wsgiref/handlers.py", line 126, in finish_response
for data in self.result:
File "/usr/local/lib/python2.7/dist-packages/django/utils/six.py", line 473, in next
return type(self).__next__(self)
File "/usr/local/lib/python2.7/dist-packages/django/http/response.py", line 292, in __next__
return self.make_bytes(next(self._iterator))
File "/usr/lib/python2.7/wsgiref/util.py", line 30, in next
data = self.filelike.read(self.blksize)
AttributeError: 'str' object has no attribute 'read'
But when I change the StreamingHttpResponse to HttpResponse, the error is as follows:
[30/Jul/2014 17:29:43] "GET /download/cs101/ HTTP/1.1" 200 664
/usr/lib/python2.7/wsgiref/handlers.py:126: DeprecationWarning:
Creating streaming responses with `HttpResponse` is deprecated.
Use `StreamingHttpResponse`instead if you need the streaming behavior.
for data in self.result:
Traceback (most recent call last):
File "/usr/lib/python2.7/wsgiref/handlers.py", line 86, in run
self.finish_response()
File "/usr/lib/python2.7/wsgiref/handlers.py", line 126, in finish_response
for data in self.result:
File "/usr/local/lib/python2.7/dist-packages/django/utils/six.py", line 473, in next
return type(self).__next__(self)
File "/usr/local/lib/python2.7/dist-packages/django/http/response.py", line 292, in __next__
return self.make_bytes(next(self._iterator))
File "/usr/lib/python2.7/wsgiref/util.py", line 30, in next
data = self.filelike.read(self.blksize)
AttributeError: 'str' object has no attribute 'read'
Thanks in advance!
You're calling the read method in:
wrapper = fs.get(file_id).read()
so you're getting a str (assuming Python 2; on Python 3 you'd get bytes). FileWrapper needs a file-like object, which a str of course is not.
Try to use:
wrapper = fs.get(file_id)
This will return a file-like object.
OTOH, pymongo's .get() returns a GridOut instance, which already supports iteration, so why not try something like:
wrapper = fs.get(file_id)
response = StreamingHttpResponse(wrapper, content_type=file_['contentType'])
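Putting that together, a minimal sketch of the whole streaming branch, reusing db, fs, and filename from the question (the extra headers are copied from the question's own view):
file_ = db.fs.files.find_one({'filename': filename})
wrapper = fs.get(file_['_id'])  # GridOut is file-like and iterable
response = StreamingHttpResponse(wrapper, content_type=file_['contentType'])
response['Content-Disposition'] = 'attachment; filename=%s' % str(filename)
response['Content-Length'] = file_['length']
return response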