Alamofire request gets text/html response, while curl and Postman get JSON response - Swift

=================================
Note: Using #Larme's trick to print out the debugDescription of the request and comparing it with my working curl request, I was able to figure out the dumb bugs I made. 1. In the server request handler, I returned a serializer error for anything unrecognized, which was pretty confusing. 2. I made a stupid mistake in my request from Swift, putting "GET_RECIPES" instead of "GET_RECIPE".
================================
I have an HTTP service implemented with Django REST framework. When I send requests via Swift/Alamofire, I cannot get the correct JSON response. However, requests sent via curl and Postman get the correct JSON response.
So I am confused about where the issue is: the Django service side or the Swift request side?
I have tried using .responseString instead of .responseJSON in Swift to print out the response, but the data is still not in the response; basically the error occurs when the request reaches the server side.
On the Django server side, the error reads "TypeError: Object of type 'property' is not JSON serializable". OK, so it seems the issue is on the Django side...
But from curl and Postman, I can get the JSON response without an issue, with the response header containing "Content-Type": "application/json", and on the Django side everything is also OK. Does this mean the Django server can handle the JSON response and the issue should be with the Swift request?
Code in Swift:
let parameters: [String: Any] = [
    "type": "GET_RECIPE",
    "details": ["ingredients": ["egg", "bread"]]
]
let headers = ["Content-Type": "application/json"]

Alamofire.request(url, method: .post, parameters: parameters,
                  encoding: JSONEncoding.default, headers: headers)
    .responseJSON { response in
        if let data = response.result.value {
            print(data)
        }
    }
Code of the request handler
class RecipesSerilizer(serializers.ModelSerializer):
    class Meta:
        model = Recipes
        fields = ('id', 'url', 'author', 'category', 'title', 'description',
                  'instructions', 'tip', 'raw', 'score')

def get_recipes_given_ingredients(data):
    logger.info('Get recipes for {}'.format(data.get('details')))
    details = data.get('details')
    ingredients = details.get('ingredients')
    logger.info('GET_RECIPE for ingredients {}'.format(ingredients))
    recipes = queries.get_recipe_recommendation_given_ingredients(ingredients)
    serializer = RecipesSerilizer(recipes, many=True)
    return Response(serializer.data)
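For reference, the fix mentioned in the note at the top boils down to making sure every branch of the dispatch returns a serializable DRF Response. A minimal sketch, with an illustrative view name and error payload rather than the actual project code:
from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['POST'])
def get_recipes(request):
    data = request.data
    if data.get('type') == 'GET_RECIPE':
        return get_recipes_given_ingredients(data)
    # Returning anything that is not plain serializable data (e.g. a class,
    # serializer object, or property) is what produces errors like
    # "Object of type 'property' is not JSON serializable".
    return Response({'error': 'unrecognized request type: {}'.format(data.get('type'))},
                    status=status.HTTP_400_BAD_REQUEST)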
Stack trace from the server side:
Internal Server Error: /get-recipes/
Traceback (most recent call last):
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\django\core\handlers\exception.py", line 34, in inner
response = get_response(request)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\django\core\handlers\base.py", line 145, in _get_response
response = self.process_exception_by_middleware(e, request)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\django\core\handlers\base.py", line 143, in _get_response
response = response.render()
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\django\template\response.py", line 106, in render
self.content = self.rendered_content
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\rest_framework\response.py", line 72, in rendered_content
ret = renderer.render(self.data, accepted_media_type, context)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\rest_framework\renderers.py", line 733, in render
context = self.get_context(data, accepted_media_type, renderer_context)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\rest_framework\renderers.py", line 688, in get_context
'content': self.get_content(renderer, data, accepted_media_type, renderer_context),
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\rest_framework\renderers.py", line 424, in get_content
content = renderer.render(data, accepted_media_type, renderer_context)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\rest_framework\renderers.py", line 107, in render
allow_nan=not self.strict, separators=separators
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\rest_framework\utils\json.py", line 28, in dumps
return json.dumps(*args, **kwargs)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\json\encoder.py", line 201, in encode
chunks = list(chunks)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\json\encoder.py", line 437, in _iterencode
o = _default(o)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\site-packages\rest_framework\utils\encoders.py", line 68, in default
return super(JSONEncoder, self).default(obj)
File "C:\Users\Yuanjun\Anaconda2\envs\online_bid\lib\json\encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'property' is not JSON serializable
[14/May/2019 08:29:32] "POST /get-recipes/ HTTP/1.1" 500 124585

I think your problem is that you are trying to send a POST to a GET endpoint.
Try changing your Alamofire request as follows:
let parameters: [String: Any] = [
    "type": "GET_RECIPE",
    "details": ["ingredients": ["egg", "bread"]]
]
let headers = ["Content-Type": "application/json"]

Alamofire.request(url, method: .get, parameters: parameters,
                  encoding: JSONEncoding.default, headers: headers)
    .responseJSON { response in
        if let data = response.result.value {
            print(data)
        }
    }

Probably the server crashes while handling your request, or it cannot find the given URL (because of the trailing slash).
text/html is usually returned when the server has crashed while running in DEBUG mode; this is how it shows the crash reason in a pretty way, with the stack trace.
It is really hard to tell what happened in your case. It would be great if you provided the stack trace of the error.
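One quick way to confirm it is the server rather than Alamofire is to replay the same request outside the app, for example with Python requests; a small sketch assuming the endpoint path from the server log above and a local dev host:
import requests

payload = {
    "type": "GET_RECIPE",
    "details": {"ingredients": ["egg", "bread"]},
}

# host is an assumption; /get-recipes/ comes from the server log in the question
r = requests.post("http://localhost:8000/get-recipes/", json=payload)
print(r.status_code)                     # a 500 here confirms a server-side crash
print(r.headers.get("Content-Type"))     # text/html points at Django's DEBUG error page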

Related

Uber Eats API - Fail to collect reports (Could not parse json: readObjectStart: expect { or n, but..)

We are facing an error when trying to request the Uber Eats API to collect a report about our restaurants, based on the Uber Eats documentation here.
The error:
'{"error":"Could not parse json: readObjectStart: expect { or n, but found \x00, error found in #0 byte of ...||..., bigger context ...||..."}'
We tried to run the query in Python and Postman and are still facing the same error.
Need help to understand where we failed.
Here is the Python code, run in VS Code:
import requests
import json

payload = {
    "report_type": "FINANCE_SUMMARY_REPORT",
    "store_uuids": "xxx",
    "start_date": "2022-09-01",
    "end_date": "2022-09-15"
}
headers = {
    "authorization": "Bearer xxx"
}

report_response = requests.post('https://api.uber.com/v1/eats/report', data=payload, headers=headers)
report_response.text
'{"error":"Could not parse json: readObjectStart: expect { or n, but found \x00, error found in #0 byte of ...||..., bigger context ...||..."}'
Best regards,
You have to convert the payload to a valid JSON string and send the request.
import json

headers = {
    "Authorization": "*********",
    "Content-Type": "application/json"
}

response = requests.post("https://api.uber.com/v1/eats/report", data=json.dumps(payload), headers=headers)
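Equivalently, requests can serialize the payload itself via the json= keyword, which also sets the Content-Type: application/json header for you:
response = requests.post("https://api.uber.com/v1/eats/report", json=payload,
                         headers={"Authorization": "*********"})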

How to return the response directly in "def request()"

Like the title says, I want to process data in "def request()" and return a response directly;
I don't want the request to flow through to the target server.
Is this feasible? Thanks!
Here's an example of how to do that:
"""Send a reply from the proxy without sending any data to the remote server."""
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_url == "http://example.com/path":
        flow.response = http.Response.make(
            200,  # (optional) status code
            b"Hello World",  # (optional) content
            {"Content-Type": "text/html"}  # (optional) headers
        )
Source: https://github.com/mitmproxy/mitmproxy/blob/main/examples/addons/http-reply-from-proxy.py
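If I remember right, you can load such an addon with mitmproxy's script option, e.g. mitmdump -s http-reply-from-proxy.py, and matching requests are then answered by the proxy without ever reaching the target server.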

How to check for proper format in my API response

Currently running tests for my REST API which:
takes an endpoint from the user
using that endpoint, grabs info from a server
sends it to another server to be translated
then proceeds to jsonify the data.
I've written a series of automated tests, and I cannot get one to pass: the test that actually checks the content of the response. I've tried including several variations of what the test is expecting, but I feel it's the actual implementation that's the issue. Here's the expected API response from the client request:
{ "name": "random_character", "description": "Translated description of requested character is output here" }
Here is the testing class inside my test_main.py:
class Test_functions(unittest.TestCase):
    # checking if a response of 200 is returned
    def test_healthcheck_PokeAPI(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/")
        status_code = response.status_code
        self.assertEqual(status_code, 200)

    # the status code should be a redirect, i.e. 308, so I made a separate test for this
    def test_healthcheck_ShakesprAPI(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/charizard")
        self.assertEqual(response.status_code, 308)

    def test_response_content(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/charizard")
        self.assertEqual(response.content_type,
                         'application/json')  # <<<< this test is failing

    def test_trans_shakespeare_response(self):
        manualtest = app.test_client(self)
        response = manualtest.get("/pokemon/charizard")
        self.assertFalse(b"doth" in response.data)
Traceback:
AssertionError: 'text/html; charset=utf-8' != 'application/json' - text/html; charset=utf-8 + application/json
Any help would be greatly appreciated
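The failing assertion lines up with the 308 noted above: the test client asserts on the redirect response itself, which Flask serves as text/html. A possible sketch, assuming the 308 is Flask's trailing-slash redirect, is to follow redirects so the assertion runs against the final JSON response:
def test_response_content_followed(self):
    manualtest = app.test_client(self)
    # follow the 308 so we assert on the final response instead of the redirect page
    response = manualtest.get("/pokemon/charizard", follow_redirects=True)
    self.assertEqual(response.content_type, 'application/json')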

RestClient.post response generating "error groovyx.net.http.ResponseParseException : OK"

I am using Groovy to consume a POST REST API. Here is my code:
@Grab(group = 'org.codehaus.groovy.modules.http-builder', module = 'http-builder', version = '0.7.1')
import groovyx.net.http.RESTClient

def url = "https://poc-ser.tst.be/"
def client = new RESTClient(url)
client.ignoreSSLIssues()
client.auth.basic(login, pswd)
client.post(
    path: "osidoc/api/rest/production/documents?id=46",
    contentType: 'application/json',
    headers: [Accept: 'application/json', Authorization: 'Authorization']
)
But I always get the error "groovyx.net.http.ResponseParseException: OK, caused by: groovy.json.JsonException: Unable to determine the current character", even though the response should be of type "application/msword" (i.e. the response should be a Word doc).
EDIT
I tried changing the Accept header to 'application/octet-stream', but it showed me another error: "406 Invalid Accept header. Only XML and JSON are supported."

Redirection using Scrapy Spider Middleware (Unhandled error in Deferred)

I've made a spider using Scrapy that first solves a CAPTCHA at a redirected address before accessing the main website I intend to scrape. It says that I have an HTTP error causing an infinite loop, but I can't find which part of the script is causing this.
In the middleware:
from scrapy.downloadermiddlewares.redirect import RedirectMiddleware

class ProtectRedirectMiddleware(RedirectMiddleware):
    def __init__(self, settings):
        super().__init__(settings)
        self.source = urllib.request.urlopen('http://sampleurlname.com/')
        soup = BeautifulSoup(source, 'lxml')

    def _redirect(self, redirected, request, spider, reason):
        # act normally if this isn't a CAPTCHA redirect
        if not self.is_protected(redirected.url):
            return super()._redirect(redirected, request, spider, reason)
        # if this is a CAPTCHA redirect
        logger.debug(f'The protect URL is triggered for {request.url}')
        request.cookies = self.bypass_protection(redirected.url)
        request.dont_filter = True
        return request

    def is_protected(self, url):
        return 'sampleurlname.com/protect' in url

    def bypass_protection(self, url=None):
        # only navigate if any explicit url is provided
        if url:
            url = url or self.source.geturl(url)
            img = soup.find_all('img')[0]
            imgurl = img['src']
            urllib.request.urlretrieve(imgurl, "captcha.png")
            return self.solve_captcha(imgurl)
        # wait for the redirect and try again
        self.wait_for_redirect()
        return self.bypass_protection()

    def wait_for_redirect(self, url=None, wait=0.1, timeout=10):
        url = self.url
        for i in range(int(timeout // wait)):
            time.sleep(wait)
            if self.response.url() != url:
                return self.response.url()
        logger.error(f'Maybe {self.response.url()} isn\'t a redirect URL')
        raise Exception('Timed out')

    def solve_captcha(self, img, width=150, height=50):
        # open image
        self.img = 'captcha.png'
        img = Image.open("captcha.png")
        # image manipulation - simplified
        # input the captcha text - simplified
        # click the submit button - simplified
        # save the URL
        url = self.response.url()
        # try again if wrong
        if self.is_protected(self.wait_for_redirect(url)):
            return self.bypass_protection()
        # return the cookies as a dict
        cookies = {}
        for cookie_string in self.response.css.cookies():
            if 'domain=sampleurlname.com' in cookie_string:
                key, value = cookie_string.split(';')[0].split('=')
                cookies[key] = value
        return cookies
Then, this is the error I get when I run the scrapy crawl of my spider:
Unhandled error in Deferred:
2018-08-06 16:34:33 [twisted] CRITICAL: Unhandled error in Deferred:
2018-08-06 16:34:33 [twisted] CRITICAL:
Traceback (most recent call last):
File "/username/anaconda/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/crawler.py", line 80, in crawl
self.engine = self._create_engine()
File "/username/anaconda/lib/python3.6/site-packages/scrapy/crawler.py", line 105, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "/username/anaconda/lib/python3.6/site-packages/scrapy/core/engine.py", line 69, in __init__
self.downloader = downloader_cls(crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__
self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/middleware.py", line 58, in from_crawler
return cls.from_settings(crawler.settings, crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/middleware.py", line 36, in from_settings
mw = mwcls.from_crawler(crawler)
File "/username/anaconda/lib/python3.6/site-packages/scrapy/downloadermiddlewares/redirect.py", line 26, in from_crawler
return cls(crawler.settings)
File "/username/...../scraper/myscraper/myscraper/middlewares.py", line 27, in __init__
self.source = urllib.request.urlopen('http://sampleurlname.com/')
File "/username/anaconda/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 532, in open
response = meth(req, response)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 564, in error
result = self._call_chain(*args)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 756, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "/username/anaconda/lib/python3.6/urllib/request.py", line 532, in open
It basically repeats the bottom part of these over and over: open, http_response, error, _call_chain, and http_error_302, until these show at the end:
File "/username/anaconda/lib/python3.6/urllib/request.py", line 746, in http_error_302
self.inf_msg + msg, headers, fp)
urllib.error.HTTPError: HTTP Error 307: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Temporary Redirect
In settings.py:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': None,
    'myscrape.middlewares.ProtectRedirectMiddleware': 600,
}
Your issue has nothing to do with Scrapy itself. You are using blocking requests in your middleware initialisation.
This request seems to be stuck in a redirect loop. This usually happens when websites do not act appropriately and require cookies to let you through:
First you connect and get a 30x redirect response with some Set-Cookie headers.
You follow the redirect, but without the Cookie header, so the page doesn't let you through.
Python's urllib doesn't handle cookies, so try this:
import logging
import urllib.error
import urllib.request
from http.cookiejar import CookieJar

from scrapy.selector import Selector

def __init__(self):
    try:
        # url: the page to fetch, e.g. 'http://sampleurlname.com/' from the question
        req = urllib.request.Request(url)
        cj = CookieJar()
        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
        response = opener.open(req)
        source = response.read().decode('utf8', errors='ignore')
        response.close()
    except urllib.error.HTTPError as e:
        logging.error(f"couldn't initiate middleware: {e}")
        return
    # you should use scrapy selectors instead of beautiful soup here
    # soup = BeautifulSoup(source, 'lxml')
    selector = Selector(text=source)
Alternatively, you could use the requests package, which handles cookies by itself.
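For example, a minimal sketch with requests, using the placeholder URL from the question; a Session stores the cookies it receives and resends them on the redirected requests:
import requests

session = requests.Session()
# placeholder URL taken from the question
response = session.get('http://sampleurlname.com/', timeout=10)
response.raise_for_status()
source = response.text  # cookies picked up along the redirect chain stay in session.cookies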