Is it possible to add a \ in the body of the message? - stocktwits

I'm using the stocktwits api to pull tweets and process them. When there are quotes in the body of the message, the JSON parser gives errors. Is it possible to add a \ before the extra quotes in the body?
For example:
{"body":"ChOTD-11/3/16 CBOE "Equity Put":Call Ratio ISEE Call:Put Ratio Hits Extreme > 1.00 $SPY $SPX"}
Here the quotes around "Equity Put" should have been escaped,
like this: \"Equity Put\"
{"body":"ChOTD-11/3/16 CBOE \"Equity Put\":Call Ratio ISEE Call:Put Ratio Hits Extreme > 1.00 $SPY $SPX"}

The response comes as an html encoded string.
For example:
"Test with "quotes""
You need to be using a JSON serializer that decodes these html entities when parsing, which will turn it into
"Test with \"quotes\""


Requests fail authorization when query string contains certain characters

I'm making requests to Twitter, using the OAuth1.0 signing process to set the Authorization header. They explain it step-by-step here, which I've followed. It all works, most of the time.
Authorization fails whenever special characters are sent without percent encoding in the query component of the request. For example, ?status=hello%20world! fails, but ?status=hello%20world%21 succeeds. But the change from ! to the percent encoded form %21 is only made in the URL, after the signature is generated.
So I'm confused as to why this fails, because AFAIK that's a legally encoded query string. Only the raw strings ("status", "hello world!") are used for signature generation, and I'd assume the server would remove any percent encoding from the query params and generate its own signature for comparison.
When it comes to building the URL, I let URLComponents do the work, so I don't add percent encoding manually, ex.
var urlComps = URLComponents()
urlComps.scheme = "https"
urlComps.host = host
urlComps.path = path
urlComps.queryItems = [URLQueryItem(name: "status", value: "hello world!")]
urlComps.percentEncodedQuery // "status=hello%20world!"
I wanted to see how Postman handled the same request. I selected OAuth1.0 as the Auth type and plugged in the same credentials. The request succeeded. I checked the Postman console and saw ?status=hello%20world%21; it was percent encoding the !. I updated Postman, because a nice little prompt asked me to. Then I tried the same request; now it was getting an authorization failure, and I saw ?status=hello%20world! in the console; the ! was no longer being percent encoded.
I'm wondering who is at fault here. Perhaps Postman and I are making the same mistake. Perhaps it's with Twitter. Or perhaps there's some proxy along the way that idk, double encodes my !.
The OAuth1.0 spec says this, which I believe is in the context of both client (taking a request that's ready to go and signing it before it's sent), and server (for generating another signature to compare against the one received):
The parameters from the following sources are collected into a
single list of name/value pairs:
The query component of the HTTP request URI as defined by
[RFC3986], Section 3.4. The query component is parsed into a list
of name/value pairs by treating it as an
"application/x-www-form-urlencoded" string, separating the names
and values and decoding them as defined by
[W3C.REC-html40-19980424], Section 17.13.4.
That last reference, here, outlines the encoding for application/x-www-form-urlencoded, and says that space characters should be replaced with +, non-alphanumeric characters should be percent encoded, name separated from value by =, and pairs separated by &.
So, the OAuth1.0 spec says that the query string of the URL needs to be decoded as defined by application/x-www-form-urlencoded. Does that mean that our query string needs to be encoded this way too?
It seems to me that, if a request is to be signed using OAuth1.0, the query component of the URL that gets sent must be encoded differently from how it would normally be encoded. That's a pretty significant detail if you ask me, and I haven't seen it explicitly mentioned, even in Twitter's documentation. And evidently the folks at Postman overlooked it too? Unless I'm not supposed to be using URLComponents to build a URL, but that's what it's for, no? Have I understood this correctly?
Note: ?status=hello+world%21 succeeds; it tweets "hello world!"
I ran into a similar issue.
Put the status in the POST body, not the query string.
Percent-encoding:
private encode(str: string) {
  // encodeURIComponent() escapes all characters except: A-Z a-z 0-9 - _ . ! ~ * ' ( )
  // RFC 3986 section 2.3 Unreserved Characters (January 2005): A-Z a-z 0-9 - _ . ~
  return encodeURIComponent(str)
    .replace(/[!'()*]/g, c => "%" + c.charCodeAt(0).toString(16).toUpperCase());
}
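For comparison, the same RFC 3986-strict encoding can be sketched in Python with urllib.parse.quote by restricting the safe set to the unreserved characters; this is just an illustration of the encoding rule described above, not an official Twitter helper.

from urllib.parse import quote

def rfc3986_encode(value):
    # Percent-encode everything except the RFC 3986 unreserved characters
    # (A-Z a-z 0-9 - _ . ~), which is what the OAuth 1.0 signature base string expects.
    return quote(value, safe="-._~")

print(rfc3986_encode("hello world!"))  # hello%20world%21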

What alternative ways exist for sending updates through REST?

I was trying to get my head around the possible alternative ways to structure a GET or POST call in REST.
(This is not homework, more like just attempting to get a better understanding of the options.)
Here are the alternatives I have gathered so far:
GET-based calls
The following alternatives exist for structuring the submitted parameters:
[name]=[value] pairs (name and value joined by an equals sign, pairs separated by ampersands), sent in:
The URL, after the URI path, following a question mark.
(Matrix parameters) [name]=[value] pairs (name and value joined by an equals sign, pairs separated by semicolons), sent in:
The URL, after the URI path, before the question mark.
POST and PUT-based calls
The following alternatives exist for structuring the submitted parameters:
JSON, sent in:
The content part of the request
XML, sent in:
The content part of the request
[name]=[value] pairs, sent in:
The content part of the request
The request header
Are there any other ways to structure the parameters?
You can use any hypermedia content type. That doesn't mean every client and server will understand every type (which is why we have content negotiation).
The most common are:
application/json
application/x-www-form-urlencoded
multipart/form-data
text/html
text/xml
application/xml
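To make the difference concrete, here is a small, hypothetical sketch using Python's requests library that sends the same parameters three common ways (the URL is a placeholder):

import requests

url = "https://api.example.com/items"  # placeholder endpoint
params = {"name": "widget", "color": "blue"}

# 1. GET with the parameters in the query string (?name=widget&color=blue)
requests.get(url, params=params)

# 2. POST with an application/x-www-form-urlencoded body
requests.post(url, data=params)

# 3. POST with an application/json body
requests.post(url, json=params)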

Need to find the requests equivalent of urlopen() from urllib2

I am currently trying to modify a script to use the requests library instead of the urllib2 library. I haven't really used it before, and I am looking to do the equivalent of urlopen("http://www.example.org").read(), so I tried requests.get("http://www.example.org").text.
This works fine with normal everyday html, however when I fetch from this url (https://gtfsrt.api.translink.com.au/Feed/SEQ) it doesn't seem to work.
So I wrote the below code to print out the responses from the same url using both the requests and urllib2 libraries.
import urllib2
import requests
#urllib2 request
request = urllib2.Request("https://gtfsrt.api.translink.com.au/Feed/SEQ")
result = urllib2.urlopen(request)
#requests request
result2 = requests.get("https://gtfsrt.api.translink.com.au/Feed/SEQ")
print result2.encoding
#urllib2 write to text
open("Output.txt", 'w').close()
text_file = open("Output.txt", "w")
text_file.write(result.read())
text_file.close()
open("Output2.txt", 'w').close()
text_file = open("Output2.txt", "w")
text_file.write(result2.text)
text_file.close()
The urlopen().read() works fine, but requests.get().text doesn't work for the given url. I suspect it has something to do with encoding, but I don't know what. Any thoughts?
Note: The supplied url is a feed in the Google protocol buffer format; once I receive the message I give the feed to a Google library that interprets it.
Your issue is that you're making the requests module interpret binary content in a response as text.
A response from the requests library has two main ways to access the body of the response:
Response.content - will return the response body as a bytestring
Response.text - will decode the response body as text and return unicode
Since protocol buffers are a binary format, you should use result2.content in your code instead of result2.text.
Response.content will return the body of the response as-is, in bytes. For binary content this is exactly what you want. For text content that contains non-ASCII characters, this means the content must have been encoded by the server into a bytestring using a particular encoding that is indicated by either an HTTP header or a <meta charset="..." /> tag. In order to make sense of those bytes, they therefore need to be decoded after receiving, using that charset.
Response.text is a convenience property that does exactly this for you. It assumes the response body is text, looks at the response headers to find the encoding, and decodes it for you, returning unicode.
But if your response doesn't contain text, this is the wrong method to use. Binary content doesn't contain characters, because it's not text, so the whole concept of character encoding does not make any sense for binary content - it's only applicable to text composed of characters. (That's also why you're seeing response.encoding == None - it's just bytes, there is no character encoding involved).
See Response Content and Binary Response Content in the requests documentation for more details.
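Assuming the bytes are then handed to the GTFS-realtime protocol buffer bindings (the gtfs-realtime-bindings package here is my assumption, not something stated in the question), the fix looks roughly like this:

import requests
from google.transit import gtfs_realtime_pb2  # assumed: pip install gtfs-realtime-bindings

response = requests.get("https://gtfsrt.api.translink.com.au/Feed/SEQ")

feed = gtfs_realtime_pb2.FeedMessage()
feed.ParseFromString(response.content)  # .content is raw bytes; .text would mangle the protobuf

print(len(feed.entity))  # number of entities in the feed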

Base64 decoding of MIME email not working (GMail API)

I'm using the GMail API to retrieve an email contents. I am getting the following base64 encoded data for the body: http://hastebin.com/ovucoranam.md
But when I run it through a base64 decoder, it either returns an empty string (error) or something that resembles the HTML data but with a bunch of weird characters.
Help?
I'm not sure if you've solved it yet, but GmailGuy is correct. You need to convert the body to the Base64 RFC 4648 standard. The gist is you'll need to replace - with + and _ with /.
I've taken your original input and did the replacement: http://hastebin.com/ukanavudaz
And used base64decode.org to decode it, and it was fine.
You need to use URL (aka "web") safe base64 decoding alphabet (see rfc 4648), which it doesn't appear you're doing. Using the standard base64 alphabet may work sometimes but not always (2 of the characters are different).
Docs don't seem to consistently mention this important detail. Here's one where it does though:
https://developers.google.com/gmail/api/guides/drafts
Also, if your particular library doesn't support the "URL safe" alphabet then you can do string substitution on the string first ("-" with "+" and "_" with "/") and then do normal base64 decoding on it.
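If you do end up doing the substitution by hand, a minimal Python sketch of that idea (the function name is just for illustration) could look like this:

import base64

def decode_gmail_body(data):
    # Map the URL-safe alphabet back to the standard one, as described above.
    standard = data.replace("-", "+").replace("_", "/")
    # Re-add any stripped "=" padding so the standard decoder accepts it.
    standard += "=" * (-len(standard) % 4)
    return base64.b64decode(standard)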
I had the same issue decoding the 'data' fields in the message object response from the Gmail API. The Google Ruby API library wasn't decoding the text correctly either. I found I needed to do a url-safe base64 decode:
@data = Base64.urlsafe_decode64(JSON.parse(@result.data.to_json)["payload"]["body"]["data"])
Hope that helps!
Here is an example for Python 2.x and 3.x:
import base64

decodedContents = base64.urlsafe_b64decode(payload["body"]["data"].encode('ASCII'))
If you only need to decode for displaying purposes, consider using atob to decode the messages in JavaScript frontend (see ref).
I found, whilst playing with the API result, that once I had drilled down to the body I was given an option to decode it among the available methods.
val message = mService!!.users().messages().get(user, id).setFormat("full").execute()
println("Message snippet: " + message.snippet)
if (message.payload.mimeType == "text/plain") {
    val body = message.payload.body.decodeData() // getValue("body")
    Log.i("BODY", body.toString(Charset.defaultCharset()))
}
The result:-
com.example.quickstart I/BODY: ISOLATE NORMAL: 514471,Fap, South Point Rolleston, 55 Faringdon Boulevard , Rolleston, 30 May 2018 20:59:21
I copied the base64 text to a file (b64.txt), then base64-decoded it using base64 (from coreutils) with the -d option (see http://linux.die.net/man/1/base64), and I got text that was perfectly readable. The command I used was:
cat b64.txt | base64 -d

Send the POST without URL-encoding the data

I post a form, and the value is
<
but it ends up being encoded as
%3C
So, how do I keep the < ? Or how do I send the POST without URL-encoding the data?
Try changing your form's enctype value to text/plain. The only encoding done by that is to replace spaces with + symbols.
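If you are making the request from code rather than an HTML form, another option is to send the raw value yourself with an explicit content type; here is a hypothetical sketch using Python's requests (the URL and the choice of text/plain are assumptions, not part of the original question):

import requests

# Send the body exactly as-is, with no form encoding applied.
response = requests.post(
    "https://example.com/endpoint",         # placeholder URL
    data="<",                               # raw body, not URL-encoded
    headers={"Content-Type": "text/plain"},
)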