Vert.x-Web HttpServer returns garbled characters when the response contains Chinese text

I ran into an issue while learning Vert.x-Web: the code below returns garbled characters for Chinese words. Can anyone help?
HttpServer server = vertx.createHttpServer();
server.requestHandler(request -> {
    // This handler gets called for each request that arrives on the server
    HttpServerResponse response = request.response();
    response.putHeader("content-type", "text/plain charset='utf-8'");
    // Write to the response and end it
    response.end("Hello World!中文");
});
server.listen(8080);

I just found the reason. Vert.x itself supports UTF-8 encoding, but you need to make sure that all the HTML files and related assets, including CSS, JS, and font files, are saved in UTF-8. You can open a file in Notepad to check whether it is UTF-8; if not, use "Save As..." to re-save it in UTF-8.
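For completeness, here is a minimal sketch of the same handler with a well-formed Content-Type header; the value in the question, "text/plain charset='utf-8'", is missing the semicolon before the charset parameter, so clients may ignore it. The wrapper class and main method are my addition to make the example runnable (assuming Vert.x 3.x-style APIs); response.end(String) in Vert.x writes the string as UTF-8 by default.
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServer;
import io.vertx.core.http.HttpServerResponse;

public class Utf8HelloServer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        HttpServer server = vertx.createHttpServer();
        server.requestHandler(request -> {
            HttpServerResponse response = request.response();
            // Semicolon between the media type and the charset parameter, no quotes around utf-8
            response.putHeader("content-type", "text/plain; charset=utf-8");
            // end(String) encodes the string as UTF-8 before writing it
            response.end("Hello World!中文");
        });
        server.listen(8080);
    }
}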

Related

PidTagInternetCodePage not present in msg file

Looking at the MS-OXPROPS, MS-OXCMSG and MS-OXCMAIL documentation, it is said that the user should include PidTagInternetCodePage to indicate the appropriate code page for the HTML content in order to parse it properly.
However, opening up the ole streams of the msg files, I could not find the 0x3FDE stream that indicates the code page id, but only found some semblance of a code page id in the compressed RTF stream (first line).
Am I looking at the streams wrongly or are the other properties hidden in other streams? If so, how do I look for them?
Thanks in advance.
The PidTagInternetCodePage property is not guaranteed to be present and is in no way required, especially if it is a Unicode MSG file. The HTML body can include the meta tag with encoding in the header, and even then, it won't be necessary if all Unicode characters in the HTML body are properly HTML-encoded (which is always a good idea).
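To illustrate that last point, here is a small, hypothetical Java helper (not from the original answer; the class and method names are made up) that replaces every non-ASCII character in an HTML body with a numeric character reference, so the markup no longer depends on any particular code page:
public class HtmlBodyEscaper {
    // Replace each non-ASCII code point with a numeric character reference such as &#x4e2d;
    static String escapeNonAscii(String html) {
        StringBuilder out = new StringBuilder(html.length());
        for (int i = 0; i < html.length(); ) {
            int cp = html.codePointAt(i);
            if (cp < 128) {
                out.append((char) cp); // plain ASCII passes through unchanged
            } else {
                out.append("&#x").append(Integer.toHexString(cp)).append(';');
            }
            i += Character.charCount(cp);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // prints: <p>Hello &#x4e2d;&#x6587;</p>
        System.out.println(escapeNonAscii("<p>Hello 中文</p>"));
    }
}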

How to include file bytes into the POST request body in Jmeter? (What encoding to use)

I have to perform POST requests from JMeter. I use the default HTTP Request sampler, where I specify the JSON structure that is understood by the application under test. One part of this JSON has to contain the binary data of a PDF file.
For reading the file I use a Beanshell Sampler in the setUp Thread Group:
File file = new File(bsh.args[0]);
try {
    FileInputStream fis = new FileInputStream(file);
    byte[] array = new byte[(int) file.length()];
    fis.read(array);
    fis.close();
    log.info("File is read.");
    vars.put("fileEntity", new String(array, "cp1252"));
} catch (e) {
    e.printStackTrace();
    log.error(e.getMessage());
}
The problem is that when I look at the request in Fiddler, I can see a difference in how the binary object is represented there compared with Postman's requests:
[screenshots of the request body as sent by Postman and by JMeter]
I think that there is something wrong with the encoding when I create the String object in the Beanshell Sampler. What encoding is correct?
I tried to use the RawDataSource plugin, but it doesn't help, for two reasons:
It fails to read my file, saying "Error reading next chunk".
It uses the same approach as I do to read the file, but with UTF-8 encoding. I tried that encoding as well, but without any success.
My expectation is that your fis.read(array); call relies on the default value of the file.encoding system property, which may or may not be cp1252.
I would recommend introducing an InputStreamReader and explicitly specifying the encoding there, like:
InputStreamReader isr = new InputStreamReader(fis, "cp1252");
Also be aware that starting from JMeter 3.1 it is recommended to use JSR223 Test Elements and the Groovy language for scripting, mainly because Groovy performs much better compared to Beanshell.
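To make that suggestion concrete, here is a rough Beanshell-style sketch (assuming the same JMeter bindings as the original sampler: bsh.args, vars, and log) that decodes the file explicitly with cp1252 through an InputStreamReader instead of relying on the JVM's default encoding:
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStreamReader;

File file = new File(bsh.args[0]);
InputStreamReader isr = new InputStreamReader(new FileInputStream(file), "cp1252");
try {
    StringBuilder body = new StringBuilder();
    char[] chunk = new char[8192];
    int n;
    while ((n = isr.read(chunk)) != -1) {
        body.append(chunk, 0, n); // accumulate the characters decoded as cp1252
    }
    vars.put("fileEntity", body.toString());
    log.info("File read and decoded with cp1252.");
} catch (Exception e) {
    log.error(e.getMessage());
} finally {
    isr.close();
}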

NetSuite RESTlet output pdf

NetSuite Restlet PDF file encoding issue
The above thread seems to give a solution for outputting a PDF with a NetSuite RESTlet. As far as I know, you cannot output a PDF from a RESTlet, so I'm very confused. I am using a RESTlet to generate a report, and the information ultimately needs to be output to a PDF, so I was trying to see if there is a workaround. I tried the answer code from the above thread and got the expected error: "error code: INVALID_RETURN_DATA_FORMAT error message: Invalid data format. You should return TEXT."
Am I missing something? Is there a way to export xml to a pdf with a NetSuite RESTlet?
The thread you reference discusses how to generate a PDF file in NetSuite. If you want to return a PDF from a RESTlet you will have to return it as a member of a JSON object, e.g.:
var pdfFile = genPDF(); // base this on the sample
return {
    fileName: pdfFile.getName(),
    fileContent: nlapiEncrypt(pdfFile.getValue(), 'base64')
};
And then your receiver will have to create the actual file.
Recall that RESTLets are for application-to-system communications. If you are trying to return a PDF to a browser you should probably be using a Suitelet.
If this is part of a larger app and you need the RESTLet then review this post: Save base64 string as PDF at client side with JavaScript for options to display the RESTLet response.
Reading through that answer, it appears you'll need to encode/convert the PDF to string format before returning, so you'll need to use base64 encoding.
The NS method nlapiEncrypt(content, 'base64') seems like it might be a good place to start.
Another avenue to investigate, which I haven't tried, is to first save the PDF in the file cabinet, then to return a public link to that file. You'll need to make sure the file has the correct permissions.

Need to find the requests equivalent of urlopen() from urllib2

I am currently trying to modify a script to use the requests library instead of the urllib2 library. I haven't really used it before and I am looking to do the equivalent of urlopen("http://www.example.org").read(), so I tried the requests.get("http://www.example.org").text function.
This works fine with normal, everyday HTML; however, when I fetch from this URL (https://gtfsrt.api.translink.com.au/Feed/SEQ) it doesn't seem to work.
So I wrote the code below to print out the responses from the same URL using both the requests and urllib2 libraries.
import urllib2
import requests
#urllib2 request
request = urllib2.Request("https://gtfsrt.api.translink.com.au/Feed/SEQ")
result = urllib2.urlopen(request)
#requests request
result2 = requests.get("https://gtfsrt.api.translink.com.au/Feed/SEQ")
print result2.encoding
#urllib2 write to text
open("Output.txt", 'w').close()
text_file = open("Output.txt", "w")
text_file.write(result.read())
text_file.close()
open("Output2.txt", 'w').close()
text_file = open("Output2.txt", "w")
text_file.write(result2.text)
text_file.close()
The urlopen().read() works fine, but requests.get().text doesn't work for the given URL. I suspect it has something to do with encoding, but I don't know what. Any thoughts?
Note: the supplied URL is a feed in the Google Protocol Buffers format; once I receive the message, I give the feed to a Google library that interprets it.
Your issue is that you're making the requests module interpret binary content in a response as text.
A response from the requests library has two main ways to access the body of the response:
Response.content - will return the response body as a bytestring
Response.text - will decode the response body as text and return unicode
Since protocol buffers are a binary format, you should use result2.content in your code instead of result2.text.
Response.content will return the body of the response as-is, in bytes. For binary content this is exactly what you want. For text content that contains non-ASCII characters, this means the content must have been encoded by the server into a bytestring using a particular encoding that is indicated by either an HTTP header or a <meta charset="..." /> tag. In order to make sense of those bytes, they therefore need to be decoded after receiving, using that charset.
Response.text now is a convenience method that does exactly this for you. It assumes the response body is text, and looks at the response headers to find the encoding, and decodes it for you, returning unicode.
But if your response doesn't contain text, this is the wrong method to use. Binary content doesn't contain characters, because it's not text, so the whole concept of character encoding does not make any sense for binary content - it's only applicable to text composed of characters. (That's also why you're seeing response.encoding == None - it's just bytes, there is no character encoding involved).
See Response Content and Binary Response Content in the requests documentation for more details.

Manually generating x-gwt-rpc from Python

I want to access a GWT service from a Python script, so I want to generate an x-gwt-rpc request manually. I can't seem to find any info on the format of a GWT RPC call, since everybody does it from Java (so the call is generated by the framework). Where can I find some detailed documentation about this format?
I don't think it is a trivial task, but because GWT is open source I would say that the source code is pretty good documentation for how it works, if you know Java, that is.
GWT source
I stumbled on the same problem as you and I think I solved it rather easily.
Though I haven't figured out how to catch the response properly, I managed to get the response and successfully send the request. Here is what I did:
import requests

url = 'your url'
header = {
    'Accept': '*/*',
    'Accept-Encoding': 'gzip, deflate',
    # ...the rest of the request headers
}
cookie = {}  # cookies, if needed
# This would be the request payload you can see in the browser's F12 tools;
# just copy it and paste it as a string (UTF-8 chars).
data_g = 'request payload copied from the browser'
t = requests.post(url, headers=header, data=data_g, cookies=cookie)
print vars(t).keys()  # prints all attributes of the response object t
print t
Also these are some good links you should check out:
https://github.com/GDSSecurity/GWT-Penetration-Testing-Toolset
https://docs.google.com/document/d/1eG0YocsYYbNAtivkLtcaiEE5IOF5u4LUol8-LL0TIKU/edit?hl=de&forcehl=1