We are integrating with a third party that only supports file uploads via cURL, which we only found out after completing the implementation as a normal multi-part form and running into CORS issues. We therefore had to proxy the requests via our own API, but took a simpler approach that also solved/simplified a few other things: attaching the base64-encoded string for the file(s) to the request payload, as opposed to creating a new multi-part endpoint.
We base64 encode the file's byte data and attach it as part of the POST payload. We then noticed some unusual performance issues: a 310 KB file takes around 20 seconds before the network request is even started.
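For reference, the upload is roughly of this shape (a sketch; the endpoint and field names are illustrative, not our real API):

import 'dart:convert';
import 'dart:io';

import 'package:dio/dio.dart';

Future<void> uploadFile(Dio dio, File file) async {
  final bytes = await file.readAsBytes();
  final encoded = base64Encode(bytes); // ~310 KB of bytes -> ~413 KB string

  await dio.post(
    '/attachments', // hypothetical proxy endpoint on our API
    data: {
      'fileName': file.uri.pathSegments.last,
      'content': encoded,
    },
  );
}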
After some investigation and CPU profiling, we narrowed it down to the move method of Dart's Iterable inside the Dio call chain. Given that the majority of our network requests include lists, this made little sense.
To confirm, I pulled in the http package and made the same request with it; the time dropped from roughly 20-25 seconds to under 2, so there is definitely something fishy going on with Dio in our case.
We are on Dio v4.0.0 and Retrofit 2.0.1, and owing to other dependencies that we cannot update at this time we cannot move to newer versions of these packages, so it is possible this edge case has already been fixed in a later release. However, I can find no issues on Dio or Retrofit that look related, so it is also possible they are not directly or indirectly causing this at all.
A solution would be fantastic, as I would prefer not to be running two network services.
Here is a link to the CPU profile, if that helps any. https://drive.google.com/file/d/1Dh7rEg7c04xxUvTmJB42Zs0RqEmdVPjQ/view?usp=sharing
Related
My question is quite related to this one
I have spent weeks of headaches trying to fight this, but there does not seem to be a solution worth mentioning, apart from the answer to the question above, which is a terrible workaround; there really seems to be nothing else around.
We are trying to communicate with a legacy system that has an established and running web service, with certain WS-Security constraints declared in its WSDL. We cannot change anything on the server, we just have to do as it bids. We also have a third party client implementation that actually works and communicates with the server, so we know that the communication works - using THAT specific client. Now, we want to make our own.
The WS-Security policy in question includes encryption and signing. There were the following options for what to do:
write our own code to encrypt/decrypt and sign/verify
use one of the ready-made JAX-WS implementations to do the above for us
The second option is of course what we tried. That in turn branches into the following:
Metro/WSIT
Apache CXF
Everybody on the web suggests the latter option (which I tried too), but for the time being I went with the former, especially since we have no Spring integration to take advantage of CXF's good support for it.
After struggling with somewhat ambiguous documentation and various wizards (NetBeans), we came to a solution that contained very little custom code, a configuration file with some keystores, and the usual code generated by the wsimport utility.
Quite some time passed, spent dumping the SOAP XML requests and responses and comparing the failing ones we produced to the successful ones from the third-party client. Lots of pain, with no results: the messages differed in various ways, but the core logic and structure were okay, and you could not really compare the encrypted parts anyway. After some time I ended up with a client that sent something, and actually received something back, but failed to decrypt the response.
Actually, it was decrypted all right, but the signature digest verification was failing. It is worth mentioning that the original XML message contained a "&" character as well as multiple newlines, i.e. the payload of the SOAP message was not syntactically correct XML. Anyway.
This digest verification seems to be deeply rooted inside the Metro/WSIT stack, and I could find absolutely no way to intercept and correct that digest, or rather the contents over which the digest was calculated. The problem, evidently, was that some special characters were translated or canonicalized either before or after the digest calculation, and we (or rather the underlying implementation that I tried to use to keep my hands clean) did something different from what the server side of the web service did.
Even the Metro tubes (nice name, but horrendously scarce documentation; it seems that nobody uses Metro/WSIT these days, or should I say nobody uses SOAP, or at least SOAP with this level of security - when I tried Apache CXF, the generated SOAP messages were deceptively similar) and their way of intercepting messages did not help. When trying to get the raw contents of the message, none of the provided methods (Packet.getMessage().writeTo... and other variations) could actually bypass the digest verification, because they ALL tried to read the contents the StAX/streaming way, invoking StreamingPayloadDigester.accept, which invariably failed.
But hope dies last, and I kept trying again and again to find some obscure, undocumented magic to make it work. I was about to call it a day and dig hard into Java encryption myself - until I found the question linked above, that is. Its answer "exploits" a log message that gets printed from deep within the Metro code (from wssx-impl, I think) containing the canonicalized decrypted message, just before the digest mismatch exception is thrown. Thankfully, this message is printed via java.util.logging, which can be intercepted in various ways - e.g. by pushing it onto some kind of synchronized queue to be consumed by my client. Ugh. If somebody has a better idea, please write your thoughts.
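For what it is worth, the interception I have in mind is roughly this (a sketch only; attaching to the root logger is the lazy option, and how exactly the payload shows up in the record - message text, parameters, level - would need to be checked against the real log output):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class DigestLogInterceptor extends Handler {

    private final BlockingQueue<String> messages = new LinkedBlockingQueue<>();

    @Override
    public void publish(LogRecord record) {
        String msg = record.getMessage();
        // Replace this check with whatever marker identifies the WSIT
        // message that carries the canonicalized decrypted payload.
        if (msg != null && msg.contains("digest")) {
            messages.offer(msg);
        }
    }

    @Override
    public void flush() {
    }

    @Override
    public void close() {
    }

    public BlockingQueue<String> getMessages() {
        return messages;
    }

    public static DigestLogInterceptor attach() {
        DigestLogInterceptor interceptor = new DigestLogInterceptor();
        // The root logger sees records from every java.util.logging logger;
        // in practice you would narrow this to the wssx-impl logger.
        Logger.getLogger("").addHandler(interceptor);
        return interceptor;
    }
}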
Thank you all.
Finally, I resorted to rebuilding Metro/WSIT version 2.1.1, found on GitHub, commenting out a single line in the WS-SX Implementation project (ws-sx\wssx-impl...\StreamingPayloadDigester.java:145):
if (!Arrays.equals(originalDigest, calculatedDigest)) {
    XMLSignatureException xe = new XMLSignatureException(LogStringsMessages.WSS_1717_ERROR_PAYLOAD_VERIFICATION());
    logger.log(Level.WARNING, LogStringsMessages.WSS_1717_ERROR_PAYLOAD_VERIFICATION()); //,xe);
    // bypass throwing exception
    // throw new WebServiceException(xe);
}
It could have been done in a better way, introducing a flag, for instance.
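For instance, something along these lines (the system property name is made up for illustration):

if (!Arrays.equals(originalDigest, calculatedDigest)) {
    XMLSignatureException xe = new XMLSignatureException(LogStringsMessages.WSS_1717_ERROR_PAYLOAD_VERIFICATION());
    logger.log(Level.WARNING, LogStringsMessages.WSS_1717_ERROR_PAYLOAD_VERIFICATION());
    // Only skip the check when the flag is explicitly set, so the stock
    // behaviour stays the default.
    if (!Boolean.getBoolean("wsit.skipPayloadDigestVerification")) {
        throw new WebServiceException(xe);
    }
}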
The chain of projects, from the smallest one where I made the change up to the one I include in my own project as the Metro implementation, is approximately as follows:
WS-SX Implementation is referenced in ->
WS-Security Project is referenced in ->
Metro Web Services Interoperability Technology Implementation Bundle (wsit-impl) is referenced in ->
Metro Web Services Runtime non-OSGi Bundle (webservices-rt) included in my client
I am new to using Alamofire for Swift. I tried reading the documentation, but it didn't help.
I am making a request like
Alamofire.request("http:json").responseJSON
and I discovered that it works and returns a response even when the phone is offline. If I'm not mistaken, the response is saved in the cache.
How long will this response stay in the cache for the user to use offline?
Should I store the response as a preference?
Thanks for the help.
You are right, Alamofire caches your response.
However, I don't think there's a way to know when your response will be evicted from the cache, as there are many variables for the system to consider (disk space, for example). You may use a custom caching policy if you think it's right for your case.
I wouldn't count on the default caching policy to keep files around for offline usage, and implementing a custom policy feels wrong for that case. So if you really need your files offline, I would recommend a different approach.
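One simple option is to persist the data yourself and fall back to it when offline. A rough sketch, assuming Alamofire 4 to match the call in the question (the URL and file name are arbitrary):

import Alamofire
import Foundation

let offlineURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("lastResponse.json")

Alamofire.request("https://example.com/data.json").responseData { response in
    if let data = response.data {
        // Save the fresh response for later offline use.
        try? data.write(to: offlineURL)
    } else if let cached = try? Data(contentsOf: offlineURL) {
        // No network: fall back to the last saved copy.
        print("Using offline copy, \(cached.count) bytes")
    }
}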
Take a look at URLCache, which is what Alamofire uses for response caching:
"Response Caching is handled on the system framework level by URLCache. It provides a composite in-memory and on-disk cache and lets you manipulate the sizes of both the in-memory and on-disk portions." (from the Alamofire documentation)
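If you do decide to take control of the cache, a minimal sketch (again assuming Alamofire 4; the sizes and the cache policy are only examples) would look something like this:

import Alamofire
import Foundation

// Give URLCache explicit in-memory and on-disk limits.
let cache = URLCache(memoryCapacity: 20 * 1024 * 1024,  // 20 MB in memory
                     diskCapacity: 100 * 1024 * 1024,   // 100 MB on disk
                     diskPath: nil)

let configuration = URLSessionConfiguration.default
configuration.urlCache = cache
// Ignore cached data entirely if you never want a stale offline response.
configuration.requestCachePolicy = .reloadIgnoringLocalCacheData

// Keep a strong reference to the manager for as long as you use it.
let sessionManager = SessionManager(configuration: configuration)

sessionManager.request("https://example.com/data.json").responseJSON { response in
    debugPrint(response)
}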
Hi, I'm trying to build a Tornado server that receives very large binary files (~1 GB) in the POST body. The following code works for small files, but does not respond if I try to send large files (~100 MB).
class ReceiveLogs(tornado.web.RequestHandler):
    def post(self):
        file1 = self.request.body
        output_file = open('./output.zip', 'wb')
        output_file.write(file1)
        output_file.close()
        self.finish("file is uploaded")
Do you know any solutions?
I don't have a real implementation as an answer, but one or two remarks that hopefully point in the right direction.
First of all, there is a 100 MB upload limit, which can be increased by setting
self.request.connection.set_max_body_size(size)
in the initialization of the request handler (taken from this answer). A sketch that combines this call with stream_request_body is included after the list further down.
The problem is that Tornado handles all file uploads in memory (and that HTTP is not a very reliable protocol for handling large file uploads).
This is a quote from a member of the tornadoweb team from 2014 (see the GitHub issue here):
... You can adjust this limit with the max_buffer_size argument to the
HTTPServer constructor, although I don't think it would be a good idea
to set this larger than say 100MB.
Tornado does not currently support very large file uploads. Better
support is coming (#1021) and the nginx upload module is a popular
workaround in the meantime. However, I would advise against doing 1GB+
uploads in a single HTTP POST in any case, because HTTP alone does not
have good support for resuming a partially-completed upload (in
addition to the aforementioned error problem). Consider a multi-step
upload process like Dropbox's chunked_upload and commit_chunked_upload
(https://www.dropbox.com/developers/core/docs#chunked-upload)
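For reference, these are the server-level knobs the quote refers to; the values are illustrative, max_buffer_size caps how much Tornado buffers in memory and max_body_size is the hard limit on the request body:

import tornado.httpserver
import tornado.web

# ReceiveLogs is the handler from the question above.
app = tornado.web.Application([(r"/upload", ReceiveLogs)])
server = tornado.httpserver.HTTPServer(
    app,
    max_buffer_size=100 * 1024 * 1024,  # in-memory buffering limit
    max_body_size=1024 * 1024 * 1024,   # absolute request-body limit
)
server.listen(8888)
# Note: without @stream_request_body the whole body still has to fit in the
# in-memory buffer, so for 1 GB uploads see the streaming sketch below.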
As stated, I would recommend doing one of the following:
If NGINX can be used to handle and route requests to Tornado, look at the NGINX upload module (see the nginx wiki here).
If it must be a plain Tornado solution, use tornado.web.stream_request_body, which came with Tornado 4. This lets you stream the uploaded file to disk instead of first collecting it all in memory (see the Tornado 4 release notes and this solution on Stack Overflow); a minimal sketch follows below.
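A minimal sketch of that second option (the URL pattern, output path, and the 1 GB limit are illustrative):

import tornado.ioloop
import tornado.web


@tornado.web.stream_request_body
class StreamingReceiveLogs(tornado.web.RequestHandler):
    def prepare(self):
        # With stream_request_body, prepare() runs once the headers are in,
        # before the body, so the limit can be raised here as mentioned above.
        self.request.connection.set_max_body_size(1024 * 1024 * 1024)  # 1 GB
        self.output_file = open("./output.zip", "wb")

    def data_received(self, chunk):
        # Called repeatedly as body chunks arrive; write them straight to disk.
        self.output_file.write(chunk)

    def post(self):
        self.output_file.close()
        self.finish("file is uploaded")


if __name__ == "__main__":
    app = tornado.web.Application([(r"/upload", StreamingReceiveLogs)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()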
I have a large binary file (a log file) that I want to upload to a server using a PUT request. The reason I chose PUT is simply that I can use it to either create a new resource or update an existing one.
My problem is how to handle the situation where a server or network disruption happens during the PUT request.
That is, say I have a huge file and a network failure happens during its transfer. When the network resumes, I don't want to restart the entire upload. How would I handle this?
I am using JAX-RS API with RESTeasy implementation.
Some people use the Content-Range header to achieve this, but many people (like Mark Nottingham) state that this is not legal for requests. Please read the comments on this answer.
Besides, there is no built-in support in JAX-RS for this scenario.
If broken PUT requests really are a recurring problem for you, I would simply let the client slice the file:
PUT /logs/{id}/1
PUT /logs/{id}/2
PUT /logs/{id}/3
GET /logs/{id} would then return the aggregation of all successfully submitted slices.
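A rough sketch of how the receiving side could look with JAX-RS/RESTEasy (paths, media type, and the storage directory are illustrative; GET /logs/{id} would stream back the stored slices in order):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import javax.ws.rs.Consumes;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/logs/{id}")
public class LogSliceResource {

    private static final String BASE_DIR = "/tmp/log-slices"; // illustrative

    @PUT
    @Path("{sliceNo}")
    @Consumes(MediaType.APPLICATION_OCTET_STREAM)
    public Response putSlice(@PathParam("id") String id,
                             @PathParam("sliceNo") int sliceNo,
                             InputStream body) throws IOException {
        java.nio.file.Path dir = Paths.get(BASE_DIR, id);
        Files.createDirectories(dir);
        // PUT is idempotent: re-sending a failed slice simply overwrites it.
        Files.copy(body, dir.resolve(String.valueOf(sliceNo)),
                   StandardCopyOption.REPLACE_EXISTING);
        return Response.noContent().build();
    }
}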
In Fiddler, is there any way of knowing whether some resource (JavaScript, jQuery, CSS) has been loaded from the local cache or downloaded from the server? I think this may be represented by different colors in the Web Sessions list, but I wasn't able to find a legend for these colors.
If you see 304 Not Modified responses, those mean that the client made a conditional request and the server is signalling "no need to download, you already have the newest version cached". That's one "class" of cached responses.
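For example, such a revalidation exchange looks roughly like this (host, resource, and validator values are made up):

GET /static/site.css HTTP/1.1
Host: example.com
If-None-Match: "abc123"
If-Modified-Since: Tue, 01 Mar 2016 10:00:00 GMT

HTTP/1.1 304 Not Modified
ETag: "abc123"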
However, for some entities not even a conditional request is sent (the Expires header is in the future, etc.; see RFC 2616). Those will not show up in Fiddler at all, as there is no request at all: the client may assume that the cached version is still fresh.
What you can certainly see are the non-cached resources - anything coming back with a response code from the 2xx range should be non-cached (unless there's a seriously misconfigured caching proxy upstream, but those are rare nowadays).
You could clear your caches, and open the page. Save those results. Then open the page again - see what's missing when compared to the first load; those are cached.
Fiddler is an HTTP proxy, so it does not show cached content at all.