RPC not working in close handler - GWT

I'm facing a weird problem in GWT. I generate an Excel file on the server side for users to download, but after the download the file should be deleted.
I have put logic on the server side to delete it on two occasions: when the user logs out, and when the browser is closed.
When the user logs out it works perfectly, as there is enough time to make a call to the server, whereas in the close handler the connection is lost and the file remains as it is,
i.e. the method on the server side does not get executed.
I tried to find another way to call the method directly, by importing the package and inheriting it in gwt.xml, but an error was thrown at compile time, and rightly so, since server-side code can't be inherited.
Please get me out of this.
Thanks in advance.

But after the download the file should get deleted. I have put logic to delete it on server-Side on 2 occasions.
This has nothing to do with the client. I don't know exactly how your program works, but generally it should work like this:
Client makes a request
Servlet generates the bytes (Is there really a need to store the bytes in a file?)
and sends them to the client
And that's it.
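If the Excel content can be built in memory, a minimal sketch looks like the following (assuming a plain HttpServlet and a hypothetical generateExcelBytes() helper, since the actual server code isn't shown). There is never a file on disk, so there is nothing to clean up on logout or browser close:

    // Sketch: build the report in memory and stream it straight into the response.
    public class ExcelDownloadServlet extends javax.servlet.http.HttpServlet {
        @Override
        protected void doGet(javax.servlet.http.HttpServletRequest req,
                             javax.servlet.http.HttpServletResponse resp)
                throws java.io.IOException {
            byte[] excel = generateExcelBytes(req.getParameter("reportId")); // hypothetical helper
            resp.setContentType("application/vnd.ms-excel");
            resp.setHeader("Content-Disposition", "attachment; filename=\"report.xls\"");
            resp.setContentLength(excel.length);
            resp.getOutputStream().write(excel);
        }

        private byte[] generateExcelBytes(String reportId) {
            // stand-in for the existing generation logic (POI, JExcelApi, ...)
            return new byte[0];
        }
    }

If a temporary file really is unavoidable, delete it in a finally block right after streaming it into the response, rather than waiting for a logout or close event that may never reach the server.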

Related

REST file download that takes 5 minutes to complete

One of the API calls to my Web API 2 server from my Angular client dynamically generates an XLSX file via a ton of SQL queries and processing. That can take up to five minutes to generate all the data and return it via a file download to the client. Obviously that's bad because Chrome shows an error by then even though the page is still loading.
It feels like this is where I'd use a status code 202 to tell the client that it got the request, but I'm not sure after that how to actually send the file back to the client then.
The only thing I can think of is that the server spawns a background task that will write the file to a specific temp location, after it's been created, and then another API call will download that file if it exists and delete it from the temp location.
Is that what I do and then just have the client poll periodically for that file? Pre-generating the file isn't an option as it has to have realtime data (at the point of request of course).
It feels like this is where I'd use a status code 202 to tell the client that it got the request, but I'm not sure after that how to actually send the file back to the client then.
Usually an HTTP 202 comes with a Location header and an indication of where and when the resource will be available.
Another possibility is to add a link to a status monitor, as described here.
To achieve this, you could generate an id for the export process and use it in the Location header URL to point to the result.
The client can then fetch that resource once it is ready. This means you would need some short-term persistence.
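The question is about Web API 2, but the exchange itself is framework-agnostic. As a rough, purely illustrative sketch (servlet-style Java; names like startBackgroundExport and resultFileFor are made up), the 202-then-poll pattern could look like this:

    // POST /exports        -> 202 Accepted, Location: /exports/{id}
    // GET  /exports/{id}   -> 202 while pending, 200 with the file once ready
    public class ExportServlet extends javax.servlet.http.HttpServlet {

        @Override
        protected void doPost(javax.servlet.http.HttpServletRequest req,
                              javax.servlet.http.HttpServletResponse resp) {
            String id = java.util.UUID.randomUUID().toString();
            startBackgroundExport(id);                        // spawn the slow XLSX build
            resp.setStatus(202);                              // Accepted
            resp.setHeader("Location", "/exports/" + id);     // where the client should poll
        }

        @Override
        protected void doGet(javax.servlet.http.HttpServletRequest req,
                             javax.servlet.http.HttpServletResponse resp)
                throws java.io.IOException {
            String id = req.getPathInfo().substring(1);
            java.io.File f = resultFileFor(id);               // short-term persistence
            if (f == null || !f.exists()) {
                resp.setStatus(202);                          // still working
                return;
            }
            resp.setContentType("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
            java.nio.file.Files.copy(f.toPath(), resp.getOutputStream());
            f.delete();                                       // one-shot download
        }

        private void startBackgroundExport(String id) { /* queue the job */ }
        private java.io.File resultFileFor(String id) { /* look up temp location */ return null; }
    }

The Angular client would POST to start the export, read the Location header from the 202 response, and poll that URL until it gets a 200 with the file.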

Beginner GXT issues

We have a working web application, developed with ExtJS on the client side and Struts, Spring, and Hibernate on the server side. Now we are considering migrating to GXT (or maybe GWT itself). The thing is, I'm very new to GWT/GXT, and we are trying to decide whether to go down this road or not.
1) Until now we have had two domains for our web app: one that the application (Struts + ...) is deployed to, and another that is mainly a cookie-less custom CDN. The traffic between client and server is mostly XHR requests, sending/receiving JSON and/or JSONP. But with the new approach ahead of us, I've come to understand that we are supposed to have only ONE domain for the whole GXT application. Is that correct, or have I forgotten to consider something here?
And if not, is it possible to deploy just part of the application (i.e. com.ourcompany.webapp.gxt.server.*) to the main server, and the content compiled and generated by the GWT compiler to the other, CDN-like domain?
2) The other big issue we are facing is that the current application consists of mostly three huge modules. One is responsible for "SignIn", another for "Webtop", and the third is "the modules each user has access to". The latter is generated on the server according to the access rights of each user, and obviously can differ from one user to the other.
The only thing I could find on this matter that might be related is code splitting, although I'm not totally sure it would be the right solution for this.
We want the application, on startup, to check whether the user is logged in. If not, it loads the SignIn set of JavaScript files (i.e. webapp.signin.nocache.js); then, after the user has entered the correct username/password, it unloads the signin file and loads webtop.nocache.js AND modules.nocache.js.
I would really appreciate it if you could help me out.
1) If your GWT app is loaded from a different domain, then you have to face the same-origin policy: you cannot make an XHR to a different domain. You could use ScriptTagProxy (a JSONP-style workaround) to get around this, but it does not feel very natural.
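In plain GWT the equivalent is JsonpRequestBuilder. Here is a minimal sketch (the endpoint URL and the UserData overlay type are invented for illustration):

    // imports: com.google.gwt.core.client.GWT, com.google.gwt.core.client.JavaScriptObject,
    //          com.google.gwt.jsonp.client.JsonpRequestBuilder, com.google.gwt.user.client.rpc.AsyncCallback
    class UserData extends JavaScriptObject {
        protected UserData() {}
        public final native String getName() /*-{ return this.name; }-*/;
    }

    void loadUser() {
        JsonpRequestBuilder jsonp = new JsonpRequestBuilder();
        jsonp.requestObject("http://cdn.example.com/api/user", new AsyncCallback<UserData>() {
            public void onFailure(Throwable caught) {
                GWT.log("JSONP call failed", caught);
            }
            public void onSuccess(UserData result) {
                GWT.log("Loaded user: " + result.getName());
            }
        });
    }

Note that the server on the other domain has to wrap its response in the callback parameter that JsonpRequestBuilder appends to the URL.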
2) You can use code splitting in order to load a particular part of your application dynamically. All you have to do is wrap your split point in an async call, as in the sketch below.
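A minimal GWT.runAsync sketch (WebtopModule is a made-up stand-in for whatever the split point should load):

    // Everything reachable only from onSuccess() can be placed by the compiler
    // into a separately downloaded fragment.
    GWT.runAsync(new RunAsyncCallback() {
        public void onFailure(Throwable reason) {
            Window.alert("Failed to load the Webtop module: " + reason);
        }
        public void onSuccess() {
            new WebtopModule().show();   // hypothetical entry point of the big module
        }
    });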
A detailed compile report gives you a pretty good overview of how well code splitting is working.
But code splitting does not unload already loaded code. If it is really important to do so, you have to redirect the user to another URL in order to load the appropriate user-dependent module.
Once JavaScript code has been loaded and executed, it is impossible to remove it from the browser's memory.
Greetings,
Peter

Perl application move causing my head to explode...please help

I'm attempting to move a web app we have (written in Perl) from an IIS6 server to an IIS7.5 server.
Everything seems to be parsing correctly, I'm just having some issues getting the app to actually work.
The app is basically a couple forms. You fill the first one out, click submit, it presents you with another form based on what checkboxes you selected (using includes and such).
I can get past the first form once... but then after that it stops working and pops up the generated error message. After looking into the code and such, it basically states that there aren't any checkboxes selected.
I know the app writes data into .dat files... (at what point, I'm not sure yet), but I don't see those being created. I've looked at file/directory permissions and seemingly I have MORE permissions on the new server than I did on the last. The user/group for the files/dirs are different though...
Would that have anything to do with it? Why would it pass me on to the next form, displaying the correct "modules" I checked the first time and then not any other time after that? (it seems to reset itself after a while)
I know this is complicated so if you have any questions for me, please ask and I'll answer to the best of my ability :).
Btw, total idiot when it comes to Perl.
EDIT AGAIN
I've removed the source as to not reveal any security vulnerabilities... Thanks for pointing that out.
I'm not sure what else to do to show exactly what's going on with this though :(.
I'd recommend verifying, step by step, that what you think is happening is really happening. Start by watching the HTTP request from your browser to the web server - are the arguments your second perl script expects actually being passed to the server? If not, you'll need to fix the first script.
(start edit)
There's lots of tools to watch the network traffic.
Wireshark will read the traffic as it passes over the network (you can run it on the sending or receiving system, or any system on the collision domain).
You can use a proxy server, like WebScarab (free), Burp, Paros, etc. You'll have to configure your browser to send traffic to the proxy server, which will then forward the requests to the server. These particular servers are intended to aid testing, in that you'll be able to mess with the requests as they go by (and much more).
As Sinan indicates, you can use browser add-ons like Firefox's LiveHTTPHeaders or Tamper Data, or Internet Explorer's developer kit (IIRC)
(end edit)
Next, you should print out all CGI arguments that the second perl script receives. That way, you'll know what the script really thinks it gets.
Then, you can enable verbose logging in IIS, so that it logs the full HTTP request.
This will get you closer to the source of the problem - you'll know if it's (a) the first script not creating correct HTML, resulting in an incomplete HTTP request from the browser, (b) the IIS server not receiving the CGI arguments for some odd reason, or (c) the arguments aren't getting from the IIS server and into the perl script (or, possibly, that the perl script is not correctly accessing the arguments).
Good luck!
What you need to do is clear.
There is a lot of weird excess baggage in the script. There seemed to be no subroutines, just one long series of commands with global variables.
It is time to start refactoring.
Get one thing running at a time.
I saw HTML::Template there but you still had raw HTML mixed in with code. Separate code from presentation.

Client-Server Applications for iPhone

I have a question regarding client-server applications.
1) Is it necessary to load the database directly into the application?
Suppose I have a DB in the back end and my application has to connect to that DB and display the results in the view; do I need to add the DB to the application directly?
2) Can we access any DB or a file on a remote server and show the required results (without adding that particular DB or file to the application directly)? How can we do this?
I saw a similar question on Stack Overflow; one answer was to use a plist, but I am new to this. I have been browsing the net but am not able to get clear results. I lost many of my interviews because of this question.
Thanks,
1) Is it necessary to load the database directly into the application? Suppose I have a DB in the back end and my application has to connect to that DB and display the results in the view; do I need to add the DB to the application directly?
I'm not sure I understand this question. No, you don't need to load a database directly into a client in a client-server architecture. Normally, when I think of a design where a server has a database, I imagine there's some kind of way for the client to query the server for information. Perhaps it's making HTTP requests, which the server parses into a query, runs the query, and then returns the results (perhaps in XML form?).
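The idea is the same in any language. As a rough sketch (the endpoint is made up, and on the iPhone you would use NSURLConnection/NSURLSession rather than Java), the client only speaks HTTP and never opens the database itself:

    // The client never touches the DB directly; it asks the server over HTTP
    // and parses whatever the server sends back (XML, JSON, ...).
    String fetchProducts() throws java.io.IOException {
        java.net.URL url = new java.net.URL("http://example.com/products?category=books"); // made-up endpoint
        java.net.HttpURLConnection conn = (java.net.HttpURLConnection) url.openConnection();
        StringBuilder body = new StringBuilder();
        try (java.io.BufferedReader in = new java.io.BufferedReader(
                new java.io.InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');   // results of the query the server ran on its DB
            }
        }
        return body.toString();
    }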
2) Can we access any DB or a file on a remote server and show the required results (without adding that particular DB or file to the application directly)? How can we do this?
Are you asking if it's possible, in general, to access a server database from a client? Yes, of course. (See above, re: HTTP Requests).
Any arbitrary file? That depends on how the server is set up. Again, HTTP is one protocol that works that way; if you send an HTTP query like "GET someimage.png HTTP/1.0", the server could just be grabbing the whole file someimage.png and sending it back in the response. (Technically, it's not necessarily snarfing a whole file -- it could be creating that PNG dynamically, since there's nothing in the HTTP protocol that says it must be sending an existing file -- but that's outside the scope of your question.)
I lost many of my interviews because of this question.
Not to sound too snarky, but interviews are often won and lost not because you don't know the answer, but because you can't communicate effectively. You haven't phrased your question(s) here particularly well.

Is there any way to allow failed uploads to resume with a Perl CGI script?

The application is simple: an HTML form that posts to a Perl script. The problem is we sometimes have our customers upload very large files (> 500 MB) and their internet connections can be unreliable at times.
Is there any way to resume a failed transfer like in WinSCP or is this something that can't be done without support for it in the client?
AFAIK, it must be supported by the client. Basically, the client and the server need to negotiate which parts of the file (likely defined as parts in "multipart/form-data" POST) have already been uploaded, and then the server code needs to be able to merge newly uploaded data with existing one.
The best solution is to have custom uploader code, usually implemented in Java, though I think this may be possible in Flash as well. You might even be able to do this via JavaScript - see the two links with examples below.
Here's an example of how Google did it with YouTube: http://code.google.com/apis/youtube/2.0/developers_guide_protocol_resumable_uploads.html
It uses "308 Resume Incomplete" HTTP response which sends range: bytes=0-408 header from the server to indicate what was already uploaded.
For additional ideas on the topic:
http://code.google.com/p/gears/wiki/ResumableHttpRequestsProposal
Someone implemented this using Google Gears on the client side and PHP on the server side (the latter you can easily port to Perl):
http://michaelshadle.com/2008/11/26/updates-on-the-http-file-upload-front/
http://michaelshadle.com/2008/12/03/updates-on-the-http-file-upload-front-part-2/
It's a shame that your clients can't use FTP uploading, since FTP already includes resume ability. There is also "chunked transfer encoding" in HTTP; I don't know which Perl modules might already support it.