GWT Logging: Send server output to Client

I'm working with GWT's Logging mechanism to show:
Client logging output in the client's browser (using com.google.gwt.logging.client.HasWidgetsLogHandler)
Client logging output in the server log (using com.google.gwt.logging.server.RemoteLoggingServiceImpl)
Server logging output in the server log (using java.util.logging.*)
Is it reasonable and possible to show server log in a client debug component?
Would you advise sending server logging to the client instead of using an extra "tool" to access the server log? What would be a comfortable implementation for detached server logging?

On the server side, you can use any tool you like, obviously. On the client side, GWT provides all the wonderful options you mentioned, including the RemoteLogger, which lets you log what is going on in the client to the server (not something you would want to do in production, but it may be helpful for debugging).
It's hard to understand why you would need logs to go the other way, i.e. from server to client. Maybe you don't have access to the server? But then how are you going to work on the GWT code, which lives on the server? It just doesn't add up: if you have access to client logging (be it hosted-mode GWT.log("") messages, production-mode java.util.logging, or even the remote logger), and you have your server logs, you already have the whole picture!
In my opinion, the answer to your question:
Is it reasonable and possible to show server log in a client debug component?
is simple:
No, it is not reasonable.
However, if you really must, do it using GWT's RPC mechanism, which allows you to send almost anything at all to the client (within GWT's serialization limits, of course), including log messages.
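If you go that route, the server side needs something that collects log records for the RPC call to return. Here is a minimal sketch using plain java.util.logging; the class name, the buffer size, and the idea of a fetchServerLog() RPC method are illustrative assumptions, not GWT API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Hypothetical server-side handler: keeps the last N log lines in memory so a
// GWT-RPC service method (e.g. fetchServerLog()) could hand them to the client.
public class BufferedLogHandler extends Handler {
    private final int capacity;
    private final Deque<String> lines = new ArrayDeque<>();

    public BufferedLogHandler(int capacity) {
        this.capacity = capacity;
    }

    @Override
    public synchronized void publish(LogRecord record) {
        if (!isLoggable(record)) return;
        if (lines.size() == capacity) lines.removeFirst(); // drop the oldest line
        lines.addLast(record.getLevel() + ": " + record.getMessage());
    }

    // What the RPC servlet would return to the client widget.
    public synchronized List<String> snapshot() {
        return new ArrayList<>(lines);
    }

    @Override public void flush() {}
    @Override public void close() {}

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false);
        BufferedLogHandler h = new BufferedLogHandler(2);
        log.addHandler(h);
        log.info("first");
        log.info("second");
        log.warning("third"); // evicts "first"
        System.out.println(h.snapshot()); // [INFO: second, WARNING: third]
    }
}
```

An RPC servlet method would just return snapshot(), and the client-side debug widget would render the lines.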

Related

Loopback.io backup server and server to server replication

I am thinking of adopting LoopBack.io to create a REST API. I may need the following approach: an inTERnet server (run by me) to which clients connect, plus a fallback inTRAnet server to which clients connect only when the internet connection is down. This secondary fallback server should then replicate data to the main server once the internet connection is up and running again. As the clients are on the same inTRAnet, they should be able to switch automatically to the fallback server. Is this possible as an idea, and if so, what do you recommend I start digging into?
Thank you all!
Matteo
Simon from my other account. I believe what you want is possible as you can use whatever client side technology you want with LoopBack. As for easy solutions, I'm not familiar enough with Cordova to give any insight there.
It is definitely possible, but I suggest going through the getting-started tutorial first. You'd probably create two application servers and have another proxy in front to route the requests to server A or B based on a heartbeat from the main server. You would have to code all the logic and set up the infrastructure yourself, though.
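The client-side switching itself can stay simple: probe servers in preference order and take the first healthy one. A sketch (Java for illustration; the URLs and the health check are placeholders, and in practice the probe would hit a heartbeat endpoint over HTTP):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Hypothetical client-side failover: try the internet server first, then the
// intranet fallback. The isUp predicate stands in for a real heartbeat probe.
public class ServerPicker {
    public static Optional<String> pick(List<String> servers, Predicate<String> isUp) {
        return servers.stream().filter(isUp).findFirst();
    }

    public static void main(String[] args) {
        List<String> servers = List.of("https://api.example.com", "http://fallback.local");
        // Pretend the internet link is down: only the intranet host responds.
        Optional<String> chosen = pick(servers, s -> s.startsWith("http://"));
        System.out.println(chosen.orElse("no server reachable")); // http://fallback.local
    }
}
```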

Handling authentication with Apache reverse proxy for plack/PSGI app

This is my scenario:
So,
Requests via encrypted HTTPS go to Apache like: https://server1/MyPerlApp
If the user is not logged in, they get a redirect to some login page (in the server1), and Apache doesn't proxy the request to Server2
When the user is logged in (IS authenticated), Apache forwards all requests coming to https://server1/MyPerlApp to http://server2:5000
Question 1: Is this possible? (I'm asking because I don't know Apache deeply enough, and this is not a simple:
ProxyPass /MyPerlApp http://server2:5000/
because I need to authenticate the user at server1 and enable the ProxyPass only if the user is authenticated.)
Since Apache is quite flexible, I assume the answer to the above is yes (but confirmation and details are very welcome), so here are my main specific questions:
How will my Plack application know what user is authenticated at the Apache level (i.e. on the 1st server)?
what is an easy way to deliver some of the user info to the Perl app on server2? e.g. with Apache's mod_rewrite, which appends a user=username parameter to each query,
can Apache set some HTTP headers that my Perl app could read?
is there an easy and recommended way?
I'm looking for how to avoid authentication routines in my Starman/Perl app, mainly because:
the user needs to log in to server1 anyway (for other tasks in their workflow)
if they are already logged in, authentication in my app is not needed (avoiding an unnecessary double login)
but I still need to know which users are logged in (via Apache at server1)
There are already similar questions, but:
https://stackoverflow.com/q/12561830/734304 (no answer)
https://stackoverflow.com/q/11907797/734304 (no answer)
Apache reverse proxy with basic authentication (similar, but the backend is in the same server and same apache)
[I think you asked four questions here. Some of them overlap. I will try to answer as many as I can, then edit your question to make it a bit clearer. It might be helpful to post your current Apache httpd.conf so people can see how you are handling access and authentication currently. That way you might get better suggestions on how to integrate the proxied application(s) with your Apache instance.]
Setting up a front-end that can handle "Web Site Single Sign On" requires some planning and configuration but it is worth the effort. To make this easier, you'll want to use Apache-2.4. You probably are using this version, but Apache has become something of a workhorse, such that some sites update it much less frequently than in the past. Apache 2.4 includes mod_session and mod_auth_form which make it possible to set up form-based "web portal Single Sign On" sorts of tools with Apache for sites with multiple back-end application servers (often running on separate machine ports or sockets) combined under one outward facing set of URL/URIs. This pattern of use was so widespread with Apache that the 2.4 release added features to make it easier to do.
You asked about an "easy recommended" way to do what you have described. Well, you are on the right track. Apache's httpd is really useful for this kind of authentication/authorization and "user login" sort of application - so much so that it's become a staple tool for what you are trying to do.
You asked how to "deliver the user information" to the back-end server. You do that the same way you handle state in any web application: with sessions and cookies. Session information contains key/value pairs encoded as an application/x-www-form-urlencoded string. You can also create an HTTP_SESSION environment value that your back-end application can read. Your Plack/Starman application has to be able to handle sessions and cookies (i.e. it has to be "session aware") if you want to use them there, of course. Look at Plack::Middleware::Session for ideas on how to approach this.
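For illustration, decoding such an application/x-www-form-urlencoded session string is a few lines of code. This sketch is in Java; the Plack app would do the equivalent in Perl, and the key names here are made up:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: decode the application/x-www-form-urlencoded session string that
// mod_session can expose (e.g. via SessionHeader) into key/value pairs.
public class SessionDecoder {
    public static Map<String, String> decode(String session) {
        Map<String, String> out = new LinkedHashMap<>();
        if (session == null || session.isEmpty()) return out;
        for (String pair : session.split("&")) {
            int eq = pair.indexOf('=');
            String k = eq < 0 ? pair : pair.substring(0, eq);
            String v = eq < 0 ? "" : pair.substring(eq + 1);
            out.put(URLDecoder.decode(k, StandardCharsets.UTF_8),
                    URLDecoder.decode(v, StandardCharsets.UTF_8));
        }
        return out;
    }

    public static void main(String[] args) {
        // Illustrative keys; whatever Apache stored in the session shows up here.
        System.out.println(decode("user=alice&role=admin%2Fops"));
    }
}
```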
For sure, setting up authentication with mod_auth_form is more complicated than Basic authentication. But with form-based logins, JavaScript can be used (judiciously), clients can store form information locally for quick logins, and forms are flexible: they can gather more data, present more information to the user, and some of the complexity (redirection after authentication) can be handled by Apache. Since they are just an HTML <form>, you can start simply and make them more elaborate as your site grows. That said, you can also have an Apache reverse proxy simply provide Basic Auth for your back-end.
Without seeing more details about your installation I can't say how/why you might need mod_rewrite per se, but Rewrite directives can play nicely with ProxyPass. Of course, throughout your site you'd want to check for authentication and session information and redirect users to a login form where/when necessary. Using mod_auth_form makes this easier to implement at the cost of a somewhat more complicated configuration. As for the reverse proxy itself, you'd use ProxyPass in the normal way to pass requests to your back end:
ProxyPass /app http://[starmanhost]:3000/
Then you need to configure or tweak your current Apache setup to have Session On and to require authentication for the URLs in question (unless the entire / requires authentication), in the standard Apache way:
<Location /app>
AuthType Basic
Session On
SessionCookieName session path=/
...
require valid-user
</Location>
etc. As the Apache docs point out (and you'll want to read mod_session, mod_proxy among others), you can pass session information around for use by back-end applications.
If the SessionHeader directive is used to define an HTTP request
header, the session, encoded as a application/x-www-form-urlencoded
string, will be made available to the application.
From the mod_session documentation, "Integrating Sessions with External Applications".
For privacy/security you'll want to use mod_session_crypto and SSL if that's possible. As you note you will not need encryption to be "end to end" (i.e. HTTPS from client to outward facing front-end and between the reverse proxy and back-end applications) but if outside connections are https:// and you keep session information on the server (using mod_session_dbd as another response noted) using encrypted storage, you can avoid obvious threats inherent in sharing user session information across servers. The best part of this is you can add these layers one by one without having to modify your back-end applications extensively. This is the advantage of creating a solid "WebSSO server" front-end to handle logins.
Note that I've been using the term WebSSO here a bit loosely. Strictly speaking, WebSSO (and SSO) are much broader and more encompassing concepts with their own standards tracks and technologies (there are a couple Apache projects focused on this). This is why I tend to call the approach you are trying "Web Site SSO". Support for a wide range of authentication, programming language modules, proxying, and rewriting makes Apache's httpd the "swiss army knife/duct tape" of choice for handling logins and sessions in this way.
Your rationale for doing this is sound, since you can avoid extra logins and confusing users (and their browsers). As well, by decoupling the authentication steps from your application and dedicating that task to Apache, you make it easier for developers to write back-end applications. Your question is very general, though. I think you can start to try out some of the suggestions that appear here, and if you run into problems you can follow up with more specific questions focused on your implementation.
Get the Apache bits working correctly first (Session On, ProxyPass, <Location /app>) and make sure the right information is getting created, stored, and passed on by the front-end. This will be very useful for lots of things going forward. Apache gurus can help here. Once you have the proper session information being passed to your back-end, you can ask questions about how to access and use it in your Perl code with Starman and Plack. There may be missing or rough bits in the tools and documentation, but lots of sites want to do what you have described, so these things will appear and continue to improve. Good luck.
References
A Gentle Introduction to Plack Sessions
Deploy Catalyst Applications with Starman and Apache
Using Apache mod_auth_form
Authentication in Apache2.4 using mod_auth_form and mod_dbd
Reverse proxying behind Apache
Apache's mod_session looks to be the component you are missing. Since the proxy is the gateway to the applications in the back-end, it can handle the authentication on the HTTP layer and pass back sessions as needed to the Perl script using the proxy entry.
Exposing the user information to the Perl application can happen in a few ways.
mod_session_dbd - is a module to store session information in a database. This could then be shared with the back-end server hosting the Perl application.
mod_session_cookie - is a module to store session information in a cookie on the browser of the client. Session variables would be stored in the cookie and the Perl application would retrieve them.
But, cookies or putting session variables in the URL open up security concerns. Cookies and headers can be modified.
mod_proxy should pass the session variables through to the back-end applications along with the request.
http://httpd.apache.org/docs/trunk/mod/mod_session.html

Force reload client side code on server startup

I am building an intranet application with GWT, gilead and Hibernate, and Tomcat.
As I am actively calibrating the application based on the users' feedback, I have to push out changes and restart Tomcat quite often. I was wondering how I can seamlessly make these changes available to the client side. (For the moment I always ask the users to refresh after I restart Tomcat.)
Since the application is client-side (js) based, the client has the application code. Imagine the scenario where he has the application open, and I upload a new version and restart. After the restart, the user can perfectly go on using the application as he has the page open, but he is executing the old code. How can I make the client aware of the new code? I guess just invalidating the session and redirecting the user to the login page won't do that, as the js code won't be refreshed.
Any ideas?
As you mention the application will continue to work after the server side update. Even calls to the new version on the server might work if the interface was not changed.
A possible solution is to have a comet connection open, for example with Atmosphere for GWT. At any time when you have deployed a new version you broadcast an event to the active clients. Clients applications will receive this broadcast and act on it, prompting the user to refresh. You could also use this mechanism to pass messages to active clients, like upcoming server maintenance times.
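On the client side, the decision on receiving such a broadcast can be as small as a version comparison. A sketch (the version strings and how they reach the client are assumptions; in a GWT app the loaded version could be compiled in and the announced one delivered over the comet channel):

```java
// Sketch of the client-side decision: compare the version baked into the
// running page against the version announced by the server via comet/poll.
public class VersionGuard {
    public static boolean needsReload(String loadedVersion, String announcedVersion) {
        return announcedVersion != null && !announcedVersion.equals(loadedVersion);
    }

    public static void main(String[] args) {
        // The client was loaded as 1.4.2; the server just announced 1.4.3.
        System.out.println(needsReload("1.4.2", "1.4.3")); // true: prompt the user to refresh
    }
}
```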

See what website the user is visiting in a browser independent way

I am trying to build an application that can inform a user about website specific information whenever they are visiting a website that is present in my database. This must be done in a browser independent way so the user will always see the information when visiting a website (no matter what browser or other tool he or she is using to visit the website).
My first (partially successful) approach was looking at the data packets using the System.Net.Sockets.Socket class etc. Unfortunately, I discovered that this approach only works when the user has administrator rights. And of course, that is not what I want. My goal is that the user can install one relatively simple program that can be used right away.
After this I went looking for alternatives and found a lot about WinPcap and some of its .NET wrappers (did I tell you I am already programming in C# .NET?). But with WinPcap I found out that it must be installed on the user's PC, and there is no way to just reference some DLL files and code away. I already looked at including WinPcap as a prerequisite in my installer, but that is also too cumbersome.
Well, long story short: I want to know in my application what website my user is visiting at the moment it is happening. I think it must be done by looking at the network packets, but I can't find a good solution for this. My application is built in C# .NET (4.0).
You could use Fiddler to monitor Internet traffic.
It is
a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
It's scriptable and can be readily used from .NET.
One simple idea: Instead of monitoring the traffic directly, what about installing a browser extension that sends you the current url of the page. Then you can check if that url is in your database and optionally show the user a message using the browser extension.
This is how extensions like Invisible Hand work... It scans the current page and sends relevant data back to the server for processing. If it finds anything, it uses the browser extension framework to communicate those results back to the user. (Using an alert, or a bar across the top of the window, etc.)
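Whichever way the URL reaches your server, the lookup side can be sketched like this (Java for illustration; the in-memory Set stands in for your database, and stripping the www. prefix is just one possible normalization):

```java
import java.net.URI;
import java.util.Locale;
import java.util.Set;

// Sketch: normalize the URL reported by the extension and test it against the
// set of known hosts.
public class SiteMatcher {
    public static String hostOf(String url) {
        try {
            String h = URI.create(url).getHost();
            if (h == null) return null;
            h = h.toLowerCase(Locale.ROOT);
            return h.startsWith("www.") ? h.substring(4) : h;
        } catch (IllegalArgumentException e) {
            return null; // not a parseable URL
        }
    }

    public static boolean isKnown(String url, Set<String> knownHosts) {
        String h = hostOf(url);
        return h != null && knownHosts.contains(h);
    }

    public static void main(String[] args) {
        Set<String> db = Set.of("example.com", "news.ycombinator.com");
        System.out.println(isKnown("https://www.example.com/page?x=1", db)); // true
    }
}
```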
For a good start, Wireshark will do what you want.
You can specify a filter to isolate and view HTTP streams.
The best part is that Wireshark is open source and built upon another program's API, WinPcap, which is also open source.
I'm guessing this is what you want:
capture network data off the wire
view the TCP traffic of a computer; isolate and save (in part or in whole) HTTP data
store information about the HTTP connections
Number 1 there is easy; you can google for a WinPcap tutorial, or just use some of their sample programs to capture the data.
I recommend you study up on the pcap file format; everything with WinPcap uses this basic format and its structures.
Then you have to learn how to take a TCP stream and turn it into a solid data stream without corruption or disorganized parts.
Again, a very good example can be found in the Wireshark source code.
Then, with your data stream, you can simply read the HTTP format and HTML data, or whatever you're dealing with.
Hope that helps
If the user is cooperating, you could have them set their browser(s) to use a proxy service you provide. This would intercept all web traffic, do whatever you want with it (look up in your database, notify the user, etc), and then pass it on to the original location. Run the proxy on the local system, or on a remote system if that fits your case better.
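The interesting part of such a proxy is reading the request head and extracting the host being visited. A minimal sketch of just that step (Java for illustration; a real proxy would then look the host up in the database, forward the request, and stream the response back):

```java
// Sketch: given the raw request head a browser sends to an HTTP proxy,
// pull out the Host header, i.e. the site the user is visiting.
public class ProxyInspect {
    public static String targetHost(String requestHead) {
        for (String line : requestHead.split("\r\n")) {
            // Header names are case-insensitive per the HTTP spec.
            if (line.regionMatches(true, 0, "Host:", 0, 5)) {
                return line.substring(5).trim();
            }
        }
        return null; // no Host header (e.g. HTTP/1.0 request)
    }

    public static void main(String[] args) {
        String head = "GET /index.html HTTP/1.1\r\nHost: example.com\r\nUser-Agent: demo\r\n\r\n";
        System.out.println(targetHost(head)); // example.com
    }
}
```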
If the user is not cooperating, or you don't want to make them change their browser settings, you could use one of the packet-sniffing solutions, such as Fiddler.
A simple, straightforward way is to change the computer's DNS settings to point to your application.
This will cause all DNS traffic to pass through your app, where it can be sniffed and then redirected to the real DNS server.
It will also save you the hassle of filtering out eMule/torrent traffic, as that normally works with pure IP addresses (which also might be a problem, as your app can be circumvented by browsing with an IP address directly).
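For the DNS idea, the forwarder would read the queried name out of each UDP packet before relaying it. A sketch of just that parsing step (Java for illustration; it ignores compression pointers and assumes a single well-formed question):

```java
import java.nio.charset.StandardCharsets;

// Sketch: extract the queried name from a raw DNS query packet: a 12-byte
// fixed header, then length-prefixed labels terminated by a zero byte.
public class DnsName {
    public static String queryName(byte[] packet) {
        StringBuilder name = new StringBuilder();
        int i = 12; // skip the fixed DNS header
        while (i < packet.length && packet[i] != 0) {
            int len = packet[i] & 0xFF;
            if (name.length() > 0) name.append('.');
            name.append(new String(packet, i + 1, len, StandardCharsets.US_ASCII));
            i += len + 1;
        }
        return name.toString();
    }

    public static void main(String[] args) {
        byte[] q = new byte[12 + 13];
        // encode "example.com" as 7'example' 3'com' 0
        byte[] labels = {7, 'e', 'x', 'a', 'm', 'p', 'l', 'e', 3, 'c', 'o', 'm', 0};
        System.arraycopy(labels, 0, q, 12, labels.length);
        System.out.println(queryName(q)); // example.com
    }
}
```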
- How to change Windows DNS servers
- DNS resolver
Another simple way is to configure (programmatically) the browser's proxy to pass through your server; this will make your life easier but will be more obvious to users.
How to create a simple proxy in C#?

How can a web page communicate with a local rich client application

I need to implement a process where users punch in a few details into a web page, and have this information fired as some sort of an event to a Java rich client application (Swing) on the same host.
One idea was perhaps implementing an applet that would initiate socket communication with a listener implemented by the Swing application, but I'm not sure whether this is possible at all.
This sort of puzzling piece of integration is basically a given fact. Essentially, both the web application and the Swing one are already active and in use. The only missing bit is sharing info between the two, in a way that would be easy to implement, no matter how dirty.
Any ideas?
Thanks!
Sounds a little confusing to the user, if nothing else.
I would go one of these ways:
- Have your rich client communicate over the network, and put whatever form you were going to have in the browser there.
- Put your rich client into an applet.
- Have both connect to a server somewhere (even locally), which your rich client can poll to see if the form has been filled in.
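The socket idea from the question is workable on the same host. A minimal sketch of the handoff (Java; the message format is arbitrary, and the sender thread here stands in for the applet or whatever runs on the page side):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch of the socket handoff: the Swing app listens on localhost, and the
// page-side component connects and writes one line with the punched-in details.
public class LocalHandoff {
    public static String receiveOne(ServerSocket server) throws Exception {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream(), StandardCharsets.UTF_8))) {
            return in.readLine(); // the Swing app would fire an event with this
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // 0 = any free port
            Thread sender = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("user=alice&dept=sales"); // illustrative payload
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            sender.start();
            System.out.println(receiveOne(server)); // user=alice&dept=sales
            sender.join();
        }
    }
}
```

A fixed, agreed-upon port would replace the ephemeral one in a real deployment, and the Swing side would keep accepting in a background thread.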