How to display the conversation between server and client with Jodd - mail-server

While upgrading from an older version of Jodd, I cannot work out how to display the conversation between the server and the client. In the past I wrote:
smtpServer.debug(true);
imapServer.setProperty("mail.debug", "true");
Now I tried to use:
smtpServer.debugMode(true);
but it had no effect.
I could not identify how to do this with imapServer.
I'd like to show the conversation to my students.

Unfortunately, there was a bug in Jodd: the debug and timeout settings were not applied.
The bug was fixed in the latest commits, soon to be released.
We also introduced some improvements, so the debug and timeout methods now apply to all servers; you no longer need different settings for the IMAP and SMTP servers :)


Letting socket.io client version lag behind server version

Situation
We're using socket.io for mobile-server communications. Since we can't force-upgrade users' devices, if we want to upgrade to version 1 (non-back-compatible), we have to handle both versions on the server for a while.
Question
What are the options?
My current favourite is to wrap both the old version and the new version in a multiplexer. It detects the version of the incoming request based on headers and query parameters and thereby knows which functions to invoke.
Another (shittier) option is to wrap the new version in a module that can translate the old version of the protocol into the new version (and back again) when necessary. This suffers from a serious drawback: it would be time-consuming and uncertain work to ensure I've properly identified and handled all the tiny differences, and some of them might take serious massaging.
(In case you're curious or it's helpful to know, we're doing this in Go.)
It appears that you could run two separate versions of socket.io on the server. Since the two versions don't have unique module filenames, you would probably need to load one version from a different path. Then, when loading the modules and initializing them, you'd assign them to differently named variables. For example:
var io_old = require('old/socket.io');
var io = require('socket.io');
Once you have the two versions loaded on the server, I think there are two different approaches for how they could be run.
1) Use a different port for each version. The older version would use the default port 80 (no configuration change required for that), which is shared with the node.js web server. The newer version would run on a different port (say port 3000). You would then initialize each version of socket.io on its own port, and your newer-version clients would connect to the port the newer version is running on.
For the old socket.io server running on port 80, you would use whatever initialization you already have which probably hooks into your existing http server.
For the new socket.io server running on some other port, you would initialize it separately like this:
var io_old = require('old/socket.io')(server);
var io = require('socket.io')(3000);
Then, in the new version client, you would specify port 3000 when connecting.
var socket = io("http://yourdomain.com:3000");
2) Use a different HTTP request path for each version. By default, each socket.io connection starts with an HTTP request that looks like this: http://yourdomain.com/socket.io/?EIO=xx&transport=xxx&t=xxx. But the /socket.io portion of that request is configurable, and two separate versions of socket.io could each use a different path name. On the server, the .listen() method that starts socket.io listening takes an optional options object which can be configured with a custom path, as in path: "/socket.io-v2". Similarly, the .connect() method in the client accepts the same options object. It's kind of hard to find the documentation for this option because it's actually an engine.io option (which socket.io uses), but socket.io passes the options through to engine.io.
I have not tried either of these myself, but I've studied how socket.io connections are initiated from client and server and it looks like the underlying engine supports this capability and I can see no reason why it should not work.
Here's how you'd change the path on the server:
var io = require('socket.io')(server, {path: "/socket.io.v1"});
Then, in the client code for the new version, you'd connect like this:
var socket = io({path: "/socket.io.v1"});
This would then result in the initial connection request being made to an HTTP URL like this:
http://yourdomain.com/socket.io.v1/?EIO=xx&transport=xxx&t=xxx
Which would be handled by a different request handler on your HTTP server, thus separating the two versions.
FYI, it is also possible that the EIO=3 query parameter in the socket.io connection URL is actually an engine.io version number; that could also be used to discern the client version and "do the right thing" based on its value. I have not found any documentation on how that works and could not even find where that query parameter is examined in the engine.io or socket.io source code, so that would take more investigation as another possibility.
I don't really have an immediate solution for this, but I do have some advice that could save you a lot of time.
First of all, I'm working at a startup that uses socket.io for almost everything.
We knew this problem would come up, so our initial design made everything pluggable, meaning we can swap socket.io out for sockjs and everything still works.
The way it's done is by defining the common set of APIs that rarely change in a system. We call them managers. The managers expose just the API the rest of the developers need to use, without exposing anything else, which speeds things up a lot.
The manager implementation changes in the background, but the APIs stay the same, so the engineers working on the core can confidently make changes.
It seems like you have a tight dependency in your code. Or maybe not; I'm not sure. Try following this principle if you haven't.
We're going to go the route of keeping both the 0.9.x version and the current version as separate libraries on the server. Eventually, when the pool of clients has more-or-less all updated, we'll just pull the plug on the 0.9.x version.
The way we'll manage the two versions is by wrapping the socket.io services in a package that will determine which wrapped socket.io version to pass the request off to. This determination will depend on features of the request, such as custom headers (that can be added to the newer clients) as well as query parameters and other headers utilized exclusively by one version or the other.
Since we're using Go, there's so far no universally agreed-upon way to manage dependencies, let alone one that can respect version differences. Assuming the back-compat branch of the repo weren't broken (which it is), we'd have two options. The first would be to fork the repo and make the back-compat version the master; we'd then import it as if it had nothing to do with the other one. The second option would be to use gopkg.in to pretend the separate branches were separate repos.
In either event, we could import the two branches/repos like so:
import (
socketioV0 "github.com/path/to/older/version"
socketioV1 "github.com/path/to/current/version"
)
And then refer to them in the code using their import names socketioV0 and socketioV1.
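To illustrate the dispatch itself, here is a minimal Go sketch, assuming both imported packages can be wrapped as http.Handler values; the X-Client-Version header is a hypothetical custom header added to newer clients, and the EIO check leans on the earlier observation that only 1.x clients send that query parameter (worth verifying against your own traffic):

package sockmux

import "net/http"

// NewMux routes each incoming socket.io request to the wrapped server
// that matches the client's protocol version.
func NewMux(v0, v1 http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// 1.x clients send an EIO query parameter; newer clients can also
		// be tagged with a custom header. 0.9.x clients send neither.
		if r.URL.Query().Get("EIO") != "" || r.Header.Get("X-Client-Version") != "" {
			v1.ServeHTTP(w, r)
			return
		}
		v0.ServeHTTP(w, r)
	})
}

Mounting this mux at the socket.io path keeps the version decision in one place, so the rest of the code only ever sees the handler it asked for.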

Distributed Recovery - can this be done without timeout?

We have a mail sender application that receives a bunch of mails in one blob and then puts all those mails into the database. This can take up to ten minutes. During this process the state of the mailing is BUILDING.
When it is finished the state gets changed to READY.
When the server crashes (shouldn't happen, of course) and restarts, it looks for all mailings with status BUILDING and marks them as ERROR. This happens because we never want to send incomplete mailings.
Now we'd like to scale up using a second server. The recovery strategy above doesn't work here.
e.g. server 1 is BUILDING a mailing, and server 2 crashes and restarts. Now server 2 will see the BUILDING mailing and doesn't know if it's been aborted or if it's running on another server.
So what's the best recovery strategy for distributed services?
(We thought about some timeout mechanism, where the BUILDING server updates a timestamp every few seconds, and when some server reboots it checks whether there is a BUILDING mailing that hasn't been updated for x minutes. In that case it is very likely that the mailing has been aborted.)
EDIT:
What I'd like to achieve: If some server restarts (after a crash or just because we added a new mailing server to the cluster), it should not mark mailings as ERROR if this particular mailing is actually being built (by another server).
Nice to have: it would be great if this worked without having to store server ids, because then servers could easily be added and removed. Otherwise a server could never be removed completely: there might still be a BUILDING mailing carrying that server's id, and since the server is gone and will never restart, the only server allowed to set that mailing to ERROR no longer exists.
Add two things to your state tracking: a timestamp and the server working on it.
If a server starts up and sees anything in a building state for itself it knows it failed. Conversely, if it starts up and sees something in a building state for another server, it now has information that it's going to need to look at later to see if there's a problem that needs to be addressed. You need to worry about multiple servers restarting at the same time, so you can't just have a server grab all old bundles for all servers at startup.
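As a rough sketch of that idea in Go, assuming a PostgreSQL store and a hypothetical mailings(id, state, owner, heartbeat_at) table (all names are made up for the example):

package mailings

import (
	"database/sql"
	"time"
)

// heartbeat periodically refreshes the timestamp on a mailing this
// server owns, so other servers can tell the build is still alive.
func heartbeat(db *sql.DB, mailingID int64, serverID string, stop <-chan struct{}) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			db.Exec(`UPDATE mailings SET heartbeat_at = now()
				WHERE id = $1 AND owner = $2`, mailingID, serverID)
		case <-stop:
			return
		}
	}
}

// recoverStale runs at startup: only BUILDING mailings whose heartbeat
// is stale get marked ERROR, so builds running on other servers survive.
func recoverStale(db *sql.DB) error {
	_, err := db.Exec(`UPDATE mailings SET state = 'ERROR'
		WHERE state = 'BUILDING'
		  AND heartbeat_at < now() - interval '5 minutes'`)
	return err
}

Note that recoverStale never looks at server ids, which also covers the "nice to have": a removed server's mailings simply go stale and get cleaned up by whichever server restarts next.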
Or you can just use a clustering service for your OS.

Why does my Github webhook keep timing out?

We couldn’t deliver this payload: Service Timeout
I was successfully sending webhooks to my server 5 minutes ago, and now I just keep getting timeouts. I tried deleting the webhook and re-adding it, and changing the URL it points to, but nothing helped.
Am I flooding it with too many pushes, or is GitHub's webhook service just down?
It turns out that GitHub has a 10-second timeout set on their webhooks. That is what I ran into. See the documentation here.
Unless there is some kind of error on the GitHub side (which doesn't seem to be the case at the moment, given their "System Status" history), you might check the program receiving the payload of that webhook.
See a similar problem in Supybot-plugins 225:
I contacted GitHub support and one of the employees has been troubleshooting this for me. Here is part of what he had to say about the issue:
I just tried making a request manually from one of our machines, and that went through with no error (see curl -v output below).
However, I did notice that it took extremely long for the request to be processed -- over 15 seconds (for 2 bytes of data).
Decoupling the listening and reception of the payload from its processing is generally the right approach, as I recommended in "Perl Script slow over Tomcat 6.0 and generates service time out".
The first part should be as fast as possible.
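For instance, here is a minimal Go sketch of that pattern; the /webhook path, the queue size, and the work done in the worker are placeholders:

package main

import (
	"io"
	"log"
	"net/http"
)

// payloads buffers raw webhook bodies so the HTTP handler can return
// immediately; a background worker drains the queue.
var payloads = make(chan []byte, 100)

func hookHandler(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	payloads <- body             // hand off for asynchronous processing
	w.WriteHeader(http.StatusOK) // ack well inside GitHub's 10-second limit
}

func worker() {
	for body := range payloads {
		// The slow work (parsing, builds, notifications) happens here,
		// outside the request/response cycle.
		log.Printf("processing %d bytes", len(body))
	}
}

func main() {
	go worker()
	http.HandleFunc("/webhook", hookHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}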

How to connect and pull data out of a Funambol DM server using a J2SE client?

I have installed the DM server and it is up and running. I have also added a couple of device details via the Koneki simulator.
Now I want to use a J2SE client to connect and pull data from the DM server.
I am stuck on this part. Any code samples?
Actually, there is an OMA-DM client underneath the Koneki simulator that you might want to reuse!
You can grab the source code from the GitHub mirrors of the OMA-DM simulator: here and here
org.eclipse.koneki.simulators.omadm is used to run a new OMA-DM session. Look for the run() method in the org.eclipse.koneki.simulators.omadm.basic.DMBasicSimulation class.
org.eclipse.koneki.protocols.omadm defines the object model used during a simulation.
org.eclipse.koneki.protocols.omadm.client (the most interesting for you) manages all the messages that are exchanged between a client (e.g. the simulator) and a server. Look for the org.eclipse.koneki.protocols.omadm.client.basic.DMBasicSession class.
You should stop by the Koneki forum if you have any further questions (and I am sure you will!)

Problems with making web service requests with custom headers via MonoTouch

My team and I are working against a few webservices that require SOAP Message Headers to be available when making a request. We are not in control of these webservices so we can't change the implementation, even if we wanted to (or at least not without a lot of pain). We just need to be able to have authentication related information & a couple of other items passed through our message headers.
I've read of a few people who've had this problem in the past, with no clear indication of whether they succeeded in pulling it off on MonoTouch.
Here's what I've read so far: http://forums.monotouch.net/yaf_postsm2104.aspx
Any ideas on what we can do to overcome this on the MonoTouch framework?
Here's what I'm trying to do for now:
using (var scope = new OperationContextScope (client.InnerChannel))
{
    client.GetHistories += handler;
    OperationContext.Current.OutgoingMessageHeaders.Add (
        MessageHeader.CreateHeader ("EnvironmentInfo", "http://schemas.contoso.com",
            ServiceContext.Current.OperatingEnvironment));
    OperationContext.Current.OutgoingMessageHeaders.Add (
        MessageHeader.CreateHeader ("AuthenticationToken", "http://schemas.contoso.com",
            ServiceContext.Current.Token));
    client.GetHistoriesAsync (ServiceContext.Current.OperatingEnvironment,
        ServiceContext.Current.Token, request);
}
Thanks for your time.
JM
I was not able to get Message Headers to work with WCF in Mono 2.6. I tried several different ways (including how you do it in your example) - it just doesn't work in Mono 2.6.
I raised a bug for this, which I then closed after discovering it is fixed in the latest trunk. So if you run against Mono 2.7 or greater, this should work.