I'm writing a small XMPP server using the qxmpp library. Now I want to do the routing of messages myself: if I understand the server's implementation correctly, the server forwards a message with a bare JID (contact@myxmpp) in the 'to' attribute to all connected resources for this bare JID.
I want to create an implementation that takes the priority into account and sends the message only to the "most available" resource.
The only way to achieve this with QXmppServer seems to be to change the 'to' field to a full JID, but this is prohibited by the RFC in this case (RFC 6121, 8.5.2.1.1, last paragraph: "In all cases, the server MUST NOT rewrite the 'to' attribute").
Is there a trick I didn't see, or is it impossible to achieve this with the current version 0.8.0, so that I have to open an issue / create a patch for qxmpp?
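To pin down the rule I mean, here is a minimal sketch of the selection logic (illustrated in plain Java rather than qxmpp/C++; the Resource type is a made-up placeholder for whatever the server keeps per connected client):

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class ResourceSelector {

        record Resource(String name, int priority) {}

        // Returns the connected resource with the highest non-negative priority.
        // Resources with negative priority must not receive messages addressed
        // to the bare JID, so an empty result means "handle otherwise"
        // (offline storage, error, ...).
        static Optional<Resource> mostAvailable(List<Resource> connected) {
            return connected.stream()
                    .filter(r -> r.priority() >= 0)
                    .max(Comparator.comparingInt(Resource::priority));
        }

        public static void main(String[] args) {
            List<Resource> resources = List.of(
                    new Resource("phone", 5),
                    new Resource("desktop", 10),
                    new Resource("bot", -1));
            // Prints Optional[Resource[name=desktop, priority=10]]
            System.out.println(mostAvailable(resources));
        }
    }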
I'm trying to build an entirely self-contained trading simulator using QuickFIX/J. The system ought to consist of 2 client applications (a market/exchange and a broker) as well as a router (server/acceptor). In particular I'd like to know:
Client-Client communication
How the two clients can communicate with each other, with the server handling all the messaging logic, i.e. messages should go through the server and it should decide where and how to forward them. I ought to be able to pass a target ID in the FIX message, and the server app should handle routing to the desired client.
Multiple clients on same port
How to have multiple clients connected on the same port, with messages only going to a particular SenderCompID, i.e. clients should not be privy to communication from other clients.
I've already set up the acceptor and 2 clients. I know I could do this programmatically using plain old Java, but I'd like to leverage the QuickFIX library and would like a relatively out-of-the-box solution.
MVP: the client (broker) sends a FIX message through the acceptor (router), the message is processed and forwarded to a particular market, the market receives the message through the server and does some business logic, and the market sends a FIX message back to the client through the acceptor.
PS: I like the QuickFIX library, but I'm very flexible if there are any other libraries/languages you'd recommend.
Short answer: QuickFIX/J (and, as far as I can tell, the same goes for QuickFIX and QuickFIX/n) will not route messages based on tags. This has to be implemented in your application code.
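To illustrate what that application code might look like, here is a rough sketch of tag-based routing in the acceptor's fromApp callback. It assumes (this is not built into QuickFIX/J) that clients put the recipient's CompID into the DeliverToCompID header field (tag 128) and that the acceptor's own CompID is "ROUTER":

    import quickfix.*;
    import quickfix.field.DeliverToCompID;

    public class RoutingApplication implements Application {

        @Override
        public void fromApp(Message message, SessionID sessionID)
                throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, UnsupportedMessageType {
            // Who is this message really for? (assumption: clients fill tag 128)
            String target = message.getHeader().getString(DeliverToCompID.FIELD);
            try {
                // Re-address the message and send it out on the session whose
                // TargetCompID matches the requested recipient.
                Session.sendToTarget(message, "ROUTER", target);
            } catch (SessionNotFound e) {
                // Destination not connected: reject, queue or drop according to
                // your own business rules.
            }
        }

        // The remaining callbacks are not needed for this sketch.
        @Override public void onCreate(SessionID sessionID) {}
        @Override public void onLogon(SessionID sessionID) {}
        @Override public void onLogout(SessionID sessionID) {}
        @Override public void toAdmin(Message message, SessionID sessionID) {}
        @Override public void fromAdmin(Message message, SessionID sessionID) {}
        @Override public void toApp(Message message, SessionID sessionID) {}
    }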
Edit: with regard to your second point: there is no problem having your FIX server listen for multiple FIX connections on the same port (this applies to QuickFIX/J, and I guess also to the other language variants). Sessions are addressed via the SessionID, so it is ensured that only the correct FIX session gets its messages.
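As an illustration, an acceptor configuration along these lines (CompIDs, port and dictionary are placeholders for this sketch) lets both counterparties connect to the same SocketAcceptPort while their sessions stay separate:

    [DEFAULT]
    ConnectionType=acceptor
    SocketAcceptPort=9876
    StartTime=00:00:00
    EndTime=00:00:00
    BeginString=FIX.4.4
    SenderCompID=ROUTER
    DataDictionary=FIX44.xml

    [SESSION]
    TargetCompID=BROKER1

    [SESSION]
    TargetCompID=MARKET1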
If we create multiple resources via update requests using the POST method in REST, what will be the impact on the server side as the number of resources created grows?
I know that with a PUT request we can achieve fault tolerance due to idempotence. If we use POST instead of PUT, what will happen?
If we create a number of resources using POST for updates, is there any performance issue? If we create a number of resources, what is the impact on the server?
With POST and PUT, if we call the same request n times, we are going to hit the server n times, creating a new resource each time, but the same resource should not impact the server. Can you please confirm whether this statement is right or wrong?
If we create multiple resources via update requests using the POST method in REST, what will be the impact on the server side as the number of resources created grows?
First of all, HTTP, the de-facto transport layer of REST, is an application protocol for transferring documents over a network, not an extension of your application domain that you can run your business rules on. Any business rules you infer from sending data over the network are just a side effect of the actual document management you perform via HTTP. While certain things might map well from document management to your business layer, certain things might not; e.g. HTTP isn't designed to support larger kinds of batch processing.
By that, even though HTTP itself defines a set of methods you can use, with IANA administering additional ones, the actual implementation depends on the server itself. It should follow the semantics outlined in the RFC, though it might not. Deviating may harm interoperability with other clients, though, which is why it is recommended to follow the spec.
What implications or impact a request may have on the server depends on a couple of factors, such as the kind of server, the data that needs to be processed, whether work can be offloaded (e.g. to a cache), as well as the internal infrastructure you use. If you have a server with a couple of hundred cores and terabytes of address space, a request might have less of an impact than on a server with only a single CPU core and just a gigabyte of RAM that also has to accommodate a couple of other applications as well as the OS itself. In general, though, the actual impact a request has on the server isn't tied to the operation you invoke, as at its core HTTP is just a remote document management protocol, as explained before. Certain methods, such as PATCH, may be an exception to this rule, as PATCH clearly demands transaction support: either all or none of the operations defined in the patch document must be applied.
I know that with a PUT request we can achieve fault tolerance due to idempotence. If we use POST instead of PUT, what will happen?
RFC 7231 includes a hint on the difference between POST and PUT:
The fundamental difference between the POST and PUT methods is highlighted by the different intent for the enclosed representation. The target resource in a POST request is intended to handle the enclosed representation according to the resource's own semantics, whereas the enclosed representation in a PUT request is defined as replacing the state of the target resource. Hence, the intent of PUT is idempotent and visible to intermediaries, even though the exact effect is only known by the origin server.
POST does not give a client any promises about what happens in case of a network error, i.e. you might not know whether the request reached the server and only the response got lost, or whether the request didn't make it to the server at all. Jim Webber gave an example of why idempotency is important, especially when you deal with money and currencies.
HTTP is rather specific about informing a client when a resource was created, by including an HTTP Location header in the response that contains a URI to the created resource. This works for POST as well as PUT and PATCH. This premise can be utilized to "safely" upload data: a client sends POST requests to the server until it receives a response with a Location header pointing to the created resource, which is then used in the next step to perform a PUT update on that resource with the actual content. This pattern is called the POST-PUT creation pattern, and it is especially useful if you either have a large payload to send or have to guarantee that the state only triggers a business rule once, e.g. in the case of an online purchase.
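To make the pattern concrete, here is a minimal sketch using Java's built-in HttpClient; the URI and payload are invented for illustration:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PostPutCreation {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Step 1: POST until a response arrives that tells us where the new
            // resource lives (Location header). Retrying a lost response is
            // harmless here because no real content has been uploaded yet.
            HttpRequest create = HttpRequest.newBuilder(URI.create("https://api.example.com/orders"))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> created = client.send(create, HttpResponse.BodyHandlers.ofString());
            String location = created.headers().firstValue("Location").orElseThrow();

            // Step 2: PUT the actual content to the freshly created resource.
            // PUT is idempotent, so this step can be safely retried on network errors.
            HttpRequest upload = HttpRequest.newBuilder(URI.create(location))
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\",\"quantity\":1}"))
                    .build();
            client.send(upload, HttpResponse.BodyHandlers.ofString());
        }
    }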
Note that, with the help of conditional requests, some form of optimistic locking could be used as well, though this requires knowing the current state of the resource beforehand. A value that is unique to the current state is included in the request and acts as a distributed lock: if it differs from the state the server currently has (because another client updated the resource in the meantime), the server will reject the request.
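A minimal sketch of such a conditional update, again with Java's HttpClient (URI, payload and ETag value are invented; the ETag would normally come from a prior GET of the resource):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConditionalUpdate {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Value unique to the state we last saw, e.g. taken from an ETag header.
            String etag = "\"33a64df5\"";

            HttpRequest update = HttpRequest.newBuilder(URI.create("https://api.example.com/orders/42"))
                    .header("Content-Type", "application/json")
                    .header("If-Match", etag)
                    .PUT(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\",\"quantity\":2}"))
                    .build();

            HttpResponse<String> response = client.send(update, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 412) {
                // 412 Precondition Failed: another client changed the resource in
                // the meantime, so re-fetch, take the new ETag and retry.
            }
        }
    }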
If we create a number of resources using POST for updates, is there any performance issue? If we create a number of resources, what is the impact on the server?
I'm not fully sure what you mean by creating a number of resources using POST for an update. Do you want to create or update a resource via POST? These methods just differ in the semantics they promise. How you map the event of the document modification to certain business rules triggered in your backend is completely up to you. In general, though, as mentioned before, HTTP isn't ideal in terms of batch processing.
With POST and PUT, if we call the same request n times, we are going to hit the server n times, creating a new resource each time, but the same resource should not impact the server. Can you please confirm whether this statement is right or wrong?
If you send n requests via POST to the server, the server will perform the logic it is supposed to perform on a POST request n times (assuming all of the requests reached the server). Whether a new resource is created or not depends on the implementation. A POST request might only start a backing process, some kind of calculation, or actually do nothing. If a resource was created, though, the response should contain a Location header with a URI that points to the location of the new resource.
In terms of sending n requests via PUT: if the same URI is used for all of these requests, the server in general should apply the payload as the new state of the targeted resource. Whether this internally results in a DB update or not is an implementation detail that may vary from project to project. In general, a PUT request does not result in the creation of a new resource unless the resource the target URI pointed to didn't exist before, though it may also create further resources as a side effect. Imagine you design some kind of version control system: PUT is allowed to have side effects, and such a side effect may be that you perform an update on the HEAD of the trunk, which applies the new state to HEAD, while as a side effect a new resource is created for that commit in the commit history.
So in summary, you can't deduce the impact a request has on a server solely based on the HTTP operation you use, as at its heart HTTP is just an application protocol that transfers documents over a network. The actual business rules that get triggered are just a side effect of the actual document management. What impact a request has on the server depends on multiple factors, such as the type of server you use, but also on the size of the request and what you do with it on the server. Each of the available methods has its own semantics, and you shouldn't compare them by the impact they might have on the server, but by the promises they give to a client. Certain things, like anything related to a balance or money, should be done via PUT due to the idempotent property of that method.
I want to use ejabberd / MongooseIM in a microservice network. XMPP should be our chat protocol alongside a REST API network. I want to send messages arriving at the XMPP server downstream to worker services. Has anybody done this, or could you point me in the right direction?
My first thought is to use RabbitMQ to send the new incoming messages to the workers.
There are basically two choices for giving your workers access to the messages routed by ejabberd / MongooseIM. I'll focus on MongooseIM, since I know it better (DISCLAIMER: I'm on the dev team).
The first is to scan the message archive in an async/polling fashion. Message Archive Management describes the XMPP-level protocol for accessing it, but for your use case the important part is message persistence: just make sure the relevant module (mod_mam) is enabled in the server config, and the messages will hit the database. The databases supported for MAM are PostgreSQL and Riak, though there was also some work on a Cassandra backend (YMMV). This doesn't require tinkering with the server / with Erlang, as long as there's a DB driver for your language of choice available. Since PR#657 it's possible to store the messages in raw XML or even some custom format if you're willing to write the serialization module.
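A worker polling the archive could then be as simple as a periodic DB query. The sketch below uses plain JDBC; the table and column names are only illustrative, so check the MAM schema that ships with your MongooseIM version:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MamPoller {
        public static void main(String[] args) throws Exception {
            Connection db = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mongooseim", "mim", "secret");

            long lastSeenId = 0; // persist this between runs in a real worker

            // Column and table names are assumptions for this sketch.
            PreparedStatement stmt = db.prepareStatement(
                    "SELECT id, remote_bare_jid, message FROM mam_message WHERE id > ? ORDER BY id");
            stmt.setLong(1, lastSeenId);

            try (ResultSet rows = stmt.executeQuery()) {
                while (rows.next()) {
                    lastSeenId = rows.getLong("id");
                    // Hand the archived stanza to the worker's business logic.
                    System.out.println(rows.getString("remote_bare_jid") + ": " + rows.getString("message"));
                }
            }
        }
    }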
The second option is to use the server mechanism of hooks and handlers (also available in ejabberd), which can trigger a server action on events like "user sent a message", "user logged in", "user logged out", ... This, however, requires a server-side extension written in Erlang. In the simplest case, the extension could forward any interesting event (with message content and metadata) via AMQP or just call some external HTTP/REST API; that way the real work is carried out by the workers, giving you freedom with regard to the implementation language. This option also doesn't require enabling mod_mam or setting up a database for message persistence (which you could still have with a persistent message queue...).
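On the worker side of such an AMQP-forwarding extension, consuming the events is straightforward. A sketch with the RabbitMQ Java client, assuming the server-side hook publishes each stanza to a queue named "xmpp.incoming" (queue name and payload format are assumptions):

    import java.nio.charset.StandardCharsets;

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class XmppMessageWorker {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Durable queue the server-side hook is assumed to publish to.
            channel.queueDeclare("xmpp.incoming", true, false, false, null);

            DeliverCallback onMessage = (consumerTag, delivery) -> {
                // The hook could forward the raw stanza XML or a custom
                // serialization; here the bytes go straight to business logic.
                String stanza = new String(delivery.getBody(), StandardCharsets.UTF_8);
                System.out.println("Received from XMPP server: " + stanza);
            };
            channel.basicConsume("xmpp.incoming", true, onMessage, consumerTag -> { });
        }
    }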
In general, the idea is perfectly feasible.
Generally, the most common XMPP extension used to build messaging systems for machine-to-machine communication, the Internet of Things, microservices, etc. is PubSub, as defined in XEP-0060.
This is a module you can enable in ejabberd. It is API based, so you can even customize the behaviour of that module to your application's specifics.
PubSub basically allows you to decouple senders and receivers and is designed especially for that use case.
I've read everything I could find on verifying e-mail addresses. The widely encountered solution is this, and it doesn't work (for one, actual nslookup output differs significantly from what the article shows, so I don't get an actual address to telnet to).
But then I thought: I don't need to verify the address. I just want to detect clearly bogus addresses (an address such that sending a message to it will yield a "delivery failed" response). Is it possible to do this in principle, and to implement it using C++ sockets or the Java networking API in particular?
Depending on which operating system and tools you use, to verify the recipient's domain and whether it is recorded in DNS with a meaningful MX (mail exchange) record, you could use dig in place of nslookup. For foo@bar.com:
$ dig bar.com MX
The possibilities for detecting bogus e-mail addresses are typically limited, though. Availability largely depends on how "generously" the MTA offers this information; most don't, these days. The SMTP protocol includes some verbs you could then use, such as VRFY. On the other hand, spammers could do just that, hence … (That's one reason why a mail loop is run in order to detect valid e-mail addresses fairly reliably: embedding, as I'm sure you know, a verification string to be sent back, or passed via URL to some web service.)
SMTP, being a text protocol, would be used via some "transport layers" underlying higher-level APIs like JavaMail. I'd look into the programmability of these with the programming language used. Typically, there is some socket library for sending and retrieving lines of text.
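From Java specifically, the MX check from above can be done without any third-party library via the JNDI DNS provider; a quick sketch, with bar.com standing in as the example domain from above:

    import java.util.Hashtable;
    import javax.naming.directory.Attribute;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.InitialDirContext;

    public class MxLookup {
        public static void main(String[] args) throws Exception {
            // Use the JDK's built-in DNS JNDI provider.
            Hashtable<String, String> env = new Hashtable<>();
            env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");

            Attributes answer = new InitialDirContext(env)
                    .getAttributes("bar.com", new String[] { "MX" });
            Attribute mx = answer.get("MX");

            if (mx == null || mx.size() == 0) {
                // No mail exchanger registered: mail to this domain is almost
                // certainly undeliverable, so the address can be treated as bogus.
                System.out.println("no MX records for bar.com");
            } else {
                for (int i = 0; i < mx.size(); i++) {
                    System.out.println("MX: " + mx.get(i)); // e.g. "10 mail.bar.com."
                }
            }
        }
    }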
(newbie alert)
I need to program a multiparty communication service for a course project, and I am considering XMPP for it.
The service needs the following messaging semantics:
1) The server will provide a method of registering and unregistering an address such as somenode@myservice.com/SomeResource. (For now I will do this manually.)
2) The server will provide a method of forwarding incoming messages from, say, somenode@myservice.com/SomeResource to someothernode@myservice.com/someOtherResource, assuming that the latter is registered, and a method for removing this forwarding. (For now I will do this manually.)
3) Anonymous clients can send messages to, say, somenode@myservice.com/someresource (one-way traffic only). If there is any forwarding set up, the message will be forwarded. Otherwise, if the address somenode@myservice.com/someresource is registered, the message will be stored for later delivery (or immediate delivery if a retrieving client is online - see below). If there is no forwarding and the address is unregistered, the message will be silently dropped.
4) Clients can connect and retrieve messages from a registered address. The exact method of authenticating clients (e.g., passwords?) is yet to be determined.
Eventually, I want to add support for clients to connect from a web browser so they can register/unregister and set/remove forwarding themselves.
Thus, the server will have to do some non-standard switching. Will I need to implement an XMPP server for this? I guess some (or all?) of this could also be done using an XMPP client bot.
You might investigate whether Pub/Sub is a better fit for your problem than custom messaging semantics. If so, you may find an implementation of it in your existing XMPP server.
You could probably get away with using a message queue like ActiveMQ for the communication and Apache Camel for the routing/forwarding/processing.
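As a rough sketch of that idea, a Camel route along these lines could implement the forwarding rule from point 3; the queue and header names are invented, and the ActiveMQ component has to be registered in the CamelContext:

    import org.apache.camel.builder.RouteBuilder;

    public class ForwardingRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("activemq:queue:inbox")
                .choice()
                    // The sending client sets a "recipient" header naming the target address.
                    .when(header("recipient").isNotNull())
                        .toD("activemq:queue:${header.recipient}")
                    // No recipient / unregistered address: drop silently, as in point 3.
                    .otherwise()
                        .stop();
        }
    }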