With how many resources can a user log into a standard compliant XMPP server? - xmpp

Simple question: is there any restriction on the number of concurrent logins using the same bare JID but different resources in the XMPP standard?

Unlimited; there is no restriction in the RFCs.

It is implementation-defined, with no set maximum in the RFCs. However, RFC 6120, section 7.6.2.1 says:
If the account has reached a limit on the number of simultaneous
connected resources allowed, the server MUST return a stanza error (Section 8.3.3.18).
S: <iq id='tn281v37' type='error'>
<error type='wait'>
<resource-constraint
xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
</error>
</iq>
Therefore, it is not true that there is no limit, just that each server installation can make different decisions about what the limit is.
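As a concrete illustration of such a server-side limit, ejabberd caps concurrent sessions per account with the max_user_sessions shaper rule (YAML config fragment; the value 10 is just an example):

```yaml
# ejabberd.yml fragment: limit each account to 10 simultaneous resources
shaper_rules:
  max_user_sessions: 10
```

When the limit is exceeded, the server can reject the new binding with the resource-constraint error quoted above.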

Related

Cannot use sc_http_req_rate when the counter isn’t currently tracked

As per documentation, sc_http_req_rate will return the rate of http requests from the stick table, but only from the currently tracked counters.
From testing, this means that if you are incrementing your counter using http-response rather than http-request, the field is unavailable. It cannot be used for rate limiting, or sent on to the backends.
A down-to-earth consequence of this is that I cannot limit the number of bots generating too many 404 requests.
How can I load and use the stick table http rate during the request if it’s only tracked in the response?
Thank you :)

What is the best HTTP status code for pin code's "Max Attempt Reached"?

I'm implementing a pin code authorization in my web application and one of the requirements is to limit the attempt count of a user to n times in a day.
So, what is the best HTTP status code returned to the user when they reached the max attempt count?
Now, I'm thinking of
403
429 (but it's not about sending too many requests)
400 (but the request payload isn't invalid)
429 is exactly what you want.
From RFC 6585: https://datatracker.ietf.org/doc/html/rfc6585
429 Too Many Requests
The 429 status code indicates that the user has sent too many
requests in a given amount of time ("rate limiting").
The response representations SHOULD include details explaining the
condition, and MAY include a Retry-After header indicating how long
to wait before making a new request.
For example:
HTTP/1.1 429 Too Many Requests
Content-Type: text/html
Retry-After: 3600
<html>
<head>
<title>Too Many Requests</title>
</head>
<body>
<h1>Too Many Requests</h1>
<p>I only allow 50 requests per hour to this Web site per
logged in user. Try again soon.</p>
</body>
</html>
Note that this specification does not define how the origin server
identifies the user, nor how it counts requests. For example, an
origin server that is limiting request rates can do so based upon
counts of requests on a per-resource basis, across the entire server,
or even among a set of servers. Likewise, it might identify the user
by its authentication credentials, or a stateful cookie.
Responses with the 429 status code MUST NOT be stored by a cache.
Note how the spec invites the service/implementation to provide details. It does not define what counts as too many requests or anything specific, really. Therefore, you will want to include something like "stop spamming my service because x, y, z" in the response body.
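A minimal sketch of the server-side bookkeeping behind such a 429 (the class name and defaults are made up for illustration; a real deployment would persist attempts in a shared store rather than in memory):

```python
import time


class PinAttemptLimiter:
    """Tracks PIN attempts per user within a rolling window (hypothetical helper)."""

    def __init__(self, max_attempts=5, window_seconds=86400):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self._attempts = {}  # user id -> timestamps of recent attempts

    def register_attempt(self, user, now=None):
        """Record one attempt; return True if the user is still under the limit."""
        now = time.time() if now is None else now
        recent = [t for t in self._attempts.get(user, []) if now - t < self.window]
        recent.append(now)
        self._attempts[user] = recent
        return len(recent) <= self.max_attempts

    def retry_after(self, user, now=None):
        """Seconds until the oldest recorded attempt ages out of the window."""
        now = time.time() if now is None else now
        recent = self._attempts.get(user, [])
        if not recent:
            return 0
        return max(0, int(recent[0] + self.window - now))
```

On a failed check, the endpoint would answer 429 and set the Retry-After header to `limiter.retry_after(user)`, exactly as RFC 6585 suggests.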

Which ActiveMQ properties/settings should be stated in an interface specification document?

A customer wants to exchange data between his application and our application via ActiveMQ, so we want to create an Interface Specification Document which describes the settings and properties that both applications must use so that they can communicate. We don't know which programming language or API the customer will use; so if the specification is incomplete they might implicitly use settings that we don't expect.
So I'm wondering which settings must be the same on both sides, and which settings can be decided by each application on its own. This is what I have so far:
Must be specified in document:
connector type (openwire, stomp, ...)
connector settings (host name where broker runs, TCP port, user name, password)
message type (TextMessage, BytesMessage...)
payload details (XML with XSDs, JSON with schema, ...)
message encoding (UTF-8), for text payload
use queues, or topics, or durable topics
queue names
is any kind of request/response protocol being used
use single queue for requests and responses (with selectors being used to get correct messages), or use separate queues for requests and responses
how to transfer correlation ID used for correlating requests and responses
message expiration
Must not be specified in document:
ActiveMQ broker version (all versions are compatible, right?)
message compression (it should be transparent?)
What did I miss? Which things should be stated in such a document to ensure that two applications can communicate via ActiveMQ?
What did I miss?
You missed message headers. These can be broken into two categories:
Built-in (JMS) headers
Custom headers
Examples of the built-in headers are things such as JMSMessageID, JMSXGroupID, etc. In some cases, your interface definition will need to include details of whether and how these values will be set. For example, if messages need to be grouped, then any message producer or consumer using the definition will need to be aware of this.
Similarly, any custom headers (common uses include bespoke ordering, source-system identification, authorization tokens, etc.) attached to the messages need to be part of the interface definition.
In fact, I would argue that the interface definition only needs to include two things:
a schema definition for the message body, and
any headers + if they are required or optional
Everything else you have listed above is either a deployment or a management concern.
For example, whether a consumer or producer should connect to a queue or topic is a management concern, not an interface concern. The address of the queue/topic is a deployment concern, not an interface concern.
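To make the header portion concrete: if the interface document pins the connector type to STOMP, a conforming producer's frame might look like this (the destination, group header, and custom x-source-system header are illustrative, not prescribed by any spec):

```text
SEND
destination:/queue/orders
content-type:application/json
JMSXGroupID:customer-42
x-source-system:crm

{"orderId": 123}
```

(The frame is terminated by a NUL byte, omitted here.) The interface document might then state, for example, that JMSXGroupID is required for message grouping while x-source-system is optional.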

HornetQ clustering topologies

I understand that in HornetQ you can do live-backup pairs type of clustering. I also noticed from the documentation that you can do load balancing between two or more nodes in a cluster. Are those the only two possible topologies? How would you implement a clustered queue pattern?
Thanks!
Let me answer this using two terminologies: first, the core queues from HornetQ:
When you create a cluster connection, you set an address that is used to load-balance HornetQ addresses and core queues (including their direct translation into JMS queues and JMS topics), for the addresses that fall under the cluster connection's base address (usually jms).
When a core queue is load-balanced, messages are distributed among the different nodes: each node gets one message at a time.
When there is more than one queue on the same address, all the queues on the cluster receive the messages. If one of those queues exists on more than one node, the previous rule (each message being load-balanced) applies to it as well.
In JMS terms:
Topic subscriptions will receive all the messages sent to the topic. If a topic subscription name/ID is present on more than one node (say, the same clientID and subscriptionName on different nodes), those subscriptions will be load-balanced.
Queue messages will be load-balanced across all instances of the queue.
Notice that there is a forward-when-no-consumers setting, meaning that a node may not get a message if it has no consumer; you can configure that as well.
How would you implement a clustered queue pattern?
Tips for EAP 6.1 / HornetQ 2.3. To implement a distributed queue/topic:
Read the official doc for your version: e.g. for 2.3, https://docs.jboss.org/hornetq/2.3.0.Final/docs/user-manual/html/clusters.html
Note that the old setting clustered=true is deprecated in 2.3+; defining the cluster connection is enough, and the internal core bridges are created automatically.
Take the full-ha configuration as a baseline, or make sure you have JGroups properly set up. This post goes deeply into the subject: https://developer.jboss.org/thread/253574
Without it, no errors are shown and the core bridge connection is
established... but messages are not distributed; again, no errors
or warnings at all...
Make sure the security domain, security realms, users, passwords, and roles are properly set.
E.g. I confused the domain id ('other') with the realm id
('ApplicationRealm') and got auth errors, but the errors were
generic, so I wasted time checking users, passwords, roles... until I
eventually found out.
Enable debug logging (logger.org.hornetq.level=DEBUG)
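As a minimal sketch of the "defining the cluster connection is enough" tip, the messaging subsystem of an EAP standalone-full-ha profile contains a block like this (the names follow the shipped defaults; adapt them to your setup):

```xml
<cluster-connections>
    <cluster-connection name="my-cluster">
        <address>jms</address>
        <connector-ref>netty</connector-ref>
        <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
</cluster-connections>
```

The address element (jms here) selects which addresses participate in the cluster; the forward-when-no-consumers setting mentioned earlier can be added inside the same element.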

Mule: What's the difference between a multicasting-router and a static-recipient-list-router?

I can't really see a difference between a multicasting-router and a static-recipient-list-router. Why would I use one over the other?
According to Mule-2.x user guide
Recipient List
The Recipient List router can be used to send the same event to multiple endpoints over the same transport, or to implement routing-slip behaviour where the next destination for the event is determined from the event properties or payload. Mule provides an abstract recipient-list implementation, org.mule.routing.outbound.AbstractRecipientList, which provides a thread-safe base for specialised implementations. Mule also provides a Static Recipient List that takes a configured list of endpoints from the current event or statically declared on the endpoint.
<outbound>
    <static-recipient-list-router>
        <payload-type-filter expectedType="javax.jms.Message"/>
        <recipients>
            <spring:value>jms://orders.queue</spring:value>
            <spring:value>jms://tracking.queue</spring:value>
        </recipients>
    </static-recipient-list-router>
</outbound>
Multicasting Router
The Multicasting router can be used to send the same event over multiple endpoints. When using this router, care must be taken to configure the correct transformers on the endpoints to handle the event source type.
<outbound>
    <multicasting-router>
        <jms:endpoint queue="test.queue" transformer-refs="StringToJmsMessage"/>
        <http:endpoint host="10.192.111.11" transformer-refs="StringToHttpClientRequest"/>
        <tcp:endpoint host="10.192.111.12" transformer-refs="StringToByteArray"/>
        <payload-type-filter expectedType="java.lang.String"/>
    </multicasting-router>
</outbound>
Remember that care should be taken to ensure that the message being routed is transformed to a format that the endpoint understands.
Straight from the horse's mouth (Mule in Action, by David Dossot and John D'Emic, pp. 98-100):
The static-recipient-list router lets you simultaneously send the same message to multiple endpoints. You'll usually use a static recipient list when each endpoint is using the same transport. This is often the case with VM and JMS endpoints.
Use static recipient lists when sending the same message to endpoints using identical transports
The multicasting router is similar to the static recipient list in that it simultaneously sends the same message across a set of outbound endpoints. The difference is that the multicasting router is used when the endpoint list contains different types of transports.
Use the multicasting router when sending the same message to endpoints using different transports
This is how I understand these:
The static-recipient-list router sends the payload to each recipient in the order they are listed. This gives you the ability to modify the payload before proceeding to the next endpoint, and to stop processing if an error occurs.
The multicasting-router sends the same payload to all endpoints at the same time. You cannot change the payload per endpoint, and you cannot stop the other endpoints from processing if one of them fails.