I have multiple SafeNet HSMs, and I want to connect to all of them at the same time from a single client. I know this cannot be done through PKCS#11, because PKCS#11 has the concept of a single HSM at a time with multiple slots.
So, is it possible to connect to multiple HSMs at the same time?
Yes, SafeNet's HSM models support something called High Availability (HA) mode.
This allows the application to see a single virtual HSM rather than a group of individual HSMs.
I'm not sure whether this question is referring to the ability to connect to separate HSMs for separate functions, or to load-balance and provide failover between the HSMs.
For the first scenario, if you have multiple HSMs registered with the client, they should show up as separate slots and you can use the desired slot in your PKCS #11 code:
Output from ckdemo option 11 (Slot Info):
Slots available:
slot#1 - LunaNet Slot
slot#2 - LunaNet Slot
As Raj mentioned, SafeNet Luna HSMs do have a High Availability (HA) mode that allows load balancing and failover. To expand on that answer, if you configure your HSMs for HA use and create an HA group on your SafeNet client using the vtl haAdmin command, you will see a virtual slot in addition to the separate slots for the individual HSMs:
Output from ckdemo option 11 (Slot Info):
Slots available:
slot#1 - LunaNet Slot
slot#2 - LunaNet Slot
slot#3 - HA Virtual Card Slot
You can now use that virtual slot in your PKCS #11 code to interface with the HSMs in the HA pool, and the SafeNet client software will take care of routing the requests between the HSMs.
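For instance, if you access the HSM through the Java SunPKCS11 provider (Java 9+) rather than the raw C API, selecting a particular slot, whether an individual one or the HA virtual slot, might look like the minimal sketch below; the config path, library path, slot number, and partition password are placeholders for your own environment:

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;

public class LunaSlotExample {
    public static void main(String[] args) throws Exception {
        // luna.cfg (path is an assumption) is expected to contain something like:
        //   name = LunaHA
        //   library = /usr/safenet/lunaclient/lib/libCryptoki2_64.so   (depends on your client install)
        //   slot = 3                                                   (an individual slot or the HA virtual slot)
        Provider pkcs11 = Security.getProvider("SunPKCS11").configure("/etc/pki/luna.cfg");
        Security.addProvider(pkcs11);

        // Log in to the chosen slot; the PIN is the partition password.
        KeyStore ks = KeyStore.getInstance("PKCS11", pkcs11);
        ks.load(null, "partition-password".toCharArray());
        System.out.println("Objects visible on the selected slot: " + ks.size());
    }
}

Pointing the config at the HA virtual slot is all that changes compared to a single-HSM setup; the client library still handles the routing underneath.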
I am new to Vert.x. I am confused about the event bus in a clustered environment.
As the Vert.x documentation says:
The event bus doesn’t just exist in a single Vert.x instance. By
clustering different Vert.x instances together on your network they
can form a single, distributed event bus.
How exactly are the event buses of different Vert.x instances joined together in a cluster to form a single distributed event bus, and what is the role of the ClusterManager in this case? How does the communication between nodes work in the distributed event bus? Please explain this in technical detail. Thanks.
There is more info about clustering in the cluster managers section of the docs.
The key points are:
Vert.x has a clustering SPI; implementations are named "cluster managers"
Cluster managers provide Vert.x with discovery and membership management of the clustered nodes
Vert.x does not use the cluster manager for message transport; it uses its own set of TCP connections
If you want to try this out, take a look at Infinispan Cluster Manager examples.
For more technical details, I guess the best option is to go to the source code.
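To make those key points concrete, here is a minimal sketch (assuming Vert.x 3.x with a cluster manager implementation such as Hazelcast or Infinispan on the classpath; the address "news" is just an example) of starting a clustered instance and using the distributed event bus:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusteredNode {
    public static void main(String[] args) {
        // clusteredVertx() starts whichever cluster manager is found on the classpath;
        // the cluster manager only handles node discovery and membership.
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // A consumer registered on any node is reachable from every node;
                // the messages themselves travel over Vert.x's own TCP connections,
                // not through the cluster manager.
                vertx.eventBus().consumer("news", msg ->
                    System.out.println("received: " + msg.body()));
                vertx.eventBus().publish("news", "hello cluster");
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}

Run the same program on two machines (or twice on one machine) and a publish on one node reaches the consumers registered on the other.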
I have trouble establishing an asynchronous point-to-point channel using ZeroMQ.
My approach to building a point-to-point channel was to create as many ZMQ_PAIR sockets as there are peers in the network. Because a ZMQ_PAIR socket ensures an exclusive connection between two peers, each peer needs one socket per other peer. My first attempt is shown in the following diagram, which represents the pairing connections between two peers.
But the problem with the above approach is that each pairing socket needs a distinct bind address. For example, if there are four peers in the network, then each peer needs at least three (TCP) addresses to bind for the rest of the peers, which is very unrealistic and inefficient.
(I assume that each peer has exactly one unique address, e.g. tcp://*:5555.)
It seems that there is no way other than using different patterns that involve a set of message brokers, such as XREQ/XREP.
(I intentionally avoid a broker-based approach, because my application will exchange messages between peers heavily, which would often result in a performance bottleneck at the broker processes.)
But I wonder whether anybody uses ZMQ_PAIR sockets to efficiently build point-to-point channels. Or is there a way to avoid needing distinct host IP addresses for multiple ZMQ_PAIR sockets to bind to?
Q: How to effectively establish ... well,
Given the above narrative, the story of "How to effectively ..." (where the metric of what and how actually measures the desired effectiveness may get some further clarification later) turns into another question: "Can we re-factor the ZeroMQ signalling / messaging infrastructure so as to work without using as many IP-address:port# combinations as a tcp:// transport-class based topology would actually need?"
Given the explicitly expressed limit of having no more than just one IP:PORT# per host/node (which is thus the architecture's / design's very, if not the most, expensive resource), one will have to overcome a lot of trouble on such a way forward.
It is fair to note that any such attempt will come at an extra cost to be paid. There is no magic wand to "bypass" the principal limit expressed above, so get ready to indeed pay the costs.
It reminds me of one project in TELCO, where a distributed system was operated in a similar manner with a similar original motivation. Each node had an ssh/sshd service set up, where local port forwarding was used to expose just one publicly accessible IP:PORT# access point, and all the rest was implemented "inside" a mesh of topological links going through ssh tunnels; not just for the encryption service, but for the comfort of being able to maintain all the local-port forwarding towards specific remote ports as a means of setting up and operating exclusive peer-to-peer links between all the service nodes, while still having just a single public IP:PORT# per node.
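A minimal sketch of that idea with the Java binding (JeroMQ) might look as follows; the host name peer-b and port 6001 are made up, and the sketch assumes an ssh tunnel such as ssh -L 6001:localhost:6001 peer-b is already running, so that connecting to the local port transparently reaches the PAIR socket bound on the remote node:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class PairOverSshTunnel {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // The exclusive PAIR link only ever talks to localhost; the ssh
            // local-port forwarding carries it to the peer's single public IP:PORT#.
            ZMQ.Socket pair = ctx.createSocket(SocketType.PAIR);
            pair.connect("tcp://localhost:6001");
            pair.send("ping");
            System.out.println("reply: " + pair.recvStr());
        }
    }
}

One tunnel (and one local port) per peer-to-peer link is still needed, but only the single ssh port has to be publicly reachable on each node.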
If no other approach seems feasible (PUB/SUB being ruled out either because, with older ZeroMQ/API versions, all the traffic actually flows to each terminal node and topic filtering is processed only on the SUB side, which neither the security nor the network department will like to support, or because, with newer ZeroMQ/API versions, topic filtering is processed on the sender's side, concentrating the workload and creating immense resource needs on the PUB side; addressing, dynamic network peer (re-)discovery, maintenance, resource planning, fault resilience, ... no, there is no easy shortcut anywhere near, ready to just grab and (re-)use), then the above-mentioned "stone-age" ssh/sshd port forwarding, with ZeroMQ running against such local ports only, may save you.
Anyway - Good Luck on the hunt!
I want to manage bandwidth and traffic based on user activity on a Squid proxy server.
I did some research but couldn't find the solution I want.
For example, users who generate more than 256K of traffic should be restricted by the server.
Can you help me?
Thanks
I'm assuming Squid 3.x.
Delay pools provide a way to limit the bandwidth of certain requests based on any list of criteria.
class:
the class of a delay pool determines how the delay is applied, ie, whether the different client IPs are treated separately or as a group (or both)
class 1:
a class 1 delay pool contains a single unified bucket which is used for all requests from hosts subject to the pool
class 2:
a class 2 delay pool contains one unified bucket and 255 buckets, one for each host on an 8-bit network (IPv4 class C)
class 3:
contains 255 buckets for the subnets in a 16-bit network, and individual buckets for every host on these networks (IPv4 class B)
class 4:
as class 3 but in addition have per authenticated user buckets, one per user.
class 5:
custom class based on tag values returned by external_acl_type helpers in http_access. One bucket per used tag value.
Delay pools allow you to limit traffic for clients or client groups, with various features:
Can specify peer hosts which aren't affected by delay pools, ie,
local peering or other 'free' traffic (with the no-delay peer
option).
delay behavior is selected by ACLs (low and high priority traffic,
staff vs students or student vs authenticated student or so on).
each group of users has a number of buckets, a bucket has an amount
coming into it in a second and a maximum amount it can grow to; when
it reaches zero, object reads are deferred until one of the object's
clients has some traffic allowance.
any number of pools can be configured with a given class and any set
of limits within the pools can be disabled, for example you might
only want to use the aggregate and per-host bucket groups of class 3,
not the per-network one.
In your case you can use:
For a class 4 delay pool:
delay_pools pool 4
delay_parameters pool aggregate network individual user
The delay pool can then be configured in your Squid proxy server.
For example, to limit each user to 128 Kbit/s (16000 bytes/sec) no matter how many workstations they are logged into:
delay_pools 1
delay_class 1 4
delay_access 1 allow all
delay_parameters 1 32000/32000 8000/8000 600/64000 16000/16000
Please read more:
http://wiki.squid-cache.org/Features/DelayPools
http://www.squid-cache.org/Doc/config/delay_parameters/
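If the limit should only apply to authenticated users rather than to everyone, a minimal squid.conf sketch along the following lines may work (it assumes proxy authentication is already configured; the ACL name is illustrative, 32000 bytes/sec corresponds roughly to the 256K figure from the question, and -1/-1 leaves a bucket unlimited):

# Sketch: per-authenticated-user limit of 32000 bytes/sec (~256 Kbit/s); ACL name is illustrative
acl authusers proxy_auth REQUIRED
delay_pools 1
delay_class 1 4
delay_access 1 allow authusers
delay_access 1 deny all
delay_parameters 1 -1/-1 -1/-1 -1/-1 32000/32000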
I understand that in HornetQ you can do live-backup pairs type of clustering. I also noticed from the documentation that you can do load balancing between two or more nodes in a cluster. Are those the only two possible topologies? How would you implement a clustered queue pattern?
Thanks!
Let me answer this using two terminologies: first, the core queues from HornetQ:
When you create a cluster connection, you are setting an address used to load-balance HornetQ addresses and core queues (including their direct translation into JMS queues and JMS topics), for the addresses that are part of the cluster connection's basic address (usually the address is jms).
When you load-balance a core queue, it will be load-balanced among the different nodes; that is, each node will get one message at a time.
When you have more than one queue on the same address, all the queues on the cluster will receive the messages. If one of these queues exists on more than one node, then the previous rule of each message being load-balanced will also apply.
In JMS terms:
Topic subscriptions will receive all the messages sent to the topic. If a topic subscription name/ID is present on more than one node (say, the same clientID and subscriptionName on different nodes), they will be load-balanced.
Queues will be load-balanced across all the existing queues.
Notice that there is a setting for forwarding when there are no consumers (forward-when-no-consumers), meaning that a node may not get a message if it doesn't have a consumer; you can configure that as well.
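For a concrete, if simplified, picture, a plain JMS client attached to one node of the cluster could look like the sketch below; the JNDI names, credentials, and queue name are assumptions for an EAP-style setup, and running several copies of this consumer against different nodes is what exercises the load balancing described above:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class ClusteredQueueConsumer {
    public static void main(String[] args) throws Exception {
        // jndi.properties is assumed to point at one of the cluster nodes.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory"); // assumed JNDI name
        Queue queue = (Queue) ctx.lookup("jms/queue/exampleQueue");                           // assumed JNDI name

        Connection connection = cf.createConnection("appuser", "apppassword");                // assumed credentials
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            connection.start();
            // With consumers like this one running on several nodes, each message
            // sent to the queue is delivered to exactly one of them.
            TextMessage message = (TextMessage) consumer.receive(5000);
            System.out.println("received: " + (message == null ? "nothing" : message.getText()));
        } finally {
            connection.close();
        }
    }
}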
How would you implement a clustered queue pattern?
Tips for EAP 6.1 / HornetQ 2.3 to implement a distributed queue/topic:
Read the official doc for your version: e.g. for 2.3 https://docs.jboss.org/hornetq/2.3.0.Final/docs/user-manual/html/clusters.html
Note that the old setting clustered=true is deprecated in 2.3+; defining the cluster connection is enough, and the internal core bridges are created automatically.
Take the full-ha configuration as a baseline, or make sure you have JGroups properly set up. This post goes deeply into the subject: https://developer.jboss.org/thread/253574
Without it, no errors are shown, the core bridge connection is
established... but messages are not being distributed, again no errors
or warnings at all...
Make sure the security domain and security realms, users, passwords, and roles are properly set.
E.g. I confused the domain id ('other') with the realm id
('ApplicationRealm') and got auth errors, but the errors were
generic, so I wasted time checking users, passwords, roles... until I
eventually found out.
Debug by enabling debug logging (logger.org.hornetq.level=DEBUG).
As libnids seems to be two years old and has no current updates, does anyone know of an alternative to libnids, or a better library, since it seems to drop packets at higher speeds (more than 1 Gbit/sec)?
Moreover, it has no support for 64-bit IP addresses.
An alternative to libnids is Bro. It comes with a robust TCP reassembler which has been thoroughly tested and used by the network security monitoring community over the years. It ships with a bunch of protocol analyzers for common protocols, such as HTTP, DNS, FTP, SMTP, and SSL.
Bro is "the Python of network processing:" it has its own domain-specific scripting language with first-class types and functions for IP addresses (both v4 and v6), subnets, ports. The programming style has an asynchronous event-based flavor: users write callback functions for events that reflect network activity. The analysis operates at connection granularity. Here is an example:
event connection_established(c: connection)
    {
    if ( c$id$orig_h == 1.2.3.4 && c$id$resp_p == 31337/tcp )
        # IP 1.2.3.4 successfully connected to remote host at port 31337.
        print "matched connection", c$id;
    }
Moreover, Bro supports a cluster mode that allows for line-rate monitoring of 10 Gbps links. Because most analyses do not require sharing of inter-connection state, Bro scales very well across cores (using PF_RING) as well as multiple nodes. There exist Bro installations with >= 140 nodes. A typical deployment looks as follows:
[Diagram of a typical Bro cluster deployment (source: bro.org)]
Due to the high scalability, there is typically no longer a need to grapple with low-level details and fine-tune C implementations. Put differently, with Bro you spend your time working on the analysis, not on the implementation.