Can a single instance of FIXimulator support multiple Banzai clients? - quickfix

I have been testing FIXimulator and, although it is very useful and mostly satisfies my needs, it appears that in spite of being based on QuickFIX initiators and acceptors, a single FIXimulator instance can only support a single Banzai client. I have read the available documentation for FIXimulator and can find no mention of its ability to handle multiple sessions with different CompIDs on different ports. Does anyone have any experience of running many Banzai clients against a single FIXimulator instance? Any pointers very greatly appreciated!

FIXimulator runs on QuickFIX/J and therefore has a QuickFIX/J configuration file; the one for FIXimulator is located at ./config/FIXimulator.cfg.
You can add a session to that configuration file for each Banzai client that wishes to connect: give each Banzai client a different CompID and list each one as a [SESSION] entry, as sketched below.
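For example, a minimal acceptor configuration might look like the following. The port and the CompIDs (FIXIMULATOR, BANZAI1, BANZAI2) are placeholders, so match them to whatever your Banzai clients actually send:

    [DEFAULT]
    ConnectionType=acceptor
    SocketAcceptPort=9878
    StartTime=00:00:00
    EndTime=00:00:00
    SenderCompID=FIXIMULATOR

    # One [SESSION] block per Banzai client, each with its own TargetCompID
    [SESSION]
    BeginString=FIX.4.2
    TargetCompID=BANZAI1

    [SESSION]
    BeginString=FIX.4.2
    TargetCompID=BANZAI2

Note that all of the sessions can share a single SocketAcceptPort: a QuickFIX/J acceptor identifies each incoming connection by the CompIDs in its logon message, so you do not need a separate port per client.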

Related

Artemis Bridges/Federation

I am looking to understand the differences between the various options for moving messages, i.e. diverts, bridges, and federation. As I understand it, diverts are for moving messages within the same broker and can be combined with bridges. A bridge, on the other hand, can be used to move messages to a different broker instance (a JMS-compliant one).
Federation, from what I have read, looks similar to bridging, in that messages can be moved/pulled from an upstream broker. Quick help on when each feature should be used would be appreciated.
Thanks a lot for your help!
Bridges are the most basic way to move messages from one broker to another. However, each bridge can only move messages from one queue to one address, and each bridge must be created manually in broker.xml or programmatically via the management interface. Many messaging use cases involve dynamically created addresses and queues, so manually creating bridges is not feasible. Furthermore, many messaging use cases involve lots of addresses and queues, and manually creating a corresponding bridge for each one would be undesirable.
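For reference, a single core bridge in broker.xml looks something like this (the bridge, queue, address, and connector names are placeholders):

    <bridges>
       <bridge name="my-bridge">
          <queue-name>source-queue</queue-name>
          <forwarding-address>target-address</forwarding-address>
          <static-connectors>
             <connector-ref>remote-broker-connector</connector-ref>
          </static-connectors>
       </bridge>
    </bridges>

Notice that it names exactly one queue and one forwarding address, which is why the number of bridges you must maintain grows with the number of queues you need to move.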
Federation uses bridges behind the scenes, but it allows a single element configured in broker.xml to apply to lots of addresses and queues (even those created dynamically). Federation also allows upstream and downstream configurations, whereas bridges can only be configured to "push" messages from one broker to another.
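By contrast, one federation policy can cover a whole family of addresses via a wildcard match. A rough sketch (the federation, upstream, connector, and address names are all placeholders):

    <federations>
       <federation name="my-federation">
          <upstream name="upstream-broker">
             <static-connectors>
                <connector-ref>upstream-connector</connector-ref>
             </static-connectors>
             <policy ref="orders-policy"/>
          </upstream>
          <address-policy name="orders-policy">
             <!-- One include covers every matching address, even
                  addresses created after the broker starts -->
             <include address-match="orders.#"/>
          </address-policy>
       </federation>
    </federations>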

Eclipse Milo - How to handle data (nodes) visibility in OPC UA so that different users see different data?

I am in the process of analyzing how to set up an OPC UA server in the cloud, and one of the challenges is data visibility. By data visibility, I mean that a user/customer can see only the data/devices that belong to them, and the same applies to every other user.
So the node creation process will depend on who the connected user is.
How can this be implemented in the best way according to OPC UA and, specifically, Eclipse Milo? Is it different namespaces for each customer? Any suggestion will be appreciated.
Different namespaces per customer would be an okay approach, but whether you do that or not, you ultimately need to examine the Session during the execution of Browse, Read, Write, and other services to determine which user is connected and what rights they have.
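As a rough illustration of what that per-session check might look like inside a Milo (0.6.x) AddressSpace implementation - the getSession()/getIdentityObject() calls are the Milo hooks, while isVisibleTo and readVisibleNode are hypothetical helpers you would write against your own customer-to-device mapping:

    // Inside your AddressSpace/namespace implementation (Milo 0.6.x;
    // import paths and signatures may differ in other versions).
    @Override
    public void read(ReadContext context, Double maxAge,
                     TimestampsToReturn timestamps,
                     List<ReadValueId> readValueIds) {
        // The identity object is whatever your IdentityValidator returned
        // at session activation; here we assume it is a username String.
        String user = context.getSession()
            .map(session -> String.valueOf(session.getIdentityObject()))
            .orElse("anonymous");

        List<DataValue> results = new ArrayList<>(readValueIds.size());
        for (ReadValueId readValueId : readValueIds) {
            if (isVisibleTo(user, readValueId.getNodeId())) {
                results.add(readVisibleNode(readValueId)); // your normal read path
            } else {
                // Pretend the node does not exist for this user.
                results.add(new DataValue(new StatusCode(StatusCodes.Bad_NodeIdUnknown)));
            }
        }
        context.success(results);
    }

The same pattern applies to Browse: simply omit references to nodes the connected user is not allowed to see, so their NodeIds are never disclosed in the first place.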

In Hadoop, is there any limit to the size of data that can be accessed through Knox + WebHDFS?

In Hadoop, is there any limit to the size of data that can be accessed/ingested to HDFS through Knox + WebHDFS?
Apache Knox is your best option when you need to access WebHDFS resources from outside of a cluster that is protected with firewalls. If you don't have access to all of the datanode ports, then direct access to WebHDFS will not work for you. Opening firewall holes for all of those host:port combinations defeats the purpose of the firewall, introduces a management nightmare, and needlessly leaks network details to external clients.
As Hellmar indicated, it depends on your specific use cases and clients. If you need to ingest huge files, or huge numbers of files, then you may want to consider a different approach to accessing the cluster internals for those clients. If you merely need access to files of any size, then you should be able to extend that access to many clients.
Not having to authenticate using Kerberos/SPNEGO to access such resources opens up many possible clients that would otherwise be unusable with secure clusters.
The Knox user's guide has examples for accessing WebHDFS resources; you can find them at http://knox.apache.org/books/knox-0-7-0/user-guide.html#WebHDFS. That page also illustrates the Groovy-based scripting available from Knox, which allows you to do some really interesting things.
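To give a flavour of that client DSL driven from plain Java rather than a Groovy script, a small upload through the gateway might look like the sketch below. It uses the Knox shell classes from the 0.7-era org.apache.hadoop.gateway packages (renamed to org.apache.knox.gateway in later releases); the gateway URL, credentials, and paths are placeholders:

    import org.apache.hadoop.gateway.shell.Hadoop;
    import org.apache.hadoop.gateway.shell.hdfs.Hdfs;

    public class KnoxWebHdfsPut {
        public static void main(String[] args) throws Exception {
            // All traffic goes through the single Knox gateway endpoint,
            // so no datanode ports need to be reachable from this client.
            Hadoop session = Hadoop.login(
                "https://knox-host:8443/gateway/default",
                "guest", "guest-password");
            try {
                Hdfs.put(session)
                    .file("/local/path/data.csv")   // local file to upload
                    .to("/user/guest/data.csv")     // HDFS destination
                    .now();
            } finally {
                session.shutdown();
            }
        }
    }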
In theory, there is no limit. However, using Knox creates a bottleneck: pure WebHDFS would redirect the read/write request for each block to a (possibly) different datanode, parallelizing access, whereas with Knox everything is routed through a single gateway and serialized.
That being said, you would probably not want to upload a huge file using Knox and WebHDFS anyway. It will simply take too long (and, depending on your client, you may get a timeout).

BizTalk - how do I set up MSMQ load balancing and high availability?

From what I understand, in order to achieve MSMQ load-balancing, one must use a technology such as NLB.
And in order to achieve MSMQ high-availability, one must cluster the related Biztalk Host (and hence the underlying servers have to be in a cluster themselves).
Yet, according to Microsoft documentation, the NLB and Failover Clustering technologies are not compatible. See this link for reference: http://support.microsoft.com/kb/235305
Can anyone PLEASE explain to me how MSMQ load-balancing and high-availability can be achieved?
thank you in advance,
M
I've edited my original answer because on reflection, I think I was talking nonsense.
I don't believe that it is possible to achieve both load balancing and high availability in a BizTalk transactional scenario. Have a look at the section "Migration considerations for moving from MSMQ/T to MSMQ adapter in BizTalk 2006" on the following site http://blogs.msdn.com/eldarm/
To summarise that post, there are a couple of scenarios:
High Availability (Non-transactional)
You simply have MSMQ on more than one BizTalk server behind NLB.
High Availability (Transactional)
For this you need to have a clustered MSMQ host, which means that you can't do any sort of load balancing upon a single queue.
One possible halfway solution is to create two MSMQ adapters, on different clustered hosts, each handling different queues. Doesn't sound too nice to me though.
A key point is understanding the reasons why you would want transactional, clustered behaviour - you need this for ordered delivery and to ensure no duplicates.
In general I wouldn't go to the trouble of load balancing MSMQ - BizTalk itself is load balanced once messages have reached the MessageBox database. While it is true that you will see asymmetric load due to the queue processing happening on one machine, in the overall context of your BizTalk environment this should not be significant.
Again, it is worth remembering that you are clustering MSMQ for reasons beyond simple high availability:
MSMQ adapter receive handler - MSMQ does not support remote transactional reads; only local transactional reads are supported. The MSMQ adapter receive handler must run in a host instance that is local to the clustered MSMQ service in order to complete local transactional reads with the MSMQ adapter.
That was from the following MSDN page.
I hope this edited answer helps - I don't think it was what you were after, maybe I'm wrong and you'll find a workable solution for NLB and transactional MSMQ, but the more I think about it the more it seems that the two scenarios are not compatible.
A final thought is that you could try posting a similar question on Server Fault - you get a few BizTalk devs on Stack Overflow, including at least two MVPs, but at least where I work this is the sort of question I'd be passing on to my networking team.

Log files in massively distributed systems

I do a lot of work in the grid and HPC space and one of the biggest challenges we have with a system distributed across hundreds (or in some case thousands) of servers is analysing the log files.
Currently, log files are written locally to the disk on each blade, but we could also consider publishing logging information using, for example, a UDP appender and collecting it centrally.
Given that the objective is to be able to identify problems in as close to real time as possible, what should we do?
First, synchronize all clocks in the system using NTP.
Second, if you are collecting the logs in a single location (like the UDP appender you mention), make sure the logs have enough information to actually help. I would include at least the server that generated the log, the time the event happened, and the message. If there is any sort of transaction id or job id concept, include that as well.
Since you mentioned a UDP appender, I am guessing you are using log4j (or one of its siblings). Log4j has an MDC class that allows extra information to be passed along through a processing thread. It can help collect some of the extra information and pass it along.
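A small sketch of that MDC pattern, assuming log4j 1.x (the keys host and jobId, and the Worker class itself, are just illustrative):

    import java.net.InetAddress;
    import org.apache.log4j.Logger;
    import org.apache.log4j.MDC;

    public class Worker {
        private static final Logger log = Logger.getLogger(Worker.class);

        public void process(String jobId) {
            // Everything put into the MDC is attached to each log event
            // emitted by this thread until it is removed again.
            try {
                MDC.put("host", InetAddress.getLocalHost().getHostName());
            } catch (Exception e) {
                MDC.put("host", "unknown");
            }
            MDC.put("jobId", jobId);
            try {
                log.info("job started");
                // ... do the actual work ...
                log.info("job finished");
            } finally {
                MDC.remove("jobId"); // avoid leaking context to reused threads
            }
        }
    }

With a pattern layout such as %d{ISO8601} %X{host} %X{jobId} %m%n, every line arriving at the central collector then carries the server, timestamp, and job id without each call site repeating them.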
Are you using Apache? If so, you could have a look at mod_log_spread, though you may have too big an infrastructure to make it maintainable. The other option is to look at "broadcasting" or "multicasting" your log messages and having dedicated logging servers subscribe to those feeds and collate them.