Eclipse Milo - How to handle data (node) visibility in OPC UA so that different users see different data? - opc-ua

I am in the process of analyzing how to set up an OPC UA server in the cloud, and one of the challenges is data visibility. By data visibility, I mean that a user/customer can see only the data/devices that belong to them, and the same applies to other users.
So the node creation process will depend on who the connected user is.
What is the best way to implement this in OPC UA, and specifically in Eclipse Milo? Should each customer get a different namespace? Any suggestions will be appreciated.

Different namespaces per customer would be an acceptable approach, but whether or not you do that, you ultimately need to examine the Session during the execution of the Browse, Read, Write, and other services to determine which user is connected and what rights they have.
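For a concrete sense of what that looks like, here is a minimal sketch in Java of the filtering step. It assumes you already hold the identity object your IdentityValidator attached to the Session (Milo's service contexts expose the Session), and it assumes a node-identifier convention that encodes the owning customer; the customerIdFor helper and the "customers/<id>/" scheme are illustrative, not part of the Milo API.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PerCustomerVisibility {

    /**
     * Map the identity object produced by your IdentityValidator (e.g. the
     * username from a UsernameIdentityToken) to a customer id. This mapping
     * is entirely application-specific; the one-liner here is a placeholder.
     */
    static String customerIdFor(Object sessionIdentity) {
        return String.valueOf(sessionIdentity);
    }

    /**
     * Filter browse results so a session only sees nodes belonging to its
     * customer. A node "belongs" to a customer when its string identifier
     * starts with "customers/<id>/", a naming convention invented for this
     * sketch, not anything mandated by OPC UA.
     */
    static List<String> filterBrowseResults(Object sessionIdentity,
                                            List<String> nodeIdentifiers) {
        String prefix = "customers/" + customerIdFor(sessionIdentity) + "/";
        return nodeIdentifiers.stream()
                .filter(id -> id.startsWith(prefix))
                .collect(Collectors.toList());
    }
}
```

Applying the same check in your Read and Write handlers means a customer who guesses another customer's NodeId gets an access error rather than data.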

Related

Adding Data from UI to different microservices

Imagine you have a user registration form, where you fill in:
First Name, Last Name, Age, Address, Preferred way of communication: SMS, Email (radio buttons). You have 2 microservices:
UserManagement service
Communication service
When a user is registered, we should create 2 aggregates in the 2 services: a User in UserManagementContext and UserCommunicationSettings in Communication. There are three ways I can think of to achieve this:
Perform 2 different requests from the UI. What if one of them fails?
Put all that data in User and then raise an integration event with all that data, catching it in CommunicationContext. But those are fat events, and integration events shouldn't contain domain data, just the IDs of aggregates.
Put the message in a queue, so both contexts would have adapters to take the needed info.
What is the best option for splitting this data and saving the information?
Perform 2 different requests from the UI. What if one of them fails?
I don't think this would do. You are going to end up in an inconsistent state.
I'm for approach #3:
User is persisted (created) in your user store.
UserRegistered event is sent around, containing the ID of the user.
All interested parties handle the UserRegistered event.
I'd opt for slim events, because your services may need different user data, and it's better to let them get this data on their own rather than putting everything into the event.
As for storing the communication settings you mentioned: the communication data is presumably not part of the bounded context of the UserManagement service, so it belongs to the Communication service.
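As a rough illustration of that flow in Java: the event carries only the user id, and the Communication context calls back to UserManagement for exactly the data it needs. The UserRegistered record and the UserManagementClient interface are made up for this example, not taken from any framework.

```java
public class SlimEventExample {

    /** The slim integration event: just the aggregate id, no domain data. */
    record UserRegistered(String userId) {}

    /** Hypothetical client for querying the UserManagement service. */
    interface UserManagementClient {
        String preferredCommunicationMode(String userId); // e.g. "SMS" or "EMAIL"
    }

    static class CommunicationService {
        private final UserManagementClient users;

        CommunicationService(UserManagementClient users) {
            this.users = users;
        }

        /** Handle UserRegistered: fetch only what this context needs and
            persist its own UserCommunicationSettings aggregate. */
        void on(UserRegistered event) {
            String mode = users.preferredCommunicationMode(event.userId());
            System.out.println("Storing settings for " + event.userId()
                    + ": " + mode); // stand-in for a real repository call
        }
    }
}
```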
I see a couple of problems with approach #3 as proposed in the other answer, though I think option #3 in the original question is quite similar to my answer.
What if more communication modes are added? Naturally, that should only require the Communication service to change, not the UserManagement service (single responsibility principle). The communication-settings microservice should store all communication-settings data in its own datastore.
What if the user updates only his communication preferences? Why should the user management service take on the burden of handling that? A change to communication settings should just trigger changes in its corresponding microservice, which is the Communication service in our case.
I also find it better to use natural keys to identify and correlate entities across microservices, rather than internal IDs generated by the DB. Consider that tomorrow you might decide on a completely different strategy for creating user "ids" in the UserManagement service, e.g. non-numeric IDs or a different ID-generation algorithm. I would want to keep the other microservices unaffected by any such decision.
Proposed approach:
Include an API Gateway in the architecture. The frontend always talks to the API Gateway.
The API Gateway sends commands such as RegisterUser to a message queue, to be consumed by interested microservices.
If you wish to keep the architecture simple, you may publish a single message with all the data, which any interested microservice can consume. If you strictly want individual microservices to see only their relevant data, create a message queue per unique data structure expected by the consuming services.
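A bare-bones sketch of that gateway step in Java, assuming the single-message variant; the MessageQueue interface and the "user.commands" topic stand in for whatever RabbitMQ/Kafka/etc. client you actually use.

```java
public class ApiGateway {

    /** Stand-in for your real broker client. */
    interface MessageQueue {
        void publish(String topic, String payload);
    }

    private final MessageQueue queue;

    public ApiGateway(MessageQueue queue) {
        this.queue = queue;
    }

    /** Handle the registration form submitted by the frontend: publish a
        RegisterUser command instead of calling the services directly. */
    public void registerUser(String firstName, String lastName, int age,
                             String address, String preferredCommunication) {
        String payload = String.format(
                "{\"type\":\"RegisterUser\",\"firstName\":\"%s\",\"lastName\":\"%s\","
                        + "\"age\":%d,\"address\":\"%s\",\"communication\":\"%s\"}",
                firstName, lastName, age, address, preferredCommunication);
        queue.publish("user.commands", payload); // consumed by both services
    }
}
```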

How to share Presence among Resources?

I'm developing a cross-platform application which incorporates XMPP (ejabberd). How can I share presence among resources?
Please consider the following scenario:
User A is logged onto three devices: PC, Android and iOS. User A, using Android, sets his presence to 'away'. How can I set (synchronise) the other resources to 'away' (and send out presence stanzas)?
I'm looking to solve this problem using the XMPP protocol / ejabberd server; not by adding logic to the clients.
You do not really need to synchronise presence among your resources. What you need is a set of display rules in your clients.
For example, if a user has three different resources, you may decide:
To display the most available resource of the three.
To display the most recent one.
To display the one with the highest priority.
So, you do not need to synchronise them, as you would lose precision. Simply define your presence display rules based on the goal of your client application.
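As a small illustration, here is one such display rule sketched in Java; the Resource shape and the ranking (priority first, recency as tie-breaker) are just one possible rule set.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class PresenceDisplay {

    /** Minimal stand-in for one resource's presence data. */
    record Resource(String name, int priority, String show, long lastActiveMillis) {}

    /** Rule: show the resource with the highest priority,
        breaking ties by the most recent activity. */
    static Optional<Resource> resourceToDisplay(List<Resource> resources) {
        return resources.stream()
                .max(Comparator.comparingInt(Resource::priority)
                        .thenComparingLong(Resource::lastActiveMillis));
    }
}
```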

How high frequency trading systems connect to the exchange

I'm trying to study high frequency trading systems. What's the mechanism that HFT systems use to connect to the exchange, and what's the procedure? (Does it have to go through a broker, or is it direct access? If it's direct access, what sort of connection information do I require?)
Thanks in advance for your answers.
Understand that there are two different "connections" in an HFT engine. The first is the connection to a market data source. The second is to a clearing resource. As mentioned in kpavlov's answer, a very expensive COLO (co-location) is needed to get as close to the data source/target as possible. Depending on their nominal latency, these COLO resources cost thousands of dollars per month.
With both connections, your trading engine must be certified by the provider (ICE, CME, etc) to comply with their requirements. With CME the certification process is automated, with ICE it employs human review. In any case, the certification requires that your software demonstrate conformance to standards and freedom from undesirable network side effects.
You must also subscribe to your data source(s) and clearing service, neither of which is inexpensive, and pricing varies over a pretty wide range. During the subscription process you'll gain access to the service provider's technical data specification(s), a critical part of designing your trading engine. Using old data that you find on the Internet for design purposes is a recipe for problems later. Subscription also gets you access to the providers' test sites. It is on these test sites that you test and debug your engine.
After you think your engine is ready for deployment, you begin connecting to the data/clearing production servers. This connection will get you into a place of shadows: port roulette. Not every port at the provider's network edge has the same latency. Here you'll learn that you can have the shortest latency yet seldom have orders filled first. Traditional load balancing does little to help this, and CME has begun deployment of FPGA-based systems to ensure correct temporal sequencing of inbound orders, but it's still early in its deployment process.
Once you're running, you then get to learn that mistakes can be very expensive. If you place an order prior to a market pre-open event, the order is automatically rejected. Do it too often and the clearing provider will charge you a very stiff penalty. Other things can also get you penalized, or even kicked off the service, if your systems are determined to be implementing strategies to block others from access, etc.
All the major exchanges' web sites have links to public data and educational resources to help you decide if HFT is "for you" and how to go about it.
It usually requires approval from the exchange to grant access from outside. They protect their servers with firewalls, so your server/network needs to be authorized for access.
A special certification procedure with a technician (by phone) is usually required before they authorize you.
Most liquidity providers use the FIX protocol or custom APIs. You may consider starting to implement your connector with QuickFIX (a minimal setup is sketched after the list below), but it may become a bottleneck later, as your traffic grows.
The information you need to connect via FIX is:
Server IP
Server port
FIX protocol credentials:
SenderCompID
TargetCompID
Username
Password
Other fields
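For a rough sense of what this looks like with QuickFIX/J in Java, the following starts an initiator session. Every value in the config string is a placeholder your provider will supply, and the Username/Password logon fields (FIX tags 553/554), which a real connector would typically inject in toAdmin, are omitted for brevity.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import quickfix.*;

public class FixConnector {
    public static void main(String[] args) throws Exception {
        // Placeholder session configuration; your provider supplies real values.
        String cfg = String.join("\n",
                "[DEFAULT]",
                "ConnectionType=initiator",
                "HeartBtInt=30",
                "FileStorePath=store",
                "StartTime=00:00:00",
                "EndTime=00:00:00",
                "",
                "[SESSION]",
                "BeginString=FIX.4.4",
                "SenderCompID=YOUR_SENDER",      // assigned by the provider
                "TargetCompID=PROVIDER_TARGET",  // assigned by the provider
                "SocketConnectHost=127.0.0.1",   // provider's server IP
                "SocketConnectPort=9876");       // provider's server port

        SessionSettings settings = new SessionSettings(
                new ByteArrayInputStream(cfg.getBytes(StandardCharsets.UTF_8)));

        // No-op callbacks for the sketch; a real connector overrides
        // toAdmin/fromApp etc. on the Application interface.
        Application app = new ApplicationAdapter();

        SocketInitiator initiator = new SocketInitiator(
                app,
                new FileStoreFactory(settings),
                settings,
                new ScreenLogFactory(settings),
                new DefaultMessageFactory());

        initiator.start(); // begins connecting and logging on
    }
}
```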

Horizontal scalability for distributed apps, how to achieve that?

I would like to disregard web applications here, because to scale them horizontally, i.e. to use multiple server instances together, it is "sufficient" to just duplicate the server software across the machines and use a sort of router that forwards requests to the least busy server machine.
But what if my server application allows users to engage together in real time?
If the response to the request of a certain client X depends on the context of a client Y whose connection is managed by another machine then "inter machines" communication is needed.
I'd like to know the kinds of "design solutions" people have used in such cases.
For example, the people at Facebook must have already encountered such a situation when enabling the chat feature of their social app.
Thank you in advance for any advice.
One solution to achieve that is to use distributed caches like memcached (Facebook also uses that approach).
All the information which is needed on all nodes is then stored in that cache (and in a database if it needs to be permanent), so all nodes can access that information (with very small latency between the nodes).
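For instance, a minimal sketch with the spymemcached Java client; the key scheme, the one-hour expiry, and the host are arbitrary choices for the example.

```java
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class SharedStateExample {
    public static void main(String[] args) throws Exception {
        MemcachedClient cache =
                new MemcachedClient(new InetSocketAddress("cache-host", 11211));

        // Node A stores client Y's context where every node can read it
        // (expiry of 3600 s; persist to a database too if it must survive).
        cache.set("session:userY", 3600, "online,room=42").get();

        // Node B, handling a request from client X, reads Y's context.
        Object context = cache.get("session:userY");
        System.out.println("User Y context: " + context);

        cache.shutdown();
    }
}
```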
regards
You should consider solutions that provide transparent horizontal database scalability and guarantee ACID semantics. There are many solutions that offer this at various levels. The people at Facebook, whom you reference, have solved the problem by accepting eventual consistency, but your question leads me to believe that you can't accept eventual consistency.

Scala + Akka: How to develop a Multi-Machine Highly Available Cluster

We're developing a server system in Scala + Akka for a game that will serve clients on Android, iPhone, and Second Life. There are parts of this server that need to be highly available, running on multiple machines. If one of those servers dies (from, say, hardware failure), the system needs to keep running. I think I want the clients to have a list of machines they will try to connect with, similar to how Cassandra works.
The multi-node examples I've seen so far with Akka seem to me to be centered around the idea of scalability rather than high availability (at least with regard to hardware). The multi-node examples seem to always have a single point of failure. For example, there are load balancers, but if I need to reboot one of the machines that have load balancers, my system will suffer some downtime.
Are there any examples that show this type of hardware fault tolerance for Akka? Or, do you have any thoughts on good ways to make this happen?
So far, the best answer I've been able to come up with is to study the Erlang OTP docs, meditate on them, and try to figure out how to put my system together using the building blocks available in Akka.
But if there are resources, examples, or ideas on how to share state between multiple machines in a way that if one of them goes down things keep running, I'd sure appreciate them, because I'm concerned I might be re-inventing the wheel here. Maybe there is a multi-node STM container that automatically keeps the shared state in sync across multiple nodes? Or maybe this is so easy to make that the documentation doesn't bother showing examples of how to do it, or perhaps I haven't been thorough enough in my research and experimentation yet. Any thoughts or ideas will be appreciated.
HA and load management are very important aspects of scalability, and they are available as part of the AkkaSource commercial offering.
If you're listing multiple potential hosts in your clients already, then those can effectively become load balancers.
You could offer a host suggestion service that recommends to the client which machine it should connect to (based on current load, or whatever); the client can then pin to that host until the connection fails.
If the host suggestion service is not there, then the client can simply pick a random host from its internal list, trying them until it connects.
Ideally, on first-time startup, the client will connect to the host suggestion service and not only get directed to an appropriate host, but also receive a list of other potential hosts. This list can be updated routinely, every time the client connects.
If the host suggestion service is down on the client's first attempt (unlikely, but...), then you can pre-deploy a list of hosts in the client install so it can start immediately, randomly selecting hosts from the very beginning if it has to.
Make sure that your list of hosts contains actual host names, not IPs; that gives you more flexibility long term (i.e. you'll "always have" host1.example.com, host2.example.com... etc. even if you move infrastructure and change IPs).
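A sketch of that client-side logic in Java; the host names mirror the example above, and the port and timeout are arbitrary.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class HostPicker {
    // Pre-deployed fallback list, used when the suggestion service is down.
    static final List<String> HOSTS = List.of(
            "host1.example.com", "host2.example.com", "host3.example.com");
    static final int PORT = 9000; // illustrative service port

    /** Shuffle the known hosts and try each until one accepts a connection. */
    static Socket connectToAnyHost() throws IOException {
        List<String> candidates = new ArrayList<>(HOSTS);
        Collections.shuffle(candidates); // random pick spreads load
        for (String host : candidates) {
            try {
                Socket socket = new Socket();
                socket.connect(new InetSocketAddress(host, PORT), 2000);
                return socket; // pin to this host until the connection fails
            } catch (IOException e) {
                // unreachable; try the next host
            }
        }
        throw new IOException("no hosts reachable");
    }
}
```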
You could take a look at how RedDwarf and its fork DimDwarf are built. They are both horizontally scalable crash-only game app servers, and DimDwarf is partly written in Scala (the new messaging functionality). Their approach and architecture should match your needs quite well :)
2 cents..
"how to share state between multiple machines in a way that if one of them goes down things keep running"
Don't share state between machines; instead, partition state across machines. I don't know your domain, so I don't know if this will work. But essentially, if you assign certain aggregates (in DDD terms) to certain nodes, you can keep those aggregates in memory (actor, agent, etc.) while they are being used. In order to do this you will need something like ZooKeeper to coordinate which nodes handle which aggregates. In the event of failure, you can bring the aggregate up on a different node.
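A bare-bones sketch of that routing idea in Java. In the real design the node list and failure handling would live in ZooKeeper; a fixed list is used here just so the deterministic assignment stands alone.

```java
import java.util.List;

public class AggregateRouter {
    // In production this membership list would come from ZooKeeper,
    // which would also publish changes when a node joins or dies.
    static final List<String> NODES = List.of(
            "node-a.example.com", "node-b.example.com", "node-c.example.com");

    /** Every node computes the same owner for a given aggregate id,
        so requests for that aggregate always land on one node. */
    static String ownerOf(String aggregateId) {
        int bucket = Math.floorMod(aggregateId.hashCode(), NODES.size());
        return NODES.get(bucket);
    }
}
```

Note that plain modulo hashing reassigns most aggregates whenever membership changes; a consistent-hashing ring keeps reassignment proportional to the number of lost nodes, which matters when aggregates are kept hot in memory.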
Furthermore, if you use an event sourcing model to build your aggregates, it becomes almost trivial to have real-time copies (slaves) of your aggregates on other nodes, by having those nodes listen for events and maintain their own copies.
By using Akka, we get remoting between nodes almost for free. This means that whichever node handles a request that needs to interact with an Aggregate/Entity on another node can do so with RemoteActors.
What I have outlined here is very general, but it gives an approach to distributed fault tolerance with Akka and ZooKeeper. It may or may not help. I hope it does.
All the best,
Andy