Separate processes that communicate via an NSPipe - NSTask

I need to set up communication between two applications using NSPipe channels. One NSPipe has to write data and the other has to read it (bi-directional), so that separate processes in two different applications can communicate with each other.
For example, in C# this can easily be done with the NamedPipeClientStream and NamedPipeServerStream classes, where pipes are registered by an ID string.
Any suggestion how to achieve it in Objective-C?
With regards,
Vadivelu

Related

Cross language workflow in Cadence

Suppose I have workers written in different languages (Java & C#). Each has registered activities and workflows with the Cadence server. Is it possible to create a workflow which invokes activities from both workers?
Yes, it's possible. And it would be much easier if you implement the workflow in Java.
The only thing you need to deal with is how to translate the activity input/output and exceptions between the C# activity and the Java workflow.
To achieve that, you need to write some customized code for the DataConverter interface. See this sample.
Basically, you need to define special input/output/exception classes for the C# activity. In toData, convert to the data format for the C# client. In fromData and fromDataArray, convert back to the Java classes.
I assume Neon.Cadence uses JSON, so input/output should be easy. You just need to pay more attention to exceptions.
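A rough sketch of that translation in Java (the payload class and field names are made up, and the exact DataConverter method signatures depend on your Cadence client version, so this only shows the logic you would plug into toData/fromData):

```java
import com.google.gson.Gson;
import java.nio.charset.StandardCharsets;

// Hypothetical payload class mirroring what the C# activity expects/returns.
class CSharpActivityInput {
    String orderId;
    int quantity;
}

public class CSharpJsonTranslator {
    private static final Gson gson = new Gson();

    // Logic you would put in DataConverter.toData: serialize the Java arguments
    // into the JSON format the C# client understands.
    static byte[] toCSharpPayload(CSharpActivityInput input) {
        return gson.toJson(input).getBytes(StandardCharsets.UTF_8);
    }

    // Logic for DataConverter.fromData / fromDataArray: deserialize the bytes
    // coming back from the C# activity into Java classes.
    static CSharpActivityInput fromCSharpPayload(byte[] content) {
        return gson.fromJson(new String(content, StandardCharsets.UTF_8), CSharpActivityInput.class);
    }

    public static void main(String[] args) {
        CSharpActivityInput in = new CSharpActivityInput();
        in.orderId = "42";
        in.quantity = 3;
        byte[] wire = toCSharpPayload(in);
        CSharpActivityInput back = fromCSharpPayload(wire);
        System.out.println(back.orderId + " x" + back.quantity);
    }
}
```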

How to retrieve data from another bounded context in DDD?

Initially, the app runs on the desktop; however, it will run on the web platform in the future.
There are several bounded contexts in the app, and some of them need to retrieve data from another. I don't know which approach to use for this case.
I thought of using the mediator pattern: bounded context "A" requests data "X", then the mediator calls another bounded context, like "B", and gets the correct data "X". Finally, the mediator brings data "X" back to BC "A".
This scenario will change when the app runs on the web; then I've thought of having one microservice request data from another microservice, using the mediator pattern there too.
Are both approaches reasonable, or is there another, better solution?
Could anyone help me, please?
Thanks a lot!
If you're retrieving data from other bounded contexts through either DB or API calls, your architecture might devolve into the "death star" pattern, because it introduces unwanted coupling and knowledge into the client context.
A better approach might be to look at event-driven mechanisms like webhooks or message queues as a way of emitting the data you want to share to subscribing context(s). This is good because it reduces coupling between your bounded contexts through data replication across contexts, which results in greater bounded-context independence.
This gives you the feeling of: "Who cares if bounded context B is not available at the moment? Bounded contexts A and C have the data they need inside them, and I can resume syncing later, since my data-update events are recorded on my queue."
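To make that concrete, here is a minimal Java sketch (the event shape, class names, and the in-memory bus standing in for a real queue or webhook dispatcher are all assumptions): context B only publishes an event describing what changed, and any subscribing context keeps its own replicated copy.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Event emitted by bounded context B whenever customer data changes (hypothetical shape).
record CustomerUpdated(String customerId, String email) {}

// Stand-in for a message queue / webhook dispatcher: in production this would be
// a broker, so B never calls A or C directly.
class EventBus {
    private final List<Consumer<CustomerUpdated>> subscribers = new ArrayList<>();
    void subscribe(Consumer<CustomerUpdated> s) { subscribers.add(s); }
    void publish(CustomerUpdated e) { subscribers.forEach(s -> s.accept(e)); }
}

public class EventDrivenReplicationDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // Bounded context A keeps its own replicated copy of the data it needs.
        Map<String, String> contextAReplica = new HashMap<>();
        bus.subscribe(e -> contextAReplica.put(e.customerId(), e.email()));

        // Bounded context B publishes the change; it does not know who consumes it.
        bus.publish(new CustomerUpdated("c-1", "alice@example.com"));

        System.out.println(contextAReplica); // {c-1=alice@example.com}
    }
}
```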
The answer to this question breaks down into two distinct areas:
the logical challenge of communicating between different contexts, where the same data could be used in very different ways. How does one context interpret the meaning of the data?
and the technical challenge of synchronizing data between independent systems. How do we guarantee the correctness of each system's behavior when they both have independent copies of the "same" data?
Logically, a context map is used to define the relationship between any bounded contexts that need to communicate (share data) in any way. The domain models that control the data are only applicable within a single bounded context, so some method for interpreting data from another context is needed. That's where the patterns from Evans' book come into play: customer/supplier, conformist, published language, open host, anti-corruption layer, or (the cop-out pattern) separate ways.
Using a mediator between services can be thought of as an implementation of the anti-corruption layer pattern: the services don't need to speak the same language, because there's an independent buffer between them doing the translation. In a microservice architecture, this could be some kind of integration service between two very different contexts.
From a technical perspective, direct API calls between services in different bounded contexts introduce dependencies between those services, so an event-driven approach like what Allan mentioned is preferred, assuming your application is okay with the implications of that (eventual consistency of the data). Picking a messaging platform that gives you the guarantees necessary to keep the data in sync is important. Most asynchronous messaging protocols guarantee "at least once" delivery, but ordering of messages and de-duplication of repeats is up to the application.
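As a small illustration of the de-duplication point (message IDs and payloads are hypothetical, and the durable store is left out), an idempotent consumer can simply remember which messages it has already applied:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal idempotent consumer: with "at least once" delivery the same message can
// arrive twice, so the application tracks processed IDs and skips repeats.
// In production the processed-ID store would be durable (e.g. a database table).
public class IdempotentConsumer {
    private final Set<String> processedIds = new HashSet<>();

    void onMessage(String messageId, String payload) {
        if (!processedIds.add(messageId)) {
            return; // duplicate delivery, already handled
        }
        System.out.println("applying update: " + payload);
    }

    public static void main(String[] args) {
        IdempotentConsumer consumer = new IdempotentConsumer();
        consumer.onMessage("m-1", "customer c-1 email changed");
        consumer.onMessage("m-1", "customer c-1 email changed"); // redelivery, ignored
    }
}
```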
Sometimes it's simpler to use a synchronous API call, especially if you find yourself doing a lot of request/response type messaging (which can happen if you have services sending command-type messages to each other).
A composite UI is another pattern that allows you to do data integration in the presentation layer, by having each component pull data from the relevant service, then share/combine the data in the UI itself. This can be easier to manage than a tangled web of cross-service API calls in the backend, especially if you use something like an IT/Ops service, NGINX, or MuleSoft's Experience API approach to implement a "backend-for-frontend".
What you need is a DDD pattern for integration. BC "B" is upstream, "A" is downstream. You could go for an OHS with PL (open host service with published language) upstream, and an ACL (anti-corruption layer) downstream. In practice this is a REST API upstream and an adapter downstream. Every time A needs the data from B, the adapter calls the REST API and adapts the returned info to A's domain model. This would be synchronous. If you want an async integration, B would publish events with the info to an MQ, and A would listen for those events and get the info.
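A rough Java sketch of the synchronous variant (the endpoint URL, class names, and the trivial body-to-model mapping are all assumptions): the adapter is the only code that knows about B's API, and it hands A a type from A's own domain model.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A's own domain model type (downstream).
record CustomerView(String id, String displayName) {}

// Anti-corruption layer: calls B's open host service (a REST API) and adapts
// B's published language into A's domain model.
class CustomerAdapter {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl; // e.g. "http://context-b.internal/api" (assumption)

    CustomerAdapter(String baseUrl) { this.baseUrl = baseUrl; }

    CustomerView fetchCustomer(String id) throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/customers/" + id)).GET().build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        // Translation step: a real adapter would parse B's JSON here and map each
        // field onto A's model; the raw body is used only to keep the sketch short.
        return new CustomerView(id, response.body());
    }
}

public class AclAdapterDemo {
    public static void main(String[] args) throws Exception {
        // Requires the (hypothetical) upstream REST API to be reachable.
        CustomerAdapter adapter = new CustomerAdapter("http://context-b.internal/api");
        System.out.println(adapter.fetchCustomer("c-1"));
    }
}
```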
I want to add a comment about analytics in DDD. There are several approaches for sending data to analytics.
1) If you have a big enterprise application and you have to collect a lot of statistics from all bounded contexts, it is better to move analytics into a separate service and use a message queue to send data there.
2) If you have a simple application, separate your analytics from your app into another context and use an event or command to talk to it.

How can you share transformers across Mirth channels

We are using appliance-based Mirth Connect ver 3.4.2.
We have a few transformers which are common to all the channels, but they are still defined under each channel. Any time we have to modify something, we have to make the change in every channel.
We have transformers for
some functions with JavaScript and Java code
some mappings
some database operations like inserts, etc.
Can we put this code somewhere where it is shared across channels, so that we don't need to write the transformers under each channel?
Thanks
Sid
A good way to do this is to move common code (functions, database operations, etc) into code templates.
some functions with JavaScript - Edit Code Templates is where you can provide common code that has to be available to all channels.
some database operations like inserts - I believe (as good practice) these should be specific to channels; and if you have a function specific to a certain channel and used in many places in that channel, then declare that function in whichever channel script it is needed: deploy, preprocessor, undeploy, or postprocessor.
some mappings - I'm not sure about this. If you choose JavaScript for mapping, you can achieve it by making it a global variable in the global scripts or in code templates.
some Java code - If it is Java code, with a library built so that the Mirth script is invoked on top of the library, then give the Java library getter and setter objects; that way you can traverse to any depth in your Mirth script to access the Java objects.
For example: if you are building XML, there are many libraries you can use, like the StAX parser, JDOM, etc., but using a DocumentBuilderFactory to build the XML will allow you to access the Java objects at any depth from the Mirth script.
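As a hedged, Mirth-agnostic illustration of that example (plain JDK classes only, nothing Mirth-specific; the element names are made up), building the XML through DocumentBuilderFactory keeps every node reachable as a Java object that a calling script can traverse:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringWriter;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlBuilderExample {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.newDocument();

        // Build the tree; each Element stays reachable as a Java object,
        // so a calling script can traverse to any depth via getters.
        Element patient = doc.createElement("patient");
        doc.appendChild(patient);
        Element name = doc.createElement("name");
        name.setTextContent("DOE, JOHN");
        patient.appendChild(name);

        // Serialize the DOM to a string (e.g. to hand back to a channel).
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        StringWriter out = new StringWriter();
        transformer.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}
```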

Make multiple Matlab sessions communicate

Is there a way to make multiple Matlab sessions communicate (session 1 accessing the workspace of session 2, for example)?
Thank you
You can do this in a few ways, for example using sockets or writing to/reading from files. In each case, one instance writes while the other reads.

Socket protocol specification for transferring keystrokes?

Is there an existing socket protocol that defines a way to transfer keystroke data across machines? I want to be able to type on one machine and have what I type there show up on another machine.
For instance, the protocol might define the field (e.g. that the data is for a keystroke), a data type (keystroke), and a value (the keystroke value).
One thing that you could do is look into messaging. I am not sure if this will help you that much, but there are a lot of different implementations of this; a good place to start is looking at the JMS API.
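For example, a minimal JMS producer that ships each keystroke as a message could look like the sketch below; the ActiveMQ broker URL and the queue name are assumptions, and a consumer on the other machine would read from the same queue and replay the characters as they arrive.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class KeystrokeSender {
    public static void main(String[] args) throws Exception {
        // Assumes an ActiveMQ broker reachable at this URL (not given in the question).
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("keystrokes");
        MessageProducer producer = session.createProducer(queue);

        // One message per keystroke, carrying the field/type/value idea from the question.
        TextMessage message = session.createTextMessage("a");
        message.setStringProperty("type", "keystroke");
        producer.send(message);

        connection.close();
    }
}
```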