Decentralized PLC Network using Soft-PLC

My setup is a tracked mobile robot which needs to be controlled using a PLC. For numerous reasons, it is necessary to have a decentralized computing unit (e.g. located in the office) which sends the commands to the tracked robot. The computing unit runs on an industrial Soft-PLC (in our case "CODESYS Control WIN"). The basic idea is that the decentralized unit does the heavy computing and the local unit is just a proxy (i.e. "read sensor data and send it to the decentralized unit" as well as "get controls from the decentralized unit and send commands to the ECUs").
The question is: how is it possible to do this kind of decentralized setup, i.e. how do I route commands from the local CODESYS instance to the decentralized unit?
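For illustration only, here is a minimal Java sketch of the proxy role the local unit would play, assuming a plain TCP link to the remote unit; the host, port, and sensor/ECU helpers are hypothetical stand-ins. A real CODESYS deployment would more likely rely on a mechanism such as network variables or OPC UA rather than a hand-rolled socket protocol.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Hypothetical sketch: the local unit forwards sensor readings to the
// decentralized unit and relays the computed commands to the ECUs.
public class LocalProxy {
    public static void main(String[] args) throws IOException {
        // Address and port of the office computing unit (made up for this sketch).
        try (Socket link = new Socket("office-plc.local", 5000);
             DataOutputStream toRemote = new DataOutputStream(link.getOutputStream());
             DataInputStream fromRemote = new DataInputStream(link.getInputStream())) {
            while (true) {
                // 1. Read sensor data and send it to the decentralized unit.
                double reading = readSensor(); // hypothetical sensor access
                toRemote.writeDouble(reading);
                toRemote.flush();

                // 2. Get controls from the decentralized unit and pass them to the ECUs.
                double command = fromRemote.readDouble();
                sendToEcu(command);            // hypothetical ECU access
            }
        }
    }

    private static double readSensor() { return 0.0; } // stub
    private static void sendToEcu(double command) { }  // stub
}
```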

Related

Layer in control software to abstract from real mechatronical system and simulation program

It's about a mechatronic system that needs to be controlled via software. It is not yet clear which language it will be written in, but since that is not important, let's just say Java.
The first thing is that we will need to send messages via CAN: some event happens in the control software, we send a message via CAN, and the mechatronic system reacts.
The second thing is that it would obviously be good to be able to test the software without the real mechatronic system, since that reduces effort. So I thought about writing another program, a simulation program.
So I imagine that the simulation program notices when a CAN message is sent and reacts to it.
What is a good approach to accomplish that?
I mean, for the real mechatronic system the control software needs to send a CAN message directly on the bus (maybe via some native code). For the simulation program, some kind of interprocess communication is needed. How must the control software be designed so that it doesn't care whether a simulation program is listening or a real mechatronic system is receiving the CAN messages?
My first thought was that the control software always sends "CAN messages" via an interprocess communication approach. Let's say, for the sake of simplicity, it is RMI. To send real CAN messages on the bus, there is a module in the same control software that receives the "CAN messages" via RMI and forwards them to the real CAN bus.
The simulation program is then able to receive the "CAN messages" via RMI, too, and can react to them.
Is that a good approach? I see that there is some overhead in the control software from communicating with itself via interprocess communication, which is not necessary in principle. But I see no other way to have an abstraction layer such that the control software contains no special code for the simulation program.
Thank you for feedback!
You're describing one aspect of Hardware-in-the-loop testing. It's a standard approach for developing mechatronic systems that combine software and hardware.
In a software setting one way to solve this problem is to provide an interface (as in a Java interface, rather than a physical one). You end up with two concrete implementations of that interface, one for your real hardware, and one for your test version. Because the real and test versions provide the same interface, they should be interchangeable.
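As a minimal sketch of that idea (all names here are hypothetical, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

// The control software only ever talks to this interface, so the same
// call works whether the target is the real bus or a simulation.
interface CanChannel {
    void send(byte[] frame);
}

// Production implementation: would hand the frame to a native CAN driver.
class RealCanChannel implements CanChannel {
    @Override
    public void send(byte[] frame) {
        // nativeCanSend(frame); // e.g. a JNI binding to the bus driver (assumed)
    }
}

// Test implementation: feeds frames to the simulation instead of the bus.
class SimulatedCanChannel implements CanChannel {
    private final List<byte[]> received = new ArrayList<>();

    @Override
    public void send(byte[] frame) {
        received.add(frame); // the simulation program reacts to these
    }

    public List<byte[]> receivedFrames() {
        return received;
    }
}

// The two implementations are interchangeable at construction time.
class ControlSoftware {
    private final CanChannel bus;

    ControlSoftware(CanChannel bus) {
        this.bus = bus;
    }

    void onEvent() {
        bus.send(new byte[] {0x01, 0x02}); // example frame
    }
}
```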
Once you've got your interfaces described, how you implement them should be irrelevant (i.e. you could use a scripting language to develop the test code more quickly or cheaply), so RPC may be a possibility, but there are certainly other choices.

Communicate with a microcontroller over ethernet

I am planning to make some microcontroller boards to do miscellaneous tasks, for example measuring analog voltages or controlling other instruments. Each board needs to be controlled, and its data downloaded, from one place. For that purpose I would use an Ethernet interface and do the communication over that. So my question is: what would be the most suitable method of achieving that? My ideas are: run a web server on each module and communicate with POST/GET, or run a telnet server on the boards and communicate with a telnet client. Security and speed/latency are not a concern, but data integrity is.
I don't need an HTML-based GUI for the modules because I will implement an application which communicates with the modules periodically, gets the data from them, and stores it in a database. The database is what I will use later, for examining the data for example.
Another example:
I have a board which measures temperature. There is a server on the board itself, run by the MCU. It is connected to a router via the ENC chip, and my PC is also connected to that router. I have an application which connects to the server run by the ATmega328, collects the data, then stores it in a database. It repeats this, let's say, every hour. I would use an ATmega328 and an ENC28J60 Ethernet interface chip. What do you recommend?
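As a rough sketch of what the PC-side collector could look like, assuming the HTTP variant and a made-up board address; the database step is stubbed out:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: poll each board over HTTP on a fixed schedule.
public class BoardPoller {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://192.168.1.50/temperature")) // hypothetical board URL
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // For data integrity, the board could append a CRC or checksum
                // that gets verified here before the reading is accepted.
                storeInDatabase(response.body());
            } catch (Exception e) {
                e.printStackTrace(); // a failed poll is simply retried next cycle
            }
        }, 0, 1, TimeUnit.HOURS);
    }

    private static void storeInDatabase(String reading) { /* stub */ }
}
```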

Centralized/Distributed/Service oriented Architecture/Application

I am designing a system architecture, and my knowledge from college doesn't help me when it comes to understanding the subtle differences between centralized, distributed and service-oriented architectures/applications.
If I take a typical client/server architecture, the client sends requests to a server, the server then sends responses to the client. That is a centralized architecture.
An application that handles both the server and client sides would be a distributed application (because it runs on different platforms), but that is still a centralized architecture.
Therefore, a distributed architecture must involve a distributed application.
Questions: am I right? And what does all that become when it comes to service-oriented architectures/applications?
Distributed: The whole process involved in a computation task is divided into pieces and assigned to multiple computational nodes. Each node, while doing its part of the processing, does not have access to all the information in the system needed to achieve a globally optimized result. The aggregate of the results from the nodes converges towards a globally optimal result, usually through multiple iterations of computation distributed across the nodes.
A good example is a router system, in which each router has only the information it exchanges with its neighbours. At the start, the neighbours know only part of the whole network. Once a router gets more information from its neighbours, it incorporates the new information into its view of the whole system, then spreads its view to its neighbours. Through multiple iterations of these steps, each computed separately by individual routers, all routers settle on a consistent global view of the whole network.
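A toy sketch of that convergence, in the spirit of distance-vector routing (the names and hop-count cost model are illustrative, not a real protocol):

```java
import java.util.HashMap;
import java.util.Map;

// Each router only ever merges what its neighbours tell it; no single
// router holds the global view, yet all of them converge on one.
class Router {
    // destination -> best known hop count
    final Map<String, Integer> distances = new HashMap<>();

    Router(String self) {
        distances.put(self, 0);
    }

    // Incorporate a neighbour's view: anything it can reach in n hops,
    // we can reach in n + 1. Returns true if our view changed, i.e. the
    // new information should be spread to our own neighbours in turn.
    boolean merge(Router neighbour) {
        boolean changed = false;
        for (Map.Entry<String, Integer> e : neighbour.distances.entrySet()) {
            int viaNeighbour = e.getValue() + 1;
            Integer known = distances.get(e.getKey());
            if (known == null || viaNeighbour < known) {
                distances.put(e.getKey(), viaNeighbour);
                changed = true;
            }
        }
        return changed;
    }
}
// Repeatedly calling merge() across all neighbour pairs until nothing
// changes yields a consistent global view on every router.
```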
Another example could be a web ordering system where the browser initially gets a list of commonly ordered goods. The browser may have logic to track user viewing behaviour and decide to fetch a different category of goods from the server, without sending all the user behaviour parameters to the server. In this imagined example, the browser knows something the server does not know, and vice versa; thus the whole application is a distributed system. On top of that, user authentication could be done on one server, inventory on another, and reservation on yet another. None of the servers involved has the whole information about a specific user's browsing and ordering session, but the aggregate work of all these nodes fulfills the business need to sell more goods and satisfy more customers.
The opposite of distributed is centralized, in which the computation logic is always able to see the whole picture.
Given this view, a client-server application can be viewed as a distributed system if you consider the client side to involve non-trivial decision making, or as a centralized system if you consider the client dumb.
The service-oriented term is more about how functional processing power is integrated into the system. In a service-oriented system, new capability may be introduced at runtime by discovering new API functionality, or new logic behind an unchanged API. Think about it: you could build an application that initially has little built-in capability, then expands by discovering and incorporating new capability from service providers. In contrast, a traditional system is assembled at build time, typically as the outcome of a human-driven discussion-design-documentation process. A service-oriented design is a good fit for a distributed system.
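As one concrete (and deliberately small, hypothetical) illustration of runtime capability discovery, Java's standard java.util.ServiceLoader picks up implementations from the classpath at runtime rather than wiring them in at build time:

```java
import java.util.ServiceLoader;

// Hypothetical capability contract; concrete implementations live in
// separately deployed jars and are announced via META-INF/services entries.
interface Capability {
    String name();
    void execute();
}

class Application {
    public static void main(String[] args) {
        // Whatever implementations are on the classpath right now get
        // discovered and incorporated; nothing here is fixed at build time.
        for (Capability capability : ServiceLoader.load(Capability.class)) {
            System.out.println("discovered: " + capability.name());
            capability.execute();
        }
    }
}
```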

Akka -- Deploy two ActorSystems on the same host

I'm writing this as a follow-up to PlayFramework -- Look up actors in another local ActorSystem, but this time targeting the question specifically to the Akka crowd.
The question is simple: Does it make sense to deploy two ActorSystems on the same host (not just on the same host but even on the same JVM), given that there appears to be no way to simply lookup the other system through system.actorSelection unless you remote to localhost?
In other words, since system1.actorSelection("akka://system2/user/my-actor") does not work, but system1.actorSelection("akka.tcp://system2#127.0.0.1:2552/user/my-actor") does, why even consider deploying two systems?
I suspect you're going to ask about a use case, so here's one for you. Assume I have a complex real-time system using Akka, and that this system is deployed as autonomous agents on any number of machines. Ideally, I'd like fine-grained control of the resources I allocate to this system, and I'd like it to be somewhat isolated. Furthermore, assume that I want to write a small control interface (e.g., a REST API) with the specific purpose of providing input to and monitoring the real-time system. Naturally, I would make that control system another ActorSystem which interacts with the first. It makes sense, right? I don't want those actors running in the same ActorSystem as the real-time processing (for isolation, practicality, separate logging, keeping resource monitoring unpolluted, supervision (it would add one more branch to the hierarchy), etc.). That control ActorSystem would never be deployed on a separate machine, since it goes hand in hand with the real-time system. Yet the only way for these two systems to communicate is through loopback TCP.
Is what I'm suggesting not the proper/intended way to do things? Am I missing something? Is there a way to do this that I haven't considered? Does my use case even call for using Akka?
Thanks in advance for your input!
Instead of having two separate actor systems, you could have a top-level actor for each of the branches and run each branch on a dedicated dispatcher. Each top-level actor will have its own error kernel as well. Having two actor systems mostly makes sense when they are not related; since yours communicate, I would not separate them.
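A minimal sketch of that single-system layout, using Akka's classic Java API; the dispatcher and actor names are illustrative:

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// One ActorSystem, two top-level branches, each on its own dispatcher.
public class SingleSystemSetup {
    public static void main(String[] args) {
        Config config = ConfigFactory.parseString(
                "realtime-dispatcher { type = Dispatcher, executor = \"fork-join-executor\" }\n" +
                "control-dispatcher  { type = Dispatcher, executor = \"thread-pool-executor\" }")
                .withFallback(ConfigFactory.load());
        ActorSystem system = ActorSystem.create("robot", config);

        // Each branch gets its own error kernel and its own thread pool.
        ActorRef realtime = system.actorOf(
                Props.create(RealtimeSupervisor.class).withDispatcher("realtime-dispatcher"),
                "realtime");
        ActorRef control = system.actorOf(
                Props.create(ControlSupervisor.class).withDispatcher("control-dispatcher"),
                "control");

        // In-process lookup now works without remoting, e.g.:
        //   system.actorSelection("/user/realtime/my-actor")
    }

    static class RealtimeSupervisor extends AbstractActor {
        @Override
        public Receive createReceive() { return receiveBuilder().build(); }
    }

    static class ControlSupervisor extends AbstractActor {
        @Override
        public Receive createReceive() { return receiveBuilder().build(); }
    }
}
```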

How can I stress test an iPhone app?

How can I stress test an iPhone app?
I need stress testing, not performance testing - for example, 100 users accessing the app's database (which is on the server) at the same time.
Any help? Thanks in advance.
First, you need to decide if you need to test the client-side (iPhone) app, the server-side code, or both.
Testing only the server side might make this much easier - especially if the app uses HTTP to communicate with the server and exchanges data via a text-based format (XML, JSON, etc.). There are many web load testing tools available which can handle this scenario. Using our Load Tester product, for example, you would configure the proxy settings on your iPhone to point to our software running on a local machine, then start a recording and use the application. Load Tester will record the messages exchanged with the server. You can then replay the scenario, en masse, to simulate many users hitting your server simultaneously. The process, at a high level, is the same with most web load testing tools.
Of course, the requests to the server can't be replayed exactly as recorded - they'll need to be customized to accurately simulate multiple users. How much customization is needed will depend on the kind of data being exchanged, the complexity of the scenario and the ability of the tool to automatically configure dynamic fields (and this is one area where the abilities of the tools vary greatly).
Hope that helps!
A basic simulation would involve running your unit tests on OS X using many simultaneous unit-test processes (with unique simulated users and other variables).
If you need more 'stress', add machines - you'll likely hit I/O or network limits from one machine relatively early on.
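As a bare-bones illustration of the "many simultaneous users" idea (the endpoint is hypothetical; real load testing tools add ramp-up, think time, per-user sessions and response validation on top of this):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fire N concurrent simulated users at the server at (roughly) the same time.
public class StressSketch {
    public static void main(String[] args) throws InterruptedException {
        int users = 100;
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        CountDownLatch start = new CountDownLatch(1);

        for (int i = 0; i < users; i++) {
            int userId = i;
            pool.submit(() -> {
                start.await(); // hold every user until all threads are ready
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://api.example.com/data?user=" + userId)) // hypothetical endpoint
                        .GET().build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("user " + userId + " -> " + response.statusCode());
                return null; // submitted as a Callable so checked exceptions are allowed
            });
        }
        start.countDown(); // release all simulated users at once
        pool.shutdown();
    }
}
```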