PLC Modbus datalogger that can push data to a web server [closed]

We are looking for a data logger to connect to a PLC through Modbus TCP or RTU. I have found several of these on the market, but I need the ability to post the data back to a web server. Basically, we have a website that uses a graph to show the current values from the PLC over time, and this data shouldn't be more than a few seconds old. We have used a Raspberry Pi, but we are looking for alternatives better suited to an industrial environment.
Critical features
1. If the connection is lost, then the data that has been logged since the last connection should be sent up.
2. A backup of the logged data should be stored on the device
3. Use some type of frequent update mechanism to send data to the server, such as an HTTP POST.
I have only found one device, and I wonder if I'm using the wrong search terms/lingo or if COTS devices with these features simply do not exist.

I did a similar project recently using a B&R PLC and the AsHTTP library. It was able to make HTTP PUT/GET requests directly to a resource on the web (what the web world would call a REST API). You could write your own code to buffer and store the data locally in flash memory in case the device disconnects from the net.
Also, B&R lets you use Modbus TCP for free directly through the Ethernet port.
I've never used a "data logger" standalone device, but this is one option.
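Whatever device you end up with, the buffer-and-forward logic behind critical features 1–3 can be sketched in a few lines of Python. This is only an illustration: the Modbus read itself is omitted, and the server URL, table name, and JSON payload shape are all assumptions, not anything from a real product.

```python
import json
import sqlite3
import time
import urllib.request

def init_store(conn):
    """Local backlog table; doubles as the on-device backup (feature 2)."""
    conn.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, payload TEXT)")
    conn.commit()

def post_sample(url, payload):
    """POST one JSON sample to the web server; raises OSError on failure."""
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

def log_and_forward(conn, url, payload, post=post_sample):
    """Buffer the sample locally first, then try to flush the whole backlog.
    Anything that fails to send stays in the store until the next cycle,
    so data logged while the connection was down is sent up on recovery
    (feature 1)."""
    conn.execute("INSERT INTO samples VALUES (?, ?)",
                 (time.time(), json.dumps(payload)))
    conn.commit()
    rows = conn.execute(
        "SELECT rowid, payload FROM samples ORDER BY rowid").fetchall()
    for rowid, raw in rows:
        try:
            post(url, json.loads(raw))
        except OSError:
            return  # server unreachable; keep the backlog for next time
        conn.execute("DELETE FROM samples WHERE rowid = ?", (rowid,))
        conn.commit()
```

The `post` parameter is injectable so the send path can be tested (or swapped for a different transport) without a live server.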

Investigate using an Omron NJ Series PLC with SQL Connection capabilities. This PLC could grab any data on another PLC through EtherNet/IP. From there, it has the ability to log to an SQL Server.
A very useful feature that I have made extensive use of is the spool function. If the connection between the NJ and the SQL Database is broken, the NJ will retain the data that wasn't logged in its own memory and insert it automatically once the connection is reestablished.
https://industrial.omron.ca/en/products/nj5-database-connection

Related

How to read data from a socket until the client stops sending? [closed]

I have a problem.
I have a client and a server. The client connects to the server over TCP.
The client then sends some data in chunks, and I don't know the total length of the data (it is a TLS handshake). But I do know that the client sends a fixed-length chunk, then stops until it receives a response, then sends another fixed-length chunk, and so on.
I need to read all the chunks until the client stops sending (because there are so many chunks). How can I do that?
My only idea is a timeout: read data in a loop and set a timeout between iterations. If the timeout expires, the data has been completely collected.
Perhaps there is a more elegant solution?
Based on the information in your comments, you're doing this wrong. The correct way to write an HTTPS proxy is to read the CONNECT line, make the upstream connection, send the appropriate response back to the client, and then, if successful, start copying bytes in both directions simultaneously. You're not in the least concerned with packets or read sizes, and you should certainly not make any attempt to 'collect' packets before retransmission, as that will just add latency to the system.
You can accomplish this either by starting two threads per connection, one in each direction, or via non-blocking sockets and select()/poll()/epoll(), or whatever that looks like in Go.
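The two-threads-per-connection variant can be sketched like this (in Python rather than Go, purely for illustration; the function names are my own):

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes one direction until EOF, then half-close the other side.
    No buffering or 'collecting' of packets: forward each chunk as it arrives."""
    try:
        while True:
            chunk = src.recv(4096)
            if not chunk:  # peer closed its sending side
                break
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate EOF downstream
        except OSError:
            pass

def tunnel(client, upstream):
    """One thread per direction; return when both directions hit EOF."""
    t1 = threading.Thread(target=pipe, args=(client, upstream))
    t2 = threading.Thread(target=pipe, args=(upstream, client))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
```

In a real proxy you would call `tunnel()` after replying to the CONNECT request; the same shape works with non-blocking sockets and `select()`/`poll()` instead of threads.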
But I have no idea why you're doing this at all. There are plenty of open-source HTTP proxies already in existence, and since you're dealing with HTTPS there is no value you can possibly add to them. Your claim about 'business logic' is meaningless, or at least unimplementable.

What does my OPC Server need to do after OPC Client restores lost connection? [closed]

I have my own OPC Server written using the SLIK-DA4 ActiveX control in VB6. It hosts quite a large collection of tags (probably 2,000).
A customer uses a Siemens OPC client to connect (no private security). Everything goes fine and subscription reads appear just fine on the client.
Some time later, the IP link is lost between client and server for a while. However, the customer is telling me that when the link recovers, they then have to "do something" on the OPC client to get it to start subscribing again, after which things return to normal.
... Yes, I know, I'm trying to find out what they mean by "do something" !!
However, in the meantime, I'm trying to think of what I might not be doing correctly in my server code to handle this situation. My tag values don't update too often in the attached field equipment, so is it possible that, on reconnection, the client isn't receiving any callbacks simply because there are no tag changes taking place ?
On recovery of the link, how can I get the server to push an up-to-date status for all tags to the client rather than rely on someone "doing something" on the client end ? Do I need to use the OnConnect event and then SetVQT(,,sdaSGood) for all tags, or will this not have any effect ?
Thanks
When the OPC server receives a new connection (which seems to be the case here), or more precisely, when the OPC client creates an active group and puts items in it, the server is supposed to send an initial notification for each item (value/timestamp/quality, or error), even if it has not changed recently.
If you are developing the server using a reasonable OPC toolkit, however, this should be taken care of automatically by the toolkit code. It certainly makes no sense to try to change the quality of the tags just because the OPC client had connected. The quality in the VQT should reflect whatever comes from your underlying system, or the communication problems communicating to THAT system, but not anything between the OPC server and the client.
It may also very well be a problem on the client side, which may simply not be resilient enough to handle certain situations. The authoritative way to tell what is happening (and put the "blame" on either the server or the client) would be to place an OPC Analyzer (available from the OPC Foundation to OPC members) in between, log the OPC calls, and check which side is misbehaving.
what they mean by "do something"
I think it means the customer needs to restart the client, create a group, and add tags. But this is how things should be, because the OPC Specification doesn't say anything about handling connection breaks. It only describes interfaces that can be used to check connection (server) status (e.g. IOPCServer::GetStatus). Typically, clients reconnect (create a new connection with a new group and add tags) automatically, but only if they have noticed that the connection was lost.
a "re-open" request from their OPC Client
You can ask someone from Siemens to provide you with a quotation from the OPC Specification where this "re-open" mechanism and/or its interfaces are described, because I don't remember any such notion in the specification.
But if "re-open" means reusing the old connection (i.e. DCOM objects left over from the old connection), then I can imagine the following situation:
the client and server interact properly
the connection breaks
the server sends an OnDataChange callback to the client and receives an error (something like "RPC service is unavailable")
the server just stops sending OnDataChange callbacks and does nothing with the DCOM objects related to the current connection
the connection is restored
the client can still call the server using the existing DCOM objects, but doesn't receive OnDataChange callbacks
the customer needs to restart the client manually to repair the connection
In that case you (or the toolkit) should either remove all objects related to the broken connection, or not stop sending callbacks.
If the IP address changes, try to reconnect to the OPC server, or give your OPC server a static IP address.

How to make realtime notifications like Facebook? [closed]

I am trying to build realtime notifications just like Facebook's. After a lot of learning and searching I am very confused; please explain what is right and what is wrong.
Assume the site may have the same number of users as Facebook.
Can we build realtime notifications with long polling? If yes, what are the advantages, disadvantages, and limitations?
Can we build realtime notifications with WebSockets (again, keeping in mind that the number of users may match Facebook's)? If yes, what are the advantages, disadvantages, and limitations?
If there is any other method, please explain it.
Confusion
From what I have learned so far, WebSockets are good, but there is supposedly a limit (max 5K) on the number of open connections, which would mean at most 5K concurrent users; that is far fewer than Facebook's user count. If I am wrong, please explain.
You're wrong: a WebSocket-based solution is not limited to 5K concurrent connections.
According to the Facebook Newsroom, they had about 727 million daily active users on average in September 2013, or about 504k unique users hitting the Facebook page every minute. Given an average visit time of 18 minutes (researched by statisticbrain.com), their notification infrastructure must be able to serve about 9 million (18 × 504k) concurrent TCP connections 24/7. Although this number is a very rough approximation, it gives a fair idea of what they are dealing with, and of what you have to deal with if you are going to build such a system.
You can use long polling as well as websockets to build your realtime notification system. In both cases you face similar problems which are related to your OS (Explanations are for a Unix based system):
port limitation: a TCP connection is identified by the (source IP, source port, destination IP, destination port) tuple, so a single client IP can open at most about 2^16 connections to the same listening IP/port; at these scales you will need to listen on multiple ports and/or multiple IP addresses.
memory: every open connection uses at least one file descriptor plus some kernel memory.
Read more about the limitations in What is the theoretical maximum number of open TCP connections that a modern Linux box can have
Long-polling vs. Websockets:
Every poll in a long-poll solution requires a new HTTP request, which needs more bandwidth than keeping a WebSocket connection alive. Moreover, each notification is returned as an HTTP response, which triggers a new poll request. Although the WebSocket solution can be more efficient in terms of bandwidth and system resources, it has a major drawback: lack of browser support.
Depending on the stats at hand, a WebSocket-only solution ignores about 20–40% of your visitors (stats from statscounter.com). For this reason, various server libraries were developed that abstract the concept of a connection away from the 'physical' underlying transport. As a result, more modern browsers create the connection using WebSockets, while older browsers fall back to an alternative transport such as HTTP long polling, JSONP polling, or Flash. Prominent examples of such libraries are SockJS and Socket.io.
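The essential shape of long polling, independent of transport, is a blocking wait with a timeout followed by an immediate re-poll. A minimal Python sketch, where an in-process queue stands in for the held-open HTTP request (the function names and payload are my own, for illustration only):

```python
import queue

def long_poll(inbox, timeout=30.0):
    """One long-poll cycle: block until a notification arrives or the
    timeout expires, mimicking a server that holds the HTTP request open.
    Returns the notification, or None on timeout (the client re-polls)."""
    try:
        return inbox.get(timeout=timeout)
    except queue.Empty:
        return None

def poll_loop(inbox, handle, cycles):
    """The client loop: issue a poll, process any result, poll again.
    A real client would loop forever; `cycles` bounds it for the sketch."""
    for _ in range(cycles):
        note = long_poll(inbox, timeout=0.5)
        if note is not None:
            handle(note)
```

The bandwidth cost mentioned above comes from the fact that every `long_poll` cycle corresponds to a full HTTP request/response pair, headers included, whereas a WebSocket pays that cost once at connection setup.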

How would you identify whether this is a web server / code / VM problem? [closed]

Let me give the whole picture first.
In an Oracle VM box, I've installed Windows XP Pro (32-bit) and a web server. The web server's web root, CGI scripts, and interpreter are mounted from shared folders on my host computer (my real C: drive), and the folders are read-only.
My problem is that when I create any (CGI) web page with frames (or iframes), a random frame throws Error 500 (even when I load the page from localhost). If I reload that frame, or the whole page, it works again (and a frame that was fine can start failing after I reload the whole page). I've checked very carefully, and there's no problem with my script. By the way, I use Perl for my CGI scripts.
So I suspect there might be some problem with the "traffic", even though it's all on the same machine, but I don't know whether this can happen when I call the same module from those different frames. Has anyone experienced a similar situation, or have any relevant information? Is there any test plan you would suggest? I am currently using Abyss X1 as my web server, but I also tried Apache, and the same thing happens.
Thanks in advance
Windows XP does not allow more than 10 incoming connections and is therefore not a good operating system on which to install a web server.
Note For Windows XP Professional, the maximum number of other computers that are permitted to simultaneously connect over the network is ten. This limit includes all transports and resource sharing protocols combined. For Windows XP Home Edition, the maximum number of other computers that are permitted to simultaneously connect over the network is five. This limit is the number of simultaneous sessions from other computers the system is permitted to host. This limit does not apply to the use of administrative tools that attach from a remote computer.
Thanks Amon and Sinan, that gave me the clues. These two are the reasons this happens (though I'm not sure they are the only ones). The interpreter and underlying modules were also being loaded from the host machine, which is quite expensive. After I installed Perl (and the modules) inside my VM, the problem stopped happening!

What are the advantages and disadvantages of site mirroring [closed]

Question 1:
When sites are mirrored, the content of their respective servers is synchronized (possibly automatically (live mirrors) or manually). Is this true? Are all servers 'equal', or does a main server exist, which then sends its changes to the other 'child servers'? So all changes have to happen on the main server, and child servers are not allowed changes?
Question 2:
Expected advantages:
Global advantage: when a site that is originally hosted in the US is mirrored to a server in London, Europeans will benefit. They will get better response times, and because the downloaders are split between the American and European servers, download speeds can be higher.
Security: When one server crashes or is hacked, the other server can continue to operate normally.
Expected disadvantages:
If live mirroring is not used, some users will have to wait for renewed content.
More servers equals higher upkeep costs.
What other items can be added to these lists?
When sites are mirrored, the content of their respective servers is
synchronized. Is this true?
Yes, mirror sites should always be synchronized with their masters, even if, for several reasons (e.g. update propagation times, network failures, etc.), they may not be.
There are several ways to achieve this; for example, a simple method could be running an rsync command in a cron job, while a better solution is the "push mirroring" technique used by the Debian and Ubuntu Linux distributions.
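The cron-plus-rsync approach might look like the following crontab fragment (the schedule, module name, hostname, and paths are all hypothetical):

```shell
# Pull the master's web root to this mirror every 15 minutes.
# -a preserves permissions/timestamps, -z compresses in transit,
# --delete removes files that no longer exist on the master.
*/15 * * * *  rsync -az --delete master.example.org::webroot/ /var/www/mirror/
```

This is pull mirroring: each mirror fetches on its own schedule, so content can be up to one interval stale. Push mirroring inverts the flow, with the master triggering each mirror's sync immediately after an update.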
Are all servers 'equal', or does a main server exist, which then
sends its changes to other 'child servers'?
No, not all servers are equal; generally the content provider updates one or more master servers, which in turn provide the updated content to the other mirrors.
For example, in the Fedora infrastructure there are master servers, tier-1 servers (fastest mirrors) and tier-2 servers.
So all changes have to happen on the main server, and child servers
are not allowed changes?
Yes, in a mirrored context the content must be updated only on the master servers (one or more).
Expected advantages
Maybe the most comprehensive list of reasons for mirroring can be found on Wikipedia:
To preserve a website or page, especially when it is closed or is about to be closed.
To allow faster downloads for users at a specific geographical location.
To counteract censorship and promote freedom of information.
To provide access to otherwise unavailable information.
To preserve historic content.
To balance load.
To counterbalance a sudden, temporary increase in traffic.
To increase a site's ranking in a search engine.
To serve as a method of circumventing firewalls.
Expected disadvantages
Cost: you have to buy additional servers and spend time to operate them.
Inconsistency: when one or more mirrors are not synchronized with the master (and this could happen not only with manual sync, but also with live sync).
As a further reference, since mirroring is a simple form of a Web Distributed System, you could also be interested in this reading.
Also, for files that are popular for downloading, a mirror helps reduce network traffic, ensures better availability of the Web site or files, or enables the site or downloaded files to arrive more quickly for users close to the mirror site. Mirroring is the practice of creating and maintaining mirror sites.
A mirror site is an exact replica of the original site and is usually updated frequently to ensure that it reflects the content of the original site. Mirror sites are used to make access faster when the original site may be geographically distant (for example, a much-used Web site in Germany may arrange to have a mirror site in the United States). In some cases, the original site (for example, on a small university server) may not have a high-speed connection to the Internet and may arrange for a mirror site at a larger site with higher-speed connection and perhaps closer proximity to a large audience.
In addition to mirroring Web sites, you can also mirror files that can be downloaded from an FTP (File Transfer Protocol) server. Netscape, Microsoft, Sun Microsystems, and other companies have mirror sites from which you can download their browser software.
Mirroring could be considered a static form of content delivery.