Omron PLC Ethernet card

I have an Ethernet card in an Omron PLC. Is there any way to do an automatic check to see if the Ethernet card is working? If not, is there a manual way? For example, if the card itself were to fail, the PLC would give an error. But if the card just loses its connection to the server, it would NOT give an error. Any help on how to do this?

There are several types of errors you can check for, and how you do so depends on the type of error. Things you can check:
ETN unit Error Status (found in the PLC at CIO address 1500 + (25 x unit number) + 18)
What it reports: errors in IP configuration, routing, DNS, mail, other network services, etc.
See: Manual Section 8-2
The ETN unit also keeps an internal error log (manual section 8-3) that you can read out to your HMI software (if you use it) using FINS commands. This documents all manner of errors internal to the ETN unit.
There are also other memory reservations in the PLC for CPU bus devices (like the ETN unit) which provide basic status flags you can include in ladder logic to raise alarms, etc. (See section 4-3 : Auxiliary Area Data).
These flags indicate, for example, whether the unit is initializing, has initialized successfully, or is ready to execute network commands, and whether the last executed command completed OK or returned an error code (which can be read from the error log as above). They can tell you whether the PLC is unable to communicate properly with the ETN device.
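As an illustration of the FINS approach mentioned above, here is a minimal sketch that reads the Error Status word over FINS/UDP from a PC. The PLC IP address, node numbers and unit number are assumptions made for the example; check them against your own network configuration and the FINS command reference before relying on this.

```c
/* Minimal sketch: read the ETN unit Error Status word (CIO 1500 + 25*unit + 18)
 * with a FINS Memory Area Read (01 01) over UDP. PLC IP, node numbers and the
 * unit number below are assumptions for illustration only. */
#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in plc = {0};
    plc.sin_family = AF_INET;
    plc.sin_port   = htons(9600);                        /* default FINS/UDP port */
    inet_pton(AF_INET, "192.168.250.1", &plc.sin_addr);  /* assumed PLC IP */

    unsigned unit = 0;                      /* assumed ETN unit number */
    unsigned addr = 1500 + 25 * unit + 18;  /* error status word = CIO 1518 */

    unsigned char req[] = {
        0x80, 0x00, 0x02,        /* ICF, RSV, GCT */
        0x00, 0x01, 0x00,        /* DNA, DA1 (PLC node, assumed 1), DA2 */
        0x00, 0x02, 0x00,        /* SNA, SA1 (our node, assumed 2), SA2 */
        0x00,                    /* SID */
        0x01, 0x01,              /* MRC/SRC: Memory Area Read */
        0xB0,                    /* area code: CIO, word access */
        (unsigned char)(addr >> 8), (unsigned char)(addr & 0xFF), 0x00,
        0x00, 0x01               /* read one word */
    };
    sendto(fd, req, sizeof req, 0, (struct sockaddr *)&plc, sizeof plc);

    unsigned char rsp[64];
    ssize_t n = recvfrom(fd, rsp, sizeof rsp, 0, NULL, NULL);
    if (n >= 16)  /* 10-byte header + command echo + end code, then data */
        printf("error status word: 0x%02X%02X\n", rsp[14], rsp[15]);
    close(fd);
    return 0;
}
```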

You can implement a single-byte heartbeat location that the server auto-increments each second. Then, every few seconds, your PLC logic checks whether the old reading is the same as the new reading; if it is, you trigger an alarm that the physical server (which is the communication client) is no longer communicating with the PLC Ethernet card.
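Expressed in C rather than ladder logic, the PLC-side check amounts to something like this minimal sketch (the function name and check interval are illustrative only):

```c
/* Sketch of the heartbeat watchdog: the server bumps a counter in PLC memory
 * once a second; logic on the PLC side compares snapshots taken a few seconds
 * apart and raises an alarm when the counter has stopped moving. */
#include <stdbool.h>
#include <stdint.h>

static uint8_t last_seen;   /* snapshot from the previous check */

/* Call every few seconds with the current value of the heartbeat byte.
 * Returns true when the server appears to have stopped updating it. */
bool server_link_lost(uint8_t heartbeat_now)
{
    bool stale = (heartbeat_now == last_seen);  /* no change since last check */
    last_seen = heartbeat_now;
    return stale;   /* drive your comms alarm from this */
}
```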

Can I write in an Input Register? (Modbus)

I've been working for two months on a Modbus project and have now run into a problem.
My client is asking me to write to an input register (address 30001 to 40000).
I thought that wasn't possible, because all the Modbus documentation says registers 30001 to 40000 are read-only.
Is it even possible to write to those registers? Thanks in advance.
Both holding and input register related functions contain a 2-byte address value. This means that you can have 65536 input registers and 65536 holding registers in a device at the same time.
If your client is developing the firmware of the slave, they can place holding registers into the 3xxxx - 4xxxx area. They don't need to follow the memory layout of the original Modicon devices.
If one can afford to diverge from the Modbus standard, it's even possible to increase the number of registers. In one of my projects, I considered using the Preset Single Register (06) function as a bank-select command. Of course, you can't call it Modbus anymore, but the master can still access the slave using a standard library or diagnostic tools.
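For reference, here is a minimal sketch of how such a Preset Single Register (06) request would be built for Modbus RTU, including the standard CRC-16. The slave address, register number and the bank-select meaning of the value are made up for the example:

```c
/* Build a Modbus RTU "Preset Single Register" (function 06) frame,
 * e.g. using a register write as a bank-select command. */
#include <stdint.h>
#include <stdio.h>

/* Standard Modbus RTU CRC-16 (poly 0xA001, init 0xFFFF). */
static uint16_t modbus_crc16(const uint8_t *buf, int len)
{
    uint16_t crc = 0xFFFF;
    for (int i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

int main(void)
{
    uint8_t frame[8] = {
        0x01,        /* slave address (assumed) */
        0x06,        /* function 06: Preset Single Register */
        0x00, 0x0A,  /* register 10 -- hypothetical "bank select" register */
        0x00, 0x02,  /* value 2: select bank 2 (hypothetical meaning) */
    };
    uint16_t crc = modbus_crc16(frame, 6);
    frame[6] = (uint8_t)(crc & 0xFF);   /* CRC low byte goes first on the wire */
    frame[7] = (uint8_t)(crc >> 8);

    for (int i = 0; i < 8; i++) printf("%02X ", frame[i]);
    printf("\n");
    return 0;
}
```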
You can't write to Input Contacts or Input Registers; there is no Modbus function to write to them. They are read-only by definition.
Modbus is a protocol and in no case specifies where the values are stored, only how they are transmitted.
There are now devices that support 6-digit addresses and can therefore address up to 65536 registers per group.

Packet generation in PCI/PCIe devices

I have a few questions on PCI/PCIe packet generation and on CRC generation and checking. I have tried many searches but could not find a satisfactory answer. Please help me understand the points below.
1. How are packets (TLP, DLLP and PLLP) formed in a PCI/PCIe system? For example, let's say the CPU generates a memory read/write from/to a PCIe device (the device is mapped into memory). This request is received by the PCI/PCIe root complex. The root complex generates the TLP, and the DLLP and PLLP are generated and appended to the TLP to form a PCI/PCIe packet. This packet is claimed by one of the root ports based on the MMIO address ranges. Each port on the switch/endpoints generates the DLLP and PLLP and passes them over to the next device on the link, where they are stripped and checked for errors.
Q.1 - Is it true that packet generation/checking is done entirely by hardware? What contribution does software make to packet generation, and to checking packets for errors on the receiving device?
Q.2 - How are the ECRC and LCRC generated for a packet? The LCRC is generated and checked at each PCI/PCIe device/port, while the ECRC is generated only once, by the requester (the root complex in our example). So are ECRC/LCRC generation and checking done completely in hardware? Can someone explain, with an example, how the LCRC/ECRC are generated and checked from the moment the CPU issues a PCI read/write request?
Q.3 - When we say that the transaction layer, data link layer and physical link layer generate the TLP, DLLP and PLLP respectively, do these layers refer to hardware or software layers?
I think that if software came into play every time a packet or CRC is generated/checked, it would slow down the data transfer; hardware can do these tasks much faster.
Please correct me if I am wrong somewhere. I want to understand the above scenarios from a HW vs. SW point of view. Please help.
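On the CRC side, the arithmetic itself is an ordinary CRC-32 over the packet bytes (PCIe uses the same 0x04C11DB7 polynomial as Ethernet for the LCRC and ECRC), and it is performed entirely in hardware at line rate. The sketch below shows the conceptual math only; it ignores PCIe's exact bit-ordering, seed and final-XOR rules, so see the spec for the precise mapping:

```c
/* Conceptual sketch only: what the link hardware computes for LCRC/ECRC is a
 * CRC-32 with polynomial 0x04C11DB7 over the packet bytes. PCIe's exact
 * bit-ordering and final-XOR details are omitted here. */
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32_msb_first(const uint8_t *data, int len)
{
    uint32_t crc = 0xFFFFFFFFu;              /* seed with all ones */
    for (int i = 0; i < len; i++) {
        crc ^= (uint32_t)data[i] << 24;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : crc << 1;
    }
    return crc;
}

int main(void)
{
    /* A made-up TLP header + payload; in hardware this runs at line rate
     * with no software involvement. */
    uint8_t tlp[] = { 0x40, 0x00, 0x00, 0x01, 0xDE, 0xAD, 0xBE, 0xEF };
    printf("CRC32 = 0x%08X\n", crc32_msb_first(tlp, (int)sizeof tlp));
    return 0;
}
```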

What is General Call Address and what is the purpose of it in I2C?

I wonder what the General Call Address (0x00) in I2C is. If we have a master and some slaves, can we communicate with the slaves through our master using this address?
Section 3.2.10 of I2C specification v.6 (https://www.i2c-bus.org/specification/) clearly describes the purpose of general call.
3.2.10 General call address
The general call address is for addressing every device connected to the I2C-bus at the same time. However, if a device does not need any of the data supplied within the general call structure, it can ignore this address. If a device does require data from a general call address, it behaves as a slave-receiver. The master does not actually know how many devices are responsive to the general call. The second and following bytes are received by every slave-receiver capable of handling this data. A slave that cannot process one of these bytes must ignore it. The meaning of the general call address is always specified in the second byte (see Figure 30).
You can use it to communicate with your slaves, but three restrictions apply:
A general call can only write data to slaves, not read.
Every slave receives the general call; you cannot address a specific device with it unless you encode the device address in the general call message body and decode it in the slave.
There are standard general call message formats. You should not use the standard codes for your own functions.
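As a concrete illustration, here is a minimal sketch using the Linux i2c-dev interface (assumed to be available) to send the standard "software reset" general call. The bus number is made up, and because address 0x00 is reserved, I2C_SLAVE_FORCE may be needed where plain I2C_SLAVE is refused; check this against your kernel and adapter:

```c
/* Minimal sketch, Linux i2c-dev assumed: send the standard general call with
 * second byte 0x06 ("reset and write programmable part of slave address").
 * Bus number and the need for I2C_SLAVE_FORCE are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);   /* assumed bus number */
    if (fd < 0) { perror("open"); return 1; }

    /* Target the general call address (0x00); FORCE because 0x00 is reserved. */
    if (ioctl(fd, I2C_SLAVE_FORCE, 0x00) < 0) { perror("ioctl"); return 1; }

    /* Second byte 0x06 = standard software-reset code. Note the general call
     * can only write -- there is no read form. */
    unsigned char cmd = 0x06;
    if (write(fd, &cmd, 1) != 1) perror("write");

    close(fd);
    return 0;
}
```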

Simulink: Introduce delay with UDP Send/Receive

I'm building a client/server-type subsystem in a control system application using UDP Send/Receive blocks in Simulink. Data x is sent to the server via the UDP Send block, processed at the server, which returns output y.
Currently, both the client (a Simulink model) and the server (processing logic written in Java) reside on localhost, so the packet exchanges take essentially zero time. I'd like to introduce network delay such that the packet exchanges take a varying amount of time (say, due to changes in bandwidth availability), effectively simulating a scenario where the server node is located in a different geographical location.
Could someone guide me on how to achieve this? Thanks.
As a general (Simulink-independent) solution in a Windows environment, you should have a look at the following tool, which "makes your network condition significantly worse, but in a managed and interactive manner."
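A platform-independent alternative is to park a small UDP relay between the model and the server and have it forward packets after an artificial delay. Here is a minimal one-direction sketch (all ports and addresses are made up, and the return path would need the same treatment):

```c
/* Tiny UDP relay that adds 50-150 ms of fake latency to each packet.
 * Simulink sends to the relay port; the relay forwards to the real server.
 * One direction only -- run a second instance for the reply path. */
#include <stdlib.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in listen_on = {0}, server = {0};

    listen_on.sin_family = AF_INET;              /* Simulink sends here... */
    listen_on.sin_addr.s_addr = htonl(INADDR_ANY);
    listen_on.sin_port = htons(26000);           /* assumed relay port */
    bind(fd, (struct sockaddr *)&listen_on, sizeof listen_on);

    server.sin_family = AF_INET;                 /* ...and we forward here */
    server.sin_port = htons(25000);              /* assumed real server port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    char buf[2048];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n <= 0) continue;
        usleep(50000 + rand() % 100000);         /* 50-150 ms of fake latency */
        sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&server, sizeof server);
    }
}
```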

AHCI Driver for own OS

I have been programming a little AHCI driver for two weeks. I have read this article and Intel's Serial ATA Advanced Host Controller Interface (AHCI) 1.3 specification. There is an example that shows how to read sectors via DMA mode (osdev.org). I have done this operation (ATA_CMD_READ_DMA 0xC8) successfully, but when I tried to write sectors (ATA_CMD_WRITE_DMA 0xCA) to the device, the HBA set the error
Offset 30h: PxSERR – Port x Serial ATA Error - Handshake Error
(this is the decoding from Intel's AHCI specification). I don't understand why this happened. Please help me.
In addition, I have tried to issue the IDENTIFY (0xEC) command, but without success...
You asked this question nearly two months ago, so I'm not sure if you've already figured this out. Please note that I'm writing from memory in terms of what must be done first, and so on; I may not have remembered everything, or remembered it accurately. You should reference the AHCI spec for everything. The methods for doing this are as varied as the programmers who have done it, so I'm keeping code to a bare outline rather than a full example.
For starters, ensure that you've set the HBA state machine accordingly. You'll find references for the state machines supported by the HBA in that same SATA spec 1.3. With that in mind, you should check a few registers.
Please note that all page numbers are given with respect to viewing in Adobe Acrobat and are 8 pages beyond the numbering in the actual document.
From pages 24 and 25 of the spec, check GHC.IE and GHC.AE. These two will turn on interrupts and ensure that the HBA is working in AHCI mode. Another very important register to check is CAP.SSS (page 23). If this bit is high, the HBA supports staggered spin-up, which means it will not perform protocol negotiation for any port on its own. Before you do the following, store the value of PxSIG (pages 35 and 36).
To actually spin up a port, you'll need to visit pages 33, 34 and 35 of the spec, which cover the PxCMD register. For each port supported by the HBA (check CAP.NP to know how many there are), you'll have to set the PxCMD.SUD bit high. After setting that bit, poll PxSSTS (page 36) to check the state of the PHY. You can check CAP.ISS to know what speed you can expect to see "come alive" on PxSSTS.
After spinning up the port, check PxSIG (pages 35 & 36). The value should be different from when you started. I don't recall now what you can expect it to become, but it will be different. When communication is actually established, the device sends an initial FIS to the host. Without this first FIS, the HBA is unable to communicate with the device. (It's from this first FIS that the HBA sets the bits in PxSIG.)
Finally, after all of this, you'll need to set PxCMD.FRE (page 34). This bit in the port command register is FIS Receive Enable: once set, the HBA may post FISes received from the device into the FIS receive area. While it is low, the HBA won't process FISes coming from the device.
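As a bare outline of the sequence above (not a complete init routine), here is a sketch assuming the AHCI 1.3 register offsets (CAP at ABAR+0x00, GHC at ABAR+0x04, port registers at ABAR+0x100 + 0x80 x port) and a hypothetical mmio_read32/mmio_write32 pair supplied by your OS:

```c
/* Outline of the register dance described above; offsets per AHCI 1.3.
 * mmio_read32/mmio_write32 are hypothetical OS-supplied MMIO accessors. */
#include <stdint.h>

extern uint32_t mmio_read32(uintptr_t addr);              /* hypothetical */
extern void     mmio_write32(uintptr_t addr, uint32_t v); /* hypothetical */

#define GHC_AE    (1u << 31)  /* AHCI Enable */
#define GHC_IE    (1u << 1)   /* global Interrupt Enable */
#define CAP_SSS   (1u << 27)  /* Supports Staggered Spin-up */
#define PXCMD_SUD (1u << 1)   /* Spin-Up Device */
#define PXCMD_FRE (1u << 4)   /* FIS Receive Enable */

void ahci_bring_up_port(uintptr_t abar, int port)
{
    uintptr_t p = abar + 0x100 + 0x80 * (uintptr_t)port;

    /* Put the HBA in AHCI mode with interrupts on (GHC, ABAR + 0x04). */
    mmio_write32(abar + 0x04, mmio_read32(abar + 0x04) | GHC_AE | GHC_IE);

    uint32_t sig_before = mmio_read32(p + 0x24);   /* store PxSIG for later */

    /* With staggered spin-up supported, the port won't negotiate until
     * PxCMD.SUD is set (PxCMD is at port base + 0x18). */
    if (mmio_read32(abar + 0x00) & CAP_SSS)
        mmio_write32(p + 0x18, mmio_read32(p + 0x18) | PXCMD_SUD);

    /* Poll PxSSTS.DET (port base + 0x28, low 4 bits); 3 means a device is
     * present and PHY communication is established. */
    while ((mmio_read32(p + 0x28) & 0xF) != 3)
        ;                                   /* add a timeout in real code */

    (void)sig_before;  /* PxSIG should now differ from the stored value */

    /* Enable FIS receive so the HBA posts incoming FISes to memory. */
    mmio_write32(p + 0x18, mmio_read32(p + 0x18) | PXCMD_FRE);
}
```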
As I said at the beginning, I'm not sure if this will answer all of your questions, but I hope it gets you on the right track. I'm going from memory on the steps needed to communicate effectively with a SATA device and may not have remembered them in full detail.
I hope this helps you.