AHCI Driver for my own OS - drivers

I have been writing a small AHCI driver for two weeks. I have read the osdev.org article and Intel's Serial ATA Advanced Host Controller Interface (AHCI) 1.3 specification. The osdev.org article includes an example showing how to read sectors in DMA mode. I performed that operation (ATA_CMD_READ_DMA, 0xC8) successfully, but when I tried to write sectors (ATA_CMD_WRITE_DMA, 0xCA) to the device, the HBA reported the error
Offset 30h: PxSERR – Port x Serial ATA Error - Handshake Error
(this decoding is from the Intel AHCI specification). I don't understand why this happens. Please help me.
In addition, I have tried to issue the IDENTIFY command (0xEC), but without success...

You asked this question nearly two months ago, so I'm not sure whether you've already figured this out. Please note that I'm writing from memory about what must be done first and so on; I may not remember everything, or remember it accurately, so you should consult the AHCI spec for all of this. The approaches are as varied as the programmers who have implemented them.
For starters, ensure that you've set the HBA state machine accordingly. You'll find references for the state machines supported by the HBA in that same SATA 1.3 spec. Specifically, you should check a few registers.
Please note that all page numbers below refer to the PDF as viewed in Adobe Acrobat; they are 8 pages greater than the numbers printed in the document itself.
From pages 24 and 25 of the spec, check GHC.IE and GHC.AE. These two bits turn on interrupts and ensure that the HBA is working in AHCI mode. Another very important bit to check is CAP.SSS (page 23). If this bit is high, the HBA supports staggered spin-up, which means the HBA will not perform protocol negotiation for any port on its own. Before you do the following, store the value of PxSIG (pages 35 and 36).
To actually spin up a port, you'll need pages 33, 34, and 35 of the spec, which cover the PxCMD register. For each port supported by the HBA (check CAP.NP to see how many there are), set PxCMD.SUD high. After setting that bit, poll PxSSTS (page 36) to check the state of the PHY. You can check CAP.ISS to know what link speed to expect to "come alive" on PxSSTS.
After spinning up the port, check PxSIG (pages 35 and 36). The value should differ from the one you stored earlier; I don't recall exactly what to expect, but it will be different. When communication is actually established, the device sends the host an initial FIS; without this first FIS, the HBA cannot communicate with the device. (It is from this first FIS that the HBA fills in PxSIG.)
Finally, after all of this, you'll need to set PxCMD.FRE (page 34). This bit (FIS Receive Enable) allows the HBA to post FISes received from the device into the port's FIS receive area; while it is low, incoming FISes are not posted to memory, so you will never see the device's responses.
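To make the sequence above concrete, here is a rough sketch in C. The register offsets and bit positions come from the AHCI 1.3 spec, but the function name, the polling strategy, and the timeout handling are my own inventions; treat this as an illustration of the order of operations, not a drop-in implementation:

```c
#include <stdint.h>

/* Per-port register offsets from AHCI 1.3, expressed as 32-bit word indices. */
enum { PxCMD = 0x18 / 4, PxSIG = 0x24 / 4, PxSSTS = 0x28 / 4, PxSERR = 0x30 / 4 };

#define PXCMD_SUD (1u << 1)        /* Spin-Up Device */
#define PXCMD_FRE (1u << 4)        /* FIS Receive Enable */
#define SSTS_DET(x) ((x) & 0xFu)   /* Device Detection field of PxSSTS */

/* Spin up one port and enable FIS reception.
   `port` points at that port's register block; `max_polls` bounds the wait. */
int ahci_port_start(volatile uint32_t *port, int max_polls)
{
    port[PxCMD] |= PXCMD_SUD;               /* begin staggered spin-up */
    while (max_polls-- > 0) {
        if (SSTS_DET(port[PxSSTS]) == 3) {  /* PHY up, device detected */
            port[PxSERR] = port[PxSERR];    /* PxSERR is write-1-to-clear */
            port[PxCMD] |= PXCMD_FRE;       /* only now accept FISes */
            return 0;
        }
    }
    return -1;                              /* PHY never came up */
}
```

A real driver would also delay between polls and check PxSIG afterwards, as described above.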
As I said at the beginning, I'm not sure this answers all of your questions, but I hope it gets you on the right track. I'm going from memory on the sequence of events needed to communicate with a SATA device, and I may not have remembered every detail.
I hope this helps you.

Related

Can I write to an Input Register? Modbus

I've been working for two months on a Modbus project and have now run into a problem.
My client is asking me to write to an input register (address 30001 to 40000).
I thought that was impossible, because all the Modbus documentation says registers 30001 to 40000 are read-only.
Is it even possible to write to those registers? Thanks in advance.
Both the holding-register and input-register function codes carry a 2-byte address field. This means a device can expose 65536 input registers and 65536 holding registers at the same time.
If your client is developing the firmware of the slave, they can place holding registers into the 3xxxx - 4xxxx area. They don't need to follow the memory layout of the original Modicon devices.
If you can afford to diverge from the Modbus standard, it's even possible to increase the number of registers. In one of my projects I considered using the Preset Single Register (06) function as a bank-select command. Of course, you can't call it Modbus anymore, but the master can still access the slave with a standard library or diagnostic tools.
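As an illustration of what such a function 06 write looks like on the wire, here is a sketch in C that builds a Preset Single Register RTU frame, including the standard CRC-16/MODBUS. The function names are mine, and a real master would also send the frame and validate the echoed response:

```c
#include <stdint.h>
#include <stddef.h>

/* CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF. */
uint16_t modbus_crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

/* Build a Preset Single Register (function 06) RTU frame.
   `frame` must hold 8 bytes; returns the frame length. */
size_t build_preset_single(uint8_t slave, uint16_t reg, uint16_t value,
                           uint8_t frame[8])
{
    frame[0] = slave;
    frame[1] = 0x06;                    /* Preset Single Register */
    frame[2] = (uint8_t)(reg >> 8);     /* register address, big-endian */
    frame[3] = (uint8_t)(reg & 0xFF);
    frame[4] = (uint8_t)(value >> 8);   /* value, big-endian */
    frame[5] = (uint8_t)(value & 0xFF);
    uint16_t crc = modbus_crc16(frame, 6);
    frame[6] = (uint8_t)(crc & 0xFF);   /* CRC is sent low byte first */
    frame[7] = (uint8_t)(crc >> 8);
    return 8;
}
```

Note that the 2-byte `reg` field is exactly the 16-bit protocol address mentioned above; what the slave's firmware does with that address (including a bank-select hack) is up to the firmware.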
You can't write to input contacts or input registers; there is no Modbus function to write to them, and they are read-only by definition.
Modbus is a protocol: it never specifies where the values are stored, only how they are transmitted.
There are now devices that support 6-digit addresses and can therefore address up to 65536 registers per group.

Packet generation in PCI/PCIe devices

I have a few questions about PCI/PCIe packet generation and CRC generation and checking. I have searched extensively but could not find a satisfactory answer. Please help me understand the points below.
1. How are packets (TLP, DLLP, and PLLP) formed in a PCI/PCIe system? For example, say the CPU issues a memory read/write to/from a PCIe device (the device being memory-mapped). The request is received by the PCI/PCIe root complex. The root complex generates the TLP, and the DLLP and PLLP are generated and appended to it to form a PCI/PCIe packet. This packet is claimed by one of the root ports based on the MMIO address ranges. Each port on a switch/endpoint generates the DLLP and PLLP and passes the packet to the next device on the link, where they are stripped off and checked for errors.
Q.1 - Is it true that packet generation and checking are done entirely by hardware? What does software contribute to packet generation, and to error checking on the receiving device?
Q.2 - How are the ECRC and LCRC generated for a packet? The LCRC is generated and checked at each PCI/PCIe device/port, while the ECRC is generated only once, by the requester (the root complex in our example). So are ECRC/LCRC generation and checking done completely in hardware? Could someone explain, with an example, how the LCRC/ECRC are generated and checked from the moment the CPU issues a PCI read/write request?
Q.3 - When we say that the "Transaction Layer", "Data Link Layer", and "Physical Layer" generate the TLP, DLLP, and PLLP respectively, are these hardware layers or software layers?
I think that if software were involved every time a packet or CRC is generated or checked, it would slow down the data transfer; hardware can do these tasks much faster.
Please correct me if I am wrong somewhere. I want to understand the above scenarios from a HW vs. SW point of view. Please help.
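For what it's worth, arithmetically both LCRC and ECRC are ordinary 32-bit CRCs over the packet bytes, computed entirely in link hardware. A software model of the calculation might look like the sketch below; note this shows only the polynomial arithmetic (using the common reflected form), while the exact bit and byte ordering PCIe applies on the wire is defined in the PCIe Base Specification:

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-at-a-time CRC-32 using polynomial 0x04C11DB7 (reflected form
   0xEDB88320), the polynomial PCIe uses for LCRC and ECRC. In real
   hardware this runs as parallel XOR logic in a single link clock. */
uint32_t crc32_reflected(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}
```

Software never runs code like this per packet; it only configures the device and handles reported errors, which is why the line rate is unaffected.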

I2C SLAVE NOT RESPONDING XC8

Hey guys, I've been working on this for about 72 hours straight and I can't find the error. I'm working with a PIC16F1719, trying to set up three peripherals: an ADC, I2C, and a USART for communicating with a Bluetooth module. The ADC was easy, but I'm having a rough time with the I2C, despite having checked the code several times. I get the ACKs and everything seems OK, but when I go to read from the sensor (MPU6050), nothing shows up except the value I put in the buffer last time. Any ideas why this is happening? It's like the buffer doesn't clear itself, and I don't think I can clear it in software. Thanks.
An I2C slave can lock the bus if the master does not communicate with it correctly (there are several possible scenarios).
This is electrically possible because the two wires are wired-AND: if any slave pulls the clock (for example) low and keeps it there, the bus is locked.
First, check the levels on both wires (with a scope or DVM); if either reads '0', the bus is locked.
Next, check the status register of your I2C controller; it may show an arbitration error or something of that sort.
If you see any such errors, read the I2C slave's datasheet carefully to check what kinds of protocol reads/writes it expects, and fix your code accordingly.
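On the "expected protocol" point: the MPU6050 requires a repeated START between writing the register pointer and reading the data; issuing a STOP and a fresh START instead is a classic way to keep getting stale data. Here is a sketch of the required sequence in C. The low-level primitives are hypothetical stand-ins (on the PIC16F1719 they would poke the MSSP registers such as SSP1CON2 and SSP1BUF); here they just log the bus events so the sequence can be checked on a host:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical bus primitives; each logs one character for inspection. */
static char bus_log[64];
static void logit(const char *s)  { strcat(bus_log, s); }
static void i2c_start(void)       { logit("S"); }
static void i2c_restart(void)     { logit("R"); }   /* repeated START */
static void i2c_stop(void)        { logit("P"); }
static int  i2c_write(uint8_t b)  { (void)b; logit("W"); return 1; } /* 1 = ACK */
static uint8_t i2c_read(int ack)  { logit(ack ? "a" : "n"); return 0; }

#define MPU6050_ADDR 0x68

/* Burst-read `n` bytes starting at register `reg`.
   The repeated START between the pointer write and the read is essential. */
int mpu6050_read(uint8_t reg, uint8_t *buf, size_t n)
{
    i2c_start();
    if (!i2c_write(MPU6050_ADDR << 1))       { i2c_stop(); return -1; } /* addr+W */
    if (!i2c_write(reg))                     { i2c_stop(); return -1; } /* set pointer */
    i2c_restart();                                     /* NOT stop + start */
    if (!i2c_write((MPU6050_ADDR << 1) | 1)) { i2c_stop(); return -1; } /* addr+R */
    for (size_t i = 0; i < n; i++)
        buf[i] = i2c_read(i + 1 < n);        /* ACK all bytes but the last */
    i2c_stop();
    return 0;
}
```

If your code NACKs too early, skips the repeated START, or forgets to read SSPBUF before the next transfer, you will see exactly the "old value stays in the buffer" symptom described.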

RAW socket send: packet loss

During RAW-socket-based packet send testing, I found some very irritating symptoms.
With the default raw-socket settings (especially the SO_SNDBUF size), the socket sends 100,000 packets without problems, but it takes about 8 seconds to send them all, and the packets are correctly received by the receiver process. That is about 10,000 pps (packets per second), which is much lower than I expected.
To increase the pps, I increased the send buffer size by adjusting /proc/sys/net/core/{wmem_max, wmem_default}. After increasing these two system parameters, I saw the irritating symptom: the 100,000 packets are sent promptly, but only about 3,000 of them are received by the receiver process (located at a remote node).
On the sending Linux box (CentOS 5.2), I ran netstat -a -s and ifconfig. netstat shows that 100,000 requests were sent out, but ifconfig shows that only 3,000 packets were actually transmitted.
I want to know why this happens, and how I can solve it (of course, I don't know whether it is really a problem).
Could anybody give me some advice, examples, or references for this problem?
Best regards,
bjlee
You didn't say what size your packets were, or anything about your network, NIC, hardware, or the remote machine receiving the data.
I suspect that instead of playing with /proc/sys settings, you should be using ethtool to adjust the number of ring-buffer entries, not necessarily the size of those buffers.
Also, this page is a good resource.
I have just been working with essentially the same problem. I accidentally stumbled across an entirely counter-intuitive answer that still doesn't make sense to me, but it seems to work.
I was trying larger and larger SO_SNDBUF sizes and losing packets like mad. By accidentally exceeding my system-defined maximum, I caused the SO_SNDBUF size to be set to a very small number instead, but oddly enough, I no longer had the packet-loss issue. So I intentionally set SO_SNDBUF to 1, which again resulted in a very small effective size (I'm not sure, but I think it was actually set to something like 1 KB), and amazingly enough, still no packet loss.
If anyone can explain this, I would be most interested in hearing it. In case it matters, my version of Linux is RHEL 5.11 (yes, I know, I'm a bit behind the times).
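One detail worth knowing when experimenting like this: the kernel does not set SO_SNDBUF to exactly what you ask for. Linux doubles the requested value (to account for bookkeeping overhead) and clamps it between a built-in minimum and wmem_max, so "setting it to 1" really means "setting it to the minimum". A small sketch to observe the effective value (a plain UDP socket is used here since raw sockets need root; the function name is mine):

```c
#include <sys/socket.h>
#include <unistd.h>

/* Request a send-buffer size and return what the kernel actually set.
   Linux doubles the request and clamps it to [minimum, wmem_max]. */
int effective_sndbuf(int requested)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);  /* UDP stand-in for a raw socket */
    int got = 0;
    socklen_t len = sizeof got;
    if (s < 0)
        return -1;
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &requested, sizeof requested);
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, &got, &len);
    close(s);
    return got;
}
```

Printing `effective_sndbuf(1)` on a typical Linux box shows a few kilobytes, not 1; comparing that number against your wmem settings can help make sense of the loss behavior.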

Omron PLC Ethernet card

I have an Ethernet card in an Omron PLC. Is there any way to check automatically whether the Ethernet card is working? If not, is there a manual way? For example, if the card itself failed, the PLC would raise an error; but if the card merely lost its connection to the server, it would NOT raise an error. Any help on how to do this?
There are several types of errors you can check for, and the way you do it depends on the type of error. Things you can check:
ETN unit Error Status (found at PLC CIO address CIO 1500 + (25 × unit number) + 18)
What it reports: errors in IP configuration, routing, DNS, mail, network services, etc.
See : Manual Section 8-2
The ETN unit also keeps an internal error log (manual section 8-3) that you can read out to your HMI software (if you use it) using FINS commands. This documents all manner of errors internal to the ETN unit.
There are also other memory reservations in the PLC for CPU bus devices (like the ETN unit) which provide basic status flags you can include in ladder logic to raise alarms, etc. (See section 4-3 : Auxiliary Area Data).
These flags indicate, for example, whether the unit is initializing, has initialized successfully, or is ready to execute network commands, and whether the last executed command completed OK or returned an error code (which can be read from the error log, as above). They can tell you whether the PLC is unable to communicate properly with the ETN device.
You can implement a single byte location that the server auto-increments every second. Then, every few seconds, your PLC logic checks whether the old reading equals the new reading; if it does, you trigger an alarm indicating that the physical server (which is the communication client) is no longer communicating with the PLC's Ethernet card.
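The heartbeat check described above can be sketched in C (on the PLC side this would really be ladder logic or structured text; the type, names, and scan-count threshold here are illustrative):

```c
#include <stdint.h>

/* State for one heartbeat watchdog. */
typedef struct {
    uint8_t last;        /* last counter value seen */
    int stale_scans;     /* consecutive checks with no change */
} heartbeat_t;

/* Call once per check interval with the current counter byte.
   Returns 1 (alarm) once the counter has not moved for `limit` checks. */
int heartbeat_check(heartbeat_t *hb, uint8_t current, int limit)
{
    if (current != hb->last) {   /* server is alive and incrementing */
        hb->last = current;
        hb->stale_scans = 0;
        return 0;
    }
    return ++hb->stale_scans >= limit;
}
```

Using a threshold of several checks (rather than alarming on the first repeat) tolerates the case where the PLC scan happens to sample twice between server increments.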