Initially, I used an eBus SDK that supports 8-bit register addresses for I2C. This SDK does not support 16-bit register addresses for I2C. Is there any alternative to this SDK that supports 16-bit register addresses for I2C?
Best wishes and thank you in advance
There are a few concepts to clear up based on the other comments. All I2C devices ONLY support 7-bit (8 bits including the read/write bit) and 10-bit slave addressing. This, however, was not the concept asked about in the topic.
I2C, per the protocol specifications, reads/writes in sets of 8 bits followed by an Acknowledgement (ACK/NACK) from the device receiving the data. How the device interprets the bits read/written to it can vary greatly from device to device.
From my personal experience, I have found that often a larger register address -- such as 0x1234 -- simply means that you need to read/write registers 0x12 and 0x34. Both registers will hold 8 bits of information, which together form the actual 16-bit word referenced by the hexadecimal 0x1234.
As I mentioned though, this can vary per device. You will likely need to read through the data sheets/manuals for your specific I2C device for more information on its register addressing, to ensure you read/write the right registers and assemble the individual 8-bit values in the correct order to extract the corresponding 16-bit word.
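To make that assembly step concrete, here is a minimal sketch in C (the helper name is hypothetical, and whether the first byte read is the high or low half is an assumption you must confirm against the datasheet):

#include <stdint.h>

/* Combine the two 8-bit register values read from 0x12 and 0x34 into the
 * 16-bit word the datasheet refers to as 0x1234.  The byte order shown
 * (first byte = high byte) is device-specific -- check the datasheet. */
uint16_t assemble_word(uint8_t high_byte, uint8_t low_byte)
{
    return (uint16_t)(((uint16_t)high_byte << 8) | low_byte);
}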
As MrHappyAsthma correctly pointed out, I2C is organized in 8-bit transfers. You must study the documentation of your device. Search for something like setting an internal 16-bit address by writing two bytes, and then doing the read (one or two bytes, as you mentioned). It would look something like this:
// register read scenario (0x12 will be your 8-bit API address, and you attach 0x34 to the data part of your API)
DO WRITE: |S| slave address |W| write 0x12 | write 0x34 |S| (be careful with the byte ordering)
DO READ: |S| slave address |R| read 1st byte | read 2nd byte |S| (if 16-bit data)
// register write scenario (0x12 will be your 8-bit API address; send 3 bytes of data, where the first byte is the LSB of the register address, 0x34)
DO WRITE: |S| slave address |W| write 0x12 | write 0x34 | write data 1st byte | write data 2nd byte |S| (if 16-bit data)
Check the documentation for your slave device. If you are able to use your API to force such transfers, you can trick the device into giving you what you need.
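If your platform lets you issue raw transfers, the sequence above maps onto two I2C messages: a write carrying the two address bytes, then a read. Below is a minimal C sketch using the Linux i2c-dev interface purely as an illustration (the bus path, slave address 0x50, and register 0x1234 are placeholder assumptions; your SDK's API will differ):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

int read_reg16(int fd, uint8_t slave, uint16_t reg, uint8_t *buf, uint16_t len)
{
    uint8_t addr[2] = { (uint8_t)(reg >> 8), (uint8_t)(reg & 0xFF) };

    struct i2c_msg msgs[2] = {
        { .addr = slave, .flags = 0,        .len = 2,   .buf = addr }, /* |S|W| 0x12 | 0x34 | */
        { .addr = slave, .flags = I2C_M_RD, .len = len, .buf = buf  }, /* |S|R| data ...     | */
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };

    return ioctl(fd, I2C_RDWR, &xfer);  /* negative on failure */
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    uint8_t data[2];
    if (fd >= 0 && read_reg16(fd, 0x50, 0x1234, data, sizeof data) >= 0) {
        /* data[0] and data[1] now hold the 16-bit value (order per datasheet) */
    }
    if (fd >= 0) close(fd);
    return 0;
}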
It seems to be impossible for some to understand the difference between the I2C external address, the internal address, and the data.
The original question is simply: "How do you read/write an I2C register whose INTERNAL ADDRESS is 16-bit?"
smbus handles only 8-bit internal addresses, so it is impossible to read/write anything but the first 256 bytes of an I2C EEPROM.
There is a lot of talk about at24, but so far I have not found anything describing how to use or even install it.
Nine years have gone by since the original question, and there is still no way to address 16-bit REGISTER addresses. Everybody already understands the difference between device and register addresses, so please stop repeating the comments about '7-bit address and one R/W bit' when somebody asks about 16-bit addressing.
This should be handled by the Linux kernel specialists, because there are so many large I2C memories. For example, I myself am fighting with a board of FeRAM chips.
The next thing is to move the board to the general-purpose I/O pins of the RasPi and write the whole program from scratch.
Network byte order, used e.g. by TCP, is big-endian. This doesn't affect the actual payload the user is sending over the network, does it? This concerns e.g. the 16-bit port number and the 32-bit IPv4 address, which TCP exchanges itself and for which it thus requires the participants to agree on endianness.
In other words: assuming two participants with machines of the same endianness and a simple setup with TCP sockets, there is no need to convert anything in terms of data/payload, right?
I'm just a little confused as there is a lot of talk regarding endianness conversion with regards to network byte order. For example, in IBM's docs (IBM Docs Network Byte Order) it says:
The TCP/IP standard network byte order is big-endian. In order to participate in a TCP/IP network, little-endian systems usually bear the burden of conversion to network byte order.
To me this sounds like the conversion depends on the network byte order, when in fact the only thing that matters is the endianness of the participating machines, doesn't it?
This doesn't affect the actual payload the user is sending over the network, does it?
TCP and UDP just transport bytes without any inherent meaning. How the payload needs to be interpreted, and whether endianness is relevant, is part of the application layer, i.e. it depends on the application protocol spoken.
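A small C sketch of that distinction (the address 192.0.2.1 and port 8080 are placeholders): the port in the socket address is a protocol header field and goes through htons(), while the payload bytes are handed to send() untouched.

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return 1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                    /* header field: convert to network order */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char payload[] = "raw bytes, no conversion applied";
        send(fd, payload, sizeof payload - 1, 0);   /* payload: sent exactly as given */
    }
    close(fd);
    return 0;
}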
I've been working for 2 months on a Modbus project and now I've run into a problem.
My client is asking me to write to an input register (address 30001 to 40000).
I thought that was not something I could do, because all the Modbus documentation says that registers 30001 to 40000 are read-only.
Is it even possible to write to those registers? Thanks in advance.
Both holding and input register related functions contain a 2-byte address value. This means that you can have 65536 input registers and 65536 holding registers in a device at the same time.
If your client is developing the firmware of the slave, they can place holding registers into the 3xxxx - 4xxxx area. They don't need to follow the memory layout of the original Modicon devices.
If one can afford to diverge from the Modbus standard, it's even possible to increase the number of registers. In one of my projects, I considered using the Preset Single Register (06) function as a bank-select command. Of course, you can't call it Modbus anymore, but the master can still access the slave using a standard library or diagnostic tools.
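For illustration, here is a minimal C sketch of a Modbus TCP "Write Single Register" (function 06) request, showing the 2-byte register address field that gives you the full 0x0000-0xFFFF range (the transaction and unit IDs are placeholders):

#include <stdint.h>
#include <stddef.h>

/* Build a Modbus TCP Write Single Register request into out[] (12 bytes).
 * All multi-byte fields are big-endian per the Modbus specification. */
size_t build_write_single_register(uint8_t *out, uint16_t tid,
                                   uint8_t unit, uint16_t addr, uint16_t value)
{
    out[0] = tid >> 8;   out[1] = tid & 0xFF;     /* transaction ID        */
    out[2] = 0;          out[3] = 0;              /* protocol ID = 0       */
    out[4] = 0;          out[5] = 6;              /* bytes remaining       */
    out[6] = unit;                                /* unit identifier       */
    out[7] = 0x06;                                /* function 06           */
    out[8] = addr >> 8;  out[9]  = addr & 0xFF;   /* 2-byte register address */
    out[10] = value >> 8; out[11] = value & 0xFF; /* 2-byte register value   */
    return 12;
}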
You can't write to Input Contacts or Input Registers: there is no Modbus function to write to them; they are read-only by definition.
Modbus is a protocol, and it in no way specifies where the values are stored, only how they are transmitted.
Currently there are devices that support 6-digit addresses and can therefore address up to 65536 registers per group.
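As a reference point, here is a short C sketch of the traditional Modicon-style mapping between 5-digit entity numbers and the 0-based address actually sent in the PDU; as noted above, real devices are free to deviate from this layout:

#include <stdint.h>

/* Input register entity 30001 is read with function 04 at PDU address 0. */
uint16_t input_register_pdu_address(uint32_t entity)   /* 30001..39999 */
{
    return (uint16_t)(entity - 30001);
}

/* Holding register entity 40001 is read/written (03/06/16) at PDU address 0. */
uint16_t holding_register_pdu_address(uint32_t entity) /* 40001..49999 */
{
    return (uint16_t)(entity - 40001);
}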
OK, here's what I mean:
Let's say you want to write your own bootable code.
Further, your code is going to be really simple.
So simple, in fact, that it only consists of a single instruction.
Your bootable code is going to write a byte or word or double word or whatever to a register or RAM location on a peripheral device, not main RAM or a CPU register.
How do you find out what address(es) have been assigned to that peripheral memory location by the BIOS / UEFI?
Here's a more concrete example:
My bootable code's first and only instruction will write the number 11H to a register located on the sound card.
If the BIOS / UEFI initialization code did its job properly, that sound card register should be mapped into the CPU's memory space and/or IO space.
I need to find that address to accomplish that write.
How do I find it?
This is what real operating systems must do at some point.
When you open control panel / device manager in Windows, you see all the memory ranges for peripherals listed there.
At some point, Windows must have queried the BIOS / UEFI to find this data.
Again, how is this done?
EDIT:
Here is my attempt at writing this bootable assembly program:
BITS 16
ORG 7C00h               ; the BIOS loads a boot sector at 0000:7C00, not 100h
start:
    ;I want to write a byte into a register on the sound card or NIC or
    ;whatever. So, I'm using a move instruction to accomplish that where X
    ;is the register's memory mapped or IO mapped address.
    mov byte [X], 11h   ; a memory store needs brackets and a size specifier
    jmp $               ; hang so we don't fall through into the padding
times 510 - ($ - $$) db 0
dw 0xaa55
What number do I put in for X? How do I find the address of this peripheral's register?
If you want to do this with one instruction, you can just get the address for the device from the Windows device manager. But if you want to do it the "proper" way, you need to scan the PCI bus to find the device you want to program, and then read the Base Address Registers (BARs) of the device to find its MMIO ranges. This is what Windows does; it doesn't query the BIOS.
To find the device that you want to access, scan the PCI bus looking for the device. Devices are addressed on the PCI bus by their "BDF" (short for Bus/Device/Function). Devices are identified by a Vendor ID and a Device ID assigned by the vendor.
Read offset 0 and 2 of each BDF to get the Vendor ID and Device ID. When you have found the device you want to program, read the correct 32-bit BAR value at an offset between 10h and 24h. You need to know which BAR contains the register you want to program, which is specific to the device you are using.
This article describes how to access PCI config space and has sample code in C showing how to scan the PCI bus. http://wiki.osdev.org/PCI
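A compact C sketch of such a scan using the legacy 0xCF8/0xCFC configuration mechanism is below. The outl()/inl() calls assume x86 port I/O is available (e.g. <sys/io.h> with iopl() privileges on Linux, or your own routines in a bare-metal loader), and the vendor/device IDs are placeholders for the card you are looking for.

#include <stdint.h>
#include <stdio.h>
#include <sys/io.h>

#define WANT_VENDOR 0x8086  /* placeholder vendor ID */
#define WANT_DEVICE 0x1234  /* placeholder device ID */

static uint32_t pci_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xFC);
    outl(addr, 0xCF8);          /* select bus/device/function/offset */
    return inl(0xCFC);          /* read the 32-bit config word       */
}

int main(void)
{
    if (iopl(3) != 0) return 1;             /* need port-I/O privileges */

    for (int bus = 0; bus < 256; bus++)
        for (int dev = 0; dev < 32; dev++)
            for (int fn = 0; fn < 8; fn++) {
                uint32_t id = pci_read32(bus, dev, fn, 0x00);
                if ((id & 0xFFFF) != WANT_VENDOR || (id >> 16) != WANT_DEVICE)
                    continue;
                /* BAR0 lives at offset 0x10; BARs 1-5 follow at 0x14..0x24 */
                printf("found at %02x:%02x.%x, BAR0 = 0x%08x\n",
                       bus, dev, fn, pci_read32(bus, dev, fn, 0x10));
            }
    return 0;
}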
So far I thought they were the same, since bytes are made of bits, and that both sides need to know the byte size and endianness of the other side and transform the stream accordingly. However, Wikipedia says that a byte stream != a bit stream (https://en.wikipedia.org/wiki/Byte_stream) and that bit streams are specifically used in video coding (https://en.wikipedia.org/wiki/Bitstream_format). In this RFC (https://www.rfc-editor.org/rfc/rfc107) they discuss these two things and describe "Two separate kinds of inefficiency arose from bit streams." My questions are:
what's the real difference between a byte stream and a bit stream?
how does a bit stream work, if it's different from a byte stream? How does a receiving side know how many bits to process at a given time?
why is a bit stream better than a byte stream in some cases?
This is a pretty broad question, so I'll have to give the 10,000-foot view. Bit streams are common in two distinct usages:
very low-level: it is the fundamental way that lots of hardware operates. The best examples are the data stream that comes off a hard disk or an optical disk, or the data sent across a transmission line, like a USB cable, the coax cable, or the telephone line through which you received this post. The RFC you found applies here.
high-level: they are common in data compression, where a variable number of bits per token allows packing data tighter. Huffman coding is the most basic way to compress. The video encoding subjects you found apply here.
what's the real difference between a byte stream and a bit stream?
Byte streams are highly compatible with computers, which are byte-oriented devices and the ones you'll almost always encounter in programming. Bit streams are much lower-level; only system integration engineers ever worry about them. While the payload of a bit stream is often the bytes that a computer is interested in, more overhead is typically required to ensure that the receiver can properly interpret the data, so there are usually a lot more bits than necessary to encode just the bytes. Extra bits are needed to ensure that the receiver stays synchronized and can detect, and perhaps correct, bit errors. NRZ encoding is very common.
The RFC is quite archeological; in 1971 they were still hammering out the basics of getting computers to talk to each other. Back then they were still close to the transmission-line behavior, a bit stream, and many computers did not yet agree on 8 bits in a byte. They were fretting over the cost of converting bits to local bytes on very anemic hardware and the need to pack as many bits into a message as possible.
How does a receiving side know how many bits to process at a given time?
The protocol determines that, like that RFC does. In the case of a variable-length bit encoding, it is the bit values themselves that determine it, as in Huffman coding.
why is a bit stream better than a byte stream in some cases?
Covered already, I think: because it is a better match for its purpose, either because the hardware is bit-oriented or because variable bit-length coding is useful.
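To sketch the high-level case in C: a minimal bit reader that pulls an arbitrary number of bits at a time from a byte buffer, which is exactly what a Huffman decoder does (each decoded symbol determines how many bits to pull next). MSB-first order is an assumption here; real formats define their own.

#include <stdint.h>
#include <stddef.h>

struct bitreader {
    const uint8_t *data;
    size_t len;     /* buffer length in bytes */
    size_t pos;     /* bits consumed so far   */
};

/* Return the next n bits (n <= 16) as an integer, or -1 past the end. */
int get_bits(struct bitreader *br, unsigned n)
{
    if (br->pos + n > br->len * 8) return -1;
    unsigned value = 0;
    while (n--) {
        unsigned byte = br->pos / 8, bit = 7 - (br->pos % 8);
        value = (value << 1) | ((br->data[byte] >> bit) & 1u);
        br->pos++;
    }
    return (int)value;
}

A Huffman decoder built on this would call get_bits(br, 1) repeatedly, walking its code tree until it reaches a leaf; the code itself tells the reader when to stop.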
A bit is a single 1 or 0 in computer code, also known as a binary digit.
The most common use for the bit stream is with the transmission control protocol, or TCP. This series of guidelines tells computers how to send and receive messages between each other. The World Wide Web and e-mail services, among others, rely on TCP guidelines to send information in an orderly fashion. Sending through the bit stream ensures the pieces arrive in the proper order and the message isn't corrupted during delivery, which could make it unreadable. So a bit stream sends one bit after another.
Eight bits make up a byte, and the byte stream transmits these eight-bit packets from computer to computer.
The packets are decoded upon arrival so the computer can interpret them. Thus a byte stream is a special case of bits sent together as a group in sequential order. For a byte stream to be most effective, it flows through a dedicated and reliable path sometimes referred to as a pipe, or pipeline.
When it comes to sending a byte stream over a computer network, a reliable bi-directional transport-layer protocol, such as the transmission control protocol (TCP) used on the Internet, is required; these are referred to as byte stream protocols. Other serial data protocols used with certain types of hardware components, such as the universal asynchronous receiver/transmitter (UART) technique, provide a serial data channel that also uses a byte stream for communication. In this case the byte, or character, is packaged up in a frame on the transmitting end, where an extra starting bit and some optional checking bits are attached, and then separated back out of the frame on the receiving end. This technique is sometimes referred to as a byte-oriented protocol.
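A tiny C sketch of that framing, assuming the common 8N1 configuration (one start bit, eight data bits LSB-first, one stop bit, no parity), so 10 bits travel on the wire for every 8 payload bits:

#include <stdint.h>

/* Returns the 10 frame bits in the low bits of the result,
 * in transmission order from bit 0 upward. */
uint16_t uart_frame_8n1(uint8_t byte)
{
    uint16_t frame = 0;                 /* bit 0: start bit = 0       */
    frame |= (uint16_t)byte << 1;       /* bits 1..8: data, LSB first */
    frame |= 1u << 9;                   /* bit 9: stop bit = 1        */
    return frame;
}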
Taking a general life example: suppose you have a lot of matchsticks to send. You could send them one stick after the other, one at a time, or you could pack a few of them in a matchbox and send them together, one matchbox after the other in sequence. The first is like a bit stream and the latter like a byte stream.
Thus it all depends on what the hardware wants or is best suited for. If your hand is small and you can't accept matchboxes but you still want matchsticks, then you take them one at a time; otherwise take the box. Byte streams are also better in the sense that every bit does not need to be checked individually: data can be sent in batches of 8, and if any of it fails, the entire 8 bits can be resent.
To add to the other good answers here:
A byte stream is a type of bit stream. A byte stream describes the bits as meaningful "packages" that are 8 bits wide.
Certain (especially low-level) streams may be agnostic of the meaning in each 8-bit sequence. It would be a poor description to call these "byte streams".
Similar to how every Honda Civic is a car, but not every car is a Honda Civic...
I can't understand the meaning of the following: "The USART Transmit Data buffer register (TXB) and the USART Receive Data buffer register (RXB) share the same I/O address." There are two data registers. How can they share the same address?
Now it's clear
From the diagram you can see that the transmitter and receiver share the UDR (UART Data Register). Actually, they only share the UDR address: the "real" register is divided into a transmitter and a receiver register so that received data cannot overwrite data being written into the transmit register. Consequently, you can't read back data you wrote into the transmitter register.
The register address is the same for both TXB and RXB, and the actual register addressed is determined by the mode the UART is in (reading or writing). This depends on the actual implementation, but usually it consists of setting one or two more pins.
You can consider the UDR register as the buffer between the TXD and RXD registers.
As you know, UART data is sent bit by bit on the bus. While receiving, the bits enter the RXD register; when the whole byte has been received, it is copied to the UDR register and the flag is raised. Now you should read the UDR register, because if you write to it you will lose the received byte!
In the same way for transmission: you write a byte into UDR, it is moved to TXD and then shifted out of the register bit by bit, and UDR is empty during the transmission.
That's why there is an interrupt for UDR, when UDR becomes empty, and an interrupt for TXD, when the transmission is completed.
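A short C sketch of the shared address in action, assuming an ATmega328P and <avr/io.h>: the same name UDR0 is written for transmit and read for receive, but two different hardware buffers sit behind it.

#include <stdint.h>
#include <avr/io.h>

void uart_send(uint8_t byte)
{
    while (!(UCSR0A & (1 << UDRE0)))   /* wait until the transmit buffer is empty */
        ;
    UDR0 = byte;                       /* write: goes to the transmit buffer */
}

uint8_t uart_recv(void)
{
    while (!(UCSR0A & (1 << RXC0)))    /* wait until a byte has been received */
        ;
    return UDR0;                       /* read: comes from the receive buffer */
}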