Data rate/Line rate on the Ethernet interface

I have a question about the data rate of the Ethernet interface and hope someone can give me some hints on it.
I know the calculation method of the PCIe interface, for example, PCIe Gen3 X1 lane:
The data rate of single-lane should be
8 Gb/s (Gen3 line rate) * 2 (TX/RX, full-duplex) / 8 (to Byte) = 2 GB/s
(128b/130b encoding is ignored)
So, how do we calculate the data rate of an ethernet interface?
Take 1000base-T for example: we have 4 twisted pairs that together add up to the 1 Gb/s data rate.
So one pair should provide a 250Mb data rate. It’s full-duplex so TX/RX provides 125Mb each at the same time. With that being said, the “line rate” of a 1000base-T interface is 125MHz (125Mb).
Do I understand the speeds on the ethernet interface correctly?

how do we calculate the data rate of an ethernet interface?
Ethernet's nominal bit rate is generally defined at the top of the physical layer (L1). It includes preamble, SOF and IPG, but excludes all PHY-specific line encoding (PCS and PMA).
This is done to make all PHY variants of the same speed 100% compatible with each other. You can convert 1000BASE-T to 1000BASE-LX to 1000BASE-SX and back to 1000BASE-T without any buffer drops.
It’s full-duplex so TX/RX provides 125Mb each at the same time.
No - the nominal bit rate runs in each direction simultaneously on full-duplex links. Each 1000BASE-T lane transports 250 Mbit/s worth of "user" data.
With that being said, the “line rate” of a 1000base-T interface is 125MHz (125Mb).
Since the line rate is (usually) the PHY rate, it's 1000 Mbit/s: four lanes of 250 Mbit/s each.
1000BASE-T does use a symbol rate of 125 MBaud since its PAM-5 modulation transports more than two bits per symbol. You might think that PAM-4 with exactly two bits would be sufficient, but the line-code overhead eats up the rest. 1000BASE-T is already quite complex: it uses four-dimensional trellis-coded modulation plus scrambling to get across the wire (to produce a self-clocking signal, improve the signal-to-noise ratio and eliminate excess DC).
The 1000BASE-X PHYs for fiber are much simpler. The PCS uses 8b10b to produce a binary stream of 1.25 GBd that can be directly used to modulate the laser.
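To tie the numbers together, here is a small Python sketch of the calculations above (nominal figures only; the PCIe line ignores 128b/130b encoding just as in the question):

```python
import math

# PCIe Gen3 x1, as in the question: 8 Gb/s line rate per direction,
# counted for both directions and converted to bytes.
pcie_line_rate_gbps = 8
pcie_total_GBps = pcie_line_rate_gbps * 2 / 8
print(f"PCIe Gen3 x1, both directions: {pcie_total_GBps} GB/s")   # 2.0 GB/s

# 1000BASE-T: nominal rate is 1000 Mbit/s in EACH direction (full duplex),
# carried over four lanes (twisted pairs).
nominal_mbps = 1000
lanes = 4
per_lane_mbps = nominal_mbps / lanes          # 250 Mbit/s per pair, per direction
print(f"Per-lane data rate: {per_lane_mbps} Mbit/s")

# The symbol rate on each pair is 125 MBaud; PAM-5 carries log2(5) ≈ 2.32
# raw bits per symbol, enough for 250 Mbit/s of user data plus line-code overhead.
symbol_rate_mbaud = 125
raw_capacity_mbps = symbol_rate_mbaud * math.log2(5)
print(f"Raw per-pair capacity: {raw_capacity_mbps:.0f} Mbit/s (> 250 needed)")
```

The takeaway is that 125 MBaud is a symbol rate, not a data rate: each pair still carries 250 Mbit/s of user data in each direction.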

Related

Profibus synchronisation using Linux (Raspberry Pi)

I am planning to develop a simple Profibus master (FDL level) in Linux, more specifically on a Raspberry Pi. I have an RS485 transceiver based on a MAX 481. The master must work on a bus where there are multiple masters.
According to the Profibus specification, you must count the number of '1' bits on the bus to determine when it is time to rotate the access token. Specifically after 11 '1' bits the next frame starts. 11 bits is also exactly one frame.
In Linux, how can I detect these 11 '1' bits? They won't be registered by the driver as there is no start bit. So I need a stream of bits, instead of decoded bytes.
What would be the best approach?
Unfortunately, making use of a microcontroller/microprocessor UART is a BAD choice.
You can generate the 11-bit character by setting START_BIT, STOP_BIT, and PARITY_BIT (even) in your microcontroller's UART peripheral. With luck you will receive whole bytes from a datagram without losses.
However, a PROFIBUS DP datagram is up to 244 bytes, and PROFIBUS DP requires NO idle bits between bytes during datagram transmission. You need UART hardware or a UART microcontroller peripheral with a FIFO or register that supports up to 244 bytes, which is very uncommon, since this requirement is very specific to PROFIBUS.
Another aspect is baud-rate compatibility. Usually, the whole range of PROFIBUS DP baud rates is not fully available on common microcontroller UARTs.
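As a purely illustrative sketch (the helper functions below are hypothetical, not part of any Profibus stack), this is what the 11-bit character and the 11-idle-bit detection from the question look like at the bit level - exactly the information a byte-oriented UART driver does not give you:

```python
def profibus_char(byte: int) -> list[int]:
    """Build the 11-bit UART character Profibus uses:
    1 start bit (0), 8 data bits LSB first, even parity, 1 stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]   # LSB first
    parity = sum(data) % 2                        # even parity bit
    return [0] + data + [parity] + [1]

def find_sync(bits: list[int], idle_bits: int = 11) -> int:
    """Return the index right after the first run of `idle_bits`
    consecutive '1' (idle) bits, or -1 if no such run exists."""
    run = 0
    for i, b in enumerate(bits):
        run = run + 1 if b == 1 else 0
        if run == idle_bits:
            return i + 1
    return -1

# Example: an idle line, then one character carrying 0x68 (the SD2 start delimiter).
stream = [1] * 11 + profibus_char(0x68)
print(find_sync(stream))        # -> 11: the next frame starts here
print(profibus_char(0x68))      # 11 bits total
```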
My suggestions:
Implement this UART part on an FPGA and interface it with the Raspberry Pi using e.g. SPI. You can decide how much of the PROFIBUS stack to 'outsource' to the FPGA and how much to keep on the RPi.
Use an ASIC (maybe the ASPC2, though it is outdated) and add another compatible processor to implement the deterministic portion of the stack. Later you can interface this processor with your RPi.
Implement it using a processor dedicated to industrial communication (like the TI Sitara AM335x).

ADXL345 Accelerometer data use on I2C (Beaglebone Black)

Background Information
I am trying to make sure I will be able to run two ADXL345 Accelerometers on the same I2C Bus.
To my understanding, the bus can transmit up to 400k bits/s on fast mode.
In order to send 1 byte of data, there are 20 extra bits of overhead.
There are 6 bytes per accelerometer reading (XLow, XHigh, YLow, YHigh, ZLow, ZHigh)
I need to do 1000 readings per second with both accelerometers
Thus,
My total data used per second is 336k bits/s which is within my limit of 400k bits/s.
I am not sure if I am doing these calculations correctly.
Question:
How much data am I transmitting per second with two accelerometers reading 1000 times per second on i2c?
Your math seems to be a bit off. For this accelerometer (from the datasheet: https://www.sparkfun.com/datasheets/Sensors/Accelerometer/ADXL345.pdf), in order to read the 6 bytes of XYZ sample data you need to perform a 6-byte burst read of the registers. In terms of data transfer, this means a write of the data register address (0x32) to the accelerometer, then a burst read of 6 bytes. Each of these two transfers requires first sending the I2C device address and the R/W bit, plus an ACK/NAK per byte (including the address bytes), plus START/REPEATED START/STOP conditions. So, overall, an individual transfer to get a single sample (i.e., a single XYZ acceleration vector) is as follows:
Start (*) | Device Address: 0x1D (7) | Write: 0 (1) | ACK (1) | Register Address: 0x32 (8) | ACK (1) | Repeat Start (*) | Device Address: 0x1D (7) | Read: 1 (1) | ACK (1) | DATA0 (8) | ACK (1) | DATA1 (8) | ACK (1) | ... | DATA5 (8) | NAK (1) | Stop (*)
If we add all that up, we get 81 + 3 bits that need to be transmitted. Note first that the START, REPEATED START and STOP might not actually take a bit's worth of time each, but for simplicity we can assume they do. Note also that while the device address is only 7 bits, you always need to append the READ/WRITE bit, so an I2C address phase is always 8 bits + ACK/NAK, i.e. 9 bits in total, just like a data byte. Note also that the I2C max transfer rate really defines the max SCL speed the device can handle, so in fast mode the SCL is at most 400 kHz (thus 400 kbit/s at most, but because of the protocol overhead you'll get less real data). Thus, 84 bits at 400 kHz means we can transfer a sample in 0.21 ms, or ~4700 samples/sec assuming no gaps or breaks in transmission.
Since you need to read 2 samples every 1 ms (2 accelerometers, so 84 bits * 2 = 168 bits per sampling interval, or 168 kbit/s at a 1 kHz sampling rate), this should at least be possible in fast-mode I2C. However, you will need to be careful that you are making full use of the I2C controller. Depending on the software layer you are working at, it might be difficult to issue I2C burst reads fast enough (i.e., 2 burst-read transactions within 1 ms). Using the FIFO on the accelerometer would significantly relax the latency requirement, meaning that instead of having 1 ms to issue two burst reads, you can delay up to 32 ms to issue 64 burst reads (since you have 2 accelerometers); but since you need to issue a new burst read to read the next sample, you'll have to be careful about the delay introduced by software between calls to whatever API you're using to perform the I2C transactions.
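To make the arithmetic reproducible, here is a small Python sketch of the same bit counting (idealized timings, assuming no clock stretching or software gaps, mirroring the transfer layout shown above):

```python
I2C_FAST_MODE_HZ = 400_000   # SCL clock in fast mode

# One sample read = address+W, register pointer (DATAX0, 0x32), repeated start,
# address+R, then 6 data bytes; each byte on the bus is 8 bits + ACK/NAK.
addressed_bytes = 2          # device address+W, device address+R
register_bytes = 1           # register pointer byte
data_bytes = 6               # X/Y/Z low and high bytes
framing_bits = 3             # START + REPEATED START + STOP, ~1 bit each

bits_per_sample = (addressed_bytes + register_bytes + data_bytes) * 9 + framing_bits
print(bits_per_sample)                        # 84 bits

time_per_sample = bits_per_sample / I2C_FAST_MODE_HZ
print(f"{time_per_sample * 1e3:.2f} ms")      # ~0.21 ms per sample

print(f"{1 / time_per_sample:.0f} samples/s") # ~4760 back-to-back

# Two accelerometers at 1000 samples/s each:
required_bps = bits_per_sample * 2 * 1000
print(f"{required_bps} bit/s of bus time")    # 168000 < 400000, so it fits
```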

What's meant by Lane in Ethernet connection speed?

In Ethernet terminology I came across the word "lane". Is this speed 4 lanes or 2 lanes?
I searched Google, but I didn't get any useful results.
Can anyone explain what a lane is, its width, and its relation to speeds?
Some Ethernet flavors use multiple lanes within a link. E.g. 10BASE-T and 100BASE-TX use one dedicated lane (one twisted pair) in each direction while 1000BASE-T and 2.5/5/10/25/40GBASE-T use four lanes bidirectionally.
Each lane transports its fraction of the total data rate - for 1000BASE-T, each lane/pair effectively transports 250 Mbit/s. Due to PHY encoding, this doesn't match the physical signal rate (e.g. 125 MBd for 1000BASE-T).
For copper cables, each lane is represented by a twisted pair. With fiber cables, a lane can be a separate fiber pair, a wavelength (WDM), or a combination of both.
Usually, the number of lanes is fixed for a given PHY, but some PHYs can be split; very commonly, 40GBASE-R is split into 4x 10GBASE-R.
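As a rough illustration of that per-lane split (nominal rates only, for the PHYs named above):

```python
# Nominal data rate per lane for a few BASE-T PHYs (Mbit/s, per direction).
# These are nominal MAC-facing rates, not on-wire symbol rates.
phys = {
    "10BASE-T":   (10,    1),   # one pair per direction
    "100BASE-TX": (100,   1),
    "1000BASE-T": (1000,  4),   # four pairs, used bidirectionally
    "10GBASE-T":  (10000, 4),
}

for name, (rate_mbps, lanes) in phys.items():
    print(f"{name:11s}: {lanes} lane(s) x {rate_mbps / lanes:g} Mbit/s")

# Lane splitting: a 40GBASE-R port is commonly broken out into 4x 10GBASE-R.
print("40GBASE-R -> 4 x", 40000 // 4, "Mbit/s")
```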

How to decrease wifi link quality and/or wifi signal level?

I have been following a tutorial that enables you to play around with the TXPOWER parameter of your wifi card / wifi adapter:
http://null-byte.wonderhowto.com/how-to/set-your-wi-fi-cards-tx-power-higher-than-30-dbm-0149606/
You can easily boost your wifi range by increasing the TXPOWER.
Now, most people want to improve the wifi signal strength of their home router, right? But in my case, I would like my home router (which runs on a Raspberry Pi) to have a relatively small wifi signal radius (say, a radius of 2 meters), so that you actually need to physically look for the Pi home router when trying to connect to it.
I have learned that this tutorial does not affect the wifi link quality and/or the wifi signal level, and thus does not influence the wifi radius of my Pi home router.
link quality & signal level
Do you guys have any ideas/thoughts about how to decrease link quality and/or wifi signal level (e.g. Link Quality = 12/70 and Signal level = -10 dBm)? Is this even possible?
I am using a TP-Link TL-WN722N IEEE 802.11n USB Wi-Fi adapter (Wireless Lite N, 150 Mbps, USB, high gain, 1 detachable antenna, external).
First, I recommend reviewing this section from your link:
QUICK DECIBEL UNDERSTANDING:
Every 10 decibels is a 10X increase in power, starting from 0 dBm equal to 1 mW... 10 dBm equals 10 mW, 20 dBm equals 100 mW, 30 dBm equals 1000 mW, and so on. Every 3 decibels approximately doubles the prior power, so if 30 dBm is 1000 mW and we add 3 dBm, we double the power: 33 dBm is about equal to 2000 mW.
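If you want to verify those figures yourself, the conversion is simply P(mW) = 10^(dBm / 10); a quick sketch:

```python
def dbm_to_mw(dbm: float) -> float:
    """Convert a power level in dBm to milliwatts: 0 dBm = 1 mW."""
    return 10 ** (dbm / 10)

for dbm in (0, 10, 20, 30, 33, -30):
    print(f"{dbm:>4} dBm = {dbm_to_mw(dbm):.3f} mW")
# 30 dBm -> 1000 mW; 33 dBm -> ~1995 mW (about double); -30 dBm -> 0.001 mW
```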
It appears to me that you are able to modify the transmit power of your adapter as the tutorial states. Are you saying this is not working? If you set your transmit power to something extremely low (-30dBm, for example) you would effectively be turning off the transmitter. Keep increasing that value until you get your desired coverage radius.
If the transmit power parameter is not functioning as per the tutorial, then there are other means to achieve reduced coverage. The model you specified has a detachable antenna....so detach it. This would definitely reduce your coverage. However, if it reduces coverage too much, you could simply add an inline attenuator. Fortunately, your antenna uses an SMA connector which is very common. You can find many SMA attenuators on ebay with different attenuation values. Experiment with different values until you get the desired coverage.
And if that doesn't work, just wrap a bunch of aluminum foil around the thing lol.

How can computers receive data so quickly?

In computer networks we keep trying to increase the transmission speed of data. Since data is ultimately just electrical signals, how can these electrical signals be converted into bits so quickly? This conversion is done by ADCs/DACs, and we can't control the computation speed of an ADC, so how can we translate the electrical signals to bits so quickly? Also, is this ADC integrated into our computer chipset?
Does that mean every peripheral has an ADC? For example, does a NIC have an ADC? Is the information carried in a LAN cable like CAT 5 or CAT 6 analog in nature?
You clock bits in by detecting a rising edge on a signal wired up to one of the pins of your chip. Then the rise lasts for a certain period of time, but only a fraction of a millisecond. There's a bit of tolerance so sender and receiver don't have to be exactly synchronised. The chip then transfers the bit to a buffer in very low level code. When it has a byte, slightly higher level code transfers the byte to another buffer, then the next level is user level - we have a stream of input bytes.
Whilst the wire is of course analogue, that is not analogue to digital conversion. Analogue to digital conversion is where we measure the signal, quantise it, then create a binary representation in place value notation.
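As a toy sketch of that byte-assembly step (purely illustrative; no real NIC driver works bit-by-bit in Python like this), shifting sampled line bits into bytes looks roughly like this:

```python
def bits_to_bytes(bits):
    """Collect sampled line bits (MSB first) into bytes, the way
    successively higher layers turn a bit stream into a byte stream."""
    out, current, count = [], 0, 0
    for b in bits:
        current = (current << 1) | (b & 1)   # shift the new bit in
        count += 1
        if count == 8:                       # a full byte is ready
            out.append(current)
            current, count = 0, 0
    return bytes(out)

# 16 sampled bits -> the two bytes 0x48, 0x69 ("Hi")
samples = [0,1,0,0,1,0,0,0,  0,1,1,0,1,0,0,1]
print(bits_to_bytes(samples))   # b'Hi'
```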