Is there an ethernet header in IEEE 802.11 - sockets

I have been capturing some packets over Wi-Fi using Wireshark for analysis. If I capture IEEE 802.11 frames on an interface in monitor mode on an open network without encryption, I cannot see any Ethernet headers. However, if I capture the same packets on a regular interface (not in monitor mode), I can see Ethernet headers. I was not able to decrypt WPA packets captured in monitor mode for further analysis. So is there actually an Ethernet layer when an 802.11 packet is transmitted? Or is it added by the driver before being delivered to applications listening at the upper layers?
Here is a packet missing the Ethernet layer.

Ethernet is defined by IEEE 802.3, not IEEE 802.11 (Wi-Fi), so no, there is no Ethernet header in 802.11 frames; they are different network types, and IEEE 802.11 has its own frame format and headers. It's the same with any of the IEEE 802.x LANs; for instance, IEEE 802.5 (Token Ring) has a different frame and header format, too. What you saw on the regular interface is the driver's doing: outside monitor mode, the adapter or driver translates received 802.11 data frames into Ethernet-style frames before handing them up the stack, which is why those captures show Ethernet headers.
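If you want to check programmatically which header format a capture will deliver, libpcap exposes the link-layer type of a capture handle. Below is a minimal sketch, assuming libpcap is installed and "wlan0" is your wireless interface (both are assumptions; adjust for your system):

```cpp
// Print the link-layer header type that a live capture on an interface will
// use. On a Wi-Fi adapter in normal (managed) mode this is typically
// DLT_EN10MB (Ethernet), because the driver translates 802.11 data frames;
// in monitor mode it is DLT_IEEE802_11 or a RadioTap variant.
#include <pcap.h>
#include <cstdio>

int main(int argc, char *argv[]) {
    const char *dev = (argc > 1) ? argv[1] : "wlan0";  // interface name: an assumption
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live(dev, 65535, 0, 1000, errbuf);
    if (handle == nullptr) {
        std::fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }
    int dlt = pcap_datalink(handle);
    std::printf("%s: link type %d (%s)\n", dev, dlt, pcap_datalink_val_to_name(dlt));
    pcap_close(handle);
    return 0;
}
```

This matches what you saw in Wireshark: the same adapter reports EN10MB in managed mode and IEEE802_11 (or IEEE802_11_RADIO) in monitor mode.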

Related

How does a layer-2 switch handle a datagram bigger than the MTU?

If a datagram is bigger than the MTU, will a layer-2 switch drop it? Does a layer-2 switch report an ICMP error? If it doesn't report ICMP, how can I determine the data size that will pass through the switch successfully?
If a datagram is bigger than the MTU, will a layer-2 switch drop it?
Yes. A switch does not forward frames larger than the (configured) maximum size and drops them. For standard Ethernet, that's a 1500-byte payload plus 18 bytes of L2 overhead, 1518 bytes in total. Note that MTU is an L3 term referring to the maximum packet size that an underlying network can transport.
Does a layer-2 switch report an ICMP error?
No. A layer-2 switch generally sends no ICMP messages, nor is there any ICMP message for reporting oversized frames at L2.
A layer-3 switch used as a gateway should return an ICMP Fragmentation Required message when the destination network's MTU does not admit the IP packet without fragmentation and the packet's DF bit is set, or when IPv6 is used (where ICMPv6 Packet Too Big serves the same role). For IPv4 without DF, the gateway just fragments the packet.
If it doesn't report ICMP, how can I determine the data size that will pass through the switch successfully?
On an unmanaged switch, see above for the standard maximum size. A few unmanaged switches support jumbo frames; check their documentation. On some managed switches you can configure the maximum frame size globally or per VLAN. Methods and syntax vary.
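If you want to probe from an endpoint, a rough Linux-only sketch is to send DF-marked UDP datagrams of increasing size: sendto() fails with EMSGSIZE once the payload exceeds the local interface MTU or a cached path MTU learned from ICMP. Note that this only catches limits enforced at L3; a frame silently dropped by an L2 switch can only be detected by checking whether the probe actually arrives. The destination address and size range below are placeholders:

```cpp
// Sketch: force the DF bit via IP_PMTUDISC_DO, then grow the UDP payload
// until the kernel refuses to send (EMSGSIZE). Linux-specific.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>
#include <vector>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int pmtu = IP_PMTUDISC_DO;  // set DF, never fragment locally
    setsockopt(sock, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof(pmtu));

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9);                        // discard port
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); // placeholder address

    for (int payload = 1400; payload <= 1600; payload += 8) {
        std::vector<char> buf(payload, 0);
        ssize_t n = sendto(sock, buf.data(), buf.size(), 0,
                           reinterpret_cast<sockaddr *>(&dst), sizeof(dst));
        if (n < 0 && errno == EMSGSIZE) {
            // MTU = payload limit + 28 bytes of IP and UDP headers
            std::printf("payload %d no longer fits the MTU\n", payload);
            break;
        }
    }
    close(sock);
    return 0;
}
```

With a standard 1500-byte MTU, the largest UDP payload that fits is 1472 bytes (1500 minus 20 bytes IP header and 8 bytes UDP header).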

Profibus synchronisation using Linux (Raspberry Pi)

I am planning to develop a simple Profibus master (FDL level) in Linux, more specifically on a Raspberry Pi. I have an RS-485 transceiver based on a MAX481. The master must work on a bus where there are multiple masters.
According to the Profibus specification, you must count the number of '1' bits on the bus to determine when it is time to rotate the access token. Specifically, after 11 consecutive '1' bits the next frame starts. Eleven bits is also exactly the length of one character frame.
In Linux, how can I detect these 11 '1' bits? They won't be registered by the driver, as there is no start bit. So I need a stream of bits instead of decoded bytes.
What would be the best approach?
Unfortunately, using a microcontroller/microprocessor UART is a BAD choice; the timing sketch after the suggestions below shows how tight the 11-bit windows are.
You can frame 11 bits by setting START_BIT, STOP_BIT, and PARITY_BIT (even) in your microcontroller's UART peripheral. Maybe you will be lucky and receive whole bytes of a datagram without losses.
However, a PROFIBUS DP datagram is up to 244 bytes, and PROFIBUS DP requires NO idle bits between bytes during datagram transmission. You need UART hardware or a UART microcontroller peripheral with a FIFO or register that supports up to 244 bytes, which is very uncommon, since this requirement is very specific to PROFIBUS.
Another aspect is baud-rate compatibility: the full range of PROFIBUS DP baud rates is usually not available on common microcontroller UARTs.
My suggestions:
Implement this UART part on an FPGA and interface it with the Raspberry Pi using e.g. SPI. You can decide how much of the PROFIBUS stack to 'outsource' to the FPGA and how much to keep on the RPi.
Use an ASIC (maybe the ASPC2, but it is outdated) and add another compatible processor to implement the deterministic portion of the stack. Later you can interface this processor with your RPi.
Implement it on a processor dedicated to industrial communication (like the TI Sitara AM335x).
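To put numbers on why user space on Linux cannot chase the idle gap, here is a back-of-the-envelope sketch of one bit time and the 11-bit window across common PROFIBUS DP baud rates (the rate list is from the DP family; exact timing requirements come from the specification):

```cpp
// Compute one bit time and the duration of 11 bit times (the idle gap /
// one character frame) for a set of PROFIBUS DP baud rates. At the high
// end the gap is under a microsecond, far too short for user-space Linux.
#include <cstdio>

int main() {
    const double bauds[] = {9600, 19200, 93750, 187500, 500000,
                            1500000, 3000000, 6000000, 12000000};
    for (double baud : bauds) {
        double bit_us = 1e6 / baud;  // one bit time in microseconds
        std::printf("%10.0f Bd: bit = %8.3f us, 11 bits = %8.3f us\n",
                    baud, bit_us, 11 * bit_us);
    }
    return 0;
}
```

At 12 Mbit/s the whole 11-bit window lasts about 0.9 µs, which is why the answer above pushes this part of the job into an FPGA, ASIC, or dedicated communication processor.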

Data rate/Line rate on the Ethernet interface

I have a question about the data rate of the Ethernet interface and hope someone can give me some hints.
I know the calculation method of the PCIe interface, for example, PCIe Gen3 X1 lane:
The data rate of single-lane should be
8 Gb/s (Gen3 line rate) * 2 (TX/RX, full duplex) / 8 (bits per byte) = 2 GB/s
(128b/130b encoding is ignored)
So, how do we calculate the data rate of an Ethernet interface?
Take 1000BASE-T for example: we have 4 twisted pairs that sum up to a 1 Gbit/s data rate.
So one pair should provide a 250 Mbit/s data rate. It's full duplex, so TX/RX provide 125 Mbit/s each at the same time. With that being said, the "line rate" of a 1000BASE-T interface is 125 MHz (125 Mbit/s).
Do I understand the speeds on the Ethernet interface correctly?
how do we calculate the data rate of an Ethernet interface?
Ethernet's nominal bit rate is generally defined at the top of the physical layer (L1). It includes preamble, SOF and IPG, but excludes all PHY-specific line encoding (PCS and PMA).
This is done to make all PHY variants of the same speed 100% compatible with each other. You can convert 1000BASE-T to 1000BASE-LX to 1000BASE-SX and back to 1000BASE-T without any buffer drops.
It's full duplex, so TX/RX provide 125 Mbit/s each at the same time.
No. The nominal bit rate runs in each direction simultaneously on full-duplex links. Each 1000BASE-T lane transports 250 Mbit/s worth of "user" data.
With that being said, the "line rate" of a 1000BASE-T interface is 125 MHz (125 Mbit/s).
Since the line rate is (usually) the PHY rate, it's 1000 Mbit/s: four lanes of 250 Mbit/s each.
1000BASE-T does use a symbol rate of 125 MBaud, since its PAM-5 modulation transports more than two bits per symbol. You might think that PAM-4 with exactly two bits would be sufficient, but the line-code overhead eats up the rest. 1000BASE-T is already quite complex: it uses four-dimensional trellis-coded modulation (4D-PAM5) plus scrambling to get across the wire (to produce a self-clocking signal, improve the signal-to-noise ratio, and eliminate excess DC).
The 1000BASE-X PHYs for fiber are much simpler. The PCS uses 8b/10b to produce a binary stream of 1.25 GBd that can be directly used to modulate the laser.
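As a quick sanity check of the figures above, here is a small sketch of the 1000BASE-T arithmetic: four pairs at 125 MBaud with two payload bits per symbol yield the nominal 1000 Mbit/s, while PAM-5's log2(5) ≈ 2.32 raw bits per symbol leaves roughly a third of a bit per symbol for coding overhead:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // 1000BASE-T: four pairs, each signalling at 125 MBaud with PAM-5.
    double symbol_rate = 125e6;          // symbols per second per pair
    double pairs = 4;
    double payload_bits_per_symbol = 2;  // net data bits after line coding
    double data_rate = pairs * symbol_rate * payload_bits_per_symbol;
    std::printf("net data rate: %.0f Mbit/s per direction\n", data_rate / 1e6);

    // PAM-5 carries log2(5) raw bits per symbol; the surplus over the two
    // payload bits is spent on trellis coding and DC balance.
    std::printf("raw bits/symbol: %.2f (PAM-5) vs %.2f needed\n",
                std::log2(5.0), payload_bits_per_symbol);
    return 0;
}
```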

Promiscuous mode in LrWpanNetDevice NS3

I have a network of LrWpan devices in NS3 and I want to make some nodes operate as sniffers. I enabled promiscuous mode in the PCAP traces, but I also want to process the information contained in these packets. How can I get the actual content of a packet in NS3 so that I can do some calculations?
For example, if A -- B -- C is my topology and B is in RX/TX range of A and C, I want to enable B to sniff its neighbors' packets. Is this possible in NS3? I don't want to see the packets only in a pcap file.
Edit: I found SetPromiscuousMode() on the MAC layer of the NetDevice, and the node enters promiscuous mode. However, it is then not able to send any packets, only receive. I want both TX and RX. Is that possible?
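One approach worth trying is the generic ns-3 NetDevice hook rather than the pcap path: SetPromiscReceiveCallback() hands every accepted frame to your own function, where you can copy the bytes out for calculations. Whether LrWpanNetDevice invokes this callback in your ns-3 version is an assumption you should verify; the sketch below only shows the callback shape:

```cpp
// Hedged sketch against the generic ns-3 NetDevice API.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include <iostream>
#include <vector>

using namespace ns3;

// Invoked for every frame the device hands up while in promiscuous mode.
bool
PromiscSniff (Ptr<NetDevice> dev, Ptr<const Packet> pkt, uint16_t protocol,
              const Address &src, const Address &dst, NetDevice::PacketType type)
{
  std::cout << "node " << dev->GetNode ()->GetId () << " sniffed "
            << pkt->GetSize () << " bytes" << std::endl;

  // Copy the raw bytes out of the packet for your own calculations.
  std::vector<uint8_t> buf (pkt->GetSize ());
  pkt->CopyData (buf.data (), buf.size ());
  return true;
}

// In your scenario setup, after creating node B's device (name illustrative):
//   Ptr<NetDevice> snifferDev = ...;
//   snifferDev->SetPromiscReceiveCallback (MakeCallback (&PromiscSniff));
```

Attaching the callback does not by itself disable transmission, so if SetPromiscuousMode() on the MAC is what blocks TX in your version, this route may let B keep sending; verify against your ns-3 release.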

How to sync microcontroller to MIDI controller output

I am looking to receive MIDI messages to control a microcontroller-based synthesizer, and I am working on understanding the MIDI protocol so I may implement a MIDI handler. I've read that MIDI is transmitted at 31.25 kbaud without a dedicated clock line. Must I sample the line at 31.25 kHz with the microcontroller in order to receive MIDI bytes?
The MIDI specification says:
The hardware MIDI interface operates at 31.25 (+/- 1%) Kbaud, asynchronous, with a start bit, 8 data bits (D0 to D7), and a stop bit. […] Bytes are sent LSB first.
This describes a standard UART protocol; you can simply use the UART hardware that most microcontrollers have built in. (The baud rate of 31250 Bd was chosen because it can easily be derived from a 1 MHz clock, or a multiple thereof, by dividing by 32.)
If you really wanted to implement the receiver in software, you would sample the input signal at a higher rate to be able to reliably detect the level in the middle of each bit; for details, see What exactly is the start bit error in UART? and How does UART know the difference between data bits and start/stop bits?
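Once the UART delivers bytes, the handler itself is mostly about telling status bytes (MSB set) from data bytes and coping with running status, where a sender omits repeated status bytes. A minimal, illustrative sketch (only channel note messages are decoded; everything else is skipped):

```cpp
// Feed this parser one byte at a time from your UART RX path.
#include <cstdint>
#include <cstdio>

class MidiParser {
 public:
  void Feed(uint8_t byte) {
    if (byte >= 0xF8) return;          // real-time bytes may interleave; skip
    if (byte & 0x80) {                 // status byte (MSB set)
      status_ = byte;
      count_ = 0;
      uint8_t kind = status_ & 0xF0;   // program change / channel pressure
      needed_ = (kind == 0xC0 || kind == 0xD0) ? 1 : 2;  // carry one data byte
      return;
    }
    if (status_ == 0) return;          // stray data byte before any status
    data_[count_++] = byte;
    if (count_ < needed_) return;
    count_ = 0;                        // keep status_: enables running status
    uint8_t kind = status_ & 0xF0, ch = status_ & 0x0F;
    if (kind == 0x90 && data_[1] > 0)
      std::printf("note on  ch%d key %d vel %d\n", ch, data_[0], data_[1]);
    else if (kind == 0x80 || kind == 0x90)  // note-on, velocity 0 == note-off
      std::printf("note off ch%d key %d\n", ch, data_[0]);
  }

 private:
  uint8_t status_ = 0, needed_ = 2, count_ = 0;
  uint8_t data_[2] = {0, 0};
};

int main() {
  // Tiny self-test: one note-on, then a running-status note-on and note-off.
  MidiParser p;
  const uint8_t stream[] = {0x90, 60, 100, 62, 100, 62, 0};
  for (uint8_t b : stream) p.Feed(b);
  return 0;
}
```

The same state machine works unchanged in an RX interrupt handler on a microcontroller, with the printf calls replaced by your synthesizer's note handling.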