Why, in Manchester encoding, is the bit rate half of the baud rate? - ethernet

I think the baud rate is the rate of symbols, and if each symbol carries n bits, then the bit rate should be n × the baud rate.
In Ethernet (Manchester encoding), if the bit rate is half of the baud rate, does a symbol carry half a bit? As far as I know, the bit rate should be no less than the symbol rate (baud rate).
My understanding of the relationship between baud rate and bit rate seems fine, yet when it comes to Manchester coding it is totally counterintuitive. Could anyone explain this?

Bit rate refers to the speed at which digital bits are transmitted, while baud rate refers to the speed at which symbols change, a symbol being a distinct state of the analog signal. Symbols can differ in amplitude, frequency or phase, or use more complex modulation methods. In Manchester encoding, one bit is represented by two different voltage levels. So if you want to transfer 1 Mbit of digital data in one second, you need to make roughly 2 million changes in the level of the analog signal. That is why your bit rate is 1 Mbit/s while your baud rate is 2 Mbaud.
In NRZ encoding, one bit is represented by one symbol, therefore the two rates are equal.
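To make the arithmetic concrete, here is a minimal sketch (Python, purely illustrative; the helper names are made up) that encodes the same bits with NRZ and with Manchester and counts the signalling intervals each scheme needs:

```python
# Illustrative sketch, not a production encoder: encode a bit string with NRZ
# and with Manchester (IEEE 802.3 convention: 0 -> high,low ; 1 -> low,high)
# and compare how many signalling intervals (symbols) each one needs per bit.

def nrz_encode(bits):
    # One level per bit: 1 -> high (1), 0 -> low (0)
    return [b for b in bits]

def manchester_encode(bits):
    # Two half-bit levels per bit: a 1 is sent as low->high, a 0 as high->low
    levels = []
    for b in bits:
        levels += [0, 1] if b == 1 else [1, 0]
    return levels

bits = [1, 0, 1, 1, 0, 0, 1, 0]          # 8 data bits
nrz = nrz_encode(bits)
man = manchester_encode(bits)

print(len(nrz))   # 8  symbols: baud rate equals bit rate
print(len(man))   # 16 symbols: baud rate is twice the bit rate
# Scaled up: 1 Mbit/s of data needs about 2 Mbaud on the wire with Manchester.
```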

The Wikipedia article for Baud says that it can be defined as pulses per second. In the case of Manchester Encoding, this results in the baud rate being defined as "clock transitions".
A transition is what occurs when the signaling voltage goes from a low voltage to a high voltage, or vice versa. If you look at this diagram:
You will notice that the Manchester wave always makes a transition from either low to high or high to low when the clock transitions from high to low. The bits are encoded in that transition; a transition from low to high indicates a 1, and a transition from high to low indicates a 0. The low-to-high clock transitions are used to get the Manchester wave in a position where it can make the correct transition for the next bit. As you can see, there are never more than two clock transitions between one Manchester transition and the next; the clock is effectively encoded in the Manchester wave itself.
If the bits were encoded in a single clock transition (i.e. high being 1 and low being 0), then the clock (baud) rate and the bit rate would be the same, but then you would have to run a separate line for the clock. Because Manchester guarantees a transition in every bit period, the receiver can recover the clock from the data signal itself and no separate clock line is needed.
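As a companion sketch (again illustrative, with made-up helper names), a receiver that already knows where each bit period starts can read the bits straight off the mid-bit transitions:

```python
# Minimal decoding sketch (assumes the receiver already knows where each bit
# period starts, e.g. after locking onto a preamble): read each bit from the
# direction of the mid-bit transition.

def manchester_decode(levels):
    bits = []
    for i in range(0, len(levels), 2):
        first, second = levels[i], levels[i + 1]
        if (first, second) == (0, 1):      # low -> high in the middle of the bit
            bits.append(1)
        elif (first, second) == (1, 0):    # high -> low in the middle of the bit
            bits.append(0)
        else:
            raise ValueError("not a valid Manchester bit")  # no mid-bit transition
    return bits

print(manchester_decode([0, 1, 1, 0, 0, 1]))  # [1, 0, 1]
```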

You can think of Manchester encoding as transmitting not only the actual data but also the clock (metadata), thanks to its self-clocking characteristic.
http://en.wikipedia.org/wiki/Self-clocking_signal

All you need to understand is that within any one bit in Manchester encoding (i.e. either a 1 or a 0)
there is a transition, as depicted in the diagram above. The sole reason for that transition
is to let the receiver synchronize.
That said, if we compare this encoding scheme to others like NRZ, Manchester has up to double the transitions (for a run such as 11111, Manchester makes a transition in the middle of every bit, while NRZ makes none; the exact count depends on the bit pattern). Since each Manchester bit occupies two signalling intervals, a sequence of 5 bits needs a baud rate of 10, while NRZ needs only 5.
In design terms we used to say that if a receiver is capable of syncing to a baud rate of 10, then with Manchester it transmits 5 bits in that time, while with NRZ it would transmit 10 bits.
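A quick way to see the difference is to count level changes for a run of identical bits, reusing the toy encoders sketched earlier (illustrative code, not tied to any real transceiver):

```python
# Quick check: count level transitions for a run of 1s in Manchester vs NRZ.

def count_transitions(levels):
    return sum(1 for a, b in zip(levels, levels[1:]) if a != b)

run_of_ones = [1] * 5

nrz = [b for b in run_of_ones]          # 1 level per bit
man = []
for b in run_of_ones:                   # 2 levels per bit
    man += [0, 1] if b == 1 else [1, 0]

print(count_transitions(nrz))   # 0: nothing for the receiver to sync to
print(count_transitions(man))   # 9: a transition in the middle of every bit
```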


Accuracy of STM32L496 generated square wave

I have an STM32L496 MCU, and I want to generate a 3MHz square wave. I would like to know what would be the accuracy of this signal.
The system clock frequency of this MCU is 80MHz. If I use a prescaler of 80MHz / 3MHz = 26.667 (can I do that?), then the timer will tick at a rate of 3MHz. If I use a 16-bit timer (TIMER16), it would count to 65 535 maximum, which means it would increment once every 0.33 microseconds.
That is as far as I got, but I am not sure how to calculate the accuracy of this signal. Any help would be much appreciated!
If the core is 80 MHz, you can't make 3 MHz exactly with a timer clocked from the same source as the core.
You can make 3.076923 MHz with an even mark-space ratio (prescaler 1, compare value 13, reset value 26), or you can make 2.962962 MHz (which is slightly closer) with a 13:14 mark-space ratio (prescaler 1, compare value 13 or 14, reset value 27).
To get 3 MHz exactly you would have to underclock your core down to 78 MHz.
I don't know the exact part you are using. You might be able to get it exactly using one of the clock outputs or a PLL other than the one that drives the core, e.g. if you have a 12 MHz crystal you can output 3 MHz easily on an MCO pin.
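For a rough feel of the numbers in this answer, here is an illustrative brute-force check (not STM32-specific; it just divides the 80 MHz timer clock by integer periods, which is roughly what prescaler times reset value amounts to):

```python
# Rough check of which output frequencies an 80 MHz timer clock can produce
# with an integer total period, and which come closest to 3 MHz.

F_TIMER = 80_000_000
TARGET = 3_000_000

candidates = []
for period in range(2, 100):                 # total counts per output cycle
    f_out = F_TIMER / period
    candidates.append((abs(f_out - TARGET), period, f_out))

for err, period, f_out in sorted(candidates)[:3]:
    print(f"divide by {period:2d} -> {f_out/1e6:.6f} MHz (error {err/1e3:.1f} kHz)")

# divide by 27 -> 2.962963 MHz (error 37.0 kHz)
# divide by 26 -> 3.076923 MHz (error 76.9 kHz)
# divide by 28 -> 2.857143 MHz (error 142.9 kHz)
```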

Synchronization in Manchester coding

Lately I have been reading about Manchester encoding and I think I'm beginning to understand most of it now, but I still have some whys that need addressing. Mainly 3 for the moment:
1) Most articles on the Internet that introduce Manchester coding start by telling how bad NRZI really is, and one of the disadvantages mentioned is that synchronization becomes a problem when long runs of 1's or 0's are sent. Why is that a problem, since most places where NRZI is used have separate clock and data lines? As long as the clock signal is there, why should that ever be a problem?
2) Also, is Manchester supposed to work at a fixed frequency? Or can it work like I2C, where the clock frequency can be variable?
3) The good thing that gets mentioned about Manchester encoding is that it does not require a separate clock line: the clock is embedded in the data and can be recovered by the receiver. Frequent transitions in Manchester help with synchronization, and since the transitions happen in the middle of each bit, the clock can be recovered from them. But my question is: if there are repeated 1's or 0's, a transition can happen at the end of a bit as well as in the middle (see the attached waveform; look at the transitions when sending 111). So when a receiver sees a transition, how does it figure out whether it is in the middle or at the end of a bit?
If I'm talking rubbish I would love to be corrected.
Regarding your third question: I'm also brushing up on Manchester, and it appears that to recover a clock you need a differential signal:
Reference: "Data Communications, Computer Networks and Open Systems" by Fred Halsall, page 104, figure 3.8
For the third question:
Whenever a signal is transmitted, a few redundant bits containing clock information are sent first.
For example 1111: now the receiver knows the real data will arrive next, and from those redundant bits the clock is extracted, along with the "notification" that a signal is about to arrive.
As for question 1, an NRZ scheme can send long runs of 1's and long runs of 0's, but the problem is actually with long runs of 1's: if you try sending a long run of 1's with some modulation scheme and a dipole antenna, you can observe that the power of the carrier signal starts decaying exponentially.
The other reason would be the power needed to send that many consecutive 1's, which is not favourable!
For question 2, yes, it is possible to use a variable clock frequency, but the condition is that you should send redundant bits before you change the clock frequency, so that the receiver understands that the clock changes from that point onwards.
Hope it’s clear now ;)
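One way to picture the answer to question 3 (a simplified sketch under ideal conditions, not a full clock-recovery circuit): once the receiver has locked onto a mid-bit transition, e.g. via a preamble, it only treats a transition that arrives a full bit period later as the next mid-bit one; a transition that arrives only half a period later must be a bit-boundary transition and is ignored.

```python
# Sketch: distinguish mid-bit transitions from bit-boundary transitions by
# their spacing, assuming an ideal channel and a known bit period.

T = 2                      # bit period, measured in half-bit units
# Transition times for the Manchester waveform of bits 1,1,1,0 (IEEE convention):
# levels 0,1,0,1,0,1,1,0 -> transitions at half-bit boundaries 1,2,3,4,5,7
transitions = [1, 2, 3, 4, 5, 7]

last_mid = transitions[0]          # assume the first transition is mid-bit (preamble lock)
mid_transitions = [last_mid]
for t in transitions[1:]:
    if t - last_mid >= T:          # a full bit period since the last mid-bit transition
        mid_transitions.append(t)
        last_mid = t
    # else: only half a period later, so it is a boundary transition; skip it

print(mid_transitions)             # [1, 3, 5, 7]: one per bit, spaced one bit period apart
```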

How computers can receive data quickly?

In computer networks, we keep trying to increase the transmission speed of data. Since data is nothing but electrical signals, how can these electrical signals be converted into bits so quickly? This conversion is done by an ADC/DAC. We can't control the speed of computation of the ADC, so how can we translate the electrical signals into bits so quickly? Next, is this ADC integrated into our computer chipset?
Also, does that mean every peripheral has an ADC? For example, does a NIC have an ADC? Is the information carried in a LAN cable like CAT 5 or CAT 6 analog in nature?
You clock bits in by detecting a rising edge on a signal wired up to one of the pins of your chip. The rise then lasts for a certain period of time, but only a fraction of a millisecond. There's a bit of tolerance so that sender and receiver don't have to be exactly synchronised. The chip then transfers the bit to a buffer in very low-level code. When it has a byte, slightly higher-level code transfers the byte to another buffer, and the next level up is user level: we have a stream of input bytes.
Whilst the wire is of course analogue, that is not analogue-to-digital conversion. Analogue-to-digital conversion is where we measure the signal, quantise it, and then create a binary representation in place-value notation.
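As a toy illustration of that byte-assembly path (pure Python, nothing like real NIC hardware): sample one bit per clock edge, shift it into a register, and emit a byte every 8 bits.

```python
# Toy sketch of "clock bits in, then hand off bytes": real hardware does this
# in a shift register, not in software.

def bits_to_bytes(bit_stream):
    byte, count, out = 0, 0, []
    for bit in bit_stream:            # one bit per detected clock edge
        byte = (byte << 1) | bit      # shift the new bit in (MSB first)
        count += 1
        if count == 8:                # a full byte has arrived
            out.append(byte)
            byte, count = 0, 0
    return out

print(bits_to_bytes([0,1,0,0,1,0,0,0, 0,1,1,0,1,0,0,1]))  # [72, 105] -> "Hi"
```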

Why are both Viterbi and Reed-Solomon used in DVB-T?

From my understanding, DVB-T packets go through two FEC systems, namely Viterbi, which can cope with a data loss of up to 50%, and RS, which can cope with a data loss of up to 10%. These are called the inner and outer coding.
I can't understand the need for the second, RS coding (in that case, the 188-byte MPEG-TS packets have an additional 16 parity bytes added).
More specifically, what happens to packets that are corrupted by, say, 55%? Are 50% of the errors fixed by the Viterbi decoder and the remaining 5% by the RS?
Sorry for my dumbness.
The abilities and targets of Viterbi and RS coding differ considerably: Viterbi (convolutional) coding is applied close to the baseband/analog level, where each bit has a high probability of being corrupted. This is combated with a scheme in which not all combinations of, e.g., '00000' through '11111' are possible; depending on the code rate, every second bit (or some other fraction of the bits) is a correction bit calculated from the history of the N previous bits transferred.
This causes a comparably high expansion of the data, with the possibility of correcting typically around one half of the individual bit errors. One has to note that the bit errors can hit the correction bits as well...
This kind of bit error correction mitigates errors mostly on AWGN channels and somewhat on Rayleigh-fading channels (a simulation model for signal fading due to a moving vehicle with multi-path propagation, i.e. the same signal arriving over multiple paths).
Because the "window" (constraint length) of the Viterbi code is small, a burst error covering the complete window (e.g. 7 bits) cannot be corrected at all. Thus a second coder is needed: the Reed-Solomon coder (used in DVB and on CDs) works with symbols of 8 bits, i.e. when even a single bit in a symbol is corrupted, the complete symbol needs to be fixed.
The idea thus is that the inner (Viterbi) coder reduces sporadic single-bit errors to a manageable level, leaving essentially burst errors (a long period of unreceived signal, or a burst of residual errors out of the Viterbi decoder) to the outer Reed-Solomon coding.
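To illustrate the "correction bits calculated from the history of previous bits" idea, here is a toy rate-1/2 convolutional encoder (constraint length 3 with the textbook (7,5) octal generators; this is only an illustration, not the DVB-T inner code, which uses constraint length 7). A Viterbi decoder is what undoes this kind of encoding at the receiver:

```python
# Toy rate-1/2 convolutional encoder: for every input bit, two output bits are
# sent, each a parity over the last few input bits, so the data roughly
# doubles in size.

def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # shift the new bit into a 3-bit window
        out.append(bin(state & g1).count("1") & 1)  # parity over taps of generator 1
        out.append(bin(state & g2).count("1") & 1)  # parity over taps of generator 2
    return out

data = [1, 0, 1, 1, 0]
print(conv_encode(data))   # 10 coded bits for 5 data bits (rate 1/2)
```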

Why Does MIDI Offer 127 Notes

Are the 127 note values in MIDI musically significant (a certain number of octaves or something)? Or was the limit set at 127 due to the binary file format, i.e. for the purposes of computing?
In the MIDI protocol there are status bytes (think commands, such as note-on or note-off) and there are data bytes (think parameters, such as pitch value and velocity). The way to determine the difference between them is by the first bit. If that first bit is 1, then it is a status byte. If the first bit is 0, then it is a data byte. This leaves only 7 bits available for the rest of the status or data byte value.
So to answer your question in short, this has more to do with the protocol specification, but it just so happens to line up nicely with a good number of available pitch values.
Now, these pitch values do not correspond to specific pitches. Yes it is true that typically a pitch value of 60 will give you C4, or middle C. Most synths work this way, but certainly not all. It isn't even a requirement that the synth uses the pitch value for pitches! MIDI doesn't care... it is just a protocol. You may be wondering how alternate tunings work... they work just fine. It is up to the synthesizer to produce the correct pitches for these alternate tunings. MIDI simply provides for a selection of 128 different values to be sent.
Also, if you are wondering why it is so important for that first bit to signify what the data is... There are system realtime messages that can be interjected in the middle of some other command. These are things like the timing clock which is often used to sync up LFOs among other things.
You can read more about the types of MIDI messages here: http://www.midi.org/techspecs/midimessages.php
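Here is a small sketch of how a receiver can apply that first-bit rule to a raw byte stream (the byte values are just an example: a Note On 0x90 0x3C 0x40 with a Timing Clock 0xF8 injected between its data bytes):

```python
# Split a raw MIDI byte stream into status bytes and data bytes using the top
# bit, and pull out system real-time messages (0xF8-0xFF), which may appear
# anywhere, even in the middle of another message.

stream = [0x90, 0x3C, 0xF8, 0x40]

for byte in stream:
    if byte >= 0xF8:                 # system real-time message
        print(f"real-time message 0x{byte:02X}")
    elif byte & 0x80:                # top bit set -> status byte
        command, channel = byte >> 4, byte & 0x0F
        print(f"status byte 0x{byte:02X}: command {command:X}, channel nibble {channel} (channel {channel + 1})")
    else:                            # top bit clear -> data byte (0-127)
        print(f"data byte {byte}")
```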
127 = 2^7 - 1
It's the maximum positive value of an 8-bit signed integer, and so is a meaningful limit in file formats: it's the highest value you can store in a byte (on most systems) without making it unsigned.
I think what you are missing is that MIDI was created in the early 1980s, not to run on personal computers, but to run on musical instruments with extremely limited processing and storage capabilities. Storing 128 values seemed GIANT back then, especially when the largest keyboards typically have only 88 keys, and most electronic instruments of the time had only 48. If you think MIDI is doing something in a strange way, it likely stems from its Jurassic heritage.
Yes it is true that typically a pitch value of 60 will give you C4, or middle C. Most synths work this way, but certainly not all.
Yes ... there has always been a disagreement about where middle C is in MIDI. On Yamaha keyboards it is C3, on Roland keyboards it is C4. Yamaha did it one way and Roland did it another.
Now, these pitch values do not correspond to specific pitches.
Not originally. However, in the "General MIDI" standard, A = 440, which is standard tuning. General MIDI also describes which patch is a piano, which is a guitar, and so on, so that MIDI files become portable across multitimbral sound sources.
Simple efficiency.
As a serial protocol, MIDI was designed around the simple serial chips of the time, which would take 8 data bits in and transmit them as a stream out of one separate serial data pin at a prescribed rate; in the MIDI world this is 31,250 bits per second. Start and stop bits were added so all data could travel over one wire.
It was designed to be cheap and simple and the simplicity was extended into the data format.
The most significant bit of the 8 data bits was used to signal whether the byte was a command or data. So:
To send a Middle C Note On on channel 1 at a velocity of 56, a command byte is sent first,
and the command for Note On is the upper 4 bits of that byte, 1001. Notice the 1 in the most significant bit. This is followed by the channel ID for channel 1, 0000 (computers preferring to start counting from 0):
10010000, or 128 + 16 = 144
This is followed by the actual note data:
60 for Middle C, or 00111100
and then the velocity data, again specified in the range 0-127 with a 0 MSB,
56 in our case:
00111000
So what goes down the wire (ignoring start, stop and sync bits) is:
144, 60, 56
For the almost brain-dead microcomputers of the time inside electronic keyboards, the ability to separate command from data simply by looking at the first bit was a godsend.
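Here is a minimal sketch that assembles the same three bytes (the helper name note_on is made up for illustration):

```python
# Assemble the 3-byte Note On message walked through above
# (status nibble 0x9 = Note On, channel nibble 0 = "channel 1", note 60, velocity 56).

def note_on(channel, note, velocity):
    status = 0x90 | (channel & 0x0F)        # 1001cccc: MSB set marks a status byte
    return bytes([status, note & 0x7F, velocity & 0x7F])  # data bytes keep the MSB clear

msg = note_on(channel=0, note=60, velocity=56)
print(list(msg))          # [144, 60, 56], exactly the bytes that go down the wire
```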
As has been stated, 128 values (0-127) cover pretty much any Western keyboard you care to mention, so it made perfectly logical sense. The protocol's survival long after many other serial protocols have disappeared into obscurity is a great compliment to Dave Smith of Sequential Circuits (http://en.wikipedia.org/wiki/Dave_Smith_(engineer)), who started the discussions with other manufacturers to set all this in place.
Modern music and composition would be considerably different without him and them.
Enjoy!
A range of 0-127 is enough to cover all piano keys.
0 ~ 127 fits nicely with ADC conversions.
Many MIDI hardware devices rely on performing analog-to-digital conversions (ADC). Considering MIDI is a real-time communication protocol, when performing an ADC conversion using successive approximation (a commonly used algorithm), a good rule of thumb is to keep the resolution modest for fast conversion. A 10-bit conversion yields values in the 0 ~ 1023 range, which can be converted to the MIDI range by dividing by 8.
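A sketch of that scaling (the function name is made up; a hypothetical 10-bit reading, e.g. from a knob or mod wheel, is assumed):

```python
# Map a 10-bit ADC reading (0-1023) to the 7-bit MIDI range (0-127)
# by dividing by 8, i.e. a right shift by 3.

def adc_to_midi(raw_10bit):
    return (raw_10bit >> 3) & 0x7F    # 0..1023 -> 0..127

for raw in (0, 512, 1023):
    print(raw, "->", adc_to_midi(raw))   # 0 -> 0, 512 -> 64, 1023 -> 127
```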