Converting from bandwidth to traffic gives different results depending on the operators' position? - bandwidth

This must be a stupid question, but nevertheless I find it curious:
Say I have a steady download of 128 Kbps.
How much disk space is going to be consumed after an hour, in megabytes?
128 x 60 x 60 / 8 / 1024 = 56.25 MB
But
128 x 60 x 60 / 1000 / 8 = 57.6 MB
So what is the correct way to calculate this?
Thanks!

In one calculation you're dividing by 1024, but in the other you're dividing by 1000, so there shouldn't be any surprise that you get different numbers.
Officially, the International Electrotechnical Commission standards body has tried to push "kibibyte" as an alternative to "kilobyte" when you're talking about the 1024-based version. But if you use it, people will laugh at you.
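To see where the two results come from, here is the same hour of download worked both ways (a minimal sketch in Python, assuming "128 Kbps" means 128 kilobits of payload per second with no protocol overhead):

    kilobits = 128 * 60 * 60           # kilobits received in one hour
    kilobytes = kilobits / 8           # 57,600 KB either way

    mb_binary  = kilobytes / 1024      # treat 1 MB as 1024 KB -> 56.25
    mb_decimal = kilobits / 1000 / 8   # divide by 1000 instead of 1024 -> 57.6

    print(mb_binary, mb_decimal)       # 56.25 57.6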

Please remember that there is overhead in any transmission. There can be "dropped" packets, etc. Also, there is generally some upstream traffic as your PC acknowledges receipt of packets. Finally, since packets can be received out of order, the packets themselves contain "extra" data to allow the receiver to reconstruct the data in the proper order.

OK, I found an official explanation from Symantec on the matter:
http://seer.entsupport.symantec.com/docs/274171.htm
It seems the idea is to convert from bits to bytes as early as possible in the calculation, and then the usual division by 1024 comes into play.
I just hope it's a standard procedure, and not a Symantec-imposed one :).

Related

How much faster is sending 16 bit vs 32 bit over BLE?

I am working on a project where I am sending information over BLE from a phone to a Raspberry Pi Zero. I can fit all the information I need into 16-bit messages; however, down the line I may need more bits, though I probably won't. Would I be better off sending only 16-bit packets rather than 32-bit? Is it that much faster to send and parse 16 bits on an RPi Zero over BLE? I am only entertaining the idea of 32 bits because, if I do need more information in the future, updating the code would be much easier.
The packets contain position data of the phone and will be sent every 0.1 seconds. I am using Bleno on the Pi to receive data.
Dude, those two extra bytes won't kill your energy budget. It's wise to keep reserved space for future use; it enables backwards compatibility and eases future development.
There's not really any difference in the length of the on-air packet transmission due to the big overhead of BLE, and you won't experience any difference due to the nature of connection intervals. We're talking 16 bits / 10^6 bit/s = 16 µs in 1 Mbps mode and 8 µs in 2 Mbps mode.
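For what it's worth, here is that back-of-the-envelope airtime figure spelled out (a sketch covering the payload bits only; a real BLE packet adds preamble, access address, headers and CRC on top):

    # Extra on-air time for the payload bits alone, ignoring BLE packet overhead.
    def extra_airtime_us(extra_bits, phy_mbps):
        return extra_bits / phy_mbps   # bits divided by Mbit/s gives microseconds

    print(extra_airtime_us(16, 1))     # 16.0 us on the 1M PHY
    print(extra_airtime_us(16, 2))     # 8.0 us on the 2M PHY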

iPhone 4S - BLE data transfer speed

I've been tinkering around with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to make them transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the data transfer and the ack after each set of packets, I believe we should be able to go as fast as 1.5 KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer in BLE as fast as the documentation says it should be able to do. What sort of speed are you getting if faster than mine?
Thanks a lot
See Apple's guidelines and you will see that a connection parameter update request is required to speed up your connection.
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I have min = 20 ms, max = 40 ms.
I hope I could help.
Roman
If you are able to use a higher MTU size (negotiated by iOS), then you would be able to increase the bandwidth even more, because there is a 4 byte L2CAP header and a 3 byte ATT header that would then only be transmitted once instead of in every packet.
If you are able to transmit 6 packets per connection interval, then you would be able to fit in 35 extra bytes per connection interval (the 7 byte header would still be there for the first packet). The MTU could also be split over several connection intervals, increasing the throughput by 7 more bytes per connection interval (it just takes longer to reassemble the packet). The max MTU size allowed by ATT is 515 bytes (the max attribute value size is 512 bytes, plus a 3 byte header for the opcode and handle).
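To make the arithmetic concrete, here is a rough throughput sketch; the default 23 byte ATT_MTU, 6 packets per connection interval and 20 ms interval are illustrative assumptions, not numbers from the question:

    att_mtu = 23                                 # default ATT_MTU
    att_header = 3
    payload_per_packet = att_mtu - att_header    # 20 bytes of application data
    packets_per_interval = 6
    interval_s = 0.020                           # 20 ms connection interval

    throughput = payload_per_packet * packets_per_interval / interval_s
    print(throughput)                            # 6000.0 bytes/s under these assumptions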

Representation of a Kilo/Mega/Tera Byte

I was getting a little confused with the representation of different units of bytes.
It is accepted throughout that 1 byte = 8 bits.
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
Doesn't this contradict as in both cases it is stated that 1 byte is 8 bits...?
Different sources claim different reasons for these different representations, thus I am not sure what the most important/real reason is for this rather confusing difference in representation.
Can someone please explain and clarify?
It is accepted throughout that 1 byte = 8 bits
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
To make sure we're all clear, your question is "Is a kilobyte equal to 1024 bytes or 1000 bytes?".
Doesn't this contradict as in both cases it is stated that 1 byte is 8 bits...?
This is irrelevant to the question.
So, let's begin. In SI (metric), the multiplier of 1000 is called kilo, abbreviated k. k always means 1000, never anything else.
When binary computers entered the world, we noticed that 2 to the power of 10 is 1024, which is conveniently close to 1000. Computer engineers decided to abuse this coincidence and say that kilo means 1024. By extension, they say that mega means 1024^2 (instead of the proper definition of 1000^2), and so on with giga, tera, etc.
While the difference between 1000 and 1024 is small for many purposes, there are times when exact answers are required, and this is where the abusive terminology hurts everyone. Only decades after kilo = 1024 got established did anyone really try to fix the problem. The IEC proposed new prefixes for the binary multipliers: 1024 = kibi, 1024^2 = mebi, 1024^3 = gibi, etc.
In summary, the notion that kilo=1024 is an abusive deviation from the consistent SI definition of kilo=1000. While kilo=1024 is popular in the computer industry, it is nevertheless wrong and should be replaced by kibi=1024. Or numbers need to be recomputed to reflect the true definition of kilo/mega/etc. (For example, "512 MB" of RAM is actually about 536.9 MB.)
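The RAM example at the end is plain arithmetic; a quick sketch using the definitions above:

    MIB = 1024 ** 2                  # mebi (the binary "mega")
    MB  = 1000 ** 2                  # mega (the SI prefix)

    ram_bytes = 512 * MIB            # "512 MB" of RAM as marketed
    print(ram_bytes)                 # 536870912 bytes
    print(ram_bytes / MB)            # 536.870912 -> about 536.9 true megabytes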
Btw, don't use random capitalization; it's spelled kilobyte, not kiloByte.
References and links:
http://physics.nist.gov/cuu/Units/binary.html
http://en.wikipedia.org/wiki/Kilo-
http://en.wikipedia.org/wiki/Kilobyte
http://en.wikipedia.org/wiki/Kibibyte
http://xkcd.com/394/
When you talk about amounts of data in computer science, you calculate with powers of two. See what Wikipedia says:
"In computing, a binary prefix is a specifier or mnemonic that is prepended to the units of digital information, the bit and the byte, to indicate multiplication by a power of 2. In practice the powers used are multiples of 10, so the prefixes denote powers of 1024 = 2^10."
Sometimes people round it as you have mentioned, but that is a bad practice.
I don't see what bytes-to-bits has to do with anything if you are asking whether 1 kilobyte is equal to 1024 or 1000 bytes. These measurements are not set in stone and are not really controlled at all. Computer makers can (and have) used the 1000 conversion to make it look like they have more memory.
The problem comes up when thinking in binary (base 2) versus base 10. In base 10 you would use 1000; in base 2, 1024.

How to compute a reasonable number of bits for a checksum?

I have around 1500 bytes of data that I want to construct a checksum for, so that if the data gets corrupted, the chance of the checksum still matching the data is less than, say, 1 in 10^15, i.e. a low enough probability that I can treat it as if it is never going to happen.
The question is: how many bits should I compute? I have a SHA-160 computation that gives me a 160-bit hash of my data, but I expect this is way larger than necessary. So I'm thinking I could truncate the resulting hash down to, say, the low 40 bits and use that as a sufficiently large bit pattern that if the data gets corrupted, I will most likely detect it.
So the question is twofold: how many bits is good enough, and is taking the lower bits of a SHA-160 hash a good approach to take?
You can use the table here to determine approximately how many bits you need for your desired error detection rate.
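As a rough sketch of the underlying arithmetic (assuming the truncated hash behaves like a uniformly random n-bit value, so a corrupted message still matches with probability about 2^-n):

    import math

    target = 1e-15                          # desired false-match probability
    bits_needed = math.ceil(-math.log2(target))
    print(bits_needed)                      # 50, since 2**-50 is about 8.9e-16

    # Truncating to the low 40 bits would give roughly:
    print(2.0 ** -40)                       # ~9.1e-13, which misses the 1e-15 target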

Why Does MIDI Offer 127 Notes

Are the 127 note values in MIDI musically significant (a certain number of octaves or something), or was the limit set at 127 due to the binary file format, i.e. for the purposes of computing?
In the MIDI protocol there are status bytes (think commands, such as note-on or note-off) and there are data bytes (think parameters, such as pitch value and velocity). The way to determine the difference between them is by the first bit. If that first bit is 1, then it is a status byte. If the first bit is 0, then it is a data byte. This leaves only 7 bits available for the rest of the status or data byte value.
So to answer your question in short, this has more to do with the protocol specification, but it just so happens to nicely line up to good number of available pitch values.
Now, these pitch values do not correspond to specific pitches. Yes it is true that typically a pitch value of 60 will give you C4, or middle C. Most synths work this way, but certainly not all. It isn't even a requirement that the synth uses the pitch value for pitches! MIDI doesn't care... it is just a protocol. You may be wondering how alternate tunings work... they work just fine. It is up to the synthesizer to produce the correct pitches for these alternate tunings. MIDI simply provides for a selection of 128 different values to be sent.
Also, if you are wondering why it is so important for that first bit to signify what the data is... There are system realtime messages that can be interjected in the middle of some other command. These are things like the timing clock which is often used to sync up LFOs among other things.
You can read more about the types of MIDI messages here: http://www.midi.org/techspecs/midimessages.php
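As a tiny illustration of that status/data split (the helper name is made up; it just tests the top bit of each byte):

    def is_status_byte(b):
        # Top bit set -> status byte (command); top bit clear -> data byte (parameter).
        return bool(b & 0x80)

    # Note On, channel 1, note 60, velocity 64:
    for b in (0x90, 60, 64):
        print(hex(b), "status" if is_status_byte(b) else "data")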
127 = 2^7 - 1
It's the maximum positive value of an 8-bit signed integer, and so is a meaningful limit in file formats--it's the highest value you can store in a byte (on most systems) without making it unsigned.
I think what you are missing is that MIDI was created in the early 1980's, not to run on personal computers, but to run on musical instruments with extremely limited processing and storage capabilities. Storing 127 values seemed GIANT back then, especially when the largest keyboard typically has only 88 keys, and most electronic instruments only had 48. If you think MIDI is doing something in a strange way, it is likely that stems from its jurassic heritage.
Yes it is true that typically a pitch value of 60 will give you C4,
or middle C. Most synths work this way, but certainly not all.
Yes ... there has always been a disagreement about where middle C is in MIDI. On Yamaha keyboards it is C3, on Roland keyboards it is C4. Yamaha did it one way and Roland did it another.
Now, these pitch values do not correspond to specific pitches.
Not originally. However, in the "General MIDI" standard, A = 440, which is standard tuning. General MIDI also describes which patch is a piano, which is a guitar, and so on, so that MIDI files become portable across multitimbral sound sources.
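Under that convention (note 69 = A4 = 440 Hz, equal temperament), the frequency for a given note number works out as below; this reflects common practice rather than anything the core MIDI spec enforces:

    def midi_note_to_hz(n, a4_hz=440.0):
        # Each semitone is a factor of 2**(1/12), with note 69 pinned to A4.
        return a4_hz * 2 ** ((n - 69) / 12)

    print(midi_note_to_hz(69))   # 440.0
    print(midi_note_to_hz(60))   # ~261.63, middle C in the "note 60 = C4" convention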
Simple efficiency.
As a serial protocol, MIDI was designed around the simple serial chips of the time, which would take 8 data bits in and transmit them as a stream out of a separate serial data pin at a prescribed rate; in the MIDI world this is 31,250 bits per second. Start and stop bits were added so all data could travel over one wire.
It was designed to be cheap and simple and the simplicity was extended into the data format.
The most significant bit of the 8 data bits was used to signal whether the byte was a command or data. So:
To send a Middle C Note On on channel 1 at a velocity of 56, a command (status) byte is sent first,
and the command for Note On is the upper 4 bits of that byte, 1001. Notice the 1 in the most significant bit. This is followed by the channel ID for channel 1, 0000 (computers preferring to start counting from 0):
10010000, or 128 + 16 = 144
This is followed by the actual note data:
72 for Middle C, or 01001000
and then the velocity data, again specified in the range 0-127 with a 0 MSB;
56 in our case:
00111000
So what would go down the wire (ignoring start, stop and sync bits) was:
144, 72, 56
For the almost brain-dead microcomputers of the time in electronic keyboards, the ability to separate command from data by simply looking at the first bit was a godsend.
As has been stated, 127 values covers pretty much any western keyboard you care to mention. So it made perfectly logical sense, and the protocol's survival long after many serial protocols have disappeared into obscurity is a great compliment to Dave Smith of Sequential Circuits (http://en.wikipedia.org/wiki/Dave_Smith_(engineer)), who started the discussions with other manufacturers to set all this in place.
Modern music and composition would be considerably different without him and them.
Enjoy!
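For completeness, a small sketch assembling that exact byte sequence with plain bit arithmetic, mirroring the walkthrough above:

    NOTE_ON = 0x9            # upper nibble 1001
    channel = 0              # channel 1 is encoded as 0
    note = 72                # the walkthrough's Middle C value
    velocity = 56

    status = (NOTE_ON << 4) | channel
    message = bytes([status, note, velocity])
    print(list(message))     # [144, 72, 56]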
127 is enough to cover all piano keys
0 ~ 127 fits nicely for ADC conversions.
Many MIDI hardware devices rely on performing analog-to-digital conversions (ADC). Considering MIDI is a real-time communication protocol, when performing an ADC conversion using successive approximation (a commonly used algorithm), a good rule of thumb is to use 10-bit resolution for fast computation. This will yield values in the 0 ~ 1023 range, which can be converted to the MIDI range by dividing by 8.
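A minimal sketch of that scaling, assuming a 10-bit ADC reading:

    def adc_to_midi(adc_value):
        # Map a 10-bit ADC reading (0-1023) onto the 7-bit MIDI range (0-127)
        # by dropping the three least significant bits (i.e. dividing by 8).
        return adc_value >> 3

    print(adc_to_midi(0), adc_to_midi(512), adc_to_midi(1023))   # 0 64 127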