The last word of a MIDI header chunk specifies the division. It tells you whether delta times should be interpreted as ticks per quarter note or as ticks per frame (where a frame is a subdivision of a second). If bit 15 of this word is set, the timing is in ticks per frame. The next 7 bits (bit 14 through bit 8) specify the number of frames per second and can contain one of four values: -24, -25, -29, or -30 (note that they are negative).
Does anyone know whether bit 15 counts towards this negative value? In other words, are the values that specify fps actually 8 bits long (bits 15 through 8), or are they 7 bits long (bits 14 through 8)? The documentation I am reading is very unclear about this, and I cannot find the information anywhere else.
Thanks
The MMA's Standard MIDI-File Format Spec says:
The third word, <division>, specifies the meaning of the delta-times.
It has two formats, one for metrical time, and one for time-code-based
time:
+---+-----------------------------------------+
| 0 |         ticks per quarter-note          |
==============================================|
| 1 | negative SMPTE format | ticks per frame |
+---+-----------------------+-----------------+
|15 |14                    8|7              0 |
[...]
If bit 15 of <division> is a one, delta times in a file correspond
to subdivisions of a second, in a way consistent with SMPTE and MIDI
Time Code. Bits 14 thru 8 contain one of the four values -24, -25, -29,
or -30, corresponding to the four standard SMPTE and MIDI Time Code
formats (-29 corresponds to 30 drop frame), and represents the
number of frames per second. These negative numbers are stored in
two's complement form. The second byte (stored positive) is the
resolution within a frame [...]
Two's complement representation allows a negative value to be sign-extended without changing it: you just prepend additional 1 bits at the most significant end.
So it does not matter whether you take 7 or 8 bits.
In practice, this value is designed to be interpreted as a signed 8-bit value, because otherwise it would have been stored as a positive value.
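For illustration, here is a minimal C sketch of how the <division> word could be decoded (the function and variable names are my own; it assumes two's complement storage, as the spec requires):

#include <stdint.h>
#include <stdio.h>

/* division: the 16-bit <division> word from the MThd chunk */
void print_division(uint16_t division)
{
    if (division & 0x8000) {                         /* bit 15 set: SMPTE time */
        int8_t  fps_neg   = (int8_t)(division >> 8); /* whole top byte, read as signed */
        uint8_t per_frame = division & 0xFF;         /* ticks (resolution) per frame   */
        printf("SMPTE: %d frames/sec, %u ticks per frame\n", -fps_neg, per_frame);
    } else {                                         /* bit 15 clear: metrical time */
        printf("%u ticks per quarter note\n", division & 0x7FFF);
    }
}

Reading the whole top byte as a signed value gives -24, -25, -29 or -30 directly; masking bit 15 off first and sign-extending the remaining 7 bits would give exactly the same result.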
Suppose that we have a CPU with a cache that consists of 128 blocks, and 8 bytes of memory can be stored in each block. How can I find which block each address belongs to? Also, what is each address's tag?
The following is my way of thinking.
Take the 32-bit address 1030 for example. If I do 1030 * 4 = 4120 I have the address in a byte format. Then I turn it into an 8-byte format: 4120 / 8 = 515.
Then I do 515 % 128 = 3, which is (8-byte address) % (number of blocks), to find the block that this address is on (block no. 3).
Then I do 515 / 128 = 4 to find the position of the address within block no. 3. So tag = 4.
Is my way of thinking correct?
Any comment is welcomed!
What we know generically:
A cache decomposes addresses into fields, namely: a tag field, an index field, and a block offset field. For any given cache the field sizes are fixed, and knowing their widths (number of bits) allows us to decompose an address the same way the cache does.
An address as a simple number:
+---------------------------+
|          address          |
+---------------------------+
We would view addresses as unsigned integers, and the number of bits used for the address is the address space size. As decomposed into fields by the cache:
+----------------------------+
|   tag   |  index  | offset |
+----------------------------+
Each field uses an integer number of bits for its width.
What we know from your problem statement:
the block size is 8 bytes, therefore
the block offset field width is log2( block size in bytes )
the address space (total number of bits in an address) is 32 bits, therefore
tag width + index width + offset width = 32
Since information about associativity is not given, we should assume the cache is direct mapped: no information to the contrary is provided, and direct mapped caches are common early in coursework. I'd verify this, or else state the direct mapped assumption explicitly.
there are 128 blocks, therefore, for a direct mapped cache
there are 128 index positions in the cache array.
(for 2-way or 4-way set associative we would divide by 2 or 4, respectively)
Given 128 index positions in the cache array
the index field width is log2( number of index positions )
Knowing the index field width, the block offset field width, and total address width, we can compute the tag field width
tag field width = 32 - index field width - block offset field width
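Plugging in the numbers from this problem (a worked calculation, assuming a direct mapped cache as discussed above):

block offset width = log2(8 bytes)    =  3 bits
index width        = log2(128 blocks) =  7 bits
tag width          = 32 - 7 - 3       = 22 bits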
Only when you have such field widths does it make sense to attempt to decode a given address and extract the fields' actual values for that address.
Because there are three fields, the preferred approach to extraction is to simply write out the address in binary and group the bits according to the fields and their widths.
(Division and modulus can be made to work, but with (a) 3 fields and (b) the index field being in the middle, the arithmetic is arguably more complex: to get the index we have to divide (to remove the block offset) and take a modulus (to remove the tag bits). The result is equivalent to the other approach.)
Comments on your reasoning:
You need to know whether 1030 is in decimal or hex. It is unusual to write an address in decimal notation, since hex notation converts into binary notation (and hence the various bit fields) so much more easily. (Some educational computers use decimal notation for addresses, but they generally have a much smaller address space, like 3 decimal digits, and certainly not a 32-bit address space.)
Take the 32-bit address 1030 for example. If I do 1030 * 4 = 4120 I have the address in a byte format.
Unless something is really out of the ordinary, the address 1030 is already in byte format — so don't do that.
Then I turn it into an 8-byte format: 4120 / 8 = 515.
The 8-byte block size determines the block offset field used when decoding an address. You need to decode the address into 3 fields, not necessarily divide it.
Again, the key is to first compute the block offset width, then the index width, then the tag width. Take a given address, convert it to binary, and group the bits to get the tag, index, and block offset values in binary (then maybe convert those values to hex, or decimal if you must).
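As a concrete sketch in C (assuming a direct mapped cache and that 1030 is a decimal byte address; the variable names are mine):

#include <stdio.h>

int main(void)
{
    unsigned addr   = 1030;               /* assumed to be a decimal byte address     */
    unsigned offset = addr & 0x7;         /* low 3 bits: byte within an 8-byte block  */
    unsigned index  = (addr >> 3) & 0x7F; /* next 7 bits: which of the 128 blocks     */
    unsigned tag    = addr >> 10;         /* remaining 22 bits                        */

    /* 1030 = 0b100_0000_0110 -> tag = 1, index = 0, offset = 6 */
    printf("tag=%u index=%u offset=%u\n", tag, index, offset);
    return 0;
}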
My question is regarding the division chunk of the header when the last word of the division is of SMPTE format i.e. the value lies between 0x8000 and 0xFFFF.
Let's say the division value is 0xE728. In this case, bit 15 is 1, which means it is in SMPTE format. After we have concluded that it is SMPTE, do we need to get rid of the 1 in bit 15? Or do we simply store 0xE7 as the SMPTE format and 0x28 as the ticks per frame?
I am really confused and I was not able to understand the online formats either. Thank you.
The Standard MIDI Files 1.0 specification says:
If bit 15 of <division> is a one, delta-times in a file correspond to subdivisions of a second, in a way consistent with SMPTE and MIDI time code. Bits 14 thru 8 contain one of the four values -24, -25, -29, or -30, corresponding to the four standard SMPTE and MIDI time code formats (-29 corresponds to 30 drop frame), and represents the number of frames per second. These negative numbers are stored in two's complement form.
It would be possible to mask bit 15 off. But in two's complement form, the most significant bit indicates a negative number, so you can simply interpret the entire byte (bits 15…8) as a signed 8-bit value (e.g., signed char in C), and it will have one of the four values.
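Applied to your 0xE728 example (a small C sketch; the variable names are mine):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t division = 0xE728;
    int8_t   fps      = (int8_t)(division >> 8); /* 0xE7 as a signed byte = -25 */
    uint8_t  tpf      = division & 0xFF;         /* 0x28 = 40                   */
    printf("%d frames per second, %d ticks per frame\n", -fps, tpf);
    return 0;
}

So you don't have to strip bit 15 separately; treating the top byte as a signed value handles it.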
I have a UILabel in my iPhone app simulator. It displays a coin count and I have an action that adds 1 hundred million to the count. I want the number to keep going up but for some reason once the count hits 2 billion, it adds a minus sign and starts counting down, then counts back up to 2 billion and back down again and so on.
I want to be able to display a much greater number of digits, i.e. trillions and so on. Does anyone know what's going on with this and how to fix it so the label digits will keep going up as high as I want?
I'm using Xcode and Interface Builder and running through the simulator. I'm storing the number in an int variable, if that matters.
You store your coin count in an int; that's the problem. A 4-byte int can't store numbers higher than 2,147,483,647. If you add 1 to 2,147,483,647 you will get −2,147,483,648, which is the smallest possible int.
If you want to store bigger numbers you have to use a 64-bit type such as long long, which can store numbers between −(2^63) and 2^63−1 (that is, −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807).
See this for additional details.
This is occurring because, as @DrummerB pointed out, your int variable only has enough bits to store integral values in the range of -2,147,483,648 to 2,147,483,647. The reason this gets "reset" or "rolls over" to a negative value has to do with how computers store data, which is in binary.
For example, an 8-bit integer (otherwise known as a byte) can store integral values from 0 to 255 if it is unsigned (meaning it can only store non-negative values) and -128 to 127 if it is signed (meaning it can store negative numbers). When an unsigned integer reaches its maximum value, it is represented in memory by all ones, as you see here with the value 255:
255 = 11111111
So the maximum number that can be stored in an unsigned 8-bit int (byte) is 255. If you add 1 to this value, all of the 1 bits flip to zeroes; since storing the value 256 would require a 9th bit, that bit is lost entirely and the integer value appears to "roll over" to the minimum value.
Now, as I stated above, the addition yields the value 256, but we only have 8 bits of storage in our integer, so the most significant bit (the 9th bit) is lost. You can picture it kind of like this, with the pipes | marking your storage area:
    only 8 bits of storage total
              v
  255 = 0 |11111111|
+   1 = 0 |00000001|
---------------------
  256 = 1 |00000000|
        ^
        9th bit is lost
In a signed int, the same thing happens, however the most significant bit is used to determine whether the value is negative, so you gain a sign but lose 1 bit of magnitude: an 8-bit signed integer can only hold the values -128 to 127 (a magnitude of 0 to 127 plus the sign bit).
Now that we understand what's going on, it should be noted that iOS is, at the time of this writing, a 32-bit operating system and while it can handle larger integers you probably don't want to use them all over the place as it's not optimized to work with these values.
If you just want to increase the range of values you can store in this variable, you could change it to an unsigned integer (the NSUInteger typedef), which roughly doubles the positive range; but to count into the trillions you will still need a 64-bit type such as long long.
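Here is a small plain-C sketch of the ceiling you're hitting and of the 64-bit fix (the variable names are made up):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int coins = INT_MAX;              /* 2,147,483,647: the value your counter hits */
    printf("32-bit max: %d\n", coins);
    /* adding 100,000,000 here would overflow a 32-bit int (undefined behaviour in C;
       in practice the value wraps around to a large negative number, as you observed) */

    long long bigCoins = INT_MAX;     /* 64-bit type: enough room for trillions */
    bigCoins += 100000000LL;
    printf("64-bit count: %lld\n", bigCoins); /* 2,247,483,647 */
    return 0;
}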
A number can also be a decimal, but a credit card number never is, which makes me think that a CC number should be an integer. This would make sense as I don't think any credit cards start with 0, and they all follow the same sort of pattern:
4444333322221111
So I guess they're integers, but I'm not too sure what international cards are like. Do any start with 0?
Update
Thanks for all your responses. It's less about storing them (in fact I'd only store the last 4 digits) and more about doing a quick validation check. Regardless, I'd just treat it as an integer for validation, i.e. making sure that it's between 13 and 16 digits in length and never a decimal.
Credit card numbers are not strictly numbers. They are strings, but the individual digits that make up the long 16-digit value can be exploded out in order to validate it using its checksum.
You aren't going to be doing any multiplication or division on the CC number, so it should be a string in my opinion.
Quick Prefix, Length, and Check Digit Criteria
CARD TYPE  | Prefix    | Length | Check digit algorithm
-----------+-----------+--------+----------------------
MASTERCARD | 51-55     | 16     | mod 10
VISA       | 4         | 13, 16 | mod 10
AMEX       | 34/37     | 15     | mod 10
Discover   | 6011      | 16     | mod 10
enRoute    | 2014/2149 | 15     | any
JCB        | 3         | 16     | mod 10
JCB        | 2131/1800 | 15     | mod 10
Don't use an integer for this.
Depending on the size of your integers (language and machine dependent), credit card numbers may be too large to store as integers.
Credit card numbers are also not used as integers: there's no reason for doing arithmetic with them.
You should generally regard them as arrays of decimal digits, which might most easily be stored as strings, but might merit an actual array, again depending on your programming language.
They also contain encoded banking authority information as described in the Wikipedia article on Bank Card Numbers, and are a special case of ISO/IEC 7812 numbers, which in fact can start with zero (though I don't think any credit cards do). If you need this information, a CC number might actually merit its own data type, and likely some banks implement one.
Better to use an array of single-digit ints. Often the individual digits are used in some type of checksum to validate the credit card number. It would also take care of the case that a CC# actually starts with 0.
For example,
int[] cc = { 4, 3, 2, 1 };

bool Validate(int[] cc)
{
    return ((cc[0] + 2*cc[1] + 6*cc[2]) % 5) == 0;
}
Something like that, although the equations they use are more complex. This would be a lot harder (i.e. would require division and truncation) with just
int cc = 4321;
Edit:
Keep in mind also that each digit in a credit card number means something. For example, the 3rd and 4th digits might represent the state in which the card was made, and the 5th digit might be an index to the particular bank that issued the card.
Personally, I always store them as a string... it is a series of integers, much like a telephone number, not one big integer itself.
I would imagine that they are integers as they (almost certainly) do not contain any alphabetic characters and never have any decimal places.
No credit/debit card numbers start with a zero (probably due to discussions like this).
All credit/debit card numbers have a check digit that is calculated using the Luhn algorithm.
As it happens, 4444333322221111 passes the Luhn check digit test.
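For reference, a minimal C sketch of a Luhn check over a digit string (the function name is my own):

#include <string.h>

/* Returns 1 if the digit string passes the Luhn check, 0 otherwise. */
int luhn_ok(const char *number)
{
    int sum = 0, alternate = 0;
    for (int i = (int)strlen(number) - 1; i >= 0; i--) {
        int d = number[i] - '0';
        if (alternate) {            /* double every second digit from the right */
            d *= 2;
            if (d > 9) d -= 9;
        }
        sum += d;
        alternate = !alternate;
    }
    return sum % 10 == 0;
}

/* luhn_ok("4444333322221111") returns 1 */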
I'm currently implementing an application to perform some tasks on MIDI files, and my current problem is to output the notes I've read to a LilyPond file.
I've merged note_on and note_off events into single note objects with an absolute start and an absolute duration, but I don't really see how to convert that duration to actual music notation. I've guessed that a duration of 376 is a quarter note in the file I'm reading because I know the song, and obviously 188 is then an eighth note, but this certainly does not generalise to all MIDI files.
Any ideas?
By default a MIDI file is set to a tempo of 120 bpm and the MThd chunk in the file will tell you the resolution in terms of "pulses per quarter note" (ppqn).
If the ppqn is, say, 96, then a delta of 96 ticks is a quarter note.
Should you be interested in the real duration (in seconds) of each sound you should also consider the "tempo" that can be changed by an event "FF 51 03 tt tt tt"; the three bytes are the microseconds for a quarter note.
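As a sketch (the function and variable names are mine), the two values combine like this:

/* tempo_us: microseconds per quarter note, from the FF 51 03 event
   ppqn:     pulses (ticks) per quarter note, from the MThd chunk   */
double seconds_per_tick(double tempo_us, double ppqn)
{
    return (tempo_us / 1000000.0) / ppqn;
}

/* e.g. the default tempo of 500000 us (120 bpm) with ppqn 96
   gives about 0.0052 seconds per tick */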
With these two values you should find what you need. Beware that the durations in the MIDI file can be approximate, especially if the MIDI file is the recording of a human player.
I put together a C library to read/write MIDI files a long time ago: https://github.com/rdentato/middl, in case it may be helpful (it's been quite some time since I last looked at the code, so feel free to ask if there's anything unclear).
I would suggest following this approach:
choose a "minimal note" that is compatible with your division (e.g. 1/128) and use it as a sort of grid.
Align each note to the closest grid line (i.e. to the closest integer multiple of the minimal note).
Convert it to standard notation (e.g. a quarter note, a dotted eighth note, etc.).
In your case, take 1/32 as the minimal note and 384 as the division (so the minimal note would be 48 ticks). For your note of 376 ticks you'll have 376/48 = 7.8, which you round to 8 (the closest integer), and 8/32 = 1/4.
If you find a note whose duration is 193 ticks you can see it's a 1/8 note as 193/48 is 4.02 (which you can round to 4) and 4/32 = 1/8.
Continuing this reasoning you can see that a note of duration 671 ticks should be a double dotted quarter note.
In fact, 671 should be approximated to 672 (the closest multiple of 48) which is 14*48. So your note is a 14/32 -> 7/16 -> (1/16 + 2/16 + 4/16) -> 1/16 + 1/8 + 1/4.
If you are comfortable using binary numbers, you could notice that 14 is 1110 and from there, directly derive the presence of 1/16, 1/4 and 1/8.
As a further example, a note of 480 ticks of duration is a quarter note tied with a 1/16 note since 480=48*10 and 10 is 1010 in binary.
Triplets and other groups would make things a little bit more complex. It's not by chance that the most common division values are 96 (3*2^5), 192 (3*2^6) and 384 (3*2^7); this way triplets can be represented with an integer number of ticks.
You might have to guess or simplify in some situations, that's why no "midi to standard notation" program can be 100% accurate.
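To make the grid idea concrete, here is a rough C sketch (the function name, the fixed 1/32 grid, and the output format are my own simplifications; it ignores triplets and notes longer than a whole note):

#include <stdio.h>

/* division: ticks per quarter note from the header
   duration: length of the note in ticks            */
void print_note_values(int duration, int division)
{
    int grid  = division / 8;                  /* ticks in a 1/32 note           */
    int units = (duration + grid / 2) / grid;  /* round to nearest grid multiple */

    /* each set bit of 'units' contributes one power-of-two note value:
       bit 0 = 1/32, bit 1 = 1/16, bit 2 = 1/8, bit 3 = 1/4, bit 4 = 1/2, bit 5 = 1/1 */
    for (int bit = 0, denom = 32; denom >= 1; bit++, denom /= 2) {
        if (units & (1 << bit))
            printf("1/%d ", denom);
    }
    printf("\n");
}

/* print_note_values(376, 384) prints "1/4"
   print_note_values(671, 384) prints "1/16 1/8 1/4" (a double dotted quarter) */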