How do i calculate the size of a tag field? - cpu-architecture

I'm revising for an exam and I've come across a question that I have no idea how to do. I've looked through my notes and can't seem to find anything on it. Can anyone help me?
Given a 64KB cache that contains 1024 blocks with 64 bytes per block, what is the size of the tag field for a 32-bit architecture?
The question is only worth 1 mark, so I can't imagine the answer is too hard, but I can't seem to find anything on it.

You have 32 bits of address. You need 6 bits for the byte offset within a 64-byte block (2^6 = 64) and 10 bits to index one of the 1,024 blocks in the cache (2^10 = 1024). That's 16 bits for index and offset combined, so the tag needs the remaining 32 bits - 16 bits = 16 bits.
I recommend following the link that aruisdante provided and look at how to calculate this yourself.
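For reference, here is a minimal Python sketch of the same calculation, using the cache parameters given in the question (the tag/index/offset split assumes the cache is indexed by block, as in the answer above):

    import math

    address_bits = 32      # 32-bit architecture
    block_size   = 64      # bytes per block
    num_blocks   = 1024    # blocks in the cache

    offset_bits = int(math.log2(block_size))   # 6 bits select a byte within a block
    index_bits  = int(math.log2(num_blocks))   # 10 bits select a block in the cache
    tag_bits    = address_bits - index_bits - offset_bits

    print(offset_bits, index_bits, tag_bits)   # 6 10 16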

Related

Why is this question worded like this regarding main memory?

I have this question:
1. How many bits are required to address a 4M × 16 main memory if main memory is word-addressable?
And before you say it, yes I have looked this question up and there have been posts on stackoverflow asking about how to answer it but my question is different.
This may sound like a silly question but I don't understand what it means when it says "How many bits are required to address...".
My understanding, and what I have been taught, is that (if we're talking about word-addressable memory) each cell in the RAM chip contains 16 bits and the cells are numbered from 0 to 4M-1, giving 2^22 words. But I don't understand what it is asking when it says 'How many bits are required...':
The answer says 22 bits would be required but I just don't understand. 22 bits for what? All I know is each word is 16 bits and each cell would be numbered from 0 - 4M-1. Can someone clear this up for me please?
Since you have 4 million cells, you need a number that can identify each cell. 22 bits is the size of the address needed to represent 2^22 cells (4,194,304 cells).
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor.
(https://en.m.wikipedia.org/wiki/Word)
Using this principle imagine a memory with a word that uses 2 bits only, and it is capable of storing 4 words:
XX|YY|WW|ZZ
Each word in this memory is identified by a number that tells the computer its position:
XX is 0
YY is 1
WW is 2
ZZ is 3
The smallest binary number that can represent 3 is 2 bits long, right? Now apply the same idea to a larger memory. It doesn't matter whether the word size is 16 bits or 2 bits; only the number of words matters.
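As a quick check, here is the same reasoning as a small Python sketch (the 4M × 16 figures are the ones from the question):

    import math

    num_words = 4 * 1024 * 1024   # 4M words in the memory
    word_size = 16                # bits per word; irrelevant to the address width

    address_bits = math.ceil(math.log2(num_words))
    print(address_bits)           # 22 -- enough to number the cells 0 .. 4M-1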

What is this table called and how do I read it?

I'm reading the PowerPoint specification and I came across a table like this:
Do tables like these have a name? How do I read this?
I'm pretty sure it means that the first 4 bits identify the recVer and the next 12 identify the recInstance, but what about recLen? Do all 32 bits pull double duty and identify the recLen, or does that mean the next 32 bits do that?
It looks like some type of packet header. The numbers at the top are the bit positions. It is read left to right, top to bottom, so it is telling you that the header is made up of 4 bits interpreted as recVer, followed by 12 bits interpreted as recInstance, followed by 16 bits that are the recType, followed by 32 bits that are the recLen.
This is a common way to show the header structure, as can be seen on Wikipedia's TCP page.
This is just part of the binary format for the PowerPoint file. The 0, 1, 2, etc. are the bit numbers, so you can see that bits 0 - 3 inclusive are the recVer, and so on.
The specification will tell you what recVer, recInstance and recType mean.
I think recLen should be obvious but it'll be in the spec.
To read it, you'd read in the bytes and then do bit manipulation to decode those fields. You don't say what language you'll be using but you can do bit manipulation in a number of languages.
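For example, here is a minimal Python sketch of that kind of decoding. It assumes the 8 header bytes use a little-endian layout with recVer in the low 4 bits of the first 16-bit value (check the spec for the exact bit ordering); the field names are just the ones from the table:

    import struct

    def parse_record_header(data: bytes):
        """Decode an 8-byte record header into its four fields."""
        ver_instance, rec_type, rec_len = struct.unpack_from("<HHI", data, 0)
        rec_ver      = ver_instance & 0x000F          # low 4 bits
        rec_instance = (ver_instance >> 4) & 0x0FFF   # next 12 bits
        return rec_ver, rec_instance, rec_type, rec_len

    # Example with made-up header bytes
    print(parse_record_header(bytes.fromhex("0f00e80310000000")))  # (15, 0, 1000, 16)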
Not sure about an official/standard name, but this looks like a record layout map.
You read it left to right; every box is a single bit.
The record is composed of:
4 bits recVer
12 bits recInstance
16 bits recType
32 bits recLen

Virtual Machine Instruction Length

I'm creating a virtual machine and I'm encoding the instructions into bytecode. The instructions are hexadecimal numbers like this: 0x1064. This instruction means load the value 100 (hexadecimal 64) into register 0, where 1 is the opcode of the load instruction. My question is: if I wanted to load a larger number, I would change the 64 to a larger number, 3E8 for example (1000 in decimal), and the instruction would be 5 hex digits long. Is it possible to keep the instructions the same length somehow?
It is certainly possible to keep the instructions the same length. In fact, it is possible to have a Turing-complete language using only one instruction! The question is what you want to do.
For simplicity of decoding, you may just decide to make all the instructions the same length. It increases the size of the code, but either way it doesn't really matter much. Just do whatever you think is best.
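As an illustration, here is a minimal Python sketch of one possible fixed-width encoding. The 32-bit layout (8-bit opcode, 8-bit register, 16-bit immediate) is just an assumption for the example, not something taken from your design:

    LOAD = 0x01  # hypothetical opcode for "load immediate into register"

    def encode_load(reg: int, value: int) -> int:
        """Pack a load into a fixed 32-bit word:
        bits 31-24 opcode, bits 23-16 register, bits 15-0 immediate."""
        assert 0 <= value <= 0xFFFF, "immediate must fit in 16 bits"
        return (LOAD << 24) | (reg << 16) | value

    # 0x64 (100) and 0x3E8 (1000) both encode to the same 4-byte width
    print(hex(encode_load(0, 0x64)))   # 0x1000064
    print(hex(encode_load(0, 0x3E8)))  # 0x10003e8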

Representation of a Kilo/Mega/Tera Byte

I was getting a little confused with the representation of different units of bytes.
It is accepted throughout that 1 byte = 8 bits.
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
Doesn't this contradict itself, since in both cases it is stated that 1 byte is 8 bits...?
Different sources give different reasons for these representations, so I am not sure what the most important/real reason is for this rather confusing difference.
Can someone please explain and clarify?
It is accepted throughout that 1 byte = 8 bits
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
To make sure we're all clear, your question is "Is a kilobyte equal to 1024 bytes or 1000 bytes?".
Doesn't this contradict itself, since in both cases it is stated that 1 byte is 8 bits...?
This is irrelevant to the question.
So, let's begin. In SI (metric), the multiplier of 1000 is called kilo, abbreviated k. k always means 1000, never anything else.
When binary computers entered the world, we noticed that 2 to the power of 10 is 1024, which is conveniently close to 1000. Computer engineers decided to abuse this coincidence and say that kilo means 1024. By extension, they say that mega means 1024^2 (instead of the proper definition of 1000^2), and so on with giga, tera, etc.
While the difference between 1000 and 1024 is small for many purposes, there are times when exact answers are required, and this is where the abusive terminology hurts everyone. Only decades after kilo=1024 got established did anyone really try to fix the problem. The IEC proposed new prefixes for the binary multipliers: 1024 = kibi, 1024^2 = mebi, 1024^3 = gibi, etc.
In summary, the notion that kilo=1024 is an abusive deviation from the consistent SI definition of kilo=1000. While kilo=1024 is popular in the computer industry, it is nevertheless wrong and should be replaced by kibi=1024. Or numbers need to be recomputed to reflect the true definition of kilo/mega/etc. (For example, "512 MB" of RAM is actually about 536.9 MB.)
Btw, don't use random capitalization; it's spelled kilobyte, not kiloByte.
References and links:
http://physics.nist.gov/cuu/Units/binary.html
http://en.wikipedia.org/wiki/Kilo-
http://en.wikipedia.org/wiki/Kilobyte
http://en.wikipedia.org/wiki/Kibibyte
http://xkcd.com/394/
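To make the 512 MB example above concrete, here is a quick Python check of how far the two conventions diverge (the 512 figure is just the one used in the answer):

    binary_mb  = 1024 ** 2    # mebibyte: 1,048,576 bytes
    decimal_mb = 1000 ** 2    # SI megabyte: 1,000,000 bytes

    ram_bytes = 512 * binary_mb            # a "512 MB" module is really 512 MiB
    print(ram_bytes)                       # 536870912
    print(ram_bytes / decimal_mb)          # 536.870912 -- about 536.9 SI megabytes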
When you talk about quantities of data in computer science, you always calculate the result with powers of two. See what Wikipedia says:
"In computing, a binary prefix is a
specifier or mnemonic that is
prepended to the units of digital
information, the bit and the byte, to
indicate multiplication by a power of
2. In practice the powers used are multiples of 10, so the prefixes
denote powers of 1024 = 2^10."
Sometimes people round it off as you have mentioned, but that is a bad use of the prefix.
I don't see what the byte-to-bits relationship has to do with anything if you are asking whether 1 kilobyte is equal to 1024 or 1000 bytes. These measurements are not set in stone and are not really controlled at all. Computer makers can use (and have used) the 1000 conversion to make it look like they have more memory.
The confusion comes from thinking in binary (base 2) versus base 10: in base 10 you would use 1000, in base 2, 1024.

Operating System: Paging Question

I have a question that I am trying to answer that gives the following situation:
16K Pages
32-bit Virtual Addresses
512MB hard disk, sector-addressable with 16K sectors
8 processes currently running
I am asked:
i) How many process page tables are required?
I think this is a trick question? Surely the answer is just 1.
ii) If a process address register PAR can be up to 32 bits, what is the maximum amount of physical memory that can be supported on this machine?
iii) How wide in bits should each entry in a process table be if 64MB physical memory is installed?
Please could anyone give me help/hint with the last two parts as I'm really stuck on them? Thanks!
In case you look on here before the exam later today: the catch is that PAR doesn't mean process address register, it means page address register!
Try looking at http://cseweb.ucsd.edu/classes/fa03/cse120/Lec08.pdf for some more information including help about segmentation and paging combined
Also, the book in the IC library called Operating Systems Concepts, with code 005.43SIL, says that each process has its own process page table, which can even be segmented itself!
i) I said 8
ii) Well, 32 bits of virtual memory addressing with 14 bits of offset within a page (2^14 = 16K page length) means there are 18 bits left for the page number. In a 32-bit PAR, this leaves 14 bits for the page location. If you multiply the number of page locations by the page size, you get 2^14 * 2^14 = 2^28, which is 256MB of RAM.
iii) I got 30 bits. 64MB is 2^26 bytes; divided by the page size, that is 2^26 / 2^14 = 2^12, which means 12 bits for the page location. From (ii) I calculated that 18 bits are left in the virtual memory address for the page number, so each entry should be 30 bits wide. I also added a comment that since entries should be byte-aligned, maybe the extra 2 bits can be used to record whether the page has been written to and whether it is currently stored on disk.
Hope this helps!
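A small Python sketch of the arithmetic in (ii) and (iii), following the same reasoning as above (splitting the 32-bit PAR into virtual page number and page location is the assumption made in that answer):

    import math

    page_size  = 16 * 1024    # 16K pages
    vaddr_bits = 32           # virtual address width
    par_bits   = 32           # page address register width

    offset_bits = int(math.log2(page_size))       # 14
    vpn_bits    = vaddr_bits - offset_bits        # 18 bits of virtual page number
    frame_bits  = par_bits - vpn_bits             # 14 bits left for the page location

    max_physical = (2 ** frame_bits) * page_size  # 2^14 frames * 2^14 bytes = 256 MB
    print(max_physical // (1024 * 1024))          # 256

    # (iii) with 64 MB of physical memory installed
    frames = (64 * 1024 * 1024) // page_size      # 2^12 frames -> 12 bits
    entry_width = vpn_bits + int(math.log2(frames))
    print(entry_width)                            # 30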