Why is this question worded like this regarding main memory? - cpu-architecture

I have this question:
1. How many bits are required to address a 4M × 16 main memory if main memory is word-addressable?
And before you say it: yes, I have looked this question up, and there are posts on Stack Overflow asking how to answer it, but my question is different.
This may sound like a silly question, but I don't understand what it means when it says "How many bits are required to address...".
To my understanding, and from what I have been taught, each cell in the RAM chip would contain 16 bits (since we're talking about word-addressable memory), and there would be 2^22 words, numbered from 0 to 4M-1. But I don't understand what it is asking when it says "How many bits are required...":
The answer says 22 bits would be required, but I just don't understand. 22 bits for what? All I know is that each word is 16 bits and each cell would be numbered from 0 to 4M-1. Can someone clear this up for me, please?

Since you have 4 million cells, you need a number that can identify each individual cell. 22 bits is the size of the address, because 22 bits can represent 2^22 = 4,194,304 distinct cells.
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized piece of data handled as a unit by the instruction set or the hardware of the processor.
(https://en.m.wikipedia.org/wiki/Word)
Using this principle imagine a memory with a word that uses 2 bits only, and it is capable of storing 4 words:
XX|YY|WW|ZZ
Each word in this memory is identified by a number that tells the computer its position:
XX is 0
YY is 1
WW is 2
ZZ is 3
The smallest binary number that can represent 3 is 2 bits long, right? Now apply the same reasoning to a larger memory. It doesn't matter whether the word size is 16 bits or 2 bits; only the number of words matters.
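To make that rule concrete, here is a minimal C sketch (my own illustration, not part of the original answer) that counts the address bits needed for a given number of words:

    #include <stdio.h>

    /* Smallest b such that 2^b >= words, i.e. the address width. */
    static unsigned address_bits(unsigned long long words)
    {
        unsigned bits = 0;
        while ((1ULL << bits) < words)
            bits++;
        return bits;
    }

    int main(void)
    {
        /* 4M words = 4 * 1024 * 1024 = 2^22 words */
        printf("%u\n", address_bits(4ULL * 1024 * 1024)); /* prints 22 */
        return 0;
    }

Note that the 16-bit word size never appears in the calculation; only the word count does.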

Related

Word Addressable Memory

For a 32-bit word-addressable memory, each word has a size of 4 bytes.
If I try to store a data structure that uses less than 4 bytes of memory, say 2 bytes, are the remaining 2 bytes wasted?
Should we consider the word size when we decide what data structure to use?
I found a similar question here, but it's not exactly what I am asking.
Please help.
On a modern CPU, memory is usually retrieved in chunks called cache lines (64 bytes on x86), but the CPU instruction set can address individual bytes.
If you had some esoteric machine with an instruction set that couldn't address individual bytes, then your compiler would hide that from you.
Whether or not memory is wasted in data structures smaller than a word would depend on the language you use and its implementation, but generally, records are aligned according to the field with the coarsest requirement. If you have an array of 16 bit integers, they will pack together tightly.
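As a rough illustration of that alignment rule, here is a hypothetical C example (the exact padding depends on the compiler and ABI, so the sizes in the comments are the common case, not a guarantee):

    #include <stdio.h>
    #include <stdint.h>

    struct mixed {      /* coarsest field is 4 bytes, so the struct is */
        uint16_t a;     /* 2 bytes                                     */
        uint32_t b;     /* typically preceded by 2 bytes of padding    */
    };                  /* sizeof is usually 8, not 6                  */

    int main(void)
    {
        uint16_t packed[4]; /* 16-bit integers pack together tightly */
        printf("%zu\n", sizeof(struct mixed)); /* commonly 8 */
        printf("%zu\n", sizeof packed);        /* 8: no padding */
        return 0;
    }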
If you have 3 or 4 integers, it scarcely matters whether you store them in 2, 4, or 8 bytes.
If you have 3 or 4 billion integers, then it's probably worth considering a more space-efficient structure.
Generally speaking, the natural integer size for a given language implementation is supposed to be optimal in some way, so my advice is in general 'use int unless you know it's not appropriate' and let the compiler worry about it - until you have performance data to show otherwise.

Determining maximum memory size, maximum unique operations, and maximum value of an unsigned constant for a 12-bit instruction microprocessor

I'm new to computer architecture and am having some trouble with this question:
A hypothetical microprocessor has 12-bit instructions composed of three fields: the first 4 bits are reserved for the opcode, the next two bits are reserved for a register number, and the remaining bits contain the operand's memory address. What are the maximum memory size, the maximum number of different operations the system would be able to understand, and the maximum value of an unsigned integer?
There’s not enough information to answer that question. You provided specs about the instructions but not about address size or data word size. The only thing we can say for sure is that 4 bits are enough to specify 16 different instructions.
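That said, if you adopt the common textbook assumption that the 6-bit operand field is the complete memory address (an assumption the question never states, as noted above), the arithmetic would look like this:

    #include <stdio.h>

    int main(void)
    {
        /* Assumption (not given in the question): the operand
           field is the complete memory address. */
        int opcode_bits  = 4;
        int reg_bits     = 2;
        int operand_bits = 12 - opcode_bits - reg_bits;  /* 6 */

        printf("operations: %d\n", 1 << opcode_bits);    /* 16 */
        printf("memory words: %d\n", 1 << operand_bits); /* 64 */
        printf("max unsigned operand: %d\n",
               (1 << operand_bits) - 1);                 /* 63 */
        return 0;
    }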

How do i calculate the size of a tag field?

I'm revising for an exam and I've come across a question that I have no idea how to do. I've looked through my notes and can't seem to find anything on it. Can anyone help me?
Given a 64KB cache that contains 1024 blocks with 64 bytes per block, what is the size of the tag field for a 32-bit architecture?
The question is only worth 1 mark, so I can't imagine the answer is too hard, but I can't seem to find anything on it.
You need 32 bits for the address. You need 6 bits for the offset within a block. You need 10 bits to identify one of the 1,024 possible blocks in the cache. That's 16 bits in total. Therefore the tag needs to be 32 bits - 16 bits = 16 bits.
I recommend following the link that aruisdante provided and look at how to calculate this yourself.
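For what it's worth, here is a small C sketch of that arithmetic (my own illustration, assuming a byte-addressable 32-bit address space as the question states):

    #include <stdio.h>

    /* Smallest b such that 2^b >= n. */
    static int log2i(int n) { int b = 0; while ((1 << b) < n) b++; return b; }

    int main(void)
    {
        int address_bits = 32;
        int block_bytes  = 64;
        int blocks       = 1024;

        int offset_bits = log2i(block_bytes); /* 6  */
        int index_bits  = log2i(blocks);      /* 10 */
        int tag_bits    = address_bits - offset_bits - index_bits;

        printf("tag: %d bits\n", tag_bits);   /* 16 */
        return 0;
    }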

Representation of a Kilo/Mega/Tera Byte

I was getting a little confused with the representation of different units of bytes.
It is accepted throughout that 1 byte = 8 bits.
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
Doesn't this contradict as in both cases it is stated that 1 byte is 8 bits...?
Different sources give different reasons for these representations, so I am not sure what the real reason is for this rather confusing difference.
Can someone please explain and clarify?
It is accepted throughout that 1 byte = 8 bits
However, in a lot of sources I have seen that
1 kiloByte = 2^10 bytes = 1024 bytes
AND
1 kiloByte = 1000 bytes
To make sure we're all clear, your question is "Is a kilobyte equal to 1024 bytes or 1000 bytes?".
Doesn't this contradict as in both cases it is stated that 1 byte is 8 bits...?
This is irrelevant to the question.
So, let's begin. In SI (metric), the multiplier of 1000 is called kilo, abbreviated k. k always means 1000, never anything else.
When binary computers entered the world, we noticed that 2 to the power of 10 is 1024, which is conveniently close to 1000. Computer engineers decided to abuse this coincidence and say that kilo means 1024. By extension, they say that mega means 1024^2 (instead of the proper definition of 1000^2), and so on with giga, tera, etc.
While the difference between 1000 and 1024 is small for many purposes, there are times when exact answers are required, and this is where the abusive terminology hurts everyone. Only decades after kilo=1024 became established did anyone really try to fix the problem. The IEC proposed new prefixes for the binary multipliers: 1024 = kibi, 1024^2 = mebi, 1024^3 = gibi, etc.
In summary, the notion that kilo=1024 is an abusive deviation from the consistent SI definition of kilo=1000. While kilo=1024 is popular in the computer industry, it is nevertheless wrong and should be replaced by kibi=1024. Or numbers need to be recomputed to reflect the true definition of kilo/mega/etc. (For example, "512 MB" of RAM is actually about 536.9 MB.)
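A quick C sketch of that last recomputation (my own illustration):

    #include <stdio.h>

    int main(void)
    {
        /* "512 MB" of RAM is really 512 MiB; express it in true SI MB. */
        unsigned long long bytes = 512ULL * 1024 * 1024; /* 536,870,912 */
        printf("%llu bytes = %.1f MB (SI)\n",
               bytes, bytes / 1e6);                      /* ~536.9 MB */
        return 0;
    }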
Btw, don't use random capitalization; it's spelled kilobyte, not kiloByte.
References and links:
http://physics.nist.gov/cuu/Units/binary.html
http://en.wikipedia.org/wiki/Kilo-
http://en.wikipedia.org/wiki/Kilobyte
http://en.wikipedia.org/wiki/Kibibyte
http://xkcd.com/394/
When you talk about units of digital information in computer science, you traditionally calculate by powers of two. See what Wikipedia says:
"In computing, a binary prefix is a
specifier or mnemonic that is
prepended to the units of digital
information, the bit and the byte, to
indicate multiplication by a power of
2. In practice the powers used are multiples of 10, so the prefixes
denote powers of 1024 = 2^10."
Sometimes people round it as you have mentioned, but that is a misuse of the prefix.
I don't see what the bytes-to-bits relationship has to do with anything if you are asking whether 1 kilobyte is equal to 1024 or 1000 bytes. These measurements are not set in stone and are not really controlled at all. Computer makers can (and have) used the 1000 conversion to make it look like they have more memory.
The problem comes up when thinking about binary (base 2) or base 10. Base 10 you would use 1000, base 2, 1024.

What is the exact meaning of an 'N-bit' processor? Clarification for Freescale arch

While reading a Freescale processor manual, I got stuck where it specifies that it is a 32-bit processor.
May I know the exact meaning and logic behind that?
Update:
Does it refer to the ALU width, the address width, or the register width specifically, or are all of them N bits each?
Update:
I hope you have heard of Freescale processors. I just came across their site, which describes one of their latest StarCore-based processors, known as the SC3850, as a 16-bit processor. As far as I know, it has a 32-bit program counter and ALU, 40-bit registers, and a 2x64-bit address bus. The SC3850 can also handle SIMD(2) instructions of 32 or 64 bits.
For more details please go through this link
One of the major reasons you would care about the register width of the processor is performance. Generally doubling the number of bits doubles the rate at which a processor can move data around, and compute. This is why we're not all using 8 bit processors.
The other major reason is address space. A 16 bit program counter limits you to 64k of address space, and a 32 bit counter limits you to 4 gigabytes. The new 64 bit processors make it possible, if all the address lines are present, to support 17,179,869,184 gigabytes of memory.
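As a small illustration of those limits (my own sketch, assuming byte-addressable memory):

    #include <stdio.h>

    int main(void)
    {
        /* Bytes reachable with an n-bit address. */
        printf("16-bit: %llu bytes\n", 1ULL << 16); /* 65,536 = 64 KiB */
        printf("32-bit: %llu bytes\n", 1ULL << 32); /* 4 GiB */
        /* 2^64 bytes would overflow a 64-bit shift, so report it in
           GiB instead: 2^64 bytes = 2^34 GiB. */
        printf("64-bit: %llu GiB\n", 1ULL << 34);   /* 17,179,869,184 */
        return 0;
    }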
Firstly, I don't have a definitive answer, but I would guess that 8 being a power of 2 is an important factor. Being a power of 2 also means that certain optimisations can be performed by dividing the 8 bits into groups, which in turn means lookup tables can be used for certain operations. In the past, 8 bits was also the perfect size for dealing with plain old ASCII characters. I can imagine that using 5-bit bytes and encoding a string of ASCII characters across memory would be a pain.
Please check out the Wikipedia entry on 32-bit processors, from the entry:
In computer architecture, 32-bit integers, memory addresses, or other data units are those that are at most 32 bits (4 octets) wide. Also, 32-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. 32-bit is also a term given to a generation of computers in which 32-bit processors were the norm.
Read and understand the article - then the answer for N will be obvious.