How would I convert a 16K x 32 SRAM into a 64K x 8 SRAM? The 16K x 32 RAM module is a single unit that cannot be altered internally; it performs its own address decoding and has tristate outputs plus read, write, and chip-enable inputs. Also, only the read path of the circuit has to be implemented.
The 16K x 32 SRAM accepts a 14-bit address and produces a 32-bit value, whereas the 64K x 8 SRAM accepts a 16-bit address and produces an 8-bit value. You can design a circuit whose interface exactly matches that of a 64K x 8 SRAM module. Internally, it sends the most significant 14 bits of the supplied 16-bit address to the 16K x 32 SRAM together with the control signals, then uses the least significant 2 bits of the 16-bit address to select one 8-bit byte from the 32-bit value produced by the 16K x 32 SRAM. The selected byte is driven onto the output pins of the circuit.
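As a rough software model of that read path (just a sketch: the array-backed sram_read_16Kx32 stand-in, the function names, and the little-endian byte ordering within each 32-bit word are assumptions for illustration; real hardware would implement the byte select as a 4-to-1 multiplexer on the 32-bit tristate outputs):

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the fixed 16K x 32 module: 2^14 words of 32 bits,
   addressed by a 14-bit word address. */
static uint32_t sram_16Kx32[1u << 14];

static uint32_t sram_read_16Kx32(uint16_t word_addr14)
{
    return sram_16Kx32[word_addr14 & 0x3FFF];
}

/* Read side of the 64K x 8 wrapper: the upper 14 address bits select a
   32-bit word, the lower 2 bits pick one of its four bytes. */
static uint8_t sram_read_64Kx8(uint16_t addr16)
{
    uint32_t word     = sram_read_16Kx32(addr16 >> 2); /* 14 MSBs */
    unsigned byte_sel = addr16 & 0x3u;                 /* 2 LSBs  */
    return (uint8_t)(word >> (8u * byte_sel));
}

int main(void)
{
    sram_16Kx32[1] = 0xDDCCBBAAu;  /* word behind 16-bit addresses 4..7 */
    for (uint16_t a = 4; a < 8; ++a)
        printf("byte at 0x%04X = 0x%02X\n", a, sram_read_64Kx8(a));
    return 0;
}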
How can I find the DRAM row buffer size programmatically, or by using existing tools on, say, a *nix system?
As an example, with a Kingston DDR4 stick, I executed the following commands (you might need to install some packages):
sudo modprobe eeprom
decode-dimms
These commands, among much other information, give me the characteristics of my DDR stick:
---=== Memory Characteristics ===---
Maximum module speed 2132 MHz (PC4-17000)
Size 16384 MB
Banks x Rows x Columns x Bits 16 x 16 x 10 x 64
SDRAM Device Width 8 bits
Ranks 2
Rank Mix Symmetrical
AA-RCD-RP-RAS (cycles) 14-14-14-35
Supported CAS Latencies 16T, 15T, 14T, 13T, 12T, 11T, 9T
Rows and columns are actually numbers of address bits (you can check the decode-dimms source at https://fossies.org/linux/i2c-tools/eeprom/decode-dimms and see what the code is doing by looking at the DDR4 SPD layout: https://en.wikipedia.org/wiki/Serial_presence_detect#DDR4_SDRAM).
Thus, if we have 10 column bits, we have 1024 columns (2^10), where each column is as wide as the module width (64, reported as "Bits" in the data above). Since we can also see that the SDRAM device width is x8 (8 bits), we can deduce that the DIMM locksteps 8 SDRAM chips (the black packages you see on your DRAM stick) to reach that total width of 64 bits.
The row buffer in my DDR4 is, therefore, 64 bits * 1024 columns = 65536 bits wide (8192 bytes). Row buffers in DDR3 and DDR4 are mostly this size, but newer architectures such as HMC and HBM use different sizes.
So, as one short command line, returning the row buffer size in bits (just divide by 8 to get bytes):
decode-dimms | grep "Columns x Bits" | awk -F 'x' '{print (2^$(NF-1))*$NF}'
PS: mind you, this handles a single DIMM. If you have multiple DIMMs, decode-dimms might return information for multiple modules.
Why does a 32-bit operating system have the limitation of 4 GB of RAM? I know the processor has a register, the PC (program counter), where the address of an instruction is located, and if this register is 32 bits wide, that is a hardware limitation because of the size of the register. But why does a 32-bit operating system have this limitation? Is it hardcoded in its kernel that the maximum available RAM can be 2^32 bytes?
Thanks
Why does a 32-bit operating system have the limitation of 4 GB of RAM?
It probably doesn't.
"32-bit" typically refers to the size of general purpose registers and may have nothing to do with the size of any address. For example, for modern 64-bit operating systems addresses are often only 48 bits.
Also, most operating systems use some form of paging, where virtual address size may have nothing to do with physical address size. For example, for 32-bit 80x86 using PAE (Physical Address Extension), virtual addresses are limited to 32 bits (causing each process to be limited to 4 GiB of "virtual space", minus whatever the kernel reserves for itself), but physical addresses are/were 36 bits (giving a limit of "up to 64 GiB of RAM, minus space used for devices, ROMs, etc.").
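As a sketch of how that split looks in practice, here is how a 32-bit virtual address is divided under x86 PAE with 4 KiB pages (2-bit PDPT index, 9-bit directory index, 9-bit table index, 12-bit offset); the 64-bit page-table entries are what allow physical frame numbers wider than 32 bits, while the virtual side stays at 4 GiB per process. The example address is arbitrary:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vaddr = 0xC0ABCDEFu;                 /* arbitrary example address */

    uint32_t pdpt_index = (vaddr >> 30) & 0x3;    /* bits 31..30 */
    uint32_t pd_index   = (vaddr >> 21) & 0x1FF;  /* bits 29..21 */
    uint32_t pt_index   = (vaddr >> 12) & 0x1FF;  /* bits 20..12 */
    uint32_t offset     =  vaddr        & 0xFFF;  /* bits 11..0  */

    printf("PDPT=%u PD=%u PT=%u offset=0x%03X\n",
           pdpt_index, pd_index, pt_index, offset);

    printf("virtual limit per process: %llu bytes\n", 1ULL << 32);
    printf("36-bit physical limit:     %llu bytes\n", 1ULL << 36);
    return 0;
}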
Even when physical address size is 32-bit, there are other hardware restrictions - e.g. some of those bits may be ignored by the hardware and/or used for other purposes (e.g. one bit used as an "encrypt the RAM or not" flag), and/or not supported by the RAM controller; and some of the physical address space must be used by things that are not RAM (ROM, devices, etc); so it's extremely unlikely that "32-bit physical addresses" will mean "max. of 4 GiB of RAM".
Finally, it is possible for hardware to support bank switching, where RAM is split into banks and some banks are mapped into the physical address space while others are not, and where "which bank(s) are selected" is controlled by the OS using special hardware. This was very common for 8-bit and 16-bit CPUs (e.g. "expanded memory" cards plugged into an ISA slot in PCs in the 1980s), but has become significantly less common as physical address sizes have increased.
The limit exists because in 32-bit architectures you reference memory locations using 32-bit addresses, so you can only reference 2^32 distinct addresses. Next, take into consideration that each address refers to a single byte, which is 8 bits. This means that in effect you can reference 2^32 * 8 bits.
Now let's get to the math: if you can reference 2^32 * 8 bits, you can reference 2^35 bits, and 2^35 = 34359738368 bits = 4294967296 bytes = 4194304 kilobytes = 4096 megabytes.
And that is why you can only reference 4 GiB of memory on 32-bit computers.
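The same chain of conversions as a quick check in C (plain arithmetic, assuming byte-addressable memory):

#include <stdio.h>

int main(void)
{
    unsigned long long addresses = 1ULL << 32;    /* 2^32 distinct addresses */
    unsigned long long bits      = addresses * 8; /* one 8-bit byte each     */

    printf("%llu bits\n",      bits);                    /* 34359738368 */
    printf("%llu bytes\n",     bits / 8);                /* 4294967296  */
    printf("%llu kilobytes\n", bits / 8 / 1024);         /* 4194304     */
    printf("%llu megabytes\n", bits / 8 / 1024 / 1024);  /* 4096        */
    return 0;
}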
It is said that the 8086 microprocessor has 1 MB of memory, a 20-bit address bus, and a 16-bit data bus. My doubt is this: if it has 1 MB of memory, that means (2^20 * 2^3) bits (since 1 byte = 8 bits), i.e. 2^23 bits, is the whole memory size. But since the 8086 has 16-bit registers, 2^20 (from the address lines) * 2^4 (16-bit word size) should be the memory, i.e. 2^24 bits, which is not what I calculated above.
So there is a flaw in my assessment; what is it?
Each of the 2^20 addresses refers to an 8-bit Byte.
Some of the 8086's machine instructions operate on Bytes (8-bits) (using registers AH, AL, BH, BL, ...) and other machine instructions operate on Words (16-bits) (using registers AX, BX, ...).
When using a Word instruction, two adjacent bytes in memory (addresses (a) and (a+1)) are treated as a Word datum. I do not recall if the 8086 enforces even address alignment for Word-datum memory references. But 2^20 Bytes contain only 2^19 Words (aligned to even addresses).
Bits are conserved:
(2^20 * 2^3) = (2^19 * 2^4) = 2^23
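A small C illustration of the same accounting (the tiny example array and its values are made up; the 8086 stores words little-endian, with the low byte at the lower address):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A tiny slice of byte-addressable memory. */
    uint8_t mem[8] = { 0x34, 0x12, 0x78, 0x56, 0, 0, 0, 0 };

    /* Word access at address a: low byte at a, high byte at a+1. */
    unsigned a = 2;
    uint16_t word = (uint16_t)(mem[a] | (mem[a + 1] << 8));
    printf("word at address %u = 0x%04X\n", a, word);    /* 0x5678 */

    /* Bits are conserved: 2^20 bytes * 8 = 2^19 words * 16 = 2^23 bits. */
    printf("%llu = %llu = %llu\n",
           (1ULL << 20) * 8, (1ULL << 19) * 16, 1ULL << 23);
    return 0;
}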
I heard that in the current implementation of Intel processors the upper 16 bits of a virtual address are set to zero. Doesn't that mean the machine can only use up to 2^48 bytes of memory instead of 2^64 bytes? Why have they implemented it this way? Is it because a normal machine doesn't really need 2^64 bytes of memory?
Essentially, how does 4 Gb turn into 4 GB? If the memory is addressing bytes, should not the possibilities be 2^(32/8)?
It depends on how you address the data.
If you use 32 bits to address each bit, you can address 2^32 bits, or 4 Gb = 512 MB. If you address bytes, as most current architectures do, it gives you 4 GB.
But if you address much larger blocks, you need fewer bits to address 4 GB. For example, if you address 512-byte blocks (2^9 bytes), you can address 4 GB with 23 bits. FAT16 uses 16 bits to address (at most) 64 KB clusters and can therefore address at most a 4 GB volume. The same idea is used in Java Compressed Oops, where you can address 32 GB of memory with a 32-bit reference.
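A quick sketch of that trade-off: with a fixed number of address bits, the reachable capacity grows with the size of the addressable unit (the numbers below simply mirror the examples above):

#include <stdio.h>

/* Capacity reachable with addr_bits address bits when each address
   names a unit of unit_bytes bytes. */
static unsigned long long reach(unsigned addr_bits, unsigned long long unit_bytes)
{
    return (1ULL << addr_bits) * unit_bytes;
}

int main(void)
{
    printf("32-bit addresses, 1-byte units:    %llu bytes\n", reach(32, 1));     /* 4 GB  */
    printf("23-bit addresses, 512-byte blocks: %llu bytes\n", reach(23, 512));   /* 4 GB  */
    printf("16-bit addresses, 64 KB clusters:  %llu bytes\n", reach(16, 65536)); /* 4 GB  */
    printf("32-bit refs, 8-byte units (Oops):  %llu bytes\n", reach(32, 8));     /* 32 GB */
    return 0;
}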
Some older architectures even use word-addressable memory instead of the byte-addressable memory most use nowadays. Modern architectures whose minimum addressable unit is bigger than an octet are mainly found in DSPs. There are also a few architectures with bit-addressable memory, such as the Intel 8051.
Most modern computers are byte-addressable, with each address identifying a single eight-bit byte of storage; data too large to be stored in a single byte may reside in multiple bytes occupying a sequence of consecutive addresses.
There exist word-addressable computers, where the minimal addressable storage unit is exactly the processor's word. For example, the Data General Nova minicomputer and the Texas Instruments TMS9900 and National Semiconductor IMP-16 microcomputers used 16-bit words, and there were many 36-bit mainframe computers (e.g., the PDP-10) which used 18-bit word addressing, not byte addressing, giving an address space of 2^18 36-bit words, approximately 1 megabyte of storage.
The efficiency of addressing of memory depends on the bit size of the bus used for addresses – the more bits used, the more addresses are available to the computer. For example, an 8-bit-byte-addressable machine with a 20-bit address bus (e.g. Intel 8086) can address 2^20 (1,048,576) memory locations, or one MiB of memory, while a 32-bit bus (e.g. Intel 80386) addresses 2^32 (4,294,967,296) locations, or a 4 GiB address space.
The electrical interface on the chip consists (extremely simplified) of wires for the address (e.g. 32 address lines) and wires for the data (e.g. 8 wires for read data coming from the RAM, 8 wires for write data going to the RAM). In this case you have 2^32 words of 8 bits, so you can address 2^32 * 8 bits of data.
If you had a RAM with a word width of 16 bits instead (much more likely than 8 bits), you would be able to address twice as much RAM with the same number of address bits. On a modern system you cannot really "read one byte"; instead the CPU fetches a whole cache line from the RAM and then gives you back just the byte that you asked for.
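As a hedged sketch of that last point, assuming a 64-byte cache line (a common size, not universal): the low 6 bits of the address pick the byte inside the line the CPU actually fetched.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 64u   /* assumed cache-line size in bytes */

int main(void)
{
    uint32_t addr = 0x12345678u;

    /* Which 64-byte line the address falls in, and where inside it. */
    uint32_t line_base   = addr & ~(LINE_SIZE - 1);
    uint32_t byte_offset = addr &  (LINE_SIZE - 1);

    /* Pretend this is the whole line the memory returned. */
    uint8_t line[LINE_SIZE];
    memset(line, 0xAB, sizeof line);

    printf("line base 0x%08X, offset %u, byte 0x%02X\n",
           line_base, byte_offset, line[byte_offset]);
    return 0;
}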
You can address 2 fields in memory with 1 bit.
You can address 4 fields in memory with 2 bits.
00, 01, 10, 11
So we can address 2^n locations with n address bits. For a 32-bit address where each address holds 1 byte, we can address 4 GB of data.
2^32 = 4,294,967,296 addresses can hold 4 GB of data.