Why is the size of partitions 64M in Minix filesystem? - operating-system

I have read Introduction to the Minix File System from Wikipedia. I don't understand this sentence, "but since the Minix fs uses unsigned shorts for block pointers, it is limited to 64M partitions". What's the relationship between the data structure of block pointers and the size of partitions?

MINIX 1 and 2 (and the Linux minixfs which derives from it) use fixed-size 1,024-byte blocks. If each block is given a 16-bit (unsigned short in C) number, the farthest byte of the farthest block will be at offset 64 MiB-1.
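To make the arithmetic concrete, here is a tiny C sketch of that limit (nothing here is MINIX code, just the two constants mentioned above):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t block_size = 1024;         /* fixed MINIX 1/2 block size, in bytes */
    uint64_t num_blocks = 1ULL << 16;   /* a 16-bit block pointer can name 2^16 blocks */
    uint64_t max_bytes  = block_size * num_blocks;

    /* 65,536 blocks * 1,024 bytes = 67,108,864 bytes = 64 MiB */
    printf("max partition size: %llu bytes = %llu MiB\n",
           (unsigned long long)max_bytes,
           (unsigned long long)(max_bytes >> 20));
    return 0;
}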
Those two limits were lifted in 2005 with MINIX 3, but the vast majority of existing file systems comply with the older format, which is more than enough for the floppies it was designed for.

Related

Word Addressable Memory

For a 32 bit word addressable memory, the word has size of 4 bytes.
If I try to store a data structure that uses less than 4 bytes of memory, say 2 bytes, are the remaining 2 bytes wasted?
Should we consider the word size when we decide what data structure to use?
I found a similar question here, but it is not exactly what I am asking.
Please help.
On a modern CPU, memory is usually retrieved in chunks called cache lines (64 bytes on x86), but the CPU instruction set can address individual bytes.
If you had some esoteric machine with an instruction set that couldn't address individual bytes, then your compiler would hide that from you.
Whether or not memory is wasted in data structures smaller than a word depends on the language you use and its implementation, but generally, records are aligned according to the field with the coarsest alignment requirement. If you have an array of 16-bit integers, they will pack together tightly.
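For example, here is a small C sketch of both effects; the exact sizes are implementation-defined, but these values are typical on mainstream ABIs:

#include <stdio.h>
#include <stdint.h>

struct mixed {
    uint16_t a;   /* 2 bytes */
    /* 2 bytes of padding are typically inserted here ... */
    uint32_t b;   /* ... so this 4-byte field stays 4-byte aligned */
};

int main(void) {
    uint16_t packed[4];   /* an array of 16-bit integers packs tightly: no padding */
    printf("sizeof(struct mixed) = %zu\n", sizeof(struct mixed));  /* typically 8, not 6 */
    printf("sizeof(packed)       = %zu\n", sizeof(packed));        /* 8 */
    return 0;
}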
If you have 3 or 4 integers, it scarcely matters whether you store them in 2, 4, or 8 bytes.
If you have 3 or 4 billion integers, then it's probably worth considering a more space-efficient structure.
Generally speaking, the natural integer size for a given language implementation is supposed to be optimal in some way, so my advice is in general 'use int unless you know it's not appropriate' and let the compiler worry about it - until you have performance data to show otherwise.

advantages of segmentation in 8086 microprocessor

what are the advantages of segmentation in 8086 microprocessor?
Not getting the importance of segmentation. Is it for managing more memory?
The instruction set used in 8086 is a 16-bit instruction set. This means that a register can only store values in the range 0x0000 to 0xFFFF, and instructions mostly only did 16-bit operations (16-bit addition, 16-bit subtraction, etc). If a register contains an address/pointer, then it would've worked out to a maximum of 64 KiB of address space (some for ROMs, some for RAM) and this wasn't enough for the market at the time.
Segmentation was a way to allow the 16-bit CPU to support a larger address space. Essentially, combining two 16-bit registers together, so that addresses/pointers could be much larger. Unfortunately (likely, to avoid "unnecessary at the time" costs of having more address lines on the CPU's bus), instead of using two 16-bit registers as a 32-bit address, Intel did an "address = segment * 16 + offset" thing to end up with a 20-bit address, giving the 8086 a 1 MiB address space.
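As a small sketch, that real-mode address calculation looks like this in C (the segment:offset values below are just made up for illustration):

#include <stdio.h>
#include <stdint.h>

/* 8086 real mode: physical address = segment * 16 + offset (a 20-bit result) */
static uint32_t phys_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    /* e.g. 0x1234:0x5678 -> 0x12340 + 0x5678 = 0x179B8 */
    printf("0x%05X\n", (unsigned)phys_addr(0x1234, 0x5678));
    return 0;
}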
Later (early 1980s) there was a push towards "protected objects" where "objects" (in object oriented programming) could be given access controls and limits that are enforced/checked by hardware, and around the same time there were "virtual memory" ideas floating around. These ideas led to the ill-fated iAPX 432 CPU; but also led to the idea of associating protection (attributes and limits) to the segments that 8086 already had, which resulted in the "protected mode" introduced with 80286 (and extended in 80386).
Essentially; the original reason for (advantage of) segments was to increase the address space (without the cost of a 32-bit instruction set, etc); and things like protection and memory management were retro-fitted afterwards (and then barely used by software before being abandoned in favour of paging).
Answer
Memory is divided into segments of various sizes. A segment is just an area in memory, and the process of dividing memory in this way is called segmentation.
Data is stored as bytes, and each byte has a specific address.
The 8086 has a 20-line address bus, so it can address 2^20 bytes = 1 MiB.
There are 4 types of segments:
Code Segment
Data Segment
Stack Segment
Extra Segment
Each of these segments is addressed by a value stored in the corresponding segment register.
The segment registers are 16 bits in size.
Each one stores the upper 16 bits of its segment's 20-bit base address (the low 4 bits of the base are implicitly zero).

CISC instruction length

I was wondering, what is the maximum possible length of a CISC instruction on most of today's CISC architectures?
I haven't found the definitive answer yet, but it is suggested that it's 16 bytes long, in theory.
In the video, at around 15:00, why does the speaker suggest "in theory", and why exactly 16 bytes?
In practice as well. For x86-64, AMD has limited the allowed instruction length to 15 bytes. Beyond that, the instruction decoder will give up and signal an error.
Otherwise, with multiple instruction prefixes and override bytes, we don't know exactly how long the instruction could get. No limit at all, if we allow redundant repetitions of some prefixes.
Agner Fog describes the problem:
Executing three, four or five instructions simultaneously is not unusual. The limit is not the execution units, which we have plenty of, but the instruction decoder. The length of an instruction can be anywhere from one to fifteen bytes. If we want to decode several instructions simultaneously, then we have a serious problem. We have to know the length of the first instruction before we know where the second instruction begins. So we can't decode the second instruction before we have decoded the first instruction. The decoding is a serial process by nature, and it takes a lot of hardware to be able to decode multiple instructions per clock cycle. In other words, the decoding of instructions can be a serious bottleneck, and it becomes worse the more complicated the instruction codes are.
See the rest of his blog post here.
CISC is a design philosophy, not an architecture; therefore there's no such thing as a "CISC instruction length", only the instruction length of a specific CISC architecture (like x86 or Motorola 68k).
Talking specifically about x86, the limit is 15 bytes. Theoretically the instruction length could be unbounded because prefixes can be repeated. However, that makes things difficult for the decoder, so starting with the 80286 Intel limited it to 10 bytes, and then to 15 bytes in later ISA versions. For more information, read:
x86_64 ASM - maximum bytes for an instruction?
What is the maximum length an Intel 386 instruction without any prefixes?
Also note that RISC doesn't mean fixed-length instructions. Modern MIPS, ARM, RISC-V... all have a variable-length instruction mode to increase code density.

What is the exact meaning of 'N'-bit processor? Clarification for Freescale arch

While reading a Freescale processor manual I got stuck at the point where it specifies that it is a 32-bit processor.
May I know the exact meaning and logic behind that?
Update:
Does it specify the ALU width, the address width, or the register width specifically, or are all of them N bits each?
Update:
I hope you have heard of Freescale processors. I just came across their site, which describes one of their latest StarCore-based processors, known as the SC3850, as a 16-bit processor. As far as I know, it has a 32-bit program counter and ALU, a 40-bit register width, and a 2x64-bit address bus width. Also, the SC3850 can handle SIMD(2) instructions which are 32-bit or 64-bit.
For more details please go through this link
One of the major reasons you would care about the register width of the processor is performance. Generally, doubling the number of bits doubles the rate at which a processor can move data around and compute. This is why we're not all using 8-bit processors.
The other major reason is address space. A 16-bit program counter limits you to 64 KiB of address space, and a 32-bit counter limits you to 4 GiB. The new 64-bit processors make it possible, if all the address lines are present, to support 17,179,869,184 gigabytes of memory.
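As a rough illustration, those address-space sizes follow directly from the pointer width (a small C sketch, nothing vendor-specific):

#include <stdio.h>

int main(void) {
    /* bytes addressable with an N-bit address: 2^N */
    printf("16-bit: %llu bytes (64 KiB)\n", 1ULL << 16);
    printf("32-bit: %llu bytes (4 GiB)\n",  1ULL << 32);
    /* 2^64 bytes itself doesn't fit in a 64-bit variable, so express it in GiB:
       2^(64-30) GiB = 17,179,869,184 GiB */
    printf("64-bit: %llu GiB\n", 1ULL << (64 - 30));
    return 0;
}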
Firstly, I don't have a definitive answer, but I would guess that 8 being a power of 2 is an important factor. Being a power of 2 also means that certain optimisations may be performed by dividing the 8 bits into groups, which also means lookup tables can be used for certain operations. 8 bits was also the perfect size in the past when dealing with plain old ASCII characters. I can imagine that using 5-bit bytes and encoding a string of ASCII characters across memory would be a pain.
Please check out the Wikipedia entry on 32-bit processors, from the entry:
In computer architecture, 32-bit integers, memory addresses, or other data units are those that are at most 32 bits (4 octets) wide. Also, 32-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. 32-bit is also a term given to a generation of computers in which 32-bit processors were the norm.
Read and understand the article - then the answer for N will be obvious.

Why doesn't my processor have built-in BigInt support?

As far as I understand it, BigInts are usually implemented in most programming languages as arrays containing digits, where, when adding two of them, each digit is added one after another as we know it from school, e.g.:
  246
+ 816
  * *
 ----
 1062
Where * marks that there was an overflow. I learned it this way at school and all BigInt adding functions I've implemented work similar to the example above.
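For concreteness, a minimal C sketch of that digit-by-digit scheme might look like this (digits stored least-significant first; the function name is just illustrative, not from any particular library):

#include <stdio.h>
#include <stddef.h>

/* result = a + b, where each array holds base-10 digits, least-significant digit first.
   result must have room for len + 1 digits; returns the number of digits written. */
static size_t bigint_add_digits(const int *a, const int *b, int *result, size_t len) {
    int carry = 0;
    for (size_t i = 0; i < len; i++) {
        int sum = a[i] + b[i] + carry;   /* add one column, plus the carry from the previous one */
        result[i] = sum % 10;
        carry = sum / 10;
    }
    result[len] = carry;                 /* possible extra digit, like the leading 1 in 1062 */
    return carry ? len + 1 : len;
}

int main(void) {
    int a[] = {6, 4, 2};                 /* 246, least-significant digit first */
    int b[] = {6, 1, 8};                 /* 816 */
    int r[4];
    size_t n = bigint_add_digits(a, b, r, 3);
    for (size_t i = n; i-- > 0; )        /* print most-significant digit first */
        printf("%d", r[i]);
    printf("\n");                        /* prints 1062 */
    return 0;
}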
So we all know that our processors can only natively manage ints from 0 to 2^32 / 2^64.
That means that most scripting languages, in order to be high-level and offer arithmetic with big integers, have to implement or use BigInt libraries that work with integers as arrays like the one above.
But of course this means that they'll be far slower than the processor.
So what I've asked myself is:
Why doesn't my processor have a built-in BigInt function?
It would work like any other BigInt library, only (a lot) faster and at a lower level: Processor fetches one digit from the cache/RAM, adds it, and writes the result back again.
Seems like a fine idea to me, so why isn't there something like that?
There are simply too many issues that require the processor to deal with a ton of stuff which isn't its job.
Suppose that the processor DID have that feature. We can work out a system where we know how many bytes are used by a given BigInt - just use the same principle as most string libraries and record the length.
But what would happen if the result of a BigInt operation exceeded the amount of space reserved?
There are two options:
1) It'll wrap around inside the space it does have, or
2) It'll use more memory.
The thing is, if it did 1), then it's useless - you'd have to know how much space was required beforehand, and that's part of the reason you'd want to use a BigInt - so you're not limited by those things.
If it did 2), then it would have to allocate that memory somehow. Memory allocation is not done in the same way across OSes, but even if it were, it would still have to update all pointers to the old location. How would it know which words were pointers to the value, and which were simply integers that happened to contain the same value as the memory address in question?
Binary Coded Decimal is a form of string math. The Intel x86 processors have opcodes for direct BCD arithmetic operations.
It would work like any other BigInt library, only (a lot) faster and at a lower level: Processor fetches one digit from the cache/RAM, adds it, and writes the result back again.
Almost all CPUs do have this built-in. You have to use a software loop around the relevant instructions, but that doesn't make it slower if the loop is efficient. (That's non-trivial on x86, due to partial-flag stalls, see below)
e.g. if x86 provided rep adc to do dst += src, taking 2 pointers and a length as input (like rep movsd to memcpy), it would still be implemented as a loop in microcode.
It would be possible for a 32bit x86 CPU to have an internal implementation of rep adc that used 64bit adds internally, since 32bit CPUs probably still have a 64bit adder. However, 64bit CPUs probably don't have a single-cycle latency 128b adder. So I don't expect that having a special instruction for this would give a speedup over what you can do with software, at least on a 64bit CPU.
Maybe a special wide-add instruction would be useful on a low-power low-clock-speed CPU where a really wide adder with single-cycle latency is possible.
The x86 instructions you're looking for are:
adc: add with carry / sbb: subtract with borrow
mul: full multiply, producing upper and lower halves of the result: e.g. 64b*64b => 128b
div: dividend is twice as wide as the other operands, e.g. 128b / 64b => 64b division.
Of course, adc works on binary integers, not single decimal digits. x86 can adc in 8, 16, 32, or 64bit chunks, unlike RISC CPUs which typically only adc at full register width. (GMP calls each chunk a "limb"). (x86 has some instructions for working with BCD or ASCII, but those instructions were dropped for x86-64.)
imul / idiv are the signed equivalents. Add works the same for signed 2's complement as for unsigned, so there's no separate instruction; just look at the relevant flags to detect signed vs. unsigned overflow. But for adc, remember that only the most-significant chunk has the sign bit; the rest are essentially unsigned.
ADX and BMI/BMI2 add some instructions like mulx: full-multiply without touching flags, so it can be interleaved with an adc chain to create more instruction-level parallelism for superscalar CPUs to exploit.
In x86, adc is even available with a memory destination, so it performs exactly like you describe: one instruction triggers the whole read / modify / write of a chunk of the BigInteger. See example below.
Most high-level languages (including C/C++) don't expose a "carry" flag
Usually there aren't add-with-carry intrinsics directly in C. BigInteger libraries usually have to be written in asm for good performance.
However, Intel actually has defined intrinsics for adc (and adcx / adox).
unsigned char _addcarry_u64 (unsigned char c_in, unsigned __int64 a,
                             unsigned __int64 b, unsigned __int64 * out);
So the carry result is handled as an unsigned char in C. For the _addcarryx_u64 intrinsic, it's up to the compiler to analyse the dependency chains and decide which adds to do with adcx and which to do with adox, and how to string them together to implement the C source.
IDK what the point of the _addcarryx intrinsics is, instead of just having the compiler use adcx/adox for the existing _addcarry_u64 intrinsic when there are parallel dep chains that can take advantage of them. Maybe some compilers aren't smart enough for that.
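For illustration, a minimal (untested) C sketch of a limb-by-limb add using that intrinsic might look like this; the helper name and signature are just made up for the example:

#include <immintrin.h>
#include <stddef.h>

/* dst += src, both arrays of 'len' 64-bit limbs, least-significant limb first.
   Returns the final carry-out.  The compiler is expected to turn the chain of
   intrinsics into an adc loop, but how well it does so varies by compiler. */
static unsigned char bigint_add_u64(unsigned long long *dst,
                                    const unsigned long long *src,
                                    size_t len)
{
    unsigned char carry = 0;
    for (size_t i = 0; i < len; i++) {
        carry = _addcarry_u64(carry, dst[i], src[i], &dst[i]);
    }
    return carry;
}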
Here's an example of a BigInteger add function, in NASM syntax:
;;;;;;;;;;;; UNTESTED ;;;;;;;;;;;;
; C prototype:
; void bigint_add(uint64_t *dst, uint64_t *src, size_t len);
; len is an element-count, not byte-count

global bigint_add
bigint_add:                   ; AMD64 SysV ABI: dst=rdi, src=rsi, len=rdx
    ; set up for using dst as an index for src
    sub     rsi, rdi          ; rsi -= dst.  So orig_src = rsi + rdi
    clc                       ; CF=0 to set up for the first adc
                              ; alternative: peel the first iteration and use add instead of adc
.loop:
    mov     rax, [rsi + rdi]  ; load from src
    adc     rax, [rdi]        ; <================= ADC with dst
    mov     [rdi], rax        ; store back into dst.  This appears to be cheaper than
                              ; adc [rdi], rax since we're using a non-indexed
                              ; addressing mode that can micro-fuse
    lea     rdi, [rdi + 8]    ; pointer-increment without clobbering CF
    dec     rdx               ; preserves CF
    jnz     .loop             ; loop while(--len)

    ret
On older CPUs, especially pre-Sandybridge, adc will cause a partial-flag stall when reading CF after dec writes other flags. Looping with a different instruction will help for old CPUs which stall while merging partial-flag writes, but not be worth it on SnB-family.
Loop unrolling is also very important for adc loops. adc decodes to multiple uops on Intel, so loop overhead is a problem, esp if you have extra loop overhead from avoiding partial-flag stalls. If len is a small known constant, a fully-unrolled loop is usually good. (e.g. compilers just use add/adc to do a uint128_t on x86-64.)
adc with a memory destination appears not to be the most efficient way, since the pointer-difference trick lets us use a single-register addressing mode for dst. (Without that trick, memory-operands wouldn't micro-fuse).
According to Agner Fog's instruction tables for Haswell and Skylake, adc r,m is 2 uops (fused-domain) with one per 1 clock throughput, while adc m, r/i is 4 uops (fused-domain), with one per 2 clocks throughput. Apparently it doesn't help that Broadwell/Skylake run adc r,r/i as a single-uop instruction (taking advantage of ability to have uops with 3 input dependencies, introduced with Haswell for FMA). I'm also not 100% sure Agner's results are right here, since he didn't realize that SnB-family CPUs only micro-fuse indexed addressing modes in the decoders / uop-cache, not in the out-of-order core.
Anyway, this simple not-unrolled-at-all loop is 6 uops, and should run at one iteration per 2 cycles on Intel SnB-family CPUs. Even if it takes an extra uop for partial-flag merging, that's still easily less than the 8 fused-domain uops that can be issued in 2 cycles.
Some minor unrolling could get this close to 1 adc per cycle, since that part is only 4 uops. However, 2 loads and one store per cycle isn't quite sustainable.
Extended-precision multiply and divide are also possible, taking advantage of the widening / narrowing multiply and divide instructions. It's much more complicated, of course, due to the nature of multiplication.
It's not really helpful to use SSE for add-with-carry, or AFAIK any other BigInteger operations.
If you're designing a new instruction-set, you can do BigInteger adds in vector registers if you have the right instructions to efficiently generate and propagate carry. That thread has some back-and-forth discussion on the costs and benefits of supporting carry flags in hardware, vs. having software generate carry-out like MIPS does: compare to detect unsigned wraparound, putting the result in another integer register.
Suppose the result of a multiplication needed 3 times the space (memory) to be stored - where would the processor store that result? How would users of that result, including all pointers to it, know that its size suddenly changed? Changing the size might mean relocating it in memory, because extending the current location would clash with another variable.
This would create a lot of interaction between the processor, OS memory management, and the compiler that would be hard to make both general and efficient.
Managing the memory of application types is not something the processor should do.
I think the main idea behind not including BigInt support in modern processors is the desire to reduce the ISA and leave as few instructions as possible, each of which is fetched, decoded and executed at full throttle.
By the way, in x86-family processors there is a set of instructions that makes writing a BigInt library a single day's work.
Another reason, I think, is price. It's much more efficient to save some space on the wafer by dropping redundant operations that can easily be implemented at a higher level.
It seems Intel is adding (or has added, as of the time of this post in 2015) new instruction support for large integer arithmetic.
New instructions are being introduced on Intel® Architecture Processors to enable fast implementations of large integer arithmetic. Large Integer Arithmetic is widely used in multi-precision libraries for high-performance technical computing, as well as for public key cryptography (e.g., RSA). In this paper, we describe the critical operations required in large integer arithmetic and their efficient implementations using the new instructions.
http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/ia-large-integer-arithmetic-paper.html
There are so many instructions and functionalities jockeying for area on a CPU chip that in the end those that are used more often or deemed more useful will push out those that aren't. The instructions necessary for implementing BigInt functionality are there, and the math is straightforward.
BigInt: the fundamental function required is:
unsigned integer multiplication, adding in the previous high-order word.
I wrote one in Intel 16-bit assembler, then 32-bit...
C code is usually fast enough, i.e. for BigInt you use a software library.
CPUs (and GPUs) are not designed with unsigned integers as a top priority.
If you want to write your own BigInt...
Division is done via Knuth's Vol. 2 (it's a bunch of multiplies and subtracts, with some tricky add-backs).
Add with carry and subtract are easier. Etc., etc.
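As a sketch of that "multiply and add in the previous high-order half" building block, here is roughly what one limb-by-limb pass looks like in C, assuming a compiler with unsigned __int128 (GCC/Clang); the names are made up for illustration:

#include <stdint.h>
#include <stddef.h>

/* dst = src * multiplier, where src/dst are arrays of 'len' 64-bit limbs,
   least-significant limb first.  Each step is a widening multiply plus the
   high half carried in from the previous limb.  Returns the final high limb. */
static uint64_t bigint_mul_limb(uint64_t *dst, const uint64_t *src,
                                size_t len, uint64_t multiplier)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < len; i++) {
        unsigned __int128 p = (unsigned __int128)src[i] * multiplier + carry;
        dst[i] = (uint64_t)p;            /* low 64 bits of the product */
        carry  = (uint64_t)(p >> 64);    /* high 64 bits feed into the next limb */
    }
    return carry;   /* caller stores this as an extra limb if nonzero */
}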
I just posted this in Intel:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
SSE4: is there a BigInt library?
The i5-2410M processor, I suppose, can NOT use AVX [AVX is only on very recent Intel CPUs],
but it can use SSE4.2.
Is there a BigInt library for SSE?
I guess I am looking for something that implements unsigned integer
PMULUDQ (with 128-bit operands)
PMULUDQ __m128i _mm_mul_epu32 ( __m128i a, __m128i b)
and does the carries.
It's a laptop so I can't buy an NVIDIA GTX 550, which isn't so grand on unsigned ints, I hear.
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx