Calculating size of the page table - operating-system

Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the approximate size of the page table?
My Solution:
Number of pages in physical memory = (size of physical memory)/(size of page)
= 64 * 2^10 / 4
= 2^14
Number of pages in virtual memory = (size of virtual memory)/(size of page)
size of virtual memory = 2^32 bits
= 2^29 bytes
= 2^19 kBytes
Number of pages in virtual memory = 2^19/4 = 2^17
=> Number of entries in page table = 2^17
Size of each entry = 17 + 14 = 31 bits
Size of page table = 31 * 2^17 bits
= 31 * 2^14 bytes
= 31 * 2^4 KB
= 31*16
= 496 KB
But the answer is 8 MB. Why?

8 MB cannot be the answer:
Physical Address Space = 64MB = 2^26B
Virtual Address = 32-bits, ∴ Virtual Address Space = 2^32B
Page Size = 4KB = 2^12B
Number of pages = 2^32/2^12 = 2^20 pages.
Number of frames = 2^26/2^12 = 2^14 frames.
∴ Page Table Size = 2^20×14-bits ≈ 2^20×16-bits ≈ 2^20×2B = 2MB.
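For reference, the same arithmetic as a quick Python sketch (rounding the 14-bit frame number up to a 2-byte entry is an assumption, not something stated in the question):

# Page table size for a 32-bit virtual address space, 4 KB pages, 64 MB RAM.
virtual_space = 2**32            # bytes of virtual address space
physical_mem  = 64 * 2**20       # bytes of physical memory
page_size     = 4 * 2**10        # bytes per page

pages  = virtual_space // page_size     # 2^20 page table entries
frames = physical_mem // page_size      # 2^14 frames -> 14-bit frame number
entry_bytes = 2                         # 14 bits rounded up to 2 bytes (assumption)

print(pages, frames)                          # 1048576 16384
print(pages * entry_bytes // 2**20, "MB")     # 2 MB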

The question has been asked before. However, there is not sufficient information in the question to determine the size of the page table.
It does not specify the size of the page table entries.
It does not specify the number of pages mapped to the process address space.
It does not specify the division between the process and system address space, i.e., how much of the 32 bits belongs to the process address space.
It does not specify whether this is a process or system table.
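To see how much the unstated entry size matters, here is a small Python sketch with a few hypothetical entry sizes; an 8 MB table would simply correspond to assuming 8-byte entries:

# The number of entries is fixed at 2^20; only the assumed entry size changes.
pages = 2**32 // (4 * 2**10)     # 2^20 entries

for entry_bytes in (2, 4, 8):    # hypothetical page-table-entry sizes
    print(entry_bytes, "byte entries ->", pages * entry_bytes // 2**20, "MB page table")
# 2 byte entries -> 2 MB page table
# 4 byte entries -> 4 MB page table
# 8 byte entries -> 8 MB page table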

Related

Speed up autovacuum in Postgres

I have a question regarding Postgres autovacuum / vacuum settings.
I have a table with 4.5 billion rows and there was a period of time with a lot of updates resulting in ~ 1.5 billion dead tuples. At this point autovacuum was taking a long time (days) to complete.
When looking at the pg_stat_progress_vacuum view I noticed that:
max_dead_tuples = 178956970
resulting in multiple index rescans (index_vacuum_count)
According to the docs, max_dead_tuples is the number of dead tuples that we can store before needing to perform an index vacuum cycle, based on maintenance_work_mem.
According to this, one dead tuple requires 6 bytes of space.
So 6 B × 178956970 ≈ 1 GB
But my settings are
maintenance_work_mem = 20GB
autovacuum_work_mem = -1
So what am I missing? Why didn't all 1.5 billion dead tuples fit within max_dead_tuples, since 20 GB should give enough space, and why were multiple index vacuum cycles necessary?
There is a hard-coded limit of 1 GB on the memory used to store dead tuples in one VACUUM cycle; see the source:
/*
 * Return the maximum number of dead tuples we can record.
 */
static long
compute_max_dead_tuples(BlockNumber relblocks, bool useindex)
{
    long        maxtuples;
    int         vac_work_mem = IsAutoVacuumWorkerProcess() &&
        autovacuum_work_mem != -1 ?
        autovacuum_work_mem : maintenance_work_mem;

    if (useindex)
    {
        maxtuples = MAXDEADTUPLES(vac_work_mem * 1024L);
        maxtuples = Min(maxtuples, INT_MAX);
        maxtuples = Min(maxtuples, MAXDEADTUPLES(MaxAllocSize));

        /* curious coding here to ensure the multiplication can't overflow */
        if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
            maxtuples = relblocks * LAZY_ALLOC_TUPLES;

        /* stay sane if small maintenance_work_mem */
        maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
    }
    else
        maxtuples = MaxHeapTuplesPerPage;

    return maxtuples;
}
MaxAllocSize is defined in src/include/utils/memutils.h as
#define MaxAllocSize ((Size) 0x3fffffff) /* 1 gigabyte - 1 */
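Dividing that cap by the 6 bytes each dead-tuple TID occupies lands on the number you saw in pg_stat_progress_vacuum (a rough check in Python; it ignores the few bytes of array header, so treat it as an approximation):

# MaxAllocSize caps the dead-tuple TID array at just under 1 GB,
# and each dead tuple is stored as a 6-byte ItemPointerData.
max_alloc_size = 0x3fffffff         # 1 gigabyte - 1
tid_size = 6                        # bytes per ItemPointerData

print(max_alloc_size // tid_size)   # 178956970 -- matches the observed max_dead_tuples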
You could lobby on the pgsql-hackers list to increase the limit.

How to get desired values from BLE manufacturer data in Flutter

I am new to flutter and I am working on an app that reads data from a BLE beacon. I have scanned the device and got the manufacturer data as {256:[0,0,0,16,1,57,33,18,0,0,154,10,0,0,94,0]}
The device manufacturer told me to decode the device data like:
KCBAdvDataManufacturerData = <.. .. .. .. .. .. .. .. be 21 01 00 50 08 00 00 5e ..>
The UUID - kCBAdvDataManufacturerData packet contains the sensor data as shown below:
Byte index 8 – 11 = Pressure 32bit value
Byte index 12 – 15 = Temperature 32bit value
Byte index 16 = Battery level in percentage
I have no idea how, in Dart, I can get from
{256:[0,0,0,16,1,57,33,18,0,0,154,10,0,0,94,0]}
to
Byte index 8 – 11 = Pressure 32bit value
Byte index 12 – 15 = Temperature 32bit value
Byte index 16 = Battery level in percentage
and then convert those into a human-understandable form; here temperature is in °C, pressure in PSI and battery in %.
The manufacturer data is a list of bytes, so you can take sublists of it with https://api.dart.dev/stable/2.9.1/dart-core/List/sublist.html
Each 4-byte sublist can then be converted into a signed integer for the pressure and temperature values:
Convert 4 byte into a signed integer
I am not sure the index values you have been given are quite right. I am assuming the data is in little-endian format, so my reading of the data is:
Pressure = [33,18,0,0] = 4641 (Are you expecting a value of about 46.41psi?)
Temperature = [154,10,0,0] = 2714 (Are you expecting a value of about 27.14c?)
Battery = [94] = 94 (Are you expecting a value of 94%?)
This might be done like the following:
import 'dart:typed_data';

var manufacturerData =
    Uint8List.fromList([0, 0, 0, 16, 1, 57, 33, 18, 0, 0, 154, 10, 0, 0, 94, 0]);
var pressure = ByteData.sublistView(manufacturerData, 6, 10);      // bytes 6-9
var temperature = ByteData.sublistView(manufacturerData, 10, 14);  // bytes 10-13
var battery = ByteData.sublistView(manufacturerData, 14, 15);      // byte 14

main() {
  // Multi-byte values are read as little-endian 32-bit integers.
  print(pressure.getUint32(0, Endian.little));
  print(temperature.getUint32(0, Endian.little));
  print(battery.getUint8(0));
}
Gives me the output:
4641
2714
94

Packet size in GameSparks Realtime

My packet in GameSparks contains:
Two Vector2s: 8 bytes × 2 = 16 bytes
Key values with a Vector2 = 8 bytes
peerId present in packet = 4 bytes
opCode present in packet = 4 bytes
Total = 32 bytes.
However my packet size is a little bit bigger than this. Am I missing something in the packet that I should account for?

I have a query related to Computer Architecture

20% of the total instructions in an application are multiplies. A new processor cuts the CPI for multiplies from 10 to 5 but increases the cycle time by 20%. All other instructions take 1 cycle each to process.
What is the speedup of the new processor compared to the old one?
To compute the average time per instruction:
Time per instruction = (Σ CPI_i × fraction_i) × cycle time
For the old processor (taking the old cycle time as 1):
T_OLD = (10*0.2 + 1*0.8) * 1 = 2.8
Adjusting for the new CPI and the 20% longer cycle:
T_NEW = (5*0.2 + 1*0.8) * 1.2 = 2.16
Speedup = T_OLD / T_NEW = 2.8 / 2.16 ≈ 1.3
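A quick Python check of the same numbers, which also makes the direction of the speedup ratio explicit:

# Average time per instruction, in units of the old cycle time.
t_old = (0.2 * 10 + 0.8 * 1) * 1.0    # old CPI, old cycle time         -> 2.8
t_new = (0.2 * 5 + 0.8 * 1) * 1.2     # new CPI, 20% longer cycle time  -> 2.16

print(t_old / t_new)                  # ~1.30, i.e. the new processor is about 1.3x faster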

Logical Address in 2 level paging

Consider a system using 2-level paging. The page table is divided into 2K pages, each of size 4 KW. The page table entry size is 2 W. The PAS is 64 MW, divided into 16K frames, and memory is word addressable. Calculate the length of the Logical Address (LA), Physical Address (PA), Outer Page Table Size (OPTS) and Inner Page Table Size (IPTS).
What I did -
PAS = 64 MW = 2^26 W
Thus, PA = 26 bits.
LAS = Page Size × No. of Pages × Page Table Entry Size
= 4 KW × 2K × 2W
= 2^23
Thus LA = 23 bits.
The answers are as follows:
1. LA = 35 bits
2. PA = 26 bits
3. OPTS = 4 KW
4. IPTS = 8 KW
I can't make out how LA becomes 35 bits instead of the 23 bits I calculated. How is LA distributed in terms of P1, P2 & d? Can someone help me?
Size of page = 4 KW = 2^12 W. This means the offset (d) is 12 bits.
Let us assume that the LAS (logical address space) consists of 2^x pages in total. Each page needs one page table entry of 2 W, and because it is 2-level paging the resulting page table is itself split into pages of 4 KW, giving
((2^x) * 2) / (size of 1 page) = 2K pages
This means that 2^(x + 1 - 12) = 2^11, therefore x = 22. Hence the logical address is 22 + 12 = 34 bits.
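Here is the derivation above as a short Python sketch (word-addressed units assumed), which also reproduces the PA and OPTS figures quoted in the answer key:

KW = 2**10                       # one kiloword
page_size  = 4 * KW              # 2^12 words -> offset d is 12 bits
entry_size = 2                   # words per page table entry
pt_pages   = 2 * KW              # the page table is split into 2K pages
frames     = 16 * KW             # 16K frames of physical memory

entries = pt_pages * page_size // entry_size    # 2^22 pages in the LAS
la_bits = (entries.bit_length() - 1) + 12       # 22 + 12 = 34 bits
pa_bits = (frames.bit_length() - 1) + 12        # 14 + 12 = 26 bits
opts_kw = pt_pages * entry_size // KW           # one outer entry per inner page -> 4 KW

print(la_bits, pa_bits, opts_kw)                # 34 26 4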