Speed up autovacuum in Postgres - postgresql

I have a question regarding Postgres autovacuum / vacuum settings.
I have a table with 4.5 billion rows and there was a period of time with a lot of updates resulting in ~ 1.5 billion dead tuples. At this point autovacuum was taking a long time (days) to complete.
When looking at the pg_stat_progress_vacuum view I noticed that:
max_dead_tuples = 178956970
resulting in multiple index rescans (index_vacuum_count)
According to the docs, max_dead_tuples is the number of dead tuples that can be stored before an index vacuum cycle is needed, based on maintenance_work_mem.
According to this, one dead tuple requires 6 bytes of space.
So 6 bytes x 178956970 = ~1 GB.
But my settings are:
maintenance_work_mem = 20GB
autovacuum_work_mem = -1
So what am I missing? Why didn't all 1.5 billion dead tuples fit in max_dead_tuples, since 20GB should give enough space, and why were multiple index vacuum cycles necessary?

There is a hard-coded 1GB limit on the memory used to track dead tuples in one VACUUM cycle; see the source:
/*
 * Return the maximum number of dead tuples we can record.
 */
static long
compute_max_dead_tuples(BlockNumber relblocks, bool useindex)
{
    long        maxtuples;
    int         vac_work_mem = IsAutoVacuumWorkerProcess() &&
        autovacuum_work_mem != -1 ?
        autovacuum_work_mem : maintenance_work_mem;

    if (useindex)
    {
        maxtuples = MAXDEADTUPLES(vac_work_mem * 1024L);
        maxtuples = Min(maxtuples, INT_MAX);
        maxtuples = Min(maxtuples, MAXDEADTUPLES(MaxAllocSize));

        /* curious coding here to ensure the multiplication can't overflow */
        if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
            maxtuples = relblocks * LAZY_ALLOC_TUPLES;

        /* stay sane if small maintenance_work_mem */
        maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
    }
    else
        maxtuples = MaxHeapTuplesPerPage;

    return maxtuples;
}
MaxAllocSize is defined in src/include/utils/memutils.h as
#define MaxAllocSize ((Size) 0x3fffffff) /* 1 gigabyte - 1 */
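As a quick back-of-the-envelope check (a sketch only, using the 6-bytes-per-dead-tuple figure from the question), this 1GB allocation cap is where the max_dead_tuples value seen in pg_stat_progress_vacuum comes from:

max_alloc_size = 0x3fffffff          # MaxAllocSize: 1 gigabyte - 1
bytes_per_dead_tuple = 6             # per dead tuple, as noted in the question
print(max_alloc_size // bytes_per_dead_tuple)   # 178956970, matching the observed value

So even with maintenance_work_mem = 20GB, a single pass can only track about 179 million dead tuples, and clearing ~1.5 billion dead tuples therefore requires multiple index vacuum cycles.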
You could lobby on the pgsql-hackers list to increase the limit.


Decay chain simulation - with significantly different time scales

I would like to simulate a decay chain with Python. Normally, (in a loop over all nuclides) one calculates the number of decays per time step and updates the number of mother and daughter nuclei.
My problem is that the decay chain contains half-lives on very different time scales, i.e.
0.0001643 seconds for Po-214 and roughly 5.05e10 seconds (= 1600 years) for Ra-226.
Using the same time step for all nuclides seems useless.
Is there a simulation method, preferably in Python, that can be used to handle this case?
Don't use time steps for this. Use event scheduling.
Half-lives can be expressed as exponential decay, and the conversion between half-life and decay rate is straightforward. Start with the number of both types of nuclei, and schedule exponential inter-event times to figure out when the next decay of each type will occur. For whichever type has the earlier time, decrement the corresponding number of nuclei and schedule the next decay for that type (and, if need be, increment the count of whatever it decays into).
This can easily be generalized to multiple distinct event types by using a priority queue ordered by time of occurrence to determine which event will be the next one performed. This is the underlying principle behind discrete event simulation.
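As a generic skeleton of that idea (just a sketch of the pattern, not the solution itself; the event names are placeholders), the loop repeatedly pops the earliest pending event from a priority queue, advances the clock to that time, handles it, and pushes any follow-on events:

import heapq

event_q = []                                    # priority queue of (time, event_type)
heapq.heappush(event_q, (0.5, "decay_A"))       # placeholder events
heapq.heappush(event_q, (0.2, "decay_B"))

clock = 0.0
while event_q:
    clock, event_type = heapq.heappop(event_q)  # earliest pending event
    # handle the event here: update counts, and if more events of this type
    # remain, heapq.heappush(event_q, (clock + next_interval, event_type))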
Update
This approach works with individual decay events, but we can leverage two important properties when we have exponential inter-event times.
The first is to note that exponentially distributed inter-event times mean these are Poisson processes. The superposition property tells us that the union of two independent Poisson processes, each with rate λ, is a Poisson process with rate 2λ. Simple induction shows that the superposition of n independent Poisson processes with the same rate λ is a Poisson process with rate nλ.
The second property is that the exponential distribution is memoryless. This means that when a Poisson event occurs, we can generate the time to the next event by generating a new exponentially distributed time at the current rate and adding it to the current time.
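Concretely, pooling a population of n identical nuclei gives a Poisson process with n times the per-nuclide rate, so the time to the next decay anywhere in the population can be drawn with 1/n of the per-nuclide mean. A minimal sketch of that single sampling step (the numbers are illustrative):

from numpy.random import default_rng
from math import log

rng = default_rng()
half_life = 0.0001643                   # seconds (Po-214, from the question)
per_nuclide_mean = half_life / log(2)   # mean lifetime of a single nucleus
n = 1_000_000                           # current population size

# superposition of n exponential clocks = one clock with 1/n of the mean
time_to_next_decay = rng.exponential(per_nuclide_mean / n)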
You haven't provided any information about what you want in the way of output, so I arbitrarily decided to print a report showing the time and the current numbers of nuclides whenever one of those numbers was halved. I also printed a report every 10 years, given the long half-life of Ra-226.
I converted half-lives to rates using the link provided at the top of the post, and then to means, since that's what numpy's exponential generator is parameterized by. That's an easy conversion, since means and rates are inverses of each other.
Here's a Python implementation with comments:
from numpy.random import default_rng
from math import log

rng = default_rng()

# This creates a list of entries of quantities that will trigger a report.
# I've chosen to go with successive halvings of the original quantity.
def generate_report_qtys(n0):
    report_qty = []
    divisor = 2
    while divisor < n0:
        report_qty.append(n0 // divisor)  # append next half-life qty to array
        divisor *= 2
    return report_qty

seconds_per_year = 365.25 * 24 * 60 * 60
po_214_half_life = 0.0001643  # seconds
ra_226_half_life = 1590 * seconds_per_year
log_2 = log(2)
po_mean = po_214_half_life / log_2  # per-nuclide mean lifetime for po_214 (1 / decay rate)
ra_mean = ra_226_half_life / log_2  # ditto for ra_226

po_n = po_n0 = 1_000_000_000
ra_n = ra_n0 = 1_000_000_000
time = 0.0

# Generate a report when the following sets of half-lives are reached
po_report_qtys = generate_report_qtys(po_n0)
ra_report_qtys = generate_report_qtys(ra_n0)

# Initialize first event times for each type of event:
# - first entry is polonium next event time
# - second entry is radium next event time
# - third entry is next ten year report time
next_event_time = [
    rng.exponential(po_mean / po_n),
    rng.exponential(ra_mean / ra_n),
    10 * seconds_per_year
]

# Print column labels and initial values
print("time,po_214,ra_226,time_in_years")
print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")

while time < ra_226_half_life:
    # Find the index of the next event time. Index tells us the event type.
    min_index = next_event_time.index(min(next_event_time))
    if min_index == 0:
        po_n -= 1  # decrement polonium count
        time = next_event_time[0]  # update clock to the event time
        if po_n > 0:
            next_event_time[0] += rng.exponential(po_mean / po_n)  # determine next event time for po
        else:
            next_event_time[0] = float('Inf')
        # print report if this is a half-life occurrence
        if len(po_report_qtys) > 0 and po_n == po_report_qtys[0]:
            po_report_qtys.pop(0)  # remove this occurrence from the list
            print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")
    elif min_index == 1:
        # same as above, but for radium
        ra_n -= 1
        time = next_event_time[1]
        if ra_n > 0:
            next_event_time[1] += rng.exponential(ra_mean / ra_n)
        else:
            next_event_time[1] = float('Inf')
        if len(ra_report_qtys) > 0 and ra_n == ra_report_qtys[0]:
            ra_report_qtys.pop(0)
            print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")
    else:
        # update clock, print ten year report
        time = next_event_time[2]
        next_event_time[2] += 10 * seconds_per_year
        print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")
Run times are proportional to the number of nuclides. Running with a billion of each took 831.28s on my M1 MacBook Pro, versus 2.19s for a million of each. I also ported this to Crystal, a compiled Ruby-like language, which produced comparable results in 32 seconds for a billion of each nuclide. I would recommend using a compiled language if you intend to run larger problems, but I will also point out that if you use half-life reporting as I did, the results for smaller population sizes are virtually identical and are obtained much more rapidly.
I would also suggest that if you want to use this approach for a more complex model, you should use a priority queue of tuples containing time and type of event to store the set of pending future events rather than a simple list.
Last but not least, here's some sample output:
time,po_214,ra_226,time_in_years
0.0,1000000000,1000000000,0.0
0.0001642985647308265,500000000,1000000000,5.20630734690935e-12
0.0003286071415481526,250000000,1000000000,1.0412931957694901e-11
0.0004929007624958987,125000000,1000000000,1.5619082645571865e-11
0.0006571750701843468,62500000,1000000000,2.082462133319222e-11
0.0008214861652253772,31250000,1000000000,2.6031325741671646e-11
0.0009858208114474198,15625000,1000000000,3.1238776442043114e-11
0.0011502417677631668,7812500,1000000000,3.6448962144243124e-11
0.0013145712145548718,3906250,1000000000,4.165624808460947e-11
0.0014788866075394896,1953125,1000000000,4.686308868670272e-11
0.0016432124609700412,976562,1000000000,5.2070260760325286e-11
0.001807832817519779,488281,1000000000,5.728676507465013e-11
0.001972981254301889,244140,1000000000,6.252000324175124e-11
0.0021372947080755688,122070,1000000000,6.772678239395799e-11
0.002301139510796509,61035,1000000000,7.29187108904514e-11
0.0024642826956509244,30517,1000000000,7.808840645837847e-11
0.0026302282280720344,15258,1000000000,8.33469030620844e-11
0.0027944471221414947,7629,1000000000,8.855068579808016e-11
0.002954014120737834,3814,1000000000,9.3607058861822e-11
0.0031188370035748177,1907,1000000000,9.882998084692174e-11
0.003282466175503322,953,1000000000,1.0401507641592902e-10
0.003457552492113242,476,1000000000,1.0956322699169905e-10
0.003601851131916978,238,1000000000,1.1413577496124477e-10
0.0037747824699194033,119,1000000000,1.1961563838566314e-10
0.0039512825256332275,59,1000000000,1.252085876503038e-10
0.004124330529803301,29,1000000000,1.3069214800248755e-10
0.004337121375518753,14,1000000000,1.3743508300754027e-10
0.004535068261934763,7,1000000000,1.437076413268044e-10
0.004890820999020369,3,1000000000,1.5498076529965425e-10
0.004909065046898487,1,1000000000,1.555588842908994e-10
315576000.0,0,995654793,10.0
631152000.0,0,991322602,20.0
946728000.0,0,987010839,30.0
1262304000.0,0,982711723,40.0
1577880000.0,0,978442651,50.0
1893456000.0,0,974185269,60.0
2209032000.0,0,969948418,70.0
2524608000.0,0,965726762,80.0
2840184000.0,0,961524848,90.0
3155760000.0,0,957342148,100.0
3471336000.0,0,953178898,110.0
3786912000.0,0,949029294,120.0
4102488000.0,0,944898063,130.0
4418064000.0,0,940790494,140.0
4733640000.0,0,936699123,150.0
5049216000.0,0,932622334,160.0
5364792000.0,0,928565676,170.0
5680368000.0,0,924523267,180.0
5995944000.0,0,920499586,190.0
6311520000.0,0,916497996,200.0
6627096000.0,0,912511030,210.0
6942672000.0,0,908543175,220.0
7258248000.0,0,904590364,230.0
7573824000.0,0,900656301,240.0
7889400000.0,0,896738632,250.0
8204976000.0,0,892838664,260.0
8520552000.0,0,888956681,270.0
8836128000.0,0,885084855,280.0
9151704000.0,0,881232862,290.0
9467280000.0,0,877401861,300.0
9782856000.0,0,873581425,310.0
10098432000.0,0,869785364,320.0
10414008000.0,0,866002042,330.0
10729584000.0,0,862234212,340.0
11045160000.0,0,858485627,350.0
11360736000.0,0,854749939,360.0
11676312000.0,0,851032010,370.0
11991888000.0,0,847329028,380.0
12307464000.0,0,843640016,390.0
12623040000.0,0,839968529,400.0
12938616000.0,0,836314000,410.0
13254192000.0,0,832673999,420.0
13569768000.0,0,829054753,430.0
13885344000.0,0,825450233,440.0
14200920000.0,0,821859757,450.0
14516496000.0,0,818284787,460.0
14832072000.0,0,814727148,470.0
15147648000.0,0,811184419,480.0
15463224000.0,0,807655470,490.0
15778800000.0,0,804139970,500.0
16094376000.0,0,800643280,510.0
16409952000.0,0,797159389,520.0
16725528000.0,0,793692735,530.0
17041104000.0,0,790239221,540.0
17356680000.0,0,786802135,550.0
17672256000.0,0,783380326,560.0
17987832000.0,0,779970864,570.0
18303408000.0,0,776576174,580.0
18618984000.0,0,773197955,590.0
18934560000.0,0,769836170,600.0
19250136000.0,0,766488931,610.0
19565712000.0,0,763154778,620.0
19881288000.0,0,759831742,630.0
20196864000.0,0,756528400,640.0
20512440000.0,0,753237814,650.0
20828016000.0,0,749961747,660.0
21143592000.0,0,746699940,670.0
21459168000.0,0,743450395,680.0
21774744000.0,0,740219531,690.0
22090320000.0,0,736999181,700.0
22405896000.0,0,733793266,710.0
22721472000.0,0,730602000,720.0
23037048000.0,0,727427544,730.0
23352624000.0,0,724260327,740.0
23668200000.0,0,721110260,750.0
23983776000.0,0,717973915,760.0
24299352000.0,0,714851218,770.0
24614928000.0,0,711740161,780.0
24930504000.0,0,708645945,790.0
25246080000.0,0,705559170,800.0
25561656000.0,0,702490991,810.0
25877232000.0,0,699436919,820.0
26192808000.0,0,696394898,830.0
26508384000.0,0,693364883,840.0
26823960000.0,0,690348242,850.0
27139536000.0,0,687345934,860.0
27455112000.0,0,684354989,870.0
27770688000.0,0,681379178,880.0
28086264000.0,0,678414567,890.0
28401840000.0,0,675461363,900.0
28717416000.0,0,672522494,910.0
29032992000.0,0,669598412,920.0
29348568000.0,0,666687807,930.0
29664144000.0,0,663787671,940.0
29979720000.0,0,660901676,950.0
30295296000.0,0,658027332,960.0
30610872000.0,0,655164886,970.0
30926448000.0,0,652315268,980.0
31242024000.0,0,649481821,990.0
31557600000.0,0,646656096,1000.0
31873176000.0,0,643841377,1010.0
32188752000.0,0,641041609,1020.0
32504328000.0,0,638253759,1030.0
32819904000.0,0,635479981,1040.0
33135480000.0,0,632713706,1050.0
33451056000.0,0,629962868,1060.0
33766632000.0,0,627223350,1070.0
34082208000.0,0,624494821,1080.0
34397784000.0,0,621778045,1090.0
34713360000.0,0,619076414,1100.0
35028936000.0,0,616384399,1110.0
35344512000.0,0,613702920,1120.0
35660088000.0,0,611035112,1130.0
35975664000.0,0,608376650,1140.0
36291240000.0,0,605729994,1150.0
36606816000.0,0,603093946,1160.0
36922392000.0,0,600469403,1170.0
37237968000.0,0,597854872,1180.0
37553544000.0,0,595254881,1190.0
37869120000.0,0,592663681,1200.0
38184696000.0,0,590085028,1210.0
38500272000.0,0,587517782,1220.0
38815848000.0,0,584961743,1230.0
39131424000.0,0,582420312,1240.0
39447000000.0,0,579886455,1250.0
39762576000.0,0,577362514,1260.0
40078152000.0,0,574849251,1270.0
40393728000.0,0,572346625,1280.0
40709304000.0,0,569856166,1290.0
41024880000.0,0,567377753,1300.0
41340456000.0,0,564908008,1310.0
41656032000.0,0,562450828,1320.0
41971608000.0,0,560005832,1330.0
42287184000.0,0,557570018,1340.0
42602760000.0,0,555143734,1350.0
42918336000.0,0,552729893,1360.0
43233912000.0,0,550326162,1370.0
43549488000.0,0,547932312,1380.0
43865064000.0,0,545550017,1390.0
44180640000.0,0,543178924,1400.0
44496216000.0,0,540814950,1410.0
44811792000.0,0,538462704,1420.0
45127368000.0,0,536123339,1430.0
45442944000.0,0,533792776,1440.0
45758520000.0,0,531469163,1450.0
46074096000.0,0,529157093,1460.0
46389672000.0,0,526854383,1470.0
46705248000.0,0,524564196,1480.0
47020824000.0,0,522282564,1490.0
47336400000.0,0,520011985,1500.0
47651976000.0,0,517751635,1510.0
47967552000.0,0,515499791,1520.0
48283128000.0,0,513257373,1530.0
48598704000.0,0,511022885,1540.0
48914280000.0,0,508798440,1550.0
49229856000.0,0,506582663,1560.0
49545432000.0,0,504379227,1570.0
49861008000.0,0,502186693,1580.0
50176584000.0,0,500000869,1590.0
Expanded for More than 2 Nuclides
I mentioned that for more than a couple of nuclides you'd want to use a priority queue to track which decays occur next. I reorganized the code around functions, which allows greater flexibility in expanding the scope of the problem. Here you go:
#!/usr/bin/env python3
from numpy.random import default_rng
from math import log
import heapq

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60
LOG_2 = log(2)

rng = default_rng()

def generate_report_qtys(n0):
    report_qty = []
    divisor = 2
    while divisor < n0:
        report_qty.append(n0 // divisor)  # append next half-life qty to array
        divisor *= 2
    return report_qty

po_n0 = 10_000_000
ra_n0 = 10_000_000
mu_n0 = 10_000_000

# mean is half-life / LOG_2
properties = dict(
    po_214 = dict(
        mean = 0.0001643 / LOG_2,
        qty = po_n0,
        report_qtys = generate_report_qtys(po_n0)
    ),
    ra_226 = dict(
        mean = 1590 * SECONDS_PER_YEAR / LOG_2,
        qty = ra_n0,
        report_qtys = generate_report_qtys(ra_n0)
    ),
    made_up = dict(
        mean = 75 * SECONDS_PER_YEAR / LOG_2,
        qty = mu_n0,
        report_qtys = generate_report_qtys(mu_n0)
    )
)

nuclide_names = [name for name in properties.keys()]

def population_mean(nuclide):
    return properties[nuclide]['mean'] / properties[nuclide]['qty']

def report():  # isolate as single point of maintenance even though it's a one-liner
    nuc_qtys = [str(properties[nuclide]['qty']) for nuclide in nuclide_names]
    print(f"{time},{time / SECONDS_PER_YEAR}," + ','.join(nuc_qtys))

def decay_event(nuclide):
    properties[nuclide]['qty'] -= 1
    current_qty = properties[nuclide]['qty']
    if current_qty > 0:
        heapq.heappush(event_q, (time + rng.exponential(population_mean(nuclide)), nuclide))
    rep_qty = properties[nuclide]['report_qtys']
    if len(rep_qty) > 0 and current_qty == rep_qty[0]:
        rep_qty.pop(0)  # remove this occurrence from the list
        report()

def report_event():
    heapq.heappush(event_q, (time + 10 * SECONDS_PER_YEAR, 'report_event'))
    report()

event_q = [(rng.exponential(population_mean(nuclide)), nuclide) for nuclide in nuclide_names]
event_q.append((0.0, "report_event"))
heapq.heapify(event_q)

time = 0.0  # simulated time

print("time(seconds),time(years)," + ','.join(nuclide_names))  # column labels

while time < 1600 * SECONDS_PER_YEAR:
    time, event_id = heapq.heappop(event_q)
    if event_id == 'report_event':
        report_event()
    else:
        decay_event(event_id)
To add more nuclides, add more entries to the properties dictionary, following the template of the current entries.
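For example, a hypothetical fourth entry could be added inside properties = dict(...) like this (the name and the 22-year half-life are placeholders, not values from the question); since nuclide_names, the report header, and the initial event queue are all derived from properties, nothing else needs to change:

    made_up_2 = dict(
        mean = 22 * SECONDS_PER_YEAR / LOG_2,   # placeholder half-life of 22 years
        qty = 10_000_000,
        report_qtys = generate_report_qtys(10_000_000)
    ),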

how to reduce wait time in perl

I have a script that queries a database a variable number of times per second.
For example, to achieve 36,000 queries per hour we input 600 queries per minute into our script: 600 x 60 = 36,000.
This is the output we get; you can see the delay between each query:
{1} [2019-11-06 21:38:01.313]
{1} [2019-11-06 21:38:01.413]
{1} [2019-11-06 21:38:01.513]
{1} [2019-11-06 21:38:01.613]
{1} [2019-11-06 21:38:01.713]
{1} [2019-11-06 21:38:01.813]
My problem is that we are missing out on that 0.0100 because we have a wait time in place.
Rates per minute vary; we can change this to a max of 960 queries per minute, but we would want a formula that is flexible for 0-960.
my $wait_time = (1 / $rpm) * 60 * 1; # the last factor is the number of connections (max of 4); wait time increases with the number of connections
Does anyone know how to reduce the wait time between queries?
Thanks
This is the code line:
my $wait_time = (1 / $rpm) * 60 * 1;
So when I enter 600 queries per minute, this line calculates the wait time based on the number of connections we have:
my $wait_time = (1 / 600) * 60 * 1;
1/600 * 60 * 1 = WAIT: 0.1
Well, your processing of the query needs time. A fragile solution, if I interpret your problem right, is to measure the time the current processing takes and subtract that from the next sleep time. Of course, that would break if the processing time equals or exceeds the sleep time.
A clean solution would be to have a dedicated main loop that does nothing but sleeping and firing off queries in separate threads.
I'm not sure if this will help because I have a very hard time understanding your question. I think you are concerned that you aren't making queries at the rate you desire.
It could be because you think of the wait time as static. It's not the wait time that's static (that depends on how long the previous query took); it's the interval that's static.
use Time::HiRes qw( time sleep );  # Add support for fractional times.

my $interval = (1 / $qpm) * 60 * 1;  # In (fractional) seconds.

my $next_run = time;
while (1) {
    my $wait = $next_run - time;
    sleep($wait) if $wait > 0;

    $next_run += $interval;

    ... do work ...
}

I have a query related to Computer Architecture

20% of the total instructions in an application are multiplies. A new processor cuts the CPI for multiplies from 10 to 5 but increases the cycle time by 20%. All other instructions take 1 cycle each to process.
What is the speedup of the new processor compared to the old one?
Compare the processors by the average time per instruction: weight each instruction class's CPI by its fraction of the mix, then multiply by the cycle time:

Avg_time_per_instruction = (SIGMA(fraction_i * CPI_i)) * cycle_time

For the old processor (taking its cycle time as 1 unit):
T_OLD = (10*0.2 + 1*0.8) * 1 = 2.8
Adjusting for the new CPI and the 20% longer cycle time:
T_NEW = (5*0.2 + 1*0.8) * 1.2 = 2.16
Speedup = T_OLD / T_NEW = 2.8 / 2.16 ≈ 1.3
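As a quick sanity check, here is a tiny script that reproduces the arithmetic (a sketch only; the instruction mix, CPIs, and 20% cycle-time penalty are the values given in the question):

frac_mul, frac_other = 0.20, 0.80              # instruction mix
cpi_mul_old, cpi_mul_new, cpi_other = 10, 5, 1
cycle_old, cycle_new = 1.0, 1.2                # new clock is 20% slower

t_old = (frac_mul * cpi_mul_old + frac_other * cpi_other) * cycle_old  # 2.8
t_new = (frac_mul * cpi_mul_new + frac_other * cpi_other) * cycle_new  # 2.16
print(t_old / t_new)                           # ~1.296, i.e. a speedup of about 1.3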

servicestack.redis wrapper poor performance

We are trying to store some big buffers (8MB each) in Redis using the ServiceStack wrapper. We use the RedisNativeClient.Set(string key, byte[] value) API to set the buffers.
Both client and server reside on the same machine.
Persistence is disabled.
We are currently using the evaluation version of ServiceStack.
The problem is that we get very poor performance - around 60 MB/sec.
Using a different C# wrapper ("Sider"), we get better performance (~400 MB/sec).
The code I used for my measurements:
public void SimpleTest()
{
    Stopwatch sw;
    long ms1, ms2, interval;
    int nBytesHandled = 0;
    int nBlockSizeBytes = 8000000;
    int nMaxIterations = 5;
    byte[] pBuffer = new byte[(int)(nBlockSizeBytes)];

    // Create Redis Wrapper
    ServiceStack.Redis.RedisNativeClient m_serviceStackRedisClient = new ServiceStack.Redis.RedisNativeClient();
    // Clear DB
    m_serviceStackRedisClient.FlushAll();

    sw = Stopwatch.StartNew();
    ms1 = sw.ElapsedMilliseconds;
    for (int i = 0; i < nMaxIterations; i++)
    {
        m_serviceStackRedisClient.Set("eitan" + i.ToString(), pBuffer);
        nBytesHandled += nBlockSizeBytes;
    }
    ms2 = sw.ElapsedMilliseconds;
    interval = ms2 - ms1;

    // Calculate rate
    double dMBPerSEc = nBytesHandled / 1024.0 / 1024.0 / ((double)interval / 1000.0);
    Console.WriteLine("Rate {0:N4}", dMBPerSEc);
}
What could the problem be ?
Thanks,
Eitan.
ServiceStack.Redis uses a reusable Buffer Pool to reduce memory pressure by reusing a pool of byte buffers. The size of the default byte[] buffer is 1450 bytes, to fit within the Ethernet MTU packet size. Whilst this default configuration is optimal for the normal use-case of smaller payloads (<100k), it looks like it ends up being slower for larger payloads (>1MB).
Based on this, the ServiceStack.Redis client has now been modified so that it no longer uses the buffer pool for payloads larger than 500k, which is now configurable with RedisConfig.BufferPoolMaxSize, e.g.:
RedisConfig.BufferPoolMaxSize = 500000;
The default 1450 byte size of the byte[] buffer is now also configurable with:
RedisConfig.BufferLength = 1450;
This change now improves the throughput performance of ServiceStack.Redis for larger payloads, as seen in the RedisBenchmarkTests suite, which uses your benchmark with different payload sizes, e.g.:
public void Run(string name, int nBlockSizeBytes, Action<int,byte[]> fn)
{
    Stopwatch sw;
    long ms1, ms2, interval;
    int nBytesHandled = 0;
    int nMaxIterations = 5;
    byte[] pBuffer = new byte[nBlockSizeBytes];

    // Create Redis Wrapper
    var redis = new RedisNativeClient();
    // Clear DB
    redis.FlushAll();

    sw = Stopwatch.StartNew();
    ms1 = sw.ElapsedMilliseconds;
    for (int i = 0; i < nMaxIterations; i++)
    {
        fn(i, pBuffer);
        nBytesHandled += nBlockSizeBytes;
    }
    ms2 = sw.ElapsedMilliseconds;
    interval = ms2 - ms1;

    // Calculate rate
    double dMBPerSEc = nBytesHandled / 1024.0 / 1024.0 / (interval / 1000.0);
    Console.WriteLine(name + ": Rate {0:N4}, Total: {1}ms", dMBPerSEc, ms2);
}
Results running from my MacBook Pro and redis-server running in an Ubuntu VirtualBox VM:
1K Results:
ServiceStack.Redis 1K: Rate 4.7684, Total: 1ms
Sider 1K: Rate 0.4768, Total: 10ms
10K Results:
ServiceStack.Redis 10K: Rate 47.6837, Total: 1ms
Sider 10K: Rate 4.3349, Total: 11ms
100K Results:
ServiceStack.Redis 100K: Rate 26.4910, Total: 18ms
Sider 100K: Rate 20.7321, Total: 23ms
1MB Results:
ServiceStack.Redis 1MB: Rate 103.6603, Total: 46ms
Sider 1MB: Rate 70.1231, Total: 68ms
8MB Results:
ServiceStack.Redis 8MB: Rate 77.0646, Total: 495ms
Sider 8MB: Rate 84.3960, Total: 452ms
The performance of ServiceStack.Redis is now faster for smaller payloads and much closer to Sider for the larger 8MB payloads.
This change is available from v4.0.41+, which is now available on MyGet.

Calculating size of the page table

Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the approximate size of the page table?
My Solution:
Number of pages in physical memory = (size of physical memory)/(size of page)
= 64 * 2^10 / 4
= 2^14
Number of pages in virtual memory = (size of virtual memory)/(size of page)
size of virtual memory = 2^32 bits
= 2^29 bytes
= 2^19 kBytes
Number of pages in virtual memory = 2^19/4 = 2^17
=> Number of entries in page table = 2^17
Size of each entry = 17+14 =31 bits
Size of page table = 31 * 2^17 bits
= 31 * 2^14 bytes
= 31 * 2^4 KB
= 31*16
= 496 KB
But the answer is 8 MB. Why?
8MB cannot be the answer:
Physical Address Space = 64MB = 2^26B
Virtual Address = 32-bits, ∴ Virtual Address Space = 2^32B
Page Size = 4KB = 2^12B
Number of pages = 2^32/2^12 = 2^20 pages.
Number of frames = 2^26/2^12 = 2^14 frames.
∴ Page Table Size = 2^20×14-bits ≈ 2^20×16-bits ≈ 2^20×2B = 2MB.
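The same arithmetic as a short script (a sketch, assuming the flat single-level table and 2-byte entries used above):

page_size = 4 * 1024               # 4 KB
virtual_space = 2 ** 32            # bytes in a 32-bit virtual address space
physical_memory = 64 * 2 ** 20     # 64 MB

num_pages = virtual_space // page_size      # 2**20 entries in the page table
num_frames = physical_memory // page_size   # 2**14 frames -> 14-bit frame number
entry_bytes = 2                             # 14 bits rounded up to 2 bytes
print(num_pages * entry_bytes)              # 2097152 bytes = 2 MB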
The question has been asked before. However, there is not sufficient information in the question to determine the size of the page table:
It does not specify the size of the page table entries.
It does not specify the number of pages mapped to the process address space.
It does not specify the division between the process and system address space, i.e. how much of the 32-bit range belongs to the process.
It does not specify whether this is a per-process or a system-wide page table.