Play back historical data with Akka Streams - Scala

Is it possible to emit data according to a defined clock with Akka Streams? Or do they just emit (ignoring backpressure) as fast as their data arrive? I'm particularly wondering if it's possible to play back historical data with a "mocked" clock, perhaps somehow using Source.tick.

It depends on what you mean by "defined clock".
Actual Wall Time
As you mentioned, Source.tick is one possibility for getting a clock of actual times from the system clock. The problem is that the Sink may signal demand at a rate slower than the interval at which the Source generates ticks. For example, your Sink may only signal demand once every minute while your Source.tick interval is 10 seconds. In that case the 5 intermediate ticks will be dropped; from the documentation:
If a consumer has not requested any elements at the point in time when
the tick element is produced it will not receive that tick element
later.
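A minimal sketch of that drop behavior (assuming Akka 2.6+, where an implicit ActorSystem provides the materializer; the blocking sleep is only there to simulate a slow consumer):

import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("tick-demo")

// The Source ticks every 10 seconds, but the Sink only signals demand about
// once per minute, so the intermediate ticks are dropped, not buffered.
Source.tick(0.seconds, 10.seconds, "tick")
  .runWith(Sink.foreach { t =>
    Thread.sleep(60 * 1000) // stand-in for any consumer slower than the tick interval
    println(t)
  })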
Simulated Time
It is always possible to simulate time using a Source.
We can first create a function that will simulate a clock using a start time, end time, and interval:
type MillisFromEpoch = Long
type MillisInterval = Long

val clock : (MillisFromEpoch, MillisFromEpoch, MillisInterval) => () => Iterator[MillisFromEpoch] =
  (startTime, stopTime, interval) => () => new Iterator[MillisFromEpoch] {
    var currentTime = startTime

    override def hasNext : Boolean = currentTime < stopTime

    override def next() : MillisFromEpoch = {
      val returnMillis = currentTime
      currentTime += interval
      returnMillis
    }
  }
This clock can now feed a Source. As an example, we can create a clock that starts at the Unix epoch and increments by 1 second until the end of time:
val epoch : MillisFromEpoch = 0L
val second : MillisInterval = 1000L
val simulatedClockFromEpochSource : Source[MillisFromEpoch, _] =
  Source fromIterator clock(epoch, Long.MaxValue, 1 * second)
Or we can create a clock that starts now and ends in 60 seconds, incrementing in 5-second intervals:
val now : MillisFromEpoch = System.currentTimeMillis()
val simulatedClockFromNowSource : Source[MillisFromEpoch, _] =
  Source fromIterator clock(now, now + 60 * second, 5 * second)
Sampling Frequency
There is a way to use Source.tick even when the downstream consumer is slower than the tick interval specified at the Source. We can create a filtering Flow that constantly signals demand to the Source but only passes through times that are at least a defined increment apart.
We can start with a function that does the tracking of the time interval with an internal variable:
val frequencySample : (MillisInterval) => (MillisFromEpoch) => Boolean =
  (interval) => {
    var lastValidTime : MillisFromEpoch = -1

    (timeToCheck) => {
      if (lastValidTime < 0 || timeToCheck >= lastValidTime + interval) {
        lastValidTime = timeToCheck
        true
      }
      else {
        false
      }
    }
  }
And now this function can be used to create the Flow:
val frequencySampleFlow : (MillisInterval) => Flow[MillisFromEpoch, MillisFromEpoch, _] =
  (frequency) => Flow[MillisFromEpoch] filter frequencySample(frequency)
Now we can attach a Flow with a slow frequency (e.g. every 10 seconds) to a Source with a higher frequency (e.g. every 1 second):
val slowFrequency : MillisInterval = 10 * second

//simulatedClockFromEpochSource ticks every 1 second
//frequencySampleFlow only passes every 10 second tick through
val slowSource =
  simulatedClockFromEpochSource via frequencySampleFlow(slowFrequency)
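Tying this back to the original question of playing back historical data: the simulated clock can drive lookups into your historical store. A hypothetical sketch, where HistoricalEvent and fetchEventsAt(t) : List[HistoricalEvent] are placeholders for your own record type and lookup function:

val playback : Source[HistoricalEvent, _] =
  simulatedClockFromEpochSource
    .via(frequencySampleFlow(slowFrequency))
    .mapConcat(t => fetchEventsAt(t)) // emit all records stamped in this tick's window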

Related

Decay chain simulation with significantly different time scales

I would like to simulate a decay chain with Python. Normally (in a loop over all nuclides) one calculates the number of decays per time step and updates the number of mother and daughter nuclei.
My problem is that the decay chain contains half-lives on very different time scales:
0.0001643 seconds for Po-214 and 50492160000 seconds (= 1600 years) for Ra-226.
Using the same time step for all nuclides seems useless.
Is there a simulation method, preferably in Python, that can be used to handle this case?
Don't use time steps for this. Use event scheduling.
Half-lives can be expressed as exponential decay, and the conversion between half-life and decay rate is straightforward (rate = ln 2 / half-life). Start with the number of both types of nuclei, and schedule exponentially distributed inter-event times to figure out when the next decay of each type will occur. Whichever type has the lower time, decrement the corresponding number of nuclei and schedule the next decay for that type (and if need be, increment the count of whatever it decays into).
This can easily be generalized to multiple distinct event types by using a priority queue ordered by time of occurrence to determine which event will be the next one performed. This is the underlying principle behind discrete event simulation.
Update
This approach works with individual decay events, but we can leverage two important properties when we have exponential inter-event times.
The first is to note that exponentially distributed inter-event times mean these are Poisson processes. The superposition property tells us that the union of two independent Poisson processes, each having rate λ, is a Poisson process with rate 2λ. Simple induction shows that the superposition of n independent Poisson processes with the same rate λ is a Poisson process with rate nλ.
The second property is that the exponential distribution is memoryless. This means that when a Poisson event occurs, we can generate the time to the next event by generating a new exponentially distributed time at the current rate and adding it to the current time.
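A two-line illustration of how these properties get used below (assumes numpy; the numbers are arbitrary):

import numpy as np

rng = np.random.default_rng()
mean_single = 1590.0 / np.log(2)  # mean lifetime of one nucleus, in years
n = 1_000_000                     # population size
# Superposition: the time to the next decay among n independent nuclei is
# exponential with mean mean_single / n. Memorylessness: after each event we
# simply draw a fresh exponential at the new population size.
t_next = rng.exponential(mean_single / n)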
You haven't provided any information about what you want in the way of output, so I arbitrarily decided to print a report showing the time and the current numbers of nuclides whenever that number was halved. I also printed a report every 10 years, given the long half-life of Ra-226.
I converted half-lives to rates using the relation above, and then to means, since that's what numpy's exponential generator is parameterized with. That's an easy conversion, since means and rates are inverses of each other.
Here's a Python implementation with comments:
from numpy.random import default_rng
from math import log

rng = default_rng()

# This creates a list of entries of quantities that will trigger a report.
# I've chosen to go with successive halvings of the original quantity.
def generate_report_qtys(n0):
    report_qty = []
    divisor = 2
    while divisor < n0:
        report_qty.append(n0 // divisor)  # append next half-life qty to array
        divisor *= 2
    return report_qty

seconds_per_year = 365.25 * 24 * 60 * 60
po_214_half_life = 0.0001643  # seconds
ra_226_half_life = 1590 * seconds_per_year
log_2 = log(2)
po_mean = po_214_half_life / log_2  # mean lifetime of a single po_214 nuclide
ra_mean = ra_226_half_life / log_2  # ditto for ra_226
po_n = po_n0 = 1_000_000_000
ra_n = ra_n0 = 1_000_000_000
time = 0.0

# Generate a report when the following sets of half-lives are reached
po_report_qtys = generate_report_qtys(po_n0)
ra_report_qtys = generate_report_qtys(ra_n0)

# Initialize first event times for each type of event:
# - first entry is polonium next event time
# - second entry is radium next event time
# - third entry is next ten year report time
next_event_time = [
    rng.exponential(po_mean / po_n),
    rng.exponential(ra_mean / ra_n),
    10 * seconds_per_year
]

# Print column labels and initial values
print("time,po_214,ra_226,time_in_years")
print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")

while time < ra_226_half_life:
    # Find the index of the next event time. Index tells us the event type.
    min_index = next_event_time.index(min(next_event_time))
    if min_index == 0:
        po_n -= 1  # decrement polonium count
        time = next_event_time[0]  # update clock to the event time
        if po_n > 0:
            next_event_time[0] += rng.exponential(po_mean / po_n)  # determine next event time for po
        else:
            next_event_time[0] = float('Inf')
        # print report if this is a half-life occurrence
        if len(po_report_qtys) > 0 and po_n == po_report_qtys[0]:
            po_report_qtys.pop(0)  # remove this occurrence from the list
            print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")
    elif min_index == 1:
        # same as above, but for radium
        ra_n -= 1
        time = next_event_time[1]
        if ra_n > 0:
            next_event_time[1] += rng.exponential(ra_mean / ra_n)
        else:
            next_event_time[1] = float('Inf')
        if len(ra_report_qtys) > 0 and ra_n == ra_report_qtys[0]:
            ra_report_qtys.pop(0)
            print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")
    else:
        # update clock, print ten year report
        time = next_event_time[2]
        next_event_time[2] += 10 * seconds_per_year
        print(f"{time},{po_n},{ra_n},{time / seconds_per_year}")
Run times are proportional to the number of nuclides. Running with a billion of each took 831.28s on my M1 MacBook Pro, versus 2.19s for a million of each. I also ported this to Crystal, a compiled Ruby-like language, which produced comparable results in 32 seconds for a billion of each nuclide. I would recommend a compiled language if you intend to run larger problems, but I will also point out that with half-life reporting as above, smaller population sizes yield virtually identical results much more rapidly.
I would also suggest that if you want to use this approach for a more complex model, you should use a priority queue of tuples containing time and type of event to store the set of pending future events rather than a simple list.
Last but not least, here's some sample output:
time,po_214,ra_226,time_in_years
0.0,1000000000,1000000000,0.0
0.0001642985647308265,500000000,1000000000,5.20630734690935e-12
0.0003286071415481526,250000000,1000000000,1.0412931957694901e-11
0.0004929007624958987,125000000,1000000000,1.5619082645571865e-11
0.0006571750701843468,62500000,1000000000,2.082462133319222e-11
0.0008214861652253772,31250000,1000000000,2.6031325741671646e-11
0.0009858208114474198,15625000,1000000000,3.1238776442043114e-11
0.0011502417677631668,7812500,1000000000,3.6448962144243124e-11
0.0013145712145548718,3906250,1000000000,4.165624808460947e-11
0.0014788866075394896,1953125,1000000000,4.686308868670272e-11
0.0016432124609700412,976562,1000000000,5.2070260760325286e-11
0.001807832817519779,488281,1000000000,5.728676507465013e-11
0.001972981254301889,244140,1000000000,6.252000324175124e-11
0.0021372947080755688,122070,1000000000,6.772678239395799e-11
0.002301139510796509,61035,1000000000,7.29187108904514e-11
0.0024642826956509244,30517,1000000000,7.808840645837847e-11
0.0026302282280720344,15258,1000000000,8.33469030620844e-11
0.0027944471221414947,7629,1000000000,8.855068579808016e-11
0.002954014120737834,3814,1000000000,9.3607058861822e-11
0.0031188370035748177,1907,1000000000,9.882998084692174e-11
0.003282466175503322,953,1000000000,1.0401507641592902e-10
0.003457552492113242,476,1000000000,1.0956322699169905e-10
0.003601851131916978,238,1000000000,1.1413577496124477e-10
0.0037747824699194033,119,1000000000,1.1961563838566314e-10
0.0039512825256332275,59,1000000000,1.252085876503038e-10
0.004124330529803301,29,1000000000,1.3069214800248755e-10
0.004337121375518753,14,1000000000,1.3743508300754027e-10
0.004535068261934763,7,1000000000,1.437076413268044e-10
0.004890820999020369,3,1000000000,1.5498076529965425e-10
0.004909065046898487,1,1000000000,1.555588842908994e-10
315576000.0,0,995654793,10.0
631152000.0,0,991322602,20.0
946728000.0,0,987010839,30.0
1262304000.0,0,982711723,40.0
1577880000.0,0,978442651,50.0
1893456000.0,0,974185269,60.0
2209032000.0,0,969948418,70.0
2524608000.0,0,965726762,80.0
2840184000.0,0,961524848,90.0
3155760000.0,0,957342148,100.0
3471336000.0,0,953178898,110.0
3786912000.0,0,949029294,120.0
4102488000.0,0,944898063,130.0
4418064000.0,0,940790494,140.0
4733640000.0,0,936699123,150.0
5049216000.0,0,932622334,160.0
5364792000.0,0,928565676,170.0
5680368000.0,0,924523267,180.0
5995944000.0,0,920499586,190.0
6311520000.0,0,916497996,200.0
6627096000.0,0,912511030,210.0
6942672000.0,0,908543175,220.0
7258248000.0,0,904590364,230.0
7573824000.0,0,900656301,240.0
7889400000.0,0,896738632,250.0
8204976000.0,0,892838664,260.0
8520552000.0,0,888956681,270.0
8836128000.0,0,885084855,280.0
9151704000.0,0,881232862,290.0
9467280000.0,0,877401861,300.0
9782856000.0,0,873581425,310.0
10098432000.0,0,869785364,320.0
10414008000.0,0,866002042,330.0
10729584000.0,0,862234212,340.0
11045160000.0,0,858485627,350.0
11360736000.0,0,854749939,360.0
11676312000.0,0,851032010,370.0
11991888000.0,0,847329028,380.0
12307464000.0,0,843640016,390.0
12623040000.0,0,839968529,400.0
12938616000.0,0,836314000,410.0
13254192000.0,0,832673999,420.0
13569768000.0,0,829054753,430.0
13885344000.0,0,825450233,440.0
14200920000.0,0,821859757,450.0
14516496000.0,0,818284787,460.0
14832072000.0,0,814727148,470.0
15147648000.0,0,811184419,480.0
15463224000.0,0,807655470,490.0
15778800000.0,0,804139970,500.0
16094376000.0,0,800643280,510.0
16409952000.0,0,797159389,520.0
16725528000.0,0,793692735,530.0
17041104000.0,0,790239221,540.0
17356680000.0,0,786802135,550.0
17672256000.0,0,783380326,560.0
17987832000.0,0,779970864,570.0
18303408000.0,0,776576174,580.0
18618984000.0,0,773197955,590.0
18934560000.0,0,769836170,600.0
19250136000.0,0,766488931,610.0
19565712000.0,0,763154778,620.0
19881288000.0,0,759831742,630.0
20196864000.0,0,756528400,640.0
20512440000.0,0,753237814,650.0
20828016000.0,0,749961747,660.0
21143592000.0,0,746699940,670.0
21459168000.0,0,743450395,680.0
21774744000.0,0,740219531,690.0
22090320000.0,0,736999181,700.0
22405896000.0,0,733793266,710.0
22721472000.0,0,730602000,720.0
23037048000.0,0,727427544,730.0
23352624000.0,0,724260327,740.0
23668200000.0,0,721110260,750.0
23983776000.0,0,717973915,760.0
24299352000.0,0,714851218,770.0
24614928000.0,0,711740161,780.0
24930504000.0,0,708645945,790.0
25246080000.0,0,705559170,800.0
25561656000.0,0,702490991,810.0
25877232000.0,0,699436919,820.0
26192808000.0,0,696394898,830.0
26508384000.0,0,693364883,840.0
26823960000.0,0,690348242,850.0
27139536000.0,0,687345934,860.0
27455112000.0,0,684354989,870.0
27770688000.0,0,681379178,880.0
28086264000.0,0,678414567,890.0
28401840000.0,0,675461363,900.0
28717416000.0,0,672522494,910.0
29032992000.0,0,669598412,920.0
29348568000.0,0,666687807,930.0
29664144000.0,0,663787671,940.0
29979720000.0,0,660901676,950.0
30295296000.0,0,658027332,960.0
30610872000.0,0,655164886,970.0
30926448000.0,0,652315268,980.0
31242024000.0,0,649481821,990.0
31557600000.0,0,646656096,1000.0
31873176000.0,0,643841377,1010.0
32188752000.0,0,641041609,1020.0
32504328000.0,0,638253759,1030.0
32819904000.0,0,635479981,1040.0
33135480000.0,0,632713706,1050.0
33451056000.0,0,629962868,1060.0
33766632000.0,0,627223350,1070.0
34082208000.0,0,624494821,1080.0
34397784000.0,0,621778045,1090.0
34713360000.0,0,619076414,1100.0
35028936000.0,0,616384399,1110.0
35344512000.0,0,613702920,1120.0
35660088000.0,0,611035112,1130.0
35975664000.0,0,608376650,1140.0
36291240000.0,0,605729994,1150.0
36606816000.0,0,603093946,1160.0
36922392000.0,0,600469403,1170.0
37237968000.0,0,597854872,1180.0
37553544000.0,0,595254881,1190.0
37869120000.0,0,592663681,1200.0
38184696000.0,0,590085028,1210.0
38500272000.0,0,587517782,1220.0
38815848000.0,0,584961743,1230.0
39131424000.0,0,582420312,1240.0
39447000000.0,0,579886455,1250.0
39762576000.0,0,577362514,1260.0
40078152000.0,0,574849251,1270.0
40393728000.0,0,572346625,1280.0
40709304000.0,0,569856166,1290.0
41024880000.0,0,567377753,1300.0
41340456000.0,0,564908008,1310.0
41656032000.0,0,562450828,1320.0
41971608000.0,0,560005832,1330.0
42287184000.0,0,557570018,1340.0
42602760000.0,0,555143734,1350.0
42918336000.0,0,552729893,1360.0
43233912000.0,0,550326162,1370.0
43549488000.0,0,547932312,1380.0
43865064000.0,0,545550017,1390.0
44180640000.0,0,543178924,1400.0
44496216000.0,0,540814950,1410.0
44811792000.0,0,538462704,1420.0
45127368000.0,0,536123339,1430.0
45442944000.0,0,533792776,1440.0
45758520000.0,0,531469163,1450.0
46074096000.0,0,529157093,1460.0
46389672000.0,0,526854383,1470.0
46705248000.0,0,524564196,1480.0
47020824000.0,0,522282564,1490.0
47336400000.0,0,520011985,1500.0
47651976000.0,0,517751635,1510.0
47967552000.0,0,515499791,1520.0
48283128000.0,0,513257373,1530.0
48598704000.0,0,511022885,1540.0
48914280000.0,0,508798440,1550.0
49229856000.0,0,506582663,1560.0
49545432000.0,0,504379227,1570.0
49861008000.0,0,502186693,1580.0
50176584000.0,0,500000869,1590.0
Expanded for More than 2 Nuclides
I mentioned that for more than a couple of nuclides you'd want to use a priority queue to track which decays occur next. I reorganized the code around functions, which allowed greater flexibility in expanding the scope of the problem. Here you go:
#!/usr/bin/env python3
from numpy.random import default_rng
from math import log
import heapq

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60
LOG_2 = log(2)
rng = default_rng()

def generate_report_qtys(n0):
    report_qty = []
    divisor = 2
    while divisor < n0:
        report_qty.append(n0 // divisor)  # append next half-life qty to array
        divisor *= 2
    return report_qty

po_n0 = 10_000_000
ra_n0 = 10_000_000
mu_n0 = 10_000_000

# mean is half-life / LOG_2
properties = dict(
    po_214 = dict(
        mean = 0.0001643 / LOG_2,
        qty = po_n0,
        report_qtys = generate_report_qtys(po_n0)
    ),
    ra_226 = dict(
        mean = 1590 * SECONDS_PER_YEAR / LOG_2,
        qty = ra_n0,
        report_qtys = generate_report_qtys(ra_n0)
    ),
    made_up = dict(
        mean = 75 * SECONDS_PER_YEAR / LOG_2,
        qty = mu_n0,
        report_qtys = generate_report_qtys(mu_n0)
    )
)
nuclide_names = [name for name in properties.keys()]

def population_mean(nuclide):
    return properties[nuclide]['mean'] / properties[nuclide]['qty']

def report():  # isolate as single point of maintenance even though it's a one-liner
    nuc_qtys = [str(properties[nuclide]['qty']) for nuclide in nuclide_names]
    print(f"{time},{time / SECONDS_PER_YEAR}," + ','.join(nuc_qtys))

def decay_event(nuclide):
    properties[nuclide]['qty'] -= 1
    current_qty = properties[nuclide]['qty']
    if current_qty > 0:
        heapq.heappush(event_q, (time + rng.exponential(population_mean(nuclide)), nuclide))
    rep_qty = properties[nuclide]['report_qtys']
    if len(rep_qty) > 0 and current_qty == rep_qty[0]:
        rep_qty.pop(0)  # remove this occurrence from the list
        report()

def report_event():
    heapq.heappush(event_q, (time + 10 * SECONDS_PER_YEAR, 'report_event'))
    report()

event_q = [(rng.exponential(population_mean(nuclide)), nuclide) for nuclide in nuclide_names]
event_q.append((0.0, "report_event"))
heapq.heapify(event_q)
time = 0.0  # simulated time

print("time(seconds),time(years)," + ','.join(nuclide_names))  # column labels
while time < 1600 * SECONDS_PER_YEAR:
    time, event_id = heapq.heappop(event_q)
    if event_id == 'report_event':
        report_event()
    else:
        decay_event(event_id)
To add more nuclides, add more entries to the properties dictionary, following the template of the current entries.
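For instance, an entry for Rn-222, whose half-life is about 3.8 days, could look like this (the quantity is arbitrary):

    rn_222 = dict(
        mean = 3.8235 * 24 * 60 * 60 / LOG_2,  # half-life of ~3.8235 days, in seconds
        qty = 10_000_000,
        report_qtys = generate_report_qtys(10_000_000)
    )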

Google OR-Tools: VRP with time windows - solution doesn't account for transit time

While trying to solve a VRP with time windows (based on the tutorial), the solver does not seem to take transit time into account. Instead, it seems to assume you can jump instantaneously between nodes. Why aren't the transit and loading/unloading times in time_dimension being applied?
Route for vehicle 6:
Node A: Time(36000,36000) ->
Node B: Time(36000,36000) ->
Node C: Time(36000,36000) ->
Node D: Time(36000,36000) ->
Node A: Time(36000,36000)
Cost of the route: $811.84
Timing of route: 10:0:0 - 10:0:0
Duration on-duty: 0hr, 0min, 0sec
I have verified that data['time_matrix'] is not all zeros. Below is the bulk of the code for context:
# Create and register a transit time callback.
def transit_time_callback(from_index, to_index):
    """Returns the travel time only between the two nodes."""
    # Convert from routing variable Index to time matrix NodeIndex.
    from_node = manager.IndexToNode(from_index)
    to_node = manager.IndexToNode(to_index)
    return data['time_matrix'][from_node][to_node]

# Create and register a time callback.
def time_callback(from_index, to_index):
    """Returns the travel and dock time between the two nodes."""
    # Convert from routing variable Index to time matrix NodeIndex.
    to_node = manager.IndexToNode(to_index)
    transit_time = transit_time_callback(from_index, to_index)
    if to_node in data['pickups']:
        dock_time = data['load_time'][to_node]
    else:
        dock_time = data['unload_time'][to_node]
    return transit_time + dock_time

time_callback_index = routing.RegisterTransitCallback(time_callback)

# Add time windows constraint.
routing.AddDimension(
    time_callback_index,
    4 * MIN_PER_HR * SEC_PER_MIN,   # allow waiting time
    24 * MIN_PER_HR * SEC_PER_MIN,  # maximum time per vehicle
    False,  # Don't force start cumul to zero.
    'Time')
time_dimension = routing.GetDimensionOrDie('Time')

# Add time window constraints for each location.
for location_idx, time_window in enumerate(data['time_windows']):
    index = manager.NodeToIndex(location_idx)
    time_dimension.CumulVar(index).SetRange(time_window[0], time_window[1])

# Limit max on-duty time for each vehicle
for vehicle_id in range(data['num_vehicles']):
    routing.solver().Add(
        time_dimension.CumulVar(routing.End(vehicle_id))
        - time_dimension.CumulVar(routing.Start(vehicle_id)) <= MAX_TIME_ON_DUTY)

# Minimize on-duty duration.
for i in range(data['num_vehicles']):
    routing.AddVariableMaximizedByFinalizer(
        time_dimension.CumulVar(routing.Start(i)))
    routing.AddVariableMinimizedByFinalizer(
        time_dimension.CumulVar(routing.End(i)))

# Set cost
routing.SetArcCostEvaluatorOfAllVehicles(time_callback_index)

# Solve the problem
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
solution = routing.SolveWithParameters(search_parameters)
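For reference, one way to inspect what the solver actually assigned to the 'Time' dimension is a printer along these lines (a sketch based on the standard OR-Tools VRPTW example; it assumes the manager, routing, time_dimension, and solution objects above):

for vehicle_id in range(data['num_vehicles']):
    index = routing.Start(vehicle_id)
    while not routing.IsEnd(index):
        time_var = time_dimension.CumulVar(index)
        print(f"Node {manager.IndexToNode(index)}: "
              f"Time({solution.Min(time_var)},{solution.Max(time_var)}) ->")
        index = solution.Value(routing.NextVar(index))
    time_var = time_dimension.CumulVar(index)
    print(f"Node {manager.IndexToNode(index)}: "
          f"Time({solution.Min(time_var)},{solution.Max(time_var)})")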

NextExecution time with quartz definition fails when date is the end of the month

Currently, I have a cron job scheduled for every minute, but once the date is the end of the month, the nextExecution method fails to provide the proper date.
test("check execution times") {
// make sure the execution times are correct
val expression = "0 * * ? * * *" // every minute (the top of the minute)
val currentTime = "2019-12-31T00:00:00.000Z" // get current time
val expectedNextExecution = "2019-12-31T00:01:00.000Z" // get current time
val cron = new CronParser(CronDefinitionBuilder.instanceDefinitionFor(CronType.QUARTZ)).parse(expression)
val executor = ExecutionTime.forCron(cron)
val actualNextExecution = executor.nextExecution(DateTime.parse(currentTime)).toString // call nextExecution which should give us the next top of the minute
// check if nextExecution has correct next execution time
assert(actualNextExecution == expectedNextExecution, "Wrong execution time")
}
Results (the test framework brackets the portion where expected and actual differ):
Wrong execution time
Expected: "20[19-12-31T00:01]:00.000Z"
Actual:   "20[20-01-01T00:00]:00.000Z"
The answer to this was updating my cron-utils version. The issue was present in 4.1.0 and was fixed as of 9.0.2.
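For anyone hitting the same bug, bumping the dependency looks something like this in sbt (cron-utils is published under the com.cronutils group):

libraryDependencies += "com.cronutils" % "cron-utils" % "9.0.2"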

forAll in ScalaCheck skips some inputs and does not respect container size

I am new to ScalaCheck and I want to test the following piece of my application. I want to generate 30 random events and 20 random instances and check that my application code computes the correct result.
// generate 30 random events
val eventGenerator: Gen[Event] = for {
  d <- Gen.oneOf[String](Seq("es1", "es2", "es3"))
  t <- Gen.choose[Long](minEvent.getTime, maxEvent.getTime)
  s <- Gen.oneOf[String](Seq("s1", "s2", "s3", "s4", "s5", "s6", "s7"))
} yield Event(d, t, s)
val eventsGenerator: Gen[List[Event]] = Gen.containerOfN[List, Event](30, eventGenerator)

// generate 20 random instances
val instanceGenerator: Gen[Instance] = for {
  d <- Gen.oneOf[String](Seq("es1", "es2", "es3"))
  t <- Gen.choose[Long](minInstance.getTime, maxInstance.getTime)
} yield Instance(d, new Timestamp(t))
val instancesGenerator: Gen[List[Instance]] = Gen.containerOfN[List, Instance](20, instanceGenerator)

val p: Prop = forAll(instancesGenerator, eventsGenerator) { (i, e) =>
  println(i.size)
  println(e.size)
  println()
  val instancesWithFeature = computeExpected(i)
  isEqual(transform(i), instancesWithFeature)
}
For some reason I see this in the stdout
20
15
20
7
20
3
20
1
20
0
starting to compute expected:
Basically it looks like forAll generates a couple of inputs of a certain size and then skips them. For some reason, it only starts to compute things once one of the inputs has size 0, and then it starts the proper check. My questions are:
why, if I use containerOfN or listOfN, don't I get inputs of exactly that size? How can I generate inputs like this?
is it normal that forAll starts to explore the space of possible inputs and skips some of them? Am I missing something here? This behaviour is quite counterintuitive to me.
You may need to use forAllNoShrink to avoid the known defect in ScalaCheck where shrinking fails to respect generators:
val thirtyInts: Gen[List[Int]] =
  Gen.listOfN[Int](30, Gen.const(99))

val twentyLongs: Gen[List[Long]] =
  Gen.listOfN[Long](20, Gen.const(44L))

property("listOfN") = {
  Prop.forAllNoShrink(thirtyInts, twentyLongs) { (ii, ll) =>
    ii.size == 30 && ll.size == 20
  }
}
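Applied to the generators in the question, that would look something like this (a sketch reusing the instancesGenerator, eventsGenerator, computeExpected, transform, and isEqual defined there):

val p: Prop = Prop.forAllNoShrink(instancesGenerator, eventsGenerator) { (i, e) =>
  // sizes now stay at 20 and 30 because no shrinking is attempted
  val instancesWithFeature = computeExpected(i)
  isEqual(transform(i), instancesWithFeature)
}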

Paradoxical timing functions

I have a function to compute the time spent in a block:
import collection.mutable.{Map => MMap}

var timeTotalMap = MMap[String, Long]()
var numMap = MMap[String, Float]()
var averageMsMap = MMap[String, Float]()

def time[T](key: String)(block: => T): T = {
  val start = System.nanoTime()
  val res = block
  val total = System.nanoTime - start
  timeTotalMap(key) = timeTotalMap.getOrElse(key, 0L) + total
  numMap(key) = numMap.getOrElse(key, 0f) + 1
  averageMsMap(key) = timeTotalMap(key) / 1000000000f / numMap(key)
  res
}
I am timing both a function and the one place where it is called:
time("outerpos") { intersectPos(a, b) }
and the function itself starts with:
def intersectPos(p1: SPosition, p2: SPosition)(implicit unify: Option[Identifier] = None): SPosition =
  time("innerpos") {
    (p1, p2) match {
      ...
    }
  }
When I display the nano times for each key (timeTotalMap), I get (spaces added for readability):
outerpos-> 37 870 034 714
innerpos-> 53 647 956
It means that the total execution time of innerpos is about 1000 times less than that of outerpos.
What?! Is there really a factor of 1000 between the two? Does it really say that the outer call takes 1000x more time than all the inner calls? Am I missing something, or is there a memory-leaking ghost somewhere?
Update
When I compare the number of execution for each block (numMap), I find the following:
outerpos -> 362878
innerpos -> 21764
This is paradoxical. Even if intersectPos is called somewhere else, shouldn't the number of times it is called be at least as great as the number of times outerpos is executed?
EDIT
If I move the line numMap(key) = numMap.getOrElse(key, 0f) + 1 to the top of the time function, then these numbers become approximately equal.
nanoTime on the JVM is considered safe but not very accurate. Here are some good reads:
Is System.nanoTime() completely useless?
System.currentTimeMillis vs System.nanoTime
Basically your test will suffer from timer inaccuracy. One way to work around that would be to call time("outerpos") for quite a long time (note that JIT compiler optimization might kick in at some point) and measure the start and end times only once. Take an average and repeat with time("innerpos"). That's all I could think of. Still not the best test ever ;)
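A rough sketch of that measure-once-and-average idea (the helper name and iteration count are my own; the warm-up pass gives the JIT a chance to compile the block before anything is measured):

def benchmark[T](label: String, iterations: Int)(block: => T): Unit = {
  var i = 0
  while (i < iterations) { block; i += 1 } // warm-up pass for JIT compilation
  val start = System.nanoTime()
  i = 0
  while (i < iterations) { block; i += 1 } // measured pass, timed as a whole
  val elapsed = System.nanoTime() - start
  println(s"$label: ${elapsed / iterations} ns per call on average")
}

// e.g. benchmark("outerpos", 100000) { intersectPos(a, b) }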