Promela system with unranged values - queue

int rq_begin = 0, rq_end = 0;
int av_begin = 0, av_end = 0;
#define MAX_DUR 10
#define RQ_DUR 5

proctype Writer() {
    do
    :: (av_end < rq_end) -> av_end++;
        if
        :: (av_end - av_begin) > MAX_DUR -> av_begin = av_end - MAX_DUR;
        :: else -> skip;
        fi;
        printf("available span: [%d,%d]\n", av_begin, av_end);
    od
}

proctype Reader() {
    do
    :: d_step {
            rq_begin++;
            rq_end = rq_begin + RQ_DUR;
        };
        printf("requested span: [%d,%d]\n", rq_begin, rq_end);
        (rq_begin >= av_begin && rq_end <= av_end);
        printf("got requested span\n");
    od
}

init {
    run Writer();
    run Reader();
}
This system (only an example) is meant to model a reader/writer queue where the reader requests a certain span of frames [rq_begin,rq_end], and the writer should then make at least this span available. [av_begin,av_end] is the span of available frames.
The four values are absolute frame indices; rq_begin is incremented indefinitely as the reader reads the next span of frames.
The system cannot be directly verified because the values are unbounded (generating infinitely many states). Does Promela/Spin (or similar software) have support for verifying a system like this, and for automatically transforming it so that it becomes finite?
For example, if all four values were incremented by the same amount, the situation would still be the same. Alternatively, the model could be reformulated with variables for the differences between these values, for example av_end - rq_end.
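For illustration, here is a rough, unverified sketch of that difference-based reformulation in Promela (a hand abstraction; as far as I can tell, Spin does not perform it automatically). It tracks d = av_end - av_begin (bounded by MAX_DUR) and g = av_end - rq_end; the Writer's guard av_end < rq_end becomes g < 0, and the Reader's wait rq_begin >= av_begin && rq_end <= av_end becomes g >= 0 && g <= d - RQ_DUR. It glosses over the very first Reader step, where rq_end jumps from 0 to RQ_DUR + 1 in the original:
#define MAX_DUR 10
#define RQ_DUR 5

int g = 0;   /* av_end - rq_end   */
int d = 0;   /* av_end - av_begin */

proctype Writer() {
    do
    :: (g < 0) -> g++;
        if
        :: d < MAX_DUR -> d++;   /* av_end++ grows the available span */
        :: else -> skip;         /* ...until av_begin gets dragged along */
        fi
    od
}

proctype Reader() {
    do
    :: g--;                          /* rq_begin++ shifts the request window */
        (g >= 0 && g <= d - RQ_DUR); /* the rewritten wait condition */
        printf("got requested span\n")
    od
}

init { run Writer(); run Reader() }
Because g and d now stay within a small range, the state space is finite and Spin can exhaust it.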
I'm using Promela/Spin to verify a more complex queuing system which uses absolute frame indices like this.

Related

Minimum cost solution to connect all elements in set A to at least one element in set B

I need to find the shortest set of paths to connect each element of Set A with at least one element of Set B. Repetitions in A OR B are allowed (but not both), and no element can be left unconnected. Something like this: (illustration omitted)
I'm representing the elements as integers, so the "cost" of a connection is just the absolute value of the difference. I also have a cost for crossing paths, so if Set A = [60, 64] and Set B = [63, 67], then (60 -> 67) incurs an additional cost. There can be any number of elements in either set.
I've calculated the table of transitions and costs (distances and crossings), but I can't find the algorithm to find the lowest-cost solution. I keep ending up with either too many connections (i.e., repetitions in both A and B) or greedy solutions that omit elements (e.g., when A and B are non-overlapping). I haven't been able to find examples of precisely this kind of problem online, so I hoped someone here might be able to help, or at least point me in the right direction. I'm not a graph theorist (obviously!), and I'm writing in Swift, so code examples in Swift (or pseudocode) would be much appreciated.
UPDATE: The solution offered by @Daniel is almost working, but it does occasionally add unnecessary duplicates. I think this may be something to do with the sorting of the priorityQueue; the duplicates always involve identical elements with identical costs. My first thought was to add some kind of "positional encoding" (yes, Transformer-speak) to the costs, so that the costs are offset by their positions (though of course, this doesn't guarantee unique costs). I thought I'd post my Swift version here, in case anyone has any ideas:
public static func voiceLeading(from chA: [Int], to chB: [Int]) -> Set<[Int]> {
    var result: Set<[Int]> = Set()
    let im = intervalMatrix(chA, chB: chB)
    if im.count == 0 { return [[0]] }
    let vc = voiceCrossingCostsMatrix(chA, chB: chB, cost: 4)
    // NOTE: cm contains the weights
    let cm = VectorUtils.absoluteAddMatrix(im, toMatrix: vc)
    var A_links: [Int: Int] = [:]
    var B_links: [Int: Int] = [:]
    var priorityQueue: [Entry] = []
    for (i, a) in chA.enumerated() {
        for (j, b) in chB.enumerated() {
            priorityQueue.append(Entry(a: a, b: b, cost: cm[i][j]))
            if A_links[a] != nil {
                A_links[a]! += 1
            } else {
                A_links[a] = 1
            }
            if B_links[b] != nil {
                B_links[b]! += 1
            } else {
                B_links[b] = 1
            }
        }
    }
    priorityQueue.sort { $0.cost > $1.cost }
    while priorityQueue.count > 0 {
        let entry = priorityQueue[0]
        if A_links[entry.a]! > 1 && B_links[entry.b]! > 1 {
            A_links[entry.a]! -= 1
            B_links[entry.b]! -= 1
        } else {
            result.insert([entry.a, (entry.b - entry.a)])
        }
        priorityQueue.remove(at: 0)
    }
    return result
}
Of course, since the duplicates have identical scores, it shouldn't be a problem to just remove the extras, but it feels a bit hackish...
UPDATE 2: Slightly less hackish (but still a bit!); since the requirement is that my result should have equal cardinality to max(|A|, |B|), I can actually just stop adding entries to my result when I've reached the target cardinality. Seems okay...
UPDATE 3: Resurrecting this old question, I've recently had some problems arise from the fact that the above algorithm doesn't fulfill my requirement |S| == max(|A|, |B|) (where S is the set of pairings). If anyone knows of a simple way of ensuring this it would be much appreciated. (I'll obviously be poking away at possible changes.)
This is an easy task:
Add all edges of the graph to a priority_queue, where the edge with the biggest weight has the highest priority.
Look at each edge e = (u, v, w) in the priority_queue, where u is in A, v is in B, and w is the weight.
If removing e from the graph doesn't leave u or v isolated, remove it.
Otherwise, e is part of the answer.
This should be enough for your case:
#include <bits/stdc++.h>
using namespace std;

struct edge {
    int u, v, w;
    edge() {}
    edge(int up, int vp, int wp) { u = up; v = vp; w = wp; }
    void print() { cout << "(" << u << ", " << v << ")" << endl; }
    bool operator<(const edge& rhs) const { return w < rhs.w; }
};

vector<edge> E;           // edge set
priority_queue<edge> pq;
vector<edge> ans;
int grade[5] = {3, 3, 2, 2, 2};

int main() {
    E.push_back(edge(0, 2, 1)); E.push_back(edge(0, 3, 1)); E.push_back(edge(0, 4, 4));
    E.push_back(edge(1, 2, 5)); E.push_back(edge(1, 3, 2)); E.push_back(edge(1, 4, 0));
    for (int i = 0; i < E.size(); i++) pq.push(E[i]);
    while (!pq.empty()) {
        edge e = pq.top();
        if (grade[e.u] > 1 && grade[e.v] > 1) {
            grade[e.u]--; grade[e.v]--;
        }
        else ans.push_back(e);
        pq.pop();
    }
    for (int i = 0; i < ans.size(); i++) ans[i].print();
    return 0;
}
Complexity: O(E lg(E)).
I think this problem is "minimum weighted bipartite matching" (although searching for "maximum weighted bipartite matching" would also be relevant; it's just the opposite).

How to generate a model for my code using boolector?

I'm experimenting a bit with Boolector, so I'm trying to create a model for some simple code. Suppose that I have the following pseudo code:
int a = 5;
int b = 4;
int c = 3;
For this simple set of instructions I can create the model and all works fine. The problem arises when I have further instructions after that, like:
b = 10;
c = 20;
Obviously it fails to generate the model, because b cannot be equal to both 4 and 10 within the same model. One of the maintainers suggested that I use boolector_push and boolector_pop in order to create new contexts when needed.
The code for boolector_push is:
void
boolector_push (Btor *btor, uint32_t level)
{
  BTOR_ABORT_ARG_NULL (btor);
  BTOR_TRAPI ("%u", level);
  BTOR_ABORT (!btor_opt_get (btor, BTOR_OPT_INCREMENTAL),
              "incremental usage has not been enabled");
  if (level == 0) return;
  uint32_t i;
  for (i = 0; i < level; i++)
  {
    BTOR_PUSH_STACK (btor->assertions_trail,
                     BTOR_COUNT_STACK (btor->assertions));
  }
  btor->num_push_pop++;
}
and the code for boolector_pop is:
void
boolector_pop (Btor *btor, uint32_t level)
{
  BTOR_ABORT_ARG_NULL (btor);
  BTOR_TRAPI ("%u", level);
  BTOR_ABORT (!btor_opt_get (btor, BTOR_OPT_INCREMENTAL),
              "incremental usage has not been enabled");
  BTOR_ABORT (level > BTOR_COUNT_STACK (btor->assertions_trail),
              "can not pop more levels (%u) than created via push (%u).",
              level,
              BTOR_COUNT_STACK (btor->assertions_trail));
  if (level == 0) return;
  uint32_t i, pos;
  BtorNode *cur;
  for (i = 0, pos = 0; i < level; i++)
    pos = BTOR_POP_STACK (btor->assertions_trail);
  while (BTOR_COUNT_STACK (btor->assertions) > pos)
  {
    cur = BTOR_POP_STACK (btor->assertions);
    btor_hashint_table_remove (btor->assertions_cache, btor_node_get_id (cur));
    btor_node_release (btor, cur);
  }
  btor->num_push_pop++;
}
As far as I can tell, those two functions keep track of the assertions generated using boolector_assert, so how is it possible to obtain the final, correct model using boolector_push and boolector_pop, considering that the constraints are going to be the same?
What am I missing?
Thanks
As you suspected, the solver's push and pop methods aren't what you're looking for here. Instead, you have to turn the program you are modeling into what's known as SSA (Static Single Assignment) form. Here's the Wikipedia article on it, which is quite informative: https://en.wikipedia.org/wiki/Static_single_assignment_form
The basic idea is that you "treat" your mutable variables as time-varying values, and give them unique names as you make multiple assignments to them. So, the following:
a = 5
b = a + 2
c = b + 3
c = c + 1
b = c + 6
becomes:
a0 = 5
b0 = a0 + 2
c0 = b0 + 3
c1 = c0 + 1
b1 = c1 + 6
etc. Note that conditionals are tricky to deal with, and generally require what's known as phi-nodes. (i.e., merging the values of branches.) Most compilers do this sort of conversion automatically for you, as it enables many optimizations down the road. You can either do it by hand, or use an algorithm to do it for you, depending on your particular problem.
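For instance, a minimal sketch of feeding the SSA'd assignments to Boolector through its C API might look like the following (the function names are from boolector.h in Boolector 3.x; treat the exact signatures as an assumption and check them against your version):
#include <stdio.h>
#include "boolector.h"

int main(void)
{
    Btor *btor = boolector_new();
    boolector_set_opt(btor, BTOR_OPT_MODEL_GEN, 1);   /* enable model output */
    BoolectorSort bv32 = boolector_bitvec_sort(btor, 32);

    /* b = 4; ... b = 10;  becomes  b0 = 4; b1 = 10;  after SSA */
    BoolectorNode *a0 = boolector_var(btor, bv32, "a0");
    BoolectorNode *b0 = boolector_var(btor, bv32, "b0");
    BoolectorNode *b1 = boolector_var(btor, bv32, "b1");

    boolector_assert(btor, boolector_eq(btor, a0, boolector_int(btor, 5, bv32)));
    boolector_assert(btor, boolector_eq(btor, b0, boolector_int(btor, 4, bv32)));
    boolector_assert(btor, boolector_eq(btor, b1, boolector_int(btor, 10, bv32)));

    if (boolector_sat(btor) == BOOLECTOR_SAT)   /* b0 and b1 no longer clash */
        boolector_print_model(btor, "btor", stdout);

    boolector_release_all(btor);
    boolector_delete(btor);
    return 0;
}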
Here's another question on stack-overflow, that's essentially asking for something similar: Z3 Conditional Statement
Hope this helps!

formula to pick every pixel in a bitmap without repeating

I'm looking for an algorithm; I am programming in Swift now, but pseudocode or any reasonably similar "C family" syntax will do.
Imagine a large list of values, such as pixels in a bitmap. You want to pick each one in a visually random order, one at a time, never pick the same one twice, and always end up picking them all.
I used this before in a fractal generator so that it was not just rendering line by line, but building the image up slowly in a stochastic way. That was long ago, in a Java applet, and I no longer have the code.
I do not believe it used any pseudo-random number generator, and the main thing I liked about it is that it did not make the rendering take longer than the plain line-by-line approach. Any of the shuffling algorithms I looked at would make the rendering take longer with such a large number of values to deal with, unless I'm missing something.
EDIT: I went with the shuffle-an-array approach. I shuffle once when the app loads, and it does not take that long anyway. Here is the code for my "Dealer" class.
import Foundation
import Cocoa
import Quartz

class Dealer: NSObject
{
    //########################################################
    var deck = [(CGFloat, CGFloat)]()
    var count = 0
    //########################################################
    init(_ w: Int, _ h: Int)
    {
        super.init()
        deck.reserveCapacity((w + 1) * (h + 1)) // the 0...h / 0...w ranges below are inclusive
        for y in 0...h
        {
            for x in 0...w
            {
                deck.append((CGFloat(x), CGFloat(y)))
            }
        }
        self.shuffle()
    }
    //########################################################
    func shuffle()
    {
        // Fisher-Yates: swap each slot with a uniformly chosen slot at or after it
        for i: Int in 0..<(deck.count - 1)
        {
            let j = i + Int(arc4random_uniform(UInt32(deck.count - i)))
            deck.swapAt(i, j)
        }
    }
    //########################################################
    func deal() -> (CGFloat, CGFloat)
    {
        let result = deck[count]
        let total: Int = deck.count - 1
        if (count < total) { count = count + 1 } else { count = 0 }
        return (result)
    }
    //########################################################
}
init is called once and calls shuffle(), but you can call shuffle() again later if needed.
Each time you need a "card" you call deal(). It loops back to the beginning when the "deck" is done.
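A usage sketch (the 640×480 size is just an example):
let dealer = Dealer(640, 480)   // builds the deck and shuffles it once
let (x, y) = dealer.deal()      // the next never-repeating pixel position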
If you have enough memory to store all the pixel positions, you can shuffle them:
const int xs = 640;          // image resolution
const int ys = 480;
const int sz = xs * ys;      // image size (must be known before the arrays)
color pixel[sz];             // image data
int adr[sz], i, j;
for (i = 0; i < sz; i++) adr[i] = i;   // ordered positions
for (i = 0; i < sz; i++)               // shuffle them
{
    j = random(sz);          // pseudo-randomness with uniform distribution
    swap(adr[i], adr[j]);    // shuffle the positions, then visit pixel[adr[i]]
}
This way you are guaranteed that each pixel is visited exactly once, and most likely all of them are shuffled ...
You need to implement a pseudo-random number generator with a theoretically known period, which is greater than, but very close to, the number of elements in your list. Suppose R() is a function that implements such an RNG.
Then:
for i = 1...N
    do
        idx = R()
    while idx > N
    output element(idx)
end
If the period of the RNG is greater than N, this algorithm is guaranteed to finish and never output the same element twice.
If the period of the RNG is close to N, this algorithm will be fast (i.e., the do-while loop will mostly do one iteration).
If the RNG has good quality, the visual output will look pleasant; here you have to experiment and decide what is good enough for you.
To find a RNG that has an exactly-known period, you should examine theory on RNGs, which is very extensive (maybe too extensive); Wikipedia has useful links.
Start with Linear congruential generators: they are very simple, and there is a chance they will be of good enough quality.
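For example, here is a sketch of a full-period LCG used as R() (my illustration, in C). By the Hull-Dobell theorem, x -> (a*x + c) mod m has period exactly m when m is a power of two, c is odd, and a leaves remainder 1 when divided by 4; picking m as the smallest power of two >= N and rejecting indices >= N therefore visits every element exactly once:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint32_t width = 640, height = 480;
    const uint32_t N = width * height;   /* number of pixels */

    uint32_t m = 1;
    while (m < N) m <<= 1;               /* smallest power of two >= N */

    const uint32_t a = 1664525u;         /* a % 4 == 1 */
    const uint32_t c = 1013904223u;      /* c is odd   */
    uint32_t x = 0, produced = 0;

    while (produced < N) {
        x = (a * x + c) & (m - 1);       /* LCG mod m: full period m */
        if (x < N) {                     /* reject indices outside the image */
            /* visit pixel (x % width, x / width) here */
            produced++;
        }
    }
    return 0;
}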
Here's a working example based on linear feedback shift registers. Since an n-bit LFSR has a maximal sequence length of 2^n - 1 steps, this will work best when the number of pixels is one less than a power of 2. For other sizes, the pseudo-random coordinates are discarded until one is obtained that lies within the specified range of coordinates. This is still reasonably efficient; in the worst case (where w×h is a power of 2), there will be an average of two LFSR iterations per coordinate pair.
The following code is in Javascript, but it should be easy enough to port this to Swift or any other language.
Note: For large canvas areas like 1920×1024, it would make more sense to use repeated tiles of a smaller size (e.g., 128×128). The tiling will be imperceptible.
var lsfr_register, lsfr_mask, lsfr_fill_width, lsfr_fill_height, lsfr_state, lsfr_timer;
var lsfr_canvas, lsfr_canvas_context, lsfr_blocks_per_frame, lsfr_frame_rate = 50;
var lsfr_size, nblocks;

function lsfr_setup(width, height, callback, duration) {
    // Maximal length LFSR feedback terms
    // (sourced from http://users.ece.cmu.edu/~koopman/lfsr/index.html)
    var taps = [ -1, 0x1, 0x3, 0x5, 0x9, 0x12, 0x21, 0x41, 0x8E, 0x108, 0x204, 0x402,
                 0x829, 0x100D, 0x2015, 0x4001, 0x8016, 0x10004, 0x20013, 0x40013,
                 0x80004, 0x100002, 0x200001, 0x400010, 0x80000D, 0x1000004, 0x2000023,
                 0x4000013, 0x8000004, 0x10000002, 0x20000029, 0x40000004, 0x80000057 ];
    nblocks = width * height;
    lsfr_size = nblocks.toString(2).length;
    if (lsfr_size > 32) {
        // Anything longer than about 21 bits would be quite slow anyway
        console.log("Unsupported LFSR size (" + lsfr_size + ")");
        return;
    }
    lsfr_register = 1;
    lsfr_mask = taps[lsfr_size];
    lsfr_state = nblocks;
    lsfr_fill_width = width;
    lsfr_fill_height = height;
    lsfr_blocks_per_frame = Math.ceil(nblocks / (duration * lsfr_frame_rate));
    lsfr_timer = setInterval(callback, Math.ceil(1000 / lsfr_frame_rate));
}

function lsfr_step() {
    var x, y;
    do {
        // Generate x,y pairs until they are within the bounds of the canvas area
        // Worst-case for an n-bit LFSR is n iterations in one call (2 on average)
        // Best-case (where w*h is one less than a power of 2): 1 call per iteration
        if (lsfr_register & 1) lsfr_register = (lsfr_register >> 1) ^ lsfr_mask;
        else lsfr_register >>= 1;
        y = Math.floor((lsfr_register - 1) / lsfr_fill_width);
    } while (y >= lsfr_fill_height);
    x = (lsfr_register - 1) % lsfr_fill_width;
    return [x, y];
}

function lsfr_callback() {
    var coords;
    for (var i = 0; i < lsfr_blocks_per_frame; i++) {
        // Fetch pseudo-random coordinates and fill the corresponding pixels
        coords = lsfr_step();
        lsfr_canvas_context.fillRect(coords[0], coords[1], 1, 1);
        if (--lsfr_state <= 0) {
            clearInterval(lsfr_timer);
            break;
        }
    }
}

function start_fade() {
    var w = document.getElementById("w").value * 1;
    var h = document.getElementById("h").value * 1;
    var dur = document.getElementById("dur").value * 1;
    lsfr_canvas = document.getElementById("cv");
    lsfr_canvas.width = w;
    lsfr_canvas.height = h;
    lsfr_canvas_context = lsfr_canvas.getContext("2d");
    lsfr_canvas_context.fillStyle = "#ffff00";
    lsfr_canvas_context.fillRect(0, 0, w, h);
    lsfr_canvas_context.fillStyle = "#ff0000";
    lsfr_setup(w, h, lsfr_callback, dur);
}
Size:
<input type="text" size="3" id="w" value="320"/>
×
<input type="text" size="3" id="h" value="240"/>
in
<input type="text" size="3" id="dur" value="3"/>
secs
<button onclick="start_fade(); return 0">Start</button>
<br />
<canvas id="cv" width="320" height="240" style="border:1px solid #ccc"></canvas>

Merging geometries using a WebWorker?

Anyone know if it's possible to merge a set of cube geometries in a web worker and pass it back to the main thread? Was thinking this could reduce the lag when merging large amounts of cubes.
Does Three.JS work okay in a web worker, and if it does, would it be possible (and faster) to do this? Not sure if passing the geometry back would take just as long as merging it normally.
At the moment I'm using a timed for loop to reduce the lag:
// This array is populated by the server and contains the chunk position and data (which I do nothing with yet).
var sectionData = data.secData;
var section = 0;
var tick = function() {
var start = new Date().getTime();
for (; section < sectionData.length && (new Date().getTime()) - start < 1; section++) {
var sectionXPos = sectionData[section][0] * 10;
var sectionZPos = sectionData[section][1] * 10;
var combinedGeometry = new THREE.Geometry();
for (var layer = 0; layer < 1; layer++) { // Only 1 layer because of the lag...
for (var x = 0; x < 10; x++) {
for (var z = 0; z < 10; z++) {
blockMesh.position.set(x-4.5, layer-.5, z-4.5);
blockMesh.updateMatrix();
THREE.GeometryUtils.merge(combinedGeometry, blockMesh);
}
}
}
var sectionMesh = new THREE.Mesh(combinedGeometry, grassBlockMat);
sectionMesh.position.set(sectionXPos, 0, sectionZPos);
sectionMesh.matrixAutoUpdate = false;
sectionMesh.updateMatrix();
scene.add(sectionMesh);
}
if (section < sectionData.length) {
setTimeout(tick, 25);
}
};
setTimeout(tick, 25);
Using Three.JS rev59-dev.
Merged cubes make up the terrain in chunks, and at the moment (due to the lag) each chunk only has 1 layer.
Any tips would be appreciated! Thanks.
THREE.JS will not work in a web worker; however, you can copy the parts of the library that you need into the worker so they work both in the main thread and in your web worker.
Your first problem will be that you cannot send the geometry object itself back to the main thread.
Since web worker message passing only sends copies of plain data (not live JavaScript objects) or references to ArrayBuffers, you would have to decode the geometry down to each float, pack it into an ArrayBuffer, and send a reference back to the main thread.
Note that these are called transferable objects, and once sent, they are cleared in the web worker / main thread from which they came.
See here for more details:
http://www.html5rocks.com/en/tutorials/workers/basics/
https://developer.mozilla.org/en-US/docs/Web/Guide/Performance/Using_web_workers
Here is an example of packing position vertices into an array for a physics type system:
//length * 3 axes * 4 bytes per vertex
var posBuffer = new Float32Array(new ArrayBuffer(len * 3 * 4));
//in a loop
//... do hard work
posBuffer[i * 3] = pos.x; //pos is a threejs vector
posBuffer[i * 3 + 1] = pos.y;
posBuffer[i * 3 + 2] = pos.z;
//after loop send buffer to main thread
self.postMessage({posBuffer:posBuffer}, [posBuffer.buffer]);
I copied the THREE.JS vector class inside my web worker and cut out all the methods I didn't need to keep it nice and lean.
FYI it's not slow and for something like n-body collisions it works well.
The main thread sends a command to the web worker telling it to run the update and then listens for the response. Kind of like a producer consumer model in regular threading.
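For completeness, the receiving side could look roughly like this (my sketch: worker is the Worker instance, grassBlockMat is the material from the question, and the BufferGeometry/BufferAttribute calls are an assumption, so check what your Three.js revision actually provides; rev59 may predate parts of this API):
worker.onmessage = function (e) {
    var positions = e.data.posBuffer;   // the Float32Array view survives the transfer
    var geometry = new THREE.BufferGeometry();
    geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
    scene.add(new THREE.Mesh(geometry, grassBlockMat));
};
On the worker side, listing posBuffer.buffer in the second argument to postMessage (as in the snippet above) is what moves the memory instead of copying it.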

PID controller in C# Micro Framework issues

I have built a tricopter from scratch based on a .NET Micro Framework board from TinyCLR.com. I used the FEZ Mini which runs at 72 MHz. Read more about my project at: http://bit.ly/TriRot.
So after a pre-flight check, where I initialise and test each component (calibrating the IMU, spinning each motor, checking that I get receiver data, etc.), it enters a permanent loop which then calls the flight controller method on each iteration.
I'm trying to tune my PID controller now using the Ziegler-Nichols method, but I am always getting a progressively larger overshoot. I was eventually able to get a [mostly] stable oscillation using proportional control only (setting Ki and Kd = 0); timing the oscillation period (Pu) with a stopwatch averaged out to 3.198 seconds.
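For reference, the classic Ziegler-Nichols rules derive all three gains from the ultimate gain Ku (the proportional gain that produced the sustained oscillation) and the measured period Pu. A sketch of that calculation (my illustration, with the measured Pu of 3.198 s as the example input):
// Classic Ziegler-Nichols "PID" row: Kp = 0.6*Ku, Ti = Pu/2, Td = Pu/8
private static Single[] ZieglerNicholsGains(Single Ku, Single Pu)
{
    Single Kp = 0.6F * Ku;
    Single Ki = 2F * Kp / Pu;   // = Kp / Ti, per second
    Single Kd = Kp * Pu / 8F;   // = Kp * Td, in seconds
    return new Single[] { Kp, Ki, Kd };
}
// e.g. ZieglerNicholsGains(Ku, 3.198F), where Ku is whatever Kp value oscillated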
I came across the answer (by Rex Logan) on a similar question by chris12892.
I was initially using the "Duration" variable in milliseconds which made my copter highly aggressive, obviously because I was multiplying the running integrator error by thousands on each loop. I then divided it by another thousand to bring it to seconds, but I'm still battling...
What I don't understand from Rex's answer is:
Why does he ignore the time variable in the integral and differential parts of the equations? Is that right, or is it a typo?
What does he mean by the remark:
In a normal sampled system the delta term would be one...
One what? Should this be one second under normal circumstances? What if this value fluctuates?
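One way to read that remark (my interpretation, not Rex's own words): in a normal sampled system the loop runs at a fixed rate, so the sample period is a constant that can be folded into Ki and Kd once, up front, and the per-loop "delta" is then effectively one sample. A sketch of a PID step written that way:
// Assumes a constant sample period dt folded into the gains beforehand:
// kiDt = Ki * dt and kdOverDt = Kd / dt, so the loop never touches time.
private static Single PidStep(Single error, Single kp, Single kiDt, Single kdOverDt,
                              ref Single integrator, ref Single prevError)
{
    integrator += error;                    // dt is already inside kiDt
    Single derivative = error - prevError;  // dt is already inside kdOverDt
    prevError = error;
    return kp * error + kiDt * integrator + kdOverDt * derivative;
}
If the loop rate genuinely fluctuates, measuring the elapsed time (as the method below does) and multiplying/dividing by it is the safer approach.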
My flight controller method is below:
private static Single[] FlightController(Single[] imuData, Single[] ReceiverData)
{
    Int64 TicksPerMillisecond = TimeSpan.TicksPerMillisecond;
    Int64 CurrentTicks = DateTime.Now.Ticks;
    Int64 TickCount = CurrentTicks - PreviousTicks;
    PreviousTicks = CurrentTicks;
    Single Duration = (TickCount / TicksPerMillisecond) / 1000F;

    const Single Kp = 0.117F;        //Proportional Gain (Instantaneous offset)
    const Single Ki = 0.073170732F;  //Integral Gain (Permanent offset)
    const Single Kd = 0.001070122F;  //Differential Gain (Change in offset)

    Single RollE = 0;
    Single RollPout = 0;
    Single RollIout = 0;
    Single RollDout = 0;
    Single RollOut = 0;

    Single PitchE = 0;
    Single PitchPout = 0;
    Single PitchIout = 0;
    Single PitchDout = 0;
    Single PitchOut = 0;

    Single rxThrottle = ReceiverData[(int)Channel.Throttle];
    Single rxRoll = ReceiverData[(int)Channel.Roll];
    Single rxPitch = ReceiverData[(int)Channel.Pitch];
    Single rxYaw = ReceiverData[(int)Channel.Yaw];

    Single[] TargetMotorSpeed = new Single[] { rxThrottle, rxThrottle, rxThrottle };
    Single ServoAngle = 0;

    if (!FirstRun)
    {
        Single imuRoll = imuData[1] + 7;
        Single imuPitch = imuData[0];

        //Roll ----- Start
        RollE = rxRoll - imuRoll;
        //Proportional
        RollPout = Kp * RollE;
        //Integral
        Single InstanceRollIntegrator = RollE * Duration;
        RollIntegrator += InstanceRollIntegrator;
        RollIout = RollIntegrator * Ki;
        //Differential
        RollDout = ((RollE - PreviousRollE) / Duration) * Kd;
        //Sum
        RollOut = RollPout + RollIout + RollDout;
        //Roll ----- End

        //Pitch ---- Start
        PitchE = rxPitch - imuPitch;
        //Proportional
        PitchPout = Kp * PitchE;
        //Integral
        Single InstancePitchIntegrator = PitchE * Duration;
        PitchIntegrator += InstancePitchIntegrator;
        PitchIout = PitchIntegrator * Ki;
        //Differential
        PitchDout = ((PitchE - PreviousPitchE) / Duration) * Kd;
        //Sum
        PitchOut = PitchPout + PitchIout + PitchDout;
        //Pitch ---- End

        TargetMotorSpeed[(int)Motors.Motor.Left] += RollOut;
        TargetMotorSpeed[(int)Motors.Motor.Right] -= RollOut;
        TargetMotorSpeed[(int)Motors.Motor.Left] += PitchOut;// / 2;
        TargetMotorSpeed[(int)Motors.Motor.Right] += PitchOut;// / 2;
        TargetMotorSpeed[(int)Motors.Motor.Rear] -= PitchOut;

        ServoAngle = rxYaw + 15;

        //Remember the last errors (not the raw IMU values) for the next D term
        PreviousRollE = RollE;
        PreviousPitchE = PitchE;
    }
    FirstRun = false;

    return new Single[] {
        (Single)TargetMotorSpeed[(int)TriRot.LeftMotor],
        (Single)TargetMotorSpeed[(int)TriRot.RightMotor],
        (Single)TargetMotorSpeed[(int)TriRot.RearMotor],
        (Single)ServoAngle
    };
}
Edit: I found that I had two bugs in my code above (fixed now). I was integrating and differentiating with the last IMU values instead of the last error values. That got rid of the runaway situation completely. The only problem now is that it seems a bit slow: when I perturb the system, it responds very quickly and stops the disturbance from growing, but it takes a long time to get back to the setpoint (0), about 10 seconds or more. Is this now just down to tuning the PID? I'll give the suggestions below a go and let you know if any of them make a difference.
One question I have is:
since this is a .NET board, I don't want to bank on any kind of accurate timing, so instead of trying to work out at what frequency that method executes, surely it is better to measure the actual elapsed time and factor it into the equations, or am I misunderstanding something?