Rx.NET TestScheduler and Windowing not doing what I expected - system.reactive

I have been trying to learn ReactiveUI + Rx.NET lately... I love them both and they are quite mind-bending... I have been reading 'Programming Reactive Extensions and LINQ', and it includes this code snippet (modified so that it uses the latest classes/methods):
var sched = new TestScheduler();
var input = sched.CreateColdObservable(
    OnNext(205, 1),
    OnNext(305, 10),
    OnNext(405, 100),
    OnNext(505, 1000),
    OnNext(605, 10000),
    OnCompleted<int>(1100));

int i = 0;
var windows = input.Window(
    Observable.Timer(TimeSpan.Zero, TimeSpan.FromMilliseconds(100), sched).Take(7),
    x => Observable.Timer(TimeSpan.FromMilliseconds(50), sched));

windows.Timestamp(sched)
    .Subscribe(obs =>
    {
        int current = ++i;
        Console.WriteLine($"Started Observable {current} at {obs.Timestamp.Millisecond:n0}ms");
        obs.Value.Subscribe(
            item => Console.WriteLine($" {item} at {sched.Now.Millisecond:n0}ms"),
            () => Console.WriteLine($"Ended Observable {current} at {sched.Now.Millisecond:n0}"));
    });

sched.Start();
This is the output:
Started Observable 1 at 0ms
1 at 0ms
10 at 0ms
100 at 0ms
1000 at 0ms
10000 at 0ms
Ended Observable 1 at 50
Started Observable 2 at 100ms
Ended Observable 2 at 150
Started Observable 3 at 200ms
Ended Observable 3 at 250
Started Observable 4 at 300ms
Ended Observable 4 at 350
Started Observable 5 at 400ms
Ended Observable 5 at 450
Started Observable 6 at 500ms
Ended Observable 6 at 550
Started Observable 7 at 600ms
Ended Observable 7 at 650
And this is the expected output:
Started Observable 1 at 0ms
Ended Observable 1 at 50ms
Started Observable 2 at 100ms
Ended Observable 2 at 150ms
Started Observable 3 at 200ms
1 at 205ms
Ended Observable 3 at 250ms
Started Observable 4 at 300ms
10 at 305ms
Ended Observable 4 at 350ms
Started Observable 5 at 400ms
100 at 405ms
Ended Observable 5 at 450ms
Started Observable 6 at 500ms
1000 at 505ms
Ended Observable 6 at 550ms
Started Observable 7 at 600ms
10000 at 605ms
Ended Observable 7 at 650ms
Any idea why? What have I missed?

I don't know what you have in your OnNext method, but the constructor for Recorded<Notification<T>>, which is what you put into the CreateColdObservable method, takes ticks and not milliseconds as the first argument. So I would try this:
var input = sched.CreateColdObservable(
    OnNext(2050000, 1),
    OnNext(3050000, 10),
    OnNext(4050000, 100),
    OnNext(5050000, 1000),
    OnNext(6050000, 10000),
    OnCompleted<int>(11000000));
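Since one millisecond is 10,000 ticks, 205 ms becomes 2,050,000 ticks, and so on. If you would rather keep writing the marble times in milliseconds, a pair of small helpers can do the conversion. This is only a sketch: OnNextMs and OnCompletedMs are hypothetical names, and it assumes the unqualified OnNext/OnCompleted in your snippet are the usual ReactiveTest factory methods (i.e. the test class derives from ReactiveTest or imports them statically).

// Hypothetical helpers: convert milliseconds to the ticks that
// Recorded<Notification<T>> expects, then call the standard factories.
static Recorded<Notification<T>> OnNextMs<T>(double milliseconds, T value) =>
    OnNext(TimeSpan.FromMilliseconds(milliseconds).Ticks, value);

static Recorded<Notification<T>> OnCompletedMs<T>(double milliseconds) =>
    OnCompleted<T>(TimeSpan.FromMilliseconds(milliseconds).Ticks);

// Usage:
// var input = sched.CreateColdObservable(
//     OnNextMs(205, 1),
//     OnNextMs(305, 10),
//     OnCompletedMs<int>(1100));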

Marking values from the previous N number of days in KDB based on criteria?

Initial Table
company time         value
--------------------------
a       00:00:15.000 100
a       00:00:30.000 100
b       00:01:00.000 100
a       00:01:10.000 100
a       00:01:15.000 100
a       00:01:20.000 300
a       00:01:25.000 100
b       00:01:30.000 400
a       00:01:50.000 100
a       00:02:00.000 100
a       00:03:00.000 200
Let t = 1 hour.
For each row, I would like to look back t time.
Entries falling within t form a time window. I would like to get (max(time window) - min(time window)) / (number of events).
For example, if it is 12:00 now and there are a total of five events at 12:00, 11:50, 11:40, 11:30 and 10:30, four of which fall within the window t, i.e. 12:00, 11:50, 11:40 and 11:30, the result will be (12:00 - 11:30) / 4.
Additionally, the window should only account for rows with the same value and company name.
Resultant Table
company time         value x
----------------------------
a       00:00:15.000 100   0 (First event of company A).
a       00:00:30.000 100   15/2 = 7.5 (0:30 - 0:15 / 2 events).
b       00:01:00.000 100   0 (First event of company B).
a       00:01:10.000 100   55/3 = 18.33 (1:10 - 0:15 / 3 events).
a       00:01:15.000 100   60/4 = 15 (1:15 - 0:15 / 4 events).
a       00:01:20.000 300   0 (Different value).
a       00:01:25.000 100   55/4 = 13.75 (01:25 - 0:30 / 4 events).
b       00:01:30.000 400   0 (Different value and company).
a       00:01:50.000 100   40/4 = 10 (01:50 - 01:10 / 4 events).
a       00:02:00.000 100   50/5 = 10 (02:00 - 01:10 / 5 events).
a       00:03:00.000 200   0 (Different value).
Any help will be greatly appreciated. If it helps, I asked a similar question, which worked splendidly: Sum values from the previous N number of days in KDB?
Table Query
t:([] company:`a`a`b`a`a`a`a`b`a`a`a; time: 00:00:15.000 00:00:30.000 00:01:00.000 00:01:10.000 00:01:15.000 00:01:20.000 00:01:25.000 00:01:30.000 00:01:50.000 00:02:00.000 00:03:00.000; v: 100 100 100 100 100 300 100 400 100 100 200)
You may wish to use the following:
q)update x:((time-time[time binr time-01:00:00])%60000)%count each v where each time within/:flip(time-01:00:00;time) by company,v from t
company time         v   x
---------------------------------
a       00:15:00.000 100 0
a       00:30:00.000 100 7.5
b       01:00:00.000 100 0
a       01:10:00.000 100 18.33333
a       01:15:00.000 100 15
a       01:20:00.000 300 0
a       01:25:00.000 100 13.75
b       01:30:00.000 400 0
a       01:50:00.000 100 10
a       02:00:00.000 100 10
a       03:00:00.000 200 0
It uses time binr time-01:00:00 to get, for each time, the index of the earliest time within the previous hour.
Then (time-time[time binr time-01:00:00])%60000 gives the respective time range (i.e., time - min time) for each time in minutes.
count each v where each time within/:flip(time-01:00:00;time) gives the number of rows within this range.
Dividing the two, and grouping with by company,v, applies it all only to rows that have the same company and v values.
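As a small illustration of those two building blocks on a toy sorted vector (minute values here, purely for brevity; not the table above):

q)x:00:15 00:30 01:10 01:25
q)x binr x-01:00            / index of the earliest entry within the last hour of each entry
0 0 0 1
q)x within/:flip(x-01:00;x) / for each entry, which rows fall inside its 1-hour window
1000b
1100b
1110b
0111b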
Hope this helps.
Kevin
If your table is ordered by time, then the solution below will give you the required result. If it is not already ordered, you can sort it by time using xasc.
I have also modified the table to have time with different hour values.
q) t:([] company:`a`a`b`a`a`a`a`b`a`a`a; time: 00:15:00.000 00:30:00.000 01:00:00.000 01:10:00.000 01:15:00.000 01:20:00.000 01:25:00.000 01:30:00.000 01:50:00.000 02:00:00.000 03:00:00.000; v: 100 100 100 100 100 300 100 400 100 100 200)
q) f:{(`int$x-x i) % 60000*1+til[count x]-i:x binr x-01:00:00}
q) update res:f time by company,v from t
Output
company time         v   res
---------------------------------
a       00:15:00.000 100 0
a       00:30:00.000 100 7.5
b       01:00:00.000 100 0
a       01:10:00.000 100 18.33333
a       01:15:00.000 100 15
a       01:20:00.000 300 0
a       01:25:00.000 100 13.75
b       01:30:00.000 400 0
a       01:50:00.000 100 10
a       02:00:00.000 100 10
a       03:00:00.000 200 0
You can modify the function f to change the time window value, or change f to accept it as an input parameter.
Explanation:
We pass the time vector, grouped by company and value, to the function f. It deducts 1 hour from each time value and then uses binr to get the index of the first time entry that falls within the 1-hour window:
q) i:x binr x-01:00:00
0 0 0 0 1 2 2
After that, it uses those indexes to calculate a per-row count. The count is multiplied by 60000 because the time differences, once cast to int, are in milliseconds:
q) 60000*1+til[count x]-i
60000 120000 180000 240000 240000 240000 300000
Finally, we subtract the min time from the max time for each row and divide by the counts above. Since the time vector is ordered (ascending), the input time itself serves as the max value, and the min values are at the indexes referred to by i:
q) (`int$x-x i) % 60000*1+til[count x]-i
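For the company-a, value-100 group used in the intermediate outputs above, that last expression works out to exactly the res values shown in the output table:

0 7.5 18.33333 15 13.75 10 10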

How does the CombineLatest reactive operator work?

I have run the following code snippet inside LINQPad:
$"[{(DateTime.Now.ToString("HH:mm:ss.fff"))}] 0 0".Dump();
Observable
    .Interval(TimeSpan.FromSeconds(3))
    .CombineLatest(Observable.Interval(TimeSpan.FromSeconds(10)),
                   (x, y) => $"[{(DateTime.Now.ToString("HH:mm:ss.fff"))}] {x} {y}")
    .Do(Console.WriteLine).Wait();
Here is the result I got:
[23:38:40.111] 0 0
[23:38:50.183] 2 0
[23:38:52.180] 3 0
[23:38:55.196] 4 0
[23:38:58.197] 5 0
[23:39:00.181] 5 1
[23:39:01.198] 6 1
[23:39:04.198] 7 1
[23:39:07.210] 8 1
[23:39:10.196] 8 2
[23:39:10.211] 9 2
[23:39:13.211] 10 2
[23:39:16.211] 11 2
[23:39:19.212] 12 2
[23:39:20.197] 12 3
[23:39:22.227] 13 3
[23:39:25.228] 14 3
[23:39:28.229] 15 3
[23:39:30.196] 15 4
[23:39:31.241] 16 4
[23:39:34.242] 17 4
I am unable to explain the beginning of this sequence:
Why is the first computed value 2 0?
Why was 2 0 output 10 seconds after the start?
From: http://reactivex.io/documentation/operators/combinelatest.html
CombineLatest
when an item is emitted by either of two Observables, combine the latest item emitted by each Observable via a specified function and emit items based on the results of this function
Maybe this modified code will help you understand what's happening:
$"[{(DateTime.Now.ToString("HH:mm:ss.fff"))}] 0 0".Dump();
Observable.Interval(TimeSpan.FromSeconds(3))
.Do(x => $"[{(DateTime.Now.ToString("HH:mm:ss.fff"))}] {x} _".Dump())
.CombineLatest(
Observable.Interval(TimeSpan.FromSeconds(10))
.Do(y => $"[{(DateTime.Now.ToString("HH:mm:ss.fff"))}] _ {y}".Dump()),
(x, y) => $"[{(DateTime.Now.ToString("HH:mm:ss.fff"))}] {x} {y}")
.Do(s => s.Dump())
.Wait();
Nothing is emitted from CombineLatest until there is at least one message from each side. In your case that happens 10 seconds after the start, when the first message arrives from the 10-second observable. By then three messages from the 3-second observable have come out, so the third one (value 2, since Interval counts from 0) is emitted, paired with the first message of the 10-second one (value 0), which is why the first combined value is 2 0.
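Here is a minimal sketch of that gating behaviour, using subjects in place of the two timers (the pushed values are arbitrary, chosen only to mirror the question's timeline; requires System.Reactive.Linq and System.Reactive.Subjects):

var left = new Subject<long>();   // stands in for the 3-second Interval
var right = new Subject<long>();  // stands in for the 10-second Interval

left.CombineLatest(right, (x, y) => $"{x} {y}")
    .Subscribe(Console.WriteLine);

left.OnNext(0);   // nothing printed: right has not produced anything yet
left.OnNext(1);   // still nothing
left.OnNext(2);   // still nothing
right.OnNext(0);  // prints "2 0" - the latest value from each side
left.OnNext(3);   // prints "3 0"
right.OnNext(1);  // prints "3 1"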

Simulink: Creating a repeating irregular square wave

I want to generate a square wave to represent the different uptimes of a lighting installation over a year.
The schedule over a week is the following:
Mon-Thu: 06.00-20.00
Fri: 06.00-18.00
Sat: no uptime
Sun: no uptime
So my wave should repeat every 168 hours (one week) and look like this:
Time     Signal
0-6      0
6-20     1
20-30    0
30-44    1
44-54    0
54-68    1
68-78    0
78-92    1
92-102   0
102-114  1
114-168  0
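For reference, that weekly pattern can also be written as a function of the hour within the week. A rough MATLAB sketch (weeklySignal is just an illustrative name, e.g. for use inside a MATLAB Function block fed by a Clock):

% Rough sketch: returns 1 when the installation is on, 0 otherwise,
% for a scalar simulation time t in hours. The on-windows follow the
% weekly schedule above (Mon-Thu 06:00-20:00, Fri 06:00-18:00).
function s = weeklySignal(t)
    h    = mod(t, 168);                       % hour within the current week
    onLo = [6 30 54 78 102];                  % window starts (06:00 Mon-Fri)
    onHi = [20 44 68 92 114];                 % window ends (20:00 Mon-Thu, 18:00 Fri)
    s    = double(any(h >= onLo & h < onHi)); % 1 inside any on-window
end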
I've tried the Repeating Sequence block using the following:
Time values: [0 6 6.001 20 20.001 30 30.001 44 44.001 54 54.001 68 68.001 78 78.001 92 92.001 102 102.001 114 114.001 168]
Output values: [0 repmat([0 1 1 0],1,5) 0]
But since I'm simulating over 8760 hours (a year), it seems that the step size is messing things up.
Is there a better or cleaner way to make this work?
Thanks a bunch.

beego postgresql maximum db connections

I'm trying to make a simple API application using beego. During a stress test there was an unexpected problem: up to ~16400 requests everything executes at fantastic speed, but after ~16400 requests almost everything stops and only 1-2 requests per second get through. I have a feeling that beego cannot allocate a connection to the database. I tried to change the maxIdle and maxConn parameters, but it had no effect.
UPD: the same problem occurs with other databases.
MainController:
package controllers

import (
    models "github.com/Hepri/taxi/models"
    "github.com/astaxie/beego"
    "github.com/astaxie/beego/orm"
)

type MainController struct {
    beego.Controller
}

func (c *MainController) Get() {
    o := orm.NewOrm()
    app := models.ApiApp{}
    err := o.Read(&app)
    if err == orm.ErrMissPK {
        // do nothing
    }
    c.ServeJson()
}
Model:
package models

const (
    CompanyAccessTypeAll      = 1
    CompanyAccessTypeSpecific = 2
)

type ApiApp struct {
    Id    int    `orm:"auto"`
    Token string `orm:"size(100)"`
}

func (a *ApiApp) TableName() string {
    return "api_apps"
}
main.go:
package main

import (
    models "github.com/Hepri/taxi/models"
    _ "github.com/Hepri/taxi/routers"
    "github.com/astaxie/beego"
    "github.com/astaxie/beego/orm"
    _ "github.com/lib/pq"
)

func main() {
    orm.RegisterDriver("postgres", orm.DR_Postgres)
    orm.RegisterDataBase("default", "postgres", "user=test password=123456 dbname=test sslmode=disable")
    orm.RegisterModel(new(models.ApiApp))
    beego.EnableAdmin = true
    orm.RunCommand()
    beego.Run()
}
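For reference, this is how connection-pool limits are usually wired into main() with the beego 1.4.x ORM. It is only a sketch (the question already notes that changing them had no visible effect): the values 30/30 are arbitrary, and SetMaxOpenConns assumes Go 1.2+ and a beego version that exposes it.

// Imports as in main.go above.
func main() {
    orm.RegisterDriver("postgres", orm.DR_Postgres)

    // RegisterDataBase accepts optional maxIdle and maxConn arguments.
    orm.RegisterDataBase("default", "postgres",
        "user=test password=123456 dbname=test sslmode=disable", 30, 30)

    // The limits can also be adjusted after registration.
    orm.SetMaxIdleConns("default", 30)
    orm.SetMaxOpenConns("default", 30)

    orm.RegisterModel(new(models.ApiApp))
    beego.EnableAdmin = true
    orm.RunCommand()
    beego.Run()
}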
Before reaching ~16400:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 3.844 seconds
Complete requests: 16396
Failed requests: 0
Write errors: 0
Total transferred: 2492192 bytes
HTML transferred: 65584 bytes
Requests per second: 4264.91 [#/sec] (mean)
Time per request: 2.345 [ms] (mean)
Time per request: 0.234 [ms] (mean, across all concurrent requests)
Transfer rate: 633.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.2 0 275
Processing: 0 2 10.9 1 370
Waiting: 0 1 8.6 1 370
Total: 0 2 11.1 2 370
Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 2
95% 3
98% 3
99% 4
100% 370 (longest request)
After reaching ~16400:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 15.534 seconds
Complete requests: 16392
Failed requests: 0
Write errors: 0
Total transferred: 2491584 bytes
HTML transferred: 65568 bytes
Requests per second: 1055.22 [#/sec] (mean)
Time per request: 9.477 [ms] (mean)
Time per request: 0.948 [ms] (mean, across all concurrent requests)
Transfer rate: 156.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 11
Processing: 0 2 16.7 1 614
Waiting: 0 1 15.7 1 614
Total: 0 2 16.7 1 614
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 2
80% 2
90% 2
95% 2
98% 3
99% 3
100% 614 (longest request)
The same picture even after 30 seconds:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 25.585 seconds
Complete requests: 16391
Failed requests: 0
Write errors: 0
Total transferred: 2491432 bytes
HTML transferred: 65564 bytes
Requests per second: 640.65 [#/sec] (mean)
Time per request: 15.609 [ms] (mean)
Time per request: 1.561 [ms] (mean, across all concurrent requests)
Transfer rate: 95.10 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 10.1 0 617
Processing: 0 2 16.2 1 598
Waiting: 0 1 11.1 1 597
Total: 0 2 19.1 1 618
Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 2
95% 2
98% 3
99% 3
100% 618 (longest request)

uwsgi long timeouts

I am using Ubuntu 12, nginx, uWSGI 1.9 over a socket, and Django 1.5.
Config:
[uwsgi]
base_path = /home/someuser/web/
module = server.manage_uwsgi
uid = www-data
gid = www-data
virtualenv = /home/someuser
master = true
vacuum = true
harakiri = 20
harakiri-verbose = true
log-x-forwarded-for = true
profiler = true
no-orphans = true
max-requests = 10000
cpu-affinity = 1
workers = 4
reload-on-as = 512
listen = 3000
Client tests from Windows 7:
C:\Users\user>C:\AppServ\Apache2.2\bin\ab.exe -c 255 -n 5000 http://www.someweb.com/about/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking www.someweb.com (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Finished 5000 requests
Server Software: nginx
Server Hostname: www.someweb.com
Server Port: 80
Document Path: /about/
Document Length: 1881 bytes
Concurrency Level: 255
Time taken for tests: 66.669814 seconds
Complete requests: 5000
Failed requests: 1
(Connect: 1, Length: 0, Exceptions: 0)
Write errors: 0
Total transferred: 10285000 bytes
HTML transferred: 9405000 bytes
Requests per second: 75.00 [#/sec] (mean)
Time per request: 3400.161 [ms] (mean)
Time per request: 13.334 [ms] (mean, across all concurrent requests)
Transfer rate: 150.64 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 8 207.8 1 9007
Processing: 10 3380 11480.5 440 54421
Waiting: 6 1060 3396.5 271 48424
Total: 11 3389 11498.5 441 54423
Percentage of the requests served within a certain time (ms)
50% 441
66% 466
75% 499
80% 519
90% 3415
95% 36440
98% 54407
99% 54413
100% 54423 (longest request)
I have set the following options too:
echo 3000 > /proc/sys/net/core/netdev_max_backlog
echo 3000 > /proc/sys/net/core/somaxconn
So,
1) The first 3000 requests are super fast. I see progress in ab and in the uWSGI request logs:
[pid: 5056|app: 0|req: 518/4997] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5052|app: 0|req: 512/4998] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5054|app: 0|req: 353/4999] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
I don't have any broken pipes or worker respawns.
2) The next requests run very slowly or time out. It looks like some buffer becomes full and I have to wait until it empties.
3) Some buffer becomes empty.
4) ~500 requests are processed super fast.
5) Some timeout.
6) see Nr. 4
7) see Nr. 5
8) see Nr. 4
9) see Nr. 5
....
....
I need your help.
Check with netstat and dmesg. You have probably exhausted ephemeral ports or filled the conntrack table.
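A few commands along those lines (a rough sketch; the conntrack path only exists when the nf_conntrack module is loaded):

# Count connection states; a large pile of TIME_WAIT suggests ephemeral port exhaustion.
netstat -ant | awk '{print $6}' | sort | uniq -c | sort -rn

# Look for "table full, dropping packet" messages from conntrack.
dmesg | grep -i conntrack

# Current ephemeral port range and conntrack limit.
cat /proc/sys/net/ipv4/ip_local_port_range
cat /proc/sys/net/netfilter/nf_conntrack_max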