I faced this issue while running all the tests together. The summary says 11 KOs, but there is no detail about the KOs in the list, so I cannot tell which requests are causing the failures. I also checked the logs on the machines, but they say all tests are passing.
Here is one of the machine logs. I run these tests on 3 machines, and they all show the same thing.
================================================================================
---- Global Information --------------------------------------------------------
> request count 1142 (OK=1142 KO=0 )
> min response time 54 (OK=54 KO=- )
> max response time 1426 (OK=1426 KO=- )
> mean response time 186 (OK=186 KO=- )
> std deviation 170 (OK=170 KO=- )
> response time 50th percentile 116 (OK=116 KO=- )
> response time 75th percentile 271 (OK=271 KO=- )
> response time 95th percentile 461 (OK=461 KO=- )
> response time 99th percentile 782 (OK=782 KO=- )
> mean requests/sec 12.413 (OK=12.413 KO=- )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms 1132 ( 99%)
> 800 ms < t < 1200 ms 7 ( 1%)
> t > 1200 ms 3 ( 0%)
> failed 0 ( 0%)
================================================================================
I'm fairly new to DAX and Power BI, and I need to translate my SQL IF statement into the equivalent Power BI syntax to achieve the output I want.
SQL code I want to translate:
IF (Payment.payment>0) AND (Account.PV = Account.GV) THEN 1 ELSE 0
I want to make a calculated column on the Payment table that returns 1 or 0, so that I can use it to filter all the records that meet my condition.
account_id is the column that relates these two tables.
Here is sample data for reference: Account table
account_id   pv     gv     due_date
-----------------------------------
123          100    200    08/08/2022
124          200    200    08/09/2022
125          300    800    08/10/2022
126          400    670    08/11/2022
127          500    500    08/12/2022
128          600    600    08/13/2022
129          700    1000   08/14/2022
130          800    760    08/15/2022
131          900    900    08/16/2022
132          1000   1000   08/17/2022
133          1100   2300   08/09/2022
Here is sample data for reference: Payment table
payment_id   payment_number   payment   payment_date   account_id   _test
--------------------------------------------------------------------------
101          554321           1000      03/01/2022     123          0
102          554322           1200      03/21/2022     124          1
103          554322           1100      04/28/2022     124          1
104          554323           2500      05/04/2022     131          1
105          554324           3000      05/14/2022     133          0
106          554325           3000      05/14/2022     132          1
107          554322           1200      03/21/2022     124          1
108          554323           2500      04/05/2022     131          1
109          554328           3100      04/05/2022     128          0
Here are the formulas I tried, but I can't find the correct way to do it and return the output that I need:
_test = IF(Payments[payment]>0 && RELATED('Account'[PV])=RELATED('Account'[GV]), 1)
_test = IF(AND(Payments[payment])>0, RELATED('Account'[PV])=RELATED('Account'[GV])),1,0)
Any suggestion is much appreciated. Please recommend which syntax/function should be used to achieve this output, or what workaround could be used other than an IF statement.
The problem you are facing with RELATED is that RELATED only works from the one side to the many side.
That means that if you bring the axis from the one side and perform a calculation on the many side, the filter works perfectly. The filter direction of the relationship in your data model tells you that, under normal circumstances, you should bring your axis from Account, and whatever calculation you perform on the Payment table will work out.
But you are doing exactly the reverse: you are bringing the axis from Payment and hoping for RELATED to work. It won't, because the filter direction points the other way.
However, DAX is much more dynamic than that. If, for whatever reason, you need to bring the axis from the many side while still filtering the one side, you can define a reverse filter direction on the fly (because DAX is magical), without changing anything in the data model, by using CROSSFILTER. With CROSSFILTER you customize the filter direction like this:
CROSSFILTER ( <LeftTableColumnName>, <RightTableColumnName>, <direction> )
This is how (with your given dataset):
Column =
VAR cond1 =
    CALCULATE (
        MAX ( Account[pv] ),
        CROSSFILTER ( Payment[account_id], Account[account_id], BOTH )
    )
        - CALCULATE (
            MAX ( Account[gv] ),
            CROSSFILTER ( Payment[account_id], Account[account_id], BOTH )
        )
RETURN
    IF ( cond1 == 0 && Payment[payment] > 0, 1, 0 )
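As a side note, a LOOKUPVALUE-based sketch (assuming the column names from the sample data) can fetch the one-side values directly, without touching the filter direction:
_test =
VAR pv = LOOKUPVALUE ( Account[pv], Account[account_id], Payment[account_id] )
VAR gv = LOOKUPVALUE ( Account[gv], Account[account_id], Payment[account_id] )
RETURN
    // 1 when the payment is positive and pv matches gv, otherwise 0
    IF ( Payment[payment] > 0 && pv = gv, 1, 0 )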
Initial Table
company time value
-------------------------
a 00:00:15.000 100
a 00:00:30.000 100
b 00:01:00.000 100
a 00:01:10.000 100
a 00:01:15.000 100
a 00:01:20.000 300
a 00:01:25.000 100
b 00:01:30.000 400
a 00:01:50.000 100
a 00:02:00.000 100
a 00:03:00.000 200
Let t = 1 hour.
For each row, I would like to look back t time.
Entries falling within t form a time window. For each window I would like to compute (max(time window) - min(time window)) / number of events.
For example, if it is 12:00 now and there are a total of five events, 12:00, 11:50, 11:40, 11:30, 10:30, four of which fall in the window t, i.e. 12:00, 11:50, 11:40, 11:30, the result will be (12:00 - 11:30) / 4.
Additionally, the window should only account for rows with the same value and company name.
Resultant Table
company time value x
--------------------------------
a 00:00:15.000 100 0 (First event of company A).
a 00:00:30.000 100 15/2 = 7.5 ((0:30 - 0:15) / 2 events).
b 00:01:00.000 100 0 (First event of company B).
a 00:01:10.000 100 55/3 = 18.33 ((1:10 - 0:15) / 3 events).
a 00:01:15.000 100 60/4 = 15 ((1:15 - 0:15) / 4 events).
a 00:01:20.000 300 0 (Different value).
a 00:01:25.000 100 55/4 = 13.75 ((1:25 - 0:30) / 4 events).
b 00:01:30.000 400 0 (Different value and company).
a 00:01:50.000 100 40/4 = 10 ((1:50 - 1:10) / 4 events).
a 00:02:00.000 100 50/5 = 10 ((2:00 - 1:10) / 5 events).
a 00:03:00.000 200 0 (Different value).
Any help will be greatly appreciated. If it helps, I asked a similar question, which worked splendidly: Sum values from the previous N number of days in KDB?
Table Query
([] company:`a`a`b`a`a`a`a`b`a`a`a; time: 00:00:15.000 00:00:30.000 00:01:00.000 00:01:10.000 00:01:15.000 00:01:20.000 00:01:25.000 00:01:30.000 00:01:50.000 00:02:00.000 00:03:00.000; v: 100 100 100 100 100 300 100 400 100 100 200)
You may wish to use the following:
q)update x:((time-time[time binr time-01:00:00])%60000)%count each v where each time within/:flip(time-01:00:00;time) by company,v from t
company time v x
---------------------------------
a 00:15:00.000 100 0
a 00:30:00.000 100 7.5
b 01:00:00.000 100 0
a 01:10:00.000 100 18.33333
a 01:15:00.000 100 15
a 01:20:00.000 300 0
a 01:25:00.000 100 13.75
b 01:30:00.000 400 0
a 01:50:00.000 100 10
a 02:00:00.000 100 10
a 03:00:00.000 200 0
It uses time binr time-01:00:00 to get, for each time, the index of the earliest time within the previous 1 hour.
Then (time-time[time binr time-01:00:00])%60000 gives the respective time range (i.e., time - min time) for each time in minutes.
count each v where each time within/:flip(time-01:00:00;time) gives the number of rows within this range.
Dividing the two, and applying it all by company,v, restricts the calculation to rows that have the same company and v values.
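To see the pieces in isolation, here is a sketch on a toy vector (a hypothetical single company,v group, using the minute-scaled times from the output above):
q)x:00:15:00.000 00:30:00.000 01:10:00.000 01:15:00.000
q)x binr x-01:00:00                                     / index of the earliest time in each 1h window
0 0 0 0
q)(x-x x binr x-01:00:00)%60000                         / window span in minutes
0 15 55 60f
q)count each x where each x within/:flip(x-01:00:00;x)  / events in each window
1 2 3 4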
Hope this helps.
Kevin
If your table is ordered by time, then the solution below will give you the required result. If it is not already ordered, you can sort it by time using xasc.
I have also modified the table so that the times span different hour values.
q) t:([] company:`a`a`b`a`a`a`a`b`a`a`a; time: 00:15:00.000 00:30:00.000 01:00:00.000 01:10:00.000 01:15:00.000 01:20:00.000 01:25:00.000 01:30:00.000 01:50:00.000 02:00:00.000 03:00:00.000; v: 100 100 100 100 100 300 100 400 100 100 200)
q) f:{(`int$x-x i) % 60000*1+til[count x]-i:x binr x-01:00:00}
q) update res:f time by company,v from t
Output
company time v res
---------------------------------
a 00:15:00.000 100 0
a 00:30:00.000 100 7.5
b 01:00:00.000 100 0
a 01:10:00.000 100 18.33333
a 01:15:00.000 100 15
a 01:20:00.000 300 0
a 01:25:00.000 100 13.75
b 01:30:00.000 400 0
a 01:50:00.000 100 10
a 02:00:00.000 100 10
a 03:00:00.000 200 0
You can modify the function f to change the time window, or change f to accept the window as an input parameter, as sketched below.
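For example, a sketch of f taking the window as a parameter (here w is a hypothetical parameter name):
q) f:{[w;x] (`int$x-x i) % 60000*1+til[count x]-i:x binr x-w}
q) update res:f[01:00:00] time by company,v from t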
Explanation:
We pass the time vector, grouped by company and value, to the function f. It deducts 1 hour from each time value and then uses binr to find, for each input time, the index of the first time entry within its 1-hour window.
q) i:x binr x-01:00:00
q) i
0 0 0 0 1 2 2
After that, it uses these indexes to calculate the count of events in each window. The count is multiplied by 60000 because the time differences, once cast to int, are in milliseconds.
q) 60000*1+til[count x]-i
60000 120000 180000 240000 240000 240000 300000
Finally, we subtract the min time from the max time for each entry and divide by the counts above. Since the time vector is ordered (ascending), each input time is itself the max of its window, and the min values sit at the indexes given by i.
q) (`int$x-x i) % 60000*1+til[count x]-i
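For the company-a, v=100 group this evaluates to the res values shown in the output above:
0 7.5 18.33333 15 13.75 10 10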
I use Locust, a load testing framework, and the following is the summary of a test result.
Name           # reqs      # fails     Avg     Min     Max  |  Median   req/s
------------------------------------------------------------------------------
GET /sample     10000     0(0.00%)      97      56     349  |      96   761.90
------------------------------------------------------------------------------
Total           10000     0(0.00%)                                      761.90
I guess that req/s means 761.90 requests are processed per second. What about Avg, Min, Max and Median? How should I read these columns?
This performance test took about 15 seconds. I set min_wait = 0 and max_wait = 0.
Looking at the source, those columns appear to refer to the response time: Avg, Min, Max and Median are the average, minimum, maximum and median response times, in milliseconds.
I'm trying to make a simple API application using beego. During a stress test, I ran into an unexpected problem: up to ~16400 requests, everything executes at fantastic speed; after ~16400 requests, almost everything stops, and only 1-2 requests per second get through. I have a feeling that beego cannot allocate a connection to the database. I tried changing the maxIdle and maxConn parameters, but it had no effect.
UPD: the same problem occurs with other databases.
MainController:
package controllers

import (
    models "github.com/Hepri/taxi/models"
    "github.com/astaxie/beego"
    "github.com/astaxie/beego/orm"
)

type MainController struct {
    beego.Controller
}

func (c *MainController) Get() {
    o := orm.NewOrm()
    app := models.ApiApp{}
    err := o.Read(&app)
    if err == orm.ErrMissPK {
        // do nothing
    }
    c.ServeJson()
}
Model:
package models

const (
    CompanyAccessTypeAll      = 1
    CompanyAccessTypeSpecific = 2
)

type ApiApp struct {
    Id    int    `orm:"auto"`
    Token string `orm:"size(100)"`
}

func (a *ApiApp) TableName() string {
    return "api_apps"
}
main.go:
package main

import (
    models "github.com/Hepri/taxi/models"
    _ "github.com/Hepri/taxi/routers"
    "github.com/astaxie/beego"
    "github.com/astaxie/beego/orm"
    _ "github.com/lib/pq"
)

func main() {
    orm.RegisterDriver("postgres", orm.DR_Postgres)
    orm.RegisterDataBase("default", "postgres", "user=test password=123456 dbname=test sslmode=disable")
    orm.RegisterModel(new(models.ApiApp))
    beego.EnableAdmin = true
    orm.RunCommand()
    beego.Run()
}
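For reference, this is roughly how I tried to set the pool parameters (a sketch, assuming the beego 1.4.x orm API, where maxIdle and maxConn wrap database/sql's connection-pool limits; the value 30 is just an example):

// Option 1: pass the limits when registering the database.
orm.RegisterDataBase("default", "postgres",
    "user=test password=123456 dbname=test sslmode=disable",
    30, // maxIdle: idle connections kept in the pool
    30) // maxConn: maximum open connections

// Option 2: set them after registration.
orm.SetMaxIdleConns("default", 30)
orm.SetMaxOpenConns("default", 30)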
Before reaching ~16400 requests:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 3.844 seconds
Complete requests: 16396
Failed requests: 0
Write errors: 0
Total transferred: 2492192 bytes
HTML transferred: 65584 bytes
Requests per second: 4264.91 [#/sec] (mean)
Time per request: 2.345 [ms] (mean)
Time per request: 0.234 [ms] (mean, across all concurrent requests)
Transfer rate: 633.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.2 0 275
Processing: 0 2 10.9 1 370
Waiting: 0 1 8.6 1 370
Total: 0 2 11.1 2 370
Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 2
95% 3
98% 3
99% 4
100% 370 (longest request)
After reaching ~16400 requests:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 15.534 seconds
Complete requests: 16392
Failed requests: 0
Write errors: 0
Total transferred: 2491584 bytes
HTML transferred: 65568 bytes
Requests per second: 1055.22 [#/sec] (mean)
Time per request: 9.477 [ms] (mean)
Time per request: 0.948 [ms] (mean, across all concurrent requests)
Transfer rate: 156.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 11
Processing: 0 2 16.7 1 614
Waiting: 0 1 15.7 1 614
Total: 0 2 16.7 1 614
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 2
80% 2
90% 2
95% 2
98% 3
99% 3
100% 614 (longest request)
The same picture even after 30 seconds:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 25.585 seconds
Complete requests: 16391
Failed requests: 0
Write errors: 0
Total transferred: 2491432 bytes
HTML transferred: 65564 bytes
Requests per second: 640.65 [#/sec] (mean)
Time per request: 15.609 [ms] (mean)
Time per request: 1.561 [ms] (mean, across all concurrent requests)
Transfer rate: 95.10 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 10.1 0 617
Processing: 0 2 16.2 1 598
Waiting: 0 1 11.1 1 597
Total: 0 2 19.1 1 618
Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 2
95% 2
98% 3
99% 3
100% 618 (longest request)
I am using Ubuntu 12, nginx, uWSGI 1.9 with a socket, and Django 1.5.
Config:
[uwsgi]
base_path = /home/someuser/web/
module = server.manage_uwsgi
uid = www-data
gid = www-data
virtualenv = /home/someuser
master = true
vacuum = true
harakiri = 20
harakiri-verbose = true
log-x-forwarded-for = true
profiler = true
no-orphans = true
max-requests = 10000
cpu-affinity = 1
workers = 4
reload-on-as = 512
listen = 3000
Client tests from Windows 7:
C:\Users\user>C:\AppServ\Apache2.2\bin\ab.exe -c 255 -n 5000 http://www.someweb.com/about/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking www.someweb.com (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Finished 5000 requests
Server Software: nginx
Server Hostname: www.someweb.com
Server Port: 80
Document Path: /about/
Document Length: 1881 bytes
Concurrency Level: 255
Time taken for tests: 66.669814 seconds
Complete requests: 5000
Failed requests: 1
(Connect: 1, Length: 0, Exceptions: 0)
Write errors: 0
Total transferred: 10285000 bytes
HTML transferred: 9405000 bytes
Requests per second: 75.00 [#/sec] (mean)
Time per request: 3400.161 [ms] (mean)
Time per request: 13.334 [ms] (mean, across all concurrent requests)
Transfer rate: 150.64 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 8 207.8 1 9007
Processing: 10 3380 11480.5 440 54421
Waiting: 6 1060 3396.5 271 48424
Total: 11 3389 11498.5 441 54423
Percentage of the requests served within a certain time (ms)
50% 441
66% 466
75% 499
80% 519
90% 3415
95% 36440
98% 54407
99% 54413
100% 54423 (longest request)
I have set the following options too:
echo 3000 > /proc/sys/net/core/netdev_max_backlog
echo 3000 > /proc/sys/net/core/somaxconn
So,
1) The first 3000 requests are made super fast. I see progress in ab and in the uWSGI request logs:
[pid: 5056|app: 0|req: 518/4997] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5052|app: 0|req: 512/4998] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5054|app: 0|req: 353/4999] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
I don't have any broken pipes or worker respawns.
2) The next requests run very slowly or time out. It looks like some buffer fills up and I have to wait until it empties.
3) Some buffer becomes empty.
4) ~500 requests are processed super fast.
5) Some timeout.
6) see Nr. 4
7) see Nr. 5
8) see Nr. 4
9) see Nr. 5
...
I need your help.
Check with netstat and dmesg. You have probably exhausted the ephemeral port range or filled the conntrack table.
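A few commands to confirm this (a sketch; the sysctl names assume a stock Linux kernel with netfilter conntrack loaded):

# Count sockets per TCP state; a large pile of TIME_WAIT suggests port exhaustion
netstat -ant | awk '{print $6}' | sort | uniq -c | sort -rn
# Show the usable ephemeral port range
sysctl net.ipv4.ip_local_port_range
# Compare conntrack usage with its limit
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
# "nf_conntrack: table full, dropping packet" here is the telltale sign
dmesg | grep -i conntrack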