I am running a simple SELECT statement in my database, connected from a remote machine using PuTTY. I connect to SQL*Plus from PuTTY and execute the SELECT statement, but I get different response times each time I run the query. Here are my observations.
1) I enabled a 10046 trace. The elapsed time in the trace file is different for each execution of the query.
2) There is a huge difference between the elapsed time displayed on the console and the one in the trace file. From PuTTY, the elapsed time shows approx. 2-3 secs, whereas the elapsed time logged in the trace shows about 1 sec. What is the difference between the elapsed time on the PuTTY console and the one in the trace file?
PuTTY console output:
select * from WWSH_TEST.T01DATA_LINK
489043 rows selected.
Elapsed: 00:02:57.16
Tracefile output:
select *
from
WWSH_TEST.T01DATA_LINK
call      count       cpu    elapsed       disk      query    current       rows
-------  ------  --------  ---------  ---------  ---------  ---------  ---------
Parse         1      0.00       0.00          0          0          0          0
Execute       1      0.00       0.00          0          0          0          0
Fetch     32604      0.38       2.32      10706      42576          0     489043
From the PuTTY console the elapsed time shows as 2.57 secs, whereas in the trace file the elapsed time is 2.32. Why do we see this difference?
Moreover, when I run this same SQL statement repeatedly, I see different elapsed times in the trace files (ranging from 2.3 to 2.5 secs). What could be the reason for this difference in response time when nothing has changed in the database at all?
Database Version: 11.2.0.3.0
It looks like the time difference is the "client processing": basically the time SQL*Plus spends formatting and displaying the output.
Also, it looks like your array size is 15 (489043 rows returned in 32604 fetch calls works out to roughly 15 rows per fetch). Try running with a larger array size, such as 512 or 1024.
The mechanism to set the array size will vary from client to client. In sqlplus:
set arraysize 1024
The fetch time does not include network time, but if you use a level 8 trace
alter session set events '10046 trace name context forever, level 8';
this will give you wait events. The one to look for is SQL*Net message from client, which is basically the time the database spends waiting to be asked to do something, such as fetch the next set of rows.
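Putting the suggestions together, a minimal SQL*Plus sketch (the table name is the one from your question, the settings are the ones discussed above):
-- fetch more rows per round trip (the SQL*Plus default is 15)
set arraysize 1024
-- show the elapsed time on the console
set timing on
-- level 8 adds wait events (such as SQL*Net message from client) to the trace
alter session set events '10046 trace name context forever, level 8';
select * from WWSH_TEST.T01DATA_LINK;
-- switch the trace off again
alter session set events '10046 trace name context off';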
I have 2 queries in a Grafana panel.
I want to run Query A every 5 min.
I want to run Query B every 10 min, so I can check the value difference between the two queries using a transform.
How can I set the query interval? I know I can change the scrape interval, but my goal here is to check pending messages and trigger an alert if the count doesn't change in 10 min. I am trying to get a count at the 1st minute, get the count again at the 10th minute, check the difference using a transform, and trigger an alert if there is no change (messages are not getting processed).
Using Grafana 7.
Thanks!
I am trying to optimize my NetLogo model using the Profiler extension. I get the following output [excerpt]:
Profiler
BEGIN PROFILING DUMP
Sorted by Exclusive Time
Name                     Calls   Incl T(ms)   Excl T(ms)   Excl/calls
COMPLETE-COOKING         38741        0.711     4480.369        0.116
GET-RECIPE               10701     2618.651     2618.651        0.245
GET-EQUIPMENT            38741     1204.293     1204.293        0.031
SELECT-RECIPE-AT-TICK      990     9533.460      470.269        0.475
GIVE-RECIPE-REVIEW       10701        4.294      449.523        0.042
COMPLETE-COOKING and GIVE-RECIPE-REVIEW have a greater exclusive than inclusive time.
How can this be? And if it is an error, how do I fix it?
I'm trying to get some (application) performance data from a PostgreSQL database by looking at two dates in a record, subtracting them and getting the result as a number of seconds with fractions.
It seems I'm able to get a timestamp which includes hours, minutes, seconds, and milliseconds; or just the seconds (and fractions) without e.g. the minutes; or a UNIX timestamp which is only seconds (without the milliseconds). Is there a convenient way to convert a timestamp to seconds+millis?
Example:
SELECT extract(SECOND FROM TIME '00:01:02.1234') as secs;
secs
--------
2.1234
I was hoping to get 62.1234 (that's 1 * 60 + 02.1234) as the return value for overall seconds, not just the "seconds component" from the time value.
Is there an existing function to do that, or do I have to EXTRACT each part, multiply, and add each component together?
Use EPOCH:
SELECT extract(EPOCH FROM TIME '00:01:02.1234') as secs;
  secs
---------
 62.1234
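The same idea works for your original case of subtracting two timestamp columns, since subtracting timestamps yields an interval. A small sketch with made-up table and column names:
-- requests, started_at and finished_at are hypothetical names for illustration
SELECT extract(EPOCH FROM finished_at - started_at) AS duration_secs
FROM requests;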
Alright... I have a query here that calculates server uptime, and sometimes this uptime goes over 24 hours. I basically use
select "start", "end", "end" - "start" as real_uptime from uptime
start and end being timestamp with time zone fields
Also, I have a stored function that sums all the real_uptime for a period and returns them...
However, I dumped the start and end values into a spreadsheet and, doing the calculations there, the results differed from what the stored function produced (the stored function returned lower values). I think it ignores the part of real_uptime bigger than one day: PostgreSQL shows such values as "x days hh:mm:ss", and I want them as hh:mm:ss (representing the total uptime in hours, without splitting off the full days).
Any ideas?
In PostgreSQL you can extract the epoch from timestamps: the seconds elapsed since 1970-01-01 00:00:00. With those values you can easily construct your desired output:
SELECT "start", "end", format('%s:%s:%s',
(up_secs / 3600)::int::text, -- hours
lpad(((up_secs % 3600) / 60)::int::text, 2, '0'), -- minutes
lpad((up_secs % 60)::text, 2, '0')) AS uptime -- seconds
FROM (
SELECT "start", "end",
(extract(epoch from "end") - extract(epoch from "start")) AS up_secs
FROM uptime) sub;
The lpad() function makes the minutes and seconds always two characters wide.
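For the summing case, a sketch reusing the uptime table and the quoted "start"/"end" columns from above: add up the per-row seconds first, then the total can be rendered with the same format() expression.
-- total uptime in seconds over all rows; cast to numeric so it can be fed
-- through the format() expression above
SELECT sum(extract(epoch from "end") - extract(epoch from "start"))::numeric AS total_secs
FROM uptime;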
Premises
REST API in Scala / Spray
Simple method that always returns OK
I'm trying to achieve an average of 20k requests per second
Both machines (tester & tested) are well configured: EC2 dedicated servers, each one running only its API or Gatling, with these settings:
sudo sysctl -w net.ipv4.ip_local_port_range="1025 65535"
echo 300000 | sudo tee /proc/sys/fs/nr_open
echo 300000 | sudo tee /proc/sys/fs/file-max
/etc/security/limits.conf and ulimit -a --> 65535
This is my simple test file scenario, just 1 user:
setUp(scn.inject(constantUsersPerSec(1) during(60 seconds)))
.throttle(
//reachRps(20000) in (60 seconds),
//holdFor(1 minute)
//,
jumpToRps(20000),
holdFor(1 minutes)
)
.protocols(httpConf)
I'm trying to reach 20k rps (maximum) within 60 sec, or jump directly to 20k and hold it there for 1 minute.
These are always my results after executing the Gatling script:
Simulation finished
Parsing log file(s)...
Parsing log file(s) done
Generating reports...
================================================================================
---- Global Information --------------------------------------------------------
> request count 60 (OK=60 KO=0 )
> min response time 0 (OK=0 KO=- )
> max response time 2 (OK=2 KO=- )
> mean response time 1 (OK=1 KO=- )
> std deviation 0 (OK=0 KO=- )
> response time 50th percentile 1 (OK=1 KO=- )
> response time 75th percentile 2 (OK=2 KO=- )
> mean requests/sec 1.017 (OK=1.017 KO=- )
---- Response Time Distribution ------------------------------------------------
> t < 800 ms                                           60 (100%)
> 800 ms < t < 1200 ms                                  0 (  0%)
> t > 1200 ms                                           0 (  0%)
> failed                                                0 (  0%)
I don't understand exactly what these results mean... or perhaps I'm not configuring the right scenario for my goal.
I tried several scenarios:
//setUp(scn.inject(atOnceUsers(20000)).protocols(httpConf))
//setUp(scn.inject(Users(200000).ramp(10)).protocols(httpConf))
//setUp(scn.inject(constantUsersPerSec(20000) during(1 seconds)).protocols(httpConf))
//setUp(scn.inject(constantUsersPerSec(20000) during(1 seconds))).protocols(httpConf)
//setUp(scn.inject(rampUsers(1500) over (60 seconds)))
//setUp(scn.inject(atOnceUsers(50000)))
// .throttle(jumpToRps(50000),
// holdFor(1 minutes))
// .protocols(httpConf)
setUp(scn.inject(constantUsersPerSec(1000) during(30 seconds)))
.throttle(
reachRps(20000) in (30 seconds),
holdFor(1 minute)
//,
//jumpToRps(20000),
//holdFor(1 minutes)
)
.protocols(httpConf)
So, I don't know how to configure my Scala test file to simply get a value like this:
> mean requests/sec 20000 (OK=20000 KO=- )
You're not using throttle right. From the documentation:
You still have to inject users at the scenario level. Throttling tries
to ensure a targeted throughput with the given scenarios and their
injection profiles (number of users and duration). It’s a bottleneck,
ie an upper limit. If you don’t provide enough users, you won’t reach
the throttle. If your injection lasts less than the throttle, your
simulation will simply stop when all the users are done. If your
injection lasts longer than the throttle, the simulation will stop at
the end of the throttle.
You injected 1 user per second for 60 seconds, each sending a single request, which is exactly the 60 requests and ~1 request/sec you see in the report. How can you expect to reach 20000 rps while injecting only 1 user per second?
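In other words, the injection profile has to be able to generate at least 20k requests per second on its own; throttle only caps it. A rough sketch, reusing the scn and httpConf from your simulation and assuming each virtual user sends a single request:
setUp(
  // inject enough users to actually be able to produce ~20k requests per second
  scn.inject(constantUsersPerSec(20000) during (60 seconds))
).throttle(
  reachRps(20000) in (10 seconds), // ramp up to the target throughput
  holdFor(1 minute)                // the throttle is an upper bound, not a load generator
).protocols(httpConf)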