I want to find the slow queries that took 7 seconds or more in the MySQL slow query log.
The MySQL log file is larger than 500 MB, so my script first extracts the slow queries for a given date from the log, and from those I select the queries that took 7 seconds or more. The log output is below.
# Time: 161223 12:40:42
# User@Host: root[root] @ [10.160.15.69]
# Query_time: 5.307732 Lock_time: 0.000061 Rows_sent: 1 Rows_examined: 30334028
use Dbname;
SET timestamp=1482477042;
SELECT PRR.pr_register_request_date BETWEEN STR_TO_DATE( '2015-12-23', '%Y-%m-%d' ) AND STR_TO_DATE( '2016-12-23', '%Y-%m-%d' ) EXISTS (SELECT item_master_id FROM item WHERE IM.item_master_item_code = PRR.pr_register_material_code );
Why not use awk? $3 is the third field of each line (the query time).
Sample data (test.txt):
# Query_time: 9.353543 Lock_time: 0.000036 Rows_sent: 0 Rows_examined: 9091792 use Aarti_Engineering_Purchase; SET timestamp=1482477646; SELECT fieldvalue,I.InquiryID FROM FormMaster FM,FormDetail FD,Inquiry I,InquiryDetails I.InquiryID = ID.InquiryID AND AttributeValue IN ('ProjectCode');
# Query_time: 7.353543 Lock_time: 0.000036 Rows_sent: 0 Rows_examined: 9091792 use Aarti_Engineering_Purchase; SET timestamp=1482477646; SELECT fieldvalue,I.InquiryID FROM FormMaster FM,FormDetail FD,Inquiry I,InquiryDetails I.InquiryID = ID.InquiryID AND AttributeValue IN ('ProjectCode');
# Query_time: 9.353543 Lock_time: 0.000036 Rows_sent: 0 Rows_examined: 9091792 use Aarti_Engineering_Purchase; SET timestamp=1482477646; SELECT fieldvalue,I.InquiryID FROM FormMaster FM,FormDetail FD,Inquiry I,InquiryDetails I.InquiryID = ID.InquiryID AND AttributeValue IN ('ProjectCode');
# Query_time: 2.353543 Lock_time: 0.000036 Rows_sent: 0 Rows_examined: 9091792 use Aarti_Engineering_Purchase; SET timestamp=1482477646; SELECT fieldvalue,I.InquiryID FROM FormMaster FM,FormDetail FD,Inquiry I,InquiryDetails I.InquiryID = ID.InquiryID AND AttributeValue IN ('ProjectCode');
# Query_time: 28.353543 Lock_time: 0.000036 Rows_sent: 0 Rows_examined: 9091792 use Aarti_Engineering_Purchase; SET timestamp=1482477646; SELECT fieldvalue,I.InquiryID FROM FormMaster FM,FormDetail FD,Inquiry I,InquiryDetails I.InquiryID = ID.InquiryID AND AttributeValue IN ('ProjectCode');
and simply:
awk '$3 >= 7' ./test.txt
Replace 7 with whatever threshold you want, and append the matches to a file:
awk '$3 > 7.5' ./test.txt >> ./long_query.txt
You could use grep:
grep -E 'Query_time: ([7-9]|[1-9][0-9]+)\.' file
With sed:
sed -nE '/Query_time: ([7-9]|[1-9][0-9]+)\./p' file
Note that the simpler pattern 'Query_time: [7-9]' would miss queries of 10 seconds or more.
I have a table my_table:
case_id first_created last_paid submitted_time
3456 2021-01-27 2021-01-29 2021-01-26 21:34:36.566023+00:00
7891 2021-08-02 2021-09-16 2022-10-26 19:49:14.135585+00:00
1245 2021-09-13 None 2022-10-31 02:03:59.620348+00:00
9073 None None 2021-09-12 10:25:30.845687+00:00
6891 2021-08-03 2021-09-17 None
I created 2 new variables:
select *,
first_created-coalesce(submitted_time::date) as create_duration,
last_paid-coalesce(submitted_time::date) as paid_duration
from my_table;
The output:
case_id first_created last_paid submitted_time create_duration paid_duration
3456 2021-01-27 2021-01-29 2021-01-26 21:34:36.566023+00:00 1 3
7891 2021-08-02 2021-09-16 2022-10-26 19:49:14.135585+00:00 -450 -405
1245 2021-09-13 null 2022-10-31 02:03:59.620348+00:00 -412 null
9073 None None 2021-09-12 10:25:30.845687+00:00 null null
6891 2021-08-03 2021-09-17 null null null
My question is: how can I replace the new variables' values with 0 if they are smaller than 0?
The ideal output should look like:
case_id first_created last_paid submitted_time create_duration paid_duration
3456 2021-01-27 2021-01-29 2021-01-26 21:34:36.566023+00:00 1 3
7891 2021-08-02 2021-09-16 2022-10-26 19:49:14.135585+00:00 0 0
1245 2021-09-13 null 2022-10-31 02:03:59.620348+00:00 0 null
9073 None None 2021-09-12 10:25:30.845687+00:00 null null
6891 2021-08-03 2021-09-17 null null null
My code:
select *,
first_created-coalesce(submitted_time::date) as create_duration,
last_paid-coalesce(submitted_time::date) as paid_duration,
case
when create_duration < 0 THEN 0
else create_duration
end as QuantityText
from my_table
greatest(yourvalue,0)
If yourvalue is lower than 0, then 0 will be returned as the greater value:
select *,
greatest(0,first_created-coalesce(submitted_time::date)) as create_duration,
greatest(0,last_paid-coalesce(submitted_time::date)) as paid_duration
from my_table
This will also change null values to 0.
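For example, a quick check of that behaviour:
select greatest(0, null);  -- returns 0, because greatest() ignores null arguments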
case statement
If you wish to keep the null results, you can resort to a regular case statement. In order to alias your calculation you'll have to put it in a subquery or a CTE:
select *,
case when create_duration<0 then 0 else create_duration end as create_duration_0,
case when paid_duration<0 then 0 else paid_duration end as paid_duration_0
from (
select *,
first_created-coalesce(submitted_time::date) as create_duration,
last_paid-coalesce(submitted_time::date) as paid_duration
from my_table ) as subquery;
(n+abs(n))/2
If you add a number to its absolute value and then divide by two (i.e. average the two), you get the number back if it was positive, or zero if it was negative, because a negative number always cancels out with its absolute value:
(-1+abs(-1))/2 = (-1+1)/2 = 0/2 = 0
( 1+abs( 1))/2 = ( 1+1)/2 = 2/2 = 1
select *,
(create_duration + abs(create_duration)) / 2 as create_duration_0,
(paid_duration + abs(paid_duration) ) / 2 as paid_duration_0
from (
select *,
first_created-coalesce(submitted_time::date) as create_duration,
last_paid-coalesce(submitted_time::date) as paid_duration
from my_table ) as subquery;
According to this demo, this is slightly faster than case and about as fast as greatest(), without affecting null values.
Note that select * pulls everything from the subquery, so you'll end up seeing create_duration as well as create_duration_0; you can get rid of that by listing your desired output columns explicitly in the outer query. You can also rewrite the query without the subquery/CTE by repeating the calculation, which looks ugly, but in most cases the planner will notice the repetition and evaluate it only once:
select *,
case when first_created-coalesce(submitted_time::date) < 0
then 0
else first_created-coalesce(submitted_time::date)
end as create_duration,
(abs(last_paid-coalesce(submitted_time::date))+last_paid-coalesce(submitted_time::date))/2 as paid_duration
from my_table;
or using a scalar subquery
select *,
(select case when a<0 then 0 else a end
from (select first_created-coalesce(submitted_time::date)) as alias(a) )
as create_duration,
(select case when a<0 then 0 else a end
from (select last_paid-coalesce(submitted_time::date)) as alias(a) )
as paid_duration
from my_table;
Neither of these helps with anything in this particular case, but they are good to know.
If you are planning on attaching your SQL database to an ASP.NET app, you could write a C# query against your database and use something like:
Parameters.AddWithValue("@DataYouWantToChange", 0);
However, if you're not using your SQL database with an ASP.NET app, this will not work.
How do I query PostgreSQL to check whether the constraint is valid or not?
CREATE TABLE emp (test_check int check ( test_check >1 and test_check < 0 ));
query the constraint:
select * from pg_constraint where conname = 'emp_test_check_check';
------------------------------------------------------------------------------------
oid | 24631
conname | emp_test_check_check
connamespace | 2200
contype | c
condeferrable | f
condeferred | f
convalidated | t
conrelid | 24628
contypid | 0
conindid | 0
conparentid | 0
confrelid | 0
confupdtype |
confdeltype |
confmatchtype |
conislocal | t
coninhcount | 0
connoinherit | f
conkey | {1}
confkey | [null]
conpfeqop | [null]
conppeqop | [null]
conffeqop | [null]
confdelsetcols | [null]
conexclop | [null]
conbin | {BOOLEXPR :boolop and :args ({OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 46} {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 58 :constvalue 4 [ 1 0 0 0 0 0 0 0 ]}) :location 57} {OPEXPR :opno 97 :opfuncid 66 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 64} {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 77 :constvalue 4 [ 0 0 0 0 0 0 0 0 ]}) :location 75}) :location 60}
get the check definition:
select pgc.conname as constraint_name,
ccu.table_schema as table_schema,
ccu.table_name,
ccu.column_name,
pg_get_constraintdef(pgc.oid)
from pg_constraint pgc
join pg_namespace nsp on nsp.oid = pgc.connamespace
join pg_class cls on pgc.conrelid = cls.oid
left join information_schema.constraint_column_usage ccu
on pgc.conname = ccu.constraint_name
and nsp.nspname = ccu.constraint_schema
where contype ='c'
order by pgc.conname;
return:
-[ RECORD 1 ]--------+------------------------------------------------
constraint_name | emp_test_check_check
table_schema | public
table_name | emp
column_name | test_check
pg_get_constraintdef | CHECK (((test_check > 1) AND (test_check < 0)))
Similar question:
CREATE TABLE emp1 (test_check int check ( test_check >1 and test_check > 10 ));
Can PostgreSQL deduce that the above check constraint reduces to
CREATE TABLE emp1 (test_check int check ( test_check > 10 ));
If it can, how do I query that?
Every Boolean expression is valid for a CHECK constraint. The documentation requires only:
a Boolean (truth-value) expression.
Postgres does not attempt to simplify your expression. It's your responsibility to provide sensible expressions. The expression is parsed and checked to be valid, then stored in an internal format. (So every stored expression is valid.) But not simplified. Not even the simplest cases like CHECK (true AND true).
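To illustrate (a minimal sketch; the table name check_demo is made up), the stored definition comes back essentially as written:
create table check_demo (val int check (true and true));
select pg_get_constraintdef(oid)
from pg_constraint
where conrelid = 'check_demo'::regclass;
-- the expression is returned as written, not reduced to CHECK (true)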
You already have a working query to retrieve CHECK constraints, but I would use this faster query not involving the (bloated) information_schema:
SELECT c.conname AS constraint_name
, c.conrelid::regclass AS table_name -- schema-qualified where needed
, a.column_names
, pg_get_constraintdef(c.oid) AS constraint_definition
FROM pg_constraint c
LEFT JOIN LATERAL (
SELECT string_agg(attname, ', ') AS column_names
FROM pg_attribute a
WHERE a.attrelid = c.conrelid
AND a.attnum = ANY (c.conkey)
) a ON true
WHERE c.contype = 'c' -- only CHECK constraints
AND c.conrelid > 0 -- only table constraints (incl. "column constraints")
ORDER BY c.conname;
Aside:
It should be noted that a check constraint is satisfied if the check expression evaluates to true or the null value.
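A quick demonstration with the contradictory constraint from above: a NULL value still gets in, because the check expression evaluates to NULL rather than false.
insert into emp (test_check) values (5);     -- fails: 5 > 1 AND 5 < 0 is false
insert into emp (test_check) values (null);  -- succeeds: the check evaluates to the null value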
I am trying to use pgbench to perform a test on PolarDB for PostgreSQL.
This is the command I used to perform the test.
pgbench -M prepared -r -c 16 -j 4 -T 30 -p 10001 -d pgbench -l
And this is the result
... ...
client 2 sending P0_10
client 2 receiving
client 2 receiving
client 14 receiving
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 32
query mode: prepared
number of clients: 16
number of threads: 4
duration: 30 s
number of transactions actually processed: 49126
latency average = 9.772 ms
tps = 1637.313156 (including connections establishing)
tps = 1637.438330 (excluding connections establishing)
statement latencies in milliseconds:
1.128 \set aid random(1, 100000 * :scale)
0.068 \set bid random(1, 1 * :scale)
0.040 \set tid random(1, 10 * :scale)
0.041 \set delta random(-5000, 5000)
0.104 BEGIN;
3.815 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
0.590 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
1.188 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
1.440 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
0.327 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
0.481 END;
I wonder if there is a way to calculate the P99 latency from this result, or is there some extra parameter I need to provide to pgbench?
The -l caused it to write log files. You need to look in those log files for the latencies. For me that looks something like this:
cat pgbench_log.107915*|wc
36635 219810 1033548
sort pgbench_log.107915* -k3rn|head -n 366|tail -n 1
13 990 65184 0 195589 166574
so about 65.184 ms was the 99th-percentile latency. I would question whether that actually means anything, though. After all, the very last transaction had to wait nearly 30 seconds before it even got its turn on the server, so why wasn't its 'latency' 30 seconds? What about the transactions that never got their turn at all?
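If you'd rather not do the percentile arithmetic by hand, one option is to load the log into Postgres and let it compute the percentile. This is only a sketch: it assumes the default per-transaction log format (client_id, transaction_no, time, script_no, time_epoch, time_us) and a made-up table name.
create table pgbench_log (
    client_id       int,
    transaction_no  bigint,
    latency_us      bigint,  -- third column: transaction time in microseconds
    script_no       int,
    time_epoch      bigint,
    time_us         int
);
\copy pgbench_log from program 'cat pgbench_log.107915*' with (format text, delimiter ' ')
select percentile_cont(0.99) within group (order by latency_us) / 1000.0 as p99_ms
from pgbench_log;
That returns an interpolated 99th-percentile latency in milliseconds; the caveat above about what that number actually means still applies.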
I am getting a constant TPS of 500 while using pgbench with different TPS values as input. How can I increase the TPS in a Postgres BDR setup? Please suggest.
bdr_version used : 1.0.3-2017-11-21-15283ba
Sample Output and pgbench command input
Scenario 1:
shared_buffers = 1024 MB, throttling TPS at 2000
command used:
pgbench -h <hostname> -p 5432 -U postgres -d <dbname> -c 50 -j 50 -r -R 2000
Output:
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 50
number of threads: 50
duration: 2400 s
number of transactions actually processed: 1200964
latency average: 894071.370 ms
latency stddev: -nan ms
rate limit schedule lag: avg 893971.489 (max 1819008.190) ms
tps = 500.382936 (including connections establishing)
tps = 500.385107 (excluding connections establishing)
statement latencies in milliseconds:
0.109225 \set nbranches 1 * :scale
0.100045 \set ntellers 10 * :scale
0.085917 \set naccounts 100000 * :scale
0.064758 \setrandom aid 1 :naccounts
0.057230 \setrandom bid 1 :nbranches
0.052089 \setrandom tid 1 :ntellers
0.049827 \setrandom delta -5000 5000
0.471368 BEGIN;
0.678317 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
0.599331 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
80.328257 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
15.537372 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
0.698180 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
0.995450 END;
Thu Aug 16 09:52:30 UTC 2018
Scenario 2:
shared_buffers = 1024 MB, throttling TPS at 6000
Command Used:
pgbench -h <hostname> -p 5432 -U postgres -d <dbname> -c 50 -j 50 -r -R 6000
Output:
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 50
number of threads: 50
duration: 2400 s
number of transactions actually processed: 1184746
latency average: 1096729.554 ms
latency stddev: -nan ms
rate limit schedule lag: avg 1096628.305 (max 2208483.098) ms
tps = 493.625936 (including connections establishing)
tps = 493.629106 (excluding connections establishing)
statement latencies in milliseconds:
0.108491 \set nbranches 1 * :scale
0.098740 \set ntellers 10 * :scale
0.084497 \set naccounts 100000 * :scale
0.064168 \setrandom aid 1 :naccounts
0.056658 \setrandom bid 1 :nbranches
0.051678 \setrandom tid 1 :ntellers
0.049427 \setrandom delta -5000 5000
0.480755 BEGIN;
0.696514 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
0.607114 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
81.404448 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
15.775295 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
0.708403 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
1.009272 END;
Thu Aug 16 11:33:17 UTC 2018
Does anyone know of a simple method for solving this?
I have a table which consists of start times for events and the associated durations. I need to be able to split the event durations into thirty minute intervals. So for example if an event starts at 10:45:00 and the duration is 00:17:00 then the returned set should allocate 15 minutes to the 10:30:00 interval and 00:02:00 minutes to the 11:00:00 interval.
I'm sure I can figure out a clumsy approach but would like something a little simpler. This must come up quite often I'd imagine but Google is being unhelpful today.
Thanks,
Steve
You could create a lookup table with just the times (over 24 hours), and join to that table. You would need to rebase the date to that used in the lookup. Then perform a datediff on the upper and lower intervals to work out their durations. Each middle interval would be 30 minutes.
create table #interval_lookup (
    from_date datetime,
    to_date datetime
)

declare @time datetime
set @time = '00:00:00'

while @time < '2 Jan 1900'
begin
    insert into #interval_lookup values (@time, dateadd(minute, 30, @time))
    set @time = dateadd(minute, 30, @time)
end

declare @search_from datetime
declare @search_to datetime

set @search_from = '10:45:00'
set @search_to = dateadd(minute, 17, @search_from)

select
    from_date as interval,
    case
        when from_date <= @search_from and
             @search_from < to_date and
             from_date <= @search_to and
             @search_to < to_date
            then datediff(minute, @search_from, @search_to)
        when from_date <= @search_from and
             @search_from < to_date
            then datediff(minute, @search_from, to_date)
        when from_date <= @search_to and
             @search_to < to_date
            then datediff(minute, from_date, @search_to)
        else 30
    end as duration
from
    #interval_lookup
where
    to_date > @search_from
    and from_date <= @search_to
Create a TVF that splits a single event:
CREATE FUNCTION dbo.TVF_TimeRange_Split_To_Grid
(
    @eventStartTime datetime
    , @eventDurationMins float
    , @intervalMins int
)
RETURNS @retTable table
(
    intervalStartTime datetime
    , intervalEndTime datetime
    , eventDurationInIntervalMins float
)
AS
BEGIN
    declare @eventMinuteOfDay int
    set @eventMinuteOfDay = datepart(hour, @eventStartTime) * 60 + datepart(minute, @eventStartTime)

    declare @intervalStartMinute int
    set @intervalStartMinute = @eventMinuteOfDay - @eventMinuteOfDay % @intervalMins

    declare @intervalStartTime datetime
    set @intervalStartTime = dateadd(minute, @intervalStartMinute, cast(floor(cast(@eventStartTime as float)) as datetime))

    declare @intervalEndTime datetime
    set @intervalEndTime = dateadd(minute, @intervalMins, @intervalStartTime)

    declare @eventDurationInIntervalMins float

    while (@eventDurationMins > 0)
    begin
        set @eventDurationInIntervalMins = cast(@intervalEndTime - @eventStartTime as float) * 24 * 60
        if @eventDurationMins < @eventDurationInIntervalMins
            set @eventDurationInIntervalMins = @eventDurationMins

        insert into @retTable
        select @intervalStartTime, @intervalEndTime, @eventDurationInIntervalMins

        set @eventDurationMins = @eventDurationMins - @eventDurationInIntervalMins
        set @eventStartTime = @intervalEndTime
        set @intervalStartTime = @intervalEndTime
        set @intervalEndTime = dateadd(minute, @intervalMins, @intervalEndTime)
    end

    RETURN
END
GO
GO
Test:
select getdate()
select * from dbo.TVF_TimeRange_Split_To_Grid(getdate(),23,30)
Test Result:
2008-10-31 11:28:12.377
intervalStartTime intervalEndTime eventDurationInIntervalMins
----------------------- ----------------------- ---------------------------
2008-10-31 11:00:00.000 2008-10-31 11:30:00.000 1,79372222222222
2008-10-31 11:30:00.000 2008-10-31 12:00:00.000 21,2062777777778
Sample usage:
select input.eventName, result.* from
(
select
'first' as eventName
,cast('2008-10-03 10:45' as datetime) as startTime
,17 as durationMins
union all
select
'second' as eventName
,cast('2008-10-05 11:00' as datetime) as startTime
,17 as durationMins
union all
select
'third' as eventName
,cast('2008-10-05 12:00' as datetime) as startTime
,100 as durationMins
) input
cross apply dbo.TVF_TimeRange_Split_To_Grid(input.startTime,input.durationMins,30) result
Sample usage result:
eventName intervalStartTime intervalEndTime eventDurationInIntervalMins
--------- ----------------------- ----------------------- ---------------------------
first 2008-10-03 10:30:00.000 2008-10-03 11:00:00.000 15
first 2008-10-03 11:00:00.000 2008-10-03 11:30:00.000 2
second 2008-10-05 11:00:00.000 2008-10-05 11:30:00.000 17
third 2008-10-05 12:00:00.000 2008-10-05 12:30:00.000 30
third 2008-10-05 12:30:00.000 2008-10-05 13:00:00.000 30
third 2008-10-05 13:00:00.000 2008-10-05 13:30:00.000 30
third 2008-10-05 13:30:00.000 2008-10-05 14:00:00.000 10
(7 row(s) affected)