Select max value rows from table column

My table looks like this:
id  name   count
--  -----  -----
1   Mike   0
2   Duke   2
3   Smith  1
4   Dave   6
5   Rich   3
6   Rozie  8
7   Romeo  0
8   Khan   1
----------------
I want to select the rows with the highest count, limited to 5 (the TOP 5 names with the maximum count).
The result would look something like this:
id  name   count
--  -----  -----
6   Rozie  8
4   Dave   6
5   Rich   3
2   Duke   2
3   Smith  1
Please help, thanks.

Here is how:
MySQL:
SELECT * FROM tableName ORDER BY count DESC LIMIT 5
MS SQL:
SELECT TOP 5 * FROM tableName ORDER BY count DESC
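One caveat, since the example column is literally named count: COUNT is a reserved keyword in SQL Server (it is also the aggregate function's name), so you may need to quote the column. A hedged example, with tableName standing in for your real table:
-- MS SQL: bracket the reserved word
SELECT TOP 5 * FROM tableName ORDER BY [count] DESC
-- MySQL: backticks work too (usually optional, as count is not reserved there)
SELECT * FROM tableName ORDER BY `count` DESC LIMIT 5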

Related

T-SQL: split string on multiple delimiters

I have been given a T-SQL task: to convert/format names which are in ALL CAPS into Title Case. I have decided that splitting the names into tokens, and capitalizing the first letter of each token, would be a reasonable approach (I am willing to take advice if there's a better option, especially in T-SQL).
That said, to accomplish this, I'd have to split the name fields on spaces AND dashes, hyphens, etc. Then, once it is tokenized, I can worry about normalizing the case.
Is there any reasonable way to split a string along any delimiter in a list?
If ease and performance are important, grab a copy of PatExtract8K.
Here's a basic example where I split on any character that is not a letter or number ([^a-z0-9]):
-- Sample string
DECLARE @string VARCHAR(8000) = 'abc.123&xyz!4445556__5566^rrr';

-- Basic use
SELECT pe.* FROM samd.patExtract8K(@string, '[^a-z0-9]') AS pe;
Output:
itemNumber  itemIndex  itemLength  item
----------  ---------  ----------  -------
1           1          3           abc
2           5          3           123
3           9          3           xyz
4           13         7           4445556
5           22         4           5566
6           27         3           rrr
It returns what you need as well as:
the length of the item (itemLength)
its position in the string (itemIndex)
its ordinal position in the string (itemNumber)
Now against a table. Here we're doing the same thing but I'll explicitly call out the characters I want to use as a delimiter. Here it's any of these characters: *.&,?%/>
-- Sample table
DECLARE @table TABLE (SomeId INT IDENTITY, SomeString VARCHAR(100));
INSERT @table VALUES ('abc***332211,,XXX'), ('abc.123&&555%jjj'), ('ll/111>ff?12345');

SELECT t.*, pe.*
FROM @table AS t
CROSS APPLY samd.patExtract8K(t.SomeString, '[*.&,?%/>]') AS pe;
This returns:
SomeId  SomeString         itemNumber  itemIndex  itemLength  item
------  -----------------  ----------  ---------  ----------  ------
1       abc***332211,,XXX  1           1          3           abc
1       abc***332211,,XXX  2           7          6           332211
1       abc***332211,,XXX  3           15         3           XXX
2       abc.123&&555%jjj   1           1          3           abc
2       abc.123&&555%jjj   2           5          3           123
2       abc.123&&555%jjj   3           10         3           555
2       abc.123&&555%jjj   4           14         3           jjj
3       ll/111>ff?12345    1           1          2           ll
3       ll/111>ff?12345    2           4          3           111
3       ll/111>ff?12345    3           8          2           ff
3       ll/111>ff?12345    4           11         5           12345
On the other hand, if I wanted to extract the delimiters instead, I could negate the pattern: [^*.&,?%/>]. Now the same query returns:
SomeId  itemNumber  itemIndex  itemLength  item
------  ----------  ---------  ----------  ----
1       1           4          3           ***
1       2           13         2           ,,
2       1           4          1           .
2       2           8          2           &&
2       3           13         1           %
3       1           3          1           /
3       2           7          1           >
3       3           10         1           ?
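To tie this back to the Title Case task: once the tokens are extracted, each one just needs its casing fixed before the pieces are stitched back together. Below is a hedged sketch, not the definitive approach; it assumes SQL Server 2017+ (for STRING_AGG), assumes collapsing every delimiter run down to a single space is acceptable, and uses a hypothetical @names table variable standing in for your real name column:
-- Hypothetical sample names in ALL CAPS
DECLARE @names TABLE (NameId INT IDENTITY, FullName VARCHAR(100));
INSERT @names VALUES ('JOHN SMITH-JONES'), ('MARY ANN DOE');

-- Tokenize on anything that is not a letter or digit, title-case each token,
-- then reassemble the tokens in their original order, separated by spaces.
SELECT n.NameId,
       STRING_AGG(UPPER(LEFT(pe.item, 1)) + LOWER(SUBSTRING(pe.item, 2, 8000)), ' ')
           WITHIN GROUP (ORDER BY pe.itemNumber) AS TitleCased
FROM @names AS n
CROSS APPLY samd.patExtract8K(n.FullName, '[^a-z0-9]') AS pe
GROUP BY n.NameId;
This turns 'JOHN SMITH-JONES' into 'John Smith Jones'; if the original hyphens and other delimiters must be preserved, you would keep the itemIndex values and rebuild the string around them instead.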

KDB+: How to retrieve the rows immediately before and after a given row that conform to a specific logic?

Given the following table
time  kind  counter  key1  value
--------------------------------
1     1     1        1     1
2     0     1        1     2
3     0     1        2     3
5     0     1        1     4
5     1     2        2     5
6     0     2        3     6
7     0     2        2     7
8     1     3        3     8
9     1     4        3     9
How would one select the value from the first row immediately after and immediately before each row of kind 1, ordered by time, where the key1 value is the same in both instances? I.e.:
time  value  prevvalue  nextvalue
---------------------------------
1     1      0n         2
5     5      3          7
8     8      6          0n
9     9      6          0n
Here are some of the things I have tried, though to be honest I have no idea how to canonically achieve something like this in q, where the prior value sits at a variable offset from the current row:
select
prev[value],
next[value],
by key1 where kind<>1
update 0N^prevval,0N^nextval from update prevval:prev value1,nextval:next value1 by key1 from table
Some advice or a pointer on how to achieve this would be great!
Thanks
I was able to use the following code to return a table meeting your requirements. If this is correct, the sample output you have provided is incorrect; otherwise I have misunderstood the question.
q)table:([] time:1 2 3 5 5 6 7 8 9;kind:1 0 0 0 1 0 0 1 1;counter:1 1 1 1 2 2 2 3 4;key1:1 1 2 1 2 3 2 3 3;value1:1 2 3 4 5 6 7 8 9)
q)tab2:update 0N^prevval,0N^nextval from update prevval:prev value1,nextval:next value1 by key1 from table
q)tab3:select from tab2 where kind=1
q)select time,value1,prevval,nextval from tab3
time value1 prevval nextval
---------------------------
1    1              2
5    5      3       7
8    8      6       9
9    9      8
The update statement in tab2:
update 0N^prevval,0N^nextval from update prevval:prev value1,nextval:next value1 by key1 from table
is simply adding two columns onto the original table with the previous and next value1 for each row within its key1 group. 0N^ fills the empty fields with nulls.
The select statement in tab3:
tab3:select from tab2 where kind=1
is filtering tab2 for rows where kind=1.
The final select statement:
select time,value1,prevval,nextval from tab3
is selecting the columns you want returned in the final result.
Hope this answers your question.
Thanks,
Caitlin

SQL Select based on each row of previous select

I have a table with answers regarding different questions, all of them numbered. There are basically these columns: IdAnswer (unique for each answer in the table), IdUser (which won't repeat even if the same user answers questions a second time), IdQuestion, and Answer.
IdAnswer  IdUser  IdQuestion  Answer
1         John    1           0
2         John    4           1
3         John    5           1
4         John    6           0
5         Bob     1           1
6         Bob     3           1
7         Bob     5           0
8         Mark    2           0
9         Mark    7           1
10        Mark    5           0
I'd like to select from this table all answers to a specific question (say, IdQuestion = 5), and also the last question each user answered just before question number 5.
In the end I need a table that should look like this:
IdAnswer  IdUser  IdQuestion  Answer
2         John    4           1
3         John    5           1
6         Bob     3           1
7         Bob     5           0
9         Mark    7           1
10        Mark    5           0
I've managed to make this work using a cursor to iterate through each line from the first SELECT result (which filters by IdQuestion), but I'm not sure if this is the best (and fastest) way of doing it. Is there any more efficient way of achieving the same result?
And by the way, I'm using SQL Server Management Studio 2012.
Here is one way using the LEAD function:
select *
from
(
    select *, NextQ = lead(IdQuestion) over (partition by IdUser order by IdAnswer)
    from yourtable
) a
where 5 in (IdQuestion, NextQ)
For older versions:
;WITH cte
AS (SELECT prev_id = MIN(CASE WHEN IdQuestion = 5 THEN rn - 1 END) OVER (PARTITION BY IdUser), *
    FROM (SELECT rn = ROW_NUMBER() OVER (PARTITION BY IdUser ORDER BY IdAnswer), *
          FROM yourtable) a)
SELECT *
FROM cte
WHERE rn IN (prev_id, prev_id + 1)
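If you want to try both versions, here is a hedged, self-contained setup using the sample data from the question (@Answers is just a throwaway table variable, not your real table name), run against the LEAD variant:
DECLARE @Answers TABLE (IdAnswer INT, IdUser VARCHAR(10), IdQuestion INT, Answer INT);
INSERT @Answers VALUES
    (1,'John',1,0),(2,'John',4,1),(3,'John',5,1),(4,'John',6,0),
    (5,'Bob',1,1),(6,'Bob',3,1),(7,'Bob',5,0),
    (8,'Mark',2,0),(9,'Mark',7,1),(10,'Mark',5,0);

-- Keep each question-5 answer plus the row whose "next question" is 5,
-- i.e. the answer given immediately before question 5 for that user.
SELECT IdAnswer, IdUser, IdQuestion, Answer
FROM (
    SELECT *, NextQ = LEAD(IdQuestion) OVER (PARTITION BY IdUser ORDER BY IdAnswer)
    FROM @Answers
) a
WHERE 5 IN (IdQuestion, NextQ);
This should return the six rows listed in the question (IdAnswer 2, 3, 6, 7, 9 and 10).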

Querying sql table with multiple values

I would like to query the SQL table below
ID  Val
-------
1   5
1   7
1   8
1   9
2   5
2   7
2   9
3   1
3   5
and get back only the IDs that have all of the values 5, 7 and 9:
result
------
ID
1
2
My attempt was:
select distinct ID from dbo.table where val in (5,7,9)
but I run into a problem: a single row only has to match one val from the subset, not all of them, so IDs that have just some of the values are returned as well.
Assuming the rows are distinct:
SELECT ID
FROM your_table
WHERE Val IN (5,7,9)
GROUP BY ID
HAVING COUNT(*) = 3
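If the (ID, Val) pairs can repeat, COUNT(*) = 3 can be fooled by duplicates; a hedged variant that tolerates duplicates counts distinct values instead:
SELECT ID
FROM your_table
WHERE Val IN (5,7,9)
GROUP BY ID
HAVING COUNT(DISTINCT Val) = 3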

tsql sum data and include default values for missing data

I would like a query that sums values per type and shows a default of 0 for types with no data. For example, assume I have a table as follows:
type_lookup:
id  name
-----------
1   self
2   manager
3   peer
And a table as follows:
data:
id  type_lookup_id  value
--------------------------
1   1               1
2   1               4
3   2               9
4   2               1
5   2               9
6   1               5
7   2               6
8   1               2
9   1               1
After running a query I would like a result set as follows:
type_lookup_id  value
----------------------
1               13
2               25
3               0
I would like all rows in the type_lookup table to be included in the result set, even if they don't appear in the data table.
It's a bit hard to read your data layout, but something like the following should do the trick:
SELECT tl.id AS type_lookup_id, tl.name, ISNULL(SUM(da.value), 0) AS how_much
FROM type_lookup tl
LEFT OUTER JOIN data da
    ON da.type_lookup_id = tl.id
GROUP BY tl.id, tl.name
ORDER BY tl.id
[EDIT]
...subsequently edited by changing count() to sum().
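For completeness, a minimal hedged sketch (using the table variables @type_lookup and @data as stand-ins for the real tables) that reproduces the question's data and confirms the query returns 13, 25 and 0:
DECLARE @type_lookup TABLE (id INT, name VARCHAR(20));
DECLARE @data TABLE (id INT, type_lookup_id INT, value INT);
INSERT @type_lookup VALUES (1,'self'),(2,'manager'),(3,'peer');
INSERT @data VALUES (1,1,1),(2,1,4),(3,2,9),(4,2,1),(5,2,9),(6,1,5),(7,2,6),(8,1,2),(9,1,1);

-- The LEFT JOIN keeps lookup rows with no matching data; ISNULL turns their NULL sum into 0.
SELECT tl.id AS type_lookup_id, tl.name, ISNULL(SUM(da.value), 0) AS how_much
FROM @type_lookup AS tl
LEFT OUTER JOIN @data AS da ON da.type_lookup_id = tl.id
GROUP BY tl.id, tl.name
ORDER BY tl.id;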