I'm a newbie at T-SQL and am stuck on this problem. Can anyone help?
I have a table like the one below (using SQL Server 2008 Express Edition):
ID COL1 COL2
1 7 2
2 7 3
3 7 4
4 7 5
5 9 2
6 9 3
7 9 4
8 9 5
9 11 2
10 11 3
11 11 4
12 11 5
How do I write a SELECT query that fetches everything between (COL1, COL2) = (7, 3) and (11, 2), both endpoints included?
SQL Fiddle
MS SQL Server 2008 Schema Setup:
create table YourTable
(
ID int,
COL1 int,
COL2 int
)
insert into YourTable values
(1 ,7 ,2),
(2 ,7 ,3),
(3 ,7 ,4),
(4 ,7 ,5),
(5 ,9 ,2),
(6 ,9 ,3),
(7 ,9 ,4),
(8 ,9 ,5),
(9 ,11 ,2),
(10 ,11 ,3),
(11 ,11 ,4),
(12 ,11 ,5)
Query 1:
select *
from YourTable
where (COL1 > 7 and COL1 < 11) or
(COL1 = 7 and COL2 >= 3) or
(COL1 = 11 and COL2 <= 2)
Results:
| ID | COL1 | COL2 |
--------------------
| 2 | 7 | 3 |
| 3 | 7 | 4 |
| 4 | 7 | 5 |
| 5 | 9 | 2 |
| 6 | 9 | 3 |
| 7 | 9 | 4 |
| 8 | 9 | 5 |
| 9 | 11 | 2 |
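If you need the same filter for arbitrary endpoints, another way to express "between two composite keys" is to rank the rows by (COL1, COL2) and filter on that ranking. This is only a sketch and assumes each (COL1, COL2) pair occurs once, as in the sample data:
;with ordered as
(
    -- rank every row by the composite key (COL1, COL2)
    select *, row_number() over (order by COL1, COL2) as rn
    from YourTable
)
select ID, COL1, COL2
from ordered
where rn between (select rn from ordered where COL1 = 7  and COL2 = 3)
             and (select rn from ordered where COL1 = 11 and COL2 = 2)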
In Postgres, I have generated a user table and an organizations table. There is a relationship between them: multiple users belong to one organization.
The join table user_organizations_organization was auto-generated.
Here is what I have:
development=# select * from user_organizations_organization;
userId | organizationId
--------+----------------
1 | 1
2 | 1
3 | 2
4 | 2
5 | 1
6 | 1
7 | 2
8 | 2
9 | 3
10 | 3
11 | 4
12 | 4
13 | 3
14 | 3
15 | 4
16 | 4
17 | 5
18 | 5
19 | 6
20 | 6
21 | 5
22 | 5
23 | 6
24 | 6
25 | 7
26 | 7
27 | 8
28 | 8
29 | 7
30 | 7
31 | 8
32 | 8
33 | 9
34 | 9
35 | 10
36 | 10
37 | 9
38 | 9
39 | 10
40 | 10
(40 rows)
I want to delete relationships related to organizations 5,6,7,8:
development=# delete from user_organizations_organization where organizationId in (5,6,7,8);
ERROR: column "organizationid" does not exist
LINE 1: delete from user_organizations_organization where organizati...
^
HINT: Perhaps you meant to reference the column "user_organizations_organization.organizationId".
development=#
How can I delete them?
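As the HINT suggests, the column was created with a camelCase name, and Postgres folds unquoted identifiers to lower case, so the column name has to be double-quoted. A minimal sketch of the delete:
delete from user_organizations_organization where "organizationId" in (5, 6, 7, 8);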
I want to create a function that finds the number of distinct sids (staff ids) that have worked on a given pno.
So, for example, if I wanted to find the sids corresponding to pno = 1:
select sid_worked_on(1)
count
-------
2
I would get 2, since sids 0 and 1 have worked on it.
This is the table joined from 3 different tables:
pno | a_sid | b_sid | c_sid
-----+--------+--------+--------
1 | 0 | 0 | 0
4 | 4 | 4 | 6
5 | 4 | 4 | 5
2 | 0 | 0 | 0
1 | 0 | 1 | 0
7 | 5 | 4 | 4
7 | 5 | 5 | 4
5 | 4 | 4 | 4
4 | 4 | 5 | 6
7 | 5 | 4 | 1
7 | 5 | 5 | 1
6 | 5 | 4 | 5
My only idea so far is to "flatten" the table into a single sid column, since there is no need for multiple columns, and then take the distinct sids, but I haven't learnt how to do that yet.
pno | sid
-----+--------
1 | 0
4 | 4
5 | 4
2 | 0
1 | 0
7 | 5
7 | 5
5 | 4
4 | 4
7 | 5
7 | 5
6 | 5
--where the new table starts
1 | 0
4 | 4
5 | 4
2 | 0
1 | 1
7 | 4
...
...
I also thought about creating a table and going through each value, like so:
create table
for each row where pno = 1
check if a_sid in table
if not then add a_sid to table
check if b_sid in table
if not then add b_sid to table
check if c_sid in table
if not then add c_sid to table
Would there be a better way of doing this?
Use UNION (here joined_table stands for your joined result):
SELECT pno, a_sid AS sid FROM joined_table
UNION
SELECT pno, b_sid FROM joined_table
UNION
SELECT pno, c_sid FROM joined_table
If you want to keep duplicate entries of the same pno with the same sid, use UNION ALL instead of UNION.
You can create a view with this query.
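Building on the UNION approach, here is a sketch of the function itself, assuming Postgres and again using joined_table as a placeholder for your joined result:
CREATE OR REPLACE FUNCTION sid_worked_on(p_pno int)
RETURNS bigint AS $$
    -- UNION already removes duplicate sids, so a plain count is enough
    SELECT count(*)
    FROM (
        SELECT a_sid AS sid FROM joined_table WHERE pno = p_pno
        UNION
        SELECT b_sid FROM joined_table WHERE pno = p_pno
        UNION
        SELECT c_sid FROM joined_table WHERE pno = p_pno
    ) AS s;
$$ LANGUAGE sql;
For the sample data, select sid_worked_on(1) returns 2 (sids 0 and 1).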
TIL about tablefunc and crosstab. At first I wanted to "group data by columns" but that doesn't really mean anything.
My product sales look like this
product_id | units | date
-----------------------------------
10 | 1 | 1-1-2018
10 | 2 | 2-2-2018
11 | 3 | 1-1-2018
11 | 10 | 1-2-2018
12 | 1 | 2-1-2018
13 | 10 | 1-1-2018
13 | 10 | 2-2-2018
I would like to produce a table of products with months as columns
product_id | 01-01-2018 | 02-01-2018 | etc.
-----------------------------------
10 | 1 | 2
11 | 13 | 0
12 | 0 | 1
13 | 20 | 0
My first thought was to group by month and then pivot so the months become columns per product, but I cannot figure out how to do this.
After enabling the tablefunc extension,
SELECT product_id, coalesce("2018-1-1", 0) as "2018-1-1"
, coalesce("2018-2-1", 0) as "2018-2-1"
FROM crosstab(
-- source query: one row per (product_id, month) with the summed units
$$SELECT product_id, date_trunc('month', date)::date as month, sum(units) as units
FROM test
GROUP BY product_id, month
ORDER BY 1$$
-- the list of month values that become output columns
, $$VALUES ('2018-1-1'::date), ('2018-2-1')$$
) AS ct (product_id int, "2018-1-1" int, "2018-2-1" int);
yields
| product_id | 2018-1-1 | 2018-2-1 |
|------------+----------+----------|
| 10 | 1 | 2 |
| 11 | 13 | 0 |
| 12 | 0 | 1 |
| 13 | 10 | 10 |
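If you would rather not install the extension, conditional aggregation over the same test table produces the same shape. A sketch, assuming Postgres 9.4+ for the FILTER clause:
SELECT product_id
     , coalesce(sum(units) FILTER (WHERE date_trunc('month', date) = date '2018-01-01'), 0) AS "2018-1-1"
     , coalesce(sum(units) FILTER (WHERE date_trunc('month', date) = date '2018-02-01'), 0) AS "2018-2-1"
FROM test
GROUP BY product_id
ORDER BY product_id;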
Hi, I have a problem with the GROUP BY clause when the data contains combinations of two columns.
This is part of my table with the combinations:
CREATE TABLE sampleTable
(
id serial primary key,
sat1 varchar(3),
sat2 varchar(3)
);
INSERT INTO sampleTable
(sat1, sat2)
VALUES
('LE7','LE7'),
('LE8','LE7'),
('LE7','LE7'),
('LE7','LC8'),
('LE7','LE8'),
('LE8','LE7'),
...
http://sqlfiddle.com/#!15/63104/2
I want the count of the combinations, but for me the combination sat1,sat2 is the same as sat2,sat1.
My (wrong) SQL code:
select sat1, sat2, count(*) from sampleTable group by sat1, sat2 order by sat1
and the result:
sat1 sat2 count
1 LC8 LC8 27
2 LC8 LE7 17
3 LE7 LE7 200
4 LE7 LC8 22
5 LM1 LM2 2
6 LM1 LM1 12
7 LM2 LM2 6
8 LM2 LM1 3
but it should be:
sat1 sat2 count
1 LC8 LC8 27
2 LC8 LE7 39 (17+22 / line 2 & 4)
3 LE7 LE7 200
4 LM1 LM2 5 (2+3 / line 5 & 8)
5 LM1 LM1 12
6 LM2 LM2 6
Does anyone have SQL code that solves this?
Thanks for the help!
Use LEAST() and GREATEST() to "simplify" the 2 grouping columns:
Query 1:
select least(sat1, sat2), greatest(sat1, sat2), count(*)
from sampleTable
group by least(sat1, sat2), greatest(sat1, sat2)
order by least(sat1, sat2)
Results:
| least | greatest | count |
|-------|----------|-------|
| LC8 | LC8 | 27 |
| LC8 | LE7 | 39 |
| LE7 | LE7 | 200 |
| LM1 | LM1 | 12 |
| LM1 | LM2 | 5 |
| LM2 | LM2 | 6 |
See this SQL Fiddle
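A small variant in case you want the output columns to keep the original names; this only adds aliases to the query above:
select least(sat1, sat2) as sat1, greatest(sat1, sat2) as sat2, count(*) as count
from sampleTable
group by least(sat1, sat2), greatest(sat1, sat2)
order by 1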
I have a temp table that I've populated with a running total using SQL Server windowing functions. The data in my temp table is in the following format:
|Day | Sku Nbr | CMQTY |
| 1 | f45 | 0 |
| 2 | f45 | 2 |
| 3 | f45 | 0 |
| 4 | f45 | 7 |
| 5 | f45 | 0 |
| 6 | f45 | 0 |
| 7 | f45 | 0 |
| 8 | f45 | 13 |
| 9 | f45 | 15 |
| 10 | f45 | 21 |
I would like to manipulate the data so that it displays like this:
|Day| Sku Nbr | CMQTY |
| 1 | f45 | 0 |
| 2 | f45 | 2 |
| 3 | f45 | 2 |
| 4 | f45 | 7 |
| 5 | f45 | 7 |
| 6 | f45 | 7 |
| 7 | f45 | 7 |
| 8 | f45 | 13 |
| 9 | f45 | 15 |
| 10 | f45 | 21 |
I've tried using the LAG function, but there are issues when I have multiple days in a row with a CMQTY of 0. I've also tried CASE WHEN logic, but that fails as well.
You can build the groups with a conditional windowed SUM and then sum within each group, as below:
;with cte as (
    -- sm increases by 1 each time a non-zero CMQTY appears, so every zero row
    -- falls into the group of the most recent non-zero row
    select *, sm = sum(case when cmqty > 0 then 1 else 0 end) over (order by [day])
    from #yoursum
)
-- within each sm group, the running sum simply repeats that group's non-zero value
select *, sum(cmqty) over (partition by sm order by [day]) as cmqty_filled
from cte
Your table structure
create table #yoursum ([day] int, sku_nbr varchar(10), CMQTY int)
insert into #yoursum
([Day] , Sku_Nbr , CMQTY ) values
( 1 ,'f45', 0 )
,( 2 ,'f45', 2 )
,( 3 ,'f45', 0 )
,( 4 ,'f45', 7 )
,( 5 ,'f45', 0 )
,( 6 ,'f45', 0 )
,( 7 ,'f45', 0 )
,( 8 ,'f45', 13 )
,( 9 ,'f45', 15 )
,( 10 ,'f45', 21 )
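If the temp table ever holds more than one SKU, the same idea works per SKU by adding the SKU to both window clauses; a sketch under that assumption:
;with cte as (
    select *, sm = sum(case when cmqty > 0 then 1 else 0 end)
                  over (partition by sku_nbr order by [day])
    from #yoursum
)
select [day], sku_nbr,
       sum(cmqty) over (partition by sku_nbr, sm order by [day]) as cmqty
from cte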
For fun, another approach. First for some sample data:
IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1
(
[day] int NOT NULL,
[Sku Nbr] varchar(5) NOT NULL,
CMQTY int NOT NULL,
CONSTRAINT pk_t1 PRIMARY KEY CLUSTERED([day] ASC)
);
INSERT #t1 VALUES
(1 , 'f45', 0),
(2 , 'f45', 2),
(3 , 'f45', 0),
(4 , 'f45', 7),
(5 , 'f45', 0),
(6 , 'f45', 0),
(7 , 'f45', 0),
(8 , 'f45', 13),
(9 , 'f45', 15),
(10, 'f45', 21);
And the solution:
DECLARE #RunningTotal int = 0;
UPDATE #t1
SET #RunningTotal = CMQTY = IIF(CMQTY = 0, #RunningTotal, CMQTY)
FROM #t1 WITH (TABLOCKX)
OPTION (MAXDOP 1);
Results:
day Sku Nbr CMQTY
---- ------- ------
1 f45 0
2 f45 2
3 f45 2
4 f45 7
5 f45 7
6 f45 7
7 f45 7
8 f45 13
9 f45 15
10 f45 21
This approach is referred to as the local-variable update, or "Quirky Update". You can read more about it here: http://www.sqlservercentral.com/articles/T-SQL/68467/
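For completeness, a set-based alternative over the same #t1 sample data that does not depend on physical update order; this is only a sketch:
SELECT t.[day], t.[Sku Nbr],
       COALESCE(x.CMQTY, t.CMQTY) AS CMQTY
FROM #t1 AS t
OUTER APPLY (
    -- most recent non-zero value at or before the current day for this SKU
    SELECT TOP (1) p.CMQTY
    FROM #t1 AS p
    WHERE p.[Sku Nbr] = t.[Sku Nbr]
      AND p.[day] <= t.[day]
      AND p.CMQTY > 0
    ORDER BY p.[day] DESC
) AS x;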