I need a select result to look like this
UK
Europe
USA
The values are fixed (no table is needed). The order is important, so ORDER BY 1 does not work.
What is the SQL query (as simple as possible) that will build this result?
You could use VALUES lists:
VALUES ('UK'), ('Europe'), ('USA');
column1
---------
UK
Europe
USA
(3 rows)
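A bare VALUES list happens to come back in the order it is written, but PostgreSQL only guarantees an order when there is an ORDER BY. If you want the order to be explicit, one option (a sketch; the ord column and the t alias are illustrative names of my own) is to carry a sort key:

select country
from (
    values (1, 'UK'), (2, 'Europe'), (3, 'USA')
) as t (ord, country)
order by ord;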
I have a table with employment records. It has an employee code, a status, and the date when the record was updated. Like this:
Employee | Status  | Date
---------|---------|-----------
001      | termed  | 01/01/2020
001      | rehired | 02/02/2020
001      | termed  | 03/03/2020
001      | rehired | 04/04/2021
Problem: I need to get the length of the period the employee was working for the company, and if it was less than a year, not display that record.
There could be multiple hire/rehire cycles for each employee; 10-20 is normal.
So I'm thinking about two separate selects into two tables, and then looking for the closest termination date in table 2 after each hire date in table 1. But that seems like an overcomplicated idea.
Is there a better way?
Many approaches, but something like this could work:
SELECT
    Totals.employee,
    SUM(DaysWorked) AS TotalDaysWorked
FROM
(
    SELECT
        a1.employee,
        -- days from each (re)hire to the next termination,
        -- or to today if there is no later termination
        ISNULL(DATEDIFF(DD, a1.[Date],
            (SELECT TOP 1 [Date]
             FROM aaa a2
             WHERE a2.employee = a1.employee
               AND a2.[Date] > a1.[Date]
               AND a2.[status] = 'termed'
             ORDER BY [Date])
        ), DATEDIFF(DD, a1.[Date], GETDATE())) AS DaysWorked
    FROM aaa a1
    WHERE a1.[Status] <> 'termed'
) Totals
GROUP BY Totals.employee
HAVING SUM(DaysWorked) >= 365
A CROSS JOIN is also an option and perhaps more efficient. In this example, replace aaa with the actual table name. The ISNULL handles an employee who is still working (a rehire with no later termination).
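On SQL Server 2012 or later, a window function can replace the correlated subquery. A minimal sketch against the same hypothetical aaa table, assuming termed/rehired records strictly alternate for each employee:

SELECT employee, SUM(DaysWorked) AS TotalDaysWorked
FROM (
    SELECT
        employee,
        [Status],
        -- days from this row to the next status change,
        -- falling back to today for the most recent row
        DATEDIFF(DD, [Date],
                 ISNULL(LEAD([Date]) OVER (PARTITION BY employee ORDER BY [Date]),
                        GETDATE())) AS DaysWorked
    FROM aaa
) t
WHERE [Status] <> 'termed'   -- keep only the (re)hire-to-termination spans
GROUP BY employee
HAVING SUM(DaysWorked) >= 365;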
My question is the following:
After a first query, I have a table with a single column of bigints, for example:
id
----
1
2
3
4
I would like to convert this column into a PostgreSQL array, which for this example would give {1,2,3,4}.
Any ideas about how to do that?
Thank you for all your answers and have a nice day,
best regards
Use aggregation:
select array_agg(id)
from the_table;
If you need a specific sort order:
select array_agg(id order by id)
from the_table;
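If the ids come from a previous query rather than a stored table, you can aggregate over it directly as a derived table; a sketch in which the inner select is a hypothetical stand-in for your first query:

select array_agg(id order by id)
from (
    select id            -- your first query goes here
    from some_table
    where id <= 4
) q;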
Here's my use case:
We have an analytics-like tool that counts the number of users per hour on our system. Now the business would like to have the number of unique users. As our number of users is very small, we will do that using:
SELECT count(*)
FROM (
    SELECT DISTINCT user_id
    FROM unique_users
    WHERE date BETWEEN x AND y
) distinct_users
i.e., we will store the (user_id, date) pair and count unique users using DISTINCT (user_id is not a foreign key, as users are not logged in; it's just a unique identifier generated by the system, some kind of UUIDv4).
This works great in terms of performance for our order of magnitude of data.
Now the problem is importing the legacy data into it.
I would like to know the SQL query to transform
date | number_of_users
12:00 | 2
13:00 | 4
into
date | user_id
12:00 | 1
12:00 | 2
13:00 | 1
13:00 | 2
13:00 | 3
13:00 | 4
(as long as the "count but not unique" returns the same number as before, we're fine if the "unique users count" is a bit off)
Of course, I could write a Python script, but I was wondering if there is a SQL trick to do that, using generate_series or something related.
generate_series() is indeed the way to go:
with data (date, number_of_users) as (
    values
        ('12:00', 2),
        ('13:00', 4)
)
select d.date, i.n
from data d
    cross join lateral generate_series(1, d.number_of_users) i (n)
order by d.date, i.n;
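To load the expanded rows into the real table, the same lateral join can feed an INSERT; a sketch assuming hypothetical names legacy_counts(date, number_of_users) for the source and unique_users(date, user_id) for the target:

insert into unique_users (date, user_id)
select l.date, i.n   -- cast i.n (e.g. i.n::text) if user_id is not an integer column
from legacy_counts l
    cross join lateral generate_series(1, l.number_of_users) i (n);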
So I have a table that looks like this:
Item_Id  Value  Type
001      300    B2B
001      450    (blank)
I am trying to make B2B and P2P columns so my result would look like this:
Item_Id  B2B  (blank)
001      300  450
So instead of taking up two rows, it now only takes up one. The issue is that the values are not static, and I need to account for that. A dynamic query plus PIVOT is slightly out of my league but not impossible. I'm hoping I can use a CASE statement or some other way to work around this... any help is greatly appreciated!
I would also like to rename the blank column... I also can't seem to get PIVOT to work on Type because of that blankety blank!
Try this; a normal static PIVOT should work. I'm guessing you want no column name for the blank type, so I'm inserting an empty string.
CREATE TABLE #temp (Item_Id varchar(10), [Value] int, [Type] varchar(10))

INSERT INTO #temp
SELECT '001', 300, 'B2B'
UNION ALL
SELECT '001', 450, ''

SELECT *
FROM (SELECT Item_Id, [Value],
             -- an empty string cannot be a pivot column name,
             -- so map it to a single space to match [ ] below
             CASE WHEN [Type] = '' THEN ' ' ELSE [Type] END AS [Type]
      FROM #temp) AS p
PIVOT (MAX([Value]) FOR [Type] IN ([B2B], [ ])) piv
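If the set of Type values really isn't static, the IN list can be built dynamically; a minimal sketch assuming the same #temp table and SQL Server 2017+ for STRING_AGG (on older versions, FOR XML PATH does the same job):

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- build a bracket-quoted, comma-separated list of the distinct types
SELECT @cols = STRING_AGG(QUOTENAME([Type]), ', ')
FROM (SELECT DISTINCT CASE WHEN [Type] = '' THEN ' ' ELSE [Type] END AS [Type]
      FROM #temp) t;

SET @sql = N'SELECT *
FROM (SELECT Item_Id, [Value],
             CASE WHEN [Type] = '''' THEN '' '' ELSE [Type] END AS [Type]
      FROM #temp) AS p
PIVOT (MAX([Value]) FOR [Type] IN (' + @cols + ')) piv;';

EXEC sp_executesql @sql;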
How do I split a comma-separated list field in a row into multiple rows, one value per row?
For example,
ID | Colour
------------
1 | 1,2,3,4,5
to:
ID | Colour
------------
1 | 1
1 | 2
1 | 3
1 | 4
1 | 5
The usual way to solve this is to create a split function. You can grab one from Google, for example this one from SQL Team. Once you have created the function, you can use it like this:
create table colours (id int, colour varchar(255))
insert colours values (1, '1,2,3,4,5')

select colours.id
     , split.data
from colours
cross apply dbo.Split(colours.colour, ',') as split
This prints:
id data
1 1
1 2
1 3
1 4
1 5
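As an aside, on SQL Server 2016 and later the built-in STRING_SPLIT function makes a hand-rolled split function unnecessary; against the same colours table:

select colours.id
     , split.value as data   -- STRING_SPLIT returns its output in a column named value
from colours
cross apply string_split(colours.colour, ',') as split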
Another possible workaround is to use XML (assuming you are working with SQL Server 2005 or greater):
DECLARE @s TABLE
(
    ID INT
    , COLOUR VARCHAR(MAX)
)
INSERT INTO @s
VALUES ( 1, '1,2,3,4,5' )

SELECT s.ID, T.Colour.value('.', 'int') AS Colour
FROM ( SELECT ID
            , CONVERT(XML, '<row>' + REPLACE(Colour, ',', '</row><row>') + '</row>') AS Colour
       FROM @s
     ) s
CROSS APPLY s.Colour.nodes('row') AS T(Colour)
I know this is an older post but thought I'd add an update. Tally Table and cteTally-based splitters all have a major problem: they use concatenated delimiters, and that kills their speed when the elements get wider and the strings get longer.
I've fixed that problem and wrote an article about it, which may be found at the following URL: http://www.sqlservercentral.com/articles/Tally+Table/72993/
The new method blows the doors off of all While Loop, Recursive CTE, and XML methods for VARCHAR(8000).
I'll also tell you that a fellow by the name of "Peter" made an improvement even to that code (in the discussion for the article). The article is still interesting, and I'll be updating the attachments with Peter's enhancements in the next day or two. Between my major enhancement and the tweak Peter made, I don't believe you'll find a faster T-SQL-only solution for splitting VARCHAR(8000). I've also solved the problem for this breed of splitters for VARCHAR(MAX) and am in the process of writing an article for that as well.