Selecting/inner joining multiple values from one table into a single row - T-SQL

I've been searching Google all afternoon and I'm not even sure how to word this question so that a search engine would understand it. I have two tables (the tables and the database setup were not my doing). One holds the details of a product:
name, value, weight, weight unit of measure id, length, width, height, dimensions unit of measure id, description
The other holds the units of measure and their ID values, e.g.:
id, unit name
1, cm
2, inches
3, lbs
4, kg
5, feet
I need to select from the products table, but replace the unit of measure IDs with the actual unit of measure text from the other table.
The data looks like:
sm widgets, $10, 20, 3, 10, 10, 10, 1, small widgets
I want the results to come out like:
sm widget, $10, 20, lbs, 10, 10, 10, cm, small widgets
lg widget, $15, 50, kg, 10, 10, 10, inches, large widgets
Thanks for any insight you can give me.

I think you just need to join the tables and return the description:
select
    p.name,
    p.value,
    p.weight,
    c.[unitname] as WeightUnitName,
    p.length,
    p.width,
    p.height,
    c2.[unitname] as DimensionUnitName,
    p.[description]
from products p
inner join unitcodetable c
    on c.id = p.[weight unit of measure id]
inner join unitcodetable c2
    on c2.id = p.[dimensions unit of measure id]
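The pattern generalizes: join the same lookup table twice, under two different aliases, one alias per foreign-key column. Here is a minimal runnable sketch of the idea using Python's built-in sqlite3; the table and column names are simplified stand-ins for the schema above, not the real ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE units (id INTEGER PRIMARY KEY, unitname TEXT);
    INSERT INTO units VALUES (1, 'cm'), (2, 'inches'), (3, 'lbs'), (4, 'kg');

    CREATE TABLE products (
        name TEXT, value TEXT, weight REAL, weight_uom_id INTEGER,
        length REAL, width REAL, height REAL, dim_uom_id INTEGER
    );
    INSERT INTO products VALUES
        ('sm widgets', '$10', 20, 3, 10, 10, 10, 1),
        ('lg widgets', '$15', 50, 4, 10, 10, 10, 2);
""")

# Join the lookup table twice -- once per foreign key -- under two aliases,
# so each ID column is replaced by its unit text.
rows = conn.execute("""
    SELECT p.name, p.value, p.weight, wu.unitname,
           p.length, p.width, p.height, du.unitname
    FROM products p
    JOIN units wu ON wu.id = p.weight_uom_id
    JOIN units du ON du.id = p.dim_uom_id
    ORDER BY p.name
""").fetchall()

for r in rows:
    print(r)
```

Each product row comes back with 'lbs'/'kg' and 'cm'/'inches' in place of the numeric IDs.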


Calculating a score for each road in OpenStreetMap produces unexpected results. What am I missing?

I have a Postgres database with the PostGIS extension installed, filled with OpenStreetMap data.
With the following SQL statement:
SELECT
    l.osm_id,
    sum(
        st_area(st_intersection(ST_Buffer(l.way, 30), p.way))
        /
        st_area(ST_Buffer(l.way, 30))
    ) AS green_fraction
FROM planet_osm_line AS l
INNER JOIN planet_osm_polygon AS p
    ON ST_Intersects(l.way, ST_Buffer(p.way, 30))
WHERE p."natural" IN ('water') OR p.landuse IN ('forest')
GROUP BY l.osm_id;
I calculate a "green" score.
My goal is to create a "green" score for each osm_id, i.e. how much of a road is near water, a forest, or something similar.
For example, a road that is enclosed by a park would have a score of 1, while a road that only runs along a river for a short stretch would have a score of, say, 0.4.
Or so is my expectation.
But inspecting the result of this calculation, I sometimes get values like
212.11701212511463 for the road with OSM ID -647522
and 82 for the road with OSM ID -6497265.
I do get values between 0 and 1 too, but I don't understand why I also get such huge values.
What am I missing? I was expecting values between 0 and 1.
The SUM adds up one intersection fraction per matching polygon, so a road that intersects several (possibly overlapping) polygons can score well above 1. Using a custom unique ID that you must populate, the query can instead union the possibly overlapping polygons first and take a single fraction:
SELECT
    l.uid,
    st_area(
        ST_Union(
            st_intersection(ST_Buffer(l.way, 30), p.way))
    ) / st_area(ST_Buffer(l.way, 30)) AS green_fraction
FROM planet_osm_line AS l
INNER JOIN planet_osm_polygon AS p
    ON st_dwithin(l.way, p.way, 30)
WHERE p."natural" IN ('water') OR p.landuse IN ('forest')
GROUP BY l.uid;
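The effect can be shown in one dimension, with intervals standing in for buffer and polygon geometries (plain Python, no PostGIS): summing one intersection fraction per polygon double-counts any overlap, while unioning the polygons first keeps the fraction at or below 1.

```python
# 1-D analogue of the area computation: the "road buffer" is the interval
# [0, 10]; two "green polygons" overlap it (and each other on [2, 8]).
def intersection_length(a, b):
    """Length of the overlap between two (lo, hi) intervals."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(0.0, hi - lo)

road = (0.0, 10.0)
polygons = [(0.0, 8.0), (2.0, 10.0)]

road_len = road[1] - road[0]

# Per-polygon fractions summed, as in the problematic query: exceeds 1
# because the overlap [2, 8] is counted twice.
summed = sum(intersection_length(road, p) / road_len for p in polygons)

# Union the polygons first (min/max is valid here because they overlap),
# then take a single fraction: never exceeds 1.
union = (min(p[0] for p in polygons), max(p[1] for p in polygons))
unioned = intersection_length(road, union) / road_len

print(summed)   # 1.6
print(unioned)  # 1.0
```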

How to create multiple rows from a single row in Redshift SQL

I want to expand a single row to multiple rows in my table based on a column in the table in AWS Redshift.
Here is my example table schema and rows:
CREATE TABLE test (
    start timestamp,  -- start time of the first slot
    slot_length int,  -- the length of the slots in minutes
    repeat int        -- how many slots will be there
);
INSERT INTO test (start, slot_length, repeat) VALUES
    ('2019-09-22T00:00:00', 90, 2),
    ('2019-09-21T15:30:00', 60, 3);
I want to expand these two rows into 5 based on the value of the "repeat" column. So any row will be expanded "repeat" times. The first expansion won't change anything. The subsequent expansions need to add "slot_length" to the "start" column. Here is the final list of rows I want to have in the end:
'2019-09-22 00:00:00', 90, 2 -- expanded from the first row
'2019-09-22 01:30:00', 90, 2 -- expanded from the first row
'2019-09-21 15:30:00', 60, 3 -- expanded from the second row
'2019-09-21 16:30:00', 60, 3 -- expanded from the second row
'2019-09-21 17:30:00', 60, 3 -- expanded from the second row
Can this be done via pure SQL in Redshift?
This SQL should serve your purpose:
select t.start
     , case when rpt.repeat > 1
            then dateadd(min, t.slot_length * (rpt.repeat - 1), t.start)
            else t.start
       end as new_start
     , t.slot_length
     , t.repeat
from schema.test t
join (select row_number() over () as repeat from schema.random_table) rpt
  on t.repeat >= rpt.repeat
order by t.slot_length desc, rpt.repeat;
Please note that the "random_table" in your schema should have at least as many rows as the maximum value in your "repeat" column.
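The numbers-table trick can be sketched with Python's built-in sqlite3, using a recursive CTE to generate the numbers 1..max(repeat) in place of row_number() over an existing table. Redshift's SQL differs in details (dateadd, timestamp types), so this illustrates the technique rather than being copy-paste Redshift code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (start TEXT, slot_length INT, repeat INT);
    INSERT INTO test VALUES
        ('2019-09-22 00:00:00', 90, 2),
        ('2019-09-21 15:30:00', 60, 3);
""")

max_repeat = conn.execute("SELECT MAX(repeat) FROM test").fetchone()[0]

# Generate the numbers 1..max_repeat, join them on t.repeat >= n so each
# row is copied `repeat` times, and shift copy n by slot_length * (n - 1)
# minutes.
rows = conn.execute("""
    WITH RECURSIVE n(n) AS (
        SELECT 1 UNION ALL SELECT n + 1 FROM n WHERE n < ?
    )
    SELECT datetime(t.start, '+' || (t.slot_length * (n.n - 1)) || ' minutes'),
           t.slot_length, t.repeat
    FROM test t
    JOIN n ON n.n <= t.repeat
    ORDER BY t.start, n.n
""", (max_repeat,)).fetchall()

for r in rows:
    print(r)
```

The two input rows expand to the five expected slots.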

Repeat same data certain number of times

I've run into an issue that I can't seem to solve without a lot of changes deep in the code, and I think there must be a simpler solution that I'm simply not aware of.
I have a table of product names, product locations and various statuses (from 1 to 10). I have data for all products and locations but only some of the statuses (for example, product X in city XX has data for categories 1 and 3, and product Y in city YY has data for categories 1 to 6).
I'd like to always display 10 repetitions of each product/location, with corresponding data (if there is any) or nulls. This makes the report I'm planning to create much easier to read and understand.
I'm using SSMS2017, on SQL Server 2016.
SELECT
[Product],
[Location],
[Category],
[Week1],
[Week2],
[Week3]
FROM MyView
Naturally it will only return data that I have, but I'd like to always return all 10 rows for each product/location combination (with nulls in Week columns if I have no data there).
Your question is not very clear, but I think my magic crystal ball gave me a good guess:
I think you are looking for LEFT JOIN and CROSS JOIN:
--Next time please create a stand-alone sample like this yourself
--I create 3 dummy tables with sample data
DECLARE @tblStatus TABLE(ID INT IDENTITY, StatusName VARCHAR(100));
INSERT INTO @tblStatus VALUES ('Status 1')
                             ,('Status 2')
                             ,('Status 3')
                             ,('Status 4')
                             ,('Status 5');
DECLARE @tblGroup TABLE(ID INT IDENTITY, GroupName VARCHAR(100));
INSERT INTO @tblGroup VALUES ('Group 1')
                            ,('Group 2')
                            ,('Group 3')
                            ,('Group 4')
                            ,('Group 5');
DECLARE @tblProduct TABLE(ID INT IDENTITY, ProductName VARCHAR(100), StatusID INT, GroupID INT);
INSERT INTO @tblProduct VALUES ('Product 1, Status 1, Group 2', 1, 2)
                              ,('Product 2, Status 1, Group 3', 1, 3)
                              ,('Product 3, Status 3, Group 4', 3, 4)
                              ,('Product 4, Status 3, Group 3', 3, 3)
                              ,('Product 5, Status 1, Group 5', 1, 5);
--This will return each status (independent of product values), together with the products (if there is a corresponding line)
SELECT s.StatusName
      ,p.*
FROM @tblStatus s
LEFT JOIN @tblProduct p ON s.ID = p.StatusID;
--This will first use CROSS JOIN to create an each-with-each cartesian product.
--The LEFT JOIN works as above
SELECT s.StatusName
      ,g.GroupName
      ,p.*
FROM @tblStatus s
CROSS JOIN @tblGroup g
LEFT JOIN @tblProduct p ON s.ID = p.StatusID AND g.ID = p.GroupID;
If this is not what you need, please try to set up an example like mine and provide the expected output.
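The core of the answer — CROSS JOIN builds the complete product/category scaffold, and LEFT JOIN hangs the existing data on it, leaving NULLs elsewhere — can be sketched with Python's built-in sqlite3. These are made-up miniature tables (three categories instead of ten) just to show the shape of the result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY);
    INSERT INTO categories VALUES (1), (2), (3);

    CREATE TABLE data (product TEXT, category INT, week1 INT);
    INSERT INTO data VALUES ('X', 1, 100), ('X', 3, 300);
""")

# CROSS JOIN every product with every category, then LEFT JOIN the real
# rows: combinations with no data come back with NULL in the data columns.
rows = conn.execute("""
    SELECT p.product, c.id, d.week1
    FROM (SELECT DISTINCT product FROM data) p
    CROSS JOIN categories c
    LEFT JOIN data d ON d.product = p.product AND d.category = c.id
    ORDER BY p.product, c.id
""").fetchall()

for r in rows:
    print(r)
# Category 2 has no data, so its week1 column is None (NULL).
```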

How to create two virtual columns by different conditions from repeating row data using SQL?

My database table has three columns: Date, Client, Value.
I want a new table with only three clients in it - "Technics", "Metal Inc", "Sunny Day" - and two virtual total columns: "Price" for August and "Price" for 2017.
What I tried so far and what I get: the "For August" total SUM goes into the next row, not into a new column. Why?
After all, my code says: SELECT SUM(Value) AS 'For August'.
Any ideas?
UNION will give you rows instead of columns; for this case you can use a correlated subquery. Try something like this:
SELECT
    t1.Client,
    SUM(t1.Value) AS 'For 2017',
    (SELECT SUM(t2.Value)
     FROM Test AS t2
     WHERE t2.Client = t1.Client
       AND t2.Date LIKE '%2017-08%') AS 'For August'
FROM Test AS t1
WHERE t1.Client LIKE '%Technics%'
   OR t1.Client LIKE '%Metal Inc%'
   OR t1.Client LIKE '%Sunny Day%'
GROUP BY t1.Client
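A common alternative that avoids the self-join entirely is conditional aggregation: one pass over the table, with the August total computed by a CASE expression inside SUM. A runnable sketch using Python's built-in sqlite3 and made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Test (Date TEXT, Client TEXT, Value REAL);
    INSERT INTO Test VALUES
        ('2017-08-01', 'Technics', 10),
        ('2017-09-01', 'Technics', 5),
        ('2017-08-15', 'Metal Inc', 7);
""")

# One scan: the yearly total is a plain SUM; the August total only adds
# rows whose Date falls in 2017-08, contributing 0 otherwise.
rows = conn.execute("""
    SELECT Client,
           SUM(Value) AS for_2017,
           SUM(CASE WHEN Date LIKE '2017-08%' THEN Value ELSE 0 END) AS for_august
    FROM Test
    WHERE Client IN ('Technics', 'Metal Inc', 'Sunny Day')
    GROUP BY Client
    ORDER BY Client
""").fetchall()

print(rows)
```

Both totals land in separate columns of the same row per client, which is exactly the shape the question asks for.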

Precedence/weight to a column using FREETEXTTABLE in dynamic T-SQL

I have dynamic SQL that performs paging and a full text search using CONTAINSTABLE, which works fine. The problem is that I would like to use FREETEXTTABLE but weight the rank of some columns over others.
Here is my original SQL and the ranking weight I would like to integrate
(I have changed names for privacy reasons):
SELECT * FROM
    (SELECT TOP 10 Things.ID,
            ROW_NUMBER() OVER (ORDER BY KEY_TBL.RANK DESC) AS Row
     FROM [Things]
     INNER JOIN CONTAINSTABLE([Things], (Features, Description, Address),
         'ISABOUT("cow" weight (.9), "cow" weight(.1))') AS KEY_TBL
         ON [Things].ID = KEY_TBL.[KEY]
     WHERE TypeID IN (91, 48, 49, 50, 51, 52, 53)
       AND dbo.FN_CalcDistanceBetweenLocations(51.89249, -8.493376,
               Latitude, Longitude) <= 2.5
     ORDER BY KEY_TBL.RANK DESC) x
WHERE x.Row BETWEEN 1 AND 10
Here is what I would like to integrate
select [key], sum(rnk) as weightRank
from
    (select
         Rank * 2.0 as rnk,
         [key]
     from freetexttable(Things, Address, 'cow')
     union all
     select
         Rank * 1.0 as rnk,
         [key]
     from freetexttable(Things, (Description, Features), 'cow')) as t
group by [key]
order by weightRank desc
Unfortunately, the algorithm used by the freetext engine (FREETEXTTABLE) has no way to specify the significance of the various input columns. If this is critical, you may need to consider using a different product for your freetext needs.
You can create a column with the concatenation of:
Less_important_field & More_important_field & More_important_field (2x)
This might look really stupid, but it's actually what BM25F does to simulate structured documents. The only downside of this hack is that you can't dynamically change the weights. It bloats up the table a bit, but not necessarily the index, which should only need term counts.
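A toy illustration of why the concatenation trick works: repeating a field's text in the indexed column multiplies that field's term frequency, which is what a frequency-based ranker responds to. This is not the actual FREETEXTTABLE ranking formula, just the underlying idea:

```python
# Duplicating the more important field in the concatenated column doubles
# its contribution to the term count, i.e. a crude field weight of 2.
def term_count(text, term):
    """Number of whitespace-separated occurrences of term in text."""
    return text.lower().split().count(term.lower())

address = "cow lane"           # more important field: weight 2
description = "a brown cow"    # less important field: weight 1

# Build the indexed column: address twice, description once.
indexed = " ".join([address] * 2 + [description])

print(term_count(indexed, "cow"))      # 3: address counted twice, description once
print(term_count(description, "cow"))  # 1
```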