Filter an ID Column against a range of values - tsql

I have the following SQL:
SELECT ',' + LTRIM(RTRIM(CAST(vessel_is_id as CHAR(2)))) + ',' AS 'Id'
FROM Vessels
WHERE ',' + LTRIM(RTRIM(CAST(vessel_is_id as varCHAR(2)))) + ',' IN (',1,2,3,4,5,6,')
Basically, I want to filter vessel_is_id against a variable list of integer values (which is passed into the stored proc as a varchar). The above SQL does not work: I do have rows in the table with a vessel_is_id of 1, but they are not returned.
Can someone suggest a better approach to this for me? Or, if the above approach is basically OK, what am I doing wrong?
EDIT:
Sample data
| vessel_is_id |
| ------------ |
| 1 |
| 2 |
| 5 |
| 3 |
| 1 |
| 1 |
So I want to return all of the above rows where vessel_is_id is in a variable filter, e.g. '1,3', which should return 4 records.
Cheers.
Jas.

IF OBJECT_ID(N'dbo.fn_ArrayToTable', N'TF') IS NOT NULL
    DROP FUNCTION [dbo].[fn_ArrayToTable]
GO
CREATE FUNCTION [dbo].[fn_ArrayToTable] (@array VARCHAR(MAX))
-- =============================================
-- Author: Dan Andrews
-- Create date: 04/11/11
-- Description: String to Table-Valued Function
--
-- =============================================
RETURNS @output TABLE (data VARCHAR(256))
AS
BEGIN
    DECLARE @pointer INT
    SET @pointer = CHARINDEX(',', @array)
    WHILE @pointer != 0
    BEGIN
        -- take everything before the next comma as one element
        INSERT INTO @output
        SELECT RTRIM(LTRIM(LEFT(@array, @pointer - 1)))
        -- drop the consumed element and its comma, then find the next delimiter
        SELECT @array = RIGHT(@array, LEN(@array) - @pointer),
               @pointer = CHARINDEX(',', @array)
    END
    -- the last element has no trailing delimiter, so add it after the loop
    IF LEN(@array) > 0
        INSERT INTO @output
        SELECT RTRIM(LTRIM(@array))
    RETURN
END
Which you may apply like:
SELECT * FROM dbo.fn_ArrayToTable('2,3,4,5,2,2')
and in your case:
SELECT LTRIM(RTRIM(CAST(vessel_is_id AS CHAR(2)))) AS 'Id'
FROM Vessels
WHERE LTRIM(RTRIM(CAST(vessel_is_id AS VARCHAR(2)))) IN (SELECT data FROM dbo.fn_ArrayToTable('1,2,3,4,5,6'))
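As an aside, if you are on SQL Server 2016 or later, the built-in STRING_SPLIT function can replace the user-defined splitter entirely. A minimal sketch, assuming vessel_is_id is an integer column:
SELECT vessel_is_id AS Id
FROM Vessels
WHERE vessel_is_id IN (SELECT CAST(value AS int) FROM STRING_SPLIT('1,2,3,4,5,6', ','));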

Since SQL Server doesn't have an array type, you may want to consider passing in a set of values as an XML parameter. You can then turn the XML into a rowset and join on it. The example below draws on the time-tested pubs database. Of course your client may or may not have an easy time generating the XML for the parameter value, but this approach is safe from SQL injection, which most "comma separated" value approaches are not.
declare @stateSelector xml
set @stateSelector = '<values>
<value>or</value>
<value>ut</value>
<value>tn</value>
</values>'
select * from authors
where state in ( select c.value('.', 'varchar(2)') from @stateSelector.nodes('//value') as t(c))
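Applied to the original question, a stored procedure could accept the id list the same way. A sketch only; the procedure name is hypothetical and it assumes vessel_is_id is an integer column:
CREATE PROCEDURE dbo.GetVesselsByIds   -- hypothetical name
    @idSelector xml
AS
BEGIN
    SELECT v.vessel_is_id AS Id
    FROM Vessels v
    WHERE v.vessel_is_id IN (
        -- shred the XML parameter into a rowset of ints and join on it
        SELECT c.value('.', 'int')
        FROM @idSelector.nodes('//value') AS t(c)
    );
END
A call would then look like: EXEC dbo.GetVesselsByIds @idSelector = '<values><value>1</value><value>3</value></values>';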

PostGIS returns record as datatype. This is unexpected

I have this query
WITH buffered AS (
SELECT
ST_Buffer(geom , 10, 'endcap=round join=round') AS geom,
id
FROM line),
hexagons AS (
SELECT
ST_HexagonGrid(10, buffered.geom) AS hex,
buffered.id
FROM buffered
) SELECT * FROM hexagons;
This gives the datatype record in the column hex. This is unexpected. I expect geometry as a datatype. Why is that?
According to the documentation, the function ST_HexagonGrid returns a setof record. These records do, however, contain a geometry attribute called geom, so in order to access the geometry of the record you have to wrap the variable in parentheses () and reference the attribute with a dot ., e.g.
SELECT (hex).geom FROM hexagons;
or just fetch all attributes using * (in this case i, j and geom):
SELECT (hex).* FROM hexagons;
Demo (PostGIS 3.1):
WITH j (hex) AS (
SELECT
ST_HexagonGrid(
10,ST_Buffer('LINESTRING(-105.55 41.11,-115.48 37.16,-109.29 29.38,-98.34 27.13)',1))
)
SELECT ST_AsText((hex).geom,2) FROM j;
st_astext
----------------------------------------------------------------------------------------
POLYGON((-130 34.64,-125 25.98,-115 25.98,-110 34.64,-115 43.3,-125 43.3,-130 34.64))
POLYGON((-115 25.98,-110 17.32,-100 17.32,-95 25.98,-100 34.64,-110 34.64,-115 25.98))
POLYGON((-115 43.3,-110 34.64,-100 34.64,-95 43.3,-100 51.96,-110 51.96,-115 43.3))
POLYGON((-100 34.64,-95 25.98,-85 25.98,-80 34.64,-85 43.3,-95 43.3,-100 34.64))
As ST_HexagonGrid returns a setof record, you can access the record attributes using a LATERAL join as described here (a LATERAL sketch applied to the original query follows the output below), or just call the function in the FROM clause:
SELECT i,j,ST_AsText(geom,2) FROM
ST_HexagonGrid(
10,ST_Buffer('LINESTRING(-105.55 41.11,-115.48 37.16,-109.29 29.38,-98.34 27.13)',1));
i | j | st_astext
----+---+----------------------------------------------------------------------------------------
-8 | 2 | POLYGON((-130 34.64,-125 25.98,-115 25.98,-110 34.64,-115 43.3,-125 43.3,-130 34.64))
-7 | 1 | POLYGON((-115 25.98,-110 17.32,-100 17.32,-95 25.98,-100 34.64,-110 34.64,-115 25.98))
-7 | 2 | POLYGON((-115 43.3,-110 34.64,-100 34.64,-95 43.3,-100 51.96,-110 51.96,-115 43.3))
-6 | 2 | POLYGON((-100 34.64,-95 25.98,-85 25.98,-80 34.64,-85 43.3,-95 43.3,-100 34.64))
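For completeness, here is a LATERAL version applied to the original query. A sketch only; it assumes the line table with columns geom and id from the question:
WITH buffered AS (
    SELECT ST_Buffer(geom, 10, 'endcap=round join=round') AS geom, id
    FROM line
)
-- each buffered geometry is expanded into one row per hexagon
SELECT b.id, h.i, h.j, h.geom
FROM buffered b
CROSS JOIN LATERAL ST_HexagonGrid(10, b.geom) AS h;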
Further reading: How to divide world into cells (grid)

DB2: Converting varchar to money

I have two varchar(64) columns that hold decimal (money) values in this case, say COLUMN1 and COLUMN2. I need to create a WHERE clause that says this:
COLUMN1 < COLUMN2
I believe I have to convert these two varchar columns to a different data type to compare them like that, but I'm not sure how to go about it. I tried a straightforward CAST:
CAST(COLUMN1 AS DECIMAL(9,2)) < CAST(COLUMN2 AS DECIMAL(9,2))
But I should have known it wouldn't be that easy. Any help is appreciated. Thanks!
You can create a UDF like this to check which values can't be cast to DECIMAL
CREATE OR REPLACE FUNCTION IS_DECIMAL(i VARCHAR(64)) RETURNS INTEGER
CONTAINS SQL
--ALLOW PARALLEL -- can use this on Db2 11.5 or above
NO EXTERNAL ACTION
DETERMINISTIC
BEGIN
DECLARE NOT_VALID CONDITION FOR SQLSTATE '22018';
DECLARE EXIT HANDLER FOR NOT_VALID RETURN 0;
RETURN CASE WHEN CAST(i AS DECIMAL(31,8)) IS NOT NULL THEN 1 END;
END
For example
CREATE TABLE S ( C VARCHAR(32) );
INSERT INTO S VALUES ( ' 123.45 '),('-00.12'),('£546'),('12,456.88');
SELECT C FROM S WHERE IS_DECIMAL(c) = 0;
would return
C
---------
£546
12,456.88
It really is that easy...this works fine...
select cast('10.15' as decimal(9,2)) - 1
from sysibm.sysdummy1;
You've got something besides a valid numerical character in your data..
And it's something besides leading or trailing whitespace...
Try the following...
select *
from table
where translate(column1, ' ','0123456789.')
<> ' '
or translate(column2, ' ','0123456789.')
<> ' '
That will show you the rows with alpha characters...
If the above doesn't return anything, then you've probably got a string with double decimal points or something...
You could use a regex to find those.
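On Db2 11.1 or later, a regular-expression check is one way to do that. A sketch only; the pattern accepts an optional sign, digits and at most one decimal point, so anything else is flagged:
SELECT *
FROM table
WHERE NOT REGEXP_LIKE(LTRIM(RTRIM(column1)), '^-?[0-9]+(\.[0-9]+)?$')
   OR NOT REGEXP_LIKE(LTRIM(RTRIM(column2)), '^-?[0-9]+(\.[0-9]+)?$');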
There is a built-in ability to do this without UDFs.
The xmlcast function below does "safe" casting between (var)char and decfloat (you may use as double or as decimal(X, Y) instead, if you want). It returns NULL if it's impossible to cast.
You may use such an expression twice in the WHERE clause.
SELECT
S
, xmlcast(xmlquery('if ($v castable as xs:decimal) then xs:decimal($v) else ()' passing S as "v") as decfloat) D
FROM (VALUES ( ' 123.45 '),('-00.12'),('£546'),('12,456.88')) T (S);
|S |D |
|---------|------------------------------------------|
| 123.45 |123.45 |
|-00.12 |-0.12 |
|£546 | |
|12,456.88| |
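Used twice in the WHERE clause as suggested above, it might look like this (a sketch; YOUR_TABLE, COLUMN1 and COLUMN2 stand in for the real names):
SELECT *
FROM YOUR_TABLE
WHERE xmlcast(xmlquery('if ($v castable as xs:decimal) then xs:decimal($v) else ()'
                       passing COLUMN1 as "v") as decfloat)
    < xmlcast(xmlquery('if ($v castable as xs:decimal) then xs:decimal($v) else ()'
                       passing COLUMN2 as "v") as decfloat);
Rows where either value cannot be cast produce NULL and simply drop out of the comparison.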

Get cut-down JSON in a query

I have a column in the database which contains JSON values like:
{"key-1": "val-1", "key-2": "val-2", "key-3": "val-3"}
By query like..
SELECT column->>'key-1' FROM table;
I can get my val-1.
Is there a way, in a SQL query, to get a single key and its value as JSON from an existing JSON value?
I want to get result like:
{"key-1": "val-1"}
from
{"key-1": "val-1", "key-2": "val-2", "key-3": "val-3"}
using sql query.
Use the ampersand operator, &, e.g.
Live test: https://www.db-fiddle.com/f/9izCEH75JhwVDvsGvsZomG/0
with the_table as
(
select '{"key-1": "val-1", "key-2": "val-2", "key-3": "val-3"}'::jsonb as d
)
select d & 'key-1' as j from the_table
Output:
| j |
| ----------------- |
| {"key-1":"val-1"} |
Just kidding :) Create a function that extracts the desired key value pair, and then create your own user-defined operator for it.
create or replace function extract_one_jsonb(j jsonb, key text)
returns jsonb
as
$$
select jsonb_build_object(key, j->key)
$$ language sql;
create operator & (
leftarg = jsonb,
rightarg = text,
procedure = extract_one_jsonb
);
Of course you can just use a function, or if creating a user-defined operator is not an option:
with the_table as
(
select '{"key-1": "val-1", "key-2": "val-2", "key-3": "val-3"}'::jsonb as d
)
select extract_one_jsonb(d, 'key-1') as j from the_table
Output:
| j |
| ----------------- |
| {"key-1":"val-1"} |
If extracting a key/value pair from jsonb is done many times, it's convenient to have an operator for it, e.g. &. Postgres is pretty flexible when you want to create your own operator; this one can be created too: ->>>.
Live test: https://www.db-fiddle.com/f/9izCEH75JhwVDvsGvsZomG/1
create operator ->>> (
leftarg = jsonb,
rightarg = text,
procedure = extract_one_jsonb
);
Output:
| j |
| ----------------- |
| {"key-1":"val-1"} |
->> is already used by Postgres: https://www.postgresql.org/docs/11/functions-json.html
You can create ->>> instead. ->>> looks more like an extraction operator than the ampersand &. Besides, it reads well even when you attach it directly to the source field (that is, without spaces):
with the_table as
(
select '{"key-1": "val-1", "key-2": "val-2", "key-3": "val-3"}'::jsonb as d
)
select d->>>'key-1' as j from the_table
I tried the following too; it works, and it looks like a pair of scissors (for cutting): %>
select d%>'key-1' as j from the_table
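For reference, %> can be defined exactly like the operators above, reusing the same extract_one_jsonb function (a sketch):
create operator %> (
    leftarg = jsonb,
    rightarg = text,
    procedure = extract_one_jsonb
);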
The only thing I can think of is to get the key/value pair and assemble that back into a single JSON value:
select jsonb_build_object(j.k, j.v)
from the_table t, jsonb_each(t.json_col) as j(k,v)
where j.k = 'key-1'
and ... more conditions ...;
Online example: https://rextester.com/VGSX43955

Split a string and populate a table for all records in table in SQL Server 2008 R2

I have a table EmployeeMoves:
| EmployeeID | CityIDs
+------------------------------
| 24 | 23,21,22
| 25 | 25,12,14
| 29 | 1,2,5
| 31 | 7
| 55 | 11,34
| 60 | 7,9,21,23,30
I'm trying to figure out how to expand the comma-delimited values from the EmployeeMoves.CityIDs column to populate an EmployeeCities table, which should look like this:
| EmployeeID | CityID
+------------------------------
| 24 | 23
| 24 | 21
| 24 | 22
| 25 | 25
| 25 | 12
| 25 | 14
| ... and so on
I already have a function called SplitADelimitedList that splits a comma-delimited list of integers into a rowset. It takes the delimited list as a parameter. The SQL below will give me a table with split values under the column Value:
select value from dbo.SplitADelimitedList ('23,21,1,4');
| Value
+-----------
| 23
| 21
| 1
| 4
The question is: How do I populate EmployeeCities from EmployeeMoves with a single (even if complex) SQL statement using the comma-delimited list of CityIDs from each row in the EmployeeMoves table, but without any cursors or looping in T-SQL? I could have 100 records in the EmployeeMoves table for 100 different employees.
This is how I tried to solve the problem. It seems to work and performs very quickly.
INSERT INTO EmployeeCities
SELECT
em.EmployeeID,
c.Value
FROM EmployeeMoves em
CROSS APPLY dbo.SplitADelimitedList(em.CityIDs) c;
UPDATE 1:
This update provides the definition of the user-defined split function. It is used in the above query to split a comma-delimited list into a table of integer values.
CREATE FUNCTION dbo.fn_SplitADelimitedList1
(
    @String NVARCHAR(MAX)
)
RETURNS @SplittedValues TABLE(
    Value INT
)
AS
BEGIN
    DECLARE @SplitLength INT
    DECLARE @Delimiter VARCHAR(10)
    SET @Delimiter = ',' --set this to the delimiter you are using
    WHILE len(@String) > 0
    BEGIN
        SELECT @SplitLength = (CASE charindex(@Delimiter, @String)
                                   WHEN 0 THEN datalength(@String) / 2
                                   ELSE charindex(@Delimiter, @String) - 1
                               END)
        INSERT INTO @SplittedValues
        SELECT cast(substring(@String, 1, @SplitLength) AS INTEGER)
        WHERE ltrim(rtrim(isnull(substring(@String, 1, @SplitLength), ''))) <> '';
        SELECT @String = (CASE ((datalength(@String) / 2) - @SplitLength)
                              WHEN 0 THEN ''
                              ELSE right(@String, (datalength(@String) / 2) - @SplitLength - 1)
                          END)
    END
    RETURN
END
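A quick standalone check of the splitter (the input values are just an example):
SELECT Value FROM dbo.fn_SplitADelimitedList1('23,21,1,4');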
Preface
This is not the right way to do it. You shouldn't create comma-delimited lists in SQL Server. This violates first normal form, which should sound like an unbelievably vile expletive to you.
It is trivial for a client-side application to select rows of employees and related cities and display this as a comma-separated list. It shouldn't be done in the database. Please do everything you can to avoid this kind of construction in the future. If at all possible, you should refactor your database.
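For what it's worth, the normalized shape this preface argues for is exactly the EmployeeCities table with one city per row. A minimal sketch; the key and constraint names are my assumption:
CREATE TABLE dbo.EmployeeCities (
    EmployeeID INT NOT NULL,
    CityID     INT NOT NULL,
    -- one row per employee/city pair, duplicates rejected
    CONSTRAINT PK_EmployeeCities PRIMARY KEY (EmployeeID, CityID)
);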
The Right Answer
To get the list of cities, properly expanded, from a table containing lists of cities, you can do this:
INSERT dbo.EmployeeCities
SELECT
M.EmployeeID,
C.CityID
FROM
EmployeeMoves M
CROSS APPLY dbo.SplitADelimitedList(M.CityIDs) C
;
The Wrong Answer
I wrote this answer due to a misunderstanding of what you wanted: I thought you were trying to query against properly-stored data to produce a list of comma-separated CityIDs. But I realize now you wanted the reverse: to query the list of cities using existing comma-separated values already stored in a column.
WITH EmployeeData AS (
SELECT
M.EmployeeID,
M.CityID
FROM
dbo.SplitADelimitedList ('23,21,1,4') C
INNER JOIN dbo.EmployeeMoves M
ON Convert(int, C.Value) = M.CityID
)
SELECT
E.EmployeeID,
CityIDs = Substring((
SELECT ',' + Convert(varchar(max), CityID)
FROM EmployeeData C
WHERE E.EmployeeID = C.EmployeeID
FOR XML PATH (''), TYPE
).value('.[1]', 'varchar(max)'), 2, 2147483647)
FROM
(SELECT DISTINCT EmployeeID FROM EmployeeData) E
;
Part of my difficulty in understanding is that your question is a bit disorganized. Next time, please clearly label your example data and show what you have, and what you're trying to work toward. Since you put the data for EmployeeCities last, it looked like it was what you were trying to achieve. It's not a good use of people's time when questions are not laid out well.

How do you exclude a column from showing up if there is no value?

Question about a query I'm trying to write in SQL Server Management Studio 2008. I am pulling two rows: the first row is the header information, and the second row is the information for a certain line item. Keep in mind, the actual header information reads as "Column 0, 1, 2, 3, 4,... etc."
The data looks something like this:
ROW 1: Model # | Item Description| XS | S | M | L | XL|
ROW 2: 3241 | Gray Sweatshirt| | 20 | 20 | 30 | |
Basically this shows that there are 20 smalls, 20 mediums, and 30 larges of this particular item. There are no XS's or XL's.
I want to create a subquery that puts this information in one row but, at the same time, excludes the sizes with a blank quantity, as shown under the XS and XL sizes.
I want it to look like this when all is said and done:
ROW 1: MODEL #| 3241 | ITEM DESCRIPTION | Gray Sweatshirt | S | 20 | M | 20 | L | 30 |
Notice there are no XS or XL sizes included. How do I make it so those columns do not appear?
Since you posted neither your query nor your table structure, I'll guess that the table has columns Id, Description and Size. If so, you could do this, just replacing the table and column names with your own:
DECLARE @columns varchar(8000)
SELECT @columns = COALESCE (@columns + ',[' + cast(Size as varchar) + ']', '[' + cast(Size as varchar) + ']' )
FROM YourTableName
GROUP BY Size
HAVING COUNT(Size) > 0   -- only sizes that actually have a quantity
DECLARE @query varchar(8000) = 'SELECT Id, Description, '
+ @columns +'
FROM
(SELECT Id, Description, Size
FROM YourTableName) AS Source
PIVOT
(
COUNT(Size)
FOR Size IN ('+ @columns +')
) AS Pvt'
EXEC(@query)
Anyhow, I also agree with @MichaelFredickson. I have implemented this pivot solution, yet it is absolutely better to let the presentation layer take care of this after just pulling the raw data from SQL. If not, you end up processing the data twice: once in SQL to build the table and again in the presentation layer when reading and displaying the values with your C#/VB/other code.