I have a difficult task in T-SQL that I can't seem to find a simple way to do. I am trying to use CROSS APPLY STRING_SPLIT(sentence, ' '), but it only gives me one word per row, and I need the words back in pairs. Can you please help? Thank you.
Sample sentences:
I need to split strings using TSQL.
This approach is traditional, and is supported in all versions and editions of SQL Server.
Desired answer for the first sentence:
I need
to split
strings using
TSQL.
Desired answer for the second sentence:
This approach
is traditional
, and
is supported
in all
versions and
editions of
SQL Server.
Here you go:
First add a space before any comma (you want a comma treated as a word of its own), then split the string on each space into rows using a little JSON, then assign group numbers that pair the rows using modulo and LAG() OVER(), and finally aggregate based on those groups:
declare @s varchar(100)='This approach is traditional, and is supported in all versions and editions of SQL Server';
select Result = String_Agg(string,' ') within group (order by seq)
from (
select j.[value] string, Iif(j.[key] % 2 = 1, Lag(seq) over(order by seq) ,seq) gp, seq
from OpenJson(Concat('["',replace(Replace(@s,',',' ,'), ' ', '","'), '"]')) j
cross apply(values(Convert(tinyint,j.[key])))x(seq)
)x
group by gp;
Result:
This approach
is traditional
, and
is supported
in all
versions and
editions of
SQL Server
See Demo Fiddle
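Side note: if you are on SQL Server 2022 or later, STRING_SPLIT accepts a third enable_ordinal argument, so you can stay with the function you originally tried. A rough sketch of the same pairing idea with it (untested):
declare @s varchar(100)='This approach is traditional, and is supported in all versions and editions of SQL Server';
select Result = String_Agg([value], ' ') within group (order by ordinal)
from String_Split(Replace(@s, ',', ' ,'), ' ', 1)  -- 1 = return the ordinal column
group by (ordinal - 1) / 2                         -- pair word 1 with 2, word 3 with 4, ...
order by min(ordinal);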
I am a novice with Postgres queries. I am trying to pull a substring from each record of a column, based on a specific pair of keywords.
Suppose I want the substring from each record between the keywords 'start' and 'end'. The catch is that there can be multiple occurrences of 'start' and 'end' in one record, and I need to extract what occurs between each pair of 'start' and 'end' keywords.
Is it possible to achieve this with a single query in Postgres, rather than creating a procedure? If yes, could you please help with this or redirect me to where I can find related information?
Assuming that / always delimits the elements, you can use string_to_array() to convert the string into multiple elements and unnest() to turn the array into rows. You can then use regexp_replace() to get rid of the delimiters in curly braces:
select d.id, regexp_replace(t.name, '{start}|{end}', '', 'g')
from the_table d
cross join unnest(string_to_array(d.body,'/')) as t(name);
SQLFiddle example: http://sqlfiddle.com/#!15/9eecb7db59d16c80417c72d1e1f4fbf1/8863
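If you prefer to skip the explicit array step, PostgreSQL's regexp_split_to_table() does the split and the unnesting in one call; a small sketch using the same (assumed) table and column names:
select d.id, regexp_replace(t.name, '{start}|{end}', '', 'g')
from the_table d
cross join lateral regexp_split_to_table(d.body, '/') as t(name);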
You can achieve all of this using regular expressions, with the PostgreSQL regex functions regexp_matches (to match the content between your tags) and regexp_replace (to remove the tags):
with t(id,body) as (values
(1, '{start}John{end}/{start}Jack{end}'),
(2, '{start}David{end}'),
(3, '{start}Ken{end}/{start}Kane{end}/{start}John{end}'))
select id, regexp_replace(
(regexp_matches(body, '{start}.*?{end}', 'g'))[1],
'^{start}|{end}$', '', 'g') matches
from t
Server: SQL Server 2008 R2
I apologize in advance, as I'm not sure of the best way to verbalize the question. I'm receiving a string of email addresses and I need to see if, within that string, any of the addresses exist as a user already. The query that obviously doesn't work is shown below, but hopefully it helps to clarify what I'm looking for:
SELECT f_emailaddress
FROM tb_users
WHERE f_emailaddress LIKE '%user1#domain.com,user2#domain.com%'
I was hoping SQL had an "InString" operator that would check for matches "within the string", but my Google abilities must be weak today.
Any assistance is greatly appreciated. If there simply isn't a way, I'll have to dig in and do some work in the codebehind to split each item in the string and search on each one.
Thanks in advance,
Beems
Split the input string and use an IN clause.
To split the CSV into rows, use this:
SELECT Ltrim(Rtrim(( Split.a.value('.', 'VARCHAR(100)') )))
FROM (SELECT Cast ('<M>'
+ Replace('user1#domain.com,user2#domain.com', ',', '</M><M>')
+ '</M>' AS XML) AS Data) AS A
CROSS APPLY Data.nodes ('/M') AS Split(a)
Now use the above query in the WHERE clause:
SELECT f_emailaddress
FROM tb_users
WHERE f_emailaddress IN(SELECT Ltrim(Rtrim(( Split.a.value('.', 'VARCHAR(100)') )))
FROM (SELECT Cast ('<M>'
+ Replace('user1#domain.com,user2#domain.com', ',', '</M><M>')
+ '</M>' AS XML) AS Data) AS A
CROSS APPLY Data.nodes ('/M') AS Split(a))
Or you can use an INNER JOIN:
SELECT f_emailaddress
FROM tb_users A
JOIN (SELECT Ltrim(Rtrim(( Split.a.value('.', 'VARCHAR(100)') ))) AS f_emailaddress
FROM (SELECT Cast ('<M>'
+ Replace('user1#domain.com,user2#domain.com', ',', '</M><M>')
+ '</M>' AS XML) AS Data) AS A
CROSS APPLY Data.nodes ('/M') AS Split(a)) B
ON a.f_emailaddress = b.f_emailaddress
You first need to split the CSV list into a temp table and then use that to INNER JOIN with your existing table, as that will act as a filter.
You cannot use CONTAINS unless you have created a Full Text index on that table and column, which I doubt is the case here.
For example:
CREATE TABLE #EmailAddresses (Email NVARCHAR(500) NOT NULL);
INSERT INTO #EmailAddresses (Email)
SELECT split.Val
FROM dbo.Splitter(@IncomingListOfEmailAddresses);
SELECT usr.f_emailaddress
FROM tb_users usr
INNER JOIN #EmailAddresses tmp
ON tmp.Email = usr.f_emailaddress;
Please note that the reference to "dbo.Splitter" is a placeholder for whatever string splitter you already have or might get. Please do not use any splitter that makes use of a WHILE loop. The best options are either the SQLCLR- or XML- based ones. The XML-based ones are generally fast but do have some issues with encoding if the string to be split has special XML characters such as &, <, or ". If you want a quick and easy SQLCLR-based splitter, you can download the Free version of the SQL# library (which I am the creator of, but this feature is in the free version) which contains String_Split and String_Split4k (for when the input is always <= 4000 characters).
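For reference, here is a minimal sketch of what that dbo.Splitter placeholder could look like as an XML-based inline table-valued function (with the same caveat as above: it will fail if the input contains &, < or >):
CREATE FUNCTION dbo.Splitter (@List NVARCHAR(MAX))
RETURNS TABLE
AS
RETURN
(
    -- wrap each element in <M> tags, cast to XML, then shred the nodes back into rows
    SELECT Val = LTRIM(RTRIM(x.m.value('.', 'NVARCHAR(500)')))
    FROM   (SELECT CAST('<M>' + REPLACE(@List, ',', '</M><M>') + '</M>' AS XML) AS Data) AS src
    CROSS APPLY src.Data.nodes('/M') AS x(m)
);
The column is named Val and sized NVARCHAR(500) to line up with the #EmailAddresses temp table above.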
SQL Server has a CONTAINS predicate and an IN operator. You can use either of those to accomplish your task; both are documented on MSDN. Hope this helps.
CONTAINS
CONTAINS will look to see if any values in your data contain the entire string you provided. Kind of similar in behavior to LIKE '%myValue%'.
SELECT f_emailaddress
FROM tb_users
WHERE CONTAINS (f_emailaddress, 'user1#domain.com');
IN
IN will return matches for any values in the provided comma-delimited list. They need to be exact matches, however; you can't provide partial terms.
SELECT f_emailaddress
FROM tb_users
WHERE f_emailaddress IN ('user1#domain.com','user2#domain.com')
As far as splitting each of the values out into separate strings, have a look at the StackOverflow question found HERE. This might point you in the proper direction.
You can try something like this (not tested).
Before using this, make sure that you have created a Full Text index on that table and column.
Replace your commas with AND, then:
SELECT id,email
FROM t
where CONTAINS(email, 'user1#domain.com and user2#domain.com');
--prepare temp table for testing
DECLARE @tb_users AS TABLE
(f_emailaddress VARCHAR(100))
INSERT INTO @tb_users
( f_emailaddress)
VALUES ( 'user1#domain.com' ),
( 'user2#domain.com' ),
( 'user3#domain.com' ),
( 'user4#domain.com' )
--Your query
SELECT f_emailaddress
FROM @tb_users
WHERE 'user1#domain.com,user2#domain.com' LIKE '%' + f_emailaddress + '%'
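One caveat with this reversed LIKE: it also matches when a stored address is merely a substring of an address in the list (for example, a stored 'ser1#domain.com' would match). Wrapping both sides in commas avoids that, assuming the list has no spaces after the commas:
SELECT f_emailaddress
FROM @tb_users
WHERE ',' + 'user1#domain.com,user2#domain.com' + ',' LIKE '%,' + f_emailaddress + ',%'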
I want to display a string on each row (Details section) in my Crystal Report. The contents of this string will be retrieved with the help of a SQL Expression.
The SQL I have follows. However, if multiple rows are returned, I am not sure how to convert them into a comma-separated string. I have an Oracle 11g database.
(select distinct NAME from TEST
where SAMPLE_NUMBER = "TEST"."SAMPLE_NUMBER"
and X_BENCH <> '"TEST"."X_BENCH"')
My report will be filtered for all samples with a specific test (e.g. Calcium). For those samples on the report, my SQL expression should retrieve all "other" tests on the sample.
You can accomplish this with WM_CONCAT. WM_CONCAT takes a bunch of rows in a group and outputs a comma-separated varchar.
Using the SUBSTR and INSTR functions you can separate the first result from the rest.
Please note that I am dirty coding this (without a compiler to check my syntax) so things may not be 100% correct.
select sample_number
, substr(wm_concat(name), 1, instr(wm_concat(name), ',') - 1) as NAME
, substr(wm_concat(name), instr(wm_concat(name), ',') + 1) as OTHER_TEST_NAMES
from TEST
where SAMPLE_NUMBER = "TEST"."SAMPLE_NUMBER"
and X_BENCH <> '"TEST"."X_BENCH"'
and rownum < 2
group by sample_number
However, if it is not necessary to separate the name and the other test names, it actually is much simpler.
select sample_number
, wm_concat(name) as NAMES
from TEST
where SAMPLE_NUMBER = "TEST"."SAMPLE_NUMBER"
and X_BENCH <> '"TEST"."X_BENCH"'
and rownum < 2
group by sample_number
Also please try to organize your lines to make it easier to read.
You can use LISTAGG to convert rows to a comma-separated string in Oracle.
Example:
SELECT user_id
, LISTAGG(expertise, ',')
WITHIN GROUP (ORDER BY expertise)
AS expertise
FROM TEMP_TABLE
GROUP BY user_id;
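Applied to the question's TEST table that might look roughly like the following; the column names are taken from the SQL expression in the question, the Calcium filter is only an illustration, and LISTAGG requires Oracle 11g Release 2 or later:
SELECT sample_number
     , LISTAGG(name, ',') WITHIN GROUP (ORDER BY name) AS other_test_names
FROM   test
WHERE  x_bench <> 'Calcium'   -- hypothetical: exclude the test the report is filtered on
GROUP BY sample_number;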
I have a problem with T-SQL. I have a number of tables, and each table contains a different number of fields with different names.
I need to dynamically take all of these tables, read all of the records, and turn each record into a string in which the values are separated by commas, and then do something with that string.
I think that I need to use cursors, but I can't FETCH them without knowing the concrete number of fields with their names and types. Maybe I can create a table variable with a dynamic number of fields?
Thanks a lot!
Makarov Artem.
I would repurpose one of the many T-SQL scripts written to generate INSERT statements. They do exactly what you require, namely:
Reverse engineer a given table to determine column names and types
Generate a delimited string of values
The most complete example I've found is here
But just a simple Google search for "INSERT STATEMENT GENERATOR" will yield several examples that you can repurpose to fit your needs.
Best of luck!
SELECT
ORDINAL_POSITION
,COLUMN_NAME
,DATA_TYPE
,CHARACTER_MAXIMUM_LENGTH
,IS_NULLABLE
,COLUMN_DEFAULT
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'MYTABLE'
ORDER BY
ORDINAL_POSITION ASC;
from http://weblogs.sqlteam.com/joew/archive/2008/04/27/60574.aspx
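Building on that metadata query, here is a rough, untested sketch that generates one comma-separated string per row for a given table. The table name is a placeholder, and it assumes every column can be converted to NVARCHAR:
DECLARE @TableName SYSNAME = N'MYTABLE';   -- placeholder
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);
-- build "col1 + ',' + col2 + ..." with every column converted to NVARCHAR
SELECT @cols = STUFF((
        SELECT ' + '','' + ISNULL(CONVERT(NVARCHAR(MAX), ' + QUOTENAME(COLUMN_NAME) + '), '''')'
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @TableName
        ORDER BY ORDINAL_POSITION
        FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 9, '');
SET @sql = N'SELECT ' + @cols + N' AS Row_CSV FROM ' + QUOTENAME(@TableName) + N';';
EXEC sys.sp_executesql @sql;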
Perhaps you can do something with this.
select T2.X.query('for $i in *
return concat(data($i), ",")'
).value('.', 'nvarchar(max)') as C
from (
select *
from YourTable
for xml path('Row'),elements xsinil, type
) as T1(X)
cross apply T1.X.nodes('/Row') T2(X)
It will give you one row for each row in YourTable with each value in YourTable separated by a comma in the column C.
This builds an XML for the entire table and then parses that XML. Might get you into trouble if you have tables with a lot of rows.
BTW: I saw from a comment that you can "use only pure SQL". I really don't think this qualifies as "pure SQL" :).
I would like to be able to sort (ORDER BY) in Postgres while ignoring leading words like "the", "a", etc.
One way: script (using your favorite language) the creation of an extra column of the text with noise words removed, and sort on that.
Add a SORT_NAME column that has all that stuff stripped out. For bonus points, use an input trigger to populate it automatically, using your favorite SQL dialect's regex parser or similar.
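In PostgreSQL (which the question is about) that could look something like this; the table and column names are made up, and on versions before 11 the trigger would use EXECUTE PROCEDURE instead of EXECUTE FUNCTION:
ALTER TABLE albums ADD COLUMN sort_name text;
CREATE FUNCTION albums_set_sort_name() RETURNS trigger AS $$
BEGIN
    -- strip a single leading "The ", "A " or "An " (case-insensitive)
    NEW.sort_name := regexp_replace(NEW.title, '^(the|a|an)\s+', '', 'i');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER albums_sort_name
BEFORE INSERT OR UPDATE ON albums
FOR EACH ROW EXECUTE FUNCTION albums_set_sort_name();
-- then simply: SELECT title FROM albums ORDER BY sort_name;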
Try splitting the column and sorting on the second item in the resulting array:
select some_col from some_table order by split_part(some_col, ' ', 2);
No need to add an extra column. Strip out the leading words in your ORDER BY:
SELECT col FROM table ORDER BY REPLACE(REPLACE(col, 'A ', ''), 'The ', '')
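Note that REPLACE strips 'A ' and 'The ' anywhere in the value, not only at the start. Since the question is about Postgres, an anchored regexp_replace limits this to a leading article; table and column names here are illustrative:
SELECT some_col
FROM some_table
ORDER BY regexp_replace(some_col, '^(the|a|an)\s+', '', 'i');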