T-SQL Using Cross-Apply with Delete Statement - tsql

I have the following tables:
RecordID:   101, 102, 103, 104, 105, 106
TableOne:   101, 102, 103, 104
TableTwo:   (empty)
TableThree: 101, 102
and I need to delete the RecordID rows that are not included in the other tables. Please note that sometimes one of the tables TableOne, TableTwo, TableThree can be empty, and in that case it must not cause any records to be deleted.
The result table should be:
RecordID: 101, 102
Because of the possibly empty tables I am not able to use a plain INNER JOIN. And because I am using this code in a function, I am not able to build a dynamic SQL statement containing only the tables with records and execute it.
I could do this with IF statements, but in my real situation I have many cases to check and many tables to join, and a lot of code duplication results.
That's why I started to wonder: is there a cleverer, cleaner way to do this with CROSS APPLY?

I don't see any advantage in using CROSS APPLY here. Here is a simple solution that does the job:
declare @t table(recordid int)
declare @tableone table(recordid int)
declare @tabletwo table(recordid int)
declare @tablethree table(recordid int)
insert @t values(101),(102),(103),(104),(105),(106)
insert @tableone values(101),(102),(103),(104)
insert @tablethree values(101),(102)
delete t
from @t t
where (not exists (select 1 from @tableone where t.recordid = recordid)
       and exists (select 1 from @tableone))
   or (not exists (select 1 from @tabletwo where t.recordid = recordid)
       and exists (select 1 from @tabletwo))
   or (not exists (select 1 from @tablethree where t.recordid = recordid)
       and exists (select 1 from @tablethree))
Result:
recordid
101
102
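If the number of tables keeps growing, the pairs of EXISTS checks can be generated from a single UNION instead. Here is a sketch of that idea using the same table variables (my variant, not part of the original answer): tag each table's rows with a source number, then delete every record that is missing from at least one non-empty source.
;with src as (
    select 1 as srcid, recordid from @tableone
    union all select 2, recordid from @tabletwo
    union all select 3, recordid from @tablethree
)
delete t
from @t t
where (select count(distinct srcid) from src where recordid = t.recordid)
    < (select count(distinct srcid) from src);
With the sample data this keeps 101 and 102, and when all three tables are empty both counts are 0, so nothing is deleted.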

Use EXCEPT and UNION
declare @t table(recordid int)
declare @tableone table(recordid int)
declare @tabletwo table(recordid int)
declare @tablethree table(recordid int)
insert @t values(101),(102),(103),(104),(105),(106)
insert @tableone values(101),(102),(103),(104)
insert @tablethree values(101),(102)
delete
from @t where recordid not in(
select * from @t
except select * from
(select * from @tableone union
select * from @tabletwo union
select * from @tablethree)x)
select * from @t
recordid
105
106

Return the number of rows with a where condition after an INSERT INTO? - postgresql

I have a table that records which users have joined which event (as in an IRL event).
I have set up a server query that lets a user join an event.
It goes like this:
INSERT INTO participations
VALUES(:usr,:event_id)
I want that statement to also return the number of people who have joined the same event as the user. How do I proceed, if possible in one SQL statement?
Thanks
You can use a common table expression like this to execute it as one query.
with insert_tbl_statement as (
insert into tbl values (4, 1) returning event_id
)
select (count(*) + 1) as event_count from tbl where event_id = (select event_id from insert_tbl_statement);
see demo http://rextester.com/BUF16406
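Applied to the asker's participations table, the same pattern would look like this (a sketch; the usr and event_id column names are taken from the question's INSERT):
with ins as (
    insert into participations (usr, event_id)
    values (:usr, :event_id)
    returning event_id
)
select count(*) + 1 as event_count
from participations
where event_id = (select event_id from ins);
The + 1 is needed because the main query cannot see the row inserted by the data-modifying CTE within the same statement.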
You can use a function. I've set up the next example, but keep in mind you must add 1 to the final count because the transaction hasn't been committed yet.
create table tbl(id int, event_id int);
insert into tbl values (1, 2),(2, 2),(3, 3);
create function new_tbl(id int, event_id int)
returns bigint as $$
insert into tbl values ($1, $2);
select count(*) + 1 from tbl where event_id = $2;
$$ language sql;
select new_tbl(4, 2);
new_tbl
-------
4

Remove Duplicates

I have a table like below:
SuppID  AreaID  SuppNo  SupName  SupPrice
------------------------------------------
1       3       526     ANC      100
1       3       985     JTT      200
3       4       100     HIK      300
In the above table, for the same SuppID (1) and the same AreaID (3), different SuppNo values (526 and 985) appear in two different rows.
In this scenario, I'd like to collapse those two rows into a single row with the SuppNo field blank.
The output should still display rows with all the columns.
Any help?
This should get you started:
DECLARE @TABLE TABLE (SuppID INT, AreaID INT, SuppNo VARCHAR(5), SupName VARCHAR(5), SupPrice INT)
INSERT INTO @TABLE
SELECT 1,3,'526','ANC',100 UNION
SELECT 1,3,'985','JTT',200 UNION
SELECT 3,4,'100','HIK',300
-- select data before updates
SELECT * FROM @TABLE
-- add a row count by AreaID/SuppID
;WITH T1 AS
(
SELECT *
,SUM(1) OVER(PARTITION BY AREAID,SUPPID) AS ROWCNT
FROM @TABLE
)
-- set the SuppNo blank on rows that have more than 1 match
UPDATE T1 SET SuppNo='' WHERE ROWCNT>1
-- add a row # by AreaID/SuppID
;WITH T2 AS
(
SELECT *
,ROW_NUMBER() OVER(PARTITION BY AREAID,SUPPID ORDER BY AREAID,SUPPID) AS ROWID
FROM @TABLE
)
-- delete duplicate rows
DELETE
FROM T2
WHERE ROWID>1
-- select data after updates
SELECT * FROM @TABLE
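For what it's worth, the two passes can share one window-function CTE shape. A compact sketch of the same idea (blank the SuppNo of the row that will be kept, then delete the rest):
;WITH T AS
(
SELECT *
,COUNT(*) OVER(PARTITION BY AreaID,SuppID) AS ROWCNT
,ROW_NUMBER() OVER(PARTITION BY AreaID,SuppID ORDER BY SuppNo) AS ROWID
FROM @TABLE
)
UPDATE T SET SuppNo='' WHERE ROWCNT>1 AND ROWID=1
;WITH T AS
(
SELECT *
,ROW_NUMBER() OVER(PARTITION BY AreaID,SuppID ORDER BY SuppNo) AS ROWID
FROM @TABLE
)
DELETE FROM T WHERE ROWID>1
The end state is the same: one row per AreaID/SuppID, with SuppNo blanked wherever a duplicate set was collapsed.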

How to pivot a table to a view on matching-length delimited cells

Disclaimer: I'm dealing with a rather old legacy system so any comments telling me about poor design are redundant, although I do genuinely appreciate any such sentiment. There is a new version that solves most legacy problems but we still have to maintain the old system, so basically, we have to manage for now.
I have a table that looks like this (yes, that is a single column, I know): each row of the single ATTRIBUTEVALUE column holds a newline-delimited list (currency codes in one row, currency names in another, symbols in a third).
And I need a view (for reporting purposes) that will dynamically process the data in said table and return one row per currency, with Code, Name, and Symbol columns.
The values are \n-delimited (shudder) and you can assume there will always be the same number of values in each cell (9 in the example, although other databases could have 4 or 12 or any number), although I suppose having NULL-insertion in the event of missing values couldn't hurt. They will also always be in a matching order (as in the example, 'AUD', 'Australian Dollar', and '$' are all the first values in their respective cells, and so on).
I've found various approaches to splitting a single cell out into a view, but nothing that covers merging data in such a way as I require. Sitting at home with a cold has not helped my research capabilities. Help me StackOverflow, you're my only hope!
Bonus points for tidy, relatively readable SQL examples, although I'm anticipating messiness as a natural by-product of the hackish nature of my required solution.
Something like this. I didn't take the time to build out the tables, but it should be fairly obvious where you can replace my variables with your rows. You will also want to replace on char(10) where I have used commas. You could package it up in a table-valued function and then call it from a view.
declare @xml1 xml
declare @xml2 xml
declare @xml3 xml
declare @c1 nvarchar(250)
declare @c2 nvarchar(250)
declare @c3 nvarchar(250)
set @c1 = N'AUD,CAD,EUR,GBP,JPY,NZD,USD,KES,CHF';
set @c2 = N'Australian Dollar,Canadian Dollar,Euro,Pound Sterling,Yen,New Zealand Dollar,United States Dollar,Kenyan Shilling,Swiss Franc';
set @c3 = N'$,$,C,L,Y,$,$,K,F';
-- you'd use replace(@c1, char(10), '</r><r>') etc. for \n-delimited data
set @xml1 = N'<root><r>' + replace(@c1,',','</r><r>') + '</r></root>';
set @xml2 = N'<root><r>' + replace(@c2,',','</r><r>') + '</r></root>';
set @xml3 = N'<root><r>' + replace(@c3,',','</r><r>') + '</r></root>';
select code.code, name.name, symbol.symbol
from
(select ROW_NUMBER() over (order by @@rowcount) as ck,
c.value('.','varchar(max)') as [code]
from @xml1.nodes('//root/r') as a(c)) as code
inner join
(select ROW_NUMBER() over (order by @@rowcount) as nk,
n.value('.','varchar(max)') as [name]
from @xml2.nodes('//root/r') as a(n)) as name on code.ck = name.nk
inner join
(select ROW_NUMBER() over (order by @@rowcount) as sk,
s.value('.','varchar(max)') as [symbol]
from @xml3.nodes('//root/r') as a(s)) as symbol on symbol.sk = name.nk
You can run this as a single script in SSMS for verification that it works. No schema necessary.
Using Jeff Moden's "Tally OH!" CSV splitter (DelimitedSplit8K):
CREATE FUNCTION [dbo].[DelimitedSplit8K]
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
--WARNING!!! DO NOT USE MAX DATA-TYPES HERE! IT WILL KILL PERFORMANCE!
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000...
-- enough to cover VARCHAR(8000)
WITH
E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
;
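A quick sanity check of the splitter (a sketch; it should return three rows: 1/AUD, 2/CAD, 3/EUR):
SELECT ItemNumber, Item
FROM dbo.DelimitedSplit8K('AUD,CAD,EUR', ',');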
and inline CTE data like this:
with
data as (select Num,Currencies from (values
(1,'AUD'+char(10)+'CAD'+char(10)+'USD'+char(10)+'KES')
,(2,'Australian Dollar'+char(10)+'Canadian Dollar'+char(10)+'US Dollar'+char(10)+'Kenyan Shilling')
,(3,'$'+char(10)+'$'+char(10)+'$'+char(10)+'k')
)data(Num,Currencies)
),
then the solution is as simple as this (the map CTE continues the WITH clause above):
map as (select * from (values
(1,'Code')
,(2,'Name')
,(3,'Symbol')
)map(Num,Col)
)
select
ItemNumber
,max(Code) as Code
,max(Name) as Name
,max(Symbol) as Symbol
from (
select
map.Num
,map.Col
,c.Item
,c.ItemNumber
from data
join map
on map.Num = data.Num
cross apply dbo.DelimitedSplit8K(data.Currencies,char(10)) c
) t
pivot (max(Item) for Col in (Code,Name,Symbol)) pvt
group by ItemNumber
to give us:
ItemNumber Code Name Symbol
-------------- ---- -------------------- ---------------
1 AUD Australian Dollar $
2 CAD Canadian Dollar $
3 USD US Dollar $
4 KES Kenyan Shilling k
Hope this helps. Run it all together, or replace the table variable with a temp table.
Sample Data:
IF OBJECT_ID(N'tempdb..#table') > 0
BEGIN
DROP TABLE #table
END
DECLARE @table TABLE(ATTRIBUTEVALUE VARCHAR(MAX))
INSERT INTO @table
SELECT
'AFN
ALL
DZD
USD
EUR
AOA
XCD
XCD
ARS'
INSERT INTO @table
SELECT
'Afghanistan
Albania
Algeria
American Samoa
Andorra
Angola
Anguilla
Antigua and Barbuda
Argentina'
INSERT INTO @table
SELECT
'AF
AL
DZ
AS
AD
AO
AI
AG
AR'
Query:
IF OBJECT_ID(N'tempdb..#TEMP') > 0
BEGIN
DROP TABLE #TEMP
END
DECLARE @StartLoop INT
DECLARE @EndLoop INT
DECLARE @Code TABLE (ID INT IDENTITY(1, 1),
Code VARCHAR(250))
DECLARE @Name TABLE (ID INT IDENTITY(1, 1),
Name VARCHAR(250))
DECLARE @Symbol TABLE (ID INT IDENTITY(1, 1),
Symbol VARCHAR(250))
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS ID,
*
INTO #Temp
FROM @table
SELECT @StartLoop = MIN(ID),
@EndLoop = MAX(ID)
FROM #Temp
WHILE @StartLoop <= @EndLoop
BEGIN
DECLARE @WorkingString VARCHAR(MAX)
SELECT @WorkingString = ATTRIBUTEVALUE + CHAR(10) + ' '
FROM #Temp
WHERE ID = @StartLoop
--print @WorkingString
WHILE CHARINDEX(CHAR(10), @WorkingString) > 0
BEGIN
DECLARE @SearchCharacter INT
DECLARE @WorkingStringLength INT
DECLARE @TempStringLength INT
DECLARE @TempString VARCHAR(MAX)
SET @WorkingStringLength = LEN(@WorkingString)
SET @SearchCharacter = CHARINDEX(CHAR(10), @WorkingString)
SET @TempString = SUBSTRING(@WorkingString, 1, @SearchCharacter - 1)
SET @TempStringLength = LEN(@TempString)
SET @WorkingString = SUBSTRING(@WorkingString, @SearchCharacter + 1, @WorkingStringLength)
SET @TempString = REPLACE(@TempString, CHAR(13), '')
IF @StartLoop = 1
BEGIN
INSERT INTO @Code
SELECT @TempString
END
IF @StartLoop = 2
BEGIN
INSERT INTO @Name
SELECT @TempString
END
IF @StartLoop = 3
BEGIN
INSERT INTO @Symbol
SELECT @TempString
END
END
SET @StartLoop = @StartLoop + 1
END
SELECT Code,
Name,
Symbol
FROM @Code AS c
JOIN @Name AS n
ON c.ID = n.ID
JOIN @Symbol AS s
ON s.ID = n.ID
Cleanup:
IF OBJECT_ID(N'tempdb..#TEMP') > 0
BEGIN
DROP TABLE #TEMP
END
IF OBJECT_ID(N'tempdb..#table') > 0
BEGIN
DROP TABLE #table
END
Because I needed a view, this ended up being my solution:
CREATE FUNCTION [dbo].[CurrencyTableGenerator]()
RETURNS
@CurrencyTable TABLE(
Code NVARCHAR(250)
,Name NVARCHAR(250)
,Symbol NVARCHAR(250)
)
AS
BEGIN
DECLARE @xml1 XML
DECLARE @xml2 XML
DECLARE @xml3 XML
DECLARE @c1 NVARCHAR(250)
DECLARE @c2 NVARCHAR(250)
DECLARE @c3 NVARCHAR(250)
SET @c1 = (SELECT ...)
SET @c2 = (SELECT ...)
SET @c3 = (SELECT ...)
SET @xml1 = N'<root><r>' + REPLACE(@c1, CHAR(10), '</r><r>') + '</r></root>';
SET @xml2 = N'<root><r>' + REPLACE(@c2, CHAR(10), '</r><r>') + '</r></root>';
SET @xml3 = N'<root><r>' + REPLACE(@c3, CHAR(10), '</r><r>') + '</r></root>';
INSERT INTO @CurrencyTable
SELECT Code.Code, Name.Name, Symbol.Symbol
FROM
(SELECT ROW_NUMBER() OVER (ORDER BY @@ROWCOUNT) AS ck,
c.value('.', 'VARCHAR(250)') AS [Code]
FROM @xml1.nodes('//root/r') AS a(c)) AS Code
INNER JOIN
(SELECT ROW_NUMBER() OVER (ORDER BY @@ROWCOUNT) AS nk,
n.value('.', 'VARCHAR(250)') AS [Name]
FROM @xml2.nodes('//root/r') AS a(n)) AS Name ON Code.ck = Name.nk
INNER JOIN
(SELECT ROW_NUMBER() OVER (ORDER BY @@ROWCOUNT) AS sk,
s.value('.', 'VARCHAR(250)') AS [Symbol]
FROM @xml3.nodes('//root/r') AS a(s)) AS Symbol ON Symbol.sk = Name.nk
RETURN
END
GO
CREATE VIEW [dbo].[CurrencyView]
AS
SELECT * FROM [dbo].[CurrencyTableGenerator]()
GO
Thanks to RThomas for the function.

Find exact FK matches

I have a very large table (over 200 million rows):
sID int, wordID int (PK sID, wordID)
I want to find the sIDs that have exactly the same set of wordIDs (and no extras).
For an sID with over 100 wordIDs the chance of an exact match goes down, so I am willing to limit it to 100
(but would like to go to 1000).
If this were school, sID would be classes and wordID would be students; I want to find the classes that have exactly the same students.
sID, wordID
1, 1
1, 2
1, 3
2, 2
2, 3
3, 1
3, 4
5, 1
5, 2
6, 2
6, 3
7, 1
7, 2
8, 1
8, 2
sID 6 and 2 have exactly the same wordIDs.
sID 5, 7, and 8 have exactly the same wordIDs.
This is what I have so far.
I would like to eliminate the two deletes from #temp3_sID1_sID2 and take care of that in the insert above,
but I will try any ideas.
It is not as if you can easily create a table with 200 million rows to test with.
drop table #temp_sID_wordCount
drop table #temp_count_wordID_sID
drop table #temp3_wordID_sID_forThatCount
drop table #temp3_sID1_sID2
drop table #temp3_sID1_sID2_keep
create table #temp_sID_wordCount (sID int primary key, ccount int not null)
create table #temp_count_wordID_sID (ccount int not null, wordID int not null, sID int not null, primary key (ccount, wordID, sID))
create table #temp3_wordID_sID_forThatCount (wordID int not null, sID int not null, primary key(wordID, sID))
create table #temp3_sID1_sID2_keep (sID1 int not null, sID2 int not null, primary key(sID1, sID2))
create table #temp3_sID1_sID2 (sID1 int not null, sID2 int not null, primary key(sID1, sID2))
insert into #temp_sID_wordCount
select sID, count(*) as ccount
FROM [FTSindexWordOnce] with (nolock)
group by sID
order by sID;
select count(*) from #temp_sID_wordCount where ccount <= 100; -- 701,966
truncate table #temp_count_wordID_sID
insert into #temp_count_wordID_sID
select #temp_sID_wordCount.ccount, [FTSindexWordOnce].wordID, [FTSindexWordOnce].sID
from #temp_sID_wordCount
join [FTSindexWordOnce] with (nolock)
on [FTSindexWordOnce].sID = #temp_sID_wordCount.sID
and ccount >= 1 and ccount <= 10
order by #temp_sID_wordCount.ccount, [FTSindexWordOnce].wordID, [FTSindexWordOnce].sID;
select count(*) from #temp_sID_wordCount; -- 34,860,090
truncate table #temp3_sID1_sID2_keep
declare cur cursor for
select top 10 ccount from #temp_count_wordID_sID group by ccount order by ccount
open cur
declare @count int, @sIDcur int
fetch next from cur into @count
while (@@FETCH_STATUS = 0)
begin
--print (@count)
--select count(*), @count from #temp_sID_wordCount where #temp_sID_wordCount.ccount = @count
truncate table #temp3_wordID_sID_forThatCount
truncate table #temp3_sID1_sID2
-- wordID and sID for that unique word count
-- they can only be exact if they have the same word count
insert into #temp3_wordID_sID_forThatCount
select #temp_count_wordID_sID.wordID
, #temp_count_wordID_sID.sID
from #temp_count_wordID_sID
where #temp_count_wordID_sID.ccount = @count
order by #temp_count_wordID_sID.wordID, #temp_count_wordID_sID.sID
-- select count(*) from #temp3_wordID_sID_forThatCount
-- this has some duplicates
-- sID1 is the group
insert into #temp3_sID1_sID2
select w1.sID, w2.sID
from #temp3_wordID_sID_forThatCount as w1 with (nolock)
join #temp3_wordID_sID_forThatCount as w2 with (nolock)
on w1.wordID = w2.wordID
and w1.sID <= w2.sID
group by w1.sID, w2.sID
having count(*) = @count
order by w1.sID, w2.sID
-- get rid of the groups of 1
delete #temp3_sID1_sID2
where sID1 in (select sID1 from #temp3_sID1_sID2 group by sID1 having count(*) = 1)
-- get rid of the double dips
delete #temp3_sID1_sID2
where #temp3_sID1_sID2.sID1 in
(select distinct s1del.sID1 -- these are the double dips
from #temp3_sID1_sID2 as s1base with (nolock)
join #temp3_sID1_sID2 as s1del with (nolock)
on s1del.sID1 > s1base.sID1
and s1Del.sID1 = s1base.sID2)
insert into #temp3_sID1_sID2_keep
select #temp3_sID1_sID2.sID1
, #temp3_sID1_sID2.sID2
from #temp3_sID1_sID2 with (nolock)
order by #temp3_sID1_sID2.sID1, #temp3_sID1_sID2.sID2
fetch next from cur into @count
end
close cur
deallocate cur
select *
FROM #temp3_sID1_sID2_keep with (nolock)
order by 1,2
So, as I see, the task is to find equal subsets.
First we can find pairs of equal subsets:
;with tmp1 as (select sID, cnt = count(wordID) from [Table] group by sID)
select s1.sID, s2.sID
from tmp1 s1
cross join tmp1 s2
cross apply (
select count(1)
from [Table] d1
join [Table] d2 on d2.wordID = d1.wordID
where d1.sID = s1.sID and d2.sID = s2.sID
) c(cnt)
where s1.cnt = s2.cnt
and s1.sID > s2.sID
and s1.cnt = c.cnt
Output is:
sID sID
----------- -----------
6 2
7 5
8 5
8 7
And then pairs can be combined into groups, if necessary:
sID gNum
----------- -----------
2 1
6 1
5 2
7 2
8 2
See details in the SqlFiddle sample.
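For reference, the grouping can be done like this (a sketch; it assumes the pair query above is captured in a CTE named pairs with columns sID1 > sID2 — because equal subsets always pair up completely within a group, keying each sID by the smallest sID it pairs with is enough, and no recursive closure is needed):
;with pairs(sID1, sID2) as (
    -- the pair query from above goes here; hard-coded for illustration
    select 6, 2 union all
    select 7, 5 union all
    select 8, 5 union all
    select 8, 7
),
members(sID, rep) as (
    select sID1, sID2 from pairs -- each larger sID, keyed by its partner
    union
    select sID2, sID2 from pairs -- the smallest sIDs key themselves
)
select sID, dense_rank() over (order by min(rep)) as gNum
from members
group by sID
order by gNum, sID;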
The other approach is to calculate a hash function over every subset's data:
;with a as (
select distinct sID from [Table]
)
select sID,
hashbytes('sha1', (
select cast(wordID as varchar(10)) + '|'
from [Table]
where sID = a.sID
order by wordID
for xml path('')))
from a
Then subsets can be grouped based on hash value.
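Grouping by the hash is then one more step on top of that query (a sketch):
;with h(sID, hsh) as (
    select sID,
        hashbytes('sha1', (
            select cast(wordID as varchar(10)) + '|'
            from [Table]
            where sID = a.sID
            order by wordID
            for xml path('')))
    from (select distinct sID from [Table]) a
)
select sID, dense_rank() over (order by hsh) as gNum
from h
order by gNum, sID;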
The last one took less than a minute on my machine for test data of about 10 million rows (20k sID values with up to 1k wordIDs each). You can also optimize it by excluding sIDs whose wordID count matches no other sID's count.
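That pre-filter could look like this (a sketch against the question's FTSindexWordOnce table): discard every sID whose wordID count is unique, since it cannot possibly have an exact match.
;with counts as (
    select sID, count(*) as cnt
    from FTSindexWordOnce
    group by sID
)
select c.sID
from counts c
where exists (select 1 from counts c2
    where c2.cnt = c.cnt and c2.sID <> c.sID);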

where exists - all - group by?

I use SQL Server 2008 R2.
I have a weird problem, as follows. I have a table as shown in the screenshot.
I need to write a query like this:
SELECT DISTINCT Field1
FROM MYTABLE
WHERE Field2 IN (96,102)
In this query, WHERE Field2 IN (96,102) gives me rows that have 96 or 102 or both!
What I want instead is to return the Field1 values that have both 96 and 102 at the same time!
Any suggestions? Please be result-oriented...
I have made a sqlfiddle for this:
create table a (id int, val int)
go
insert into a select 1, 22
insert into a select 1, 122
insert into a select 2, 22
insert into a select 3, 122
insert into a select 4, 22
insert into a select 4, 122
then select like this
select count(distinct id), id
from a
where val in (22, 122)
group by id
having count(id) > 1
EDIT: count(distinct id) will only count each id once.
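If duplicate (id, val) rows are possible, counting distinct values in the HAVING clause is the safer test (a sketch):
select id
from a
where val in (22, 122)
group by id
having count(distinct val) = 2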
EDIT:
Here's a sqlfiddle example (thanks to Mark Kremers):
http://sqlfiddle.com/#!3/df201/1
create table mytable (field1 int, field2 int)
go
insert into mytable values (199201, 84)
insert into mytable values (199201, 96)
insert into mytable values (199201, 102)
insert into mytable values (199201, 103)
insert into mytable values (581424, 96)
insert into mytable values (581424, 84)
insert into mytable values (581424, 106)
insert into mytable values (581424, 122)
insert into mytable values (687368, 79)
insert into mytable values (687368, 96)
insert into mytable values (687368, 102)
insert into mytable values (687368, 104)
insert into mytable values (687368, 106)
Here's the query:
select distinct a.field1 from
( select field1 from mytable where field2=96) a
inner join
( select field1 from mytable where field2=102) b
on a.field1 = b.field1
And here are the results:
FIELD1
199201
687368
Finally, here's a simplified version of the query (thanks to pst):
select distinct a.field1 from mytable a
inner join mytable b
on a.field1 = b.field1
where a.field2=96 and b.field2=102
Use a self-join? Not the most tidy, but I think it works well for 2 values
SELECT *
FROM T R1
JOIN T R2 -- join table with itself
ON R1.F1 = R2.F1 -- where the first field is the same
WHERE R1.F2 = 96 AND R2.F2 = 102 -- and each has one of the required values
(T = Table, Rx = Relation Alias, Fx = Field)
If there can be an arbitrary number of values, this can be solved as
CREATE TABLE #T (id int, val int)
GO
INSERT INTO #T (id, val)
VALUES
(1, 22), (1, 22), -- no, only 22 (but 2 records)
(2, 22), (2, 122), -- yes, both values (only)
(3, 122), -- no, only 122
(4, 22), (4,122), -- yes, both values ..
(4, 444), (4, null), -- and extra values
(5, 555) -- no, neither value
GO
-- Using DISTINCT over filtered results first, as
-- SQL Server 2008 does not support HAVING COUNT(DISTINCT F1, F2)
SELECT id
FROM (SELECT DISTINCT id, val
FROM #T
WHERE val IN (22, 122)) AS R1
GROUP BY id
HAVING COUNT(id) >= 2 -- or 3 or ..
GO
-- Or a similar variation, as can COUNT(DISTINCT ..)
-- in the SELECT of a GROUP BY
SELECT id
FROM (SELECT id, COUNT(DISTINCT val) as ct
FROM #T
WHERE val IN (22, 122)
GROUP BY id) AS R1
WHERE ct >= 2 -- or 3 or ..
GO
For larger IN (..) lists, say above 20 values, it may be advisable to use a separate table or table-valued parameter and a JOIN, for performance reasons.
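A sketch of that staging idea, reusing the #T table from above: load the wanted values into a table variable once, then require each id to match as many distinct values as the table holds (classic relational division).
DECLARE @vals TABLE (val int PRIMARY KEY);
INSERT INTO @vals (val) VALUES (22), (122);

SELECT t.id
FROM #T AS t
JOIN @vals AS v
  ON v.val = t.val
GROUP BY t.id
HAVING COUNT(DISTINCT t.val) = (SELECT COUNT(*) FROM @vals);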
Try this variation of your original query:
SELECT DISTINCT Field1
FROM MYTABLE
WHERE rtrim(ltrim(cast(Field2 as varchar))) IN ('96','102')