Another way of returning rows if any of the columns has a different value for the same id - tsql

Is there another way of returning rows for the same id by joining two tables, where a row is returned if any of the column values for that id differ?
Select Table1.No, Table2.No, Table1.Name, Table2.Name, Table1.ID, Table2.ID, Table1.ID_N, Table2.ID_N
From MyFirstTable Table1
JOIN MySecondTable Table2
ON Table1.No = Table2.No
where Table1.ID != Table2.ID or Table1.ID_N != Table2.ID_N
In the example above, I have only two columns I need to check, but in my real case there are at least 20. Is there any other statement I can use instead of enumerating each column in the WHERE condition?

...WHERE BINARY_CHECKSUM(Table1.*) <> BINARY_CHECKSUM(Table2.*)
or
...WHERE BINARY_CHECKSUM(Table1.Field1, Table1.Field2, ...) <> BINARY_CHECKSUM(Table2.Field1, Table2.Field2, ...)
*this assumes you have no blob fields in your tables
http://technet.microsoft.com/en-us/library/ms173784.aspx
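Applied to the query from the question, a sketch of the explicit-column form might look like this. Note that BINARY_CHECKSUM can produce hash collisions, so a checksum mismatch proves a difference, but matching checksums do not strictly guarantee identical rows:
SELECT Table1.No, Table2.No, Table1.Name, Table2.Name, Table1.ID, Table2.ID, Table1.ID_N, Table2.ID_N
FROM MyFirstTable Table1
JOIN MySecondTable Table2
ON Table1.No = Table2.No
WHERE BINARY_CHECKSUM(Table1.Name, Table1.ID, Table1.ID_N)
   <> BINARY_CHECKSUM(Table2.Name, Table2.ID, Table2.ID_N)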

If No is a PK:
Select Table1.No, Table1.Name, Table1.ID, Table1.ID_N
From MyFirstTable Table1
except
Select Table2.No, Table2.Name, Table2.ID, Table2.ID_N
From MySecondTable Table2
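Note that a single EXCEPT only reports rows present in the first table but not the second; to catch differences in both directions, union the two set differences. A sketch (EXCEPT also treats NULLs as equal, which the != comparisons above do not):
(Select No, Name, ID, ID_N From MyFirstTable
except
Select No, Name, ID, ID_N From MySecondTable)
UNION ALL
(Select No, Name, ID, ID_N From MySecondTable
except
Select No, Name, ID, ID_N From MyFirstTable)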

Related

How can I restrict a result to only include rows where one specific field is unique with a UNION SELECT statement in BigQuery?

I have the following code. I am trying to stitch the two tables together, but restrict it so that a duplicate Opportunity_ID is only added once, taken from the second table (OpportunityUpdates).
SELECT
Opportunity.Account_Name,
Opportunity.Opportunity_Name,
Opportunity.Opportunity_Owner,
Opportunity.Opportunity_ID
FROM
Opportunity
UNION DISTINCT
SELECT
OpportunityUpdates.Account_Name,
OpportunityUpdates.Opportunity_Name,
OpportunityUpdates.Opportunity_Owner,
OpportunityUpdates.Opportunity_ID
FROM
OpportunityUpdates
WHERE OpportunityUpdates.Opportunity_ID <> Opportunity.Opportunity_ID
This code consolidates all records from both tables (by Opportunity_ID) and gives priority to the OpportunityUpdates table based on Opportunity_ID.
It assumes that the same Opportunity_ID could be in either table ("duplicates"), but that within each table an Opportunity_ID is unique. It also assumes that Opportunity_ID is not nullable (never null).
SELECT DISTINCT
IF(ou.Opportunity_ID IS NOT NULL, ou.Account_Name, o.Account_Name) Account_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Name, o.Opportunity_Name) Opportunity_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Owner, o.Opportunity_Owner) Opportunity_Owner,
COALESCE(ou.Opportunity_ID, o.Opportunity_ID) Opportunity_ID
FROM OpportunityUpdates ou
FULL OUTER JOIN
Opportunity o
ON o.Opportunity_ID = ou.Opportunity_ID
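A quick way to sanity-check the priority logic is to run the same query against small inline tables (the sample rows below are hypothetical):
WITH Opportunity AS (
SELECT 'Acme' AS Account_Name, 'Old name' AS Opportunity_Name, 'Ann' AS Opportunity_Owner, 1 AS Opportunity_ID
UNION ALL
SELECT 'Beta', 'Beta deal', 'Bob', 2
),
OpportunityUpdates AS (
SELECT 'Acme' AS Account_Name, 'New name' AS Opportunity_Name, 'Ann' AS Opportunity_Owner, 1 AS Opportunity_ID
)
SELECT DISTINCT
IF(ou.Opportunity_ID IS NOT NULL, ou.Account_Name, o.Account_Name) Account_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Name, o.Opportunity_Name) Opportunity_Name,
IF(ou.Opportunity_ID IS NOT NULL, ou.Opportunity_Owner, o.Opportunity_Owner) Opportunity_Owner,
COALESCE(ou.Opportunity_ID, o.Opportunity_ID) Opportunity_ID
FROM OpportunityUpdates ou
FULL OUTER JOIN Opportunity o ON o.Opportunity_ID = ou.Opportunity_ID
-- Expected: ID 1 comes back with 'New name' (from OpportunityUpdates), ID 2 unchanged (from Opportunity)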

Fetch rows from a Postgres table which contain a specific id in a jsonb[] column

I have a details table with an adeet column defined as jsonb[].
Sample data stored in the adeet column (shown as an image in the original post).
I want to return the rows which satisfy id=26088, i.e. rows 1 and 3.
I have tried array operations and JSON operations but it doesn't work as required. Any pointers?
Obviously the column adeet is not of type JSON/JSONB, but maybe VARCHAR, and we should fix the format so as to convert it into JSONB. I used the replace() and rtrim()/ltrim() functions for this conversion, and preferred to derive an array in order to use the jsonb_array_elements() function:
WITH t(jobid, adeet) AS
(
SELECT jobid, replace(replace(replace(adeet, '\', ''), '"{', '{'), '}"', '}')
FROM tab
), t2 AS
(
SELECT jobid, ('[' || rtrim(ltrim(adeet, '{'), '}') || ']')::jsonb AS adeet
FROM t
)
SELECT t.*
FROM t2 t
CROSS JOIN jsonb_array_elements(adeet) j
WHERE (j.value ->> 'id')::int = 26088
You want to combine JSONB's <@ (contained-by) operator with the generic-array ANY construct.
select * from foobar where '{"id":26088}'::jsonb <@ ANY (adeet);
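A self-contained demo of the containment approach (hypothetical table and rows, assuming adeet really is declared as jsonb[]):
CREATE TABLE foobar (jobid int, adeet jsonb[]);

INSERT INTO foobar VALUES
(1, ARRAY['{"id": 26088, "name": "x"}'::jsonb, '{"id": 99}'::jsonb]),
(2, ARRAY['{"id": 42}'::jsonb]),
(3, ARRAY['{"id": 26088}'::jsonb]);

-- Returns rows 1 and 3, the ones holding an element with id 26088
SELECT * FROM foobar WHERE '{"id": 26088}'::jsonb <@ ANY (adeet);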

Inserting records into table1 depending on row value in table2

For each row in table exam where exam.examRegulation is null, I want to insert one corresponding row in table examRegulation and copy the column values from exam to examRegulation. Apparently the following query is too naive and must be improved:
insert into examRegulation (graduation, course, examnumber, examversion)
values (exam.graduation, exam.course, exam.examnumber, exam.examversion)
where ?? (select graduation, course, examnumber, examversion
from exam
where exam.examRegulation isnull)
Is there a way to do this in postgresql?
You may rephrase this as an INSERT INTO ... SELECT statement:
INSERT INTO examRegulation (graduation, course, examnumber, examversion)
SELECT graduation, course, examnumber, examversion
FROM exam
WHERE examRegulation IS NULL;
The VALUES clause, as the name implies, is for spelling out fixed rows; it cannot draw values from another table. If you need to populate an insert using query logic, then you need to use a SELECT clause.
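For contrast, a minimal sketch of the two forms side by side (the literal values are hypothetical):
-- VALUES: fixed rows known when the statement is written
INSERT INTO examRegulation (graduation, course, examnumber, examversion)
VALUES ('BSc', 'Mathematics', 101, 1);

-- INSERT ... SELECT: rows computed from another table
INSERT INTO examRegulation (graduation, course, examnumber, examversion)
SELECT graduation, course, examnumber, examversion
FROM exam
WHERE examRegulation IS NULL;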

How to include the values of a select statement in an insert? (PostgreSQL)

I have this select statement:
select it.item_id
from item as it
where it.owning_collection in (select col.collection_id
from collection as col
where col.name like '%Restricted%'
This returns around 3k results. Now I would like to make an insert into another table for each one of those results, with that item_id as one of the parameters, like this:
insert into metadatavalue (metadata_value_id, item_id, metadata_field_id, text_value, text_lang, place, confidence)
But since I'm not very experienced with databases, I'm not sure how to write these multiple inserts. All the other information needed in the insert statement consists of fixed values.
Table structures:
Table Item
*item_id
*submitter_id
*in_archive
*withdrawn
*last_modified
*owning_collection
*dicoverable
Table metadata
*metadata_value_id
*item_id
*metadata_field_id
*text_value
*text_lang
*place
*authority
*confidence
insert into metadatavalue (metadata_value_id, item_id, metadata_field_id, text_value, text_lang, place, confidence)
select 'metadata_value_id',it.item_id,'metadata_field_id','text_value', 'text_lang', 'place', 'confidence'
from item it
where it.owning_collection in (select col.collection_id
from collection as col
where col.name like '%Restricted%')
Replace the 'apostrophed' columns with their actual values.
INSERT INTO metadatavalue (item_id, metadata_field_id, text_value, text_lang, place, confidence)
SELECT it.item_id, <c1>, <c2>, <c3>, <c4>, <c5>
FROM item AS it
JOIN collection AS col ON col.collection_id = it.owning_collection
WHERE col.name LIKE '%Restricted%'
Where you replace <c1> etc. with your constant values. Note also that I have rewritten your SELECT query as a more efficient JOIN.
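To make that concrete, here is the same statement with hypothetical constants filled in; the real values (and whether metadata_value_id is generated by a default or sequence) depend on your schema:
INSERT INTO metadatavalue (item_id, metadata_field_id, text_value, text_lang, place, confidence)
SELECT it.item_id,
       64,            -- metadata_field_id (hypothetical)
       'Restricted',  -- text_value (hypothetical)
       'en',          -- text_lang (hypothetical)
       1,             -- place (hypothetical)
       600            -- confidence (hypothetical)
FROM item AS it
JOIN collection AS col ON col.collection_id = it.owning_collection
WHERE col.name LIKE '%Restricted%';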

Help with T-SQL IN statement with int

I am trying to create the following select statement in a stored proc
@dealerids nvarchar(256)
SELECT *
FROM INVOICES as I
WHERE convert(nvarchar(20), I.DealerID) in (@dealerids)
I.DealerID is an INT in the table, and the parameter for dealerids would be formatted such as
(8820, 8891, 8834)
When I run this with the parameters provided I get no rows back. I know these dealer IDs should produce rows, as when I query them individually I get back what I expect.
I think I am doing
WHERE convert(nvarchar(20), I.DealerID) in (@dealerids)
incorrectly. Can anyone point out what I am doing wrong here?
Use a table-valued parameter (new in SQL Server 2008). Set it up by creating the actual table parameter type:
CREATE TYPE IntTableType AS TABLE (ID INTEGER PRIMARY KEY)
Your procedure would then be:
Create Procedure up_TEST
@Ids IntTableType READONLY
AS
SELECT *
FROM ATable a
WHERE a.Id IN (SELECT ID FROM @Ids)
RETURN 0
GO
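To call the procedure, fill a variable of the table type and pass it in. A quick sketch using the dealer IDs from the question:
DECLARE @DealerIds IntTableType;
INSERT INTO @DealerIds (ID) VALUES (8820), (8891), (8834);
EXEC up_TEST @Ids = @DealerIds;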
If you can't use table-valued parameters, see "Arrays and Lists in SQL Server 2005 and Beyond, When Table Value Parameters Do Not Cut It" by Erland Sommarskog; there are many ways to split a string in SQL Server, and that article covers the PROs and CONs of just about every method. In general, you need to create a split function. This is how a split function can be used:
SELECT
*
FROM YourTable y
INNER JOIN dbo.yourSplitFunction(@Parameter) s ON y.ID=s.Value
I prefer the numbers-table approach to splitting a string in T-SQL, but there are numerous ways to split strings in SQL Server; see the previous link, which explains the PROs and CONs of each.
For the numbers-table method to work, you need to do this one-time table setup, which will create a table Numbers containing the integers 1 through 10,000:
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO Numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
Once the Numbers table is set up, create this split function:
CREATE FUNCTION [dbo].[FN_ListToTable]
(
    @SplitOn char(1)      --REQUIRED, the character to split the @List string on
   ,@List varchar(8000)   --REQUIRED, the list to split apart
)
RETURNS TABLE
AS
RETURN
(
    ----------------
    --SINGLE QUERY-- --this will not return empty rows
    ----------------
    SELECT
        ListValue
    FROM (SELECT
              LTRIM(RTRIM(SUBSTRING(List2, number + 1, CHARINDEX(@SplitOn, List2, number + 1) - number - 1))) AS ListValue
          FROM (SELECT @SplitOn + @List + @SplitOn AS List2) AS dt
          INNER JOIN Numbers n ON n.Number < LEN(dt.List2)
          WHERE SUBSTRING(List2, number, 1) = @SplitOn
         ) dt2
    WHERE ListValue IS NOT NULL AND ListValue != ''
);
GO
You can now easily split a CSV string into a table and join on it:
Create Procedure up_TEST
@Ids VARCHAR(MAX)
AS
SELECT * FROM ATable a
WHERE a.Id IN (SELECT ListValue FROM dbo.FN_ListToTable(',', @Ids))
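You can sanity-check the split function on its own before wiring it into the procedure:
SELECT ListValue FROM dbo.FN_ListToTable(',', '8820,8891,8834')
-- returns three rows: 8820, 8891, 8834

EXEC up_TEST @Ids = '8820,8891,8834'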
You can't use @dealerids like that; you need to use dynamic SQL, like this:
@dealerids nvarchar(256)
EXEC('SELECT *
FROM INVOICES as I
WHERE convert(nvarchar(20), I.DealerID) in (' + @dealerids + ')')
The downside is that you open yourself up to SQL injection attacks unless you specifically control the data going into @dealerids.
There are better ways to handle this depending on your version of SQL Server, which are documented in this great article.
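For what it's worth, if you are on SQL Server 2016 or later, the built-in STRING_SPLIT function avoids both dynamic SQL and a hand-rolled splitter. A sketch (STRING_SPLIT returns strings in a column named value, hence the cast):
SELECT I.*
FROM INVOICES AS I
WHERE I.DealerID IN (SELECT CAST(value AS int)
                     FROM STRING_SPLIT(@dealerids, ','))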
Split @dealerids into a table, then JOIN:
SELECT *
FROM INVOICES as I
JOIN
ufnSplit(@dealerids) S ON I.DealerID = S.ParsedIntDealerID
Assorted split functions here (I'd probably use a numbers table in this case for a small string).