How to query all DBs in Azure ElasticDB Pool? - tsql

How do I perform a SELECT on a table in all databases available in an Elastic Database pool? All of them have the same DB schema and they're created dynamically. I've explored Elastic Database query but got lost in the middle.
Reporting across scaled-out cloud databases
It asks to download a sample console application first, create a shard and then run the query, which is a bit confusing. Is there any way I can run T-SQL queries from SQL Server Management Studio to query all the databases?
PS: The DBs are not sharded. They're one DB per customer.
Thanks in Advance!

I'm thinking you need to add the databases as external sources so you can do a cross-database query; you will then be able to query the tables as if they were local.
I found a guide that can help you set it up:
Link to guide:
https://www.mssqltips.com/sqlservertip/4550/sql-azure-cross-database-querying/
From the guide:
DB1 has Db1Table table:
CREATE TABLE DB1.dbo.Db1Table (
    ID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    CustomerId INT,
    CustomerName NVARCHAR(50));
INSERT INTO DB1.dbo.Db1Table(CustomerId, CustomerName) VALUES
    ( 1, 'aaaaaaa' ),
    ( 2, 'bbbbbbb' ),
    ( 3, 'ccccccc' ),
    ( 4, 'ddddddd' ),
    ( 5, 'eeeeeee' );
DB2 has Db2Table table:
CREATE TABLE DB2.dbo.Db2Table (
    ID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    CustomerId INT,
    Country NVARCHAR(50));
INSERT INTO DB2.dbo.Db2Table(CustomerId, Country) VALUES
    ( 1, 'United States' ),
    ( 3, 'Greece' ),
    ( 4, 'France' ),
    ( 5, 'Germany' ),
    ( 6, 'Ireland' );
If we want to fetch customers whose country is Greece then we could do the following query:
SELECT
    db1.CustomerId,
    db1.CustomerName
FROM DB1.dbo.Db1Table db1
INNER JOIN DB2.dbo.Db2Table db2 ON db1.CustomerId = db2.CustomerId
WHERE db2.Country = 'Greece';
but instead of returning customerId 3 we get the following error:
Reference to database and/or server name in 'DB2.dbo.Db2Table' is not supported in this version of SQL Server.
In order to be able to perform a cross database query we need to perform the following steps:
Step 1: Create Master Key
The database master key is a symmetric key used to protect the private keys of certificates and asymmetric keys that are present in the database. More info here.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
-- Example --
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '3wbASg68un#q'
Step 2: Create Database Scoped Credential “my_credential”
A database credential is not mapped to a server login or database user. The credential is used by the database to access the external location anytime the database is performing an operation that requires access.
CREATE DATABASE SCOPED CREDENTIAL <credential_name>
WITH IDENTITY = '<user>',
SECRET = '<secret>';
-- Example --
CREATE DATABASE SCOPED CREDENTIAL my_credential
WITH IDENTITY = 'dbuser',
SECRET = '9Pfwbg68un#q';
credential_name
Specifies the name of the database scoped credential being created. credential_name cannot start with the number (#) sign. System credentials start with ##.
IDENTITY = 'identity_name'
Specifies the name of the account to be used when connecting outside the server.
SECRET = 'secret'
Specifies the secret required for outgoing authentication.
Step 3: Create External Data Source “my_datasource” of type RDBMS
This instruction creates an external data source for use in Elastic Database queries. For RDBMS, it specifies the logical server name of the remote database in Azure SQL Database.
-- (only on Azure SQL Database v12 or later)
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH (
    TYPE = RDBMS,
    LOCATION = '<server_name>.database.secure.windows.net',
    DATABASE_NAME = '<remote_database_name>',
    CREDENTIAL = <sql_credential>);
-- Example --
CREATE EXTERNAL DATA SOURCE my_datasource
WITH (
    TYPE = RDBMS,
    LOCATION = 'ppolsql.database.secure.windows.net',
    DATABASE_NAME = 'DB2',
    CREDENTIAL = my_credential);
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in Azure SQL Database.
TYPE = [ HADOOP | SHARD_MAP_MANAGER | RDBMS ]
Use RDBMS with external data sources for cross-database queries with Elastic Database query on Azure SQL Database.
LOCATION =
Specifies the logical server name of the remote database in Azure SQL Database.
DATABASE_NAME = 'remote_database_name'
The name of the remote database (for RDBMS).
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Step 4: Create External Table “mytable”
This instruction creates an external table for Elastic Database query.
CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH (
    DATA_SOURCE = <data_source_name>);
-- Example --
CREATE EXTERNAL TABLE [dbo].[Db2Table] (
    [ID] int NOT NULL,
    [CustomerId] INT,
    [Country] NVARCHAR(50)
) WITH ( DATA_SOURCE = my_datasource );
[ database_name . [ schema_name ] . | schema_name. ] table_name
The one to three-part name of the table to create. For an external table, only the table metadata is stored in SQL along with basic statistics about the file and/or folder referenced in Hadoop or Azure blob storage. No actual data is moved or stored in SQL Server.
[ ,...n ]
The column definitions, including the data types and number of columns, must match the data in the external files.
DATA_SOURCE = external_data_source_name
Specifies the name of the external data source that contains the location of the external data.
After running the DDL statements, you can access the remote table Db2Table as though it were a local table.
So now, if we want to fetch customers whose country is Greece, the query executes successfully:
SELECT
    db1.CustomerId,
    db1.CustomerName
FROM DB1.dbo.Db1Table db1
INNER JOIN DB1.dbo.Db2Table db2 ON db1.CustomerId = db2.CustomerId
WHERE db2.Country = 'Greece';
-- Result --
CustomerId | CustomerName
-------------------------
         3 | ccccccc
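Since the DBs here are one per customer rather than sharded, a sketch of how the guide above extends to "query them all" (assumptions: the server name, `CustomerDb*` database names, and the `Orders` columns below are placeholders; every customer DB exposes the same schema and the credential from Step 2 works on all of them):

-- Repeat Steps 3-4 once per customer database...
CREATE EXTERNAL DATA SOURCE src_customer1
WITH (
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',
    DATABASE_NAME = 'CustomerDb1',
    CREDENTIAL = my_credential);

CREATE EXTERNAL TABLE dbo.Orders_Customer1 (
    [ID] int NOT NULL,
    [CustomerId] INT,
    [Amount] DECIMAL(10, 2)
) WITH ( DATA_SOURCE = src_customer1 );

-- ...repeat for CustomerDb2, CustomerDb3, ...

-- ...then a view lets you query all of them from SSMS with plain T-SQL:
CREATE VIEW dbo.Orders_All AS
    SELECT 'CustomerDb1' AS SourceDb, * FROM dbo.Orders_Customer1
    UNION ALL
    SELECT 'CustomerDb2' AS SourceDb, * FROM dbo.Orders_Customer2;

Because the databases are created dynamically, you would have to (re)generate the data-source/external-table pair whenever a customer DB is added; if that churn is frequent, the sharded (SHARD_MAP_MANAGER) flavor of Elastic Query that the sample console application sets up may be the better fit after all.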

Related

pglogical logical plugin missing DML records

Created a pglogical replication slot:
SELECT 'init' FROM pg_create_logical_replication_slot('demo_slot', 'pglogical_output');
Created a table which has a primary key:
create table pglogical_test(id serial primary key,name text);
Inserted a few rows into the table:
insert into pglogical_test(name) values('hi');
Used the below query to check the replication slot data:
SELECT *
FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL,
'min_proto_version', '1', 'max_proto_version', '1',
'startup_params_format', '1', 'proto_format', 'json');
The slot is missing the I (insert), U (update) & D (delete) actions; only B (begin) & C (commit) are available. Sample slot data is below:
{"action":"S", "params": {"max_proto_version":"1","min_proto_version":"1","coltypes":"f","pg_version_num":"120009","pg_version":"12.9 (Debian 12.9-1.pgdg110+1)","pg_catversion":"201909212","database_encoding":"UTF8","encoding":"SQL_ASCII","forward_changeset_origins":"t","walsender_pid":"884","pglogical_version":"2.4.1","pglogical_version_num":"20401","binary.internal_basetypes":"f","binary.binary_basetypes":"f","binary.basetypes_major_version":"1200","binary.sizeof_int":"4","binary.sizeof_long":"8","binary.sizeof_datum":"8","binary.maxalign":"8","binary.bigendian":"f","binary.float4_byval":"t","binary.float8_byval":"t","binary.integer_datetimes":"f","binary.binary_pg_version":"1200","no_txinfo":"f"}}
{"action":"B", "has_catalog_changes":"t", "xid":"529", "first_lsn":"0/189E960", "commit_time":"2023-02-01 09:14:20.965952+00"}
{"action":"C", "final_lsn":"0/18C2190", "end_lsn":"0/18C28E8"}
{"action":"B", "has_catalog_changes":"f", "xid":"530", "first_lsn":"0/18C28E8", "commit_time":"2023-02-01 09:14:29.654792+00"}
{"action":"C", "final_lsn":"0/18C2A30", "end_lsn":"0/18C2A60"}
Followed documentation from https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/internals-doc/OUTPUT.md
Why are the I, U & D actions missing from pg_logical_slot_peek_changes?

Merge into dblink table does not work ORA-02022

I am trying to do a merge into a remote table via dblink (Oracle to PostgreSQL). I've created a similar view on the remote database and tried to modify the violating view in the SQL statement with the new view@remote. Here is my merge:
MERGE INTO PLUGIN_SUGGEST2_LOV d
USING (
    SELECT
        l_page_item AS page_item,
        l_items_arr(i) AS value,
        p_app_user AS app_user,
        p_app_id AS app_id
    FROM dual
) s ON ( s.page_item = d.page_item
     AND s.value = d.value
     AND s.app_user = d.app_user
     AND s.app_id = d.app_id )
WHEN MATCHED THEN UPDATE
    SET d.capture_date = sysdate
WHEN NOT MATCHED THEN
    INSERT ( capture_date, app_user, app_id, page_item, value )
    VALUES ( sysdate, s.app_user, s.app_id, s.page_item, s.value );
I still get this error:
The local view is unoptimized and contains references to objects
at the remote database and the statement must be executed at the
remote database.
Any suggestion?

How do I use the Class::DBI->sequence() method to fill 'id' field automatically in perl?

I'm following the example in Class::DBI.
I create the cd table like that in my MariaDB database:
CREATE TABLE cd (
    cdid INTEGER PRIMARY KEY,
    artist INTEGER, # references 'artist'
    title VARCHAR(255),
    year CHAR(4)
);
The primary key cdid is not set to auto-increment. I want to use a sequence in MariaDB, so I configured the sequence:
mysql> CREATE SEQUENCE cd_seq START WITH 100 INCREMENT BY 10;
Query OK, 0 rows affected (0.01 sec)
mysql> SELECT NEXTVAL(cd_seq);
+-----------------+
| NEXTVAL(cd_seq) |
+-----------------+
|             100 |
+-----------------+
1 row in set (0.00 sec)
And set-up the Music::CD class to use it:
Music::CD->columns(Primary => qw/cdid/);
Music::CD->sequence('cd_seq');
Music::CD->columns(Others => qw/artist title year/);
After that, I try these inserts:
# NORMAL INSERT
my $cd = Music::CD->insert({
cdid => 4,
artist => 2,
title => 'October',
year => 1980,
});
# SEQUENCE INSERT
my $cd = Music::CD->insert({
artist => 2,
title => 'October',
year => 1980,
});
The "normal insert" succeeds, but the "sequence insert" gives me this error:
DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use near ''cd_seq')' at line
1 [for Statement "SELECT NEXTVAL ('cd_seq')
"] at /usr/local/share/perl5/site_perl/DBIx/ContextualFetch.pm line 52.
I think the quotation marks ('') are provoking the error, because when I put the command "SELECT NEXTVAL (cd_seq)" (without quotations) into the mysql client it works (see above). I tried all combinations (', ", `, no quotation), but still...
Any idea?
My versions: perl 5.30.3, 10.5.4-MariaDB
The documentation for sequence() says this:
If you are using a database with AUTO_INCREMENT (e.g. MySQL) then you do not need this, and any call to insert() without a primary key specified will fill this in automagically.
MariaDB is based on MySQL. Therefore you do not need the call to sequence(). Use the AUTO_INCREMENT keyword in your table definition instead.
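A minimal sketch of that suggestion, reusing the column definitions from the question:

CREATE TABLE cd (
    cdid INTEGER PRIMARY KEY AUTO_INCREMENT, -- MariaDB generates the id
    artist INTEGER, # references 'artist'
    title VARCHAR(255),
    year CHAR(4)
);

With this definition you can drop the Music::CD->sequence('cd_seq') line entirely; insert() calls that omit cdid will pick up the auto-generated value, per the documentation quoted above.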

Merge join not able to join properly on varchar column

I've created the below code to implement SCD Type 2 using MERGE. When I run the code I get primary key violations on the csname field. I have the below values as part of the primary key; I'm not sure whether MERGE supports varchar join columns or not.
If I run a normal inner join on the same key then I get the matching records as well.
Any help much appreciated.
csname
ER - Building Complaints
TR - Building Applications
CREATE PROCEDURE dbo.load_target
AS
BEGIN
    INSERT INTO [TR_DW].[enum].[Rt] ([csname],[enddatetime],[EffectiveToDate],[EffectiveFromDate],[CurrentRecord])
    SELECT [csname],[enddatetime],[EffectiveToDate],[EffectiveFromDate],[CurrentRecord]
    FROM
    (
        MERGE [TR_DW].[enum].[Rt] RtCSQSuTT
        USING [TR].[enum].[Rt] RtCSQSuST
        ON (RtCSQSuTT.csname = RtCSQSuST.csname)
        WHEN NOT MATCHED THEN
            INSERT ([csname],[enddatetime],[EffectiveToDate],[EffectiveFromDate],[CurrentRecord])
            VALUES ([csname],[enddatetime],'12/31/9999', GETDATE(), 'Y')
        WHEN MATCHED AND RtCSQSuTT.[CurrentRecord] = 'Y' AND
            (ISNULL(RtCSQSuTT.[enddatetime], '') != ISNULL(RtCSQSuST.[enddatetime], '')) THEN
            UPDATE SET
                RtCSQSuTT.[CurrentRecord] = 'N',
                RtCSQSuTT.[EffectiveFromDate] = GETDATE() - 1,
                RtCSQSuTT.[EffectiveToDate] = GETDATE()
        OUTPUT $Action Action_Taken, RtCSQSuST.[csqname], RtCSQSuST.[enddatetime], '12/31/9999' AS [EffectiveToDate], GETDATE() AS [EffectiveFromDate], 'Y' AS [CurrentRecord]
    ) AS MERGE_OUT21
    WHERE MERGE_OUT21.Action_Taken = 'UPDATE';
END
GO

T-Sql update and avoid conflict

I'm trying to migrate a Tomcat app from using Postgres 9.5 to SQL Server 2016 and I've got a problem statement I can't seem to duplicate.
It's basically an upsert, but one of the complications is that the request supplies arguments to do the update, yet when there is a conflict I need to use some of the existing values from the conflicting rows to insert/update.
The primary keys in the table can sometimes cause a conflict, which requires updating rows and deleting the old ones.
The table schema in MS SQL looks like:
CREATE TABLE [dbo].[signup](
[site_key] [varchar](32) NOT NULL,
[list_id] [bigint] NOT NULL,
[email_address] [varchar](256) NOT NULL,
[customer_id] [bigint] NULL,
[attribute1] [varchar](64) NULL,
[date1] [datetime] NOT NULL,
[date2] [datetime] NULL,
CONSTRAINT [pk_signup] PRIMARY KEY CLUSTERED
(
[site_key] ASC,
[list_id] ASC,
[email_address] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
The old Postgres SQL looked like this:
WITH updated_rows AS (
INSERT INTO signup
(site_key, list_id, email_address, customer_id, attribute1, date1, date2)
SELECT site_key, list_id, :emailAddress, customer_id, attribute1, date1, date2
FROM signup WHERE customer_id = :customerId and email_address <> :emailAddress
ON CONFLICT (site_key, list_id, email_address) DO UPDATE SET customer_id = excluded.customer_id
RETURNING site_key, customer_id, email_address, list_id
)
DELETE FROM signup AS signup_delete USING updated_rows
WHERE
signup_delete.site_key = updated_rows.site_key
AND signup_delete.customer_id = updated_rows.customer_id
AND signup_delete.list_id = updated_rows.list_id
AND signup_delete.email_address <> :emailAddress;
Two arguments are supplied, customer id and email address, shown here as Spring NamedParameterJdbcTemplate values :customerId and :emailAddress
It's trying to change the email address of the customer id to be the supplied one, but sometimes the supplied email address already exists in the primary key constraint.
In which case it needs to change the existing customer id to be the supplied one, and remove the rows that don't match the new email address.
I also need to try and maintain isolation so that nothing can change the data whilst I'm updating.
I'm trying to do it with a MERGE statement but I can't seem to get it to work; it's complaining I can't use values that aren't in the clause scope, but I think I've probably got other issues here too.
This is what I had so far. It doesn't even address the deleting part - only the upserting, but I can't even get this part to work. I was planning to use the OUTPUT from this as input to something to delete the rows similar to the postgres version.
WITH source AS (
SELECT cs.[site_key] as existing_site_key,
cs.list_id as existing_list_id,
cs.email_address as existing_email,
cs.customer_id as existing_customer_id,
cs.attribute1 as existing_attribute1,
cs.date1 as existing_date1,
cs.date2 as existing_date2,
cs2.email_address as conflicting_email,
cs2.customer_id AS conflicting_customer_id
FROM [dbo].[signup] cs
LEFT JOIN [dbo].[signup] cs2 ON cs2.email_address = :emailAddress
AND cs.site_key = cs2.site_key
AND cs.list_id = cs2.list_id
WHERE cs.customer_id = :customerId
)
MERGE signup WITH (HOLDLOCK) AS target
USING source
ON ( source.conflicting_customer_id is not null )
WHEN MATCHED AND source.existing_site_key = target.site_key AND source.existing_list_id = target.list_id AND source.conflicting_email = target.email_address THEN UPDATE
SET customer_id = :customerId
WHEN NOT MATCHED BY target AND source.existing_site_key = target.site_key AND source.existing_list_id = target.list_id AND source.conflicting_customer_id = :customerId THEN INSERT
(site_key, list_id, email_address, customer_id, attribute1, date1, date2) VALUES
(source.existing_site_key, source.existing_list_id, :emailAddress, source.customer_id, source.existing_attribute1, source.existing_date1, source.existing_date2)
Thanks,
mikee