Create a database at a linked server with T-SQL

Let's say my application currently connects to a database on server A, and there is a function to create an audit database on server B. In this case, is it possible to create the database with T-SQL?

Usually, databases should be created when the application is first set up, not on the fly, and by connecting directly to the target server. If you want to reach another server from T-SQL, you can use the OPENROWSET function, although it is not designed for creating databases. To use OPENROWSET for ad hoc connections, you first need to enable the option, like this:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
go
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE
If you try to create a database using the OPENROWSET function, you may encounter several errors (like "The OLE DB provider "SQLNCLI10" for linked server "(null)" indicates that either the object has no columns or the current user does not have permissions on that object" or "CREATE DATABASE statement not allowed within multi-statement transaction").
You can overcome these limitations this way:
SELECT * FROM OPENROWSET('SQLNCLI',
'Server=YOURSERVERNAME;Trusted_Connection=yes;',
'ROLLBACK; CREATE DATABASE Test; SELECT 1 A')
However, this is not a normal usage of the OPENROWSET function, so if anything breaks, you did not learn this from me :-).

Related

In DBeaver, how can I run an SQL union query from two different connections..?

We recently migrated a large DB2 database to a new server. It got trimmed a lot in the migration; for instance, 10 years of data was chopped down to 3. But now I find that I need certain data from the old server until after tax season.
How can I run a UNION query in DBeaver that pulls data from two different connections..? What's the proper syntax for the table identifiers in the FROM and JOIN clauses..?
I use DBeaver for my regular SQL work, and I cannot determine how to span a UNION query across two different connections. However, I also use Microsoft Access, and I easily did it there with two Pass-Through queries that are fed to a native Microsoft Access union query.
But how can I do it in DBeaver..? I can't figure out how to use two connections at the same time.
For instance, my two connections are ASP7 (the new server) and OLD (the old one), and I need something like this...
SELECT *
FROM ASP7.F_CERTOB.LDHIST
UNION
SELECT *
FROM OLD.VIPDTAB.LDHIST
...but I get the following error, to which I say "No kidding! That's what I want!", lol... =-)
SQL Error [56023]: [SQL0512] Statement references objects in multiple databases.
How can this be done..?
This is not a feature of DBeaver. DBeaver can only access the data that the DB gives it, and this is restricted to a single connection at a time (save for import/export operations). This feature is being considered for development, so keep an eye out for this answer to be outdated sometime in 2019.
You can export data from your OLD database and import it into ASP7 using DBeaver (although the vendor's own tools are typically more efficient for this). Then you can do your union as suggested.
Many RDBMS offer a way to logically access foreign databases as if they were local, in which case DBeaver would then be able to access the data from the OLD database (as far as DBeaver is concerned in this situation, all the data is coming from a single connection). In Postgres, for example, one can use a foreign data wrapper to access foreign data.
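As a rough illustration, a postgres_fdw setup might look like the following (all names, hosts, and credentials below are invented; IMPORT FOREIGN SCHEMA needs PostgreSQL 9.5 or later):
-- Run in the "local" PostgreSQL database.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
-- Describe how to reach the remote (foreign) database.
CREATE SERVER old_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'old-host.example.com', port '5432', dbname 'olddb');
-- Map the local user to credentials on the remote server.
CREATE USER MAPPING FOR CURRENT_USER
    SERVER old_server
    OPTIONS (user 'report_user', password 'secret');
-- Expose the remote tables locally, then query them as if they were local.
CREATE SCHEMA old_data;
IMPORT FOREIGN SCHEMA public FROM SERVER old_server INTO old_data;
SELECT * FROM ldhist
UNION
SELECT * FROM old_data.ldhist;
Once the foreign tables exist, the UNION from the question runs against a single connection, which is all DBeaver needs.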
I'm not familiar with DB2, but a quick Google search suggests that you can set up foreign connections within DB2 using nicknames or three-part-names.
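Purely to show the general shape of the nickname approach on DB2 LUW with federation enabled (untested here, and the steps differ on DB2 for i, so verify against your platform's documentation; every name below is invented):
-- Requires FEDERATED=YES in the database manager configuration.
CREATE WRAPPER DRDA;
CREATE SERVER OLDSRV TYPE DB2/UDB VERSION '10.5' WRAPPER DRDA
    AUTHORIZATION "olduser" PASSWORD "oldpwd"
    OPTIONS (DBNAME 'OLDDB');
CREATE USER MAPPING FOR USER SERVER OLDSRV
    OPTIONS (REMOTE_AUTHID 'olduser', REMOTE_PASSWORD 'oldpwd');
-- The nickname makes the remote table look local, so it can appear in a UNION
-- alongside genuinely local tables.
CREATE NICKNAME OLDDATA.LDHIST FOR OLDSRV.VIPDTAB.LDHIST;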
This is covered in this GitHub issue:
https://github.com/dbeaver/dbeaver/issues/3605
The way to solve it is to create a task and execute it against different connections:
https://github.com/dbeaver/dbeaver/issues/3605#issuecomment-590405154

Does dropping a database have to be done outside of any transaction?

From https://wiki.postgresql.org/wiki/Psycopg2_Tutorial
PostgreSQL can not drop databases within a transaction, it is an all or nothing command. If you want to drop the database you would need to change the isolation level of the database; this is done using the following.
conn.set_isolation_level(0)
You would place the above immediately preceding the DROP DATABASE cursor execution.
Why "If you want to drop the database you would need to change the isolation level of the database"?
In particular, why do we need to change the isolation level to 0? (If I am correct, 0 means psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT.)
From https://stackoverflow.com/a/51859484/156458
The operation of destroying a database is implemented in a way which prevents undoing it - therefore you can not run it from inside a transaction because transactions are always undoable. Also keep in mind that unlike most other databases PostgreSQL allows almost all DDL statements (obviously not the DROP DATABASE one) to be executed inside a transaction.
Actually you can not drop a database if anyone (including you) is currently connected to this database - so it does not matter what is your isolation level, you still have to connect to another database (e.g. postgres).
"you can not run it from inside a transaction because transactions are always undoable". Then how can I drop a database not from inside a transaction?
I found my answer at https://stackoverflow.com/a/51880577/156458
I'm unfamiliar with psycopg2 so I can only provide steps to be performed.
Steps to be taken to perform DROP DATABASE from Python:
Connect to a different database, which you don't want to drop
Store current isolation level in a variable
Set isolation level to 0
Execute DROP DATABASE query
Set isolation level back to original (from #2)
Steps to be taken to perform DROP DATABASE from PSQL:
Connect to a different database, which you don't want to drop
Execute DROP DATABASE query
Code in psql
\c second_db
DROP DATABASE first_db;
Remember that there can be no live connections to the database you are trying to drop.
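To illustrate why the transaction matters, here is roughly what happens in plain SQL (assuming a database named first_db; the exact error text can vary between PostgreSQL versions):
BEGIN;
DROP DATABASE first_db;
-- ERROR:  DROP DATABASE cannot run inside a transaction block
ROLLBACK;
-- With autocommit (no surrounding BEGIN), and while connected to a different
-- database such as postgres, the same statement succeeds:
DROP DATABASE first_db;
Setting the isolation level to 0 in psycopg2 turns on autocommit, so the driver stops wrapping each statement in an implicit transaction; that, rather than any change in read semantics, is what lets DROP DATABASE run.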

How can I obtain the creation date of a DB2 database without connecting to it?

How can I obtain the creation date or time of an IBM DB2 database without connecting to the specified database first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why I need to do this, here is the background story.
I have a server with the DB2 DBMS where many people and automated scripts create databases for temporary tasks and tests. The data is never meant to be kept for long. However, for one reason or another (e.g. a developer not cleaning up after himself, or tests being stopped forcefully before they can do the clean-up), some databases never get dropped, and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than, say, 6 months.

db2 creating proxy user account

SQL Server has an option to create proxy user accounts with the statement
CREATE USER proxyUser WITHOUT LOGIN;
I couldn't find much help on the internet on getting the DB2 (v8) equivalent of this. I'm not sure whether this is possible; if it is, please let me know how.
The scenario where I want to use this is as follows.
I have a table with ~8 million records which gets updated daily. Before the inserts happen, some records are deleted from the table, roughly 2 million of them. Since these deletes need not be logged, we decided to turn off logging during the deletes. Since our credentials do not have ALTER TABLE rights, we decided to put the ALTER and DELETE statements in a script and execute the script under the proxy account, irrespective of which user executes the SP.
I found this article, which closely describes the scenario I described above. The differences are that I need to do this on DB2 and I need to do the deletes without logging them.
http://www.mssqltips.com/sqlservertip/2583/grant-truncate-table-permissions-in-sql-server-without-alter-table/
Thanks
Arjun
It will work basically in the same manner in DB2, with a few exceptions. Firstly, there's no TRUNCATE TABLE statement in DB2 8.2 (and there's no DB2 version 8 on Linux). Secondly, there are no database users in DB2 -- all users are defined externally in the operating system, so there's no CREATE USER statement either.
All statements in a stored procedure, except dynamic SQL, are executed with the authorization of the procedure creator.
So, using the authorized ID, e.g. the database administrator's ID, create the stored procedure that does what you need (ALTER, DELETE, whatever), then grant the EXECUTE privilege on that procedure to whoever needs to run it.
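As a rough sketch of that pattern (all object names are invented, and this has not been verified against DB2 8.2, so check the NOT LOGGED INITIALLY behaviour on your version), created under the DBA's authorization ID with @ as the statement terminator:
CREATE PROCEDURE DBADMIN.PURGE_STAGE ()
  LANGUAGE SQL
BEGIN
  -- ACTIVATE NOT LOGGED INITIALLY only works if the table was created with the
  -- NOT LOGGED INITIALLY attribute; it suppresses logging for the rest of this unit of work.
  ALTER TABLE STAGE.DAILY_FEED ACTIVATE NOT LOGGED INITIALLY;
  DELETE FROM STAGE.DAILY_FEED WHERE LOAD_DATE < CURRENT DATE - 180 DAYS;
  COMMIT;
END@
-- The application ID only needs EXECUTE on the procedure,
-- not ALTER or DELETE rights on the table itself:
GRANT EXECUTE ON PROCEDURE DBADMIN.PURGE_STAGE TO USER APPUSER@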

Synchronize between an MS Access (Jet / MADB) database and PostgreSQL DB, is this possible?

Is it possible to have an MS Access back-end database (Microsoft Jet or the Access Database Engine) set up so that whenever entries are inserted or updated those changes are replicated* to a PostgreSQL database?
Two-way synchronization would be nice, but one way would be acceptable.
I know it's popular to link the two and use one as a frontend, but it's essential that both be backend.
Any suggestions?
* ie reflected, synchronized, mirrored
Can you use Microsoft SQL Server Express Edition? Or do you have to use Microsoft Access Database Engine? It's possible you'll have more options using MS SQL express, like more complete triggers and logging.
Either way, you're going to need a way to accumulate a log of changed rows from the source database engine, and a program to sync them to PostgreSQL by reading the log and converting it into suitable PostgreSQL INSERT, UPDATE and DELETE statements.
You could do this by having audit triggers in MADB/Express insert a row into an audit shadow table for every "real" table whenever it changed, including inserting special "row deleted" audit entries. Then your sync program could connect to both MADB/Express, read the audit tables, apply the changes to PostgreSQL, and empty the audit tables.
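For instance, if SQL Server Express is the source engine, an audit trigger and shadow table might look roughly like this (the customer table and its columns are made up for illustration):
CREATE TABLE dbo.customer_audit (
    audit_id    INT IDENTITY PRIMARY KEY,
    customer_id INT       NOT NULL,
    op          CHAR(1)   NOT NULL,   -- 'I' = insert, 'U' = update, 'D' = delete
    changed_at  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER dbo.trg_customer_audit
ON dbo.customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows only in "inserted" are inserts, rows in both pseudo-tables are updates.
    INSERT INTO dbo.customer_audit (customer_id, op)
    SELECT i.customer_id,
           CASE WHEN d.customer_id IS NULL THEN 'I' ELSE 'U' END
    FROM inserted i
    LEFT JOIN deleted d ON d.customer_id = i.customer_id;
    -- Rows only in "deleted" are deletes.
    INSERT INTO dbo.customer_audit (customer_id, op)
    SELECT d.customer_id, 'D'
    FROM deleted d
    LEFT JOIN inserted i ON i.customer_id = d.customer_id
    WHERE i.customer_id IS NULL;
END;
GO
-- The sync program reads dbo.customer_audit, applies the changes to PostgreSQL,
-- and then clears the processed rows.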
I'll be surprised if you find anything to do this out of the box. It's one area where Microsoft SQL Server has a big advantage, because of all the deep Access and MADB engine integration behind its synchronization and integration features.
There are some ETL ("Extract, Transform, Load") tools that might be helpful, like Pentaho and Talend. I don't know if you can achieve the desired degree of automation with them though.