How do you specify a local database instance in T-SQL with the USE keyword?

I have several database names which exist on local, dev and live servers.
I want to ensure a potentially dangerous T-SQL script will always use the local db and not any other db by accident.
I can't seem to use the [USE] keyword with the local instance name followed by the db name.
It seems pretty trivial but I can't seem to get it to work.
I've tried this but no luck:
USE [MYMACHINE/SQLEXPRESS].[DBNAME]

The instance is determined by your connection/connection string. You connect to a specific instance, and all subsequent T-SQL is executed against that instance and that instance alone.
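For example (a sketch using the question's placeholder names; note the backslash in the instance name), you can pin both the instance and the database at connection time, and add an in-script guard as a second line of defense:

-- Connect to a specific instance and database from the command line:
--   sqlcmd -S MYMACHINE\SQLEXPRESS -d DBNAME -i dangerous_script.sql
-- In-script guard: stop execution when connected anywhere else.
IF @@SERVERNAME <> N'MYMACHINE\SQLEXPRESS'
BEGIN
    PRINT N'Not the local instance - skipping the rest of this script.';
    SET NOEXEC ON;  -- following statements are compiled but not executed
END
GO
USE [DBNAME];
GO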

The current answer is not correct for the question asked: you can specify a specific LocalDB file via the USE command in T-SQL. You just have to specify the fully qualified path name, which is also what you will see in the dropdown for the database list.
USE [C:\MyPath\MyData.mdf]
GO

Related

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs in case a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything done, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to the AUTONOMOUS_TRANSACTION of Oracle while using PostgreSQL (14).
I’ve seen the DBLINK and it seem the only thing close to an alternative, but I have found some problems:
I need to avoid the connection string, because the database host/port/name changes across environments; is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just reuse the calling connection.
Is it possible to create a function/procedure that takes care of it all, so I only have to call it from the Java side? Maybe that way I can somehow pass the connection data as a parameter, in case avoiding it entirely is not possible.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
which, without arguments, would target the same database where it is being executed.
The problem is that I need this to be done without specifying any connection data. The call will live inside a function on the executing database, in the same schema; that function will move from one environment to the next, and the code needs to stay identical, so any environment-specific name/user/password must be avoided. And since everything happens in the same database and schema, those values can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to gather some information first.
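For what it's worth, a minimal sketch of the dblink loopback idea (log_autonomous and error_log are made-up names; it assumes the dblink extension is installed and that authentication for the loopback connection succeeds, which for non-superusers typically still requires a password, e.g. via .pgpass):

CREATE OR REPLACE FUNCTION log_autonomous(msg text) RETURNS void AS $$
BEGIN
  -- Build the loopback connection string from the session's own settings,
  -- so nothing environment-specific is hardcoded in the function.
  PERFORM dblink_exec(
    format('dbname=%s port=%s', current_database(), current_setting('port')),
    format('INSERT INTO error_log(message) VALUES (%L)', msg)
  );
END;
$$ LANGUAGE plpgsql;

Because the insert travels over its own connection, it commits independently of the calling transaction, which is the autonomous-transaction behavior being asked about.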

Rename mongo collection without knowing its namespace

I am using liquibase and mongo to execute a rename migration like this:
<ext:runCommand>
  <ext:command><![CDATA[
    {
      renameCollection: "XXX.foo",
      to: "XXX.bar"
    }
  ]]></ext:command>
</ext:runCommand>
The renaming happens within the bounds of the existing DB, so cross-db migrations are not relevant to my use case. My problem is that I do not know XXX in advance. My liquibase migration is intended to run in multiple environments, and each one uses its own unique value of XXX.
Also, liquibase limits me to runCommand/adminCommand semantics, and the spec for them clearly says that I should provide full namespaces for that, which I cannot have.
Of course I could create multiple liquibase change sets, one for each environment, and hardcode the proper namespace in each one. But I would like to avoid that option, since it does not scale very well.
Is there any way to rename a mongo collection (using runCommand/adminCommand semantics) in a namespace-agnostic way?
Enter the database name as a parameter when executing liquibase update, and then use liquibase changelog property substitution in the changeset with the specified parameter. That should solve the problem.
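For example (a sketch; databaseName is just the parameter name chosen here), the changeset references the property:

<ext:runCommand>
  <ext:command><![CDATA[
    {
      renameCollection: "${databaseName}.foo",
      to: "${databaseName}.bar"
    }
  ]]></ext:command>
</ext:runCommand>

and the value is supplied at update time, e.g. liquibase -DdatabaseName=myActualDb update.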
Adding the way Alkis achieved it, as mentioned in the comments:
Just for the record, I use standalone liquibase, and I had to call
liquibaseInstance.getChangeLogParameters().set("databaseName", myRuntimeDetectedName);
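Expanded into a fuller sketch of that standalone setup (the changelog path and names are placeholders; database is assumed to be an already-opened liquibase.database.Database):

import liquibase.Contexts;
import liquibase.Liquibase;
import liquibase.resource.ClassLoaderResourceAccessor;

// Construct Liquibase, inject the runtime-detected database name as a
// changelog parameter, then run the update.
Liquibase liquibase = new Liquibase("changelog.xml",
        new ClassLoaderResourceAccessor(), database);
liquibase.getChangeLogParameters().set("databaseName", myRuntimeDetectedName);
liquibase.update(new Contexts());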

Deploying DB2 user-defined functions in sequence of dependency

We have about 200 user-defined functions in DB2. These UDFs are generated by Data Studio into a single script file.
When we create a new DB, we need to run the script file several times, because some UDFs depend on other UDFs and cannot be created until the functions they depend on exist.
Is there a way to generate a script file so that the order in which the functions are deployed takes this dependency into account? Or is there some other technique to arrange the order efficiently?
Many thanks in advance.
That problem should only happen if the setting of auto_reval is not correct. See "Creating and maintaining database objects" for details.
Db2 allows objects to be created in an "unsorted" order. Only when an object is used (accessed) are it and its dependent objects checked. This behavior was introduced a long time ago. Only some old, migrated databases keep auto_reval=disabled; some environments might set it based on configuration scripts.
If you still run into issues, try setting auto_reval=DEFERRED_FORCE.
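A sketch of inspecting and changing that setting from the Db2 command line, where MYDB is a placeholder database name (the first command shows the current AUTO_REVAL value, the second changes it):

db2 get db cfg for MYDB
db2 update db cfg for MYDB using AUTO_REVAL DEFERRED_FORCE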
The db2look system command can generate DDL ordered by object creation time with the -ct option, so that can help if you don't want to use the auto_reval method.
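For example (MYDB and the output file name are placeholders), extracting the DDL ordered by creation time looks roughly like:

db2look -d MYDB -e -ct -o create_objects.sql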

SqlCmd variable reference is not allowed in object names

I'm creating a Visual Studio database project.
In one of the scripts, I want to achieve something like:
CREATE USER [$(DatabaseName)\UserX]
WITHOUT LOGIN
WITH DEFAULT_SCHEMA = dbo
This pops up the error:
SQL70604: SqlCmd variable reference is not allowed in object names
($(DatabaseName)\UserX).
After a bit of study, the closest solution I found was to create the user via sp_executesql, but wouldn't that break the schema compare feature?
I'm not very familiar with database projects, but I imagine there is a better way to achieve this; I just need some direction.
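For reference, a sketch of that sp_executesql workaround, e.g. placed in a post-deployment script, where SQLCMD variables are substituted at publish time (whether the schema compare limitation is acceptable is another question):

IF NOT EXISTS (SELECT 1 FROM sys.database_principals
               WHERE name = N'$(DatabaseName)\UserX')
BEGIN
    -- Build the CREATE USER statement dynamically so the variable never
    -- appears in an object name that the project model validates.
    DECLARE @sql nvarchar(max) =
        N'CREATE USER ' + QUOTENAME(N'$(DatabaseName)\UserX') +
        N' WITHOUT LOGIN WITH DEFAULT_SCHEMA = [dbo];';
    EXEC sp_executesql @sql;
END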

Eclipse BIRT and Oracle: Need to set role before running report

Is it possible to set a database role before running a report? I have a number of databases each containing a number of schemas with the same set of tables, where each schema has a number of roles to control read, write, data management and so on. None of these are default roles.
In SQL*Plus or TOAD I can do SET ROLE before running a select statement. I would like to do the same in BIRT.
It may be possible to do this using the afterOpen event for the ODA Data Source, but I have not found any examples on how to get and use the native connection in JavaScript.
I am not allowed to add or change anything on the server end.
You can make an additional call to the database in the afterOpen method of the Data Source. You can use JavaScript or a Java event handler to execute the SET ROLE statement, or to call a stored procedure that executes it for you. This happens after the initial DB connection is made, but before the Data Set query runs. It will be a little tricky to use the data source connection to make that call, however, and I don't have the code right now to provide as an example.
Another way is to create a stored procedure Data Set that executes the desired command, and have it run first: drag and drop the Data Set into the report design and make it invisible. It will run before any other queries. Not the cleanest solution, but easy to do.
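A sketch of what that hidden Data Set's query text could be, using a "SQL Stored Procedure Query" data set to call the built-in Oracle package directly (MY_READ_ROLE is a placeholder role name), so nothing has to be added on the server side:

{call DBMS_SESSION.SET_ROLE('MY_READ_ROLE')}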
You can write a logon trigger and do a SET ROLE in that trigger (PL/SQL: DBMS_SESSION.SET_ROLE). You can determine the username, OS user, program, and machine of the user who wants to log in.
The approach of using a stored procedure to set the role won't work, at least not on Apache Derby. Reason: the lifetime of the SET ROLE is limited to the execution of the procedure itself; after returning from the procedure, the role is the same as before the procedure was called, i.e. the report executes as if no role had ever been set.