SqlCmd variable reference is not allowed in object names - tsql

I'm creating a Visual Studio database project.
In one of the scripts, I want to achieve something like:
CREATE USER [$(DatabaseName)\UserX]
WITHOUT LOGIN
WITH DEFAULT_SCHEMA = dbo
which pops up the error:
SQL70604: SqlCmd variable reference is not allowed in object names
($(DatabaseName)\UserX).
After a bit of study, the closest solution I found was to create the user via sp_executesql, but won't that break the schema compare feature?
I'm not very familiar with database projects, but I imagine there should be a better way to achieve this; I just need some direction.
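For reference, the sp_executesql workaround usually lives in a post-deployment script, where SQLCMD substitutes $(DatabaseName) as plain text before the batch runs, so the variable ends up inside an ordinary string literal rather than an object name. A minimal sketch, assuming the names from the question:

-- Post-deployment script (sketch): build the CREATE USER statement dynamically.
DECLARE @sql nvarchar(max) =
    N'CREATE USER ' + QUOTENAME(N'$(DatabaseName)\UserX')
  + N' WITHOUT LOGIN WITH DEFAULT_SCHEMA = dbo;';
EXEC sp_executesql @sql;

Since the user is then created outside the declarative model, schema compare will indeed flag it; excluding the Users object type in the compare options is one way to keep the diff quiet.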

Is it possible to prevent the SQL Producer from overwriting just one of the table's columns?

Scenario: A computed property needs to be available to RAW methods. The IsComputed property set in the model will not work, as its value will not be available to RAW methods.
Attempted Solution: Create a computed column directly on the SQL table, as opposed to setting the IsComputed property in the model, and specify that CodeFluent Entities not overwrite the computed column. I would then expect the BOM to read the computed SQL field no differently than if it were a normal database field.
Problem: I can't figure out how to prevent CodeFluent Entities from overwriting the computed column. I attempted to use the production flags as well as setting produce="false" for the property in the .cfp. Neither worked.
Question: Is it possible to prevent CodeFluent Entities from overwriting my computed column, and if so, how?
The solution you're looking for is here.
You can execute whatever custom T-SQL scripts you like; the only premise is to give the script a specific name so the Producer knows when to execute it.
For example, if you want your custom script to execute after the tables are generated, name your script
after_[ProjectName]_tables.
Save your custom T-SQL file alongside the CodeFluent-generated files and build the project.
In my specific case, I had to enable a full-text index on one of my table columns, so I wrote the SQL script for the functionality and saved it as
`after_[ProjectName]_relations_add`
Here's how they look in my file directory.
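As a sketch of what such a custom script can contain, here is roughly what enabling a full-text index looks like; the catalog, table, column, and key-index names are placeholders, not the ones from my project:

-- after_[ProjectName]_relations_add.sql (sketch): enable full-text search.
CREATE FULLTEXT CATALOG MyCatalog AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON dbo.MyTable (MyTextColumn)
    KEY INDEX PK_MyTable  -- must be a unique, single-column, non-nullable index
    ON MyCatalog;
GO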
Alternate Solution: An alternate solution is to execute the following T-SQL script after the SQL Producer finishes generating.
ALTER TABLE PunchCard DROP COLUMN PunchCard_CompanyCodeCalculated
GO
ALTER TABLE PunchCard
ADD PunchCard_CompanyCodeCalculated AS
    CASE
        WHEN PunchCard_CompanyCodeAdjusted IS NOT NULL THEN PunchCard_CompanyCodeAdjusted
        ELSE PunchCard_CompanyCode
    END
GO
Additional Configuration Needed to Make the Solution Work: For this solution to work, one must also configure the BOM so that it does not attempt to save the data associated with the computed columns. This can be done in the Model using the advanced properties: in my case I selected the CompanyCodeCalculated property, went to the advanced settings, and set the Save setting to False.
Question: Somewhere in the Knowledge Center there is a passing reference on how to automate the execution of SQL scripts after the SQL Producer finishes, but I cannot find it. Does anybody know how this is done?
Post-Usage Comments: Just wanted to let people know that I implemented this approach and am so far happy with the results.

Fetching object source programmatically

I need access to the complete source code of objects in order to automate certain tasks. For example, the complete source of a view is the view itself, plus its rules, triggers, privileges...
Using different PostgreSQL tools like pgAdmin, pg_dump, or psql, this can easily be fetched, but I need to be able to access it through an (SQL/PL/pgSQL) function call.
It's not too difficult to implement an API looking like this: getFunctionSource, getTableSource, and so on. However, it looks like this code would need a lot of maintenance across different database versions.
Is there officially maintained or well tested extension, API, pg_dump wrapper or whatever I can use?
If you run psql -E, you'll see the hidden queries that Postgres runs to output data definitions.
A function's raw source, for instance, can be found by running \df foo, reading the query it emits, and subsequently trying:
select prosrc from pg_proc where proname = 'foo'
\sf foo doesn't reveal the relevant query using that approach, but a cursory peek at the docs on system information functions (of which there are many) suggests that it's just a wrapper around:
select pg_get_functiondef('foo'::regproc);
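The same family of catalog functions covers most other object types; a few more examples, with the object names being hypothetical:

select pg_get_viewdef('my_view'::regclass);    -- view definition
select pg_get_indexdef('my_index'::regclass);  -- CREATE INDEX statement
select pg_get_triggerdef(t.oid)                -- CREATE TRIGGER statement
from pg_trigger t where t.tgname = 'my_trigger';
select pg_get_ruledef(r.oid)                   -- rule definition
from pg_rewrite r where r.rulename = 'my_rule';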
A few views to get you started, if you go down that route:
https://gist.github.com/ddebernardy/7893922
(You'll want to create a "system" schema before running the file using \i in psql.)

How do you specify a local database instance in TSQL with the USE keyword?

I have several databases whose names exist on local, dev, and live servers.
I want to ensure a potentially dangerous T-SQL script will always use the local db and not any other db by accident.
I can't seem to use the USE keyword with the local instance name followed by the db name.
It seems pretty trivial, but I can't get it to work.
I've tried this but no luck:
USE [MYMACHINE/SQLEXPRESS].[DBNAME]
The instance is going to be determined through your connection/connection string. You connect to a specific instance and then all subsequent T-SQL will be executed against that instance and that instance alone.
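If the worry is running the script against the wrong server by accident, one defensive pattern is to have the script check the instance name itself before doing anything dangerous. A minimal sketch, with the machine, instance, and database names as placeholders:

-- Abort the remaining batches unless we are on the expected local instance.
IF @@SERVERNAME <> N'MYMACHINE\SQLEXPRESS'
BEGIN
    RAISERROR(N'Wrong server: this script is local-only.', 16, 1);
    SET NOEXEC ON;  -- subsequent batches are compiled but not executed
END
GO
USE [DBNAME];
GO
-- ...potentially dangerous statements here...
SET NOEXEC OFF;
GO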
The current answer is not correct for the question asked, as you can specify a specific LocalDB file via the USE command in T-SQL. You just have to specify the fully qualified path name, which is what you will also see in the dropdown for the database list.
USE [C:\MyPath\MyData.mdf]
GO

Oracle Global Temporary Tables and using stored procedures and functions

We recently changed one of the databases I develop on from Oracle accounts to LDAP login accounts, and all went well for the front end used by the staff that access the system. However, we have a second method of entry restricted to admin staff who load the data onto the database, and a lot of the processing is called using dbms_scheduler.
Most of the database tables have a created_by column which is defaulted to pick up the user name from a sys_context, but when the data loads are run from dbms_scheduler this information is not available, and hence the created_by columns all get populated with APP_GLOBAL.
I have managed to populate a global temporary table (GTT) with the sys_context value and use it to populate created_by from a stored procedure called by dbms_scheduler, so my next logical step was to put this in a function and call it, so it could be used throughout the system or even be referenced from a before-insert trigger.
The problem is, when I put the code into a function, the data from the GTT is not found. The table is set to preserve rows.
I have trawled many a site for an answer but have found nothing to help me. Can anyone here provide a solution?
The scheduler will be using a different session than the session that created the job; ON COMMIT PRESERVE ROWS does not make the GTT data visible in a different session.
I am assuming the created_by columns have a default value like nvl(sys_context(...), 'APP_GLOBAL'). Consider passing the user name as a parameter to the job and setting the context as the first step in the job.
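A sketch of that suggestion, with every object name hypothetical (an application context can only be set by the package it was created with, hence the wrapper package):

-- Application context bound to a trusted package (names are placeholders).
CREATE CONTEXT app_ctx USING app_ctx_pkg;

CREATE OR REPLACE PACKAGE app_ctx_pkg AS
  PROCEDURE set_user(p_user IN VARCHAR2);
END app_ctx_pkg;
/
CREATE OR REPLACE PACKAGE BODY app_ctx_pkg AS
  PROCEDURE set_user(p_user IN VARCHAR2) IS
  BEGIN
    DBMS_SESSION.SET_CONTEXT('APP_CTX', 'USER_NAME', p_user);
  END set_user;
END app_ctx_pkg;
/
-- Wrapper the scheduler calls: set the context first, then run the load.
CREATE OR REPLACE PROCEDURE run_data_load(p_user IN VARCHAR2) AS
BEGIN
  app_ctx_pkg.set_user(p_user);
  data_load_pkg.run;  -- the existing load procedure (hypothetical)
END run_data_load;
/
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name            => 'DATA_LOAD_JOB',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'RUN_DATA_LOAD',
    number_of_arguments => 1,
    enabled             => FALSE);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('DATA_LOAD_JOB', 1, 'JSMITH');
  DBMS_SCHEDULER.ENABLE('DATA_LOAD_JOB');
END;
/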
A weekend off and a closer look at my code revealed a fatal flaw in my syntax, whereby the selection of data from the GTT would never happen. A quick tweak and recompile, and all is well.
Jack, thanks for your help.

Eclipse BIRT and Oracle: Need to set role before running report

Is it possible to set a database role before running a report? I have a number of databases, each containing a number of schemas with the same set of tables, where each schema has a number of roles to control read, write, data management, and so on. None of these are default roles.
In SQL*Plus or TOAD I can do a SET ROLE before running a select statement. I would like to do the same in BIRT.
It may be possible to do this using the afterOpen event of the ODA Data Source, but I have not found any examples of how to get and use the native connection in JavaScript.
I am not allowed to add or change anything on the server end.
You can make an additional call to the database in the afterOpen method of the Data Source. You can use JavaScript or a Java event handler to execute the SET ROLE statement, or to call a stored procedure that will execute it for you. This happens after the initial DB connection is made but before the Data Set query runs. It will be a little tricky to use the data source connection to make that call, however, and I don't have the code right now to provide as an example.
Another way is to create a stored-procedure Data Set that will execute the desired command, and have it execute first: drag and drop the Data Set into the report design and make it invisible. It will run before any other queries. Not the cleanest solution, but easy to do.
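On the Oracle side, the procedure behind such a Data Set might look like the sketch below; the role name is a placeholder. Note that DBMS_SESSION.SET_ROLE may only be called from an anonymous block or an invoker's-rights routine, hence the AUTHID CURRENT_USER (and see the Derby caveat at the end of this thread):

-- Invoker's-rights wrapper so the role change applies to the calling session.
CREATE OR REPLACE PROCEDURE set_report_role
  AUTHID CURRENT_USER
AS
BEGIN
  DBMS_SESSION.SET_ROLE('REPORT_READ');  -- hypothetical role name
END set_report_role;
/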
Hope that helps
You can write a logon trigger and do a SET ROLE in this trigger (PL/SQL: DBMS_SESSION.SET_ROLE). You can determine the username, osuser, program, and machine of the user who wants to log in.
The approach of using a stored procedure to set the role won't work, at least not on Apache Derby. Reason: the lifetime of the SET ROLE is limited to the execution of the procedure itself; after returning from the procedure, the role is the same as it was before the procedure was called, i.e. for executing the report it is as if no role had ever been set.