SSRS 2008 to SSRS 2017 Migration - Subscription issues

We have migrated from SQL 2008/SSRS 2008 to SQL 2017/SSRS 2017. We still have one issue where I cannot see the subscriptions I created in 2008. Below is the error I get.
I have my domain account setup as System Administrator/System User in Site Settings and I am in an AD group that has Browser, Content Manager, My Reports, Publisher, Report Builder to all folders/objects.
I have googled & googled and all articles point to this being a SECURITY issue, but I cannot see what I am missing.

This turned out to be a subscription ownership issue. I changed the owner of all my subscriptions to my new domain\user and this solved the issue.
-- Run against the ReportServer catalog database
DECLARE @OldUserID uniqueidentifier
DECLARE @NewUserID uniqueidentifier
SELECT @OldUserID = UserID FROM dbo.Users WHERE UserName = 'old_domain\username'
SELECT @NewUserID = UserID FROM dbo.Users WHERE UserName = 'new_domain\username'
UPDATE dbo.Subscriptions SET OwnerID = @NewUserID WHERE OwnerID = @OldUserID
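Before running an ownership UPDATE like this, it can be worth checking what it will touch. A quick sanity query against the same ReportServer catalog (using the Subscriptions and Users tables referenced above) counts subscriptions per owner, so you can compare the counts before and after the change:

```sql
-- List subscription counts per owner in the ReportServer catalog.
SELECT u.UserName, COUNT(*) AS SubscriptionCount
FROM dbo.Subscriptions s
JOIN dbo.Users u ON u.UserID = s.OwnerID
GROUP BY u.UserName
ORDER BY u.UserName;
```

After the UPDATE, the old account should show zero rows here.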


SQL Developer version 21.2 view definition not displaying

I am using SQL Developer version 21 (recently installed). I can't access the view SQL/definition from the view tab. I can see the view text from the "Details" tab, but not from the "SQL" tab.
I don't have admin privileges.
The same user can view view SQL from SQL Developer version 18.
In older versions of SQL Developer we had a 'try to generate DDL' method for when DBMS_METADATA.GET_DDL() wasn't available.
This wasn't a maintainable position. The 'internal generator' has multiple issues, and we decided to deprecate it.
To see the DDL for an object, the DBMS_METADATA package needs to be available to your user for that object.
What SQL Developer runs to get you the DDL for a VIEW is approximately:
SELECT
dbms_metadata.get_ddl(
'VIEW',
:name,
:owner
)
FROM
dual
UNION ALL
SELECT
dbms_metadata.get_ddl(
'TRIGGER',
trigger_name,
owner
)
FROM
dba_triggers
WHERE
table_owner = :owner
AND table_name = :name
UNION ALL
SELECT
dbms_metadata.get_dependent_ddl(
'COMMENT',
table_name,
owner
)
FROM
(
SELECT
table_name,
owner
FROM
dba_col_comments
WHERE
owner = :owner
AND table_name = :name
AND comments IS NOT NULL
UNION
SELECT
table_name,
owner
FROM
sys.dba_tab_comments
WHERE
owner = :owner
AND table_name = :name
AND comments IS NOT NULL
)
In a development environment, a developer should have full access to their application, and I would extend that to the data dictionary. It's another reason I advocate developers have their own private database (Docker/VirtualBox/Cloud/whatever).
If that fails, consult your data model.
If you don't have a data model, that's another problem.
If that fails, you do have the workaround of checking the Details panel for a view to get the underlying SQL.
Just FYI, I searched for an answer to this problem and found no actual solutions.
thatjeffsmith was correct that earlier versions of SQL Developer do not have this issue or requirement of higher privileges to view the SQL tab. However, the link he provided was for version 20.4, and it still did not display the SQL tab correctly. I reverted to 3.1.07 (which I happened to be using prior to upgrading my laptop) and, using the same login to the same instance, it does display the SQL for views, full definition, without issue. This is against a 12c Oracle database.

How can I see the sql statement of a view (db resides on AWS)

I've just installed the VS Code extension (Oracle Developer Tools for VS Code (SQL and PLSQL)) and successfully connected to the db.
The db resides on AWS.
I can connect to the db and just wanted to test it by opening an existing view.
But it only lets me "describe" the view, so I can see the columns, but I need to edit the query statement.
What's missing? Or is the problem the AWS part?
I usually use SQL Developer, but I'm also interested in backing up the work via git commits, and I like the way "git graph" extensions present the changes.
DDL view_name
Or
SELECT
text_vc
FROM
dba_views
WHERE
owner = :schema AND
view_name = :view_name;
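If your account can't read the DBA_ views (the asker mentions not having admin privileges), the ALL_ equivalent shows any view you have privileges on. A small variation on the query above, assuming Oracle 12.1 or later where the TEXT_VC column exists:

```sql
-- Same lookup, but against ALL_VIEWS, which only needs access
-- to the view itself rather than DBA-level privileges.
SELECT
text_vc
FROM
all_views
WHERE
owner = :schema AND
view_name = :view_name;
```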
With help from someone of the Oracle community I managed to get it working.
Basic query is:
select
dbms_metadata.get_ddl('VIEW', 'VIEW_NAME', 'VIEW_OWNER')
from
dual;
So, in my case it is:
select
dbms_metadata.get_ddl('VIEW', 'ALL_DATA_WAREHOUSE_BOSTON', 'WHB')
from
dual;
Owner is the name you fill in when connecting to the database, i.e. the username of the username/password pair.
If you are not sure who the owner of the view is, check it with this query:
select owner from ALL_VIEWS where VIEW_NAME ='ALL_DATA_WAREHOUSE_BOSTON';

After database migration to a new DB2/400 server, the table and column labels are no longer accessible. What server settings to enable?

We have a 3rd-party DB2/400 application that's the core of our business. It was recently migrated from our private server with AS400/i v6r1 on Power7 to a hosted cloud service with AS400/i v7r3 on Power9.
Since the migration, SQL clients cannot see TABLE_TEXT or COLUMN_TEXT when browsing tables in whatever sort of database explorer they have. In most cases, the text is supposed to show up under "Remarks" or "Description" when browsing tables or columns in the explorer, but it no longer does.
Even IBM Data Studio won't show the data in those columns, though it does provide the information, buried deep and inconvenient to access.
What DB2 server settings are involved in providing this metadata to SQL clients? I've searched the IBM website, but the mountains of answers are overwhelming.
I would like to be forearmed with this information before I discuss the issue with our hosting provider. They provide the ODBC/JDBC connection "mostly unsupported", but I'm hoping they'll consider helping us with this issue if I can describe the server settings in as much detail as possible.
To be clear, what I'm looking for is the labels from the DDL statements, such as these:
LABEL ON TABLE "SCHEMA1"."TABLE1" IS 'Some Table Description';
LABEL ON COLUMN "SCHEMA1"."TABLE1"."COLUMN1" IS 'Some Column Desc';
The clients cannot access the labels, yet the following SQL queries can:
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TEXT
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA = 'SCHEMA1'
AND TABLE_NAME = 'TABLE1'
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TEXT
FROM QSYS2.SYSCOLUMNS
WHERE TABLE_SCHEMA = 'SCHEMA1'
AND TABLE_NAME = 'TABLE1'
I've tried the clients and drivers listed below, and none of them can access the labels for tables or columns. I've read many posts on StackOverflow and elsewhere, and tried many tweaks of settings in the clients and drivers, but nothing has worked. It seems clear this is an issue on the new server.
Clients:
DBeaver 5.2.5 (my preferred client)
Squirrel SQL 3.8.1
SQL Workbench 124
IBM Data Studio 4.1.3
Drivers:
JTOpen 6.6
JTOpen 7.6 (with recent download of IBM Data Studio)
JTOpen 9.5
I posted this question in the IBM forums, and received the answer I needed:
table and column labels are no longer accessible to JDBC clients
The solution is to set the JDBC driver property as follows:
metadata source = 0
With this change, the other properties seem to not be necessary for my situation. After setting the metadata source property, I made test-changes to the other two, but I didn't see any obvious difference:
remarks = true
extended metadata = true
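For reference, JT400 (Toolbox for Java) driver properties like these can also be set directly on the JDBC connection URL, semicolon-separated; the hostname below is a placeholder:

```sql
-- JDBC URL form of the same driver properties (JT400 / jt400.jar):
-- jdbc:as400://myhost;metadata source=0;remarks=true;extended metadata=true
```

With "metadata source = 0", the driver builds DatabaseMetaData results from the SYSIBM stored procedures rather than SQL views, which is what restored the table/column text in this case.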
With SquirrelSQL 3.9 and JtOpen, you have to select two options in the driver properties:
remarks = true
extended metadata = true
In the new session configuration, check SQL / Display metadata, and voilà:
Checked with V7R1, with DDS comments or SQL Labels
ODBC/JDBC use a different set of catalogs, located in the SYSIBM schema:
sysibm.sqltables
sysibm.sqlcolumns
etc.
ODBC and JDBC Catalog views

Transaction context in use by another session

I have a table called MyTable on which I have defined a trigger, like so:
CREATE TRIGGER dbo.trg_Ins_MyTable
ON dbo.MyTable
FOR INSERT
AS
BEGIN
SET NOCOUNT ON;
insert SomeLinkedSrv.Catalog.dbo.OtherTable
(MyTableId, IsProcessing, ModifiedOn)
values (-1, 0, GETUTCDATE())
END
GO
Whenever I try to insert a row in MyTable, I get this error message:
Msg 3910, Level 16, State 2, Line 1
Transaction context in use by another session.
I have SomeLinkedSrv properly defined as a linked server (for example, select * from SomeLinkedSrv.Catalog.dbo.OtherTable works just fine).
How can I avoid the error and successfully insert record+execute the trigger?
Loopback linked servers can't be used in a distributed transaction if MARS is enabled.
Loopback linked servers cannot be used in a distributed transaction.
Trying a distributed query against a loopback linked server from
within a distributed transaction causes an error, such as error 3910:
"[Microsoft][ODBC SQL Server Driver][SQL Server]Transaction context in
use by another session." This restriction does not apply when an
INSERT...EXECUTE statement, issued by a connection that does not have
multiple active result sets (MARS) enabled, executes against a
loopback linked server. Note that the restriction still applies when
MARS is enabled on a connection.
http://msdn.microsoft.com/en-us/library/ms188716(SQL.105).aspx
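If the loopback hop can't be removed, one documented server option stops calls through that linked server from being promoted to a distributed transaction in the first place. This is a sketch using the linked server name from the question; weigh the loss of distributed atomicity before enabling it in production:

```sql
-- Disable distributed-transaction promotion for the linked server
-- (available since SQL Server 2008). Remote statements then run in
-- their own transaction instead of enlisting in the caller's.
EXEC sp_serveroption
    @server   = N'SomeLinkedSrv',
    @optname  = N'remote proc transaction promotion',
    @optvalue = N'false';
```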
I solved it.
I was using a linked server to call a second procedure, and inside that procedure I was using the same linked server again.
It's very easy once you know the restrictions of linked servers.
I resolved it by removing the linked server reference used inside a stored procedure that was itself called over the same linked server. It wasn't working in the DEV environment.
One cause of this situation is a trigger that fires on a linked-server database table (the SQL Server version processing the statement also matters). To avoid this error during query execution, we can temporarily disable the affected triggers and re-enable them after the update, guarded by a database-name check. Here is an example:
Select * From People where PersonId In (@PersonId, @PersonIdRight)
IF 'DOUBLE' = DB_NAME()
    ALTER TABLE [dbo].[PeopleSites] DISABLE TRIGGER [PeopleSites_ENTDB_UPDATE]
Update PeopleSites Set PersonId = @PersonIdRight Where PersonId = @PersonId
IF 'DOUBLE' = DB_NAME()
    ALTER TABLE [dbo].[PeopleSites] ENABLE TRIGGER [PeopleSites_ENTDB_UPDATE]
Select * From PeopleSites where PersonId In (@PersonId, @PersonIdRight)
I also got the same error in our DEV environment; moving the linked databases to another SQL instance resolved the issue. In our production environment these databases are already on separate instances.
In my case I was using SQL 2005 and got "transaction context in use by another session" when running INSERT...EXEC over a linked server. The fix for me was to patch from SP2 build 3161 to SP3. SP2 cumulative update 5 is supposed to fix it as well.
https://support.microsoft.com/en-us/kb/947486
When the remote database sits on the same server, configure the linked server without specifying the database server IP/hostname and port. Just the database name should be sufficient.
I was getting the same "transaction context in use by another session error" when trying to run an UPDATE query:
BEGIN TRAN
--ROLLBACK TRAN
--COMMIT TRAN
UPDATE did
SET did.IsProcessed = 0,
did.ProcessingLockID = NULL
FROM [proddb\production].DLP.dbo.tbl_DLPID did (NOLOCK)
WHERE did.dlpid IN ('bunch of GUIDs')
--WHERE did.DLPID IN (SELECT DLPID FROM #TableWithData)
However, I didn't realize I was already running this on the DLP database on the ProdDb\Production server. Once I removed the "[proddb\production].DLP.dbo." prefix from the query, it worked fine.

SQL: OPENQUERY Not returning all rows

I have the following, which queries a linked server I have to talk to.
SELECT
*
FROM
OPENQUERY(DWH_LINK, 'SELECT * FROM TABLEA ')
It will typically return most of the data, but some rows are missing.
The linked server is coming from an Oracle client.
Is this a problem anyone has encountered with OPENQUERY?
I had exactly the same problem.
The root cause is that you've set up your linked server using ODBC instead of OLE DB.
Here's how I fixed it:
Delete the linked server from SQL Server
Right click on the "Linked Servers" folder and select "New Linked Server..."
Linked Server: enter anything..this will be the name of your new linked server
Provider: Select "Oracle Provider for OLE DB"
Product Name: enter "Oracle" (without the double quotes)
Data Source: enter the alias from your TNSNAMES.ORA file. In my case, it was "ABC.WORLD" (without the double quotes)
Provider String: leave it blank
Location: leave it blank
Catalog: leave it blank
Now go to the "Security" tab, and click the last radio button that says "Be made using this security context:" and enter the username & password for your connection
That should be it!
This seems to be related to the underlying provider capabilities and others have also run into this and similar size/row limitations. One possible work-around would be to implement an iterative/looping query with some filtering built in to pull back a certain amount of rows. With oracle, I think this might be using the rownum (not very familiar with oracle).
So something like
--Not tested sql, just winging it syntax-wise
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM TABLEA where rownum between 0 AND 500')
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM TABLEA where rownum between 500 AND 1000')
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM TABLEA where rownum ...')
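One caveat on the untested sketch above: in Oracle, ROWNUM is assigned as rows are returned, so a predicate like "rownum between 500 AND 1000" yields no rows at all. Fetching anything past the first chunk needs ROWNUM materialized in a subquery first. A hedged rework against the same hypothetical TABLEA and DWH_LINK:

```sql
-- ROWNUM must be captured in an inner query before it can be
-- filtered with a lower bound in the outer query.
SELECT * FROM OPENQUERY(DWH_LINK, '
    SELECT * FROM (
        SELECT t.*, ROWNUM AS rn FROM TABLEA t WHERE ROWNUM <= 1000
    ) WHERE rn > 500
')
```

On Oracle 12c and later, "OFFSET 500 ROWS FETCH NEXT 500 ROWS ONLY" inside the pass-through query achieves the same thing more readably.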
BOL:
link
This is subject to the capabilities of the OLE DB provider. Although the query may return multiple result sets, OPENQUERY returns only the first one.
I had this same problem using the Oracle 10 instant client and ODBC. I used this client as I am connecting to an Oracle 10gR2 database. I opened a ticket with Microsoft support and they suggested using the Oracle 11 instant client. Surprise! Uninstalling the 10g instant client, installing the 11g instant client and rebooting resolved the issue.
Ken
I had the exact same problem with SQL 2014 getting data from SQL 2000 through OPENQUERY. Because of an ODBC compatibility problem, I had to keep the generic OLE DB provider for ODBC drivers. Moreover, the problem occurred only with a non-admin SQL account.
So finally, the solution I found was to add SET ROWCOUNT 0:
SELECT * FROM OPENQUERY(DWH_LINK, 'SET ROWCOUNT 0 SELECT * FROM TABLEA ')
It seems the rowcount had been changed somewhere in the SQL procedure (or for this user session), so setting it to 0 forces it to return all rows.