I am a first-time SQL Profiler user. I only want to check the parameters being passed from the SSRS report to the stored procedure it calls.
So, I start the trace in SQL Profiler and then view the report, during which the stored procedure is called and the fields are populated. Yet I don't see the parameters being passed to it.
All I see in the Profiler is the SP:CacheMiss event.
I don't know if this matters, but I would like to mention that Profiler is running on my local machine, while I am connecting to the application database over VPN.
I set up the trace by supplying the database username and password. One more thing I would like to mention: I already have the application running, at the point where the reports are about to be used, before I even set up the trace, because as I understand it a trace takes up a lot of memory.
Am I doing something wrong? Please guide me.
Also, the database server is SQL Server 2012, whereas my SQL Profiler is the SQL Server 2008 version. Could this be an issue?
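For reference, when parameter values do show up in a trace, they appear in the TextData of the RPC events (RPC:Starting / RPC:Completed) as a complete procedure call. A made-up example of what such a row looks like (the procedure and parameter names here are assumptions, not from the actual report):

    -- Hypothetical example of RPC:Completed TextData when SSRS
    -- invokes a report's stored procedure:
    exec dbo.uspGetSalesReport @StartDate = '2013-01-01',
                               @EndDate   = '2013-01-31',
                               @RegionId  = 4

If a trace shows only SP:CacheMiss, the RPC events may simply not be selected in the trace definition.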
I am developing reports for a system using Workforce Time and Attendance. The system uses an Oracle database, Crystal Reports, and (I think) Business Objects. My reports use parameters. I want to know how those parameters are being used. Is there a way to see the actual SQL that's being sent to the database server?
Please don't respond with Database | Show SQL Query... That shows me the SQL I created (with the parameter names). I want to see what the database server is receiving.
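If it helps, one hedged option on the Oracle side is to look the statement up in the shared SQL area right after running the report, assuming you have privileges on the V$ views (the LIKE filter below is a placeholder; use some text unique to your report's query):

    -- Find the SQL text the Oracle server actually received.
    -- Requires access to V$SQL (e.g. via SELECT_CATALOG_ROLE).
    SELECT sql_id, sql_fulltext, last_active_time
    FROM   v$sql
    WHERE  UPPER(sql_fulltext) LIKE '%TIMECARD%'  -- placeholder filter
    ORDER  BY last_active_time DESC;

The text stored there is what the server parsed, so parameters will appear either as bind variables or as literal values, depending on how Crystal submits them.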
I have a SQL Server 2008 R2 instance with a database in it.
How can I find a certain query that was executed, and from what IP address?
I have tried to go through the transaction logs, but I can't understand anything there.
You should use SQL Server Profiler. It's usually installed by default - look in the SQL Server folder on the Start Menu. When you open it, start a new trace and connect to the server. In the Trace Properties dialog choose the TSQL template. This will then record all the queries running on the database, along with a whole lot of other stuff. It's not massively easy to track things down in there, but look for the SQL:BatchStarting events to find the SQL that gets run. Then you can run the procedure sp_who2 on the server so you can match up the SPIDs in the trace to logins.
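Profiler itself doesn't display the client IP address, but one way to get it (assuming SQL Server 2005 or later and VIEW SERVER STATE permission) is to map the SPID from the trace onto the connection DMVs:

    -- Map a SPID seen in Profiler to the IP address it connected from.
    SELECT
        s.session_id,          -- matches the SPID column in Profiler
        s.login_name,
        s.host_name,
        c.client_net_address   -- the client's IP address
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_connections AS c
        ON s.session_id = c.session_id
    WHERE s.session_id = 53;   -- replace 53 with the SPID from the trace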
I need to figure out a way to run under debug in SQL Server Management Studio 2008 R2, but I am given some restrictions that I cannot find a workaround for:
The user needs to be the owner of a specific schema, and that schema must be their default schema, because we already have many stored procedures whose queries do not specify a schema name.
For that reason, the user cannot have the sysadmin privilege, as you cannot modify the DEFAULT_SCHEMA for sysadmins.
But since debugging requires the sysadmin privilege, this conflicts with the first requirement.
As far as I know, the only way to debug in SQL Server 2008 is to be given the sysadmin privilege, as this is how Microsoft designed the software. What could be a possible workaround for this?
I understand that the recommended answer would be somewhere along the lines of changing how we wrote the stored procedures, or reconsidering the database design, but sadly this is not an option.
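For anyone unfamiliar with the constraint being described, it boils down to this (a minimal illustration; the user and schema names are made up):

    -- Setting a default schema works for an ordinary user...
    ALTER USER ReportUser WITH DEFAULT_SCHEMA = Reporting;

    -- ...but not for a sysadmin: members of the sysadmin role map to
    -- the dbo user in every database, so their unqualified object
    -- names always resolve to the dbo schema, and dbo's default
    -- schema cannot be changed.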
Please help!
By "run under debug" do you mean you want to debug stored procedures interactively using the T-SQL debugger?
If you are having problems getting this running, and your objective is to debug your code, you could also use SQL Profiler to observe exactly what is being executed inside your stored procedure. It of course will not support breakpoints or stepping through statements, but it will let you observe what is being executed.
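To capture that statement-level detail, the events to pick in the trace are SP:StmtStarting and SP:StmtCompleted. A hedged scripted equivalent, using the documented sp_trace_* procedures (the file path is an assumption; adjust it for your server):

    -- Minimal server-side trace capturing statements run inside
    -- stored procedures.
    DECLARE @TraceID int,
            @maxsize bigint = 50,   -- max trace file size in MB
            @on      bit    = 1;

    EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\sp_debug', @maxsize;

    -- Event 44 = SP:StmtStarting, 45 = SP:StmtCompleted.
    -- Column 1 = TextData, 12 = SPID, 14 = StartTime.
    EXEC sp_trace_setevent @TraceID, 44, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 44, 12, @on;
    EXEC sp_trace_setevent @TraceID, 45, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 45, 12, @on;
    EXEC sp_trace_setevent @TraceID, 45, 14, @on;

    EXEC sp_trace_setstatus @TraceID, 1;   -- start the trace
    SELECT @TraceID AS TraceID;            -- note this down to stop it later:
    -- EXEC sp_trace_setstatus @TraceID, 0;  -- stop
    -- EXEC sp_trace_setstatus @TraceID, 2;  -- close and delete the definition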
Taken from MSDN: Run the Transact-SQL Debugger
We recommend that Transact-SQL code be debugged on a test server, not a production server, for the following reasons: Debugging is a highly privileged operation. Therefore, only members of the sysadmin fixed server role are allowed to debug in SQL Server. Debugging sessions often run for long periods of time while you investigate the operations of several Transact-SQL statements. Locks, such as update locks, that are acquired by the session might be held for extended periods, until the session is ended or the transaction committed or rolled back.
If you have concerns, you may want to report your wish at the Microsoft Connect site: http://connect.microsoft.com/sqlserver
Here's the situation: I receive reports written by a vendor, which are all developed against their own Oracle DB. Normally there is no issue in setting a new data source to our own Oracle DB, but this one report in particular is not playing nicely.
The report in question has 8 SQL Expressions, and a subreport with an additional 3 SQL Expressions (I mention this because I suspect it may have something to do with it, but I'm not sure; it's almost as if CR is attempting to verify the SQL Expressions against the old DB). I'm able to update the data source of the subreport just fine, but when I try it with the main report, Crystal prompts me repeatedly for the login to the OLD DB where the report was developed (which I obviously do not have access to). The prompt is inescapable, and I have to terminate Crystal's process each time.
I've tried unchecking all report and database checking/verification options in CR to no avail. If anyone has any advice as to what I could try next, it would be greatly appreciated!
EDIT: Well, it looks like all I had to do was close the login window a BILLION times (OK, more like ~16; twice for each SQL Expression?). Leaving the question open, though, to see if there is any way to avoid having to go through this for future reports.
EDIT EDIT: Some more details. This is still happening with CR 2008 SP3 attempting to connect to an Oracle 11g database with the 11g R2 client. I'm not sure how these reports were developed, but it was with CR XI at the earliest.
I have seen this with migrated reports before, but it's been a long time.
If you had to do it a million times, I would have guessed that you actually had 999,999 sub-reports -- all pointing at the same old data source, all needing to fail verification before trying the new data source. Sorry, I just re-read... I meant to put 'a BILLION' minus 1.
Did you check your TNSNames against whatever config the vendor supplied?
Are you using the same driver used by the vendor Oracle reports? (Oracle driver vs MS ODBC for Oracle vs CR ODBC for Oracle vs MyPrettyPony ODBC Dri...)
Did you go through the Set DataSource exercise in CR?
Can you save the subreports separately and run them individually without needing the (insert some large number) login window closures?
Can you create a new report, set to your own Oracle data source? (I have to assume this one is ok, based on your comments)
If you copy the Show SQL Query and use that as a Command query in a new report, does that run? (Rinse and repeat as sub-reports).
(I'm stalling for time as I search my memory for the last time I experienced the same...)
I have a report that runs one query in 9 different Oracle databases. If one of these databases is down, the whole report bombs. Is there any way to set it so it ignores this failure and proceeds with the rest of the report?
I'm not sure if there is a way to do what you want within RS, but here's my idea, if you can make the call from a SQL Server database:
Set up linked servers on a SQL Server instance, one per Oracle database.
Wrap the call in a stored procedure, and return an alternate data set if the call to the linked server fails (see the sketch below).
You could probably do something similar if there is an Oracle server you can use instead.
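A minimal sketch of that wrapper, assuming a linked server named ORA_DB1 and made-up table and column names. The linked-server query runs inside EXEC so that a compile-time failure (such as the server being unreachable; OPENQUERY contacts the linked server up front to get metadata) happens in a child scope, where TRY...CATCH can still trap it:

    CREATE PROCEDURE dbo.GetSiteData
    AS
    BEGIN
        SET NOCOUNT ON;

        BEGIN TRY
            -- Run the remote query in a child scope (EXEC) so that
            -- an unreachable linked server is caught below.
            EXEC ('SELECT site_id, metric_value
                   FROM OPENQUERY(ORA_DB1,
                        ''SELECT site_id, metric_value FROM metrics'')');
        END TRY
        BEGIN CATCH
            -- Fallback: an empty result set with the same shape, so
            -- the report still renders instead of failing outright.
            SELECT CAST(NULL AS int)           AS site_id,
                   CAST(NULL AS decimal(18,2)) AS metric_value
            WHERE 1 = 0;
        END CATCH
    END;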