Informing the user about the progress of SQL procedure execution in MS Access - tsql

In my application I have SQL Server 2008 on the back end and MS Access 2010 on the front end. I have long-running procedures that are executed through MS Access. I need to show a message box in the MS Access front end to the user while the SQL procedure is running. Please give me an idea of how I can accomplish that. Thanks in advance!

MS Access can't make asynchronous calls: even if you had a way to set a file-based counter from your stored procedure, you wouldn't be able to track it from inside Access while the stored procedure is running.
The only two ways I see that could do something like this are to:

1. change your stored procedure to handle some kind of paging counter - using a loop in VBA, and passing, tracking and incrementing a first pointer and last pointer to your stored procedure - then reporting on progress, or,
2. find a way to shell out (no-wait) to call the stored procedure and then monitor and report on the progress in VBA.
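A minimal sketch of the first approach, assuming a hypothetical dbo.ProcessBatch procedure and a dbo.WorkTable with an Id key and a Processed flag (all placeholder names). VBA would call this repeatedly, advancing the pointers and updating a progress display between calls:

    -- Batched version of a long-running procedure (hypothetical names).
    -- Works only on rows between @FirstRow and @LastRow so the VBA loop
    -- can report progress between calls. ROW_NUMBER() keeps this
    -- compatible with SQL Server 2008 (no OFFSET/FETCH).
    CREATE PROCEDURE dbo.ProcessBatch
        @FirstRow INT,
        @LastRow  INT,
        @RowsDone INT OUTPUT
    AS
    BEGIN
        SET NOCOUNT ON;

        WITH numbered AS (
            SELECT Id, ROW_NUMBER() OVER (ORDER BY Id) AS rn
            FROM dbo.WorkTable
        )
        UPDATE w
        SET    w.Processed = 1
        FROM   dbo.WorkTable w
        JOIN   numbered n ON n.Id = w.Id
        WHERE  n.rn BETWEEN @FirstRow AND @LastRow;

        SET @RowsDone = @@ROWCOUNT;
    END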

Related

Does every procedure call create a recursive session in Oracle?

I'm trying to solve a case of constant ORA-00018 "maximum number of sessions exceeded" crashes, even though the sessions parameter is set to 1500. During those crashes the number of v$session entries is sometimes 50% off the v$resource_limit.current_utilization value for the sessions parameter. So we suspect a lot of recursive sessions are being created, and that quickly brings the DB server down. I know that triggers can cause a lot of recursive sessions. Does every procedure call have the same effect, or is there a specific kind of procedure that generates them? I tried to test by checking the current_utilization value before and after running a simple procedure and didn't see a difference - maybe because my test procedure is too simple and too fast to notice. I've read this article http://tech.e2sn.com/oracle/oracle-internals-and-architecture/recursive-sessions-and-ora-00018-maximum-number-of-sessions-exceeded but it's not clear to me whether every procedure runs in a different session. I'm using Oracle 10g.
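For reference, the utilization figure the poster is comparing against can be read with a standard dictionary query (shown as a sketch):

    -- Compare the live session count against the tracked utilization
    -- and the configured limit.
    SELECT resource_name,
           current_utilization,
           max_utilization,
           limit_value
    FROM   v$resource_limit
    WHERE  resource_name = 'sessions';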

How to create warning message in trigger?

Is it possible to create a warning message in a trigger in Firebird 2.5?
I know I can create an exception message which will stop the user from saving the record changes, but in this instance I don't mind if the user continues.
Could I call a procedure that generates the message?
There is no mechanism in Firebird to produce warnings in PSQL code; you can only raise exceptions, which in triggers will cause the effect of the statement that fired the trigger to be undone.
In short, this is not possible.
There are workarounds possible, but they would require an 'external' protocol - for example, inserting the warning message into a global temporary table and requiring the calling code to explicitly select from that temporary table after execution.
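A sketch of that workaround, assuming a hypothetical warnings GTT and an orders table (placeholder names); the trigger records the warning instead of raising an exception, so the update still succeeds:

    -- Transaction-scoped GTT that carries warnings back to the client.
    CREATE GLOBAL TEMPORARY TABLE warnings (
        msg VARCHAR(255)
    ) ON COMMIT DELETE ROWS;

    SET TERM ^ ;
    CREATE TRIGGER trg_orders_warn FOR orders
    ACTIVE BEFORE UPDATE POSITION 0
    AS
    BEGIN
        IF (NEW.amount > 10000) THEN
            -- Record a warning instead of raising an exception,
            -- so the statement is not undone.
            INSERT INTO warnings (msg)
            VALUES ('Amount over 10000 - please verify');
    END^
    SET TERM ; ^

After the update, the client selects from warnings to see whether anything was flagged.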
The SQL model does not provide a way to put a query on pause and then wait for extra input from the client to either unfreeze it or fail it. SQL is not a user-interactive service and there are no confirmation dialogs. You have to rethink your application design.
One possible avenue, nominally staying within the 2-tier client-server framework, would be to create temporary tables for all the data you want to save (for example transaction-scoped GTTs), and then have TWO stored procedures. One SP would sanity-check the data and return a list of warnings, if any. The other SP would dump the data from the GTTs into the main, persistent tables without doing those checks (a sketch of the pair follows below).
Your client app would select warnings from the check-SP first; if it returns any, show them to the user, then either call the save-SP and commit, or roll back without calling the save-SP.
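A rough sketch of the two procedures, with hypothetical gtt_orders and orders tables:

    SET TERM ^ ;

    -- Check-SP: validates the staged rows and returns warnings, if any.
    CREATE PROCEDURE sp_check_orders
    RETURNS (warning VARCHAR(255))
    AS
    BEGIN
        FOR SELECT 'Amount over 10000 for order ' || CAST(id AS VARCHAR(10))
            FROM gtt_orders
            WHERE amount > 10000
            INTO :warning
        DO
            SUSPEND;
    END^

    -- Save-SP: dumps the staged rows into the persistent table,
    -- without repeating the checks.
    CREATE PROCEDURE sp_save_orders
    AS
    BEGIN
        INSERT INTO orders (id, amount)
        SELECT id, amount FROM gtt_orders;
    END^

    SET TERM ; ^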
This is abusing the C/S idea, so there will be dragons. First of all, you would have to have several GTTs and two SPs for E-V-E-R-Y pausable data save in your app. And that can be a lot.
Also, notice that the database data may change after you call the check-SP and before you call the save-SP, because some OTHER application running elsewhere could be changing and committing data during that pause. This is especially likely if your transaction is of the READ COMMITTED kind, but it can happen with SNAPSHOT transactions too.
A better approach would be to drop the C/S scheme and go to a 3-tier model, AKA multi-tier, AKA "Application Server". That way your client app sends the "briefcase" of data to the app server; the app server (not SQL triggers) does all the data validation and then saves it to the data storage backend, SQL or any other.
There would, of course, still be the problem that the data could have been changed by other users while you paused one user and waited for him to read and decide. But you would have more flexibility for data reconciliation in an app server than you would with plain SQL.

SSIS: sql statement as parameter

I have a button on my site where the user clicks to export data to an Excel file. The problem I am facing now is that the data gets too large (40+ MB) and the web request times out.
The web page takes a parameter from a dropdown box and then passes it to a stored procedure.
My solution to this is to dump the data into an Excel file on a network drive instead of returning it directly to the user. The user will be notified, via msdb.dbo.sp_send_dbmail, once the file is ready.
I found articles on the internet showing how to pass a parameter to a stored procedure within SSIS, but not how to pass a whole SQL statement to SSIS.
I'm new to SSIS and would really appreciate a detailed example.
Thank you!
I am not familiar with the web side, so I'm not interested in making any changes to the web application at this moment.
With the Execute SQL Task you can put the SQL statement in a variable or in a separate file.
In your Execute SQL Task, set the SQLSourceType property to either Variable or File connection.
Then, for the SQL statement, point it at the appropriate variable or file connection.
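For the notification step mentioned in the question, msdb.dbo.sp_send_dbmail can be called once the package has written the file; a minimal sketch, where the mail profile, recipient, and message text are placeholders:

    -- Notify the user that the export file is ready.
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'ExportMailProfile',   -- placeholder Database Mail profile
        @recipients   = N'user@example.com',    -- placeholder recipient
        @subject      = N'Excel export ready',
        @body         = N'Your export has been written to the network drive.';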

How to transfer data from Sql Server 2008 R2 database to CRM 2011 internal database using Plugin?

Scenario:
X_source = N/A.
Y_source = SQL server 2008 R2.
Z_source = CRM 2011 database.
I have a Y_source that will be updated daily with information from X_source at certain intervals. After that is done, Z_source has to connect to Y_source and upload that information. I have no control over X_source or Y_source, but I do know that Y_source will be on the same network as Z_source.
Problem:
Since I know there are more than 200,000 records in Y_source, I can't just fetch all the records and upload them to Z_source at once. I have to find a way to iterate through them, either in batches or one by one. The idea I have in mind is to use T-SQL cursors, but this seems like the wrong approach.
Sources:
I have the address and credentials to both Y & Z. I also have control over Z_source.
Edit
OK, let me clear up some things that I think may be important:
Z_source is indeed a database that is separate from CRM 2011, but it is the origin of its source.
Also, the process that updates Z_source can be external to CRM 2011, which means that as long as the database is updated it does not matter whether CRM triggered the update or not.
The number of records to be handled will be well over 200,000.
I don't know if you're familiar with SSIS, but I think it could really help you!
Here are two nice posts about it: http://gotcrm.blogspot.be/2012/07/crm-2011-data-import-export-using-cozy.html and http://a33ik.blogspot.be/2012/02/integrating-crm-2011-using-sql.html
Regards,
Kévin
The solution I came up with was to create a C# console application that connects to Y_source and retrieves the data; then, using the CRM 2011 SDK, I took the quickstart app in Sdk/samplecode/cs/quickstart and modified it to insert into Z_source. This app runs via a Windows scheduled task 6 hours after Y_source gets updated, so I don't need a precise trigger for this.
A few things:
Plugins in CRM 2011 are analogous to SQL triggers: CRM events, such as Create, Delete, Update, Merge, etc., trigger the execution of code you've written in a plugin. This doesn't seem appropriate for your situation, as you want to do your operations in batches independently of CRM actions.
Nothing in CRM 2011 is done in set-based batches. Everything is done one database row at a time. (To prove this, profile any CRM event that you'd think should be done in one set and see the resultant SQL.) However, just because CRM 2011 can't use set-based operations doesn't mean you have to gather your source data from SQL Server one row at a time.
So I recommend the following (a batching sketch follows below):

1. Write a quick app that pulls all the data from SQL Server at once. Call .ToList() on the result to place the result set in memory.
2. Loop through the list of rows and, for each, do the appropriate action in CRM 2011 (Create, Update, Delete, etc.).
3. For each row, include the unique identifier of that row in the CRM record so you'll know in the future whether to delete or update the record when syncing with Y_source.
4. Schedule your app to run whenever Y_source is updated.
Depending on your needs, the app can become a CLR stored procedure that is scheduled or triggered in SQL Server, a console app that's run on a schedule on a server, or anything else that can accomplish the above. The recent question Schedule workflows via external tool speaks to this as well.
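If pulling everything into memory at once ever becomes too heavy, keyset paging is a set-based alternative to the T-SQL cursor idea from the question; a sketch with hypothetical table and column names, compatible with SQL Server 2008 R2:

    -- Fetch the next batch of rows after the last key already transferred.
    -- dbo.SourceRecords, Id, Name, ModifiedOn are placeholder names.
    DECLARE @LastId INT = 0,        -- highest Id transferred so far
            @BatchSize INT = 5000;

    SELECT TOP (@BatchSize)
           Id, Name, ModifiedOn
    FROM   dbo.SourceRecords
    WHERE  Id > @LastId
    ORDER BY Id;

    -- The calling app remembers the MAX(Id) of each batch and repeats
    -- until fewer than @BatchSize rows come back.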

Eclipse BIRT and Oracle: Need to set role before running report

Is it possible to set a database role before running a report? I have a number of databases each containing a number of schemas with the same set of tables, where each schema has a number of roles to control read, write, data management and so on. None of these are default roles.
In SQL*Plus or TOAD I can do a SET ROLE before running a select statement. I would like to do the same in BIRT.
It may be possible to do this using the afterOpen event of the ODA Data Source, but I have not found any examples of how to get and use the native connection in JavaScript.
I am not allowed to add or change anything on the server end.
You can make an additional call to the database in the afterOpen method of the Data Source. You can use JavaScript or a Java event handler to execute the SET ROLE statement, or to call a stored procedure that executes it for you. This happens after the initial DB connection is made, but before the Data Set query runs. It will be a little tricky to use the data source connection to make that call, however, and I don't have the code right now to provide as an example.
Another way is to create a stored-proc Data Set that executes the desired command, and have it execute first: drag and drop the Data Set into the report design and make it invisible. It will run before any other queries. Not the cleanest solution, but easy to do.
Hope that helps
Le Birt Expert
You can write a logon trigger and do a SET ROLE in this trigger (PL/SQL: DBMS_SESSION.SET_ROLE). You can determine the username, OS user, program and machine of the user who wants to log in.
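A sketch of such a logon trigger, with placeholder user and role names; SYS_CONTEXT supplies the session attributes mentioned above:

    -- AFTER LOGON trigger that enables a role for the reporting user.
    -- BIRT_USER and REPORT_READER are placeholder names.
    CREATE OR REPLACE TRIGGER trg_set_report_role
    AFTER LOGON ON DATABASE
    BEGIN
        IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'BIRT_USER' THEN
            DBMS_SESSION.SET_ROLE('REPORT_READER');
        END IF;
    END;
    /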
The approach of using a stored procedure to set the role won't work - at least not on Apache Derby. Reason: the lifetime of the SET ROLE is limited to the execution of the procedure itself. After returning from the procedure, the role is the same as before the procedure was called, i.e. for executing the report it is as if no role had ever been set.