I have a button on my site that the user clicks to export data to an Excel file. The problem I'm facing now is that the data has grown too large (40+ MB) and the web request throws a timeout error.
The page takes a parameter from a dropdown box and passes it to a stored procedure.
My solution to this is to dump the data to an Excel file on a network drive instead of returning it directly to the user. The user will be notified, via msdb.dbo.sp_send_dbmail, once the file is ready.
I found articles on the internet showing how to pass a parameter to a stored procedure within SSIS, but not how to pass a whole SQL statement to SSIS.
I'm new to SSIS and would really appreciate a detailed example.
Thank you!
I'm not familiar with the web side, so I'm not interested in making any changes to the web application at this moment.
With the Execute SQL Task you can put the SQL statement in a variable or in a separate file.
In your Execute SQL Task, set the SQLSourceType property to either Variable or File connection.
Then set SQLStatement to the appropriate variable or file connection.
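As a rough sketch, the SQL held in the variable or file might look like the following. The procedure name, Database Mail profile, and recipient are placeholders, and the dropdown parameter could be spliced in with an SSIS expression on the variable (e.g. "EXEC dbo.usp_ExportData @Category = '" + @[User::Category] + "'"):

-- Hypothetical statement stored in the SSIS variable/file:
-- run the export procedure, then notify the user by email.
EXEC dbo.usp_ExportData @Category = 'Hardware';

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',
    @recipients   = 'user@example.com',
    @subject      = 'Export complete',
    @body         = 'Your Excel file is ready on the network share.';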
We are trying to create an SSAS tabular model for 60-100 customers.
Creating a single model and processing all customers' data is time consuming (until the data refresh is finished, each and every customer has to wait for the latest data; we update every 15 minutes).
Creating multiple tabular models, on the other hand, makes reprocessing and troubleshooting easy, but maintaining and deploying changes is difficult. If I need to add new measures or tables, I would like to apply them to all the models.
I was wondering if anyone can suggest the best way to deploy changes and additions across the different tabular models.
If you've worked with SSIS, it can be used to deploy across multiple servers. An overview follows. The package takes a list of server names that you supply, iterates through them, and executes the DDL for the updated tabular model against each one. The same method can also be used for cube processing, with the create DDL replaced by a processing script (a sketch of one follows the steps below). If the model is being deployed to a server for the first time, ensure that it's processed before it's queried or used by any client tools, and make sure the processing of changed objects is handled accordingly as well.
When connected to SSAS in SSMS, right-click the model, select Script > Script Database As > Create or Replace To >, then choose where to output the script. Note that, for security purposes, the script will not include passwords, so these will need to be handled accordingly.
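For a model at compatibility level 1200 or higher, the generated script is TMSL (JSON); for older levels it is XMLA. Heavily trimmed, the TMSL envelope looks roughly like this ("MyTabularModel" is a placeholder, and a real script contains the full model definition where the empty collections are shown):

{
  "createOrReplace": {
    "object": { "database": "MyTabularModel" },
    "database": {
      "name": "MyTabularModel",
      "compatibilityLevel": 1200,
      "model": {
        "dataSources": [ ],
        "tables": [ ],
        "relationships": [ ]
      }
    }
  }
}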
Create an SSIS package. In the package, create an Analysis Services connection manager. This can be set to a server where the tabular database currently exists.
Create a String variable and leave it blank. This can be called DeployServerName. Also create an Object variable, which can be called ServerList. On the SSAS connection manager, open the Properties window (press F4), then select the Expressions ellipsis. In the window that comes up, choose the ServerName property and set the DeployServerName variable as the expression (i.e. @[User::DeployServerName]). This will allow the server name to change across the multiple deployment targets.
Add an Execute SQL Task to the control flow. This is where you will get the server names to deploy to. If they're stored in a master/lookup table, just select the column holding the server names as the SQL statement. You can also list the destination server names individually as plain text with UNIONs.
Example
SELECT 'Server1' AS DestServer
UNION
SELECT 'Server2' AS DestServer
On the Execute SQL Task, set the ResultSet property to Full Result Set. Then on the Result Set pane, enter 0 for the Result Name and the object variable created earlier (ServerList) for the Variable Name field.
Next, create a Foreach Loop after the Execute SQL Task and connect the two. Use the Foreach ADO Enumerator as the enumerator type and select the object variable (ServerList) as the ADO object source variable. On the Variable Mappings pane, map the string variable (DeployServerName) to Index 0.
Inside the Foreach Loop, add an Analysis Services Execute DDL Task. Use the SSAS connection manager you created as the connection, Direct Input as the SourceType, and enter the script generated in SSMS as the SourceDirect statement.
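For the processing variant mentioned above, the create DDL in the Execute DDL Task could be swapped for a TMSL refresh script along these lines (again, "MyTabularModel" is a placeholder; "full" reprocesses everything, and other refresh types are available):

{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "MyTabularModel" }
    ]
  }
}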
I have a problem dealing with SAS Enterprise Guide, which runs on my client's server.
I do not have access to the libraries, so in order to use the datasets, the only thing we can do is store them on the local C: drive of the computer and drag them into SAS.
We cannot create libraries because the server does not read local paths.
Once you drag a table, let's call it "mydata", into SAS, the table is automatically renamed "mydata9865", with random numbers at the end, and "mydata" becomes its label.
If you right-click the table and go to properties, you can't find the name of the table, just the label.
The only way I found to check the real name of the dataset is to open the Query Builder and check the name in the code preview.
The problem is that I am dealing with tables of millions of records, and the machine I am using is very slow, so whenever I want to open the Query Builder just to check the table's name, it can take up to an hour.
I am not a SAS expert, so I am sure there is a smarter way to do this. Is it possible, for instance, to use the table by calling it with its label?
data mydata2;
set mydata;
run;
instead of
set mydata9865?
Or is there some place I can rapidly check the name of the table without going through the Query Builder?
I tried to google it but couldn't find anything; I hope someone will be able to help me!
Thank you in advance
Hover the mouse pointer over a data node to see its attributes. The data set name is the File name: value.
For example, in one project I had renamed the nodes created by two different queries to be the same (doable: yes; smart: maybe not). NOTE: a data node's Label: is not necessarily the same as its underlying data set's label metadata.
Regarding
use the table by calling it with its label?
Two nodes can have the same label, which is a situation that defeats this approach.
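As an aside (my addition, assuming the dragged tables land in the WORK library): PROC CONTENTS can list the real name of every member in a library without opening the Query Builder:

/* Directory listing of the WORK library: shows each data set's real name. */
/* NODS suppresses the per-data-set detail so only the directory prints.   */
proc contents data=work._all_ nods;
run;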
Use the COPY task to upload your data explicitly. It sounds like you're not adding your data to the project properly, so SAS automatically assigns a name, whereas it wouldn't if you explicitly imported or loaded your data.
Problem solved! I should have simply uploaded the data to the server with Tasks->Data->Upload Data Sets to Server, but I didn't know about this task, so I didn't know it was possible at all!
https://communities.sas.com/t5/SAS-Enterprise-Guide/Importing-sas-data-sets-from-C-drive-into-SAS-EG-not-possible/td-p/135184
Thank you everybody for your help!
Whenever I construct a report that uses an embedded dataset and try to use a parameter (such as @StartDate and @EndDate), I receive an error stating that I must declare the scalar variable. However, this error only comes up if I set a data source that uses the "credentials stored securely in the report server" option. If I set the data source to use "Windows integrated security," I do not receive the error.
I am at a complete loss. These reports need to be accessed by a large number of people. We have granted them "Browser" privileges through an Active Directory group in SSRS, including on the data sources.
What is the best way to proceed? Is there an easy fix?
I generally deploy with the option already set by going into the data source, choosing the 'Log on to SQL Server' section > 'Use SQL Server Authentication', and setting the user and related settings. Using a Windows user as your main user after you deploy can cause issues.
The other question would be: does this work correctly at all times in Business Intelligence Development Studio (BIDS), and just not on the server? It is very interesting that a permissions issue alone would cause a scalar error. Generally, when users have to get to the report they may still get the error; not storing the credentials merely prompts them for credentials. It would help to know more about the datasets and what they are returning, or are supposed to be returning. A Start and End are typically defined as 'DateTime' in SSRS and used in a predicate like 'WHERE thing BETWEEN @Start AND @End', with the dates chosen from a calendar by the user. If you are binding them to other datasets and there is the possibility of a user selecting multiple values, that could present an issue.
I took a look at the data source that had been set up by our DBA. It was set up as an ODBC connection. I changed it to Microsoft SQL Server, and it works now. I do not understand why, and would appreciate it if a more seasoned individual could explain.
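A likely explanation, added here as a note: ODBC data sources in SSRS use positional ? parameter markers rather than named parameters, so a dataset query written with @StartDate and @EndDate is handed to the driver as-is, and the database reports that the scalar variable was never declared. For example (dbo.Orders is a placeholder table):

-- Works against a Microsoft SQL Server data source (named parameters):
SELECT * FROM dbo.Orders
WHERE OrderDate BETWEEN @StartDate AND @EndDate;

-- An ODBC data source expects positional markers instead:
SELECT * FROM dbo.Orders
WHERE OrderDate BETWEEN ? AND ?;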
I have been working with the stored procedure sys.xp_readerrorlog for around a week now, and what I have learned is that it accepts 7 parameters to fully refine how it displays its data. Easy enough to understand.
The question I now have is: where exactly does this stored procedure get its data from? I know you can also preview the data in the SSMS Object Explorer, under Management in the SQL Server Logs folder, though my theory is that the dialog that opens when you read the logs also uses this procedure to display the data in a grid.
I am baffled. I scouted through the system databases and found nothing (no table) that looks even remotely like the output you get from this procedure:
exec sys.xp_readerrorlog 1,0,'','',null,null,N'Desc';
Can any expert tell me where the actual log data is stored, and whether it is queryable through a SELECT statement if you have admin rights?
It reads from the SQL Server error log, which is a plain text file. There is no built-in interface to the file from T-SQL; xp_readerrorlog is widely known, but it's also undocumented, so relying on it is risky, although of course you can use it if you don't mind that risk.
Using SMO you can find the file's location, but there is no special API for reading it, because it's just a text file.
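For what it's worth, the commonly cited (and, being undocumented, unguaranteed) parameter layout is sketched below, along with one way to locate the file. The SERVERPROPERTY option only exists on newer versions, so treat it as something to verify on your build:

-- Commonly cited parameter layout (undocumented, so subject to change):
--   1: log number (0 = current, 1 = first archive, ...)
--   2: log type (1 = SQL Server error log, 2 = SQL Server Agent log)
--   3, 4: search strings (returned rows must match both)
--   5, 6: start and end time filters
--   7: sort order (N'asc' or N'desc')
EXEC sys.xp_readerrorlog 0, 1, N'Login', NULL, NULL, NULL, N'asc';

-- On recent versions, SERVERPROPERTY can return the file's path:
SELECT SERVERPROPERTY('ErrorLogFileName') AS ErrorLogPath;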
I am curious to know if there is a way to tell whether a report has been printed or run. For example, the user enters an inspectionnumber, hits Apply, then clicks Print and prints the report. Can I know that the report has been printed? Is there a way to use local variables to track that, some sort of loop?
I've never tested this, but here's a theory you can try.
In your Database Expert, go to your Current Connections and use Add Command. Use this to write a SQL query that saves the usage data to a table in your data source (if your data source is read-only, just add a delimited text file as an additional data source and output your usage data to that instead); a sketch of what the command might look like follows the reference below.
The best example I have of this is http://www.scribd.com/doc/2190438/20-Secrets-of-Crystal-Reports. On page 39, you'll see a method for creating a table of contents that more or less uses this approach.
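For the logging table itself, a minimal sketch in T-SQL might look like this. All names are hypothetical, {?InspectionNumber} stands for a Crystal command parameter, and this assumes your driver allows an INSERT inside a command:

-- Hypothetical table to record each report run:
CREATE TABLE dbo.ReportPrintLog (
    LogID            int IDENTITY(1,1) PRIMARY KEY,
    ReportName       varchar(100) NOT NULL,
    InspectionNumber varchar(50)  NOT NULL,
    RunBy            varchar(128) NOT NULL DEFAULT SUSER_SNAME(),
    RunAt            datetime     NOT NULL DEFAULT GETDATE()
);

-- The command the report would execute on each run:
INSERT INTO dbo.ReportPrintLog (ReportName, InspectionNumber)
VALUES ('InspectionReport', '{?InspectionNumber}');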