Using SQL Server 2008 R2 and BIDS to create an SSIS package.
I'm trying to use a variable to define the length of one of my script component outputs in my data flow.
Obviously this doesn't work. I'm wondering if there is a workaround so I don't have to hard-code that length.
No, you can't use variables there. Normally you will know the length of the columns (from business requirements, the database schema, etc.). Otherwise, set a size that is safely large enough.
We are trying to create an SSAS tabular model for 60-100 customers.
Creating a single model and processing all customers' data is time consuming (until the data refresh is finished, each and every customer has to wait for the latest data - we update every 15 minutes).
Creating multiple tabular models, on the other hand, is easy to reprocess and troubleshoot, but difficult to maintain and deploy changes to. If I need to add new measures or tables, I would like to apply them to all the models.
I was wondering if anyone can suggest the best way to deploy changes/additions across the different tabular models.
If you've worked with SSIS, it can be used to deploy across multiple servers. An overview of this is below. What this will do is take a list of server names that you supply, iterate through them, and execute the DDL for the updated tabular model against each one. The same method can also be used for cube processing, with the create DDL replaced by a processing script. If the model is being deployed to a server for the first time, ensure that it's processed before it's queried or used by any client tools, and make sure the processing of changed objects is handled accordingly as well.
While connected to SSAS in SSMS, right-click the model database, select Script > Script Database As > Create or Replace To, then choose where to output the script. Note that for security purposes this will not include passwords, and these will need to be handled accordingly.
Create an SSIS package. In the package, create an Analysis Services connection manager. This can be set to a server where the tabular database currently exists.
Create a String variable and leave it blank. This can be called DeployServerName. Also create an Object variable, which can be called ServerList. On the SSAS connection manager, open the Properties window (press F4), then select the Expressions ellipsis. In the window that comes up, choose the ServerName property and set the DeployServerName variable as the expression (i.e. @[User::DeployServerName]). This will allow the server name to change to each of the deployment servers.
Add an Execute SQL Task to the control flow. This is where you will get the server names to deploy to. If they're stored in a master/lookup table, just select the column holding the server names as the SQL statement (a sketch of such a query follows the example below). You can also add the destination server names individually as plain text with UNIONs.
Example
SELECT 'Server1' AS DestServer
UNION
SELECT 'Server2' AS DestServer
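If the server names are kept in a lookup table instead, the statement can be a simple select. The table and column names below (dbo.DeploymentServers, ServerName, IsActive) are hypothetical and only illustrate the idea:
SELECT ServerName AS DestServer
FROM dbo.DeploymentServers
WHERE IsActive = 1;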
On the Execute SQL Task, set the ResultSet property to Full Result Set. Then on the Result Set pane, enter 0 for the Result Name and the object variable created earlier (ServerList) for the Variable Name field.
Next create a Foreach Loop Container after the Execute SQL Task and connect the two. Use the Foreach ADO Enumerator as the enumerator type and select the object variable (ServerList) as the ADO Object Source Variable. On the Variable Mappings pane, map the string variable (DeployServerName) to Index 0.
Inside the Foreach loop add an Analysis Services Execute DDL Task. Use the SSAS Connection manager you created as the connection, Direct Input as the SourceType, and enter the script generated in SSMS as the SourceDirect statement.
Scenario: A computed property needs to be available for RAW methods. The IsComputed property set in the model will not work, as its value will not be available to RAW methods.
Attempted Solution: Create a computed column directly on the SQL table, as opposed to setting the IsComputed property in the model, and specify that CodeFluent Entities should not overwrite the computed column. I would then expect the BOM to read the computed SQL field no differently than if it were a normal database field.
Problem: I can't figure out how to prevent CodeFluent Entities from overwriting the computed column. I attempted to use the production flags as well as setting produce="false" for the property in the .cfp. Neither worked.
Question: Is it possible to prevent CodeFluent Entities from overwriting my computed column, and if so, how?
The solution you're looking for is here.
You can execute whatever custom T-SQL scripts you like; the only requirement is to give the script a specific name so the SQL Producer knows when to execute it.
For example, if you want your custom script to execute after the tables are generated, name your script
after_[ProjectName]_tables.
Save your custom T-SQL file alongside the CodeFluent-generated files and build the project.
In my specific case, I had to enable a full-text index on one of my table columns, so I wrote the SQL script for that functionality and saved it as `after_[ProjectName]_relations_add` (a sketch of what such a script might contain is below).
Here's how they look in my file directory (screenshot of the generated files with the custom script alongside them).
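For reference, a minimal sketch of what that full-text script might contain; the catalog, table, column, and key index names here are hypothetical:
CREATE FULLTEXT CATALOG MyFullTextCatalog AS DEFAULT;
GO
-- the KEY INDEX must be a unique, single-column index on the table (typically the primary key)
CREATE FULLTEXT INDEX ON dbo.Document (DocumentBody)
    KEY INDEX PK_Document
    ON MyFullTextCatalog;
GO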
Alternate Solution: An alternative is to execute the following T-SQL script after the SQL Producer finishes generating.
ALTER TABLE PunchCard DROP COLUMN PunchCard_CompanyCodeCalculated
GO
ALTER TABLE PunchCard
ADD PunchCard_CompanyCodeCalculated AS CASE
WHEN PunchCard_CompanyCodeAdjusted IS NOT NULL THEN PunchCard_CompanyCodeAdjusted
ELSE PunchCard_CompanyCode
END
GO
Additional Configuration Needed to Make the Solution Work: In order for this solution to work, one must also configure the BOM so that it does not attempt to save the data associated with the computed column. This can be done in the model using the advanced properties: in my case I selected the CompanyCodeCalculated property, went to the advanced settings, and set the Save setting to False.
Question: Somewhere in the Knowledge Center there is a passing reference on how to automate the execution of SQL scripts after the SQL Producer finishes, but I cannot find it. Does anybody know how this is done?
Post Usage Comments: Just wanted to let people know I implemented this approach and am so far happy with the results.
Apologies here as this is my first stackoverflow question.
What I'm trying to do is edit the location of a shared dataset within an RDL.
That is to say, I'm using PowerShell to deploy reports to a report server from my local hard drive. Unfortunately, some of these reports use shared datasets, and the location that the RDL references is different from the actual location of the shared datasets. The shared dataset names are the same, though.
So, is there a way that when I'm uploading these reports I can loop through them and change the shared dataset reference so that it points to the actual location? For example, right now the RDL references a shared dataset as "Employee", but I would like, using PowerShell, to change it to /IT/Sales/Datasets/Employee. Thank you very much in advance.
You can use the ReportingService2010 web service for this, which you can create using New-WebServiceProxy. Beware the -Namespace parameter, because you'll need to instantiate other objects and will run into this problem.
The gist of it is:
# get the data source references currently set on the deployed report
$dataSources = $proxy.GetItemDataSources("/path/to/report")
# build a reference pointing at the shared data source on the server
$dsRef = New-Object -TypeName ($proxy.GetType().Namespace + ".DataSourceReference")
$dsRef.Reference = "/path/to/datasource"
$dataSources[0].Item = $dsRef
# write the updated references back to the report
$proxy.SetItemDataSources("/path/to/report", $dataSources)
You'll probably want to provide a mapping from data source names to server paths and then look up the reference based on $dataSources[0].Name. Alternatively, you can interrogate the report XML to find the data sources being used and construct a new DataSource object, rather than querying the existing ones.
I've assumed SSRS 2008 R2 or higher - if this is for 2005 you'll need to adapt it to use the ReportingService2005 web service.
Tableau is an excellent tool for visualizing data. However, it is designed to be the final stop in a data (ETL) pipeline.
My Tableau workbook uses a bunch of Table Calcs to generate a list of "recommended orders". Rather than view these, I want to automate and execute them. This would make Tableau the engine of a quasi-ML process.
In other words, I would like to make Tableau a part of my ETL pipeline and send data to another tier. How can I write a back-end program that executes my Tableau workbook and receives a results dataset?
See the end of this article for example data I want to automate:
http://robm26.blogspot.com/2015/10/keep-your-factory-humming-with-tableau.html
Any ideas?
You're not going to like the answer I'm going to give you -- "Don't do this".
Tableau isn't meant to be a task in a larger ETL pipeline, and the reason you're having problems making it behave the way you want is that it's not meant to be used that way.
Above and beyond the fact that you've already figured out how to get the result you want in Tableau ("the work is done"), Tableau isn't offering you any real value in the scenario you're describing. Use a tool (like Alteryx) that is genuinely purpose-built for this sort of work.
The above answer is correct that tabcmd is the way to pull it out. We use a function in python to generate the tabcmd requests so that they can be batched.
import subprocess

# flag controlling whether the commands are actually executed or just printed
run_tabcmd = 'yes'

def runTabCmd(cmd):
    # run a tableau tabcmd command and display its output
    print(cmd)
    if run_tabcmd == 'yes':
        p = subprocess.Popen(
            cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        for line in p.stdout.readlines():
            print(line.decode().rstrip())
You probably already knew that, but for us it was a way to completely automate pulling the data and loading it into another Python package like scikit-learn for a streamlined ML solution.
I'm editing this answer to agree with Russell's answer. Tableau is not an ETL tool and should not be used as such. If you absolutely have to do something, you can use what I provided. Otherwise, the best practice is to use a tool designed for the job.
You can easily use tabcmd to get the results of a view in CSV, which can be used later in your ETL process. If you need to automate it, you can write a script and run it with a cron job. I myself have a few views that are exported to CSV and used later in my ETL stream to feed our CRM (one way to load such an export is sketched below).
Just remember to create the view exactly as you want it to be exported to CSV - usually including the order of the fields. Another tip is that I don't let it use the default "Measure Names" and "Measure Values"; to make sure everything is good in my CSV, I add the fields manually to the rows/columns shelves.
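As an illustration of the "used later in my ETL stream" step, here is a minimal T-SQL sketch that loads a tabcmd CSV export into a staging table. The table name, file path, and the assumption that the target is SQL Server are all hypothetical; the column layout must match however the view is laid out:
BULK INSERT dbo.StagingRecommendedOrders
FROM 'C:\exports\recommended_orders.csv'
WITH (
    FIRSTROW = 2,            -- skip the header row in the export
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);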
We are in the process of migrating our reports from Crystal Reports to SSRS. In Crystal Reports we use variables to dynamically generate our filenames so when the report gets sent out via email, the file has the report name and execution date. (e.g. MonthlyReport09-07-2012.xls).
Is this possible in SSRS? I don't see any straightforward approach to using variables in the filename when subscribing to a report. This could prove troublesome when sending multiple reports with the same filename to the same person because it would be difficult to discern which report is which.
Any help is greatly appreciated. Thank you SO.
There is no built-in feature in SSRS for this as such, but there is a workaround. You have two options.
Option 1:
Instead of emailing it directly, first dump the file to a file share location, which can be something like \\machine-name\ExportReports\ReportName\, then create a Windows job which renames the file to the format you want and emails it in the next step.
Option 2:
Refer to this blog; what you want starts from the section "Generate a PDF output file programmatically". You can use this in an assembly, then have some scheduling mechanism which picks up the schedule, calls the DLL to generate the report, and emails it.
Use @timestamp in the name of the file and it will be translated at run time.
You cannot specify the report filename in a standard subscription in Reporting Services.
If you have Enterprise edition (or SQL 2012 Business Intelligence edition), you can use the Data-Driven Subscriptions feature, which allows you to specify the report filename (and other properties) based on data retrieved from a table (a sketch of such a driving query is below).
If you have Standard edition, then your options are either of the ones suggested by Bhupendra, or you could look at scripting the report generation using the "rs.exe" utility supplied with Reporting Services and use Database Mail and SQL Server Agent to handle the emailing and scheduling.
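For the data-driven subscription route, the subscription runs a query you supply and maps the returned columns to delivery settings (recipient, filename for file share delivery, and so on). A minimal sketch of such a query is below; the table and column names are hypothetical, and the date format simply mirrors the MonthlyReport09-07-2012 example from the question:
SELECT
    RecipientEmail,
    'MonthlyReport' + CONVERT(VARCHAR(10), GETDATE(), 110) AS FileName
FROM dbo.ReportRecipients;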
This post looks pretty old, but better late than never...
There are some tools on the market which can run SSRS reports: CRD, R-Tag and RemiWare.
These are desktop tools, but I guess you are not looking to replace SSRS, just to extend it.
I am not sure about CRD and RemiWare, but R-Tag supports data-driven reports and dynamic names. It can also be used with a Standard license.
I was able to automate the emailing and change the file name by looping through a table of accounts, invoices and emails, setting the parameters, and renaming the report name and path name in the Catalog before each execution. After each execution I waited 2 seconds and then went on to the next loop. At the end I set the path and name back to the original name. It performed well.
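The answer doesn't include the actual script, but a rough T-SQL sketch of that kind of loop might look like the following. The driving table, the report ItemID, and the use of ReportServer.dbo.AddEvent to fire the subscription are my assumptions for illustration, not the poster's actual code:
DECLARE @OriginalName NVARCHAR(425), @OriginalPath NVARCHAR(425);
SELECT @OriginalName = [Name], @OriginalPath = [Path]
FROM ReportServer.dbo.[Catalog]
WHERE [ItemID] = 'YOUR-REPORT-ITEMID';                -- hypothetical report

DECLARE @Account NVARCHAR(50);
DECLARE accountCursor CURSOR FOR
    SELECT AccountNumber FROM dbo.AccountsToEmail;    -- hypothetical driving table

OPEN accountCursor;
FETCH NEXT FROM accountCursor INTO @Account;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- rename the report so the delivered file carries the account number
    UPDATE ReportServer.dbo.[Catalog]
    SET [Name] = @OriginalName + '_' + @Account,
        [Path] = @OriginalPath + '_' + @Account
    WHERE [ItemID] = 'YOUR-REPORT-ITEMID';

    -- fire the subscription, then give it a moment before the next iteration
    EXEC ReportServer.dbo.AddEvent @EventType = 'TimedSubscription',
                                   @EventData = 'YOUR-SUBSCRIPTION-ID';
    WAITFOR DELAY '00:00:02';

    FETCH NEXT FROM accountCursor INTO @Account;
END
CLOSE accountCursor;
DEALLOCATE accountCursor;

-- put the original name and path back
UPDATE ReportServer.dbo.[Catalog]
SET [Name] = @OriginalName, [Path] = @OriginalPath
WHERE [ItemID] = 'YOUR-REPORT-ITEMID';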
@timestamp works for Windows File Share delivery, as answered by Chris.
For email deliveries, you could use:
@ReportName - specifies the name of the report.
@ExecutionTime - specifies when the report was executed.
For more details, see the MS Docs.
You can do this by changing the filename directly in the table [ReportServer].[dbo].[Catalog].
Make sure to add a forward slash to the Path.
For example, I created a SQL job that runs every night before the report and adds the date to the filename.
Sample job (replace the [ItemID] with the one for your report):
DECLARE @PREFIX VARCHAR(25) = 'REPORT_NAME_';
DECLARE @SUFFIX CHAR(8) = CONVERT(CHAR(8), GETDATE(), 112);   -- today's date as yyyymmdd
DECLARE @VARNAME VARCHAR(33) = CONCAT(@PREFIX, @SUFFIX);
DECLARE @VARPATH VARCHAR(34) = CONCAT('/', @PREFIX, @SUFFIX);

UPDATE [ReportServer].[dbo].[Catalog]
SET
    [Name] = @VARNAME,
    [Path] = @VARPATH
WHERE [ItemID] = '63D051EE-3139-4F50-ADBB-1C944F3D5D47';