SSIS Email Task

I have an SSIS package that has a failure message email task and a success message email task. I have created two variables for each task to signify which environment the package is being run in (test or prod). The variables are as follows:
EmailFailure_Prod = The package pointing to production (APP_MSR_ImportMemOutcomes) failed to execute at 11/28/2017
EmailFailure_Test = The package pointing to test (APP_MSR_ImportMemOutcomes) failed to execute at 11/28/2017
EmailSuccess_Prod = The package pointing to production (APP_MSR_ImportMemOutcomes) succeeded to execute at 11/28/2017 9:09:26 AM by username on server
EmailSuccess_Test = The package pointing to test (APP_MSR_ImportMemOutcomes) succeeded to execute at 11/28/2017 9:09:26 AM by username on server
I want to configure these variables in the package configuration so that they switch depending on which environment the package is being run in. Any help would be much appreciated.

Instead of four variables, you should have only two: EmailFailure and EmailSuccess.
Then you put the values of those variables in the .config file, with the Production message in the production .config file and the Test message in the test one.
Alternatively, if you are using SSIS 2012+ with the project deployment model, you would make two parameters and populate them per environment in SSISDB. Same strategy, newer tools.
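For the package-configuration route, a minimal sketch of what the two .dtsConfig files might contain (the variable names follow the two-variable suggestion above, the message text is taken from the question, and the rest is the standard XML configuration-file shape):
<?xml version="1.0"?>
<DTSConfiguration>
  <!-- Production.dtsConfig; the Test file carries the same entries with the Test wording -->
  <Configuration ConfiguredType="Property" Path="\Package.Variables[User::EmailFailure].Properties[Value]" ValueType="String">
    <ConfiguredValue>The package pointing to production (APP_MSR_ImportMemOutcomes) failed to execute</ConfiguredValue>
  </Configuration>
  <Configuration ConfiguredType="Property" Path="\Package.Variables[User::EmailSuccess].Properties[Value]" ValueType="String">
    <ConfiguredValue>The package pointing to production (APP_MSR_ImportMemOutcomes) succeeded to execute</ConfiguredValue>
  </Configuration>
</DTSConfiguration>
Point the package's XML configuration at this file (for example via an environment variable or a fixed path that differs per server), and each environment picks up its own wording.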


Azure batch Application package not getting copied to Working Directory of Task

I have created Azure Batch pool with Linux Machine and specified Application Package for the Pool.
My command line is
command='python $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py',
python3: can't open file '$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py':
[Errno 2] No such file or directory
When I connect to the node and look at the working directory, none of the Application Package files are present there.
How do I make sure that the files from the Application Package are available in the working directory, or how can I invoke/execute files under the Application Package from the command line?
Make sure your async operations have a proper await in place before you start using the package in your code.
Also, please share your design/pseudo-code and how you are approaching this as a design.
Further to add:
This one seems to be a pool-level package.
The error looks like the application environment variable is either used incorrectly or there is some other user-level issue. Please check out the link below, especially the section on the use of the environment variable.
This seems like a user-level issue because, if there is an error while downloading the package resource, it will be visible to you via your exception handler, or at the tool level if you are using Batch Explorer/BatchLabs, or through code-level exception handling.
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Reason/Rationale:
If the pool-level or task-level application package has an error, an error list comes back; an application package error is returned as a UserError or an AppPackageError, which will be visible in your code's exception handler.
Also, you can always RDP into your node and check the package availability; information here: https://learn.microsoft.com/en-us/azure/batch/batch-api-basics#connecting-to-compute-nodes
I once created a small sample to help people out, so that resource might help you check out the usage.
Hope this helps.
On Linux, the application package environment variable with the version string is formatted as:
AZ_BATCH_APP_PACKAGE_{0}_{1}
On Windows it is formatted as:
AZ_BATCH_APP_PACKAGE_APPLICATIONID#version
where {0} is the application ID and {1} is the version.
$AZ_BATCH_APP_PACKAGE_scriptv1_1 will take you to the root folder where the application was unzipped.
Does this "exact" path exist in that location?
tasks/XXX/get_XXXXX_data.py
You can see more information here:
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Edit: Just saw this question: "or can I invoke/execute files under Application Package from command line"
Yes you can invoke and execute files from the application package directory with the environment variable above.
If you type env on the node you will see the environment variables that have been set.
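One thing worth checking (the shell behaviour is documented; the commands below are only a sketch using the paths from the question): Batch task command lines do not run under a shell by default, so $AZ_BATCH_APP_PACKAGE_scriptv1_1 is not expanded unless you wrap the command in one, which matches the literal "$AZ_BATCH_APP_PACKAGE_..." in your error message.
# From an SSH/RDP session on the node, list the package variables the Batch agent set
env | grep AZ_BATCH_APP_PACKAGE
# Inspect the unzipped package root and confirm the relative path exists
ls "$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX"
# In the task definition, run the command through a shell so the variable expands
/bin/bash -c 'python3 "$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py"'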

Running PowerShell scripts on a Web App machine

I have an Azure web app. This web app has a QA deployment slot for pre-production testing. When I check in my code from VS, I have it set up to build and deploy to the QA deployment slot. This works great. However, a few configurations need to be updated in the QA web app so the application points to the correct service endpoints (i.e. not dev). To do this, my initial approach was to add a PS task to the Release that unzips my deployment zip, updates the configuration files, rezips them, and then allows the Release flow to deploy the updated zip. This works locally, but I'm running into file-name length issues on the server when unzipping, which I can't change.
Now I'm trying to just include my update PS scripts in my deployment package and then run the scripts AFTER the deployment has occurred. So I'm looking at the PowerShell on Target Machines task to run a PS script on the QA slot server to update configurations. However, it asks for Machines, which would be the server name of the slot server. I don't have that, and I don't know where to get it. I also don't have the path to the PS scripts once I have the server name. I dumped out the server variables and none of them help me, unless there is a cmdlet to look up environments that I'm not aware of.
System.DefaultWorkingDirectory: 'C:\a\2ed23b64d'
System.TeamFoundationServerUri: 'https://REDACTED.vsrm.visualstudio.com/DefaultCollection/'
System.TeamFoundationCollectionUri: 'https://REDACTEDvisualstudio.com/DefaultCollection/'
System.TeamProject: 'REDACTED'
System.TeamProjectId: 'REDACTED'
Release.DefinitionName: 'REDACTED'
Release.EnvironmentUri: 'vstfs:///ReleaseManagement/Environment/46'
Release.EnvironmentName: 'QA'
Release.ReleaseDescription: 'Triggered by REDACTED Build Definition 20160425.4.'
Release.ReleaseId: '31'
Release.ReleaseName: 'Release-31'
Release.ReleaseUri: 'vstfs:///ReleaseManagement/Release/31'
Release.RequestedFor: 'Matthew Mulhearn'
Release.RequestedForId: ''
Agent.HomeDirectory: 'C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agents\1.98.1'
Agent.JobName: 'Release'
Agent.MachineName: 'TASKAGENT5-0020'
Agent.Name: 'Hosted Agent'
Agent.RootDirectory: 'C:\a'
Agent.WorkingDirectory: 'C:\a\SourceRootMapping\REDACTED'
Agent.ReleaseDirectory: 'C:\a\2ed23b64d'
Anyone have any idea, or a better approach, to accomplish what I'm attempting?
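One angle that may help (a sketch only, not a verified solution): a Web App slot isn't a machine you can address by name, so the PowerShell on Target Machines task is a poor fit. The slot's Kudu/SCM site does expose a command endpoint you can call from a release step after deployment. Every name below (site, slot, script path, credentials) is hypothetical; the credentials come from the slot's publish profile.
# Hypothetical site/slot/credentials; take these from the QA slot's publish profile.
$user = '$mysite__qa'
$pass = 'publish-profile-password'
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))
# Ask Kudu to run the config-update script that was deployed with the site.
$body = @{ command = 'powershell -File update-config.ps1'; dir = 'site\wwwroot' } | ConvertTo-Json
Invoke-RestMethod -Uri 'https://mysite-qa.scm.azurewebsites.net/api/command' -Method Post -Headers @{ Authorization = "Basic $auth" } -ContentType 'application/json' -Body $body
Alternatively, if the endpoints can live in app settings rather than config files, slot-specific ("sticky") app settings in the Azure portal would avoid rewriting files after deployment altogether.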

Specify a connection string when building sqlproj

We have started using local SQL Servers (SQL 2012) for development. We have a tool that calls MSBUILD to deploy a SQL project (.sqlproj) to our local, dev and test databases.
A requirement has come up where we want to use that tool to deploy to other local databases - it's a rare thing to do but needed.
We have setup a .publish.xml file for each normal environment (dev.publish.xml, test.publish.xml, local.publish.xml, where local points to (local)\SQL2012).
We normally run:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="Local.publish.xml" "c:\workspaces\greg\...\databaseProject.sqlproj"
That works fine as it takes the connection string from the local.publish.xml file and deploys the sql project to our local database.
I'm not sure how to override the publish profile to make it point to a different database.
I've tried
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="Local.publish.xml" /p:TargetConnectionString="Data Source=SomeOtherPC\SQL2012;Integrated Security=True;Pooling=False" "c:\workspaces\greg\...\databaseProject.sqlproj"
but it still points to (local)\sql2012 instead of SomeOtherPC\SQL2012
Create a different publish profile for this and populate it with the required details (SomeOtherPC, SQL2012, etc.):
SomeOtherPC.publish.xml
And pass that as the parameter to MSBuild:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="SomeOtherPC.publish.xml" "c:\workspaces\greg\...\databaseProject.sqlproj"
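A sketch of what that profile might contain, following the usual SSDT publish-profile shape (the target database name here is hypothetical; use your project's actual name):
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Hypothetical database name -->
    <TargetDatabaseName>DatabaseProject</TargetDatabaseName>
    <TargetConnectionString>Data Source=SomeOtherPC\SQL2012;Integrated Security=True;Pooling=False</TargetConnectionString>
  </PropertyGroup>
</Project>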

Capistrano in Open Source Projects and different environments

I'm considering using Capistrano to deploy my Rails app to my server. Currently I'm using a script that does all the work for me, but Capistrano looks pretty nice and I want to give it a try.
My first problem/question: How do I use Capistrano properly in open source projects? I don't want to publish my deploy.rb, for several reasons:
It contains sensitive information about my server. I don't want to publish that. :)
It contains the config for MY server. For other people who deploy this open source project to their own servers, the configuration will differ, so publishing my configuration is pretty pointless; it's useless for anyone else.
Second problem/question: How do I manage different environments?
Background: On my server I provide two different environments for my application: the stable system, using the current stable release branch and located under www.domain.com, and an integration environment for the development team under dev.domain.com, running the master branch.
How do I tell Capistrano to deploy the stable system or the dev system?
The way I handle sensitive information (passwords etc.) in Capistrano is the same way I handle them in general: I use an APP_CONFIG hash that comes from a YAML file that isn't checked into version control. This is a classic technique that's covered e.g. in RailsCast #226, or see this StackOverflow question.
There are a few things you have to do a little differently when using this approach with Capistrano:
Normally APP_CONFIG is loaded from your config/application.rb (so it happens early enough to be usable everywhere else); but Capistrano cap tasks won't load that file. But you can just load it from config/deploy.rb too; here's the top of a contrived config/deploy.rb file using an HTTP repository that requires a username/password.
require 'bundler/capistrano'
APP_CONFIG = YAML.load_file("config/app_config.yml")
set :repo_user, APP_CONFIG['repo_user']
set :repo_password, APP_CONFIG['repo_password']
set :repository, "http://#{repo_user}:#{repo_password}@hostname/repositoryname.git/"
set :scm, :git
# ...
The config/app_config.yml file is not checked into version control (put that path in your .gitignore or similar); I normally check in a config/app_config.yml.sample that shows the parameters that need to be configured:
repo_user: 'usernamehere'
repo_password: 'passwordhere'
If you're using the APP_CONFIG for your application, it probably needs to have different values on your different deploy hosts. So have your Capistrano setup make a symlink from the shared/ directory to each release after it's checked out. You want to do this early in the deploy process, because applying migrations might need a database password. So in your config/deploy.rb put this:
after 'deploy:update_code', 'deploy:symlink_app_config'

namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
Now, for the second part of your question (about deploying to multiple hosts), you should configure separate Capistrano "stages" for each host. You put everything that's common across all stages in your config/deploy.rb file, and then you put everything that's unique to each stage into config/deploy/[stagename].rb files. You'll have a section in config/deploy.rb that defines the stages:
# Capistrano settings
require 'bundler/capistrano'
require 'capistrano/ext/multistage'
set :stages, %w(preproduction production)
set :default_stage, 'preproduction'
(You can call the stages whatever you want; the Capistrano stage name is separate from the Rails environment name, so the stage doesn't have to be called "production".) Now when you use the cap command, insert the stage name between cap and the target name, e.g.:
$ cap preproduction deploy   # deploys to the 'preproduction' environment
$ cap production deploy      # deploys to the 'production' environment
$ cap deploy                 # deploys to whatever you defined as the default
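Tying this back to the question, each stage file then carries only what differs per host. A sketch of what config/deploy/production.rb might look like for the stable system (the branch name, deploy path, and roles are assumptions to adapt):
# config/deploy/production.rb -- the stable system at www.domain.com
set :branch, 'stable'                       # hypothetical name of the stable release branch
set :deploy_to, '/var/www/www.domain.com'   # hypothetical path on the server
server 'www.domain.com', :web, :app, :db, :primary => true
A config/deploy/preproduction.rb would do the same for dev.domain.com with :branch set to 'master'.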

Deploying SSAS cube to environments

We are using BIDS 2008 locally (on our workstations) to develop our OLAP objects/cube. When promoting to Development we can deploy via BIDS. However, when a hands-off deployment is required (e.g. to UAT or Live) we generate an XMLA file. The generated XMLA file of course contains environment-specific information (e.g. server name, database name, etc.). If we would like to automate the generation of the XMLA file for deployment to each environment, is there a config-type process to parameterise these values (like appSettings in a .NET web.config, or a .dtsConfig in SSIS)?
Note we could parse the XMLA file and replace these values depending on the environment (e.g. via xmlpoke), but this is a little messy and depends on the XML path structure, so we would rather avoid that approach.
This should point you in the right direction: http://blog.kejser.org/2006/11/28/automating-build-of-analysis-services-projects/
Here's more on the deployment utility and command line switches: http://msdn.microsoft.com/en-us/library/ms162758(v=sql.105).aspx
Before using Microsoft.AnalysisServices.Deployment to generate the XMLA file to deploy to an AS instance, we need to update the files below to change the connection strings and deployment options:
project.asdatabase
project.deploymenttargets
project.configsettings
project.deploymentoptions
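A sketch of driving the deployment utility from the command line once those files are updated (the paths are hypothetical; per the MSDN page linked above, /o writes the XMLA script to a file and /d tells the utility not to connect to the target instance):
Microsoft.AnalysisServices.Deployment.exe "C:\builds\MyCube\bin\Project.asdatabase" /o:"C:\builds\MyCube\deploy-uat.xmla" /d
Running this once per environment, after swapping in that environment's project.deploymenttargets and project.configsettings, yields an environment-specific XMLA file without hand-editing the XML.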