Specify a connection string when building sqlproj - deployment

We have started using local SQL Servers (SQL 2012) for development. We have a tool that calls MSBuild to deploy a SQL project (.sqlproj) to our local, dev and test databases.
A requirement has come up to use that tool to deploy to other local databases as well - it's rarely needed, but it does come up.
We have set up a .publish.xml file for each normal environment (dev.publish.xml, test.publish.xml, local.publish.xml, where local points to (local)\SQL2012).
We normally run:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="Local.publish.xml" "c:\workspaces\greg\...\databaseProject.sqlproj"
That works fine: it takes the connection string from the local.publish.xml file and deploys the SQL project to our local database.
I'm not sure how to override the publish profile to make it point to a different database.
I've tried:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="Local.publish.xml" /p:TargetConnectionString="Data Source=SomeOtherPC\SQL2012;Integrated Security=True;Pooling=False" "c:\workspaces\greg\...\databaseProject.sqlproj"
but it still points to (local)\SQL2012 instead of SomeOtherPC\SQL2012.

Create a different publish profile for this and populate it with the required details (SomeOtherPC, SQL 2012, etc.), for example:
SomeOtherPC.publish.xml
and pass that as the parameter to MSBuild:
msbuild.exe /t:build;publish /p:SqlPublishProfilePath="SomeOtherPC.publish.xml" "c:\workspaces\greg\...\databaseProject.sqlproj"


Testing Jetty server of Jasper Reports Integration

I am trying to use JasperReports integration for the first time. I am using the included Jetty server, Oracle database XE 18c and Windows 7.
I am following the quick start guide https://github.com/daust/JasperReportsIntegration/blob/main/src/doc/github/installation-quickstart.md
I downloaded the zip folder and configured database access by adding the schema credentials to the application.properties file as follows:
[datasource:default]
type=jdbc
url=jdbc:oracle:thin:@localhost:1521:XEPDB1
username=hr
password=hr
# this parameter limits access to the integration to the specified
# list of ip addresses, e.g.:
# ipAddressesAllowed=127.0.0.1,10.10.10.10,192.168.178.31
# if the list is empty, ALL addresses are allowed.
I then deployed the jri.war file successfully and started the server successfully as well. But when I tried to test it through http://localhost:8090/, I got the following page, and I do not know whether that is normal or something is wrong...
I need to know whether the test was successful, and what is meant by "context" here?
Thanks
You deployed the jri.war to the context path /jri; this isn't an error, and it is quite normal.
Just access your webapp via http://localhost:8090/jri/ instead.
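The "context" is simply the URL path under which Jetty serves a webapp; by default it is derived from the war file name, so jri.war is served at /jri. If you wanted it served at the root context instead, a minimal sketch of a Jetty context descriptor (the file location and war path below are assumptions) would be:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<!-- webapps/jri.xml - overrides the context path derived from the war name -->
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/</Set>
  <Set name="war">/path/to/jri.war</Set>
</Configure>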

Running Powershell scripts on Web App machine

I have an Azure web app. This web app has a QA deployment slot for pre-production testing. When I check in my code from VS, I have it set up to build and deploy to the QA deployment slot. This works great. However, a few configurations need to be updated in the QA web app so the application points to the correct service endpoints (i.e. not dev). To do this, my initial approach was to add a PS task to the Release that unzips my deployment zip, updates the configuration files, rezips them, and then lets the Release flow deploy the updated zip. This works locally, but I'm running into file-name length issues on the server when unzipping, which I can't change.
Now I'm trying to just include my update PS scripts in my deployment package, and then run the scripts AFTER the deployment has occurred. So I'm looking at the PowerShell on Target Machines task to run a PS script on the QA slot server to update configurations. However, it asks for Machines, which would be the server name of the slot server. I don't have that, and I don't know where to get it. I also don't have the path to the PS scripts once I do have the server name. I dumped out the server variables and none of them help me, unless there is a cmdlet to look up environments that I'm not aware of.
System.DefaultWorkingDirectory: 'C:\a\2ed23b64d'
System.TeamFoundationServerUri: 'https://REDACTED.vsrm.visualstudio.com/DefaultCollection/'
System.TeamFoundationCollectionUri: 'https://REDACTEDvisualstudio.com/DefaultCollection/'
System.TeamProject: 'REDACTED'
System.TeamProjectId: 'REDACTED'
Release.DefinitionName: 'REDACTED'
Release.EnvironmentUri: 'vstfs:///ReleaseManagement/Environment/46'
Release.EnvironmentName: 'QA'
Release.ReleaseDescription: 'Triggered by REDACTED Build Definition 20160425.4.'
Release.ReleaseId: '31'
Release.ReleaseName: 'Release-31'
Release.ReleaseUri: 'vstfs:///ReleaseManagement/Release/31'
Release.RequestedFor: 'Matthew Mulhearn'
Release.RequestedForId: ''
Agent.HomeDirectory: 'C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\agents\1.98.1'
Agent.JobName: 'Release'
Agent.MachineName: 'TASKAGENT5-0020'
Agent.Name: 'Hosted Agent'
Agent.RootDirectory: 'C:\a'
Agent.WorkingDirectory: 'C:\a\SourceRootMapping\REDACTED'
Agent.ReleaseDirectory: 'C:\a\2ed23b64d'
Anyone have any idea, or a better approach, to accomplish what I'm attempting?

capistrano (v3) deploys the same code on all roles

If I understand correctly, the standard git deploy implementation in Capistrano v3 deploys the same repository to all roles. I have a more complex app that has several types of servers, and each type has its own code base with its own repository. My database server, for example, does not need any code deployed at all.
How do I tackle such a problem in capistrano v3?
Should I write my own deployment tasks for each of the roles?
How do I tackle such a problem in capistrano v3?
All servers get the code, as in certain environments the code is needed to perform some actions. For example in a typical setup the web server needs your static assets, the app server needs your code to serve the app, and the db server needs your code to run migrations.
If that's not true in your environment and you don't want the code on the servers in some roles, you could easily send a pull request to add the no_release feature from Cap2 back into Cap3.
You can of course take the .rake files out of the gem, load them in your Capfile, and modify them for your own needs; that is a perfectly valid way to use the tool.
The general approach is that if you don't need code on your DB server, for example, why is it listed in your deployment file?
I can confirm you can use no_release: true to disable a server from deploying the repository code.
I needed to do this so I could specifically run a restart task for a different server.
Be sure to give your server a role so that you can target it. There is a handy function called release_roles() you can use to target servers that have your repository code.
Then you can separate any tasks (like my restart) to be independent from the deploy procedure.
For example:
server '10.10.10.10', port: 22, user: 'deploy', roles: %w{web app db assets}
server '10.10.10.20', port: 22, user: 'deploy', roles: %w{frontend}, no_release: true
namespace :nginx do
  desc 'Reloading PHP will clear OpCache. Remove Nginx cache files to force regeneration.'
  task :reload do
    on roles(:frontend) do
      execute "sudo /usr/sbin/service php7.1-fpm reload"
      execute "sudo /usr/bin/find /var/run/nginx-cache -type f -delete"
    end
  end
end

after 'deploy:finished', 'nginx:reload'
after 'deploy:rollback', 'nginx:reload'
# Example of a task for release_roles() only
namespace :composer do
  desc 'Update composer'
  task :update do
    on release_roles(:all) do
      execute "cd #{release_path} && composer update"
    end
  end
end

before 'deploy:publishing', 'composer:update'
I can think of many scenarios where this would come in handy.
FYI, this link has more useful examples:
https://capistranorb.com/documentation/advanced-features/property-filtering/

How to handle EF code-first migrations from my local machine when deploying to Azure?

I finally figured out how to get web.config transformations working, so that locally I have one connection (the one in my default web.config), and then when I publish to Azure, the "debug" transformation is applied so that an Azure-SQL database connection string is used.
That much is working, but now I'm running into a problem with database migrations.
In my Configuration:
protected override void Seed(MG.Context.MentorContext context)
{
    System.Data.Entity.Database.SetInitializer(
        new MigrateDatabaseToLatestVersion<MentorContext, Configuration>());

    if (!WebSecurity.Initialized)
        WebSecurity.InitializeDatabaseConnection("DefaultConnection",
            "User", "UserId", "Username", autoCreateTables: true);
}
Now, when I'm running locally and want to update my local database, I open up Package Manager Console and type in 'update-database' and everything works wonderfully.
Sometimes I want to update the remote Azure-SQL database though - so in the past I've done this:
Update-Database -ConnectionString "azure connection string here" -verbose
which was working when I was manually updating my local web.config. Now that I'm using the above transformations, even though I specify a connectionString, DefaultConnection in my Seed method resolves to the un-transformed connection string (my local db), so the Membership tables never get created on the Azure database.
This can be solved by manually updating the default web.config, but that defeats the purpose of using these transformations.
How can I have these transformations applied so that the Seed method of my EF migrations uses the Azure connection strings - OR - how can I tell update-database to use the Azure connection string?
I'm trying to avoid manually swapping the connection strings if I can.
You mention a Web.config and "publishing" to Azure; are you using Azure Web Sites?
If so, look at this article. In short, if you configure a connection string on the K/V store of Azure Web Sites with the same name as your connection string, the value you set on Azure will automatically take precedence:
Connection strings work in a similar fashion, with a small additional requirement. Remember from earlier that there is a connection string called “example-config_db” that has been associated with the website. If the website’s web.config file references the same connection string in the connectionStrings configuration section, then Windows Azure Web Sites will automatically update the connection string at runtime using the value shown in the portal.
This should ensure that your Seed method attempts to connect to the right database.
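As an illustration of the name matching (the local connection string value below is just a placeholder, not taken from your project), the web.config only needs to declare a connection string with the same name as the one configured in the Azure portal:
<connectionStrings>
  <!-- "DefaultConnection" must match the name configured in the Azure portal; -->
  <!-- on Azure the value below is replaced at runtime with the portal value. -->
  <add name="DefaultConnection"
       connectionString="Data Source=(LocalDb)\v11.0;Initial Catalog=Mentor;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>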

Deploying SSAS cube to environments

We are using BIDS 2008 locally (on our workstations) to develop our OLAP objects/cube. Come time to promote to Development, we can deploy via BIDS. However, when a hands-off deployment is required (e.g. to UAT or Live) we generate an XMLA file. The generated XMLA file of course contains environment-specific information (e.g. server name, database name, etc.). If we would like to automate the generation of the XMLA file for deployment to each environment, is there a config-type process to parameterise these values (like .NET : web.config : appSettings, or SSIS : dtsConfig)?
Note we could parse the XMLA file and replace these values depending on the environment (e.g. via xmlpoke), but this is a little messy and depends on the XML path structure, so we would rather avoid this approach.
This should point you in the right direction: http://blog.kejser.org/2006/11/28/automating-build-of-analysis-services-projects/
Here's more on the deployment utility and command line switches: http://msdn.microsoft.com/en-us/library/ms162758(v=sql.105).aspx
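For example, a hands-off run of the Analysis Services deployment utility could look roughly like this (the install and project paths are assumptions for a SQL 2008 setup; /o writes the generated XMLA script to a file and /d skips connecting to the target server):
"C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" "C:\build\MyOlapProject\bin\MyOlapProject.asdatabase" /o:"C:\build\MyOlapProject.xmla" /d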
Before using Microsoft.AnalysisServices.Deployment to generate the XMLA file to deploy to an AS instance, we need to update the files below to change the connection strings and deployment options:
project.asdatabase
project.deploymenttargets
project.configsettings
project.deploymentoptions
regards,