SSAS Versioning Without Source Control

What is the best way to manage and combine different versions of SSAS solutions, without using version control?
Currently, we have a network drive where the "master" copy is stored. Individual developers work with a local copy, but we recently ran into a problem when adding changes back to the "master" copy.
Any suggestions? Microsoft appears to have source control for SSIS. SSRS is easy enough to migrate by just copy/pasting the rdl files. There seems to be no easy way to accomplish this with SSAS packages.

What version are you talking about? I just recently added an entire SSAS project into source control. There was no issue at all.
We must somehow be talking about two different things.

You could try creating a single master XMLA script that holds the entire cube definition. This script would live on the network drive.
The XMLA script can be generated using the Analysis Services deployment tool. You would then have to rely on a diff tool to manually merge the changes from each developer into the master file. This would be extremely cumbersome and error prone.
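For illustration only, here is a rough Python sketch of that compare step, assuming the master script lives on the network share and each developer keeps a local copy (all paths are placeholders). The actual merge would still have to be done by hand:

import difflib
from pathlib import Path

# Placeholder paths: the "master" XMLA on the network share and a developer's local copy.
master_path = Path(r"\\fileserver\ssas\master.xmla")
dev_path = Path(r"C:\work\ssas\dev_copy.xmla")

master_lines = master_path.read_text(encoding="utf-8").splitlines(keepends=True)
dev_lines = dev_path.read_text(encoding="utf-8").splitlines(keepends=True)

# Produce a unified diff so a reviewer can see which parts of the cube definition
# the developer changed before merging them into the master file by hand.
diff = difflib.unified_diff(master_lines, dev_lines,
                            fromfile=str(master_path), tofile=str(dev_path))
print("".join(diff))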
I would recommend just storing the project in source control. As previously mentioned, MSAS will work with any version control provider, since its source files are just XML. For best results, use a source control provider that integrates with Visual Studio.

Related

Snowflake connectivity with GIT

Is there any way we can connect Snowflake with Git for version control? With that, we could maintain versions of our merge statements and any other SQL scripts in Git.
DBeaver has git integration and is the best solution my team has found for version control with Snowflake. It's not perfect but it allows you to run your scripts against Snowflake and then push your SQL code to a git repository through the app UI or command line.
Yes! One way to do this is to store your Snowflake SQL code in a file or files with the .sql extension (e.g. filename.sql). You can add those files to a Git repo and track them in the repo accordingly.
This is an age-old question when dealing with databases: how should one go about versioning them? Unfortunately, no database really integrates directly into any VCS that I'm aware of.
My team has settled on using dbt. This essentially turns the database into a series of text files that are easily integrated with git. The short of it is that you edit your models as local text files, and then use dbt run to put these models into Snowflake itself. This is kind of nice as you can configure separate environments such as dev and prod.
Other answers help with using an IDE as a go-between for git and Snowflake. These projects could be useful also:
https://medium.com/snowflake/snowflake-vs-code-sql-tools-and-github-7eab915e10cb
use VSCode as the IDE with a useful snowflake extension
https://github.com/Snowflake-Labs/schemachange
manage schema changes as script in git, deploy them with CI/CD
https://github.com/Snowflake-Labs/sfsnowsightextensions#get-sfworksheets
the missing feature of SnowSight -- export worksheets
There is now a VSCode extension for Snowflake. I'm able to connect VSCode to our repo (Azure DevOps in my case) and to Snowflake. It's got some nice features too, like being able to easily cycle through past queries (including query results), and it gives the same level of detail as (or more than) the Snowflake UI.

tSQLt Object Organization

We are using RedGate combined with SQL Test (tSQLt). In order to unit test, we install the framework on each database.
Is there a way to use the tSQLt framework in such a way where your unit tests and framework objects can reside in one central location which can then be used by multiple databases?
We are also using RedGate's SQL Source Control with TFS as our repository to track schema changes. These changes get promoted in the following environment order: Development --> Test --> Production.
Needless to say, the addition of the framework combined with the tests themselves represents a large number of new SQL objects (tables, stored procedures, etc.) now in our databases. Ideally we would like these objects to reside only in Development and Test and avoid cluttering our production database. We could skip merging the tSQLt changes to Production, but then we would have unmerged changes sitting around in the Test environment's source control until the end of time.
Any thoughts on getting around this problem?
As you're using SQL Source Control to manage your database changes, checking in your tSQLt tests is the right thing to do. If you want to ensure that these don't get pushed to staging or production, you need to ensure that the tools you use to push the changes exclude the tSQLt tests. If you are using Redgate SQL Compare for this, use the option "Ignore tSQLt framework and tests". See the product documentation for a detailed explanation. If you are using a different tool or process, post a comment and I'll amend this answer.
There is currently no way to install tSQLt in a separate database. I have started the process of making tSQLt database agnostic, but that is basically a complete rewrite, so it will take a while.
In the meantime, you can exclude tSQLt from SQL Source Control: https://redgate.uservoice.com/forums/39019-sql-source-control/suggestions/4901910-faster-way-to-exclude-all-tsqlt-content
If you still want your tests in source control but don't want to promote them to the higher environments, that is the default behaviour in Redgate's DLM Automation Suite. You can either use one of the build server plugins (like TeamCity or TFS for build/test then Octopus Deploy for release) or do it all in PowerShell using SQL Release. https://documentation.red-gate.com/display/SR1/SQL+Release+documentation
If you have a license for Redgate's SQL Toolbelt, you might already be licensed for the Automation tools (this is a change to previous licensing); http://www.red-gate.com/products/sql-development/sql-toolbelt/#automation

Release Management to different environments (Dev/QA/Integration/Stable)

I recently joined a company as Release Engineer where a large number of development teams develop numerous services, applications, web-apps in various languages with various inter-dependencies among them.
I am trying to find a way to simplify and preferably automate releases. Currently the release team is doing the following to "release" the software:
CURRENT PROCESS OF RELEASE
1. Diff the latest revision from SCM between the QA and INTEGRATION branches.
2. Manually copy/paste "relevant" changes between those branches.
3. Copy the latest binaries to the right location (this is automated using a .cmd script).
4. Restart any services.
MY QUESTION
I am hoping to avoid steps 1. and 2. altogether (obviously), but am running into issues where differences between the environments are causing the config files to be different for different environments (e.g. QA vs. INTEGRATION). Here is a sample:
IN THE QA ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
<value>https://servicepoint.QA.domain.net/</value>
</setting>
IN THE INTEGRATION ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
<value>https://servicepoint.integration.domain.net/</value>
</setting>
If you look closely, the only difference between the two <setting> tags above is the URL in the <value> tag. This is because the QA and INTEGRATION environments are in different data centers and are ever so slightly out of sync (and growing further apart as development gets faster/better/stronger). Changes such as this, where only the URL/endpoint differs, are TO BE IGNORED during "release" (i.e. they are not "relevant" changes to merge from QA to INTEGRATION).
Even in a regular release (about once a week) I have to deal with a dozen config file changes that have to be released from QA to INTEGRATION, and I have to manually go through each config file and copy/paste the non-URL-related changes between the files. I can't simply take an entire package that the CI tool spits out from QA (or after QA), since the URLs/endpoints are different.
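Purely as an illustration of what automating steps 1 and 2 could look like, here is a rough Python sketch that masks the environment-specific values (here only "ServiceUri") before diffing, so that only the "relevant" changes show up. The file paths and the set of ignored settings are assumptions:

import difflib
import xml.etree.ElementTree as ET

# Setting names whose values are environment-specific and should be ignored in the diff.
IGNORED_SETTINGS = {"ServiceUri"}

def normalized(config_path):
    """Parse a config file and blank out the values of environment-specific settings."""
    tree = ET.parse(config_path)
    for setting in tree.iter("setting"):
        if setting.get("name") in IGNORED_SETTINGS:
            value = setting.find("value")
            if value is not None:
                value.text = "ENVIRONMENT-SPECIFIC"
    return ET.tostring(tree.getroot(), encoding="unicode").splitlines(keepends=True)

# Placeholder paths for the same config file as it exists in QA and INTEGRATION.
qa = normalized("qa/app.config")
integration = normalized("integration/app.config")

# Only the "relevant" (non-endpoint) differences remain in the output.
print("".join(difflib.unified_diff(qa, integration, fromfile="QA", tofile="INTEGRATION")))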
Since there are multiple programming languages in use, the config file example above could be from C#, C++, or Java. So I am hoping any solution would be language agnostic.
SUMMARY OF ENVIRONMENTS/PROGRAMMING LANGUAGES/OS/ETC.
Multiple programming languages - C#, C++, Java, Ruby. Management is aware of this as one of the problems, since the Release team has to be king-of-all-trades, and is addressing it.
Multiple OS - Windows 2003/2008/2012, CentOS, Red Hat, HP-UX. Management is addressing this too - starting to consolidate and limit to Windows 2012 and CentOS.
SCM - Perforce, TFS. Management is trying to move everyone to a single tool (likely TFS)
CI is being advocated, though not mandatory - Management is pushing the change through, but it is taking time.
I have given the example of QA and INTEGRATION, but in reality there are QA (managed by developers+testers), INTEGRATION (managed by my team), STABLE (releases to STABLE by my team but supported by Production Ops), and PRODUCTION (supported by Production Ops). These are the official environments; the dev and test teams have a few more unofficial ones. I would eventually want to start standardizing/consolidating these unofficial envs too, since devs+testers should not have to worry about doing this kind of stuff.
There is a lot of work being done to standardize how the binaries are being deployed using tools like DeployIT (http://www.xebialabs.com/products) which may provide some way to simplify these config changes.
The devs teams are agile and release often, but that just means more work diffing config files.
SOLUTIONS SUGGESTED BY TEAM MEMBERS:
The current mind-set is to use a load balancer and standardize names across the different environments, but I am not sure a "process" such as this is the right solution. There must be a better way that starts with how devs write configs and extends to how release environments meet dependencies.
Alternatively, some team members are working on install scripts (InstallShield / MSI) to automate find/replace of URLs/endpoints between envs. I am hoping this is not the solution, but it is doable.
If I have missed anything or should provide more information, please let me know.
Thanks
[Update]
References:
Managing complex Web.Config files between deployment environments - C# web.config specific, though a very good start.
http://www.hanselman.com/blog/ManagingMultipleConfigurationFileEnvironmentsWithPreBuildEvents.aspx - OK, though at first look this seems rather rudimentary and may break easily.
Generally the problem isn't too difficult - you need a branch for each of the environments and a CI build set up for each of them. So a merge to the QA branch would trigger a build of that code and a custom deployment to QA. Simple.
Now, managing multiple config files isn't quite so easy - unless you have one for each environment, in which case you just call them Int.config, QA.config, etc., store them all in the SCM, and pick the appropriate one in each branch's deployment script. For example, when the build for QA runs, it picks qa.config, copies it to the correct location, and renames it to the name the application expects. (Incidentally, this is the approach I tend to use, as it's very simple.)
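A minimal sketch of that per-branch deployment step, written in Python for illustration; the environment names, the configs folder, and the target path are all placeholders:

import shutil
import sys
from pathlib import Path

# The branch's deployment script is called with the environment name,
# e.g. "python deploy_config.py qa".
environment = sys.argv[1]                              # "qa", "int", "prod", ...
source = Path("configs") / f"{environment}.config"     # qa.config, int.config, ...
target = Path(r"C:\deploy\staging\app.config")         # the one name the application expects

# Copy the environment-specific file into place under the name the app actually reads.
shutil.copyfile(source, target)
print(f"Deployed {source} as {target}")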
If you have multiple configs you need to use, then it's always going to be a manual process - but you can help yourself by copying all the relevant configs to a build staging area that an admin will use to perform the deployment. It's a good first step in that the build they have in the staging directory will be the correct one for them; they just have to choose which config to use, either during installation (e.g. as an option in the installer) or by manually copying the appropriate config over.
I would not try to manage some automated way of taking a single config file in source control and rewriting it with different data in the build or pre-deploy steps. That way lies madness, and a lot of continual hassle trying to maintain the data and the tooling. Keep separate configs in place and make sure the devs know to update all of them when they make a change. (Or you can hold one config in the SCM tree and make sure they know that merging their changes must not overwrite any existing modifications - multiple configs is easier.)
I agree with #gbjbaanb. Have one config for each environment. Get your developers to write apps that read their properties (including their URLs) from config files and commit config files for each environment. Not only does this help you with deployment, but config files under revision control provides reproducibility, full transparency, and an audit trail of your environment specific settings.
Personally, I prefer to create a single deployable package that works on any environment by including all of the environment configs (even the ones you aren't using). You can then have some deployment automation that figures out which config files the apps should use and sets that up appropriately.
Thanks to #gman and #gbjbaanb for the answers (https://stackoverflow.com/a/16310735/143189, https://stackoverflow.com/a/16246598/143189), but I felt that they didn't help me solve the underlying problem that I am facing, so I am restating it to make it clear.
The code seems very aware of the environment in which it runs. How do you write environment-agnostic code?
The suggestions in the answers above are to store 1 config file for each environment (environment-config). This is possible, but any addition/deletion/edit of non-environment settings will have to be ported over to each environment-config.
After some study, I wonder if the following would work better?
Keep the config file's structure consistent/standardized, e.g. XML. Try to keep the environment-specific endpoints in this config file, but store them in a way that allows easy access to the specific individual nodes/settings (e.g. using XPath).
When deploying to a specific environment, your deployment tool should be able to parse the file (e.g. using XPath) and update the environment-specific endpoint to the value for the specific environment to which you are deploying.
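A minimal Python sketch of this idea, assuming the deployment tool keeps its own map of environment-specific endpoints (the two URLs are taken from the sample configs above; the config path and setting name are placeholders):

import xml.etree.ElementTree as ET

# Environment-specific endpoints, kept with the deployment tooling rather than in the package.
ENDPOINTS = {
    "qa": "https://servicepoint.QA.domain.net/",
    "integration": "https://servicepoint.integration.domain.net/",
}

def parameterize(config_path, environment):
    """Rewrite the environment-specific endpoint in an otherwise environment-neutral config."""
    tree = ET.parse(config_path)
    value = tree.find(".//setting[@name='ServiceUri']/value")
    value.text = ENDPOINTS[environment]
    tree.write(config_path, encoding="utf-8", xml_declaration=True)

# Called by the deployment step with the target environment, e.g.:
parameterize("package/app.config", "integration")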
This is not a unique idea; there are some existing implementations that tackle it already:
http://www.iis.net/learn/develop/windows-web-application-gallery/reference-for-the-web-application-package & http://www.iis.net/learn/publish/using-web-deploy/web-deploy-parameterization (WebDeploy)
http://docs.xebialabs.com/releases/3.9/deployit/packagingmanual.html#using-placeholders-in-ci-properties (DeployIt)
Home-spun solutions using XPath find and replace.
In short, while there are programming-language-specific solutions and programming-language-agnostic solutions, I guess the big downfall is that Release Management needs to be considered during development too, or else it will cause deployment headaches. I don't like that, since it sounds like "development should be aware of what tests will be designed". Whether there is a need AND a way to avoid this is the big question.
I'm working through the process of creating a "deployment pipeline" for a web application at the moment and am sifting my way through similar problems. Your environment sounds more complicated than ours, but I've got some thoughts.
First, read this book. I'm 2/3 of the way through it, and it's answering every question I ever had about software delivery, and many that I never thought to ask: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/ref=sr_1_1?s=books&ie=UTF8&qid=1371099379&sr=1-1
Version Control Systems are your best friend. Absolutely everything required to build a deployable package should be retrievable from your VCS.
Use a Continuous Integration server, we use TeamCity and are pretty happy with it so far.
The CI server builds packages that are totally agnostic to the eventual target environment. We still have a lot of code that "knows" about the target environments, which of course means that if we add a new environment, we have to modify all such code to make sure it will cope and then re-test it to make sure we didn't break anything in the process. I now see that this is error-prone and completely avoidable.
Tools like Visual Studio support config file transformation, which we looked at briefly but quickly realized depends on environment-specific config files being prepared alongside the code by the developers so they can be added to the package. Instead, break out any settings that are specific to a particular environment into their own config mechanism (e.g. another XML file) and have your deployment tool apply this to the package as it deploys. Keep these files in VCS, but use a separate repository so that revisions to config don't trigger new builds and cause the build number to be falsely inflated.
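As a rough sketch of such a mechanism in Python - the overrides file layout and all names are assumptions, not how any particular tool actually works:

import xml.etree.ElementTree as ET

def apply_overrides(base_config, overrides_file):
    """Overlay per-environment <setting> values onto the packaged, environment-neutral config."""
    base = ET.parse(base_config)
    overrides = ET.parse(overrides_file)
    for override in overrides.iter("setting"):
        name = override.get("name")
        target = base.find(f".//setting[@name='{name}']/value")
        if target is not None:
            target.text = override.find("value").text
    base.write(base_config, encoding="utf-8", xml_declaration=True)

# The deployment tool applies the per-environment file as it deploys the package, e.g.:
apply_overrides("package/app.config", "config-overrides/qa.xml")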
This way, your environment-specific config files only contain things that change on a per-environment basis, and only if that environment needs something different to the default. Contrary to #gbjbaanb's recommendation, we are planning to do whatever is necessary to keep the package "pure" and the environment-specific config separate, even if it requires custom scripting etc. so I guess we're heading down the path of madness. :-)
For us, Powershell, XML and Web Deploy parameterization will be instrumental.
I'm also planning to be quite aggressive about refactoring the config files so that the same information isn't repeated several times in various places.
Good luck!

What is a typical workflow to put my local MVC3 project on to a "live server"?

I develop on my local machine with VS2010 and SQL Server. Naturally, my web.config points to my local SQL Server and I can debug/development and all is well. Unfortunately, I am not entirely sure on how to go about deploying my code to a live server.
Currently, my live server consists of a virtual machine (my site is accessible from the internet). When I'm ready to put my changes on the live server I publish my app (right click on solution explorer -> publish). Then I go to the directory it publishes to and dump all the files into a network share that goes to my site on the live server. On the initial copy over, I have to manually edit the web.config so that the connection string points to the SQL Server on the live server instead of my local machine. So this is my first stumbling block. How can I easily manage development settings and "live" settings in the web.config?
Now, I also use version control (Kiln). Can I possibly tag a changeset and have it automatically deployed to my live server somehow? Let's say someone submits a bug and I fix it. I push my changeset and now Kiln has the latest version of my code with the bug fix. What's the best way to get these changes on to a live server?
I'm unable to find any documentation that covers the entire workflow, but I feel like there has got to be a better way. Surely something like this can be accomplished without having to manually edit the web.config every time I publish and pray to the computer Gods that I didn't miss something in the connection string.
It's just me so I have complete control over all of my environments, including the server and what's accessible via the internet, and anything is possible if only I knew what to do.
How can I easily manage development settings and "live" settings in the web.config?
Re: With VS 2010 web.config transformations, it is quite easy. Please take a look at this blog:
http://blogs.msdn.com/b/webdevtools/archive/2009/05/04/web-deployment-web-config-transformation.aspx
For VS 2008 or older, we used to have multiple config files based on environment. We would create Debug/Release/DevTest/UAT/PROD release configurations, and then in the post-build event we would replace the web.config with the config for that release configuration. For example, if you build the project using the "Prod" release configuration, we copy the PROD web.config to the publishing folder.
Now, I also use version control (Kiln).
Can I possibly tag a changeset and have it automatically deployed to my live server somehow? Let's say someone submits a bug and I fix it. I push my changeset and now Kiln has the latest version of my code with the bug fix. What's the best way to get these changes on to a live server?
Re: Source control and publishing to a live server are two different things. The first question you are asking here relates to how you manage multiple releases and keep control over bug fixes for each release. The way I would do it is to have a PROD branch in my source control, which will be the first release, and for every major release I will sub-branch it to have more control over e-fixes.
For the other question about how to get it to the live server, it depends on your environment. We do it differently based on how the customer environment is set up. If they have given us FTP access, we use that; otherwise we package the application into an MSI and then deploy it to UAT. Until UAT sign-off is done, we keep updating the MSI. Once sign-off is received, the MSI goes to PROD.
Hope this helps.

Deployment with CakePhp

I have a CakePhp website that is currently live. I would like to keep working on the site without impacting the deployed site.
What is the best way to keep a development version separate from the deployed version, and then merge the two when appropriate?
Currently, I am using Git for version control.
Thanks!
First thing, get to know a version control system - Subversion, Git, Bazaar, and Mercurial are some examples. They are a safety net that can save your bacon because they save EVERY change to EVERY file in your fileset.
Then, typically, I have a local development server and also a subdomain (staging.example.com) on the production server. I do my heavy development on the local development server and use SVN to archive all my site changes. Then, using a shell account on the production server, I check out the new version of the software to the staging subdomain. If it works OK there, I can update the live site with just a single SVN checkout.
I've also heard of people placing a symbolic link in the location where the site root should be (/var/www/public_html) that points to the live directory (/var/www/site_ver_01234), then setting up the new version in a parallel directory (/var/www/site_ver_23456). Finally, just recreate the symbolic link pointing to the new version's directory. The switch is instantaneous and transparent. I'm sorry I'm not clearer on this method; I read about it a while back but never tried it myself.
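A small Python sketch of that switch, assuming /var/www/public_html is already a symlink rather than a real directory (the temporary link name is an assumption):

import os

site_root = "/var/www/public_html"        # the path the web server serves (a symlink)
new_release = "/var/www/site_ver_23456"   # the freshly deployed version

# Create the new link under a temporary name, then atomically rename it over the old one,
# so the site switches versions instantly with no moment where the root is missing.
tmp_link = site_root + ".tmp"
os.symlink(new_release, tmp_link)
os.replace(tmp_link, site_root)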
I've also looked at Bazaar (another version control system), which has a plugin that automatically FTPs any changed files to a given server every time a version is checked in.
The general idea, first of all, is to use a version control system. Using this, you develop your site on your local machine, possibly with several other people, with a central repository somewhere.
When you're happy with a certain revision and would like to deploy it, you "tag" it. That means you freeze the state of that revision and separate it from the continually evolving "trunk". What that means specifically depends on your version control system.
You then take that tagged revision and copy it to the live server. You may want to copy it to a "staging server" first to test it in another environment. This copying can be as simple as overwriting all existing files using FTP, or it can involve automated deployment systems that take care of the details for you and allow you to roll back an unsuccessful deployment. If a database is involved as well, you're probably also looking at database schema migration scripts that need to be run.
Each of these steps can be done in many different ways, and you'll have to figure out what's the best approach for you. If you're not doing so already, start using a version control system such as SVN or git. Do it now! Then you might want to google or search on SO about different techniques to tag and branch using that system. For serious deployment, start with a keyword like Capistrano or one of its PHP clones.