How to implement a limited feature rollout (language agnostic) to your users? - deployment

I would like to know common or best practices for rolling out a new website feature to a select group of the user base.
The selection could, for example, be based solely on a percentage of your overall user base (say 10%). The rollout should be configurable and support any number of features. It would also be useful to associate rollouts with specific user roles or privileges (ACL).
So, in essence, what is an architecture that would scale reasonably well?
As for the language agnostic portion, you can either provide pseudo-code, general concepts or ideas, or snippets from your preferred language to get your point across.
Links to any examples or tutorials are welcome.

My recommendation would be that, for the people who are getting the new feature, the site they are on should be as close as possible to the site that everyone will be on once the feature is public.
In other words, I wouldn't have, for example, conditional logic on a page to determine whether a button should be visible or not, if that condition would only exist during this beta period. Rather, I would determine at login time whether this person is a beta participant or not (that determination might be random, might reference their role, etc). If they are a participant, they would be redirected to a separately deployed site that is deployed from a version control branch.
Once the rollout is complete, the branch is merged in and everyone sees the new feature.
This way, you don't have to worry that the codebase the public beta users are seeing is different (in some way that could break something) from the eventual release codebase.
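To make that concrete, here is a minimal sketch in Python of the login-time check (the 10% threshold, role names, and URLs are all invented for illustration; hashing the user id keeps the bucketing deterministic, so a user doesn't flip between sites on each login):
import hashlib

BETA_PERCENTAGE = 10                   # assumed rollout size
BETA_ROLES = {"staff", "beta_tester"}  # roles that always participate (assumption)

def is_beta_participant(user_id, roles):
    """Decide at login time whether this user belongs to the beta group."""
    if BETA_ROLES & set(roles):
        return True
    # Hash the user id so the same user always lands in the same bucket.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < BETA_PERCENTAGE

def site_for(user_id, roles):
    # Participants are redirected to the separately deployed branch build.
    if is_beta_participant(user_id, roles):
        return "https://beta.example.com"
    return "https://www.example.com"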

At my last job we accomplished this using the load balancer and a current revision cookie.
The load balancer set a cookie identifying the revision number of the instance that the user was using. If that cookie was already present, the load balancer would just send that request to a running instance with the corresponding revision. When we deployed a new revision, the load balancer continued to send traffic with an existing revision cookie to their original revision until that revision was no longer running or the user closed their browser. New traffic would get sent to the newly deployed revision. This allowed us to test out changes for a bit while our existing users were continuing to run on the old revision. We could also manually set that cookie to test out the new rev internally on the production environment before turning new traffic onto it. When we were comfortable that the new revision had no major problems we could bring down the old revision and all traffic would start going to the latest revision.
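As a rough illustration, the routing rule could be expressed like this (a Python pseudo-load-balancer; the cookie name, revision ids, and backend addresses are made up, and a real load balancer would express this in its own configuration):
# Revisions currently running, mapped to a backend instance (illustrative).
RUNNING = {
    "r41": "10.0.0.5:8080",
    "r42": "10.0.0.7:8080",
}
LATEST = "r42"

def route(cookies):
    """Pick a backend, pinning users to the revision named in their cookie."""
    rev = cookies.get("revision")
    if rev in RUNNING:
        # Sticky: existing sessions stay on the revision they started on.
        return rev, RUNNING[rev]
    # New traffic, or traffic for a retired revision, goes to the newest
    # deployment; the response would set the cookie to LATEST.
    return LATEST, RUNNING[LATEST]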
That setup does not support backwards-incompatible database changes, though. There's pretty much no way to have part of your users on one DB schema and part on another, take writes to both, and then somehow merge those writes together after you've decided the new rev is OK. It's possible in principle, but it depends entirely on what the schema changes are and how they affect your app, so you can't do it in a reliable, agnostic way at deployment time. For us, we generally just tried not to make backwards-incompatible schema changes. If we really did need to, we'd postpone the destructive part (dropping a column or table) to a later revision, so we could migrate the schema and have both versions running with no adverse effect on current users.
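That "postpone the destructive part" idea is often described as an expand/contract migration. A sketch of the two phases (the table and column names are invented; conn is assumed to be a DB-API connection such as sqlite3's):
# Phase 1, shipped with revision N ("expand"): add the new column and
# backfill it, while revision N-1 keeps reading/writing the old columns.
EXPAND = [
    "ALTER TABLE users ADD COLUMN full_name TEXT",
    "UPDATE users SET full_name = first_name || ' ' || last_name",
]

# Phase 2, shipped with revision N+1 once N-1 is retired ("contract"):
# drop the columns nothing reads any more.
CONTRACT = [
    "ALTER TABLE users DROP COLUMN first_name",
    "ALTER TABLE users DROP COLUMN last_name",
]

def apply_phase(conn, statements):
    with conn:  # one transaction per phase
        for sql in statements:
            conn.execute(sql)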

For a random selection process, I like the concept of asking each customer on a successful login whether they want to participate in beta testing; once you reach the required number of participants, you stop asking. In the database I store which server to redirect each user to, and a standard script moves each user to the correct location upon login.
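In Python-flavoured pseudo-code, that opt-in flow might look like the following (the beta_users table, the 500-user quota, and the two helper stubs are all placeholders for whatever your framework provides):
BETA_QUOTA = 500  # stop recruiting once this many users have opted in

def redirect(host):               # stand-in for your framework's redirect
    return "redirect:" + host

def ask_user_to_join_beta(user):  # stand-in for the opt-in prompt UI
    return True

def handle_login(db, user):
    row = db.execute("SELECT server FROM beta_users WHERE user_id = ?",
                     (user.id,)).fetchone()
    if row:
        return redirect(row[0])  # already a participant: send to their beta server
    enrolled = db.execute("SELECT COUNT(*) FROM beta_users").fetchone()[0]
    if enrolled < BETA_QUOTA and ask_user_to_join_beta(user):
        db.execute("INSERT INTO beta_users (user_id, server) VALUES (?, ?)",
                   (user.id, "beta01.example.com"))
        return redirect("beta01.example.com")
    return redirect("www.example.com")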
We design our app development months in advance and avoid changes to the existing schema, for obvious reasons. Of course this is not always possible, so when we do have such a change we fully document it when it is written and plan the migration for the field as early as possible. This way we have a battle plan of what changes are being made and can put the best solution in place for us, though that solution changes depending on the circumstances.
We always run multiple environments: generally production, development, and beta. This means (1) we do not mess with production services that generate revenue, and (2) we don't have people breaking code and pulling the service offline while optimizing.
Development uses Git for version control, and users never see it, as we upload all sorts of weird and wonderful experiments to play with. It also uses its own database rather than the live data.
With beta, we generally do migrate specific user data, but recently we have had a better experience with duplicating the entire database and planning a specific start date for the beta. This lets some users opt out of beta and others opt in, with minimal changes required to support the option. We generally migrate new data between the two databases once per day; new opt-ins and opt-outs only take effect once the data has been migrated to the other platform.
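A sketch of that once-a-day migration job (Python; it assumes every synced table has a created_at column and that primary keys don't collide between the two databases, which is an invented simplification):
from datetime import datetime, timedelta

def copy_new_rows(source, target, table, since):
    """Copy rows created since the last run from one database to the other."""
    rows = source.execute(
        "SELECT * FROM %s WHERE created_at > ?" % table, (since,)).fetchall()
    for row in rows:
        placeholders = ",".join("?" * len(row))
        target.execute(
            "INSERT OR IGNORE INTO %s VALUES (%s)" % (table, placeholders), row)
    target.commit()

def nightly_sync(prod_db, beta_db, tables):
    since = datetime.utcnow() - timedelta(days=1)
    for table in tables:
        copy_new_rows(prod_db, beta_db, table, since)
        # opt-ins and opt-outs only take effect once this pass has run
        copy_new_rows(beta_db, prod_db, table, since)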
We have also had success, on a small scale, using the existing production database to beta test new functions that operated out of their own tables, so depending on what you're doing data-wise, using the same live database could be a good option.
I hope this is useful for you... good luck with your testing mate.

Google Website Optimizer seems like exactly what you are looking for.

Related

How to implement continuous migration for large website?

I am working on a website of 3,000+ pages that is updated on a daily basis. It's already built on an open source CMS. However, we cannot simply continue to apply hot fixes on a regular basis; we need to replace the entire system, and I anticipate needing to do so again on a 1-2 year basis. We don't have the staff to work on a replacement system while the other is being worked on, as that results in duplicate effort. We also cannot have a "code freeze" while we work on the new site.
So, this amounts to changing the tire while driving. Or fixing the wings while flying. Or all sorts of analogies.
This brings me to a concept called "continuous migration." I read this article here: https://www.acquia.com/blog/dont-wait-migrate-drupal-continuous-migration
The writer's suggestion is to use a CDN like Fastly. The idea is that a CDN allows you to switch between a legacy system and a new system on a URL basis. In theory, this sounds like it would work. The article claims that you can do this with Varnish, but that Fastly makes the job easier. I don't work much with Varnish, so I can't really verify those claims.
I also don't know if this is a good idea or whether there are better alternatives. I looked at Fastly's pricing scheme, and I simply cannot translate it into a specific price point. I don't understand these cryptic cloud-service pricing plans; they don't make sense to me. I also don't know what kind of bandwidth the website uses, as another agency manages the website's servers.
Can someone help me understand whether using an online CDN would be better than using something like Varnish? Are there free or cheaper solutions? Can someone tell me what this amounts to, approximately, on a monthly or annual basis? Are there other, better ways to roll out a new website on a phased basis for a large site?
Thanks!
I don't think I have exact answers to your question, but maybe my answer helps a little bit.
I don't think the CDN itself gives you an advantage; the point is simply that you have more than one system.
Changes to the code
In professional environments I'm used to having three different CMS installations. The first is the development system, usually on my PC. That system is used to develop extensions, fix bugs and so on, supported by unit tests. The code is committed to a revision control system (like SVN, CVS or Git). A continuous integration system checks the commits to the RCS. When a feature is implemented (or some bugs are fixed), a named tag is created. This tagged version is then installed on a test system where developers, customers and users can test the implementation. After a successful test, exactly this tagged version is installed on the production system.
At first sight this looks time-consuming, but it isn't, because most of the steps can be automated. The biggest advantage is that the customer can test the change on a test system, and it is very unlikely that an error occurs only on your production system. (A precondition is that your systems are built on a similar/equal environment.)
Changes to the content
If your code changes the way your content is processed, it is an advantage when your CMS has strong workflow support. Then you can easily add a step to your workflow which decides whether the content is old and has to be migrated for the current document. This way you have a continuous migration of the content.
HTH
Varnish is a cache rather than a CDN. It intercepts page requests and delivers a cached version if one exists.
A CDN will serve up contents (images, JS, other resources etc) from an off-server location, typically in the cloud.
Pricing for cloud-based solutions is often very cryptic because it's quite complicated technology.
I would be careful with continuous migration. I've done both methods in the past (continuous and full migrations) and I have to say, continuous is a pain. It means double the admin time for everything, and assumes your requirements are the same at all points in time.
Unfortunately, I would say you're better off with a proper rebuild on a 1-2 year basis than a continuous migration, but obviously you know your situation best.
I would suggest you maybe also consider a hybrid approach? Build yourself an export tool to keep all of your content in a transferable state like CSV/XML/JSON so you can just import it into a new system when ready. This means you can incorporate new build requests when you need them in a new system (what's the point in a new system if it does exactly the same as the old one) and you get to keep all your content. Plus you don't need to build and maintain two CMSes all the time.

SDLC: Managing changes in a 'Closed System' (M1 - ERP)

I am working with a client who has an ERP system in place, called M1, that they are looking to make custom changes to.
I have spent a little bit of time investigating the ERP system in terms of making customizations. Here is a list of what I have found with regards to custom changes:
Custom changes cannot be exported/imported. There is an option for this in the M1 Design Studio; however, it always appears to be disabled. I tried everything, and I couldn't find a mention of it in the help documentation.
You can export a customizations change log (CSV, XML, Excel, HTML) that provides type, name, location and description. In essence, it is a read-only document that provides a list of changes you made. You cannot modify the contents of this log.
Custom form changes go into effect for all data sources (Test, Stage, LIVE). In other words, there does not appear to be any way to limit the scope of a form change.
Custom field changes must be made in each data source (Test, Stage, LIVE). What's odd here is that if you add a field in Test, adjust a grid to display it, and subsequently switch to LIVE, it detects that the field doesn't exist and discards the grid changes.
I'm unable to find documentation indicating that this application supports version control.
sigh
....
So...
How do I manage changes from an SDLC: ALM methodology and tools standpoint?
I could start by bringing in a change request system to manage pending and completed customizations. But then what? How should changes be managed and released? Put backups of the application under source control and deploy when needed?
There might not be a good answer to this question since I'm unable to take advantage of version control and create a separation of environments, but I figured I'd ask in case anybody has had similar experience or worked with M1.
I take it from the lack of answers in two months that your question is unanswerable. SDLC is something you could write a textbook on, or read a textbook on, and still not know enough about your environment; probably all we can say is that, in order to get hired at your shop, "SDLC" would be a bullet point on the hiring qualifications.
I have no experience with M1, but I am assuming that you're going to have to ask your peers at work for their ideas, because it sounds like you're asking a vertically closed (your shop, your tools, your practices) question that has no exact technical answer.
As for best practices; I suggest you investigate best practices outside your M1 ERP silo and apply them as makes sense to you.
The company I work for also uses the M1 ERP. We have similar issues regarding version control of the customisations. From what I can tell, all customisations are stored in the M1DD database. You could back up a copy of this database before any major development work as a basic revision control system.
I am familiar with the issue of all changes becoming immediately active in all datasets. This is particularly annoying when you are making changes to a commonly used module, as you don't know how live data will be affected during the development process. One technique I have found useful is to surround untested code with an if statement so it is only executed when I am logged in:
' Guard: only run the untested code for the developer's own login
If App.UserID = "MYUSERNAME" Then
    ' new code here
End If
I would be interested in hearing how you solved this problem.

Ideas on setting up a version control system

I've been tasked with setting up a version control for our web developers. The software, which was chosen for me because we already have other non-web developers using it, is Serena PVCS.
I'm having a hard time trying to decide how to set it up so I'm going to describe how development happens in our system, and hopefully it will generate some discussion on how best to do it.
We have 3 servers, Development, UAT/Staging, and Production. The web developers only have access to write and test their code on the Development server. Once they write the code, they must go through a certification process to get the code moved to UAT/Staging, then after the code is tested thoroughly there, it gets moved to Production.
It seems like making the developers use version control for code on Development that they are constantly changing and testing would be an annoyance. Normally only one developer works on a module at a time, so there isn't much, if any, risk of overwriting other people's work.
My thought was to have them only use version control when they are ready to go to UAT/Staging. This allows them to develop and test without constantly checking in their code.
The certification group could then use the version control system to see what changes had been made to a module and to make sure they were always getting the latest revision from the developer to put up on UAT/Staging (currently we rely on the developer zipping up their changed files and uploading them via a web request system).
This would take care of the file side of development, but leaves the whole database side out of version control. That's something else that I need to consider...
Any thoughts or ideas would be greatly appreciated. Thanks.
I would not treat source control as an annoyance. See Nick's answer for the reasons.
If I were you, I would not decide this on my own, because it is not a matter of setting up version control software on some server but a matter of changing and improving development procedures.
In your case, it might be worth explaining and discussing release branches with your developers and with quality assurance. This means that your developers decide which features to include in a release, and while the staging crew is busy testing the "staging" branch of the source, your developers can already work on the next release without interfering with the staging team.
You can also think about feature branches, which means that there is a new branch for every specific new feature of the web site. Those branches are merged back once the feature is implemented.
But again: make sure that your teams agree to the new development process. Otherwise, you waste your time setting up a version control system.
The process should at least include:
When to commit.
When to branch/merge.
What/When to tag.
The overall workflow.
I have used Serena, and it is indeed an annoyance. In addition to the unpleasant workflow overhead Serena puts on top of the check-in/check-out process, it is a real pain with regard to doing anything besides the simplest of tasks.
In Serena ChangeMan, all code on local machines is managed through a central server. This is a really bad design. It means a lot of day-to-day branch maintenance work that would ordinarily be done by developers has to go through whoever has administrator privileges, making that person 1) a bottleneck and 2) embittered, because they have a soul-sucking job.
The centralized management also strictly limits what developers are able to do with the code on their own machine. For example, if you want to create a second copy of the code locally on your box, just to do a quick test or whatever, you have to get the administrator to set up a second repository on your box. When you limit developers like this, you limit the productivity and creativity of your team.
Also, the tools are bad and the user interface is horrendous. And you will never find developers who are already trained to use it, because it's too obscure.
So, if another team says you have to use Serena, push back. That product is terrible.
Using source control isn't an annoyance; it's a tool. Having the benefits of branching and tagging is invaluable when working with new APIs and libraries.
And just a side note: a couple of months back, one of the devs' machines failed and he lost all his newest source. We asked when he had last committed code to source control, and it was two months prior. Sometimes just having it to back up stuff when you reach milestones is nice.
I usually commit to source control a couple of times a week, depending if I've hit a good stopping point and I'm about to move on to something different or bigger.
Following on from the last two good points, I would also ask your other non-web developers what development process they are using, so you won't have to create a new one. They will also have encountered many of the problems that occur in your environment, both technical (using the same OS and setup) and managerial.

How to manage multiple clients with slightly different business rules? [closed]

We have written a software package for a particular niche industry. This package has been pretty successful, to the extent that we have signed up several different clients in the industry, who use us as a hosted solution provider, and many others are knocking on our doors. If we achieve the kind of success that we're aiming for, we will have literally hundreds of clients, each with their own web site hosted on our servers.
Trouble is, each client comes in with their own little customizations and tweaks that they need for their own local circumstances and conditions, often (but not always) based on local state or even county legislation or bureaucracy. So while probably 90-95% of the system is the same across all clients, we're going to have to build and support these little customizations.
Moreover, the system is still very much a work in progress. There are enhancements and bug fixes happening continually on the core system that need to be applied across all clients.
We are writing code in .NET (ASP, C#), MS-SQL 2005 is our DB server, and we're using SourceGear Vault as our source control system. I have worked with branching in Vault before, and it's great if you only need to keep 2 or 3 branches synchronized - but we're looking at maintaining hundreds of branches, which is just unthinkable.
My question is: How do you recommend we manage all this?
I expect answers will be addressing things like object architecture, web server architecture, source control management, developer teams etc. I have a few ideas of my own, but I have no real experience in managing something like this, and I'd really appreciate hearing from people who have done this sort of thing before.
Thanks!
I would recommend against maintaining separate code branches per customer. Keeping all of them working against your core is a maintenance nightmare.
I do recommend you implement the Strategy pattern and cover your "customer customizations" with automated tests (e.g. unit and functional) whenever you are changing your core.
UPDATE:
I recommend that before you get too many customers, you need to establish a system of creating and updating each of their websites. How involved you get is going to be balanced by your current revenue stream of course, but you should have an end in mind.
For example, when you've just signed up Customer X (hopefully all via the web), their website gets created in XX minutes and the customer is sent an email stating it's ready.
You definitely want to setup a Continuous Integration (CI) environment. TeamCity is a great tool, and free.
With this in place, you'll be able to check your updates in a staging environment and can then apply those patches across your production instances.
Bottom line: once you get beyond a handful of customers, you need to start treating the automation of your operations and your deployment as yet another application in itself.
UPDATE: This post highlights the negative effects of branching per customer.
Our software has very similar requirements and I've picked up a few things over the years.
First of all, such customizations will cost you both in the short and long-term. If you have control over it, place some checks and balances such that sales & marketing do not over-zealously sell customizations.
I agree with the other posters that say NOT to use source control to manage this. It should be built into the project architecture wherever possible. When I first began working for my current employer, source control was being used for this and it quickly became a nightmare.
We use a separate database for each client, mainly because for many of our clients, the law or the client themselves require it due to privacy concerns, etc...
I would say that the business logic differences have probably been the least difficult part of the experience for us (your mileage may vary depending on the nature of the customizations required). For us, most variations in business logic can be broken down into a set of configuration values, which we store in an XML file that is modified upon deployment (if machine-specific) or stored in a client-specific folder and kept in source control (explained below). The business logic obtains these values at runtime and adjusts its execution appropriately. You can use this in concert with various strategy and factory patterns as well; config fields can contain the names of strategies, etc. Also, unit testing can be used to verify that you haven't broken things for other clients when you make changes. Currently, adding most new clients to the system simply involves mixing and matching the appropriate config values (as far as business logic is concerned).
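A small sketch of what that config-driven lookup can look like in Python (the XML layout, field names, and strategies are invented for illustration):
import xml.etree.ElementTree as ET

# Strategies registered by name; client config files refer to these names.
TAX_STRATEGIES = {
    "flat":   lambda amount: amount * 0.05,
    "tiered": lambda amount: amount * (0.04 if amount < 1000 else 0.07),
}

def load_client_config(path):
    """Read the per-client XML config file deployed alongside the site."""
    root = ET.parse(path).getroot()
    return {field.get("name"): field.text for field in root.iter("field")}

def tax_for(config, amount):
    # The business logic picks its strategy by name at runtime.
    strategy = TAX_STRATEGIES[config.get("tax_strategy", "flat")]
    return strategy(amount)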
More of a problem for us is managing the content of the site itself, including the pages, style sheets, text strings and images, all of which our clients often want customized. The approach I've taken is to create a folder tree for each client that mirrors the main site. This tree is rooted at a folder named "custom" located in the main site folder and deployed with the site. Content placed in the client-specific set of folders either overrides or merges with the default content (depending on file type), and at runtime the correct file is chosen based on the current context (user, language, etc.). The site can be made to serve multiple clients this way. Efficiency may also be a concern; you can use caching and the like to make it faster (I use a custom VirtualPathProvider).
The largest problem we run into is the burden of visually testing all of these pages when we need to make changes. Basically, to be 100% sure you haven't broken something in a client's custom setup when you change a shared stylesheet, image, etc., you would have to visually inspect every single page after any significant design change. I've developed some "feel" over time for which changes can be comfortably made without breaking things, but it's still not a foolproof system by any means.
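The override lookup itself can be very small; something like this (Python, with invented paths mirroring the folder scheme described above):
import os

SITE_ROOT = "/var/www/site"

def resolve(relative_path, client):
    """Serve the client-specific file if one exists, else the shared default."""
    custom = os.path.join(SITE_ROOT, "custom", client, relative_path)
    if os.path.exists(custom):
        return custom
    return os.path.join(SITE_ROOT, relative_path)

# resolve("styles/main.css", "acme") returns the file under custom/acme/
# if Acme has overridden the stylesheet, otherwise the shared default.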
In my case I also have no control, other than offering my opinion, over which visual/code customizations are sold, so MANY more of them than I would like have been sold and implemented.
This is not something that you want to solve with source control management, but within the architecture of your application.
I would come up with some sort of plugin-like architecture. Which plugins to use for which website then becomes a configuration issue, not a source control issue.
This allows you to use branches etc. for what they are intended for: parallel development of code between (or maybe even across) releases. Each plugin becomes a separate project (or subproject) within your source code system. This also allows you to combine all plugins and your main application into one Visual Studio solution to help with dependency analysis, etc.
Loosely coupling the various components in your application is the best way to go.
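As a sketch of the idea (Python; the per-site config and the convention that each plugin module exposes a Plugin class are assumptions):
import importlib

# Per-site configuration decides which plugins load: a config issue,
# not a source control issue.
SITE_PLUGINS = {
    "client_a": ["plugins.invoicing", "plugins.state_tax_ny"],
    "client_b": ["plugins.invoicing"],
}

def load_plugins(site):
    """Import and instantiate the plugins configured for this site."""
    loaded = []
    for module_name in SITE_PLUGINS[site]:
        module = importlib.import_module(module_name)
        loaded.append(module.Plugin())  # assumed plugin convention
    return loaded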
As mentioned before, source control does not sound like a good solution for your problem. To me it sounds better to have a single code base using a multi-tenant architecture. This way you get a lot of benefits in terms of managing your application, load on the service, scalability, etc.
Our product uses this approach. We have some (a lot of) core functionality that is the same for all clients, custom modules that are used by one or more clients, and at the core of the "customization" a simple workflow engine that uses different workflows for different clients. So each client gets the core functionality, its own workflow(s), and an extended set of modules that are either client-specific or generalized for more than one client.
Here's something to get you started on multi-tenancy architecture:
Multi-Tenant Data Architecture
SaaS database tenancy patterns
Without more info, such as the types of client-specific customization, one can only guess how deep or superficial the changes are. Some simple/standard approaches to consider:
Keep a central config specifying the uniqueness from client to client.
Centralize the business rules to one class or group of classes.
Store the business rules in the database and pull them out based on the client.
Make the business rules entirely DB/SQL-based (each client having their own DB).
Overall, hard-coding differences based on client name/id is very problematic, and keeping different code bases per client is costly (think of the complete testing/retesting time required for the 90% that doesn't change). I think more info is required to answer properly (give some specifics).
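For the database-driven option in that list, the lookup can be as simple as this sketch (the business_rules schema and the example rule are invented):
def get_rule(db, client_id, rule_name, default):
    """Fetch a client-specific business rule, falling back to a default."""
    row = db.execute(
        "SELECT value FROM business_rules WHERE client_id = ? AND rule = ?",
        (client_id, rule_name)).fetchone()
    return row[0] if row else default

# e.g. max_lines = int(get_rule(db, "client_42", "max_invoice_lines", "25"))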
Layer the application. One of those layers contains customizations and should be able to be pulled out at any time without affecting the rest of the system. Application- and DB-level "triggers" (quoted because they may or may not employ actual DB triggers) that call customer-specific code, or that are parametrized with customer keys, are very helpful.
The core should never be customized; you must layer customizations in somewhere on top of it, even if it is simplistic web filtering.
What we have is a core database with the functionality that all clients get. Then each client has a separate database that contains the customizations for that client. This is expensive in terms of maintenance. The other problem is that when two clients ask for similar functionality, it is often done differently by the two separate teams. Currently little is done to share customizations between clients or to fold common ones back into the core application. Each client has their own application portal, so we don't have to worry about a change to one client affecting another.
Right now we are looking at moving to a process using a rules engine, but there is some concern that the performance won't be there for the number of records we need to process. However, in your circumstances, this might be a viable alternative.
I've used some applications that offered the following customizations:
Web pages were configurable - we could drag fields out of view, position them where we wanted with our own name for the field label.
Add our own views or stored procedures and use them in: data grids (along with an update proc) and reports. Each client would need their own database.
Custom mapping of Excel files to import data into system.
Add our own calculated fields.
Ability to run custom scripts on forms during various events.
Identify our own custom fields.
If your clients are larger companies, you're almost certainly going to need your own SDK, APIs, etc.

What are the best practices for versioning with a data driven web app and many devs? [closed]

I've read many answers to similar questions, but still didn't get to the answer that I was looking for.
We've got a group of about 12 devs and business analysts working on one app. It's an enormous application, I'd guess about 1000+ pages in a mix of ASP and ASP.NET.
What I'm wondering is how the pros manage versioning of a large app like this one? Especially how to manage deployments, database changes and source control. Do they build the source control procedures in such a way that the app can be rolled back to a stable point at any time? How does the database fit in to that? Are all database schema changes and procs etc stored in source control?
I think my ideal solution here is to be able to re-hydrate the entire app from scratch to a specific version, including data. Is that overkill?
Update: my first guess at the size of the app was way off. I did an actual count and came up with a bigger number. 90% of those pages are frozen for development or unused.
While you are certainly asking a question that covers a lot of ground, there are 3 major parts you need to have in place:
Use a version control system. Something like Subversion or Git is going to automatically let you roll back the code of your application to any point in time, or to any point at which you "tagged" your source code. As long as everyone only commits code that builds and runs successfully, every commit will be a valid point to which you can roll changes back. Your version control system will manage multiple lines of code for you as well, with branches. There are many strategies for handling branches; find one that works best for your process.
Use a build server. A build server like CruiseControl, Hudson, or TeamCity is going to automatically ensure that every commit is a "good" commit that compiles and can run successfully (assuming you have tests in place).
Include your database schema and static database data with your code in version control. There are tools like LiquiBase that allow you to manage your database structure and are designed to work with multiple developers working concurrently, even across multiple branches. Your code is no good without the corresponding database structure, so you do need to make sure you keep both in sync. Storing your database changes in your source code repository is the easiest way to do that. That being said, you cannot store your full database in your repository. It is too big and it changes too much for your source control's diff/merge support to handle. Depending on your industry, it may also not be legal due to government regulations. If you find you need to roll back your production database due to a bad release, you will need to determine what SQL statements would best undo the changes applied by looking at your stored update scripts.
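Whether you use LiquiBase or roll your own, the underlying pattern is a table recording which numbered scripts have been applied. A bare-bones sketch (Python with sqlite3; this shows the general idea, not LiquiBase's actual mechanism):
import os

def migrate(conn, migrations_dir="migrations"):
    """Apply numbered .sql files that haven't run yet, in order."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)")
    applied = {r[0] for r in conn.execute("SELECT script FROM schema_version")}
    for script in sorted(os.listdir(migrations_dir)):
        if script.endswith(".sql") and script not in applied:
            with open(os.path.join(migrations_dir, script)) as f:
                conn.executescript(f.read())  # sqlite3-specific; other drivers differ
            conn.execute("INSERT INTO schema_version VALUES (?)", (script,))
            conn.commit()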
Perforce has a great white paper addressing some of the issues you raise. Their white paper talks about web pages, but you can extend the concept to stored procedures and scripts for generating/modifying tables.
As for being able to rehydrate from scratch, that really depends on your disaster recovery plans, as well as how you set up and configure your development and integration testing environments, and how much downtime you are willing to accept. I don't deal with these issues day to day, so perhaps people with a lot more real-world experience, especially people on the operations side of things, can impart their wisdom.
120 pages is "enormous"? Wow...
Versioning an app like this is quite straightforward. You have a production rollout process that includes tagging every release in the revision control system before it goes out, and only installing tagged releases. You back up the database at the time of the upgrade, before changing the schema, and store it somewhere with a descriptive name that allows you to link the code and DB backup. If you need to roll back, you just install the old code and restore the backup.
If you need to roll back but keep the current data, well, that's a harder problem, and it's more about how your database is structured and how its structure evolves; Rails migrations are an interesting study in how it can be done.
Beyond that, your question is pretty huge, and touches on a lot of areas. It might be best to contract someone who has dealt with all these things before and put you on the right path.
This is really quite a normal, and moderately-sized application. You just need to follow good source control processes, use proper unit testing, continuous integration, etc. Many, many, development organizations have solved this problem.
With all due respect, the fact that you consider 120 pages to be enormous, and the fact that you feel this is a big deal, both indicate to me that you should consider stopping development Right Now, and improving your development process.
I'm concerned that if you don't do that now, you'll learn later why you should have done so.
For something this small, rather than worrying about being able to roll way back in time, what you are really after is the ability to roll back the most current release to the previous one. That way, if you break the build, you can roll back to the prior build. How?
1) Your web server reads from /www/current
2) Push your changes into /www/new
3) Move /www/current to /www/old
4) Move /www/new to /www/current
If /www/current blows up, just move /www/old back in its place.
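Scripted, those four steps come out to something like this (Python sketch using the paths from the steps above):
import os
import shutil

def deploy(new_build="/www/new"):
    """Promote the new build, keeping the previous one around for rollback."""
    shutil.rmtree("/www/old", ignore_errors=True)  # discard the build before last
    os.rename("/www/current", "/www/old")
    os.rename(new_build, "/www/current")

def rollback():
    """If /www/current blows up, put the previous build back in place."""
    os.rename("/www/current", "/www/broken")
    os.rename("/www/old", "/www/current")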
The only catch is that rolling back the database schema isn't part of this. Rolling back schema changes is difficult, if not impossible. All you can really do is restore the old version from backup and lose any activity between the live database and what was on the backup. This makes sense if you think about it: if you add three new tables and somehow refactor another table into a better schema, how can you roll back the schema change afterwards? You can't; you just have to lose all the new data. Moral? Take your database changes seriously. There is a reason DBAs are sometimes considered assholes: they have to be, because they can't just roll back mistakes in the database the way you can roll back your buggy code.