I'm testing out Azure Service Fabric and have started adding a lot of actors and services to the same project - is this okay to do, or will I lose any Service Fabric features such as failover, scalability, etc.?
My preference here is clearly 1 actor/1 service = 1 project. The big win with a platform like this is that it allows you to write proper microservice-oriented applications at close to no cost, at least compared to the implementation overhead you have when building something similar on other, somewhat comparable platforms.
I think it defeats the point of an architecture like this to build services or actors that span multiple concerns. It makes sense (to me at least) to use these imaginary constraints to force you to keep the area of responsibility of these services as small as possible - and rather depend on/call other services in order to provide functionality outside the responsibility of the project you are currently implementing.
As for scaling, it seems you'll still be able to scale your services/actors independently even though they are part of the same project - at least that's implied by the application manifest format. What you will not be able to do, though, is update services/actors within your project independently. As an example: if your project has two different actors and you make a change to one of them, you will still need to deploy an update to both of them, since they are part of the same code package and share a version number.
I am working on a website of 3,000+ pages that is updated on a daily basis. It's already built on an open-source CMS. However, we cannot simply continue to apply hotfixes on a regular basis. We need to replace the entire system, and I anticipate needing to replace it again every 1-2 years. We don't have the staff to work on a replacement system while the other is being worked on, as that results in duplicated effort. We also cannot have a "code freeze" while we work on the new site.
So, this amounts to changing the tire while driving. Or fixing the wings while flying. Or all sorts of analogies.
This brings me to a concept called "continuous migration." I read this article here: https://www.acquia.com/blog/dont-wait-migrate-drupal-continuous-migration
The writer's suggestion is to use a CDN like Fastly. The idea is that a CDN allows you to switch between a legacy system and a new system on a per-URL basis. In theory, this sounds like an approach that would work. The article claims that you can do this with Varnish, but that Fastly makes the job easier. I don't work much with Varnish, so I can't really verify those claims.
I also don't know if this is a good idea or if there are better alternatives. I looked at Fastly's pricing scheme, and I simply cannot translate it into a specific price point. I don't understand these cryptic cloud-service pricing plans; they don't make sense to me. I don't know what kind of bandwidth the website uses. Another agency manages the website's servers.
Can someone help me understand whether using an online CDN would be better than using something like Varnish? Are there free or cheaper solutions? Can someone tell me what this amounts to, approximately, on a monthly or annual basis? Are there other, better ways to roll out a new website on a phased basis for a large site?
Thanks!
I don't think I have exact answers to your question, but maybe my answer helps a little bit.
I don't think the CDN itself is what gives you the advantage; the advantage is having more than one system.
Changes to the code
In professional environments I'm used to having three different CMS installations. The first is the development system, usually on my PC. That system is used to develop extensions, fix bugs and so on, supported by unit tests. The code is committed to a revision control system (like SVN, CVS or Git). A continuous integration system checks the commits to the RCS. When a feature is implemented (or some bugs are fixed), a named tag is created. This tagged version is then installed on a test system, where developers, customers and users can test the implementation. After a successful test, exactly this tagged version is installed on the production system.
At first sight this looks time-consuming. But it isn't, because most of the steps can be automated. And the biggest advantage is that the customer can test the change on a test system, and it is very unlikely that an error occurs only on your production system. (A precondition is that your systems are built on a similar/identical environment.)
Changes to the content
If your code changes the way your content is processed, it is an advantage when your CMS has strong workflow support. Then you can easily add a step to your workflow which decides whether the content is old and has to be migrated for the current document. This way you have a continuous migration of the content.
HTH
Varnish is a cache rather than a CDN. It intercepts page requests and delivers a cached version if one exists.
A CDN will serve up content (images, JS, other resources, etc.) from an off-server location, typically in the cloud.
Cloud-based solutions' pricing is often very cryptic, as it's quite complicated technology.
I would be careful with continuous migration. I've done both methods in the past (continuous and full migrations) and I have to say, continuous is a pain. It means double the admin time for everything, and assumes your requirements are the same at all points in time.
Unfortunately, I would say you're better off with a proper rebuild on a 1-2 year basis than a continuous migration, but obviously you know best about that.
I would suggest you maybe also consider a hybrid approach? Build yourself an export tool to keep all of your content in a transferable format like CSV/XML/JSON, so you can just import it into a new system when ready. This means you can incorporate new build requests when you need them in a new system (what's the point of a new system if it does exactly the same as the old one?) and you get to keep all your content. Plus you don't need to build and maintain two CMSes all the time.
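To make the idea concrete, here is a minimal sketch of such an export tool, written in Java purely for illustration and assuming Jackson for the JSON output; Page and fetchAllPages() are hypothetical placeholders for whatever API or database your CMS actually exposes:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a content export tool: dump every page to JSON so the
// content can be re-imported into whatever system replaces the current CMS.
// "Page" and fetchAllPages() are hypothetical placeholders.
public class ContentExporter {

    public static class Page {
        public String path;       // e.g. "/news/some-article"
        public String title;
        public String body;       // raw markup, whatever the CMS stores
        public long lastModified; // epoch millis, useful for incremental exports
    }

    // Placeholder: replace with a query against your CMS's API or database.
    static List<Page> fetchAllPages() {
        return new ArrayList<>();
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper()
                .enable(SerializationFeature.INDENT_OUTPUT);
        List<Page> pages = fetchAllPages();
        // One big JSON file; for 3,000+ pages you might prefer one file per page.
        mapper.writeValue(new File("content-export.json"), pages);
        System.out.println("Exported " + pages.size() + " pages");
    }
}
```

The nice part of keeping exports in a neutral format is that the import side of the new system can be written and tested long before you actually switch over.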
We're building a new website design and instead of cutting over to it 100%, we'd like to ease into it so we can test as we go. The goal would be to have users that visit http://oursite.com to either get the "old" website or the new, and we could control the percentage of who gets the new site by 10%, 50%, etc.
I'm familiar with A/B tests for pages, but not an entire website domain. We're on a LAMP stack so maybe this can be done with Apache VHosts? We have 2 cloud servers running behind a cloud load balancer in production. The new site is entirely contained in an svn branch and the current production site runs out of the svn trunk.
Any recommendations on how I can pull this off?
Thank you!
You absolutely can do this, and it's a great way to quickly identify things that make a big difference in improving conversion rates. It's dependent on a couple of things:
Your site has a common header. You'll be A/B testing CSS files, so this will only work if there's a single CSS call for the entire site (or section of the site).
You're only testing differences in site design. In this scenario all content, forms, calls to action, etc. would be identical between the versions. It is technically possible to run separate tests on other page elements concurrently. I don't recommend this as interpretation of results gets confusing.
The A/B testing platform that you choose supports showing the same version to a visitor throughout their visit. It would be pretty frustrating for visitors to see the site's theme change every time they hit another page. Most A/B testing platforms that I've used have an option for this.
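On that last point, if you end up rolling your own split instead of using a platform, the usual trick is deterministic bucketing: hash a stable visitor identifier (a cookie value, say) into a bucket from 0-99 and compare it with the rollout percentage, so the same visitor always gets the same version. A minimal sketch, in Java purely for illustration (your stack is LAMP, but the idea translates directly):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Deterministic "sticky" bucketing: the same visitor ID always maps to the
// same variant, so a visitor never flips between old and new design mid-visit.
public class VariantBucketer {

    private final int percentNewSite; // e.g. 10 means 10% of visitors see the new design

    public VariantBucketer(int percentNewSite) {
        this.percentNewSite = percentNewSite;
    }

    public boolean showNewSite(String visitorId) {
        return bucket(visitorId) < percentNewSite;
    }

    // Map the visitor ID to a stable bucket in [0, 100).
    private int bucket(String visitorId) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] hash = md5.digest(visitorId.getBytes(StandardCharsets.UTF_8));
            // Use the first two bytes as an unsigned number, then reduce mod 100.
            int value = ((hash[0] & 0xFF) << 8) | (hash[1] & 0xFF);
            return value % 100;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    public static void main(String[] args) {
        VariantBucketer bucketer = new VariantBucketer(10); // 10% rollout
        System.out.println(bucketer.showNewSite("visitor-cookie-abc123"));
        System.out.println(bucketer.showNewSite("visitor-cookie-xyz789"));
    }
}
```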
Since you're testing substantial differences between versions, I also recommend that you calculate sample sizes before you begin. This will keep you from showing the losing version to too many people, and it will also give you confidence in the results (it's a mistake to let tests run until they reach statistical significance). There are a couple of online calculators that you can use (VisualWebsiteOptimizer, Evan Miller).
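If it helps, the usual back-of-the-envelope sample size formula for comparing two conversion rates is easy to code up yourself. Here's a rough sketch with 95% confidence and 80% power hardcoded; the baseline rate and minimum detectable lift in the example are made-up numbers you'd replace with your own:

```java
// Rough per-variant sample size for a two-proportion test, using the common
// normal approximation: n = (z_alpha/2 + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p1-p2)^2
public class SampleSize {

    public static long perVariant(double baselineRate, double minDetectableLift) {
        double zAlpha = 1.96; // two-sided 95% confidence
        double zBeta = 0.84;  // 80% power
        double p1 = baselineRate;
        double p2 = baselineRate * (1 + minDetectableLift);
        double numerator = Math.pow(zAlpha + zBeta, 2) * (p1 * (1 - p1) + p2 * (1 - p2));
        double denominator = Math.pow(p2 - p1, 2);
        return (long) Math.ceil(numerator / denominator);
    }

    public static void main(String[] args) {
        // Example: 3% baseline conversion, hoping to detect a 20% relative lift.
        System.out.println(perVariant(0.03, 0.20) + " visitors per variant");
    }
}
```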
Introduction
Drools Guvnor has its own versioning system, which in production use allows the users of an application to modify the rules and decision tables in order to adapt to changes in their business. Yet the same assets continue to live in the development version control system, where new features for the app are developed.
This post is looking for insight/ideas/experience on rule development and deployment when working with Drools rules and Guvnor.
Below are some key concepts I've been puzzling about.
Deployment to Guvnor
First of all, what is the best way to deploy the drl files and decision tables to the production environment? Just put them in a zip package and then unzip it into the WebDAV folder? From what I have seen of Drools, I haven't found a way to import more than one file at a time. The fact model can be added as a jar archive, though. Guvnor seems to have a REST API of some sort, but using that would require custom deployment scripts.
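The simplest scripted approach I can think of is a plain HTTP PUT against the WebDAV folder, roughly like the sketch below (the Guvnor URL, package path and credentials are made-up placeholders, and you'd loop this over every drl and decision table). Is that a reasonable way to do it, or is there something better?

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

// Upload a single rule file to a Guvnor WebDAV folder via HTTP PUT.
// URL, package path and credentials below are hypothetical placeholders.
public class GuvnorWebDavUpload {

    public static void put(String targetUrl, Path localFile, String user, String pass)
            throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(targetUrl).openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(localFile, out); // stream the file as the request body
        }
        int status = conn.getResponseCode();
        if (status >= 300) {
            throw new IOException("Upload failed with HTTP " + status);
        }
        conn.disconnect();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical WebDAV path -- adjust to your Guvnor instance and package.
        put("http://guvnor.example.com/guvnor/org.drools.guvnor.Guvnor/webdav/packages/myPackage/discountRules.drl",
            Paths.get("rules/discountRules.drl"),
            "admin", "admin");
    }
}
```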
Change management
Secondly, once the application is in production, the users will likely want to change the values in the decision tables, for example to set the discount percentages higher for premium clients. This is all fine and dandy, until the time comes to start development of version 2.0 of the app.
Now what we have at this point is
drl files and decision tables in version controlling system
drl files and decision tables in production environment with user modifications, versioned by Guvnor
Now we are at the point of getting the rules and decision tables back from Guvnor. Again, is the WebDAV folder the best option for this, or what other options are there?
Merge tools today can even handle Excel file diffs, but it sounds like merge hell to me on large-scale projects.
Keeping the fact model backwards compatible
Yet another topic is fact model integrity. For the assumed version 2.0, developers always want to refactor and turn the whole fact model upside down. Still, it must remain backwards compatible with the previous versions, as there may be user-modified rules that depend on it. Any tips on this? Just keep the fact model simple and clean? Plan ahead / anticipate what the users might want to change?
Summary
I'm certain I'm not the first, and surely not the last, to consider options for deployment and change management with Drools and Guvnor. So, what I'd like to hear are comments, discussion, tips, etc. on some of the best (and also the worst, in order to avoid them) practices for handling these situations.
Thanks.
The best way to do things depends very much on your specific application and the environment you work in. However the following are pointers from my own experience. Note that I'll add just a few points for now. I'll probably come back to this answer when things come to me.
After the initial go-live, keep releases incremental and small
For your first release you have the opportunity to try things out. Take advantage of this opportunity, and do as much refactoring as possible, because...
Your application has gone live and your business users are maintaining rules in decision tables. You have made great gains in what folks in the industry like to call "business agility". Unfortunately, this tends to come at the expense of application development agility. All of your guided editor rules and decision table rules are tied to your fact model, so any changes to existing properties of that fact model will break your decision tables. Unlike in most IDEs these days, you can't just right-click on a fact's getX() method, rename it, and expect all code which relies on that property to be updated.
Decision tables and guided rules are hard to refactor. If a fact has been renamed, then in many (all?) versions of Guvnor, that rule/table will no longer open. You need to get at the underlying XML file via WebDav and do some text searching and replacing. That may be very difficult, considering that to do this, you need to download the file from production to a test environment, make changes, test them, deploy them to a test environment. When you're happy with your changes you need to push them back up to the 'production' Guvnor. Unfortunately, while you were doing that, the users have updated a number of decision tables and you need to either start again, or re-apply the past couple of days' changes. In an ideal world, your business users will agree to make no rule changes for a period of time. But the only way to make that feasible is to keep the changes small so that you can make them in a couple of hours or a day depending on how generous they feel.
To mitigate this:
Keep facts used within Guvnor separate from your application domain classes. Your application developers can then refactor the internal application model to their hearts' content, but such changes will not affect the business model. Maintain mappings between the two and ensure there are tests covering those mappings (see the sketch after this list).
Avoid changes such as renaming facts or their properties. Make sure that the facts you create and their properties have names which suit the domain, and agree these with the business. On the other hand, adding a new property is relatively painless. It is well worth prompting the users for insight into their future plans.
Keep facts as simple as possible. Don't go more complex than name-value pairs unless you really need to. For one thing, your business users will find it much easier to maintain rules. Your priority with anything managed within Guvnor should always be about making it easy for business users to maintain.
Keep external dependencies out of your facts. A developer may think it's a good idea to annotate a fact as a JPA @Entity, for easy persistence. Unfortunately, that adds dependencies which need to be added to Guvnor's classpath, requiring a restart.
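To make the first point concrete, here is a minimal sketch of what I mean by keeping a Guvnor-facing fact separate from the internal domain class, with a tested mapping between them; all class and property names here are invented for illustration:

```java
import java.math.BigDecimal;

// Sketch: a flat, Guvnor-facing fact kept separate from the internal domain model.
// Names are illustrative only; the point is that developers can refactor
// CustomerEntity freely as long as the mapping below keeps producing the same fact.
public class CustomerFactMapping {

    // The fact the business rules see: simple, flat, stable names agreed with the business.
    public static class CustomerFact {
        private String customerType;   // e.g. "PREMIUM", "STANDARD"
        private BigDecimal yearlySpend;

        public String getCustomerType() { return customerType; }
        public void setCustomerType(String customerType) { this.customerType = customerType; }
        public BigDecimal getYearlySpend() { return yearlySpend; }
        public void setYearlySpend(BigDecimal yearlySpend) { this.yearlySpend = yearlySpend; }
    }

    // The internal domain class, free to change shape between releases.
    public static class CustomerEntity {
        public String tierCode;        // internal code the business never sees
        public BigDecimal spendLast12Months;
    }

    // The mapping layer: cover this with tests so refactoring the entity
    // cannot silently change what the rules receive.
    public static CustomerFact toFact(CustomerEntity entity) {
        CustomerFact fact = new CustomerFact();
        fact.setCustomerType("P1".equals(entity.tierCode) ? "PREMIUM" : "STANDARD");
        fact.setYearlySpend(entity.spendLast12Months);
        return fact;
    }
}
```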
Tips & tricks
My personal technique for making cross-environment changes is to connect Eclipse to two Guvnor WebDav directories and check out the rules into local directories, where each local directory maps to an environment. I then use the Eclipse diff tooling.
When building a Guvnor-managed knowledge base, I create a separate Maven project containing only the facts, with no dependencies on anything else. It makes it a lot easier to keep them clean this way. Also, when I really do need to add a dependency (e.g. I use Joda-Time where possible), the build can have a step to generate a shaded JAR containing all the dependencies. That way you only ever deploy one JAR to Guvnor, which is guaranteed to contain the correct versions of your dependencies.
I'm sure there will be more that I think of. I'll try to remember to come back to this...
Given a large scale software project with several components written in different languages, configuration files, configuration scripts, environment settings and database migration scripts - what are the common practices for deployment to production?
What are the difficulties to consider? Can the process be simplified with tools like Ant or Maven? How can rollback and database management be handled? Is it advisable to use version control for the production environment?
As I see it, you're mostly asking about best practices and tools for release engineering AKA releng -- it's important to know the "term of art" for a subject, because it makes it much easier to search for more information.
A configuration management system (CMS -- aka revision control system or version control system) is indispensable for today's software development; if you use one or more IDEs, it's also nice to have good integration between them and the CMS, though that's more of an issue for purposes of development than for purposes of deployment / releng.
From a releng viewpoint, the key thing about a CMS is that it must have good support for "branching" (under whatever name), because releases must be made from a "release branch" where all the code under development and all of its dependencies (code and data) are in a stable "snapshot" from which the exact, identical configuration can be reproduced at will.
The need for good branching support may be more obvious if you have to maintain multiple branches (customized for different uses, platforms, whatever), but even if your releases are always, strictly in a single linear sequence, releng best practices still dictate making a release branch. "Good branching support" includes ease of merging (and "conflict resolution" when different changes are made to a file), "cherry-picking" (taking one patch or changeset from one branch, or the head/trunk, and applying it to another branch), and the like.
In practice, you start off the release process by making a release branch; then, you run exhaustive testing on that branch (typically MUCH more than what you run everyday in your continuous build -- including extensive regression testing, integration testing, load testing, performance verification, etc, and possibly even more costly quality assurance processes, depending). If and when the exhaustive testing and QA reveal defects in the release-candidate (including regressions, performance degradation, etc), they must be fixed; in a large team, development on head/trunk may be continuing while the QA is being done, whence the need for ease of cherry-picking / merging / etc (whether your practice is to perform the fix on head or on the release branch, it still needs to be merged to the other side;-).
Last but not least, you're NOT getting full releng value from your CMS unless you're somehow tracking with it "everything" that your releases depend on -- simplest would be to have copies or hard links to all the binaries for the tools you need to build your release, etc, but that may often be impractical; so at the very least track the exact release, version, bugfix &c numbers of those tools that are used (operating system, compilers, system libraries, tools that preprocess image, sound or video files into final form, etc, etc). The key is being able, at need, to exactly reproduce the environment required to rebuild the exact version that's proposed for release (otherwise you'll go mad tracking down subtle bugs that may depend on third party tools' changes as their versions change;-).
After a CMS, the second most important tool for releng is a good issue tracking system -- ideally one that's well integrated with the CMS. That's also important for the development process (and other aspects of product management), but in terms of the release process the importance of the issue tracker is the ability to easily document exactly what bugs have been fixed, what features have been added, removed, or changed, and what modifications in performance (or other user-observable characteristics) are expected in the new forthcoming release. For the purpose, a key "best practice" in development is that every changeset that gets committed to the CMS must be connected to one (or more) issue in the issue tracking system: after all, there's gotta be some purpose for that change (fix a bug, change a feature, optimize something, or some internal refactor that's supposed to be invisible to the software's user); similarly, every tracked issue that's marked as "closed" must be connected to one (or more) changesets (unless the closing is of the "won't fix / working as intended" kind; issues related to bugs &c in third-party components, which have been fixed by the third-party supplier, are easy to treat similarly if you do manage to keep track of all third-party components in the CMS too, see above; if you don't, at least there should be text files under CMS documenting third-party components and their evolution, again see above, and they need to be changed when some tracked issue on a 3rd party component gets closed).
Automating the various releng processes (including building, automated testing, and deployment tasks) is the third top priority -- automated processes are much more productive and repeatable than asking some poor individual to manually go through a list of steps (for sufficiently complex tasks, of course, the workflow of the automation may need to "get a human being in the loop"). As you surmise, tools such as Ant (and SCons, etc, etc) can help here, but inevitably (unless you're lucky enough to get away with very simple and straightforward processes) you'll find yourself enriching them with ad-hoc scripts &c (some powerful and flexible scripting language such as perl, python, ruby, &c, will help). A "workflow engine" can also be precious when your release workflow is sufficiently complex (e.g. involving specific human beings or groups thereof "signing off" on QA compliance, legal compliance, UI guidelines compliance, and so forth).
Some other specific issues you're asking about vary enormously depending on specifics of your environment. If you can afford programmed downtime, your life is relatively easy, even with a large database in play, as you can operate sequentially and deterministically: you shut the existing system down gracefully, ensure the current database is saved and backed up (easing rollback, in the hopefully VERY rare case it's needed), run the one-off scripts for schema migration or other "irreversible" environment changes, fire the system back up again in a mode that's still inaccessible to general users, run another extensive suite of automated tests -- and finally if everything's gone smoothly (including the saving and backup of the DB in its new state, if relevant) the system's opened up to general use again.
If you need to update a "live" system, without downtime, this may range anywhere from a slight inconvenience to a systematic nightmare. In the best case, transactions are reasonably short, and synchronization between the state set by transactions may be delayed a bit without damage... and you have a reasonable abundance of resources (CPUs, storage, &c). In this case, you run two systems in parallel -- the old one and the new one -- and just make sure all new transactions target the new system, while letting old ones complete on the old system. A separate task periodically syncs "new data in the old system" to the new system, as transactions on the old system terminate. Eventually you can determine that no transactions are running on the old system and all changes that happened there are synced up to the new one -- and at that time you can finally shut down the old system. (You need to be prepared to "reverse sync" too of course, in case a rollback of the change is needed).
This is the "simple, sweet" extreme for live system updating; at the other extreme, you can find yourself in such an overconstrained situation that you can prove the task is impossible (you just cannot logically meet all the stated requirements with the given resources). Long sessions opened on the old system which just cannot be terminated -- scarce resources that make it impossible to run two systems in parallel -- core requirements for hard-real-time sync of every transaction -- etc, etc, can all make your life miserable (and as I noticed, at the extreme, can make the stated task absolutely impossible). The two best things you can do about this: (1) ensure you have abundant resources (this will also save your skin when some server unexpected goes belly-up... you'll have another one to fire up to meet the emergency!-); (2) consider this predicament from the start, when initially defining the architecture of the overall system (e.g.: prefer short-lived transactions to long-lived sessions that just can't be "snapshot, closed down, and restarted seamlessly from the snapshot", is one good arcitectural pointer;-).
disclaimer: where I work we use something I wrote that I'm going to mention
I'll tell you how I do it.
For configuration settings, and general deployment of code and content, I use a combination of NAnt, a CI server, and dashy (an automatic deployment tool). You can replace dashy with any other 'thing' that automates the uploads to your server (perhaps Capistrano).
For DB scripts, we use RedGate SQL Compare to get the scripts, and then for most changes I actually make them manually, where appropriate. This is because some of the changes are slightly complex, and I feel more comfortable doing it by hand. You can actually use this tool to do it for you (or at least generate the scripts).
Depending on your language, there are some tools that can script the DB updates for you (I think someone on this forum wrote one; hopefully he'll reply), but I have no experience with those. It's something I'd like to add though.
Difficulties
I forgot to answer one of your questions.
The biggest problem in updating any significantly complicated/distributed site is DB synchronisation. You need to consider whether you will have any downtime, and if you do, what will happen to the DBs. Will you shut down everything so that no transactions can be processed? Or will you shift everything to one server, update DB B, then sync DB A & B, and then update DB A? Or something else?
Whatever you choose, you need to choose it, and either say "Okay for each update, there will be X downtime", or whatever. Just have it documented.
The worst thing you can do is have someone's transaction fail mid-processing because you were updating that server, or to somehow leave only part of your system operational.
I don't think you have any choice about whether to use versioning or not.
You cannot go without versioning (as mentioned in a comment).
I am talking from experience since I am currently working on a website where we have several items that have to work together:
The software itself, its functionalities at a given point in time
The external systems (6 of them) we are linked with (messages are versioned)
A database, which holds the configuration (and translations)
The static resources (images, CSS, JavaScript) hosted on an Apache server (several, in fact)
A Java applet, which has to be synchronized with the JavaScript and the software
This works because we use versioning, although I must admit that our database is pretty simple, and because the deployment is automated.
Versioning means that, at any point in time, we actually have several concurrent versions of the messages, the database, the static resources and the Java applet.
It is especially important in the event of a 'fallback'. If you discover a flaw when loading your new software and suddenly you cannot go ahead with the load, you'll have a crisis if you have not versioned; if you have, you can just load the old software again.
Now as I said, the deployment is scripted:
- the static resources are deployed first (+ the Java applet)
- the database goes next; several configurations cohabit since they are versioned
- the software is queued for loading in a 'cool' time window (when our traffic is at its lowest point, so at night)
And of course we take care of backward / forward compatibility issues with the external servers, which should not be underestimated.