Maintaining a large website [closed] - deployment

When I update a website I just replace the content with a new file.
How do larger websites update their content while thousands of visitors are viewing the site?
For example, how do Facebook or Twitter do it, with thousands of developers working and millions of visitors on the site? Are they working on a duplicate of the website and then switching the DNS? Are they using Git?

Blue-Green is a widely used deployment strategy that avoids downtime.
First off, you need a router/load balancer that can forward requests for a virtual IP to an actual machine. Where I work, we use F5.
You must also have two production environments, called "blue" and "green".
Only one of them is "live" at any time.
By this, I mean that your router must forward all incoming requests to either the "blue" environment, or the "green" environment.
Let's say "green" is live, and you need to release a new version of your app to production.
You deploy your new content/application to your "blue" environment (remember, no requests are being routed there, so the environment is effectively offline).
Then you test your "blue" environment and make sure everything has been deployed correctly before going live.
Then you change your router to forward all requests to your new and stable "blue" environment.
If you discover a bug after going live, simply roll back by changing your router again to route all requests to your "green" environment, which still runs the "old" application.
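To make the switch concrete, here is a minimal sketch of the routing flip in Python. It is not tied to F5 or any real load balancer; the environment addresses are invented, and in practice the "flip" is a load-balancer configuration change rather than application code:

```python
# Hypothetical backend addresses for the two production environments.
ENVIRONMENTS = {
    "blue": "http://10.0.0.10",
    "green": "http://10.0.0.20",
}

live = "green"  # all traffic currently goes here

def route(request_path: str) -> str:
    """Forward every incoming request to whichever environment is live."""
    return f"{ENVIRONMENTS[live]}{request_path}"

def go_live(env: str) -> None:
    """Flip the router; rollback is the same call with the old name."""
    global live
    assert env in ENVIRONMENTS
    live = env

# Deploy and test "blue" while "green" serves traffic, then:
go_live("blue")    # cutover
# ...and if a bug shows up in production:
go_live("green")   # instant rollback
```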
More about blue-green deployments here: BlueGreenDeployment
Another well known deployment strategy is the Canary Release, which enables new features for a small number of users, and once everything's been tested properly, it's enabled for all users.
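A common way to implement the canary split is to bucket users deterministically, for example by hashing a user id, so each user consistently sees the same version. A sketch; the 5% threshold is arbitrary:

```python
import hashlib

CANARY_PERCENT = 5  # start small, raise it as confidence grows

def serves_canary(user_id: str) -> bool:
    """Deterministically assign each user to the stable or canary version."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return digest[0] % 100 < CANARY_PERCENT  # roughly uniform 0..99 bucket

version = "canary" if serves_canary("user-42") else "stable"
```

Because the bucketing is deterministic, a user does not bounce between versions across requests, which keeps sessions and caches coherent during the rollout.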

They all work with version control systems like Git, SVN, etc., so a team with different roles can push and review commits before they are pulled onto the live environment (pull requests). The big sites also have a really big testing infrastructure.

Related

Deployment gaps at fast pace growing application [closed]

Some context:
I have little experience with CI/CD and have managed a fast-paced, growing application since it first saw the light of day. It is composed of several microservices in different environments. Devs are constantly pushing new code to DEV, but they frequently forget to send new values from their local .env over to the OpenShift cloud, regardless of whether this is a brand-new environment or an existing one.
The outcome? Services that fail because their secrets were never updated.
I understand the underlying issue is a lack of communication between us DevOps staff and the devs themselves. But I've been trying to figure out some sort of process that would make sure we are not missing anything. Maybe something like a "before takeoff" checklist (yes, like the ones pilots run through in real flight preparation): if the check fails, then the aircraft is not ready for takeoff.
So the question is for everyone out there who practices DevOps: how do you deal with this?
Does anyone automate this within OpenShift/Kubernetes, for example? From your perspective and experience, would you suggest any tools for that, or simply enforce communication?
I guess no checklist or communication will work for a team that "...frequently forget[s] to send new values from their local .env over...", which you must already have tried.
A step in your pipeline should check for service availability before proceeding to the next step, e.g.: does the service have an endpoint registered within an acceptable time? No endpoint means the backing pod(s) did not enter the ready state as expected. In that case, roll back, send a notification to the team responsible for the service/application, and exit cleanly.
There's no fixed formula for CI/CD, especially where human error is involved. Checks and balances at every step are the least you can do to trigger early warnings and avoid a disastrous deployment.
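One concrete shape for such a "before takeoff" gate is a pipeline step that fails loudly when something is missing. Here is a minimal sketch in Python rather than any particular pipeline syntax; the health URL, timeout, and required-key list are placeholders, and a real OpenShift setup would more likely query the rollout status through `oc` or the API:

```python
import os
import sys
import time
import urllib.request

REQUIRED_KEYS = {"DB_PASSWORD", "API_TOKEN"}   # hypothetical secret keys
HEALTH_URL = "http://my-service/healthz"       # hypothetical endpoint
TIMEOUT_S = 120

def missing_env_keys(env: dict) -> set:
    """Compare what the service actually received against the checklist."""
    return REQUIRED_KEYS - env.keys()

def wait_until_ready() -> bool:
    """Poll the service until it answers, or give up after TIMEOUT_S."""
    deadline = time.monotonic() + TIMEOUT_S
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # service not up yet, keep polling
        time.sleep(5)
    return False

if __name__ == "__main__":
    missing = missing_env_keys(dict(os.environ))
    if missing or not wait_until_ready():
        print(f"Deployment gate failed (missing keys: {missing})", file=sys.stderr)
        sys.exit(1)  # non-zero exit fails the step: roll back and notify
```

The non-zero exit is the whole trick: the pipeline stops, the rollback/notification step runs, and the broken deployment never reaches users.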

How to sync a mobile app offline state with a remote database? [closed]

I am building a mobile app using Flutter. All user data is stored online in a MySQL database, so the app needs an internet connection for almost every user interaction (there is a backend REST API).
Users have to be able to create lists of tasks and update and delete every task and list, and so on. But from the user's perspective, needing an internet connection for every simple operation like adding or deleting a task is a bad experience. I need a way to support these operations even without a connection to the backend, and to apply the changes later when possible. What is the best practice for handling this case?
How do I keep the app behaving normally even without an internet connection and sync all the changes the user has made with the backend once the internet is available again?
For example, if the user creates a new list, the app expects to receive the new list's object (with an id) from the backend. Later this id is used for every backend call about this list, such as adding a task to it.
What you can do is use a state management approach like Provider or Bloc, keep a local copy of your database (or just the needed lists) in that state, apply all changes to it while offline, and push those changes to the server once the device is connected to the internet again.
Read here about Flutter state management.
You can also detect whether the device is connected to the internet with the connectivity and data_connection_checker packages.
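The usual pattern behind that advice is a pending-operations queue ("outbox") with temporary client-side ids that get reconciled against server-assigned ids during sync. A language-agnostic sketch, shown in Python for brevity (a Flutter app would persist the queue in something like sqflite or Hive; the operation shapes here are invented):

```python
import uuid

pending_ops = []   # persisted locally in a real app, so it survives restarts
id_map = {}        # temporary client id -> real server-assigned id

def create_list_offline(name: str) -> str:
    """Create the list locally under a temporary id and queue the API call."""
    temp_id = f"tmp-{uuid.uuid4()}"
    pending_ops.append({"op": "create_list", "temp_id": temp_id, "name": name})
    return temp_id  # the UI can use this id immediately

def add_task_offline(list_id: str, title: str) -> None:
    """Queue a task creation; list_id may still be a temporary id."""
    pending_ops.append({"op": "add_task", "list_id": list_id, "title": title})

def sync(post):
    """Replay the queue in order once connectivity returns.
    `post` stands in for whatever HTTP client the app uses."""
    while pending_ops:
        op = pending_ops[0]
        # rewrite temporary ids with the real ones learned so far
        if op.get("list_id") in id_map:
            op["list_id"] = id_map[op["list_id"]]
        resp = post(op)  # may raise on network failure -> retry next sync
        if op["op"] == "create_list":
            id_map[op["temp_id"]] = resp["id"]  # remember the server's id
        pending_ops.pop(0)  # drop only after the server confirmed
```

Replaying in order is what makes the temporary-id trick work: the list creation always reaches the server before any task that references it.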

Is there a way for an application or a system to update without shutting down? [closed]

I work in a hospital where the system shuts down when updating, leaving all orders hanging with no approvals or modifications possible. Considering it's a hospital, this is a huge problem. So my question is: how can we update the system without it shutting down? I'm most interested in rolling updates where there's no downtime.
This is a very broad question, but generally, yes, it is perfectly possible to update a system without shutting it down.
The simplest possible solution is to have a duplicate system. Let's say you are currently working with System A. When you want to do an update, you update System B. The update can take as long as it needs, since you are not using System B. There will be no impact at all.
Once the update is finished, you can test the hell out of System B to make sure the update didn't break anything. Again, this has no impact on working with the system. Only after you are satisfied that the update didn't break anything, do you switch over to using System B.
This switchover is near instantaneous.
If you discover later that there are problems with the update, you can still switch back to System A which is still running the old version.
For the next update, you again update the system which is currently not in use (in this case System A) and follow all the same steps.
You can do the same if you have a backup system. Update the backup system, then fail over, then update the main system. Just be aware that while the update is happening, you do not have a backup system. So, if the main system crashes during the update process, you are in trouble. (Thankfully, this is not quite as bad as it sounds, because at least you will already have a qualified service engineer on the system who can immediately start working on either pushing the update forward to get the backup online, or fixing the problem with the main system.)
The same applies when you have a redundant system. You can temporarily disable redundancy, then update the disabled system, flip over, do it again. Of course, just like in the last option, you are operating without a safety net while the update process is ongoing.
If your system is a cluster system, it's even easier. If you have enough resources, you can take one machine out of the cluster, update it, then add it back into the cluster again, then do the next machine, and so on. (This is called a "rolling update", and is how companies like Netflix, Google, Amazon, Microsoft, Salesforce, etc. are able to never have any downtime.)
If you don't have enough resources, you can add a machine to the cluster just for the update, and then you are back to the situation that you do have enough resources.
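The rolling update loop from the cluster case, reduced to a sketch. The load balancer, deploy, and health-check hooks are stand-ins for whatever your tooling provides, not a real API:

```python
def rolling_update(nodes, lb, deploy, healthy):
    """Update one machine at a time so the cluster as a whole never stops.

    lb      -- controls which nodes receive traffic (placeholder interface)
    deploy  -- installs the new version on one node (placeholder)
    healthy -- readiness probe for a node (placeholder)
    """
    for node in nodes:
        lb.remove(node)    # stop sending traffic to this machine
        deploy(node)       # update it while it is out of rotation
        if not healthy(node):
            # halt the rollout; the remaining nodes still run the old version
            raise RuntimeError(f"{node} failed its health check after update")
        lb.add(node)       # back into the cluster, then on to the next one
```

The key property is that a failed update stops the rollout after one node, while the rest of the cluster keeps serving on the old version.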
Yes.
Every kind of component can be updated without a reboot.
On Windows, you can always postpone reboots.

Version Control for Virtual Appliances [closed]

My understanding of a virtual appliance is 1+ pre-configured VM(s) designed to work with one another and each with a pre-configured:
Virtual hardware configuration (disks, RAM, CPUs, etc.)
Guest OS
Installed & configured software stack
Is this (essentially) the gist of what an appliance is? If not please correct me and clarify!
Assuming that my understanding is correct, it raises the question: what are the best ways to back up an appliance? Obviously an SCM like SVN would not be appropriate, because an appliance isn't source code: it's an enormous binary file representing an entire machine, or even a set of machines.
So how do you keep "backups" of appliances? How do you imitate version control for appliance configurations?
I'm using VBox so I'll use that in the next example, but this is really a generic virtualization question.
If I develop/configure an appliance and label it as the "1.0" version, and deploy that appliance to a production server running the VBox hypervisor, then I'll use software terms and call that a "release". What happens if I find a configuration issue with the guest OS of that appliance and need to release a 1.0.1 patch?
Thanks in advance!
From what I've seen and used, appliances are released with the ability to restore their default VM, probably from a ghost partition of some kind (I'm thinking of the Comrex radio STL units I've worked with). Patches can be applied to the appliance, with the latest patch usually containing all the previous ones (if needed).
A new VM means a new appliance (Comrex ACCESS 2.0 or whatever), and 1.0 patches don't work on it. It's never backed up; rather, it can just be restored to a factory state. The Comrex units store connection settings, static IP configuration, all that junk, but resetting kills all of it and it has to be re-entered (which I've had to do before).
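Since diffing multi-gigabyte images is impractical, one pragmatic stand-in for version control is a release manifest: checksum each image and record it alongside the version number, so a "1.0.1" release is a new image plus a verifiable record of what it contains. A hypothetical sketch:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum the appliance image so a release is verifiable later."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def write_manifest(image: Path, version: str) -> None:
    """Record what shipped; the manifest (not the image) lives in SCM."""
    manifest = {
        "version": version,        # e.g. "1.0.1" for the patch release
        "image": image.name,
        "sha256": sha256_of(image),
    }
    Path(f"appliance-{version}.json").write_text(json.dumps(manifest, indent=2))

# write_manifest(Path("appliance-1.0.1.ova"), "1.0.1")  # hypothetical file
```

The small text manifests version cleanly in SVN or Git, while the images themselves sit in bulk storage keyed by checksum.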

How To Deploy Web Application

We have an internal web system that handles the majority of our company's business. Hundreds of users use it throughout the day; it's very high priority and must always be running. We're looking at moving to ASP.NET MVC 2; at the moment we use Web Forms. The beauty of using Web Forms is that we can instantaneously release a single web page, as opposed to deploying the entire application.
I'm interested to know how others are deploying their applications whilst still making them accessible to the user. Using the deployment tool in Visual Studio would supposedly cause a halt. I'm looking for a method that's super quick.
If you had high-priority bug fixes, for example, would it be wise to mix Web Forms with MVC and replace the view with a code-behind Web Form until you make the next proper release, which isn't a Web Form?
I've also seen other solutions on the same server, such as having two copies of the web application run side by side and either changing the root directory in IIS or changing the web.config to point to a different folder. The problem with this is that you have to do an entire build and deploy even for a simple bug fix.
EDIT: To elaborate, how do you deploy the application without causing any disruption to users?
How is everyone else doing it?
I guess you could also run the MVC application uncompiled, and just replace .cs files/views and such on the fly.
A Web Setup uninstall/install is very quick, but it kills the application pool, which might cause problems depending on how your site is built.
The smoothest way is to run it on two servers and store the sessions in SQL Server or shared state. Then you can bring S1 down and patch it, bring S1 back up again, then bring S2 down, patch S2, and bring it up again. Although this might not work if you make any major changes to the session-related parts of the code.
Have multiple instances of your website running on multiple servers. The best way to do it is to have a production environment, a test environment, and a development environment. You can create test cases and run the load every time you have a new build; if it gets through all the tests, move the version into production ;).
You could have two physical servers each running IIS and hosting a copy of the site. OR you could run two copies of the site under different IIS endpoints on the SAME server.
Either way you cut it you are going to need at least two copies of the site in production.
I call this an A<->B switch method.
Firstly, have each production site on a different IP address. In your company's DNS, add an entry pointing to one of the IPs and give it a really short TTL. Then you can update site B and also pre-test/warm up the site by hitting its IP address directly. When it's ready to go, switch your DNS to the new site B. Once your TTL has expired, you can take down site A and update it.
Using shared session state will help smooth the transition of users between sites.
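The pre-test/warm-up step can be scripted by requesting key pages from site B's IP with the production Host header, so ASP.NET compiles and caches them before real users arrive. A rough sketch; the IP, hostname, and paths are placeholders:

```python
import urllib.request

SITE_B_IP = "192.0.2.20"                    # hypothetical: site B's address
HOST = "intranet.example.com"               # the name users will resolve
WARMUP_PATHS = ["/", "/login", "/reports"]  # pages worth pre-compiling

def warm_up() -> bool:
    """Hit site B directly by IP, with the real Host header, and report
    whether every key page responds successfully before the DNS switch."""
    ok = True
    for path in WARMUP_PATHS:
        req = urllib.request.Request(f"http://{SITE_B_IP}{path}",
                                     headers={"Host": HOST})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                ok = ok and resp.status == 200
        except OSError:
            ok = False
    return ok

# Only flip the DNS entry to SITE_B_IP once warm_up() returns True.
```

This is the same "test before cutover" idea as the A<->B switch itself; the DNS change only happens after the warm-up passes.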