Best practices for deployments on a 24x7 ASP.NET system

We have built an enterprise web application on the ASP.NET platform, load balanced across several servers. We are struggling with regular deployments because the application has an SLA of zero downtime.
Any guidance or tips on implementing best practices to support uninterrupted deployment would be highly appreciated.

My two favorite books that cover some of these topics are Continuous Delivery by Humble/Farley and Web Operations by Allspaw/Robbins.
I think the "easy" part here is to do a rolling deployment where you pull a node out of the load balancer, upgrade it, run smoke tests, and place it back in the load balancer. Different users will encounter different versions of the app, but you get zero downtime.
The hard part is the backend system / database that these web apps are likely hitting. You basically need to have both the old and new schemas available concurrently, which is challenging. Look at techniques like the expand/contract database pattern as an approach to pulling this off.
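The expand/contract idea is easiest to see as migration phases. Below is an illustrative sequence in Python with made-up table and column names: the "expand" statements are additive and backward compatible, so old and new app versions can run against the same schema during the rolling deployment, and the "contract" statements run only once every node is on the new version.

```python
# Phase 1 - EXPAND: additive, backward-compatible changes only.
EXPAND = [
    "ALTER TABLE customers ADD full_name VARCHAR(200) NULL",   # hypothetical column
    "UPDATE customers SET full_name = first_name + ' ' + last_name "
    "WHERE full_name IS NULL",                                 # backfill for new code
]

# Phase 2: rolling deploy of app code that writes both old and new columns
# and reads from the new one (see the rolling-deploy sketch above).

# Phase 3 - CONTRACT: destructive cleanup, only after all nodes run the new code.
CONTRACT = [
    "ALTER TABLE customers DROP COLUMN first_name",
    "ALTER TABLE customers DROP COLUMN last_name",
]

def run_phase(conn, statements):
    """Apply one migration phase; conn is any DB-API connection."""
    cur = conn.cursor()
    for stmt in statements:
        cur.execute(stmt)
    conn.commit()
```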

Related

On Db2 v11.1, how do we set up notifications for the DBA team if there is a hang or slowness situation during off-shift working hours?

On Db2 v11.1, how do we set up notifications for the DBA team if there is a hang or slowness situation during off-shift working hours?
The answer depends on the external monitoring and alerting solution you deployed, and how you configure that tooling in your environment.
This application-layer tooling is not built into Db2-LUW, although Db2-LUW exposes APIs that such tooling can use to get the data it needs in order to operate.
IBM and several third parties offer solutions for real-time monitoring and alerting in this space. Many cover app servers, web servers, database layers, networks, and operating-system layers, and they differ in alerting configurability. Many have a plugin-type architecture, with plugins for Db2-LUW monitoring. Do not use Stack Overflow for product recommendations, however.
For "slowness", this is only meaningful to measure usually at the application layer, in terms of response times and other metrics etc.
For database-hangs, IBM offers a db2-hang_detect script that tooling can orchestrate , requires careful interpretation and even more careful testing.
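As an illustration of measuring "slowness" from the application side, here is a sketch of a canary probe using the `ibm_db` Python driver for Db2-LUW: run a cheap known query on a schedule and raise an alert when its latency crosses a threshold. The DSN, threshold, and `alert` hook are all assumptions standing in for your environment and tooling.

```python
import time
import ibm_db  # IBM's Python driver for Db2-LUW

# Hypothetical connection string and threshold; tune to your baseline.
DSN = "DATABASE=SAMPLE;HOSTNAME=dbhost;PORT=50000;UID=monitor;PWD=secret;"
THRESHOLD_SECONDS = 2.0

def canary_check(alert):
    """Time a trivial query; call alert() if it fails or is slower than baseline."""
    start = time.monotonic()
    try:
        conn = ibm_db.connect(DSN, "", "")
        stmt = ibm_db.exec_immediate(conn, "SELECT 1 FROM SYSIBM.SYSDUMMY1")
        ibm_db.fetch_tuple(stmt)
        ibm_db.close(conn)
    except Exception as exc:
        alert(f"canary failed: {exc}")
        return
    elapsed = time.monotonic() - start
    if elapsed > THRESHOLD_SECONDS:
        alert(f"canary took {elapsed:.1f}s (threshold {THRESHOLD_SECONDS}s)")
```

Note that a genuine hang can block the connect call itself, which is why production tooling runs probes like this under an external timeout.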

Tradeoff between building my own distributed system and using Kubernetes to deploy my application

I'm proposing a project to my school supervisor: improving our current server to be more fault tolerant, easier to scale, and able to handle high traffic.
I have a plan to build a distributed system, starting with deploying our server to different PCs and implementing caching, load balancing, etc.
But I want to know whether Kubernetes can already satisfy my objective. What are the tradeoffs between using Kubernetes and building my own distributed system to deploy applications?
Our applications are built with Django, and most are used by students, for example a course planner and search/recommendation systems.
You didn't give any details of your app, so I'll provide some generic thoughts. In short, Kubernetes gives you scheduling, load balancing, and (sort of) high availability for free. You still have to plan a proper application architecture, but Kubernetes gives you a good starting point where you can say "OK, I want this number of app containers to run on this number of nodes". It also gives you internal load balancing and DNS resolution.
Of course, the tradeoff is that you have to learn Kubernetes and Docker to a certain point, but I wouldn't say that's too hard for an enthusiast.
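To make "this number of app containers" concrete, here is a sketch using the official `kubernetes` Python client to create a Deployment with three replicas; Kubernetes then schedules those containers across the available nodes and restarts them on failure. The image name and labels are made up, and in practice most people express the same thing as a YAML manifest applied with kubectl.

```python
from kubernetes import client, config

def create_app_deployment():
    """Ask Kubernetes to keep 3 replicas of a (hypothetical) Django image running."""
    config.load_kube_config()  # uses your local kubeconfig for cluster access
    container = client.V1Container(
        name="web",
        image="registry.example.com/course-planner:1.0",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8000)],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="course-planner"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # "this number of app containers"
            selector=client.V1LabelSelector(match_labels={"app": "course-planner"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "course-planner"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```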
I'm proposing a project to my school supervisor: improving our current server to be more fault tolerant, easier to scale, and able to handle high traffic.
This is a great idea. What you really want to do here is use as many existing tools as you can, so that you can focus on improving the core functionality of your current server - e.g. serving your users with your business logic and data, and increasing its availability.
Focusing on your core functionality means that you should NOT, for example:
NOT write your own memory-allocation algorithm or garbage collector
NOT write your own operating system
NOT write your own container scheduler (e.g. what Kubernetes can do for you)
I have a plan to build a distributed system, starting with deploying our server to different PCs and implementing caching and load balancing
Most applications deployed on Kubernetes - or with availability requirements like yours - actually should be distributed systems: composed of more than one instance, and designed for elasticity and resiliency.
What are the tradeoffs between using Kubernetes and building my own distributed system to deploy applications?
Kubernetes is a tool, almost a distributed operating system, that e.g. schedules containerized apps onto a server farm. It is a tool that can help you a lot when developing and designing your distributed application, which should follow the Twelve-Factor principles.
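As one small, concrete instance of those principles (factor III, config in the environment), a Django settings module can read its configuration from environment variables so the same container image runs unchanged in dev, staging, and production. The variable names below are assumptions:

```python
# settings.py (excerpt) - Twelve-Factor style: configuration comes from the
# environment, so the same image can be promoted between environments.
import os

DEBUG = os.environ.get("DJANGO_DEBUG", "false").lower() == "true"
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # fail fast if it is missing
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", "5432"),
        "NAME": os.environ.get("DB_NAME", "app"),
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
    }
}
```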

How would you architect a three-tiered web application to limit downtime?

My specific questions are:
1) How would you architect a three-tiered web application to limit downtime?
2) How do you eliminate single points of failure from a three-tiered architecture?
I could not find any resources that specifically answer these questions. I would like to hear the opinion of the community.
Regardless of which architecture you choose, failure can and will happen. The real question isn't eliminating faults but reducing them.
In a three-tier application you may have a UI, a business layer, and a DB access layer. Any of these is a single point of failure, so if one goes down the whole app stops working.
You have to rely on redundancy. You may need to deploy multiple copies of each tier; the more copies you deploy, the more fault tolerant it is. Generally, each tier talks to a load balancer to reach downstream services, and the load balancer balances across the multiple copies of each tier.
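A toy sketch of that idea: each tier reaches the next one through a balancer that rotates across redundant copies and skips a dead copy instead of failing the request. The hostnames are made up, and a real deployment would use a dedicated load balancer rather than client-side code, but the failover logic is the same.

```python
import itertools
import requests

class TierBalancer:
    """Round-robin over redundant replicas of a downstream tier, with failover."""

    def __init__(self, replicas):
        self.replicas = replicas
        self._cycle = itertools.cycle(replicas)

    def get(self, path):
        # Try each replica at most once; a single dead copy must not
        # take the whole tier down.
        last_error = None
        for _ in range(len(self.replicas)):
            host = next(self._cycle)
            try:
                return requests.get(f"http://{host}{path}", timeout=2)
            except requests.RequestException as exc:
                last_error = exc  # note the failure and move to the next copy
        raise RuntimeError(f"all replicas of the tier are down: {last_error}")

# e.g. the UI tier reaching a redundant business tier (hypothetical hosts):
business_tier = TierBalancer(["biz01:8080", "biz02:8080", "biz03:8080"])
```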

Staging slot and VIP swap

Coming from the classic Cloud Service model, after having used it for 5 years now, we are very used to the concept of a staging slot and the VIP-swap capability. Yes, this upgrade model has many warts, but also many benefits.
Clearly SF doesn't expose this model. So I wonder: was it just not a popular model in Cloud Services, or does it really not make sense 6 years later?
Is this one of those paradigm changes where I just have to rethink how we deploy and forge ahead with the newly prescribed model (rolling upgrades)? Or are there known techniques for setting up something like staging slots with SF?
Looking for advice...
VIP swaps don't make sense for stateful compute, and Service Fabric is largely a stateful compute platform (even if you only use stateless services, the system services themselves are stateful). If your services have your data in them, you have to do a rolling upgrade if you want to keep your data and keep it consistent.
So yeah, it's a paradigm change, but a good one. It encourages continuous delivery and frequent upgrades because upgrades are integrated right into the platform and don't cost you anything extra. You don't need to pay for staging VMs, which can get expensive for large deployments, and that might even discourage continuous delivery.
Now, you can do something similar to a staging deployment for stateless services. In Service Fabric, your "deployments" are applications, not VMs. So you can create an instance of a new application version side by side with an instance of the previous application version and route your traffic however you want, whether that's gradually moving users to the instance of the new version or just flipping a switch and sending all your traffic to the new version at once. This of course doesn't work for stateful services, because all of your data is still in the previous version's application instance.
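A sketch of the side-by-side routing idea for stateless services: a front-end router holds a weight for the new application instance and you either raise it gradually or flip it straight to 100. The endpoints are hypothetical; in practice this routing usually lives in your reverse proxy or gateway configuration rather than application code.

```python
import random

# Hypothetical endpoints of two side-by-side application instances.
OLD_VERSION = "http://myapp-v1.internal"
NEW_VERSION = "http://myapp-v2.internal"

new_version_weight = 10  # percent of traffic to the new version; raise gradually,
                         # or set to 100 to flip the switch all at once

def pick_backend():
    """Route roughly new_version_weight% of requests to the new app instance."""
    if random.uniform(0, 100) < new_version_weight:
        return NEW_VERSION
    return OLD_VERSION
```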

How to setup a scalable environment for the MEAN stack on AWS?

I'm developing a web app on the MEAN stack (and probably other stuff, like some way of storing images).
In the startup accelerator I'm working in, I've been advised to let go of the IaaS approach in favour of a PaaS one (namely AWS). I must admit that, being used to working on small-scale apps on single virtual machines, I'm very confused about how to approach the task and the whole PaaS thing.
Reading through the AWS documentation, it looks like Elastic Beanstalk is the way to go for building a scalable web app. From my understanding, it abstracts away the infrastructure management workload, taking care of load balancing and resource scaling.
I decided to give it a try.
Now I'm a bit confused about how to set up the infrastructure. In particular, I'm wondering how MongoDB fits into the puzzle.
I guess I shouldn't install it on the Node.js machine but on a different one, so that the two can scale out independently depending on needs.
My questions are:
Where and how should I install Mongo?
Should I let go of MongoDB in favour of something like DynamoDB? In that case, how can I set up a local development environment?
Any suggestions would be appreciated.
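On the local-development question for DynamoDB: AWS ships DynamoDB Local, a version of the service you can run on your own machine, and boto3 can be pointed at it through its endpoint_url parameter, so the same code talks to the real service in production. A sketch, with the environment variable and table name made up; for MongoDB on a separate instance, the equivalent is simply pointing the driver's connection string at that host.

```python
import os
import boto3

# In development, run DynamoDB Local (AWS ships it as a jar and a Docker image)
# and set DYNAMODB_ENDPOINT=http://localhost:8000; leave it unset in production
# so boto3 talks to the real AWS endpoint. Assumes AWS credentials/region are
# configured as usual (DynamoDB Local accepts dummy credentials).
endpoint = os.environ.get("DYNAMODB_ENDPOINT")  # hypothetical variable name
dynamodb = boto3.resource("dynamodb", endpoint_url=endpoint)

table = dynamodb.Table("images")  # hypothetical table
table.put_item(Item={"id": "img-123", "url": "https://example.com/img-123.png"})
print(table.get_item(Key={"id": "img-123"})["Item"])
```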