Quartz.NET scheduler: Dev vs. Production best practice when using SQL as a store

According to the Quartz.NET docs, it is strictly prohibited to point two scheduler instances at a single table set unless they run in cluster mode. So the question is: how do you deal with development once the app is already in production?
Disable the persistent store for development
Enable clustering (but then the clocks of the production and dev machines have to agree to within about a second, which is not so easy to handle)
Make a separate copy of the tables for development (see the config sketch below)
How do you normally do this?
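For reference on option 3: a separate copy of the tables usually just means pointing the development scheduler at its own database (or at least its own table prefix). A minimal sketch using the standard Quartz.NET ADO job-store keys; the instance names and connection strings below are placeholders, and the exact provider name depends on the Quartz.NET version:

```
# quartz.config for the PRODUCTION scheduler (connection string is a placeholder)
quartz.scheduler.instanceName = MyAppScheduler
quartz.jobStore.type = Quartz.Impl.AdoJobStore.JobStoreTX, Quartz
quartz.jobStore.driverDelegateType = Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz
quartz.jobStore.tablePrefix = QRTZ_
quartz.jobStore.dataSource = default
quartz.dataSource.default.provider = SqlServer
quartz.dataSource.default.connectionString = Server=prod-sql;Database=QuartzProd;Integrated Security=SSPI

# quartz.config for DEVELOPMENT: same keys, but its own database (or its own
# tablePrefix), so the two non-clustered schedulers never touch the same table set
quartz.scheduler.instanceName = MyAppSchedulerDev
quartz.dataSource.default.connectionString = Server=localhost;Database=QuartzDev;Integrated Security=SSPI
```

Option 2 (clustering) would instead set quartz.jobStore.clustered = true and quartz.scheduler.instanceId = AUTO on every node, which is where the clock-synchronization requirement comes from.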

Related

Tradeoff between building your own distributed system and using Kubernetes to deploy my application

I’m proposing a project to my school supervisor: improve our current server to be more fault tolerant, easier to scale, and able to handle high traffic.
I have a plan to build a distributed system, starting with deploying our server to different PCs and implementing caching, load balancing, etc.
But I want to know whether Kubernetes can already satisfy my objective. What are the tradeoffs between using Kubernetes and building our own distributed system to deploy applications?
Our applications are built with Django and are mostly used by students, e.g. a course planner and search/recommendation systems.
You didn't give many details of your app, so I'll offer some generic thoughts. In short, Kubernetes gives you scheduling, load balancing and (a degree of) high availability for free. You still have to plan a proper application architecture, but Kubernetes gives you a good starting point where you can say "OK, I want this number of app containers running across this number of nodes" (sketched below). It also gives you internal load balancing and DNS resolution.
Of course, the tradeoff is that you have to learn Kubernetes and Docker to a certain point. But I wouldn't say that's too hard for an enthusiast.
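To make the "this many app containers" part concrete, here is a minimal sketch; the app name, image and port are made up, assuming a Django app served on port 8000. The Deployment keeps three replicas running somewhere on the cluster, and the Service gives them a single internal DNS name with load balancing across the pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: course-planner                 # hypothetical app name
spec:
  replicas: 3                          # "this number of app containers"
  selector:
    matchLabels:
      app: course-planner
  template:
    metadata:
      labels:
        app: course-planner
    spec:
      containers:
        - name: web
          image: registry.example.com/course-planner:1.0   # hypothetical image
          ports:
            - containerPort: 8000      # assumed Django/gunicorn port
---
apiVersion: v1
kind: Service
metadata:
  name: course-planner                 # reachable in-cluster as http://course-planner
spec:
  selector:
    app: course-planner
  ports:
    - port: 80
      targetPort: 8000
```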
I’m proposing a project to my school supervisor: improve our current server to be more fault tolerant, easier to scale, and able to handle high traffic.
This is a great idea. What you really want to do here is use as many existing tools as you can, so that you can focus on improving the core functionality of your current server - i.e. serving your users with your business logic and data, and increasing its availability.
Focusing on your core functionality means that you should, for example:
NOT write your own memory allocation algorithm or garbage collection
NOT write your own operating system
NOT write your own container scheduler (e.g. what Kubernetes can do for you)
I have a plan to build a distributed system, starting with deploying our server to different PCs and implementing caching and load balancing
Most applications deployed on Kubernetes, or with availability requirements like yours, should in fact be distributed systems - i.e. composed of more than one instance and designed for elasticity and resiliency.
What are the tradeoffs between using Kubernetes and building our own distributed system to deploy applications?
Kubernetes is a tool, almost a distributed operating system, that schedules containerized apps onto a server farm. It can help you a great deal when designing and developing a distributed application, which should follow the Twelve-Factor principles.

How to synchronize deployments (especially of database object changes) across multiple environments

I have a challenge. I am the DevOps engineer and a software engineer on a team where, some months back, the developers moved from a central Oracle DB to having the DB on a CentOS VM on each of their laptops. The move away from a central DB was meant to reduce dependency on the DBAs and to eliminate issues stemming from inconsistent data.
The plan for sharing the database and keeping everyone in sync was that each person would share change scripts with the rest of the team. The problem is that we use Skype for communication (we just set up Slack but have yet to start using it fully), and although people sometimes post the text of DB change scripts, some can miss them. The other problem is that some developers forget to post their changes at all. Further, new releases are deployed to Production without being deployed to the Test and Demo environments.
This has posed a serious challenge for us, especially for me, since I recently became responsible for ensuring that our Demo deployments are in sync with the Production deployments.
Most of the synchronization issues come down to the database being out of sync because of missing change scripts or missing DB objects. Oracle is our DB of choice.
A typical deployment to the Demo environment is a very painful process: we test the application and, as issues occur due to missing DB table columns, functions or stored procedures, we have to hunt down the missing DB objects, apply them to the DB, and continue until all issues are resolved.
How can I solve this problem to ensure smooth, painless and less time-consuming deployments? Can migrating our applications to Docker help with the DB synchronization issues and the associated lack of discipline of the developers? What process can we put into place to improve in this area?
Thank you very much in advance for your help.
Have a look at http://www.dbmaestro.com
I strongly recommend joining the live demo session.
DBmaestro TeamWork can help you merge the changes from multiple DBs into a single shared DB and safely move the changes from one environment to another.
Danny
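Independent of any particular tool, the usual fix for missing change scripts is to stop sharing them over chat: keep numbered change scripts in version control, apply them in order during every deployment, and record each applied script in the database itself (tools like Flyway or Liquibase automate exactly this). A hand-rolled sketch for Oracle, with illustrative table and script names:

```sql
-- db/migrations/042_add_discount_to_orders.sql  (kept in Git, applied in order)
ALTER TABLE orders ADD (discount_pct NUMBER(5,2) DEFAULT 0 NOT NULL);

-- One-time bookkeeping table, so every environment can answer
-- "which change scripts have been applied here?"
CREATE TABLE schema_version (
  version      NUMBER        NOT NULL PRIMARY KEY,
  description  VARCHAR2(200) NOT NULL,
  applied_at   TIMESTAMP     DEFAULT SYSTIMESTAMP NOT NULL
);

-- Every script ends by recording itself
INSERT INTO schema_version (version, description)
VALUES (42, 'add discount_pct to orders');
COMMIT;
```

Comparing Demo or Test against Production then becomes a query against schema_version instead of a hunt for missing objects.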

Postgres Multi-Tenant setup for production and tests

I am being asked to extend our production/QA database to include an additional schema reserved for testing. My gut keeps telling me this will lead to no good.
The reasoning I've been given is to avoid spinning up an additional RDS instance; doing so will cut costs and increase efficiency. I proposed running these tests on a local instance, or even a micro EC2 instance. Both were shot down due to the complexity and what I felt was other nonsense.
Before I push back, I am wondering if others have done this with some success. My experience in testing databases is that environments should mimic one another as much as possible and that each environment should be isolated.
My Questions are:
Is a multi-tenant schema the way to go for this? Or is there another shared schema method?
Is it common (or even heard of) to run a multi-tenant schema that supports both production and testing side by side? (A schema-separation sketch follows after these questions.)
If so, where might I look for inspiration, examples or how-tos?
What are some of the benefits/pitfalls of taking on this approach?
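For what it's worth, the proposal is usually implemented along these lines (role, schema and password are placeholders). It namespaces the test objects and keeps the grants separate, but it does not isolate load, crashes or maintenance windows - the instance is still shared, which is exactly the isolation concern raised above:

```sql
-- A separate role and schema for test objects inside the shared instance
CREATE ROLE test_runner LOGIN PASSWORD 'change-me';   -- placeholder credentials
CREATE SCHEMA test AUTHORIZATION test_runner;

-- Test sessions resolve unqualified names against their own schema only
ALTER ROLE test_runner SET search_path = test;

-- Keep the separation enforced by grants, not just naming: the production
-- application role gets no privileges on schema "test", and test_runner gets
-- none on the production schema.
```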

Staging slot and vip-swap

Coming from the classic Cloud Service model, after having used it for 5 years now, we are very used to the concept of a staging slot and the VIP-swap capability. Yes, this upgrade model has many warts, but it also has many benefits.
Clearly Service Fabric doesn't expose this model. So I wonder: was it just not a popular model in Cloud Services, or does it really not make sense 6 years later?
Is this one of those paradigm changes where I just have to rethink how we deploy and forge ahead with the newly prescribed model (rolling upgrades)? Or are there known techniques for setting up something like staging slots with SF?
Looking for advice...
VIP swaps don't make sense for stateful compute, and Service Fabric is largely a stateful compute platform (even if you only use stateless services, the system services themselves are stateful). If your services have your data in them, you have to do a rolling upgrade if you want to keep your data and keep it consistent.
So yes, it's a paradigm change, but a good one. It encourages continuous delivery and frequent upgrades, because upgrades are integrated right into the platform and don't cost you anything extra. You don't need to pay for staging VMs, which can get expensive for large deployments and might even discourage continuous delivery.
Now, you can do something similar to a staging deployment for stateless services. In Service Fabric, your "deployments" are applications, not VMs. So you can create an instance of the new application version side by side with an instance of the previous version and route your traffic however you want - gradually move users over to the new version, or just flip a switch and send all your traffic to it at once. This of course doesn't work for stateful services, because all of your data is still in the previous version's application instance.

Best practices for deployments on a 24x7 ASP.NET platform

We have built an enterprise web application on the ASP.NET platform that is load balanced across several servers. We are struggling a bit with doing regular deployments, as the application has an SLA of zero downtime.
Any guidance or tips on best practices for uninterrupted deployments would be highly appreciated.
My two favorite books that cover some of these topics are Continuous Delivery by Humble/Farley and Web Operations by Allspaw/Robbins.
I think the "easy" part here is to do a rolling deployment where you pull a node out of the load balancer, upgrade it, run smoke tests, and place it back in the load balancer. Different users will encounter different versions of the app, but you get zero downtime.
The hard part is the backend system / database that these web apps are likely hitting. You basically need to have both the old and new schemas available concurrently, which is challenging. Look at techniques like the expand/contract database pattern as an approach to pulling this off (a small sketch follows).
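A minimal illustration of expand/contract, with invented table and column names: every deployment only adds things the old code can ignore, and destructive changes wait until no old code is running.

```sql
-- Expand: add the new column; the old application version simply ignores it
ALTER TABLE customers ADD phone_number VARCHAR(32);

-- Migrate: backfill while the old and new versions run side by side
-- (the new version writes both columns, or a trigger keeps them in step)
UPDATE customers SET phone_number = phone WHERE phone_number IS NULL;

-- Contract: only after every node runs the new version, drop the old column
ALTER TABLE customers DROP COLUMN phone;
```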