I have multiple consuming app instances connecting to a Spring Cloud Config Server. The config server gets its configuration from an SVN repository.
I just want to understand how a config server instance manages possibly concurrent requests.
That's a bug in the SVN support (the git version has a synchronized method). https://github.com/spring-cloud/spring-cloud-config/issues/128
Related question:
I am using Spring Config Server with the Git backend as a property repository. My application is deployed on Kubernetes and runs as multiple pods.
How do I make sure that, after pushing new properties to Bitbucket, all pods get refreshed?
How does Kafka release patch updates?
How do users get to know about Kafka patch updates?
Kafka is typically available as a zip/tar archive that contains the binaries used to start, stop, and manage Kafka. To keep track of new releases, you may want to:
Subscribe to https://kafka.apache.org/downloads by generating a feed for it.
Subscribe to any other feeds that give you release updates.
Write a script that periodically checks https://downloads.apache.org/kafka/ for new Kafka releases and notifies you or downloads them (see the sketch below).
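As an illustration of the last option, here is a minimal sketch of such a checker script; the state-file location and the notification step are assumptions you would adapt to your environment:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: poll the Apache Kafka download mirror for new release
# directories and print a notice when one appears.
set -euo pipefail

DOWNLOADS_URL="https://downloads.apache.org/kafka/"
STATE_FILE="${HOME}/.kafka_last_release"   # assumed location for the last seen version

# Directory listing entries look like href="3.7.0/"; extract and sort the version numbers.
latest=$(curl -fsSL "$DOWNLOADS_URL" \
  | grep -oE 'href="[0-9]+\.[0-9]+\.[0-9]+/"' \
  | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \
  | sort -V | tail -n 1)

last_seen=$(cat "$STATE_FILE" 2>/dev/null || echo "none")

if [ "$latest" != "$last_seen" ]; then
  echo "New Kafka release available: $latest (previously seen: $last_seen)"
  echo "$latest" > "$STATE_FILE"
  # Optionally: send mail, post to a chat channel, or download the tarball here.
fi
```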
Kafka versions typically follow a major.minor.patch format.
Every time there is a new Kafka release, we download the latest archive, reuse the old configuration files (making changes if required), and start Kafka with the new binaries. The upgrade process is fully documented in the Upgrading section at https://kafka.apache.org/documentation.
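A rough sketch of those manual steps on a single broker; the version, paths, and symlink layout are assumptions, and the rolling-upgrade guidance in the official documentation still applies:

```bash
NEW_VERSION="3.7.0"                       # assumed target release
SCALA_VERSION="2.13"
INSTALL_DIR="/opt"
CURRENT_LINK="/opt/kafka"                 # symlink pointing at the active install

# 1. Download and unpack the new binaries next to the old ones.
curl -fSLO "https://downloads.apache.org/kafka/${NEW_VERSION}/kafka_${SCALA_VERSION}-${NEW_VERSION}.tgz"
tar -xzf "kafka_${SCALA_VERSION}-${NEW_VERSION}.tgz" -C "$INSTALL_DIR"

# 2. Reuse the existing configuration (edit it first if the release notes require changes).
cp "$CURRENT_LINK/config/server.properties" "$INSTALL_DIR/kafka_${SCALA_VERSION}-${NEW_VERSION}/config/"

# 3. Stop the old broker, switch the symlink, and start the new binaries.
"$CURRENT_LINK/bin/kafka-server-stop.sh"
ln -sfn "$INSTALL_DIR/kafka_${SCALA_VERSION}-${NEW_VERSION}" "$CURRENT_LINK"
"$CURRENT_LINK/bin/kafka-server-start.sh" -daemon "$CURRENT_LINK/config/server.properties"
```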
For production environments, there are a couple of options:
1. Using a managed Kafka service (e.g. AWS, Azure, Confluent, etc.)
In this case, we need not worry about patching and security updates to Kafka because they are taken care of by the service provider. On AWS, for example, you will typically get notifications in the console about when your Kafka update is scheduled.
It is easy to get started with a managed Kafka service for production environments.
2. Using self-hosted Kafka on Kubernetes (e.g. using Strimzi)
If you are running Kafka in a Kubernetes environment, you can use the Strimzi operator and helm upgrade to move to the version you require. First refresh the chart information from the repository with helm repo update, as sketched below.
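A minimal sketch, assuming the operator was installed from the Strimzi Helm chart under the release name strimzi-kafka-operator in the kafka namespace (the chart version shown is a placeholder):

```bash
# Register the Strimzi chart repository (a no-op if already added) and refresh it.
helm repo add strimzi https://strimzi.io/charts/
helm repo update

# Upgrade the operator release to the chart version you require.
helm upgrade strimzi-kafka-operator strimzi/strimzi-kafka-operator \
  --namespace kafka --version 0.40.0

# The Kafka version itself is set on the Kafka custom resource (spec.kafka.version),
# which the operator then rolls out across the brokers.
```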
Managed services and Kubernetes operators make this easy; manually managing Kafka clusters is comparatively difficult.
I'm working on a web project (built with the .NET Framework) on a remote Windows server, and this project is connected to a database that I manage through SQL Server Management Studio. The same web project, linked to the same database, also exists on multiple other remote Windows servers. When I change a page's code in my project, or add/remove a table or stored procedure in my database, is there a way (or existing software) that will let me deploy those changes to all the other servers (or to a chosen subset if I don't want to deploy the changes to all of them)?
If it were me, I would stand up a Git server somewhere (cloud or local VM), make a branch called something like Prod or Stable, and create a script (PowerShell if the servers are Windows, bash on anything else) that runs as a nightly or hourly job to pull from that branch. Only push to that branch after testing thoroughly. If your code requires compilation, you have the choice of compiling once before committing (in which case you're probably going to commit to a releases branch), or compiling on each endpoint after the pull. I would have the script that does the pull also compile and restart the service (only if the pull brought in something new).
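A minimal sketch of the bash variant of that pull job; the repository path, branch name, build command, and service name are assumptions, and a PowerShell equivalent would follow the same shape on Windows:

```bash
#!/usr/bin/env bash
# Hypothetical nightly/hourly deployment job for one endpoint.
set -euo pipefail

REPO_DIR="/srv/www/myapp"     # working copy on this endpoint (assumed path)
BRANCH="stable"               # the tested branch that gets deployed
SERVICE="myapp"               # service to restart after a successful build

cd "$REPO_DIR"
git fetch origin "$BRANCH"

# Only rebuild and restart when the fetch actually brought in new commits.
if [ "$(git rev-parse HEAD)" != "$(git rev-parse "origin/$BRANCH")" ]; then
  git merge --ff-only "origin/$BRANCH"
  ./build.sh                          # or msbuild / dotnet build, depending on the stack
  systemctl restart "$SERVICE"        # adjust for Windows services (e.g. Restart-Service)
fi
```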
You can probably achieve this by combining two things:
Create a separate publishing profile for each server.
Use Git/VSTS branches to keep the code separate (as suggested by @memtha).
Let's say you have six servers in total and two branches, A and B. You would create six publishing profiles, one per server, and then choose which branch to deploy where; e.g. you can deploy branch B to servers 1, 3, and 4, as sketched below.
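As a rough sketch of how the deployment step could look on the build machine, assuming one Visual Studio publish profile per server has already been created (Server1.pubxml, Server3.pubxml, ... are hypothetical names) and the project publishes via Web Deploy:

```bash
# Publish the branch that is currently checked out to a chosen subset of servers,
# one msbuild invocation per publish profile.
for profile in Server1 Server3 Server4; do
  msbuild MyWebProject.csproj \
    /p:Configuration=Release \
    /p:DeployOnBuild=true \
    /p:PublishProfile="$profile"
done
```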
For the codebase you could use Git Hooks.
https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
And for the database, maybe you could use migrations or something similar. You would need to provide more information about your database, e.g. whether you store it across multiple servers.
If the same web project is connecting to the same database and the database changes, I suspect you would need to update all the web apps to ensure the database changes don't break any of them and that none are left behind.
You should look at using Azure DevOps to build and deploy your apps and update the database.
If you use Entity Framework, you can run the migrations on startup so the application updates the database itself, whether it is deployed manually or automatically through DevOps.
To keep the software updated on multiple servers you could use Git with hooks; a post-receive hook is what you need.
The idea is to use one server as your remote repository and configure a post-receive hook there to update the codebase on that server and on the others.
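A minimal sketch of such a post-receive hook; the repository paths, branch name, and host aliases are assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical post-receive hook installed on the central bare repository.
TARGET_DIR="/var/www/myapp"          # working copy on this server (assumed path)
GIT_DIR="/srv/git/myapp.git"         # the bare repository receiving the push
BRANCH="stable"                      # only deploy pushes to this branch

# Each pushed ref arrives on stdin as "oldrev newrev refname".
while read -r oldrev newrev refname; do
  if [ "$refname" = "refs/heads/$BRANCH" ]; then
    # Update the working copy on this server.
    git --work-tree="$TARGET_DIR" --git-dir="$GIT_DIR" checkout -f "$BRANCH"

    # Push the same branch out to the other servers, each of which runs its own
    # deployment hook or is pulled from by a scheduled job.
    for remote in web02 web03; do   # assumed host aliases
      git --git-dir="$GIT_DIR" push "ssh://$remote/srv/git/myapp.git" "$BRANCH" \
        || echo "push to $remote failed"
    done
  fi
done
```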
I have been playing with Spring Cloud Config. I like the simplicity of the solution and the fact that it uses Git as its default configuration store.
There are two aspects I need to figure out before pushing it as a solution for centralized configuration management.
The aspects are:
High availability
How to gradually roll out configuration changes (to support canary releases)
If you have already implemented this in your data center, or are just playing with it, please share your ideas!
I would also like to hear from the creators how they see the recommended deployment in single and cross-data-center environments.
The Config Server itself is stateless, so you can spin up as many of these as you need and find them via Eureka. Underneath the server itself, the Git implementation you point to needs to be highly available as well. So if you point to GitHub (private or public), then the Git backend is as available as GitHub is. If the config server can't reach Git, it will continue to serve what it has checked out, even if it is stale.
As far as gradual config changes, you could use a different branch and configure the canary to use that branch via spring.cloud.config.label, and then merge the branch. You could also use profiles (e.g. application-<profilename>.properties) and configure the canary to use the specified profile.
I think the branch makes a little more sense, because you wouldn't have to reconfigure the non-canary nodes to use the new profile each time; you would just configure the canary to use the branch.
Either way, the only time apps see config changes (when using the Spring Cloud Config client) is on startup or when you POST to /refresh on each node. You can also POST to /bus/refresh?destination=<servicename> if you use the Spring Cloud Bus to refresh all instances of a service at once.
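For concreteness, a minimal sketch of both pieces, assuming the endpoints described above are exposed at the application root (older Spring Cloud versions; newer ones move them under /actuator) and using placeholder host, jar, branch, and service names:

```bash
# Point a canary instance at the canary branch via the environment-variable form
# of spring.cloud.config.label (branch name "canary" is an assumption):
SPRING_CLOUD_CONFIG_LABEL=canary java -jar myservice.jar

# Refresh a single node so it re-reads its configuration:
curl -X POST http://canary-host:8080/refresh

# With Spring Cloud Bus, refresh every instance of a service at once
# (the destination may need a pattern such as myservice:** depending on the Bus version):
curl -X POST "http://any-instance:8080/bus/refresh?destination=myservice:**"
```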
As many apps do, we have a number of config and properties files for our Java applications. We have gone with the approach of keeping these files out of our codebase (i.e. they are not included in the WAR files for deployment) and in a separate directory instead. However, I would still like to track changes to these files in source control and deploy them using our CI.
I'm looking for strategies on how others have done this. Did you write a script to push the files to the app server(s)? Does the script live on the CI server?
Our SCM is Mercurial, which we have set up on its own server to use as a central repo. Our CI is Hudson (not Jenkins), set up on its own server, and of course our app servers are separate from these as well. All servers run a *nix OS.
Consider using configuration management tools like Puppet or Chef for managing all your application configuration files. Both tools use "manifests" or "recipes", which can be placed under revision control and matched to each server deploying the application.
Another option to consider is developing an install package for your OS; see the following article for more details:
http://www.sonatype.com/people/2011/11/bringing-java-and-linux-together-on-the-way-to-continuous-live-deployment/
The advantage of doing it this way is that the install can be configured to generate the correct configuration for the environment it is deployed onto. A more important benefit is that it is simpler to manage and install.
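As an illustration of the packaging approach, here is a minimal sketch using the fpm packaging tool; the package name, version, and file layout are assumptions:

```bash
# Build an RPM that ships the application archive plus an environment-specific
# properties file, marking the latter as a config file so upgrades preserve local edits.
fpm -s dir -t rpm \
    -n myapp -v 1.2.3 \
    --config-files /etc/myapp/app.properties \
    myapp.war=/opt/myapp/myapp.war \
    config/prod/app.properties=/etc/myapp/app.properties
```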