I have a Kubernetes ConfigMap in a Spring Boot project, and my application should dynamically pick up values whenever the ConfigMap changes. For that I have used Spring Cloud Kubernetes config as below in the bootstrap.yml file:
spring:
  profiles: dev, preprod
  cloud:
    kubernetes:
      reload:
        enabled: true
      config:
        enabled: true
        sources:
          - namespace: ${kubernetesnamespace}
            name: ${configmapname}
After deploying the application, if I change a ConfigMap value I can see the new value in the application without a restart, which is expected. But if I change the ConfigMap value an hour after deployment, the new value is not reflected in the application, whereas the same change made 5 minutes after deployment works.
So what could be the reason?
My first suspect would be the application itself. It probably "watches" the ConfigMap (opens a long-lived connection, subscribing to changes on that object). Eventually those connections may close, which is normal/expected (container restarts, etcd leader changes, SDN issues, ...).
I would make sure the application properly acknowledges those closed connections and re-connects to the API.
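If re-connecting watches turns out to be unreliable in your environment, one mitigation worth trying (assuming a Spring Cloud Kubernetes version that supports it) is switching the reload strategy from event watching to periodic polling in bootstrap.yml; the 15000 ms period below is just an example value:

spring:
  cloud:
    kubernetes:
      reload:
        enabled: true
        # poll the ConfigMap on a fixed interval instead of relying on a long-lived watch
        mode: polling
        period: 15000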
For reload to work, the classes that are candidates for reloading need to be annotated with @RefreshScope.
Please see documentation reference:
https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#propertysource-reload
Notice that the default reload strategy is "refresh", and the whole context is not reloaded unless you specify otherwise. I hope that helps.
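For illustration, a minimal sketch of a refreshable bean; the property name foo.bar and the class name are hypothetical:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@Component
@RefreshScope // bean is re-created on refresh, so the injected value is re-resolved
public class ReloadedProps {

    // re-evaluated against the ConfigMap-backed property source after a reload
    @Value("${foo.bar}")
    private String prop;

    public String getProp() {
        return prop;
    }
}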
Related
We have a Go service that goes to Redis to fetch data on each request, and we want to read that data from the Redis slave nodes as well. We went through the Redis documentation and the go-redis library and found that, in order to read from a Redis slave, we should issue the READONLY command on the Redis side. We are using ClusterOptions in the go-redis library to set up a read-only connection to Redis.
redis.NewClusterClient(&redis.ClusterOptions{
    Addrs:    []string{redisAddress},
    Password: "",
    ReadOnly: true,
})
After doing all this, we can see (using monitoring) that read requests are still handled by the master nodes only. I assume this is not expected and that I am missing something or doing it wrong. Any pointers to solve this problem would be appreciated.
Some more context:
redisAddress in the code above is a single Kubernetes cluster IP. Redis is deployed using a Kubernetes operator, with 3 masters and 1 replica per master.
I've solved it by setting the option RouteRandomly: true.
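For illustration, a sketch of the adjusted client setup under the question's assumptions (single cluster IP; go-redis v8 import path shown, though the ClusterOptions fields are the same in earlier versions):

package main

import "github.com/go-redis/redis/v8"

func newClusterClient(redisAddress string) *redis.ClusterClient {
    return redis.NewClusterClient(&redis.ClusterOptions{
        Addrs:         []string{redisAddress},
        Password:      "",
        ReadOnly:      true, // allow reads from replicas (issues READONLY on replica connections)
        RouteRandomly: true, // route read-only commands to a random node, master or replica
    })
}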
I'm trying to deploy this Docker GCE project with a deploy.yaml, but every time I update my git repository the server goes down because 1) the original instance gets deleted, and 2) the new instance hasn't finished starting up yet (or at least the web app on it hasn't).
What command should I use, or how should I change this, so that I have a canary deployment that destroys the old instance only once a new one is up (I only have one instance running at a time)? I have no health checks on the instance group, only on the load balancer.
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instance-groups', 'managed', 'rolling-action', 'replace', 'a-group', '--max-surge', '1']
Thanks for the help!
Like John said, you can set the max-unavailable and max-surge variables to alter the behavior of your deployment during updates.
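For illustration, a sketch of the Cloud Build step with both flags set so the old instance is only taken down after the replacement has been created (a working health check is still what tells the group the new instance is actually serving):

- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instance-groups', 'managed', 'rolling-action', 'replace', 'a-group',
         '--max-surge', '1', '--max-unavailable', '0']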
I am building a CI/CD pipeline to release SF Stateless Application packages into clusters using parameters for everything. This is to ensure environments (DEV/UAT/PROD) can be scoped with different settings.
For example, in a DEV cluster an application package may have an instance count of 3 (in a 10-node cluster).
I have noticed that if an application is already running in the cluster with an instance count of, for example, 3, and I change the deployment parameter to anything else (e.g. 5), the application package will upload and register the type, but the rolling upgrade of the running application will fail.
The same happens the other way around, e.g. if the running app is at -1 and you want to reduce the count on the next rolling deployment.
Have I missed a setting or config somewhere, or is this how it is supposed to be? At present it doesn't lend itself to being something that can easily be scaled without downtime.
At its simplest, we just want to be able to change instance counts on application updates, as we have an infrastructure-as-code approach to changes, builds and deployments for full tracking ability.
Thanks in advance
This is a common error when using default services.
This has already been answered multiple times, in these places:
Default service descriptions can not be modified as part of upgrade; set EnableDefaultServicesUpgrade to true
https://blogs.msdn.microsoft.com/maheshk/2017/05/24/azure-service-fabric-error-to-allow-it-set-enabledefaultservicesupgrade-to-true/
https://github.com/Microsoft/service-fabric/issues/253#issuecomment-442074878
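For illustration, a sketch of the corresponding cluster setting as a fabricSettings fragment of the cluster configuration, with the section and parameter names as described in the references above:

"fabricSettings": [
  {
    "name": "ClusterManager",
    "parameters": [
      {
        "name": "EnableDefaultServicesUpgrade",
        "value": "true"
      }
    ]
  }
]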
I have a question about placeholder resolution priority when using consul-config and vault-config.
I created a simple app using this information.
My dependencies are:
dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-consul-config')
    compile('org.springframework.cloud:spring-cloud-starter-vault-config')
    compile('org.springframework.boot:spring-boot-starter-webflux')
    compile('org.springframework.cloud:spring-cloud-starter')
    testCompile('org.springframework.boot:spring-boot-starter-test')
}
Note that I'm not using service discovery.
As a next step, I created the property foo.prop = consul (in the Consul storage) and foo.prop = vault (in Vault).
When using:
@Value("${foo.prop}")
private String prop;
I'm getting vault as the output, but when I delete foo.prop from Vault and restart the app, I get consul.
I did this a few times in different combinations, and it seems the Vault config has a higher priority than the Consul config.
My question is where I can find information about the resolution strategy (imagine we added zookeeper-config as a third source). The spring-core documentation seems to keep quiet about this.
From what I understood by debugging the Spring source code, Vault currently has priority.
My investigation results:
PropertySourceBootstrapConfiguration.java is responsible for initializing all property sources in the bootstrap phase. Before locating properties it sorts all propertySourceLocators by order:
AnnotationAwareOrderComparator.sort(this.propertySourceLocators);
Vault always "wins" because the instance of LeasingVaultPropertySourceLocator (at least that one was created during my debugging) implements the PriorityOrdered interface, while the instance of ConsulPropertySourceLocator has the @Order(0) annotation. According to OrderComparator, an instance of PriorityOrdered is 'more important'.
In case you have another PriorityOrdered property source (e.g. a custom one), you can influence this order by setting spring.cloud.vault.config.order for Vault.
For now, without customization, I don't know how to change the priority between Vault and Consul.
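For illustration, a sketch of how that ordering property could be set in bootstrap.yml; the value -20 is an arbitrary example and only matters relative to the order values of the other locators:

spring:
  cloud:
    vault:
      config:
        # lower values are sorted first by AnnotationAwareOrderComparator
        order: -20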
Every time I make a new deployment of my app, my nodes start reporting NodeHasDiskPressure. After around 10 minutes or so the node goes back to a normal state. I found this SO answer regarding setting thresholds: DiskPressure crashing the node
... but I am not sure how to actually set these thresholds on Google Kubernetes Engine.
The kubelet options you mentioned can be added to your cluster's instance template.
Make a copy of the instance template that has been used for your cluster (instance group). Before saving the copy you can make some changes to it; you can add those flags under: Instance template --> Custom metadata --> kube-env.
The flags are added in this way:
KUBELET_TEST_ARGS: --image-gc-high-threshold=[your value]
KUBELET_TEST_ARGS: --low-diskspace-threshold-mb=[your value]
KUBELET_TEST_ARGS: --image-gc-low-threshold=[your value]
Once you have set your values, save the instance template, then edit the instance group of your cluster by changing the instance template from the default to your custom one. Once done, hit "rolling restart/replace" on the instance group's main page in your dashboard. This will restart the instances of your cluster with the new values.
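For illustration, a filled-in kube-env entry; the threshold values are hypothetical examples rather than recommendations, and the flags are combined on one line on the assumption that a repeated KUBELET_TEST_ARGS key would otherwise be overwritten when the metadata is parsed:

# hypothetical values: start image GC above 85% disk usage, stop at 75%, flag low disk below 512 MB
KUBELET_TEST_ARGS: --image-gc-high-threshold=85 --image-gc-low-threshold=75 --low-diskspace-threshold-mb=512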