Programmatically configure Kafka Binder settings through Beans instead of the application.yml file - spring-cloud

I'm looking for a way to configure the Kafka binder programmatically, especially the brokers, topic info, and a few Kafka consumer properties.
Why? I don't want to rebuild and redeploy the application every time the config changes (due to DR or otherwise).
I looked into the samples and couldn't find a relevant one for this.
I tried creating a bean of org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties, but that class has many other properties to configure, and I don't know how to make org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBinderSupportAutoConfiguration use the new instance instead.
I am looking to create a config similar to the one here, but programmatically. Is this even possible right now? Any help regarding this is much appreciated.

Hmm... you don't need to rebuild and redeploy the application when changing properties. What made you think that?
You can provide application configuration properties as standard command-line options via the -D flag (e.g., -Dspring.cloud.function.definition=blah).
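If you also want the programmatic route the question asks about, here is a minimal sketch; spring.cloud.stream.kafka.binder.brokers is the standard binder property, while the KAFKA_BROKERS variable and the resolveBrokers() helper are illustrative assumptions:
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(Application.class);
        // Defaults set here lose to -D system properties and --command-line
        // arguments, so a config change never requires a rebuild.
        app.setDefaultProperties(Map.<String, Object>of(
                "spring.cloud.stream.kafka.binder.brokers", resolveBrokers()));
        app.run(args);
    }

    // Hypothetical helper: resolve the broker list from the environment at startup
    private static String resolveBrokers() {
        String fromEnv = System.getenv("KAFKA_BROKERS");
        return fromEnv != null ? fromEnv : "localhost:9092";
    }
}
At launch, something like java -Dspring.cloud.stream.kafka.binder.brokers=broker1:9092 -jar app.jar would still override the default.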

Related

How to provide custom ActiveMQ Artemis configuration in kubernetes

I am using the ArtemisCloud operator to deploy ActiveMQ Artemis in a k8s cluster. I wanted to change some broker properties that are not available in the ActiveMQ Artemis custom resources. Specifically, I wanted to change the log level from INFO to WARN. Below are the options I came across:
Create a custom broker init image and have a script modify the logging.properties file.
Add properties to the broker.properties config map (which I am not able to do, because the config map is immutable).
My questions are:
Are my observations above correct?
Are there any environment variables for this configuration?
Is there a better way to change this specific configuration?
Creating a custom broker init image and having a script written to modify the logging.properties file is the only supported way at the moment.
ActiveMQ Artemis will soon move to SLF4J, and after that ArtemisCloud will provide an easy way to change the default logging properties. The idea of using a config map is great; feel free to raise an issue for the ArtemisCloud operator: https://github.com/artemiscloud/activemq-artemis-operator/issues
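For context, the init-image script essentially rewrites the broker instance's logging.properties before start-up. A hedged excerpt of the kind of change involved (keys follow the JBoss LogManager format the pre-SLF4J broker uses; exact entries depend on the image):
# etc/logging.properties (illustrative excerpt)
logger.level=WARN
logger.org.apache.activemq.artemis.core.server.level=WARN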

Programmatically create Artemis cluster on remote server

Is it possible to programmatically create/update a cluster on a remote Artemis server?
I will have lots of Docker instances and would rather configure them on the fly than have to set things in XML files, if possible.
Ideally, on app launch, I'd like to check whether a cluster has been set up and, if not, create one.
This would probably involve getting the current server configuration and updating it with the cluster details.
I see it's possible to create a Configuration.
However, I'm not sure how to get the remote server configuration, if it's at all possible.
import org.apache.activemq.artemis.core.config.ClusterConnectionConfiguration;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

Configuration config = new ConfigurationImpl();
ClusterConnectionConfiguration ccc = new ClusterConnectionConfiguration();
ccc.setAddress("231.7.7.7");
config.addClusterConfiguration(ccc);
// need a way to get and update the current server configuration, e.g.
// something like server.getConfiguration() on the remote ActiveMQServer
Any advice would be appreciated.
If it is possible, is this a good approach to take to configure on the fly?
Thanks
The org.apache.activemq.artemis.core.config.impl.ConfigurationImpl object can be used to programmatically configure the broker. The broker test-suite uses this object to configure broker instances. However, this object is not available in any remote sense.
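For illustration, a minimal sketch of the embedded (same-JVM) case where this object does apply; the cluster address is taken from the question, the acceptor/connector names are assumptions, and a real cluster connection would additionally need static connectors or a discovery group:
import org.apache.activemq.artemis.core.config.ClusterConnectionConfiguration;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class EmbeddedBrokerSketch {
    public static void main(String[] args) throws Exception {
        Configuration config = new ConfigurationImpl()
                .setPersistenceEnabled(false)
                .setSecurityEnabled(false);
        config.addAcceptorConfiguration("in-vm", "vm://0");
        config.addConnectorConfiguration("in-vm", "vm://0");

        ClusterConnectionConfiguration ccc = new ClusterConnectionConfiguration()
                .setName("my-cluster")
                .setConnectorName("in-vm")
                .setAddress("231.7.7.7");
        config.addClusterConfiguration(ccc);

        // The configuration is only consulted by the broker embedded in *this* JVM,
        // never by a remote server.
        EmbeddedActiveMQ broker = new EmbeddedActiveMQ();
        broker.setConfiguration(config);
        broker.start();
    }
}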
Once the broker is started there is a rich management API you can use to add things like security settings, address settings, diverts, bridges, addresses, queues, etc. However, the changes made by most (although not all) of these operations are volatile, which means many of them would need to be performed every time the broker starts. Furthermore, there are no management methods to add cluster connections.
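As a hedged sketch of driving that management API remotely over JMX (the service URL and address name are assumptions and depend on how the broker exposes JMX; ActiveMQServerControl and ObjectNameBuilder come from the Artemis management packages):
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class ManagementSketch {
    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint for the remote broker
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ActiveMQServerControl control = JMX.newMBeanProxy(mbsc,
                    ObjectNameBuilder.DEFAULT.getActiveMQServerObjectName(),
                    ActiveMQServerControl.class);
            // Runtime-only (volatile) change; note there is no equivalent
            // method for adding cluster connections.
            control.createAddress("app.events", "ANYCAST");
            System.out.println("Broker version: " + control.getVersion());
        }
    }
}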
You might consider using a tool like Ansible to manage the configuration or even roll your own solution with a templating engine like FreeMarker to customize the XML and then distribute it to your Docker instances using some other technology.

Populating a config file when a pod starts

I am trying to run a legacy application inside Kubernetes. The application consists of one or more controllers and one or more workers. The workers and controllers can be scaled independently. The controllers take a configuration file as a command line option, and the configuration looks similar to the following:
instanceId=hostname_of_machine
Memory=XXX
....
I need to be able to populate the instanceId field with the name of the machine, and this needs to be stable over time. What are the general guidelines for implementing something like this? The application doesn't support environment variables, so my first thought was to record the StatefulSet's stable network ID in an environment variable and rewrite the configuration file with an init container. Is there a cleaner way to approach this? I haven't found a whole lot of solutions when I searched the net.
Nope, that's the way to do it (use an initContainer to update the config file).
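A hedged sketch of that initContainer pattern (image, volume names, and paths are illustrative); in a StatefulSet the pod's hostname is its stable network ID, so the init container can stamp it into the file before the main container starts:
initContainers:
- name: render-config
  image: busybox:1.36
  command:
  - sh
  - -c
  - sed "s/^instanceId=.*/instanceId=$(hostname)/" /template/app.conf > /rendered/app.conf
  volumeMounts:
  - name: config-template      # ConfigMap holding the template file
    mountPath: /template
  - name: rendered-config      # emptyDir shared with the main container
    mountPath: /rendered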

What is upconfig and downconfig in zookeeper?

I am a noob with Solr and ZooKeeper and am trying to learn by myself. I understand that ZooKeeper is a file-structure-like service that manages the Solr cluster and prevents race conditions using locks. I don't understand what upconfig and downconfig are and when we use them. It would be of great help if someone could give me a clear picture of it. Thanks in advance!
A better and more general description of ZooKeeper is an application that provides centralised configuration for distributed systems. So in SolrCloud, you can have multiple Solr instances across multiple servers acting together as a single cloud. However, if you want to update a collection's configuration, you don't want to have to go to each server and update them all individually. You want only one version of the config, which is then used by any collection that needs it. Hence the conf commands.
upconfig uploads a configuration to ZooKeeper, which then ensures that all collections using that configuration (throughout the Cloud, on all the servers) have that specific config. So you only need to upload it once, on one server.
downconfig lets you fetch a configuration from ZooKeeper.
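For example, with the bin/solr control script that ships with Solr (the ZooKeeper host, config name, and path are illustrative):
# upload a local config directory to ZooKeeper under the name "myconfig"
bin/solr zk upconfig -z zk1:2181 -n myconfig -d /path/to/conf
# fetch that config back from ZooKeeper for local editing
bin/solr zk downconfig -z zk1:2181 -n myconfig -d /path/to/conf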

Running a spring batch with partitions in cloud foundry

I have created a Spring Batch application (with partitioning), taking https://github.com/mminella/S3JDBC as an example. My app reads some files from an object store, does some processing, and writes back to the object store. With local partitioning, the app works fine on my machine.
I changed the Maven setup to run in Cloud Foundry, made the changes for the deployer partition handler and the step execution listener, and deployed the app on PCF.
But while trying to push and run the app on PCF, I am getting an issue: Failing URI /v2/info. I tried to log the error and found that there is one call, e.g. https://mypcf.com:443/v2/info, and after that it gives the error. I can't provide full logs because of some restrictions. So I want to know:
To deploy a Spring Batch app on PCF, is there any extra configuration needed besides the Maven dependency and the code changes for the DeployerPartitionHandler, StepExecutionListener, and #cloudtask? The dependency in question:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-deployer-cloudfoundry</artifactId>
    <version>1.1.0.M1</version>
</dependency>
Is it mandatory to have a separate database service like MySQL for the partitioned job? Can't I use H2 (the default one, if I don't configure anything)?
Do I need to do any configuration in PCF to support running multiple partitions?
As I am running remote partitioning, can I run the app locally in STS or IntelliJ (not on PCF Dev) so that it runs my app in PCF (remote) and launches the workers? (Sorry for the stupid question, I am new to PCF.)
Thanks for checking out my example. To answer your questions:
You should be able to use the latest deployer release (instead of that rather old version).
Yes. Partitioned steps all need to share the same job repository data store, so an in-memory database like H2 will not work for that use case.
Besides defining your datasource, that's all that's required to run on PCF. That being said, there are other things that need to be configured, but you can use other mechanisms to do so (Spring Cloud Config Server, application.properties/yml, etc.).
Yes, you should be able to run the master locally and have it deploy the workers onto PCF if you're using the CF deployer.
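For reference, a hedged sketch of the piece that launches workers via the CF deployer (bean names, the jar location, worker count, and step name are illustrative; the constructor shape follows the Spring Cloud Task partitioning examples and may vary by version):
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.cloud.deployer.spi.task.TaskLauncher;
import org.springframework.cloud.task.batch.partition.DeployerPartitionHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

@Configuration
public class PartitionConfig {

    @Bean
    public DeployerPartitionHandler partitionHandler(TaskLauncher taskLauncher,
                                                     JobExplorer jobExplorer,
                                                     ResourceLoader resourceLoader) {
        // The artifact the CF deployer pushes for each worker partition
        Resource workerResource =
                resourceLoader.getResource("file:./target/partitioned-job.jar");
        DeployerPartitionHandler handler = new DeployerPartitionHandler(
                taskLauncher, jobExplorer, workerResource, "workerStep");
        handler.setMaxWorkers(3);                       // cap concurrent workers
        handler.setApplicationName("partitioned-job");  // name used by the deployer
        return handler;
    }
}
With this in place, the master's partitioned step references the handler, so launching the job locally (question 4) still lets it push the workers to PCF through the CF deployer.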