I wish to use consul strictly as a config source.
I am using spring-cloud-consul-config to get my config.
I am using git2consul to load files into consul and read them.
As per the Spring Cloud documentation, I have added the following to my build.gradle:
compile ("org.springframework.cloud:spring-cloud-starter-consul-config")
and have the following in my application.properties
spring.application.name=test-service
spring.cloud.consul.config.enabled=true
spring.cloud.consul.enabled=true
spring.cloud.consul.config.format=FILES
The problem I am facing is that the expected properties are not being loaded into the ConfigurationProperties beans. On further debugging in the ConsulPropertySourceLocator::locate(Environment environment) method, I see that this.properties still has its format set to the KEY_VALUE enum value.
This led me to ConsulConfigBootstrapConfiguration class, where the ConsulConfigProperties bean is being instantiated using a constructor.
Is this the problem, or do I have something wrong in my setup?
If someone has a working setup of git2consul and Spring Cloud Consul config, could you please point me to it for reference?
These values that you have in application.properties
spring.application.name=test-service
spring.cloud.consul.config.enabled=true
spring.cloud.consul.enabled=true
spring.cloud.consul.config.format=FILES
need to be in bootstrap.properties.
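For reference, a minimal bootstrap.properties might look like the following; the Consul host and port are assumptions for a local agent, so adjust them for your environment:

spring.application.name=test-service
spring.cloud.consul.enabled=true
spring.cloud.consul.config.enabled=true
spring.cloud.consul.config.format=FILES
# assuming a local Consul agent
spring.cloud.consul.host=localhost
spring.cloud.consul.port=8500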
I'm setting up my Quarkus app to run in the cloud, but I couldn't find how to handle encrypted properties in the configuration file.
In my Spring Boot apps I know that I can prefix a property with some tags so it'll be decrypted before usage:
password={cipher}{key:alias}<encrypted-text>
Is there any Quarkus AWS plugin that handles such syntax?
Or any way that I can access the configuration properties before usage so I decrypt them manually?
This is not supported in Quarkus. There are some prototypes to support something similar in the future, but they are not complete yet. Please follow: https://github.com/quarkusio/quarkus/issues/7442
The recommendation is to use Vault: https://quarkus.io/guides/vault
If you want to access configuration before usage, you can implement an interceptor: https://smallrye.io/docs/smallrye-config/main/interceptors/interceptors.html
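If you go the interceptor route, a minimal sketch could look like the one below. This is an assumption-laden example: the {cipher} prefix convention and the decrypt() helper are placeholders you would implement yourself (for instance against AWS KMS), and depending on your Quarkus version the Priority annotation may live in javax.annotation instead of jakarta.annotation.

import io.smallrye.config.ConfigSourceInterceptor;
import io.smallrye.config.ConfigSourceInterceptorContext;
import io.smallrye.config.ConfigValue;
import jakarta.annotation.Priority;

@Priority(3000)
public class DecryptingInterceptor implements ConfigSourceInterceptor {

    @Override
    public ConfigValue getValue(ConfigSourceInterceptorContext context, String name) {
        ConfigValue value = context.proceed(name);
        // Only touch properties that carry the (hypothetical) {cipher} marker
        if (value != null && value.getValue() != null && value.getValue().startsWith("{cipher}")) {
            return value.withValue(decrypt(value.getValue().substring("{cipher}".length())));
        }
        return value;
    }

    private String decrypt(String cipherText) {
        // Placeholder: call your KMS / decryption routine here
        return cipherText;
    }
}

The interceptor is then registered via a META-INF/services/io.smallrye.config.ConfigSourceInterceptor file containing the fully qualified class name.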
I am creating a Spring Boot app with MongoDB and scratching my head a bit over how to set up the production database configuration.
With a SQL-based database, I'd be used to setting up a DataSource bean like this:
@Bean
public DataSource getDataSource() {
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
    dataSourceBuilder.driverClassName("org.h2.Driver");
    dataSourceBuilder.url("jdbc:h2:file:C:/temp/test");
    dataSourceBuilder.username("sa");
    dataSourceBuilder.password("");
    return dataSourceBuilder.build();
}
However:
It doesn't seem to be needed: my local app connects to a spun-up instance of MongoDB without any explicit configuration.
It doesn't seem to be standard with Mongo, according to [this post][1].
I figured I'd give it a go to see if it would automagically configure itself in production, but I'm getting a DataAccessResourceFailureException. For context: I'm on Heroku and added the mLab MongoDB add-on.
I have no problem getting the URL, and I can certainly put it in an environment variable, but I'm just not sure what I need to add to my app to configure it.
Set the values in your application.properties file like below:
spring.data.mongodb.database = ${SPRING_DATA_MONGODB_DATABASE}
spring.data.mongodb.host = ${SPRING_DATA_MONGODB_HOST}
spring.data.mongodb.port = ${SPRING_DATA_MONGODB_PORT}
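Alternatively, since the mLab add-on on Heroku exposes the whole connection string in a single config var (typically MONGODB_URI, but check your Heroku config vars, as that exact name is an assumption here), you can map just that one variable instead:

spring.data.mongodb.uri=${MONGODB_URI}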
You can use the @Value annotation to access a property in whichever Spring bean you're using:
@Value("${userBucket.path}")
private String userBucketPath;
The Externalized Configuration section of the Spring Boot docs explains all the details that you might need.
I have installed and tested Kafka Connect in distributed mode; it works now and connects to the configured sink and reads from the configured source.
That being the case, I moved on to enhancing my installation. The one area I think needs immediate attention is the fact that the only available means to create a connector is through REST calls, which means my information has to be sent over the wire, unprotected.
In order to secure this, Kafka introduced the new ConfigProvider seen here.
This is helpful, as it allows you to set properties on the server and then reference them in the REST call, like so:
{
.
.
"property":"${file:/path/to/file:nameOfThePropertyInFile}"
.
.
}
This works really well, just by adding the properties file on the server and adding the following config in the distributed.properties file:
config.providers=file # multiple comma-separated provider types can be specified here
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
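To make that concrete, here is a minimal sketch; the file path, key name, and the connection.password property are made-up examples for illustration, so substitute whatever your connector actually expects:

# /opt/connect-secrets/sink.properties (hypothetical path)
db.password=s3cr3t

# in the connector config submitted over REST:
"connection.password": "${file:/opt/connect-secrets/sink.properties:db.password}"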
While this solution works, it really does not help ease my concerns regarding security, as the information has gone from being sent over the wire to sitting in a file on the server, in plain sight for everyone to see.
The Kafka team foresaw this issue and allows clients to provide their own configuration providers by implementing the ConfigProvider interface.
I have created my own implementation and packaged it in a jar, adding the suggested service file name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
and added the following entries to the distributed properties file:
config.providers=cust
config.providers.cust.class=com.somename.configproviders.CustConfigProvider
However, I am getting an error from Connect, stating that a class implementing ConfigProvider, with the name:
com.somename.configproviders.CustConfigProvider
could not be found.
I am at a loss now, because the documentation on their site does not explain very well how to configure custom config providers.
Has someone worked on a similar issue and could provide some insight into this? Any help would be appreciated.
I just went through this to set up a custom ConfigProvider recently. The official doc is ambiguous and confusing.
I have created my own implementation and packaged it in a jar, adding the suggested service file name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
You can name the jar file whatever you like, but it needs to be packaged in jar format, with a .jar suffix.
Here is the complete step-by-step. Suppose your custom ConfigProvider's fully qualified name is com.my.CustomConfigProvider.MyClass.
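As a starting point, a bare-bones provider might look roughly like the sketch below. The lookup logic is just a placeholder, and note that in recent Kafka versions the interface itself lives in the org.apache.kafka.common.config.provider package (the same package as FileConfigProvider).

package com.my.CustomConfigProvider;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class MyClass implements ConfigProvider {

    @Override
    public void configure(Map<String, ?> configs) {
        // Read any config.providers.<alias>.param.* settings passed by the worker here
    }

    @Override
    public ConfigData get(String path) {
        return get(path, null);
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        Map<String, String> data = new HashMap<>();
        // Placeholder: fetch the requested keys from your secure store using 'path'
        data.put("password", "123");
        return new ConfigData(data);
    }

    @Override
    public void close() {
        // Release any resources held by the provider
    }
}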
1. Create a file named org.apache.kafka.common.config.ConfigProvider under the META-INF/services/ directory. The file content is the fully qualified class name:
com.my.CustomConfigProvider.MyClass
Include your source code and the META-INF folder above when generating the jar package. If you are using Maven, the service file typically goes under src/main/resources/META-INF/services/.
2. Put your final jar file, say custom-config-provider-1.0.jar, under the Kafka worker plugin folder (the PLUGIN_PATH / plugin.path setting in the Kafka worker config file; the default is /usr/share/java).
3. Upload all the dependency jars to the plugin path as well. Use the META-INF/MANIFEST.MF file inside your jar to configure the Class-Path entry for the dependent jars that your code will use.
4. In the Kafka worker config file, create two additional properties (shown here in environment-variable form; the plain properties-file equivalent is shown after the notes below):
CONNECT_CONFIG_PROVIDERS: 'mycustom', // Alias name of your ConfigProvider
CONNECT_CONFIG_PROVIDERS_MYCUSTOM_CLASS: 'com.my.CustomConfigProvider.MyClass',
5. Restart the workers.
6. Update your connector config file by POSTing it to the Kafka Connect REST API. In the connector config file, you can reference a value inside the ConfigData returned from ConfigProvider.get(path, keys) by using syntax like:
database.password=${mycustom:/path/pass/to/get/method:password}
ConfigData is a HashMap which contains {password: 123}
If you are still seeing a ClassNotFoundException, your Class-Path is probably not set up correctly.
Note:
• If you are using AWS ECS/EC2, you need to set the worker config through environment variables.
• The worker config and the connector config file are different things.
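For completeness, if your worker is configured through a plain properties file rather than environment variables, the equivalent entries (using the same alias) would be:

config.providers=mycustom
config.providers.mycustom.class=com.my.CustomConfigProvider.MyClass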
The Play framework requires (by default) that you configure logging through a logback.xml file. I'd like to build my log appenders through code so I can fetch parameters at runtime (e.g. the Graylog destination for the logs is fetched from the deployment environment rather than baked in statically through an XML file).
This sort of thing is fairly easy to achieve in Java (by overriding logging factories and the like), so I wondered whether the same is possible in Play.
Yes, you can configure Logback programmatically; see: https://akhikhl.wordpress.com/2013/07/11/programmatic-configuration-of-slf4jlogback/
But I wouldn't recommend it. For starters, it's a verbose API that isn't pleasant to work with. Beyond that, it's generally nice for configuration to be declarative (even if it is in XML in this case).
For your use case, Logback's XML does support variables, which can come from system properties or environment variables: https://logback.qos.ch/manual/configuration.html#definingProps
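For example, reusing the GELF appender from your config, the remote host can be read straight from an environment variable in logback.xml, optionally with a default after :- (a sketch, assuming the variable is exported as GRAYLOG_URL):

<appender name="GELF UDP APPENDER" class="me.moocar.logbackgelf.GelfUDPAppender">
    <remoteHost>${GRAYLOG_URL:-localhost}</remoteHost>
</appender>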
However, you probably want a different config across environments (no Graylog locally). I think many projects handle that by specifying the logback XML location as a system property at startup: https://logback.qos.ch/manual/configuration.html#configFileProperty
Alternatively, I suspect Graylog has some way of watching a file to pick up your logs. That's what we do for picking up logs in Splunk on my team. We don't want to make a code change when someone reconfigures Splunk/Graylog.
The solution I used in the end was a Logback context listener to populate the context with parameters pulled from the environment. The listener can be added to logback.xml as follows:
<contextListener class="LoggerStartup"/>
The LoggerStartup can then populate the context, which I achieved through AWS SSM (see the simplified code below).
import ch.qos.logback.classic.spi.LoggerContextListener
import ch.qos.logback.core.spi.{ContextAwareBase, LifeCycle}

class LoggerStartup extends ContextAwareBase with LoggerContextListener with LifeCycle {
  // Other LoggerContextListener/LifeCycle methods are omitted in this simplified sketch
  override def start() = {
    val context = getContext()
    val graylogUrl = ... // Go get value from remote store
    context.putProperty("GRAYLOG_URL", graylogUrl)
  }
}
And then referenced this context variable in the logback file:
<appender name="GELF UDP APPENDER" class="me.moocar.logbackgelf.GelfUDPAppender">
<remoteHost>${GRAYLOG_URL}</remoteHost>
...
</appender>
How can I start a stand-alone Spring Boot JPA application (not via the CLI) with a choice of databases to get data from, e.g. localhost:5432/my_db, 192.168.1.100:5432/our_db, or example.com:5432/their_db?
Mine currently uses the one configured in the application.properties file, which contains:
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://localhost:5432/my_db
spring.datasource.username=postgres
spring.datasource.password=postgres
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.generate-ddl=true
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=create
Thanks in advance
Since you probably need to configure the username and password as well, I recommend creating a separate application-mydatasource.properties file for each data source configuration. You then activate the datasource you want to use by setting the active profile. You can set the active profile either in application.properties (spring.profiles.active) or via a command-line argument:
$ java -jar -Dspring.profiles.active=mydatasource demo-0.0.1-SNAPSHOT.jar
The application-mydatasource.properties will then override any properties in your application.properties. I believe you will also need to set spring.profiles= to the list of profiles available.
See Profile specific properties.
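For example, an application-mydatasource.properties placed next to your main application.properties could simply override the connection details for one of the databases you listed (credentials below are placeholders):

spring.datasource.url=jdbc:postgresql://192.168.1.100:5432/our_db
spring.datasource.username=postgres
spring.datasource.password=postgres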
Another option, besides the @Profile annotation (which you would have to declare for every environment you deploy the application to), is Spring Boot's annotation:
@ConditionalOnProperty(name="propertyName", havingValue="propertyValue")
Then declare a property to decide which database you want to load in each case!
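A rough sketch of that approach, assuming a property such as app.datasource.target that you define yourself (the name is made up for illustration; imports shown for Spring Boot 2.x):

import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    @ConditionalOnProperty(name = "app.datasource.target", havingValue = "our_db")
    public DataSource ourDbDataSource() {
        // Built only when app.datasource.target=our_db is set
        return DataSourceBuilder.create()
                .driverClassName("org.postgresql.Driver")
                .url("jdbc:postgresql://192.168.1.100:5432/our_db")
                .username("postgres")
                .password("postgres")
                .build();
    }
}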
Hope this helps!