How to add functionality to Spring Cloud Bootstrap - spring-cloud

I want to add some lookup before the Spring context is loaded, ideally in the bootstrap phase of Spring Cloud (when it looks up the Configuration Server, Cloud connectors, etc.). How can I make my code execute in that phase?
What I want to do is query Vault to get all my database secrets and API keys and set them as properties. I know I can encrypt values with Spring Cloud Config, but I like the strongbox that Vault provides. (The Vault integration itself I can handle.)

As I saw in the Spring Cloud Config code, the bootstrap configuration is auto-configured by registering it under the org.springframework.cloud.bootstrap.BootstrapConfiguration key in the resources/META-INF/spring.factories file, the same mechanism you can use to register new auto-configuration classes for Spring Boot; for reference, you can refer to the file in the project here. This makes your configuration start and register before the "normal" application context.
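As a minimal sketch of that mechanism, assuming a hypothetical com.example.bootstrap.VaultBootstrapConfiguration class and a VaultClient helper standing in for your own Vault lookup, you would first register the class in resources/META-INF/spring.factories:

```
org.springframework.cloud.bootstrap.BootstrapConfiguration=\
com.example.bootstrap.VaultBootstrapConfiguration
```

The configuration can then expose a PropertySourceLocator, which Spring Cloud calls during the bootstrap phase so the returned property source is available before the main application context starts:

```java
package com.example.bootstrap;

import java.util.Map;

import org.springframework.cloud.bootstrap.config.PropertySourceLocator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.MapPropertySource;

@Configuration
public class VaultBootstrapConfiguration {

    @Bean
    public PropertySourceLocator vaultPropertySourceLocator() {
        // Invoked in the bootstrap phase, before the "normal" context loads.
        return environment -> {
            // VaultClient.fetchSecrets() is a hypothetical placeholder for
            // your own Vault integration; it should return secret names
            // mapped to their values.
            Map<String, Object> secrets = VaultClient.fetchSecrets();
            return new MapPropertySource("vault", secrets);
        };
    }
}
```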

Related

Is it possible to make auto-refresh properties for Spring Cloud clients in a multi-pod environment

Is it possible to make auto-refresh properties for Spring Cloud clients in a multi-pod environment (Google Kubernetes Engine)?
I found several workarounds:
Using Spring Cloud Bus (too heavyweight a solution).
Triggering a refresh in code by publishing a RefreshEvent from a @Scheduled method (not recommended by Spring); see the sketch after this list.
Creating a new endpoint in the Config Server that performs a refresh on all Spring Cloud clients.
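For the second workaround, a minimal sketch of the scheduled refresh might look like the following (the class name is illustrative, and @EnableScheduling must be present on some configuration class):

```java
import org.springframework.cloud.endpoint.event.RefreshEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ScheduledRefresher {

    private final ApplicationEventPublisher publisher;

    public ScheduledRefresher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    // Publish a RefreshEvent every 60 seconds; Spring Cloud's
    // RefreshEventListener reacts to it and re-fetches the configuration,
    // much like hitting the refresh actuator endpoint.
    @Scheduled(fixedDelay = 60_000)
    public void refresh() {
        publisher.publishEvent(new RefreshEvent(this, null, "Scheduled refresh"));
    }
}
```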

Handling Spring Cloud Config Remote Refresh Event Failures

I've built a Spring Cloud Config Server integrated with Spring Cloud Bus via Kafka for refreshing properties dynamically. Another application, a Spring Cloud Gateway, consumes those properties and refreshes them dynamically.
One of the things I'm struggling with is what happens if I (unintentionally) push a bad property in a Spring Cloud Gateway route, for example spring.cloud.gateway.routes[0].predicates[0]=Path=/demo/{demoId\:[0-9]+}, where the backslash is the mistake.
Routing then breaks in Spring Cloud Gateway with an error along the lines of unable to initialize bean GatewayProperties, and things start behaving weirdly.
Two questions:
Is there a way to ignore bad config refresh events, perhaps by skipping any refresh event that carries bad config?
If that's possible, is there a way to validate those properties even before they are applied to the Spring context?

How can Spring Cloud Dataflow Server use new tables (with a custom prefix) created for Spring Batch and Spring Cloud Task?

I have created the Spring Cloud Task tables (TASK_EXECUTION, TASK_TASK_BATCH) with the prefix MYTASK_, and the Spring Batch tables with the prefix MYBATCH_, in an Oracle database.
The default tables are also present in the same schema; they were created automatically or by a teammate.
I have bound my Oracle database service to the SCDF server deployed on PCF.
How can I tell my Spring Cloud Dataflow server to use the tables created with my prefix to render data on the dashboard?
Currently, the SCDF dashboard uses the tables with the default prefix to render data, and that works fine, but I want it to use my tables instead.
I am using Dataflow Server version 1.7.3, deployed on PCF using a manifest.yml.
There's an open story to add this enhancement via spring-cloud/spring-cloud-dataflow#2048.
Feel free to contribute, or share your use-case details in the issue.
Currently, spring-cloud-dataflow and spring-cloud-skipper use Flyway to manage their database schemas, and it is not possible to prefix table names. Adding support for this would introduce too much complexity, and I'm not even sure it would be possible.
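For context, the task and batch applications themselves can be pointed at prefixed tables through Boot properties; it is only the SCDF server and dashboard that have no equivalent setting. A sketch, assuming the property names from the Boot generation that SCDF 1.7.x targets:

```
# Client-side (task application) settings; the SCDF server has no
# counterpart property for reading from prefixed tables.
spring.cloud.task.table-prefix=MYTASK_
spring.batch.table-prefix=MYBATCH_
```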

Spring Cloud Dataflow - how to pass credentials to task

I use Spring Cloud Dataflow, deployed to Pivotal Cloud Foundry, to run Spring Batch jobs as Spring Cloud Tasks, and the jobs require AWS credentials to access an S3 bucket.
I've tried passing the AWS credentials as task properties, but the credentials then show up in the task's log files as arguments or properties. (https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#spring-cloud-dataflow-global-properties)
For now, I am manually setting the credentials as environment variables in PCF after each deployment, but I'm trying to automate this. The tasks aren't deployed until they are actually launched, so on each deployment I have to launch the task, wait for it to fail due to missing credentials, and then set the credentials as environment variables with the cf CLI. How do I provide these credentials without them showing up in the PCF app's logs?
I've also explored using Vault and Spring Cloud Config, but again, I would need to pass credentials to the task so it could access Spring Cloud Config.
Thanks!
Here's a Task/Batch-Job example.
This app uses spring-cloud-starter-aws, and that starter already provides the Boot auto-configuration and the ability to override the AWS credentials as Boot properties.
You'd override the properties while launching from SCDF like:

```
task launch --name S3LoaderJob --arguments "--cloud.aws.credentials.accessKey= --cloud.aws.credentials.secretKey= --cloud.aws.region.static= --cloud.aws.region.auto=false"
```
You can also control the log level of the Task so that it doesn't log these values in plain text.
Secure credentials for tasks should be configured either via environment variables in your task definition or by using something like Spring Cloud Config Server to provide them (and store them encrypted at rest). Spring Cloud Task stores all command-line arguments in the database in clear text, which is why they should not be passed that way.
After considering the approaches in the other answers, I continued testing and researching and concluded that the best approach is to use a Cloud Foundry "user-provided service" to supply the AWS credentials to the task.
https://docs.cloudfoundry.org/devguide/services/user-provided.html
Spring Boot auto-processes the VCAP_SERVICES environment variable included in each app's container.
http://engineering.pivotal.io/post/spring-boot-injecting-credentials/
I then used property placeholders in application-cloud.properties to map the processed VCAP_SERVICES properties onto the spring-cloud-aws properties:
```
cloud.aws.credentials.accessKey=${vcap.services.aws-s3.credentials.aws_access_key_id}
cloud.aws.credentials.secretKey=${vcap.services.aws-s3.credentials.aws_secret_access_key}
```
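For completeness, here's a sketch of creating the matching user-provided service with the cf CLI, assuming the service instance name aws-s3 used in the placeholders above and dummy credential values:

```
cf create-user-provided-service aws-s3 -p '{"aws_access_key_id":"<dummy-key-id>","aws_secret_access_key":"<dummy-secret>"}'
```

Once the service is bound to the task application, its credentials appear under VCAP_SERVICES and are picked up by the placeholders shown above.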

Spring Cloud Configuration Server Through Sidecar

We are using Spring Cloud Sidecar with a Node.js application. It would be extremely useful if we could serve configuration from the Spring config server and make that configuration available to the Node application.
I would like the sidecar to resolve any property placeholders on behalf of the Node application.
The sidecar already hits the config server, and I know that the Environment in the sidecar WILL resolve all the property placeholders. My problem is: how do I efficiently expose all those properties to the Node application? I could create a simple REST endpoint that accepts a key and returns environment.getProperty(key), but that would be extremely inefficient.
I am thinking that I could iterate over all property sources (I know that not all property sources can be enumerated), collect a unique set of the names, and then call environment.getProperty() for each name, along the lines of the sketch below.
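A minimal sketch of that enumeration approach, assuming a hypothetical /properties endpoint (the class and path names are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.EnumerablePropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PropertiesEndpoint {

    private final ConfigurableEnvironment environment;

    public PropertiesEndpoint(ConfigurableEnvironment environment) {
        this.environment = environment;
    }

    @GetMapping("/properties")
    public Map<String, String> properties() {
        // Only EnumerablePropertySource implementations expose their names;
        // non-enumerable sources are skipped, as noted in the question.
        Set<String> names = new LinkedHashSet<>();
        for (PropertySource<?> source : environment.getPropertySources()) {
            if (source instanceof EnumerablePropertySource) {
                for (String name : ((EnumerablePropertySource<?>) source).getPropertyNames()) {
                    names.add(name);
                }
            }
        }
        // Resolve each name through the Environment so that placeholders
        // like ${other.property} are expanded before being handed to Node.
        Map<String, String> resolved = new LinkedHashMap<>();
        for (String name : names) {
            try {
                resolved.put(name, environment.getProperty(name));
            } catch (IllegalArgumentException e) {
                // Skip values whose placeholders cannot be resolved.
            }
        }
        return resolved;
    }
}
```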
But is there a better way?
I have to imagine this is functionality that others have needed when using Spring Cloud in a polyglot environment?