How to properly create Spring Cloud Task with custom parameters? - spring-cloud

According to the samples here (specifically, the timestamp task), I have implemented a small task class:
@SpringBootApplication
@EnableTask
@EnableConfigurationProperties({ RestProcessorTaskProperties.class })
public class RestProcessorTaskApplication {

    public static void main(String[] args) {
        SpringApplication.run(RestProcessorTaskApplication.class, args);
    }

    @Autowired
    private RestProcessorTaskProperties config;

    // some fields and beans

    @Bean
    public CommandLineRunner run(RestTemplate restTemplate) {
        return args -> {
            // doing some stuff
        };
    }
}
and then I created a Properties class (in the same package):
@ConfigurationProperties("RestProcessor")
public class RestProcessorTaskProperties {

    private String host = "http://myhost:port";

    public String getHost() {
        return host;
    }

    public void setHost(String host) {
        this.host = host;
    }
}
But after I registered the task on my local Spring Cloud Data Flow server, I see numerous parameters that, I suppose, were added automatically. I mean parameters like:
abandon-when-percentage-full java.lang.Integer
abandoned-usage-tracking java.lang.Boolean
acceptors java.lang.Integer
access-to-underlying-connection-allowed java.lang.Boolean
and others...
Is it possible to hide (or remove) them somehow, so that when launching the task I can configure only the parameters I added myself (the single host property in my example above)?

By default, Spring Cloud Data Flow will show you all the available properties of a Boot application. However, you can create a whitelist of the properties that you wish to show.
Here is a link to the Spring Cloud Data Flow reference doc that discusses how to do this: http://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#spring-cloud-dataflow-stream-app-whitelisting.
And here is a link to the timestamp starter app that has an example of this: https://github.com/spring-cloud/spring-cloud-task-app-starters/tree/master/spring-cloud-starter-task-timestamp
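For reference, a minimal sketch of such a whitelist file, assuming it lives under src/main/resources/META-INF/ in your task project and that the fully qualified name of your properties class is com.example.RestProcessorTaskProperties (the package is an assumption; the file name and key follow the convention used by the task app starters):

# META-INF/spring-configuration-metadata-whitelist.properties
# Only properties declared in the listed @ConfigurationProperties classes
# will be shown by Spring Cloud Data Flow when launching the task.
configuration-properties.classes=com.example.RestProcessorTaskProperties

After rebuilding and re-registering the app, the launch screen should then show only the host property defined above.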

Related

SpringBoot: Create document in mongodb on startup if not exists

I have a small service on SpringBoot and Mongodb as a DB.
I need to be able to create a small collection with one document (very basic: id, name, status) on startup. An analog of SQL's CREATE TABLE IF NOT EXISTS, but for Mongo. How do I do that?
I tried to initialize values in the document attributes, but it didn't help.
Currently, the collection and the document appear only if I use the API to add them.
You may want to use something like ApplicationRunner or CommandLineRunner which can be defined as a bean.
Example:
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Bean
    public CommandLineRunner initialize(MyRepository myRepository) {
        return args -> {
            // Insert elements into myRepository
        };
    }
}
Both CommandLineRunner and ApplicationRunner are functional interfaces, so we can use a lambda for them. Spring Boot will execute them at the startup of the application.
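As a minimal sketch of the "create if not exists" logic inside such a runner (MyDocument, its fields, and the id value are assumptions based on the description above; existsById comes from Spring Data's CrudRepository):

@Bean
public CommandLineRunner initialize(MyRepository myRepository) {
    return args -> {
        // Rough analog of CREATE TABLE IF NOT EXISTS:
        // only insert the seed document if it is not already present.
        if (!myRepository.existsById("initial-id")) {   // hypothetical id
            MyDocument doc = new MyDocument();           // hypothetical entity
            doc.setId("initial-id");
            doc.setName("initial");
            doc.setStatus("NEW");
            myRepository.save(doc);
        }
    };
}

Saving through the repository also creates the collection on the first insert, so no explicit collection creation is needed.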
You can also leverage Spring's internal event mechanism.
When your application is ready, Spring publishes an ApplicationReadyEvent.
You can listen to this event and initialize your collection:
@Component
public class DataInit implements ApplicationListener<ApplicationReadyEvent> {

    private final MyRepository myRepository;

    public DataInit(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        // init data
    }
}

How to know the host with ribbon?

I have an application with Eureka, Ribbon and Feign. I have a Feign RequestInterceptor, but the problem is that I need to know which host the call is being made to. So far, with my current code I can only get the path from the RequestTemplate, but not the host.
Any idea?
I'm new to this as well, but I believe I've just learned something relevant to your question.
In the service you're creating, each instance can be given some unique identifier tied to a property included in its instance profile. Some .properties examples below:
application.properties (shared properties between instances)
spring.application.name=its-a-service
eureka.client.service-url.defaultZone=http://localhost:8761/eureka
application-instance1.properties
server.port=5678
instance.uid=some-unique-property
application-instance2.properties
server.port=8765
instance.uid=other-unique-property
This service, as the extremely contrived example below shows, can expose its @Value-annotated attributes to be consumed by the Ribbon app:
@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class ItsAServiceApplication {

    @Value("${server.port}")
    private int port;

    @Value("${instance.uid}")
    private String instanceIdentifier;

    public static void main(String[] args) {
        SpringApplication.run(ItsAServiceApplication.class, args);
    }

    @RequestMapping
    public String identifyMe() {
        return "Instance: " + instanceIdentifier + ". Running on port: " + port + ".";
    }
}
And just to complete the example, the Ribbon app that might consume these properties could look like this:
@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class ServiceIdentifierAppApplication {

    @Autowired
    private RestTemplate restTemplate;

    public static void main(String[] args) {
        SpringApplication.run(ServiceIdentifierAppApplication.class, args);
    }

    @GetMapping
    public String identifyMe() {
        return restTemplate.getForEntity("http://its-a-service", String.class).getBody();
    }

    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
Results (screenshot): reloading shows the responses returned by the different service instances through the load-balanced RestTemplate.
As I said earlier, I'm pretty new to this and have just been learning myself. Hopefully this has given you an idea for how you might send these properties! I would imagine creating a dynamic property identifier through Spring Cloud Config would be ideal here.

Implementing Projection with Specification in Spring Data JPA

I am trying to implement the projection with specification in Spring Data JPA via this implementation:
https://github.com/pramoth/specification-with-projection
Related classes are as follows:
Spec:
public class TopicSpec {

    public static Specification<Topic> idEq(String id) {
        return (root, query, cb) -> cb.equal(root.get(Topic_.id), id);
    }
}
Repository
@Repository
public interface TopicRepository extends JpaRepository<Topic, String>, JpaSpecificationExecutorWithProjection<Topic> {

    public static interface TopicSimple {
        String getId();
        String getName();
    }

    List<TopicSimple> findById(String id);
}
Test
@Test
public void specificationWithProjection() {
    Specification<Topic> where = Specifications.where(TopicSpec.idEq("Bir"));
    List<Topic> all = topicRepository.findAll(where);
    Assertions.assertThat(all).isNotEmpty();
}
I get this response from the GET method (screenshot omitted). However, the tests fail. Besides, when I pull the GitHub project of pramoth, I can run its tests successfully. Does anyone have an opinion about this issue?
The full project can be found here:
https://github.com/dengizik/projectionDemo
I asked the same question to the developer of the project, Pramoth Suwanpech, who was kind enough to check my code and give an answer. My test class should have initialized the test object like this:
@Before
public void init() {
    Topic topic = new Topic();
    topic.setId("İki");
    topic.setName("Hello");
    topicRepository.save(topic);
}
With this setting the tests passed.
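For completeness, a test built on that setup could then exercise the projection through the findById method already declared on TopicRepository above (a sketch; the assertion values simply mirror the data saved in init()):

@Test
public void projectionWithSavedTopic() {
    // List<TopicSimple> findById(String id) is declared on TopicRepository
    List<TopicRepository.TopicSimple> topics = topicRepository.findById("İki");
    Assertions.assertThat(topics).isNotEmpty();
    Assertions.assertThat(topics.get(0).getName()).isEqualTo("Hello");
}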

#EnableMongoAuditing for MongoDB on Cloud Foundry / mongolab

My setup works on my local but not when I deploy it to CloudFoundry/mongolab.
The config is very similar to the docs.
My local spring config
@Configuration
@Profile("dev")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringMongoConfiguration extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "myDb";
    }

    @Override
    public Mongo mongo() throws Exception {
        return new MongoClient("localhost");
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
This is the cloud foundry setup
@Configuration
@Profile("cloud")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringCloudMongoDBConfiguration extends AbstractMongoConfiguration {

    private Cloud getCloud() {
        CloudFactory cloudFactory = new CloudFactory();
        return cloudFactory.getCloud();
    }

    @Bean
    public MongoDbFactory mongoDbFactory() {
        Cloud cloud = getCloud();
        MongoServiceInfo serviceInfo = (MongoServiceInfo) cloud.getServiceInfo(cloud.getCloudProperties().getProperty("cloud.services.mongo.id"));
        String serviceID = serviceInfo.getId();
        return cloud.getServiceConnector(serviceID, MongoDbFactory.class, null);
    }

    @Override
    protected String getDatabaseName() {
        Cloud cloud = getCloud();
        return cloud.getCloudProperties().getProperty("cloud.services.mongo.id");
    }

    @Override
    public Mongo mongo() throws Exception {
        Cloud cloud = getCloud();
        return new MongoClient(cloud.getCloudProperties().getProperty("cloud.services.mongo.connection.host"));
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(mongoDbFactory());
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
And the error I'm getting when I try to save a document in Cloud Foundry is:
OUT ERROR: org.springframework.data.support.IsNewStrategyFactorySupport - Unexpected error
OUT java.lang.IllegalArgumentException: Unsupported entity com.foo.model.project.Project! Could not determine IsNewStrategy.
OUT at org.springframework.data.mongodb.core.MongoTemplate.insert(MongoTemplate.java:739)
OUT at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
OUT at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
Any ideas? Is it my config file etc..?
Thanks in advance
Niclas
This is usually caused when the Mongo mapping metadata for the entity has not been obtained, because the entity classes were not scanned at application startup. By default, AbstractMongoConfiguration uses the package of the actual configuration class to look for @Document-annotated classes at startup.
The exception message makes me assume that SpringCloudMongoDBConfiguration is not located in any of the super packages of com.foo.model.project. There are two solutions to this:
Stick to the convention of putting application configuration classes into the root package of your application. This causes your application packages to be scanned for domain classes, the metadata to be obtained, and the is-new detection to work as expected.
Manually hand the package containing the domain classes to the infrastructure by overriding AbstractMongoConfiguration.getMappingBasePackage(), as shown in the sketch below.
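A minimal sketch of that override inside SpringCloudMongoDBConfiguration (the package name is taken from the basePackages attribute used above):

@Override
protected String getMappingBasePackage() {
    // make sure com.foo.model (and its subpackages) are scanned for @Document classes
    return "com.foo.model";
}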
The reason you might see the configuration working in the local environment is that the mapping metadata might be obtained there through a non-persisting operation (e.g. a query) first, with everything else proceeding from that point.

catalina.base system property override in spring-boot?

My project uses the catalina.base system property to get resources relative to that property's folder location. When working in Eclipse, I pass -Dcatalina.base=. as a VM argument to the application's Main class.
My Main class is implemented like this:
@ComponentScan
@EnableAutoConfiguration
public class Main {

    private static final Logger logger = LogManager.getLogger();

    public static void main(String[] args) {
        if (logger.isDebugEnabled()) {
            String debugCatalinaBase = System.getProperty("catalina.base");
            // here catalinaBase is set to "." as expected.
            logger.debug("catalinaBase: {}", debugCatalinaBase);
        }
        SpringApplication.run(Main.class, args);
    }
}
As part of this project, I then have an ApplicationConfig class which is implemented like this:
@Configuration
public class ApplicationConfig {

    private static final Logger logger = LogManager.getLogger();

    @Value("${catalina.base}")
    private String catalinaBase;

    @Bean
    public Properties jndiProperties() {
        if (logger.isDebugEnabled()) {
            String debugCatalinaBase = System.getProperty("catalina.base");
            // here catalinaBase is set to "C:\Users\theUser\AppData\Local\Temp\tomcat.8558104871268204693.8081". NOT as expected!!!
            logger.debug("catalinaBase: {}", debugCatalinaBase);
        }
        Properties properties = new Properties();
        properties.setProperty(Context.INITIAL_CONTEXT_FACTORY,
                CnsContextFactory.class.getName());
        properties.setProperty(Context.PROVIDER_URL,
                "file:" + catalinaBase + "\\conf\\jndi.properties");
        return properties;
    }

    @Bean(name = "propertyConfigurer")
    public static PropertySourcesPlaceholderConfigurer propertyConfigurer() {
        PropertySourcesPlaceholderConfigurer configurer =
                new PropertySourcesPlaceholderConfigurer();
        return configurer;
    }
}
As you can see in the code snippet's comments, by the time ApplicationConfig runs, the catalina.base system property has changed, and I am not sure why or where this happened.
For info, my project uses spring-boot-starter-xxx:1.1.4.RELEASE jars, and spring-xxx:4.0.6.RELEASE jars.
Also, I have another project which follows pretty much the same overall pattern and uses the same jars and that project always sees catalina.base set to "." as expected. So I am wondering if the problem has something to do with the order in which spring configurations are loaded in the problematic project or something along those lines?
Thanks in advance for any help on this.
PM
Spring Boot calls Tomcat.setBaseDir(...) from TomcatEmbeddedServletContainerFactory, using a temporary directory as the source. Tomcat then actually sets the catalina.base property when the initBaseDir() method in org.apache.catalina.startup.Tomcat is called.
You need to add server.tomcat.basedir to your application.properties if you don't want Spring Boot to use a temp folder.
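For example (a sketch; the value "." mirrors the -Dcatalina.base=. argument used in Eclipse and is an assumption about the base directory you actually want):

# application.properties
server.tomcat.basedir=.

With this set, the embedded Tomcat uses the given directory instead of a temporary one, and catalina.base resolves accordingly.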