I am building a web service that exposes data from Cassandra through a RESTful interface. I'm using spring-boot-web for the REST part, spring-boot-actuator for production-ready features, and spring-data-cassandra for the Cassandra interface. I'm looking for a custom HealthIndicator (http://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-endpoints.html#production-ready-health) for CassandraTemplate that I can plug in.
I haven't found one in the spring-data-cassandra documentation. Is there one under development?
More generally, what would be a good strategy to check the health of CassandraTemplate?
Neither Spring Boot nor Spring Data Cassandra provides a HealthIndicator for Cassandra out of the box, but building your own is straightforward. You just need to create a new HealthIndicator bean that interacts with Cassandra. For example:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health.Builder;
import org.springframework.dao.DataAccessException;
import org.springframework.data.cassandra.core.CassandraOperations;
import org.springframework.stereotype.Component;
@Component
public class CassandraHealthIndicator extends AbstractHealthIndicator {

    private final CassandraOperations cassandra;

    @Autowired
    public CassandraHealthIndicator(CassandraOperations cassandra) {
        this.cassandra = cassandra;
    }

    @Override
    protected void doHealthCheck(Builder builder) throws Exception {
        try {
            this.cassandra.execute("SELECT now() FROM system.local");
            builder.up().build();
        } catch (DataAccessException ex) {
            builder.down(ex).build();
        }
    }
}
As long as this bean exists in your application context, Spring Boot's Actuator will find it and use it when determining your application's health.
Exactly what query you execute against Cassandra may vary depending on your requirements. The query used above was taken from this answer to a question about performing a health check on Cassandra.
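For illustration only, once the bean is registered the /health endpoint reports an entry named after the bean (with the HealthIndicator suffix stripped); the exact structure and the other entries you see depend on your Spring Boot version, security settings and the remaining registered indicators:
{
    "status" : "UP",
    "cassandra" : {
        "status" : "UP"
    }
}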
CassandraHealthIndicator is available since Spring Boot 2.0
I'm writing a custom implementation for a Spring Data JPA repository. So I have:
MyEntityRepositoryCustom => interface with the custom methods
MyEntityRepositoryImpl => implementation of the above interface
MyEntityRepository => standard interface which extends JpaRepository and MyEntityRepositoryCustom
My problem is this: within the implementation of MyEntityRepositoryImpl I need to access the entity manager that was injected into Spring Data. How do I get it?
I could use @PersistenceContext to get it autowired, but the problem is that this repository must work in an application that sets up more than one persistence unit. So, to tell Spring which one I need, I would have to use @PersistenceContext(unitName="myUnit"). However, since my repositories are defined in a reusable service layer, I can't know at that point what name the higher-level application layer will give to the persistence unit it configures and injects into my repositories.
In other words, what I need is to access the entity manager that Spring Data itself is using, but after a (not so quick) look at the Spring Data JPA documentation I couldn't find anything for this.
Honestly, the fact that the Impl classes are totally unaware of Spring Data, although described as a strength in the Spring Data manual, is actually a complication whenever your custom implementation needs to access something that is usually provided by Spring Data itself (almost always, I would say...).
Since Spring Data JPA 1.9.2 you have access to the EntityManager through JpaContext; see http://docs.spring.io/spring-data/jpa/docs/1.9.2.RELEASE/reference/html/#jpa.misc.jpa-context.
Example:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.repository.JpaContext;
import org.springframework.stereotype.Component;

@Component
public class RepositoryUtil {

    @Autowired
    private JpaContext jpaContext;

    public <T> void detach(T entity) {
        jpaContext.getEntityManagerByManagedType(entity.getClass()).detach(entity);
    }
}
P.S.
This approach will not work if you have more than one EntityManager candidate for a given class; see the implementation of JpaContext#getEntityManagerByManagedType (DefaultJpaContext#getEntityManagerByManagedType).
The best I could find is to set up a "convention": my repositories declare that they expect a persistence unit named myConventionalPU to be made available. The application layer then assigns that alias to the entity manager factory that it sets up and injects into Spring Data, so my custom implementations can receive the correct EMF with autowiring by using that alias. Here's an excerpt of my application context:
<bean id="myEntityManagerFactory" name="myConventionalPU"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
[...]
</bean>
<jpa:repositories base-package="com.example"
entity-manager-factory-ref="myEntityManagerFactory"
transaction-manager-ref="transactionManager" />
And within my custom implementation:
@PersistenceContext(unitName = "myConventionalPU")
private EntityManager em;
I opened DATAJPA-669 with this requirement.
Spring Boot's auto-configuration classes generate the entityManagerFactory, dataSource and transactionManager beans for you.
If you want access to the entityManager and control over its instantiation and settings, you need to define your own persistence configuration. Below is sample code using Java config:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.JpaVendorAdapter;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = { "com.test.repositories" })
public class PersistenceJpaConfig {

    @Autowired
    JpaVendorAdapter jpaVendorAdapter;

    @Bean
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .setName("testdb")
                .setType(EmbeddedDatabaseType.HSQL)
                .build();
    }

    @Bean
    public EntityManager entityManager() {
        return entityManagerFactory().createEntityManager();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean lef = new LocalContainerEntityManagerFactoryBean();
        lef.setDataSource(dataSource());
        lef.setJpaVendorAdapter(jpaVendorAdapter);
        lef.setPackagesToScan("com.test.domain");
        lef.afterPropertiesSet();
        return lef.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new JpaTransactionManager(entityManagerFactory());
    }
}
If you have multiple data sources, follow this article.
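For a rough idea of what that involves, here is a minimal sketch of one of the per-data-source configuration classes; the package names, bean names and the ordersDataSource bean are hypothetical placeholders, and a second, analogous class would point at the other data source and its own repository package:
import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.JpaVendorAdapter;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableJpaRepositories(
        basePackages = "com.test.repositories.orders",
        entityManagerFactoryRef = "ordersEntityManagerFactory",
        transactionManagerRef = "ordersTransactionManager")
public class OrdersPersistenceConfig {

    // Builds the EntityManagerFactory for this data source only; an "ordersDataSource"
    // bean is assumed to be defined elsewhere.
    @Bean
    public LocalContainerEntityManagerFactoryBean ordersEntityManagerFactory(
            @Qualifier("ordersDataSource") DataSource dataSource, JpaVendorAdapter jpaVendorAdapter) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setJpaVendorAdapter(jpaVendorAdapter);
        emf.setPackagesToScan("com.test.domain.orders");
        return emf;
    }

    @Bean
    public PlatformTransactionManager ordersTransactionManager(
            @Qualifier("ordersEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}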
I am trying to understand the spring data solr showcase project.
https://github.com/christophstrobl/spring-data-solr-showcase
After spending quite a bit of time, I could not find how the productRepository is implemented and injected in https://github.com/christophstrobl/spring-data-solr-showcase/blob/master/src/main/java/org/springframework/data/solr/showcase/product/ProductServiceImpl.java
@Service
class ProductServiceImpl implements ProductService {

    private static final Pattern IGNORED_CHARS_PATTERN = Pattern.compile("\\p{Punct}");

    private ProductRepository productRepository;

    @Autowired
    public void setProductRepository(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }
The ProductRepository is defined as an interface (https://github.com/christophstrobl/spring-data-solr-showcase/blob/master/src/main/java/org/springframework/data/solr/showcase/product/ProductRepository.java) and I did not find any code implementing this interface:
interface ProductRepository extends SolrCrudRepository<Product, String> {

    @Highlight(prefix = "<b>", postfix = "</b>")
    @Query(fields = { SearchableProductDefinition.ID_FIELD_NAME,
            SearchableProductDefinition.NAME_FIELD_NAME,
            SearchableProductDefinition.PRICE_FIELD_NAME,
            SearchableProductDefinition.FEATURES_FIELD_NAME,
            SearchableProductDefinition.AVAILABLE_FIELD_NAME },
            defaultOperator = Operator.AND)
    HighlightPage<Product> findByNameIn(Collection<String> names, Pageable page);

    @Facet(fields = { SearchableProductDefinition.NAME_FIELD_NAME })
    FacetPage<Product> findByNameStartsWith(Collection<String> nameFragments, Pageable pageable);
}
Below is how the spring context is configured:
https://github.com/christophstrobl/spring-data-solr-showcase/blob/master/src/main/java/org/springframework/data/solr/showcase/Application.java
If anyone could point me to where this interface is implemented and injected, that would be great.
The showcase makes use of the Spring Data repository abstraction with query derivation from method names. The infrastructure provided by Spring Data and the Solr module takes care of creating the required implementations for you. Please have a look at the Reference Documentation for a more detailed explanation.
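As a small, purely hypothetical illustration of query derivation (this interface is not part of the showcase), declaring a finder method is all that is needed; Spring Data Solr parses the method name into a query on the corresponding field and generates the implementation behind the scenes:
import java.util.List;
import org.springframework.data.solr.repository.SolrCrudRepository;

interface ProductSearchRepository extends SolrCrudRepository<Product, String> {

    // Derived query: the method name is parsed into a query on the "name" field
    // (roughly name:<prefix>*); no implementation class is written by hand.
    List<Product> findByNameStartingWith(String namePrefix);
}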
The showcase itself is built in a way that allows you to step through several stages of development by looking at the diffs when transitioning from one step to the other. Having a look at Step 2 shows how to make use of a custom repository implementation, while Step 4 demonstrates how to enable highlighting using @Highlight.
The goal of Spring Data is to reduce the amount of boilerplate code (that is, to reduce repetition).
For the basic methods like save and find, the implementation is provided by Spring, and Spring creates beans (objects) for these interfaces. To tell Spring that these are my repositories inside this package, we write @EnableJpaRepositories(basePackages = "com.spring.repositories") or <jpa:repositories base-package="com.acme.repositories"/> for JPA repositories.
For Solr repositories we have to write @EnableSolrRepositories(basePackages = "com.spring.repositories") or <solr:repositories base-package="com.acme.repositories" />. Spring creates objects for these interfaces, and we can inject these interface objects using the @Autowired annotation.
Example:
@Service
public class SomeService {

    @Autowired
    private SampleRepository sampleRepository;

    /* A @PostConstruct method is executed just after the bean has been created
       and all dependencies have been injected by Spring. */
    @PostConstruct
    public void postConstruct() {
        System.out.println("SampleRepository implementation class name: " + sampleRepository.getClass());
    }
}
The above example prints the implementation class of the SampleRepository interface (this class is not user defined; it is generated by Spring).
The reference documentation is at http://docs.spring.io/spring-data/solr/docs/2.0.2.RELEASE/reference/html/. Reading it will give you more knowledge of Spring Data.
I've been trying some examples with OSGi Declarative Services (among other things, such as Blueprint) on Karaf. The problem I am trying to solve now is how to get references to certain services at runtime (so annotations and/or XML are not really an option here).
I will explain my use case:
I am trying to design (so far only in my head, which is why I am still only experimenting with OSGi :) ) a system to control certain automation processes in industry. To communicate with the devices, a special set of protocols is used. To make the components as reusable as possible, I designed a communication model based on layers (like the ISO/OSI model for networking, but much simpler).
To transform this into OSGi, each layer of my system would be composed of a set of bundles. One for interfaces of that layer, and then one plugin for each implementation of that layer (imagine this as TCP vs. UDP on the Transport layer of OSI).
To reference any device in such a network, a custom address format will be used (two examples of such addresses are xpa://12.5/03FE and xpb://12.5/03FE). Such an address contains all the information about the layers and their values needed in order to access the requested device. As you can guess, each part of this address represents one layer of my networking model.
These addresses will be stored in some configuration database (so, again, simple .cfg or .properties files are not an option) so that they can be changed at runtime remotely.
I am thinking about creating a factory that will parse this address and, according to all its components, will create a chain of objects (obtained as the appropriate services from OSGi) that implement all the layers, and configure them accordingly.
As there can be more than one implementation of a single layer (and therefore more than one service implementing a single interface), this factory will need to decide at runtime, when it gets the device address passed as a string, which particular implementation to choose, according to additional properties the services declare.
How could this be implemented in OSGi? Which approach is better suited for this: DS, Blueprint, or something else?
I realise that this is now a very late answer to this question, but both answers miss the obvious built-in support for filtering in Declarative Services.
A target filter can be defined for a DS reference using the #Reference annotation:
@Component
public class ExampleComponent {

    @Reference(target = "(foo=bar)")
    MyService myService;
}
This target filter can also be added (or overridden) using configuration. For the component:
@Component(configurationPid = "fizz.buzz")
public class ExampleComponent {

    @Reference
    MyService myService;
}
A configuration dictionary for the pid fizz.buzz can then set a new filter using the key myService.target.
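For example, if that configuration is supplied as a file picked up by Felix File Install (the file name below is only illustrative; any Configuration Admin source works), a single property retargets the reference without touching the code:
# fizz.buzz.cfg
myService.target=(foo=baz)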
This is a much better option than jumping down to the raw OSGi API, and has been available for several specification releases.
I retract my answer, because the accepted answer is correct. When I answered this question I missed this small but very important detail in the spec.
There is a nice mechanism provided by OSGi called the service tracker, and you can use it inside a declarative service. In this example there is a configuration property which holds the filter for the service you want to use. If the filter configuration changes, the whole component is reactivated, so the tracking mechanism is restarted.
import org.apache.felix.scr.annotations.Activate;
import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Deactivate;
import org.apache.felix.scr.annotations.Properties;
import org.apache.felix.scr.annotations.Property;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;
import org.osgi.service.component.ComponentContext;
import org.osgi.util.tracker.ServiceTracker;
import org.osgi.util.tracker.ServiceTrackerCustomizer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Component(immediate = true, metatype = true)
@Properties(value = {
        @Property(name = "filterCriteria", value = "(objectClass=*)")
})
public class CustomTracker {

    private static final Logger LOGGER = LoggerFactory.getLogger(CustomTracker.class);

    private CustomServiceTracker customServiceTracker;

    @Activate
    protected void activate(ComponentContext componentContext) throws InvalidSyntaxException {
        String filterCriteria = (String) componentContext.getProperties().get("filterCriteria");
        customServiceTracker = new CustomServiceTracker(componentContext.getBundleContext(), filterCriteria);
        customServiceTracker.open(true);
    }

    @Deactivate
    protected void deactivate() {
        customServiceTracker.close();
    }

    /**
     * OSGi framework service tracker implementation. It is able to listen to all services
     * available in the system that match the configured filter.
     */
    class CustomServiceTracker extends ServiceTracker {

        CustomServiceTracker(BundleContext bundleContext, String filterCriteria) throws InvalidSyntaxException {
            super(bundleContext, bundleContext.createFilter(filterCriteria), (ServiceTrackerCustomizer) null);
        }

        @SuppressWarnings("checkstyle:illegalcatch")
        @Override
        public Object addingService(ServiceReference serviceReference) {
            try {
                Object instance = super.addingService(serviceReference);
                // TODO: Whatever you need
                return instance;
            } catch (Exception e) {
                LOGGER.error("Error adding service", e);
            }
            return null;
        }

        @Override
        public void removedService(ServiceReference serviceReference, Object service) {
            // TODO: Whatever you need
            super.removedService(serviceReference, service);
        }

        @Override
        public void modifiedService(ServiceReference serviceReference, Object service) {
            super.modifiedService(serviceReference, service);
        }
    }
}
The only option I see for this use case is to use the OSGi API directly. It sounds like you have to do the service lookups each time you get an address to process. Thus you will have to get the appropriate service implementation (based on a filter) each time you are going to process an address.
Declarative approaches like DS and Blueprint will not enable you to do this, as the filters cannot be altered at runtime.
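To make that concrete, here is a minimal sketch of such a lookup using the raw OSGi API; the Layer interface, the protocol service property and the scheme value are hypothetical placeholders for whatever your layer bundles actually register:
import java.util.Collection;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

// Placeholder for the layer API that your interface bundles would export.
interface Layer { }

public class LayerFactory {

    private final BundleContext context;

    public LayerFactory(BundleContext context) {
        this.context = context;
    }

    // Picks the layer implementation whose "protocol" service property matches the
    // scheme parsed from the device address (e.g. "xpa" or "xpb").
    public Layer lookupLayer(String scheme) throws InvalidSyntaxException {
        String filter = "(protocol=" + scheme + ")";
        Collection<ServiceReference<Layer>> refs =
                context.getServiceReferences(Layer.class, filter);
        if (refs.isEmpty()) {
            return null;
        }
        // Remember to unget the service once the address has been processed.
        return context.getService(refs.iterator().next());
    }
}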
I am using Spring Data JPA, and in my persistence tier I use repositories to access the database. Sometimes I use:
1) a service class with repositories within it, and autowire the service
2) the repository directly, autowired into the caller
Do both approaches behave in the same way? What is the difference between using the Repository directly versus using the Repository from within a service?
Repository:
package com.me.repository;
import com.me.myentities.MyEntity;
import org.springframework.data.jpa.repository.JpaRepository;
public interface MyEntityRepository extends JpaRepository<MyEntity, Long> {
}
Service:
package com.me.service;
import com.me.entities.*;
import org.springframework.stereotype.Service;
import com.me.repository.*;
import javax.annotation.Resource;
@Service
public class MyService {

    @Resource
    private MyEntityRepository myEntityRepository;

    public void update(MyEntity myEntity) {
        myEntityRepository.save(myEntity);
    }
}
Either way you're working with the same sort of concrete Repository object, which is created by Spring when the container loads. And if you don't do anything with scoping, you're working with references to the exact same object.
So do the approaches work the same way? If the service does nothing but make a pass-through call to the delegate Repository, then it's not going to result in any different end behavior. And if this pass-through behavior is all the service does, it doesn't have an inherent reason for existing.
Spring Data JPA provides standard ways to add custom behavior to the repositories, or to restrict the behavior of a repository to a smaller set of methods. The Spring Data JPA documentation (while brief) provides demonstrations of these techniques.
The end result is the same but generally you'll find working directly with a Repository more concise.
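That said, a service earns its place once it adds something beyond the pass-through call, such as a transaction boundary spanning several repositories or a business rule. Here is a rough sketch of that idea; the AuditRepository and AuditEntry types are made up for the example:
import javax.annotation.Resource;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyService {

    @Resource
    private MyEntityRepository myEntityRepository;

    @Resource
    private AuditRepository auditRepository;   // hypothetical second repository

    // Both saves happen inside one transaction, which a caller using the
    // repositories directly would have to arrange itself.
    @Transactional
    public void update(MyEntity myEntity) {
        myEntityRepository.save(myEntity);
        auditRepository.save(new AuditEntry(myEntity));   // hypothetical audit entity
    }
}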
We have a Play! application where we need to expose a set of REST interfaces to an intranet and a set of REST interfaces we have to expose to the public internet. They share a data layer so we would like to run them together if possible. My assumption is that they will be running on different ports. Being new to Play!, I don't know if this is possible to do within a single Play! instance. I have looked at modules but that didn't seem to fit what we are doing. Has anyone had any experience with this sort of scenario?
Forgot to mention we are using Play! 2.
You could restrict or permit access to resources by checking the IP address.
public class IPLocalSecurity extends Controller {

    @Before
    public static void checkAccess() throws Exception {
        // Only allow requests coming from the 192.168.1.x range.
        if (!request.remoteAddress.matches("192\\.168\\.1\\..*")) {
            forbidden();
        }
    }
}
and use that in the resources controller.
@With(IPLocalSecurity.class)
public class IntranetController extends Controller {
    ....
}