Spring Data Elasticsearch findAll with OrderBy - spring-data

I am using Spring Data's Elasticsearch module, but I am having trouble building a query, even though it is a very simple one.
My document looks as follows:
@Document(indexName = "triber-sensor", type = "event")
public class EventDocument implements Event {

    @Id
    private String id;

    @Field(type = FieldType.String)
    private EventMode eventMode;

    @Field(type = FieldType.String)
    private EventSubject eventSubject;

    @Field(type = FieldType.String)
    private String eventId;

    @Field(type = FieldType.Date)
    private Date creationDate;
}
And the spring data repository looks like:
public interface EventJpaRepository extends ElasticsearchRepository<EventDocument, String> {
    List<EventDocument> findAllOrderByCreationDateDesc(Pageable pageable);
}
So I am trying to get all events ordered by creationDate, with the newest event first. However, when I run the code I get an exception (also shown in STS):
Caused by: org.springframework.data.mapping.PropertyReferenceException: No property desc found for type Date! Traversed path: EventDocument.creationDate.
So it seems that it is not picking up the 'OrderBy' part. However, a query with a findBy clause (e.g. findByCreationDateOrderByCreationDateDesc) seems to be okay, and a findAll without ordering works too.
Does this mean that the Elasticsearch module of Spring Data doesn't allow findAll with ordering?

Try adding By to the method name:
findAllByOrderByCreationDateDesc
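Applied to the repository from the question, the interface would become (a minimal sketch):

public interface EventJpaRepository extends ElasticsearchRepository<EventDocument, String> {
    List<EventDocument> findAllByOrderByCreationDateDesc(Pageable pageable);
}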

Related

Any way I can change the Mongo document name at runtime

In the project we need to change the collection name suffix every day based on the date.
So one day collection is named:
samples_22032019
and in the next day it is
samples_23032019
Every day I need to change the suffix and recompile the Spring Boot application because of this. Is there any way I can change this so the collection/table name can be calculated dynamically based on the current date? Any advice for MongoRepository?
Considering the below is your bean, you can use the @Document annotation with Spring Expression Language (SpEL) to resolve the suffix at runtime, like shown below:

@Document(collection = "samples_#{T(com.yourpackage.Utility).getDateSuffix()}")
public class Samples {
    private String id;
    private String name;
}
Now put your date logic in a utility method which Spring can resolve at runtime. SpEL is handy in such scenarios.
package com.yourpackage;

import org.joda.time.DateTime;

public class Utility {
    public static final String getDateSuffix() {
        // Add your real logic here; the below is for representational purposes only.
        return DateTime.now().toDate().toString();
    }
}
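If you want the ddMMyyyy-style suffix from the question (e.g. samples_22032019), a minimal sketch using java.time could look like this (the pattern string is an assumption based on the example names):

package com.yourpackage;

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class Utility {
    public static String getDateSuffix() {
        // Produces e.g. "22032019" for 22 March 2019 (pattern assumed from the question)
        return LocalDate.now().format(DateTimeFormatter.ofPattern("ddMMyyyy"));
    }
}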
HTH!
Make a cron job run daily to generate the new name for your collection and execute the code below. Here I am getting the collection via MongoDatabase, then renaming it with MongoNamespace.
To get the old/new collection names you can write a separate method.
import com.mongodb.MongoClient;
import com.mongodb.MongoNamespace;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class RenameCollectionTask {

    @Scheduled(cron = "${cron}")
    public void renameCollection() {
        // creating the mongo client object
        final MongoClient client = new MongoClient(HOST_NAME, PORT);
        // selecting the mongo database
        final MongoDatabase database = client.getDatabase("databaseName");
        // selecting the mongo collection
        final MongoCollection<Document> collection = database.getCollection("oldCollectionName");
        // creating the new namespace
        final MongoNamespace newName = new MongoNamespace("databaseName", "newCollectionName");
        // renaming the collection
        collection.renameCollection(newName);
        System.out.println("Collection has been renamed");
        // closing the client
        client.close();
    }
}
To assign the name of the collection dynamically you can refer to this, so that a restart will not be required every time.
The renameCollection() method has the following limitations:
1) It cannot move a collection between databases.
2) It is not supported on sharded collections.
3) You cannot rename views.
Refer to this for details.

Spring Data JPA: Work with Pageable but with a specific set of fields of the entity

I am working with Spring Data 2.0.6.RELEASE.
I am working on pagination for performance and presentation purposes.
Regarding performance, I mean that if we have a lot of records it is better to show them through pages.
I have the following and it works fine:

interface PersonaDataJpaCrudRepository extends PagingAndSortingRepository<Persona, String> {
}
The @Controller works fine with:

@GetMapping(produces = MediaType.TEXT_HTML_VALUE)
public String findAll(Pageable pageable, Model model) {
    ...
}
Through Thymeleaf I am able to apply pagination, so up to this point the goal has been accomplished.
Note: the Persona class is annotated with JPA (@Entity, @Id, etc.)
Now I am concerned about the following: even though pagination works in Spring Data with respect to the number of records, what about the content of each record?
I mean: let's assume the Persona class contains 20 fields (consider any entity you want for your app). For an HTML view whose report only uses 4 of them (id, firstname, lastname, date), we have 16 unnecessary fields in memory for each entity.
I have tried the following:
interface PersonaDataJpaCrudRepository extends PagingAndSortingRepository<Persona, String> {

    @Query("SELECT p.id, p.nombre, p.apellido, p.fecha FROM Persona p")
    @Override
    Page<Persona> findAll(Pageable pageable);
}
If I do a simple print in the @Controller it fails with the following:
java.lang.ClassCastException:
[Ljava.lang.Object; cannot be cast to com.manuel.jordan.domain.Persona
If I avoid that, the view fails with:
Caused by:
org.springframework.expression.spel.SpelEvaluationException:
EL1008E:
Property or field 'id' cannot be found on object of type
'java.lang.Object[]' - maybe not public or not valid?
I have read many posts in SO such as:
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to
I understand the answer and I agree about the Object[] return type, because I am working with a specific set of fields.
Is it mandatory to work with the complete set of fields for each entity? Should I simply accept the memory cost of the 16 fields that are never used, for each record retrieved?
Is there a way to work with a specific set of fields, or with Object[], using the current Spring Data API?
Have a look at Spring Data projections. For example, an interface-based projection may be used to expose certain attributes through specific getter methods.
Interface:

interface PersonaSubset {
    long getId();
    String getNombre();
    String getApellido();
    String getFecha();
}
Repository method (since the projected return type differs from the entity, a derived method name such as findAllProjectedBy avoids clashing with the inherited findAll):

Page<PersonaSubset> findAllProjectedBy(Pageable pageable);
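In context, a minimal sketch of the repository from the question (the method name findAllProjectedBy is illustrative; any derived-query name returning the projection works):

interface PersonaDataJpaCrudRepository extends PagingAndSortingRepository<Persona, String> {
    Page<PersonaSubset> findAllProjectedBy(Pageable pageable);
}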
If you only want to read a specific set of columns you don't need to fetch the whole entity. Create a class containing the requested columns - for example:

public class PersonBasicData {

    private String firstName;
    private String lastName;

    public PersonBasicData(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // getters and setters if needed
}
Then you can specify the query using the @Query annotation on the repository method, using a constructor expression like this:

@Query("SELECT NEW some.package.PersonBasicData(p.firstName, p.lastName) FROM Person AS p")
You could also use the Criteria API to get it done programmatically:
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<PersonBasicData> query = cb.createQuery(PersonBasicData.class);
Root<Person> person = query.from(Person.class);
query.multiselect(person.get("firstName"), person.get("lastName"));
List<PersonBasicData> results = entityManager.createQuery(query).getResultList();
Be aware that the PersonBasicData instance is created just for read purposes - you won't be able to make changes to it and persist them back to your database, as the class is not marked as an entity and so your JPA provider will not track it.

How to pass namedQuery parameters in Apache Camel JPA by header?

I have this Camel route:

from("direct:getUser")
    .pollEnrich("jpa://User?namedQuery=User.findById&consumeDelete=false");
This is my User entity:

@Entity
@NamedQueries({
    @NamedQuery(name = "User.findAll", query = "SELECT u FROM User u"),
    @NamedQuery(name = "User.findById", query = "SELECT u FROM User u WHERE u.id = :id")
})
public class User {

    @Id
    private String id;
}
I have tried this route by setting the header:

from("direct:getUser")
    .setHeader("id", simple("myid"))
    .pollEnrich("jpa://User?namedQuery=User.findById&consumeDelete=false");

But it is not working.
Is there any way to set JPA query parameters from headers? The Camel documentation mentions this in the parameters option, but I couldn't find any examples.
Options: parameters
This option is Registry based which requires the # notation. This key/value mapping is used for building the query parameters. It is expected to be of the generic type java.util.Map<String, Object> where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. Camel 2.19: it can be used for producer as well. When it's used for producer, Simple expression can be used as a parameter value. It allows you to retrieve parameter values from the message body, headers, etc.
I hope it's not too late to answer. In any case, I had a similar issue in my project: the client does an HTTP GET with a parameter id, which is used by the JPA query, and the result is finally marshalled back to the HTTP client. I'm running Camel in a Spring application.
I finally figured out how to achieve it in a reasonably clean way.
This is the RouteBuilder where the route is defined:

@Override
public void configure() throws Exception {
    Class dataClass = SomeClass.class;
    JacksonDataFormat format = new JacksonDataFormat();
    format.setUnmarshalType(dataClass);

    String jpaString = String
            .format("jpa://%1$s?resultClass=%1$s&namedQuery=q1" +
                    "&parameters={\"id\":${headers.id}}", dataClass.getName());

    from("jetty://http://localhost:8080/test")
            .toD(jpaString) // note the .toD (dynamic to)
            .marshal(format);
}
And this is the StringToMapTypeConverter class; without it Camel cannot convert {"id": X} into a Map:

import java.io.IOException;
import java.util.Map;
import org.apache.camel.Converter;
import org.apache.camel.TypeConverters;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StringToMapTypeConverter implements TypeConverters {

    private static final ObjectMapper mapper = new ObjectMapper();
    private static JavaType mapType;

    static {
        mapType = mapper.getTypeFactory().constructMapType(Map.class, String.class, Object.class);
    }

    @Converter
    public Map<String, Object> toMap(String map) throws IOException {
        return mapper.readValue(map, mapType);
    }
}
Remember to add it to the context. In Spring it is something like:

<bean id="myStringToMapTypeConverter" class="....StringToMapTypeConverter" />
Refs:
http://camel.apache.org/jpa.html
http://camel.apache.org/message-endpoint.html#MessageEndpoint-DynamicTo
http://camel.apache.org/type-converter.html#TypeConverter-Addtypeconverterclassesatruntime

How to query using fields of nested objects for a Spring Data repository

Here are my entity classes:

public class User {

    @Id
    UserIdentifier userIdentifier;

    String name;
}

public class UserIdentifier {
    String ssn;
    String id;
}
Here is what I am trying to do:
public interface UserRepository extends MongoRepository<User, UserIdentifier> {
    User findBySsn(String ssn);
}
I get an exception message (runtime) saying:
No property ssn found on User!
How can I implement/declare such a query?
According to Spring Data Repositories reference:
Property expressions can refer only to a direct property of the managed entity, as shown in the preceding example. At query creation time you already make sure that the parsed property is a property of the managed domain class. However, you can also define constraints by traversing nested properties.
So, instead of
User findBySsn(String ssn);
the following worked (in my example):
User findByUserIdentifierSsn(String ssn);
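In context, the repository from the question becomes:

public interface UserRepository extends MongoRepository<User, UserIdentifier> {
    User findByUserIdentifierSsn(String ssn);
}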

Play Framework: issues with implementing a RESTful update operation

We're creating a RESTful API based on Play Framework 2.1.x which transfers/accepts data in JSON format. Create, read and delete operations were easy to implement but we got stuck on the update operation.
Here are the entities we have:
Event:
@Entity
public class Event extends Model {

    @Id
    public Long id;

    @NotEmpty
    public String title;

    @OneToOne(cascade = CascadeType.ALL)
    public Location location;

    @OneToMany(cascade = CascadeType.ALL)
    public List<Stage> stages = new LinkedList<Stage>();
    ...
}
Location:
@Entity
public class Location extends Model {

    @Id
    public Long id;

    @NotEmpty
    public String title;

    public String address;
    ...
}
Stage:
@Entity
public class Stage extends Model {

    @Id
    public Long id;

    @NotEmpty
    public String title;

    public int capacity;
    ...
}
In our router we have following entry:
PUT /events/:id controllers.Event.updateEvent(id: Long)
The updateEvent method in the controller looks as follows (note: we use the Jackson library to map objects to JSON and back):
@BodyParser.Of(BodyParser.Json.class)
public static Result updateEvent(Long id) {
    Event event = Event.find.byId(id);
    Http.RequestBody requestBody = request().body();
    JsonNode jsonNode = requestBody.asJson();
    try {
        ObjectMapper mapper = new ObjectMapper();
        ObjectReader reader = mapper.readerForUpdating(event);
        event = reader.readValue(jsonNode);
        event.save();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return ok();
}
After we've got the Event from the database and updated its values by reading from the JSON with the ObjectReader, we try to save the updated Event and get an exception (a similar one occurs when trying to update the list of Stages):
org.h2.jdbc.JdbcSQLException: Unique index or primary key violation: "PRIMARY_KEY_9F ON PUBLIC.LOCATION(ID)"; SQL statement: insert into location (id, title, address) values (?,?,?) [23505-168]
According to the H2 logs, the framework tries to perform an insert operation for the location and fails because a location with the specified id already exists. We've investigated further and it looks like when we get the Event from the DB, the location is not joined because of lazy fetching. The problem seems to occur when saving the other entities our Event has relationships with. We've tried to force the fetch operation for location by doing the following:
Event event = Ebean.find(Event.class).fetch("location").where().eq("id", id).findUnique();
but still, when we update this event with ObjectReader's readValue method and save the Event, we get the same exception.
We've also tried to create a separate Event object from the JSON and update the Event from the DB field by field (implementing the merge operation ourselves), and it worked; but it looks odd that the framework doesn't provide any means of merging and updating entities with data passed from the client.
Could someone advise on how to solve this problem? Any example showing how to merge an entity with JSON data coming from the client and update it in storage would be highly appreciated.
You've probably already fixed the error by now, but in case this helps someone else, I'm answering it anyway.
I'm just a beginner with Play Framework as well; I only started a few days ago. But I believe that where you have in your code:
event.save();
you should be doing instead:
event.update();
The problem here is that you're not inserting a new entity into the database, but in fact just updating the one already there, so you need to use the second method.
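Applied to the controller from the question, the relevant part would become (only the save call changes):

try {
    ObjectMapper mapper = new ObjectMapper();
    ObjectReader reader = mapper.readerForUpdating(event);
    event = reader.readValue(jsonNode);
    event.update(); // update the existing row instead of inserting a new one
} catch (IOException e) {
    e.printStackTrace();
}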
You can find more info about this at http://www.playframework.com/documentation/2.0/api/java/play/db/ebean/Model.html