I would like my MongoRepository in Spring Boot to automatically delete documents at a certain point in time after creation. Therefore I created the following class:
import java.time.LocalDateTime;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;

public class MyDocument {

    @Id
    private String id;

    @Indexed
    private String name;

    @Indexed(expireAfterSeconds = 0)
    private LocalDateTime deleteAt;
}
Then, I save it in the Spring Boot MongoRepository:
MyDocument doc = modelMapper.map(myDocumentDto, MyDocument.class);
LocalDateTime timePoint = LocalDateTime.now();
timePoint = timePoint.plusMinutes(1L);
doc.setDeleteAt(timePoint);
docRepository.save(doc);
I periodically query the repository and would assume that after one minute, the document will not be there anymore. Unfortunately, I get the document every time I query and it is never deleted.
What am I doing wrong?
The document is persisted as follows (.toString()):
MyDocument{id='5915c65a2e9b694ac8ff8b67', name='string', deleteAt=2017-05-12T16:28:38.787}
Is MongoDB possibly unable to read and process the LocalDateTime format? I'm using org.springframework.data:spring-data-mongodb:1.10.1.RELEASE, so JSR-310 should already be supported as announced in 2015 here: https://spring.io/blog/2015/03/26/what-s-new-in-spring-data-fowler
I could fix the issue:
First of all, java.time.LocalDateTime is no problem with Spring Data / MongoDB.
For clarification, give the index a name, e.g. @Indexed(name = "deleteAt", expireAfterSeconds = 0). This step might not be needed, though. However, adding the @Document annotation helped a lot:
@Document(collection = "expiringDocument")
When I delete the whole collection while my application is running, a new document insert will create the collection again but without the indexes. To ensure indexes are created, restart the application.
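Putting the fix together, a minimal sketch of the corrected class (the collection name follows the snippet above; note that MongoDB's TTL monitor only runs about once a minute, so deletion is not instantaneous):

import java.time.LocalDateTime;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "expiringDocument")
public class MyDocument {

    @Id
    private String id;

    @Indexed
    private String name;

    // TTL index: MongoDB deletes the document once deleteAt has passed
    @Indexed(name = "deleteAt", expireAfterSeconds = 0)
    private LocalDateTime deleteAt;
}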
Related
I have a specific requirement in my project: storing logging data in my MongoDB database. There are lots of blog posts about storing logs in a relational database, but I can't find anything that works with MongoDB.
After hours of searching, I found this WordPress article, but after implementing it nothing happened: https://assylias.wordpress.com/2013/03/22/a-simple-logback-appender-for-mongodb/?unapproved=1424&moderation-hash=a5ff2a0d2832b77e2d7c0be3173ea667#comment-1424
Problem: I need to persist the log data to MongoDB.
Does anyone know how to append log data into MongoDB with Spring Boot?
Edit: I've figured out a way to do it, though it works the same with any type of database, MySQL or MongoDB alike. I'm providing an answer showing how I did it, but the question is still open. If anyone knows how to do it properly, feel free to answer; if it works, I will accept the answer.
So the trick here is making a custom method that returns a string to the Logger class and saves the data to the database (any database, relational or NoSQL, doesn't matter).
I will try to explain the whole scenario:
This is the document that will store the data in MongoDB:
@Document
public class Logs {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private String id;

    private Date date;
    private String level;
    private String message;

    // getters and setters
}
If you are using MySQL, you can use @Entity instead. Then create a MongoRepository for this class (or a CrudRepository if using JPA):
public interface LogsRepository extends MongoRepository<Logs, String> {
}
Now you have to create a CustomLogger class which is used to insert the data into the database:
@Component
public class CustomLogger {

    @Autowired
    MongoTemplate mongoTemplate;

    public String info(String message) {
        Logs logs = new Logs();
        logs.setLevel("INFO");
        logs.setMessage(message);
        logs.setDate(new Date());
        mongoTemplate.insert(logs);
        return message;
    }

    // same for other methods like debug(), error(), etc.
}
Here I used MongoTemplate instead of LogsRepository to save the data, because MongoTemplate can insert any mapped class without needing a dedicated repository interface for it.
Now all you have to do is autowire this component into the class where you are using the logger. In my case it was a controller. When I hit the API, the logs show up in the console and are also saved to the database:
@RestController
public class LogsController {

    @Autowired
    CustomLogger customLogger;

    Logger logger = LoggerFactory.getLogger(LogsController.class);

    @GetMapping("/logger")
    public String basicControllerToSaveData() {
        logger.info(customLogger.info("Saving Logs to database"));
        return "Success";
    }
}
This will do the trick!
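For comparison, the linked blog post's approach is a custom Logback appender, which avoids calling a logging component explicitly. A minimal sketch along those lines, assuming the MongoDB Java driver is on the classpath (the connection string, database, collection, and field names are illustrative assumptions, not taken from the post):

import java.util.Date;

import org.bson.Document;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;

public class MongoDbAppender extends AppenderBase<ILoggingEvent> {

    private MongoCollection<Document> collection;

    @Override
    public void start() {
        // hard-coded for brevity; in a real appender these would be
        // properties configured from logback.xml
        collection = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("logs")
                .getCollection("applicationLogs");
        super.start();
    }

    @Override
    protected void append(ILoggingEvent event) {
        collection.insertOne(new Document()
                .append("date", new Date(event.getTimeStamp()))
                .append("level", event.getLevel().toString())
                .append("logger", event.getLoggerName())
                .append("message", event.getFormattedMessage()));
    }
}

Once registered as an appender in logback.xml, every ordinary logger.info(...) call is persisted, with no custom component needed in the controllers.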
What is the best way to check if a record exists and if it doesn't, create it (avoiding duplicates)?
Keep in mind that this is a distributed application running across many application servers.
I'm trying to avoid these:
Race Conditions
TOCTOU
A simple example:
Person.java
@Entity
public class Person {

    @Id
    @GeneratedValue
    private long id;

    private String firstName;
    private String lastName;

    // getters and setters omitted
}
PersonRepository.java
public interface PersonRepository extends CrudRepository<Person, Long> {
    Person findByFirstName(String firstName);
}
Some Method
public void someMethod() {
    Person john = new Person();
    john.setFirstName("John");
    john.setLastName("Doe");

    if (personRepo.findByFirstName(john.getFirstName()) == null) {
        personRepo.save(john);
    } else {
        // don't save the person
    }
}
Clearly as the code currently stands, there is a chance that the Person could be inserted in the database in between the time I checked if it already exists and when I insert it myself. Thus a duplicate would be created.
How should I avoid this?
Based on my initial research, perhaps a combination of @Transactional and @Lock would work, but the exact configuration is what I'm unsure of. Any guidance would be greatly appreciated. To reiterate, this application will be distributed across multiple servers, so the solution must still work in a highly-available, distributed environment.
For inserts: if you want to prevent duplicate records from being persisted, you may want to take some precautions on the DB side. In your example, if firstName should be unique, define a unique index on that column (or a unique constraint over a group of columns) and let the DB handle the check; you just insert and get an exception if you're inserting a record that already exists.
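A minimal sketch of that idea (making firstName unique is just for this example; in practice you would pick a column that is naturally unique):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private long id;

    // the database now rejects a second row with the same firstName,
    // no matter which application server performs the insert
    @Column(unique = true)
    private String firstName;

    private String lastName;
}

The insert then simply catches the violation (Spring translates it to a DataIntegrityViolationException):

try {
    personRepo.save(john);
} catch (DataIntegrityViolationException e) {
    // another node inserted the same person first; treat as "already exists"
}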
For updates: use the @Version (javax.persistence.Version) annotation like this:
@Version
private long version;
Define a version column in the tables; Hibernate or any other ORM will automatically populate the value and add the version to the WHERE clause when the entity is updated. So if someone tries to update a stale entity, the update is rejected. Be careful: a plain SQL update won't throw an exception, it just returns an update count of 0, so you may want to check for that; JPA providers typically report the conflict as an OptimisticLockException.
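A minimal sketch of how the conflict surfaces with a plain EntityManager (the em and person variables are assumed from context; the explicit flush() is just to make the failure point obvious):

import javax.persistence.EntityManager;
import javax.persistence.OptimisticLockException;

// "person" was loaded earlier, possibly by another node, so its
// version field may be stale by the time we write it back
try {
    em.getTransaction().begin();
    em.merge(person);
    em.flush();   // issues UPDATE ... WHERE id = ? AND version = ?
    em.getTransaction().commit();
} catch (OptimisticLockException e) {
    em.getTransaction().rollback();
    // the row changed in the meantime: reload and retry, or report a conflict
}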
I annotate a document with @Indexed(unique = true) like so:
public class ADocumentWithUniqueIndex {

    private static final long serialVersionUID = 1L;

    @Indexed(unique = true)
    private String iAmUnique;

    public String getiAmUnique() {
        return iAmUnique;
    }

    public void setiAmUnique(String iAmUnique) {
        this.iAmUnique = iAmUnique;
    }
}
When saving the object, I specify a custom collection:
MongoOperations mongoDb = ...
mongoDb.save(document, "MyCollection");
As a result I get:
A new document in "MyCollection"
An index in the collection "ADocumentWithUniqueIndex"
How can I create the index in "MyCollection" instead without having to explicitly specify it in the annotation?
BACKGROUND:
The default collection name is too ambiguous in our use case. We cannot guarantee that there won't be two documents with the same name but in different packages, so we added the package name to the collection name.
Mapping a document to a collection is dealt with in an infrastructure component.
The implementation details like collection name etc. shouldn't leak into the individual documents.
I understand this is a bit of an "abstraction on top of an abstraction" smell, but it was required since we had to support both MongoDB and Windows Azure blob storage. Not anymore, though...
This seemed like a fairly standard approach to hiding the persistence details in an infrastructure component. Any comments on the approach are appreciated as well.
It's kind of unusual to define the collection for an object to be stored in and then expect the index annotations to work. There are a few options here:
Use @Document on ADocumentWithUniqueIndex and configure the collection name manually. This will of course cause all objects of that class to be persisted into that collection.
Manually create indexes via MongoOperations.indexOps() in the collections you'd like to use. This would be more consistent with your approach of manually determining the collection name during persistence operations.
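A minimal sketch of the second option, reusing the collection name from the question:

import org.springframework.data.domain.Sort.Direction;
import org.springframework.data.mongodb.core.index.Index;

MongoOperations mongoDb = ...
// create the unique index on the same collection the document is saved to
mongoDb.indexOps("MyCollection")
       .ensureIndex(new Index().on("iAmUnique", Direction.ASC).unique());
mongoDb.save(document, "MyCollection");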
I'm still trying to get my head around MongoDB and how entities can best be mapped. Take, for example, the entities User and Address: someone coming from a JPA background would model this as a one-to-many relationship. Here in Mongo I don't want to use DBRef, so the addresses are kept in a Set inside User.
Suppose I'm using spring-data-mongo:
Question 1: should both User and Address have the @Document annotation, or just User?
Question 2: what is the best way to query for the addresses of a user? Is it possible in the first place? Right now I query for the User by username or id and then get the addresses of the user. Can I query directly for the sub-document? If yes, how is it done using a spring-data-mongo Criteria query?
@Document
public class User {

    @Id
    private Long ID;

    private String username;
    private Set<Address> addresses = new HashSet<Address>();
    ...
}

@Document
public class Address {

    @Id
    private Long ID;

    private String city;
    private String line1;
    ...
}
Question 1: No, @Document is not strictly necessary at all. We just leverage it on application startup if you activate classpath scanning for document classes. If you don't, the persistence metadata scanning will be done on the first persistence operation. We traverse the properties of domain objects then, so Address will be discovered.
Question 2: You'll have to read the User objects entirely, as MongoDB currently does not allow returning sub-documents on their own. So you'll have to query for the entire User document, but you can restrict the fields being returned to the addresses field, using a field spec on the Query object or the repository abstraction's @Query annotation (see the reference docs).
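A minimal sketch of that restriction with the Criteria API (the username value and the mongoOperations instance are illustrative):

import static org.springframework.data.mongodb.core.query.Criteria.where;

import org.springframework.data.mongodb.core.query.Query;

Query query = new Query(where("username").is("alice"));
query.fields().include("addresses");   // return only the addresses field

// the result is still a User object, but with only the addresses populated
User user = mongoOperations.findOne(query, User.class);
Set<Address> addresses = user.getAddresses();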
Using JPA with EclipseLink, I would like to track the timestamp of the last update made to an entity instance. Assuming that this would be easy to combine with optimistic locking, I defined the entity as follows:
import javax.persistence.Version;
[...]
@Entity
public class Foo {
    @Id int id;
    @Version Timestamp lastChange;
    [...]
}
Updating a changed object is done with the following code:
EntityManager em = Persistence.createEntityManagerFactory("myConfiguration")
                              .createEntityManager();
em.getTransaction().begin();
em.merge(foo);
em.getTransaction().commit();
I would expect foo.lastChange to be set to the new timestamp each time an update to a changed instance is committed. However, while the LASTCHANGE field is updated in the database, it is not updated in the object itself. A second attempt to save the same object thus fails with an OptimisticLockException. I know that EclipseLink allows you to choose between storing the version field in the cache or directly in the object, and I made sure that the configuration is set to IN_OBJECT.
The obvious question is: how do I get the foo.lastChange field set to the updated timestamp value when saving to the database? Would
foo = em.find(Foo.class, foo.id);
be the only option? I suspect there must be a simpler way to do this.
merge does not modify its argument. It copies the state from its argument onto the attached version of that argument and returns the attached version. You should thus use:
foo = em.merge(foo);
// ...
return foo;
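Applied to the update snippet from the question, a minimal sketch:

em.getTransaction().begin();
foo = em.merge(foo);   // keep the managed instance that merge returns
em.getTransaction().commit();
// foo is now the attached entity; its @Version field (lastChange)
// reflects the timestamp written by the commit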