So, I was trying to find a way to remove/rename (and change the field's value) the _class field from the document generated by Spring Data Couchbase, as the document is going to be stored by one service and in all likelihood consumed by someone totally different.
I was playing around with the API for Spring Data Couchbase and, through a bit of trial and error, found that I can rename the _class field and give it a custom value in the following way:
1) Override the typeKey method in a class inheriting AbstractCouchbaseConfiguration. For example, let's say we overrode typeKey to do the following:
@Override
public String typeKey() {
    return "type";
}
2) In the POJO that stores the data into Couchbase, add a field whose name matches the return value of the typeKey method and give it the custom value as needed:
private final String type = "studentDoc";
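For context, here is a minimal sketch of such a POJO (the class name, the @Document annotation, and the extra fields are just assumptions for illustration):
import org.springframework.data.couchbase.core.mapping.Document;

@Document
public class StudentDoc {

    // Must match the name returned by typeKey(); its value replaces the
    // fully qualified class name Spring Data would otherwise write.
    private final String type = "studentDoc";

    // ... other fields, getters and setters
}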
I wanted to check whether this is a valid way of going about this, and/or whether a better way is available to do something like this now.
That is the only way to do it with Spring Data at the moment. We would like to add a few extra ways to do that, but we are limited to the Spring Data interface contracts. That is why most of the extra configuration is done via AbstractCouchbaseConfiguration.
The Spring Data library needs a field with the fully qualified class name as its value to understand which class to deserialize the data from Couchbase into. By default, this field is named _class, but it can be renamed by overriding the typeKey() method in your Couchbase configuration (extending AbstractCouchbaseConfiguration), as you mentioned:
@Override
public String typeKey() {
    return "customType";
}
But as far as I know, you shouldn't modify the value of the field since the library would not be able to understand which object to deserialize the data into.
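For completeness, a sketch of the configuration class the override sits in (assuming Spring Data Couchbase 4.x, whose abstract connection methods are shown with placeholder values):
import org.springframework.context.annotation.Configuration;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;

@Configuration
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Override
    public String getConnectionString() {
        return "couchbase://127.0.0.1"; // placeholder
    }

    @Override
    public String getUserName() {
        return "user"; // placeholder
    }

    @Override
    public String getPassword() {
        return "password"; // placeholder
    }

    @Override
    public String getBucketName() {
        return "bucket"; // placeholder
    }

    // Renames the type field from the default "_class"; don't change the field's value
    // in your POJOs if you still want Spring Data to deserialize into the right class.
    @Override
    public String typeKey() {
        return "customType";
    }
}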
I use Spring Data by extending SimpleJpaRepository. Sometimes we need only a few specific fields of an entity, and sometimes other fields. If we create a projection class or interface for every need, there will be many classes that are each used in only one place. Is there any way to pass fields/columns as a map/list to createQuery?
I use Spring Data by extending SimpleJpaRepository
That is at least weird, if not wrong. You'd normally extend one or multiple of the Spring Data interfaces.
Anyway, yes this is possible like so:
Is there a way to achieve this?
Yes, there is.
Version 2.6 RC1 of Spring Data JPA introduced fluent APIs for Query By Example, Specifications, and Querydsl.
You can use this, among other things, to configure projections. Note that only interface projections are supported.
You can use projections like this:
interface SomeRepository extends CrudRepository<User, Long>, JpaSpecificationExecutor<User> {}

class MyService {

    @Autowired
    SomeRepository repository;

    void doSomething() {
        List<User> users = repository.findBy(
                someSpecification,
                q -> q.project("firstname", "roles").all()
        );
        // ...
    }
}
It will return entities, but only the fields given in the project clause will be populated.
All the examples for storing multi-field data require specifying a value class. However, I do not know the fields or their types until run-time. I would like to be able to create a region with a dynamic set of field values. For example,
put --key=101 --value=('firstname':'James','lastname':'Gosling') --region=/region1 --value-class=data.Person
However, the data.Person class does not exist.
Furthermore, I would like to be able to query on the firstname field (or any other field of the value).
How can I do this with Geode?
You don't need a domain class to store data in Geode. You can store JSON natively in Geode. OQL queries make no distinction between PDX-serialized objects and JSON values. In fact, when you store a JSON value in Geode, beneath the covers it is converted into a PdxInstance. You can read more about PDX serialization in the documentation.
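For example, a minimal sketch of putting a JSON document directly (this assumes an existing cache reference and a region named region1, matching the question):
import org.apache.geode.cache.Region;
import org.apache.geode.pdx.JSONFormatter;
import org.apache.geode.pdx.PdxInstance;

// Convert the JSON string to a PdxInstance and store it; no data.Person class is needed.
String json = "{\"firstname\": \"James\", \"lastname\": \"Gosling\"}";
PdxInstance value = JSONFormatter.fromJSON(json);

Region<Integer, PdxInstance> region1 = cache.getRegion("region1");
region1.put(101, value);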
You can use PdxInstance.
Example using Java:
region.put(101, cache.createPdxInstanceFactory("data.Person")
        .writeString("firstname", "James")
        .writeString("lastname", "Gosling")
        .create());
I am using Spring Data with MongoDB to store very dynamic config data in a toolkit. These Config objects consist of a few organizational fields, along with a data field of type Object. On some instances of Config, the data object refers to a more deeply nested subdocument (such as "data.foo.bar" within the database; this field name is set by getDataField() below). These Config objects are manipulated as they're sent to the database, so the storage code looks something like this:
MongoTemplate template; // This is autowired into the class.
Query query; // This is the same query which (successfully) finds the object.
Config myConfig; // The config to create or update in Mongo
Update update = new Update()
.set(getDataField(), myConfig.getData())
.set(UPDATE_TIME_FIELD, new Date())
.setOnInsert(CREATE_TIME_FIELD, new Date())
.setOnInsert(NAME_FIELD, myConfig.getName());
template.upsert(query, update, Config.class);
Spring recursively converts the data object into a DBObject correctly, but neither the data document nor any of its subdocuments have "_class" fields in the database. Consequently, they do not deserialize correctly.
These issues seem quite similar to those previously reported in DATAMONGO-392 , DATAMONGO-407, and DATAMONGO-724. Those, however, have all been fixed. (I am using spring-data-mongodb 1.4.2.RELEASE)
Am I doing something incorrectly? Is there a possibility that this is a Spring issue?
Came across a similar issue. One solution is to write your own Converter for Config.class.
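A rough sketch of what that could look like for the nested data payload (MyData and its foo field are hypothetical stand-ins for whatever Config.getData() actually holds; a matching @ReadingConverter would be needed for the way back):
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

@WritingConverter
public class MyDataWriteConverter implements Converter<MyData, DBObject> {

    @Override
    public DBObject convert(MyData source) {
        DBObject dbo = new BasicDBObject();
        // Write the type information explicitly so the nested document can be mapped back on read.
        dbo.put("_class", MyData.class.getName());
        dbo.put("foo", source.getFoo()); // hypothetical field
        return dbo;
    }
}
The converter is then registered by overriding customConversions() in the Mongo configuration (extending AbstractMongoConfiguration):
import java.util.Arrays;
import org.springframework.data.mongodb.core.convert.CustomConversions;

@Override
public CustomConversions customConversions() {
    return new CustomConversions(Arrays.asList(new MyDataWriteConverter()));
}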
How do I use the 'exists' keyword in Spring Data in a query method?
I would like to have a method like this:
public interface ProfileRepository extends JpaRepository<Profile, Long> {
boolean existsByAttribute(String attribute);
}
where Attribute is a field of the Profile.
A workaround would be to use a custom implementation. But the appendix defines exists as a keyword. Could someone give me an example of how to use this keyword?
Documented keywords are intended to be used in combination with a property reference. Thus, the semantics of EXISTS in this case are that it checks whether the property exists. Note that that part of the documentation is pulled from Spring Data Commons, and a keyword being listed there doesn't mean it's supported in Spring Data JPA (as indicated in the first paragraph of the section you linked). Exists is not supported by Spring Data JPA, as it only makes sense in, for example, MongoDB, where there's a difference between a field not being present at all and the field being present with a logically null value.
So what you're looking for seems to be around the (Is)Null keyword with the current limitation that it would return objects and you'd have to check the returned list for content. There's a ticket to add support for projections for derived query methods which you might wanna follow for further progress.
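A sketch of that workaround in the question's terms (using the IsNotNull variant of the keyword to check that the property is present; the caller-side list check is the limitation mentioned above):
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

public interface ProfileRepository extends JpaRepository<Profile, Long> {

    // Derived query; it returns whole entities rather than a boolean.
    List<Profile> findByAttributeIsNotNull();
}

// Caller side: emulate the "exists" check by inspecting the list contents.
boolean exists = !profileRepository.findByAttributeIsNotNull().isEmpty();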
I've yet to use Morphia, but I'm considering it for a current project.
Suppose I have a POJO with a number of #Reference annotations and I ask Morphia to fetch the object graph from the database. If I then make another DAO or DataStore call and ask Morphia to fetch some object that was already instantiated in the first graph, would Morphia return a reference to the already instantiated object or would it create a new instance?
If Morphia returns a new instance of the object each time, does anyone have a recommendation of how to best approach creating a Morphia-backed repository that won't duplicate already-instantiated objects?
As I see it, Morphia will re-read every reference.
This is one of the problems that led me to create Morphium. I integrated a caching layer there, so if you read a reference, it won't be read again (at least if you search by ID...).
We use Morphia in production, and there are two ways to make sure you don't load the references, which is something we came across too.
One is to use the lazy loading option when you define the @Reference element in your main class. This of course means that this behavior is 'global' to that object.
The better way to do this is to not define an @Reference using Morphia and instead manage the references yourself. Let me know if you need a code sample.
I've stopped using @Reference too and instead declare something like:
ObjectId itemId
rather than having a field item. This has two benefits: (1) it lets me define a getter through a helper getObject(...) method which I have written with object caching, and (2) it stores a simple ObjectId in the Mongo document rather than a full DBRef, which includes the collection name and is thus about twice the data size.
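A rough sketch of that pattern with the legacy Morphia API (Order, Item, and the getter are made-up names, and the caching part is left out for brevity):
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;

@Entity
public class Order {

    @Id
    private ObjectId id;

    // Plain ObjectId instead of an @Reference-backed DBRef.
    private ObjectId itemId;

    // Resolve the reference on demand; a real version would consult a cache first.
    public Item getItem(Datastore datastore) {
        return datastore.get(Item.class, itemId);
    }
}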