I'm trying to migrate my application from Hibernate Search 5 to 6. What I noticed in version 6 is that field validation was added, so you can't search on fields that don't exist in the index. Unfortunately I have business logic that relied on that. Is there a way to access the ElasticsearchIndexModel class (as far as I can see, the field state lives in this class) and check whether a specific field exists? Or is there any way to do that at all?
There is a metamodel API.
Something like this should work:
<T> boolean isSearchable(SearchMapping mapping, Class<T> entityClass,
        String fieldPath) {
    SearchIndexedEntity<T> entity = mapping.indexedEntity( entityClass );
    IndexDescriptor index = entity.indexManager().descriptor();
    Optional<IndexFieldDescriptor> fieldOptional = index.field( fieldPath );
    if ( !fieldOptional.isPresent() ) {
        return false;
    }
    IndexFieldDescriptor field = fieldOptional.get();
    return field.isValueField() && field.toValueField().type().searchable();
}
You can access the SearchMapping this way:
SearchMapping mapping = Search.mapping( entityManagerFactory );
Or:
SearchMapping mapping = Search.mapping( entityManager.getEntityManagerFactory() );
Or in Quarkus, you can simply have it injected into your beans:
public class MyBean {
    @Inject
    SearchMapping mapping;
    ...
}
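Putting it together, a minimal usage sketch (the Book entity and its title field here are hypothetical):
SearchMapping mapping = Search.mapping( entityManagerFactory );
if ( isSearchable( mapping, Book.class, "title" ) ) {
    // "title" exists in the index and is searchable: safe to query on it
}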
I would like to use Hibernate Search's @IndexingDependency with a PropertyBridge, but I can't seem to make it work.
I get this error:
Hibernate ORM mapping: type 'com.something.Person': path '.currentStatus':
failures:
- HSEARCH700020: Unable to find the inverse side of the association on type
'com.something.Person' at path '.currentStatus<no value extractors>'. Hibernate Search
needs this information in order to reindex 'com.something.Person' when
'com.something.Status' is modified. You can solve this error by defining the inverse
side of this association, either with annotations specific to your integration
(@OneToMany(mappedBy = ...) in Hibernate ORM) or with the Hibernate Search
@AssociationInverseSide annotation. Alternatively, if you do not need to reindex
'com.something.Person' when 'com.something.Status' is modified, you can disable
automatic reindexing with @IndexingDependency(reindexOnUpdate = ReindexOnUpdate.SHALLOW)
Not sure if I'm doing something wrong or if what I'm trying to do isn't possible. Thanks for the help.
Here are the files involved.
Person.class
@Entity
@Table
@Indexed
public class Person {
    @OneToMany(mappedBy = "patient", cascade = CascadeType.ALL)
    private Set<Status> status = new HashSet<>();

    @Transient
    @StatusBinding(fieldName = "currentStatus")
    @IndexingDependency(derivedFrom = @ObjectPath(@PropertyValue(propertyName = "status")))
    public Status getCurrentStatus() {
        return this.status.stream()
                .filter(it -> it.getDate().isAfter(LocalDate.now()))
                .max(Comparator.comparing(Status::getDate))
                .orElse(null);
    }
}
StatusBinding.class
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.FIELD})
@PropertyMapping(processor = @PropertyMappingAnnotationProcessorRef(type = StatusBinding.Processor.class))
@Documented
public @interface StatusBinding {
    String fieldName() default "";

    class Processor implements PropertyMappingAnnotationProcessor<StatusBinding> {
        @Override
        public void process(PropertyMappingStep mapping, StatusBinding annotation, PropertyMappingAnnotationProcessorContext context) {
            StatusBinder binder = new StatusBinder();
            if (!annotation.fieldName().isBlank()) binder.setFieldName(annotation.fieldName());
            mapping.binder(binder);
        }
    }
}
StatusBinder.class
public class StatusBinder implements PropertyBinder {
    @Setter private String fieldName = "mainStatus";

    @Override
    public void bind(PropertyBindingContext context) {
        context.dependencies()
                .use("status")
                .use("date")
                .use("note");
        IndexSchemaObjectField mainStatusField = context.indexSchemaElement().objectField(this.fieldName);
        context.bridge(Status.class, new StatusBridge(
                mainStatusField.toReference(),
                mainStatusField.field("status", context.typeFactory().asString()).toReference(),
                mainStatusField.field("date", context.typeFactory().asLocalDate()).toReference(),
                mainStatusField.field("note", context.typeFactory().asString()).toReference()
        ));
    }

    private static class StatusBridge implements PropertyBridge<Status> {
        private final IndexObjectFieldReference mainStatusField;
        private final IndexFieldReference<String> statusField;
        private final IndexFieldReference<LocalDate> dateField;
        private final IndexFieldReference<String> noteField;

        public StatusBridge(
                IndexObjectFieldReference mainStatusField,
                IndexFieldReference<String> statusField,
                IndexFieldReference<LocalDate> dateField,
                IndexFieldReference<String> noteField
        ) {
            this.mainStatusField = mainStatusField;
            this.statusField = statusField;
            this.dateField = dateField;
            this.noteField = noteField;
        }

        @Override
        public void write(DocumentElement target, Status mainStatus, PropertyBridgeWriteContext context) {
            DocumentElement statusElement = target.addObject(this.mainStatusField);
            statusElement.addValue(this.statusField, mainStatus.getStatus());
            statusElement.addValue(this.dateField, mainStatus.getDate());
            statusElement.addValue(this.noteField, mainStatus.getNote());
        }
    }
}
Problem
When a Status entity is modified, Hibernate Search doesn't know how to retrieve the corresponding Person having that Status as its currentStatus.
Solution
Assuming the currentStatus is always contained in status, and since Status.patient is the inverse side of the Person.status association, you should only need to add this:
@Transient
@StatusBinding(fieldName = "currentStatus")
@IndexingDependency(derivedFrom = @ObjectPath(@PropertyValue(propertyName = "status")))
// ADD THIS ANNOTATION
@AssociationInverseSide(
        inversePath = @ObjectPath(@PropertyValue(propertyName = "patient"))
)
public Status getCurrentStatus() {
    // ...
}
Why?
I'll try to explain this, but it's a bit complex, so bear with me.
Derived properties and the inverse side of associations are related concepts: they share the common purpose of allowing Hibernate Search to perform automatic reindexing.
However, they are still separate concepts, and Hibernate Search is not able to infer one from the other.
With #IndexingDependency(derivedFrom), you are defining what the computation of currentStatus depends on:
@IndexingDependency(derivedFrom = @ObjectPath(@PropertyValue(propertyName = "status")))
public Status getCurrentStatus() {
This tells Hibernate Search that currentStatus will change whenever the status property changes. With that information, Hibernate Search is able to determine that whenever you call person.getStatus().remove(...) or person.getStatus().add(...) (for example), your Person entity needs reindexing, because currentStatus is indexed, and it probably changed.
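For example, in hypothetical calling code (getStatus() and setPatient() are assumed accessors):
// Modifying the status collection is enough to trigger reindexing of person,
// because currentStatus is declared as derived from "status"
@Transactional
void addStatus(Person person, Status newStatus) {
    newStatus.setPatient(person);
    person.getStatus().add(newStatus);
}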
In your custom binder, you're also defining dependencies:
context.dependencies()
        .use("status")
        .use("date")
        .use("note");
This tells Hibernate Search that whenever the status, date, and note properties of a Status entity change, the Person having that Status as its currentStatus will need reindexing.
However... what Hibernate Search doesn't know is how to retrieve the person having that Status as its currentStatus.
It may know how to retrieve all persons having that Status in their status set, but that's a different thing, isn't it? Hibernate Search doesn't know that currentStatus is actually one of the elements contained in the status property. For all it knows, getCurrentStatus() could very well be doing this: status.iterator().next().getParentStatus(). Then the current status wouldn't be included in Person#status, and it's unclear if myStatus.getPatient() could return a Person whose currentStatus is myStatus.
So you need to tell Hibernate Search explicitly: "from a given Status myStatus, if you retrieve the value of myStatus.getPatient(), you get the Person whose currentStatus property may point back to myStatus". That's exactly what #AssociationInverseSide is for.
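For reference, the inverse side named here is the patient property on Status; a minimal sketch (the non-association fields are assumed from the binder above):
@Entity
public class Status {
    // Inverse side of Person.status (@OneToMany(mappedBy = "patient"))
    @ManyToOne
    private Person patient;

    private LocalDate date;
    private String status;
    private String note;

    public Person getPatient() {
        return patient;
    }
    // getters/setters for date, status and note omitted
}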
I have a Spring Batch application where BeanWrapperFieldSetMapper is used to map fields using a prototype object. However, the CSV file that is being read (via a FlatFileItemReader) contains one (indicator) field that determines the mapping of another field. If the indicator field has a value of Y, then the value of the other field should be mapped to property foo; otherwise it should be mapped to property bar.
I know that I can use a custom FieldSetMapper to do this, but then I have to code the mapping of all the other fields (of which there are quite a few). Alternatively, I could do this after reading via an ItemProcessor, but then my domain (prototype) object must have a property representing the indicator field (which I prefer not to do, since it is not really part of the business domain).
Is it possible to perhaps use a custom FieldSetMapper to map only these custom fields and delegate the other mappings to BeanWrapperFieldSetMapper? Or is there some better way to solve this?
Here is my current attempt to use a custom FieldSetMapper and delegate to BeanWrapperFieldSetMapper:
public class DelegatedFieldSetMapper extends BeanWrapperFieldSetMapper<MyProtoClass> {

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        String indicator = fieldSet.readString("indicator");
        Properties fieldProperties = fieldSet.getProperties();
        if (indicator.equalsIgnoreCase("y")) {
            fieldProperties.put("test.foo", fieldSet.readString("value"));
        } else {
            fieldProperties.put("test.bar", fieldSet.readString("value"));
        }
        fieldProperties.remove("indicator");
        Set<Object> keys = fieldProperties.keySet();
        List<String> names = new ArrayList<String>();
        List<String> values = new ArrayList<String>();
        for (Object key : keys) {
            names.add((String) key);
            values.add(fieldProperties.getProperty((String) key));
        }
        DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
        return super.mapFieldSet(domainObjectFieldSet);
    }
}
However, a FlatFileParseException is thrown. The relevant parts of the batch config class are as follows:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Value("${file}")
    private File file;

    @Bean
    @Scope("prototype")
    public MyProtoClass myProtoClass() {
        return new MyProtoClass();
    }

    @Bean
    public ItemReader<MyProtoClass> reader(LineMapper<MyProtoClass> lineMapper) {
        FlatFileItemReader<MyProtoClass> flatFileItemReader = new FlatFileItemReader<MyProtoClass>();
        flatFileItemReader.setResource(new FileSystemResource(file));
        final int NUMBER_OF_HEADER_LINES = 1;
        flatFileItemReader.setLinesToSkip(NUMBER_OF_HEADER_LINES);
        flatFileItemReader.setLineMapper(lineMapper);
        return flatFileItemReader;
    }

    @Bean
    public LineMapper<MyProtoClass> lineMapper(LineTokenizer lineTokenizer, FieldSetMapper<MyProtoClass> fieldSetMapper) {
        DefaultLineMapper<MyProtoClass> lineMapper = new DefaultLineMapper<MyProtoClass>();
        lineMapper.setLineTokenizer(lineTokenizer);
        lineMapper.setFieldSetMapper(fieldSetMapper);
        return lineMapper;
    }

    @Bean
    public LineTokenizer lineTokenizer() {
        DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
        lineTokenizer.setNames(new String[] {"value", "test.bar", "test.foo", "indicator"});
        return lineTokenizer;
    }

    @Bean
    public FieldSetMapper<MyProtoClass> fieldSetMapper(PropertyEditor emptyStringToNullPropertyEditor) {
        BeanWrapperFieldSetMapper<MyProtoClass> fieldSetMapper = new DelegatedFieldSetMapper();
        fieldSetMapper.setPrototypeBeanName("myProtoClass");
        Map<Class<String>, PropertyEditor> customEditors = new HashMap<Class<String>, PropertyEditor>();
        customEditors.put(String.class, emptyStringToNullPropertyEditor);
        fieldSetMapper.setCustomEditors(customEditors);
        return fieldSetMapper;
    }
}
Finally, the CSV flat file looks like this:
value,bar,foo,indicator
abc,,,y
xyz,,,n
Let's say that BatchWorkObject is the class to be mapped.
Here's sample code in Spring Boot style that needs only your custom logic to be added:
FieldSetMapper<BatchWorkObject> mapper = new BeanWrapperFieldSetMapper<BatchWorkObject>() {
    {
        this.setTargetType(BatchWorkObject.class);
    }

    @Override
    public BatchWorkObject mapFieldSet(FieldSet fs)
            throws BindException {
        BatchWorkObject tmp = super.mapFieldSet(fs);
        // your custom code here
        return tmp;
    }
};
The code actually accomplishes what is desired, except for one issue that results in the FlatFileParseException. The issue is in the DelegatedFieldSetMapper, on this line:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
To resolve it, swap the two constructor arguments, since DefaultFieldSet expects the token values first and the column names second:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(values.toArray(new String[values.size()]), names.toArray(new String[names.size()]));
Write your own FieldSetMapper with a set of prepared delegates inside.
Those delegates are pre-built, one for each different kind of field mapping.
In your mapper, route to the correct delegate based on the indicator field (with a Classifier, for example), as sketched below.
I can't see any other way, but this solution is quite easy and straightforward to maintain.
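A minimal sketch of that idea, assuming two pre-configured BeanWrapperFieldSetMapper delegates (the class name and constructor here are hypothetical; Classifier comes from Spring's org.springframework.classify package):
public class DelegatingFieldSetMapper implements FieldSetMapper<MyProtoClass> {

    // Routes each FieldSet to a pre-built delegate based on the indicator column
    private final Classifier<FieldSet, FieldSetMapper<MyProtoClass>> classifier;

    public DelegatingFieldSetMapper(FieldSetMapper<MyProtoClass> fooMapper,
            FieldSetMapper<MyProtoClass> barMapper) {
        this.classifier = fieldSet ->
                "y".equalsIgnoreCase(fieldSet.readString("indicator")) ? fooMapper : barMapper;
    }

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        return classifier.classify(fieldSet).mapFieldSet(fieldSet);
    }
}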
Processing based on the input format/data can also be done with a custom implementation of ItemProcessor, which either changes values in the same entity (the one populated by the ItemReader) or creates a new output entity.
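A minimal sketch of that approach (getIndicator(), getValue(), setFoo() and setBar() are assumed accessors on the domain object, which the original poster noted they would prefer to avoid):
public class IndicatorItemProcessor implements ItemProcessor<MyProtoClass, MyProtoClass> {

    @Override
    public MyProtoClass process(MyProtoClass item) {
        // Move the value to the right property based on the indicator field
        if ("y".equalsIgnoreCase(item.getIndicator())) {
            item.setFoo(item.getValue());
        } else {
            item.setBar(item.getValue());
        }
        return item;
    }
}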
I'm using FileHelpers to parse a very wide, fixed-format file and want to be able to take the resulting object and load it into a DB using EF. I'm getting a missing-key error when I try to load the object into the DB, and when I try to add an Id I get a FileHelpers error. So it seems like either fix breaks the other. I know I can map a FileHelpers object to a POCO object and load that, but I'm dealing with dozens (sometimes hundreds) of columns, so I would rather not go through that hassle.
I'm also open to other suggestions for parsing a fixed width file and loading the results into a DB. One option of course is to use an ETL tool but I'd rather do this in code.
Thanks!
This is the FileHelpers class:
public class AccountBalanceDetail
{
    [FieldHidden]
    public int Id; // Added to try and get EF to work

    [FieldFixedLength(1)]
    public string RecordNumber;

    [FieldFixedLength(3)]
    public string Branch;

    // Additional fields below
}
And this is the method that's processing the file:
public static bool ProcessFile()
{
    var dir = Properties.Settings.Default.DataDirectory;
    var engine = new MultiRecordEngine(typeof(AccountBalanceHeader), typeof(AccountBalanceDetail), typeof(AccountBalanceTrailer));
    engine.RecordSelector = new RecordTypeSelector(CustomSelector);
    var fileName = dir + "\\MOCK_ACCTBAL_L1500.txt";
    var res = engine.ReadFile(fileName);
    foreach (var rec in res)
    {
        var type = rec.GetType();
        if (type.Name == "AccountBalanceHeader") continue;
        if (type.Name == "AccountBalanceTrailer") continue;
        var data = rec as AccountBalanceDetail; // Throws an error if AccountBalanceDetail.Id has a getter and setter
        using (var ctx = new ApplicationDbContext())
        {
            // Throws an error if there is no valid Id on AccountBalanceDetail
            // EntityType 'AccountBalanceDetail' has no key defined. Define the key for this EntityType.
            ctx.AccountBalanceDetails.Add(data);
            ctx.SaveChanges();
        }
        //Console.WriteLine(rec.ToString());
    }
    return true;
}
Entity Framework needs the key to be a property, not a field, so you could try declaring it instead as:
public int Id { get; set; }
I suspect FileHelpers might well be confused by the autogenerated backing field, so you might need to write it long-form in order to mark the backing field with the [FieldHidden] attribute, i.e.:
[FieldHidden]
private int _Id;

public int Id
{
    get { return _Id; }
    set { _Id = value; }
}
However, you are trying to use the same class for two unrelated purposes, and this is generally bad design. On the one hand, AccountBalanceDetail is the spec for the import format; on the other, you are also trying to use it to describe the entity. Instead, you should create separate classes and map between the two with a LINQ function or a library like AutoMapper.
I'm new to Laravel-Mongodb and trying to get a result by parameter, but it's not working.
Model:
use Jenssegers\Mongodb\Model as Eloquent;

class Customer extends Eloquent {
    protected $connection = 'mongodb';
    protected $collection = 'Customer';
}
Controller:
class AdminController extends Controller
{
    public function index() {
        return Customer::all();
    }

    public function show($id) {
        return Customer::find($id);
    }
}
index() works fine, but show($id) returns empty. It works if I use:
return Customer::find(1);
I'm not sure why it's not working with a parameter. Am I missing something?
You need to add a protected variable to your model, like below:
protected $primaryKey = 'customerId';
You can set your own primary key through this variable, but if you don't add this line to the model, it will default to _id as the primary key, and _id is MongoDB's autogenerated unique id.
That's the reason why you are not able to get the record by id.
1 is not a valid ObjectId. Try to find a valid ID with a tool like Robomongo, or just list your customers with your index method to find out what the IDs are.
The query should look more like this:
return Customer::find("507f1f77bcf86cd799439011");
You can read more about MongoDB's ObjectId here:
https://docs.mongodb.org/manual/reference/object-id/
I'm a little confused as to the purpose of a data model in Entity Framework code-first. Because EF will auto-generate a database from scratch for you if it doesn't already exist using nothing more than the data model (including data annotations and Fluent API stuff in DbContext.OnModelCreating), I was assuming that the data model should fully describe your database's structure, and you wouldn't need to modify anything fundamental after that.
However, I came across this Codeplex issue in which one of the EF Triage Team members suggests that custom indexes be added in data migrations, but not as annotations on your data model fields or as Fluent API code.
But wouldn't that mean that anyone auto-generating the database from scratch would not get those custom indexes added to their DB? The assumption seems to be that once you start using data migrations, you're never going to create the database from scratch again. What if you're working in a team and a new team member comes along with a new SQL Server install? Are you expected to copy over a database from another team member? What if you want to start using a new DBMS, like Postgres? I thought one of the cool things about EF was that it was DBMS-independent, but if you're no longer able to create the database from scratch, you can no longer do things in a DBMS-independent way.
For the reasons I outlined above, wouldn't adding custom indexes in a data migration but not in the data model be a bad idea? For that matter, wouldn't adding any DB structure changes in a migration but not in the data model be a bad idea?
Are EF code-first models intended to fully describe a database's structure?
No, they don't fully describe the database structure or schema. Still, there are ways to get a fully described database out of EF. They are as below:
You can use the new CTP5's ExecuteSqlCommand method on the Database class, which allows raw SQL commands to be executed against the database.
The best place to invoke ExecuteSqlCommand for this purpose is inside a Seed method that has been overridden in a custom initializer class. For example:
protected override void Seed(EntityMappingContext context)
{
    context.Database.ExecuteSqlCommand("CREATE INDEX IX_NAME ON ...");
}
You can even add Unique Constraints this way.
It is not a workaround; the index will be created whenever the database is generated.
OR
If you really want an attribute-based approach, here it goes:
[AttributeUsage(AttributeTargets.Property, Inherited = false, AllowMultiple = true)]
public class IndexAttribute : Attribute
{
    public IndexAttribute(string name, bool unique = false)
    {
        this.Name = name;
        this.IsUnique = unique;
    }

    public string Name { get; private set; }

    public bool IsUnique { get; private set; }
}
After this, you will have an initializer, which you register in your OnModelCreating method as below:
public class IndexInitializer<T> : IDatabaseInitializer<T> where T : DbContext
{
    private const string CreateIndexQueryTemplate = "CREATE {unique} INDEX {indexName} ON {tableName} ({columnName});";

    public void InitializeDatabase(T context)
    {
        const BindingFlags PublicInstance = BindingFlags.Public | BindingFlags.Instance;
        Dictionary<IndexAttribute, List<string>> indexes = new Dictionary<IndexAttribute, List<string>>();
        string query = string.Empty;
        foreach (var dataSetProperty in typeof(T).GetProperties(PublicInstance).Where(p => p.PropertyType.Name == typeof(DbSet<>).Name))
        {
            var entityType = dataSetProperty.PropertyType.GetGenericArguments().Single();
            TableAttribute[] tableAttributes = (TableAttribute[])entityType.GetCustomAttributes(typeof(TableAttribute), false);
            indexes.Clear();
            string tableName = tableAttributes.Length != 0 ? tableAttributes[0].Name : dataSetProperty.Name;
            foreach (PropertyInfo property in entityType.GetProperties(PublicInstance))
            {
                IndexAttribute[] indexAttributes = (IndexAttribute[])property.GetCustomAttributes(typeof(IndexAttribute), false);
                NotMappedAttribute[] notMappedAttributes = (NotMappedAttribute[])property.GetCustomAttributes(typeof(NotMappedAttribute), false);
                if (indexAttributes.Length > 0 && notMappedAttributes.Length == 0)
                {
                    ColumnAttribute[] columnAttributes = (ColumnAttribute[])property.GetCustomAttributes(typeof(ColumnAttribute), false);
                    foreach (IndexAttribute indexAttribute in indexAttributes)
                    {
                        if (!indexes.ContainsKey(indexAttribute))
                        {
                            indexes.Add(indexAttribute, new List<string>());
                        }
                        if (property.PropertyType.IsValueType || property.PropertyType == typeof(string))
                        {
                            string columnName = columnAttributes.Length != 0 ? columnAttributes[0].Name : property.Name;
                            indexes[indexAttribute].Add(columnName);
                        }
                        else
                        {
                            indexes[indexAttribute].Add(property.PropertyType.Name + "_" + GetKeyName(property.PropertyType));
                        }
                    }
                }
            }
            foreach (IndexAttribute indexAttribute in indexes.Keys)
            {
                query += CreateIndexQueryTemplate.Replace("{indexName}", indexAttribute.Name)
                    .Replace("{tableName}", tableName)
                    .Replace("{columnName}", string.Join(", ", indexes[indexAttribute].ToArray()))
                    .Replace("{unique}", indexAttribute.IsUnique ? "UNIQUE" : string.Empty);
            }
        }
        if (context.Database.CreateIfNotExists())
        {
            context.Database.ExecuteSqlCommand(query);
        }
    }

    private string GetKeyName(Type type)
    {
        PropertyInfo[] propertyInfos = type.GetProperties(BindingFlags.FlattenHierarchy | BindingFlags.Instance | BindingFlags.Public);
        foreach (PropertyInfo propertyInfo in propertyInfos)
        {
            if (propertyInfo.GetCustomAttribute(typeof(KeyAttribute), true) != null)
                return propertyInfo.Name;
        }
        throw new Exception("No property was found with the attribute Key");
    }
}
Then override OnModelCreating in your DbContext:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    Database.SetInitializer(new IndexInitializer<MyContext>());
    base.OnModelCreating(modelBuilder);
}
Apply the index attribute to your entity type. With this solution you can have multiple fields in the same index; just use the same index name and unique flag.
OR
You can do the migrations later on.
Note:
I have taken a lot of this code from here.
The question seems to be whether there is value in adding migrations mid-stream, or whether they will cause problems for future database initializations on different machines.
The initial migration that is created also contains the entire data model as it exists, so by adding migrations (enable-migrations in the Package Manager Console) you are, in effect, creating the built-in mechanism for your database to be properly created down the road for other developers.
If you're doing this, I do recommend modifying the database initialization strategy to run all your existing migrations, lest EF start up and get the next dev's database out of sync.
Something like this would work:
Database.SetInitializer(new MigrateDatabaseToLatestVersion<YourNamespace.YourDataContext, Migrations.Configuration>());
So, no, this won't inherently introduce problems for future work/developers. Remember that migrations are just turned into valid SQL that executes against the database... you can even use script mode to output the T-SQL required to make the DB modifications based on anything in the migrations you have created.
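For example, in the Package Manager Console (EF6), something like this should emit the full script from the initial database state up to the latest migration:
Update-Database -Script -SourceMigration: $InitialDatabase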
Cheers.