I'm planning to switch from Hibernate Search 5.11 to 6, but I can't find a way to build a range query on a LocalDateTime field with the query DSL. I prefer to use the native Lucene QueryParser anyway. In the previous version I used NumericRangeQuery, together with a @FieldBridge that converted the value to a long.
Here is my code from the previous version.
@Entity
...
@NumericField // converted to a long value
@FieldBridge(impl = LongLocalDateTimeFieldBridge.class)
@Field(index = Index.YES, analyze = Analyze.NO, store = Store.NO)
private LocalDateTime createDate;
...
This is my QueryParser:
public class NumericLocalDateRangeQueryParser extends QueryParser {
private static final Logger logger = LogManager.getLogger();
private String f;
private static final Long DEFAULT_DATE = -1L;
private String dateFormat;
public NumericLocalDateRangeQueryParser(final String f, final Analyzer a) {
super(f, a);
this.f = f;
}
public NumericLocalDateRangeQueryParser(final String dateFormat, final String f, Analyzer a) {
super(f, a);
this.f = f;
this.dateFormat = dateFormat;
logger.debug("date formate: {}", ()->dateFormat);
}
// if the field matches, try to parse the text as a date; otherwise fall back
@Override
protected Query newFieldQuery(Analyzer analyzer, String field, String queryText, boolean quoted) throws ParseException {
if (f.equals(field)) {
try {
return NumericRangeQuery.newLongRange(
field,
stringToTime(queryText).toEpochDay(), stringToTime(queryText).toEpochDay(),
true,
true
);
} catch (final DateTimeParseException ex) {
return super.newFieldQuery(analyzer, field, queryText, quoted);
}
}
return super.newFieldQuery(analyzer, field, queryText, quoted);
}
/**
 * @param field the field used when indexing
 * @param part1 date 1 (e.g. "date 1 to date 2"), as a string
 * @param part2 date 2
 * @param startInclusive
 * @param endInclusive
 * @return
 */
@Override
protected Query newRangeQuery(final String field, final String part1, final String part2,
final boolean startInclusive, final boolean endInclusive) {
if (f.equals(field)) {
try {
return NumericRangeQuery.newLongRange(
field,
stringToTime(part1).toEpochDay(), stringToTime(part2).toEpochDay(),
true,
true
);
} catch (final DateTimeParseException ex) {
return NumericRangeQuery.newLongRange(field, DEFAULT_DATE, DEFAULT_DATE, true, true);
}
} else {
return super.newRangeQuery(field, part1, part2, startInclusive, endInclusive);
}
}
@Override
protected org.apache.lucene.search.Query newTermQuery(final Term term) {
if (term.field().equals(f)) {
try {
return NumericRangeQuery.newLongRange(term.field(),
stringToTime(term.text()).toEpochDay(), stringToTime(term.text()).toEpochDay(), true, true);
} catch (final DateTimeParseException ex) {
logger.debug("it's not numeric: {}", () -> ex.getMessage());
return NumericRangeQuery.newLongRange(term.field(), DEFAULT_DATE, DEFAULT_DATE, true, true);
}
} else {
logger.debug("normal query term");
return super.newTermQuery(term);
}
}
private LocalDate stringToTime(final String date) throws DateTimeParseException {
final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(dateFormat);
return LocalDate.parse(date, formatter);
}
}
First, on the mapping side, you'll just need this:
@GenericField
private LocalDateTime createDate;
Second, the query. If you really want to write native queries and skip the whole Search DSL, I suppose you have your reasons. Would you mind sharing them in a comment? Maybe it'll give me some ideas for improvements in Hibernate Search.
Regardless, the underlying queries changed a lot between Lucene 5 and 8. You can find how we query long-based fields (such as LocalDateTime) here, and how we convert a LocalDateTime to a long here.
So, something like this should work:
long part1AsLong = stringToTime(part1).toInstant(ZoneOffset.UTC).toEpochMilli();
long part2AsLong = stringToTime(part2).toInstant(ZoneOffset.UTC).toEpochMilli();
Query query = LongPoint.newRangeQuery(field, part1AsLong, part2AsLong);
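Note that unlike NumericRangeQuery, the bounds of LongPoint.newRangeQuery are always inclusive. If you need to honor the startInclusive/endInclusive flags from newRangeQuery, a minimal sketch following the approach suggested by the Lucene javadoc:

// exclusive bounds are emulated by shifting the bound by one
long lower = startInclusive ? part1AsLong : Math.addExact(part1AsLong, 1);
long upper = endInclusive ? part2AsLong : Math.addExact(part2AsLong, -1);
Query query = LongPoint.newRangeQuery(field, lower, upper);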
Alternatively, if you can rely on the Search DSL, you can do this:
SearchSession searchSession = Search.session(entityManager);
SearchScope<MyEntity> scope = searchSession.scope(MyEntity.class);
// Pass the scope to your query parser somehow
MyQueryParser parser = new MyQueryParser(..., scope);
// Then in your parser, do this to create a range query on a `LocalDateTime` field:
LocalDateTime part1AsDateTime = stringToTime(part1);
LocalDateTime part2AsDateTime = stringToTime(part2);
Query query = LuceneMigrationUtils.toLuceneQuery(scope.predicate().range()
.field(field)
.between(part1AsDateTime, part2AsDateTime)
.toPredicate());
Note however that LuceneMigrationUtils is SPI, and as such it could change or be removed in a later version. If you think it's useful, we could expose it as API in a future version, so that it's guaranteed to stay there.
I suspect we could address your problem better by adding something to Hibernate Search, though. Why exactly do you need to rely on a query parser?
Here is my code (modified from the previous version):
public class LongPointLocalDateTimeRangeQueryParser extends QueryParser {
private static final Logger logger = LogManager.getLogger();
private String f;
private static final Long DEFAULT_DATE = -1L;
private String dateFormat;
public LongPointLocalDateTimeRangeQueryParser(final String f, final Analyzer a) {
super(f, a);
this.f = f;
}
public LongPointLocalDateTimeRangeQueryParser(final String dateFormat, final String f, Analyzer a) {
super(f, a);
this.f = f;
this.dateFormat = dateFormat;
}
@Override
protected Query newFieldQuery(Analyzer analyzer, String field, String queryText, boolean quoted) throws ParseException {
if (f.equals(field)) {
logger.debug("newFieldQuery, with field: {}, queryText: {}, quoted: {}", () -> f, () -> queryText, () -> quoted);
try {
return LongPoint.newRangeQuery(
field,
stringToTime(queryText).toInstant(ZoneOffset.UTC).toEpochMilli(), stringToTime(queryText).plusDays(1).toInstant(ZoneOffset.UTC).toEpochMilli()
);
} catch (final DateTimeParseException ex) {
logger.debug("it's not date format, error: {}", () -> ex.getMessage());
return super.newFieldQuery(analyzer, field, queryText, quoted);
}
}
logger.debug("newFieldQuery, normal, queryText: {}, quoted: {}", () -> queryText, () -> quoted);
return super.newFieldQuery(analyzer, field, queryText, quoted);
}
/**
 * @param field the field used when indexing
 * @param part1 date 1, as a string
 * @param part2 date 2
 * @param startInclusive
 * @param endInclusive
 * @return
 */
@Override
protected Query newRangeQuery(final String field, final String part1, final String part2,
final boolean startInclusive, final boolean endInclusive) {
if (f.equals(field)) {
try {
logger.debug("date 1: {}, str: {}", () -> stringToTime(part1).toInstant(ZoneOffset.UTC).toEpochMilli(), () -> part1);
logger.debug("date 2: {}, str: {}", () -> stringToTime(part2).plusDays(1).toInstant(ZoneOffset.UTC).toEpochMilli(), () -> part2);
return LongPoint.newRangeQuery(
field,
stringToTime(part1).toInstant(ZoneOffset.UTC).toEpochMilli(), stringToTime(part2).plusDays(1).toInstant(ZoneOffset.UTC).toEpochMilli()
);
} catch (final DateTimeParseException ex) {
logger.debug("it's not date format, error: {}", () -> ex.getMessage());
return LongPoint.newRangeQuery(field, DEFAULT_DATE, DEFAULT_DATE);
}
} else {
logger.debug("normal query range");
return super.newRangeQuery(field, part1, part2, startInclusive, endInclusive);
}
}
private LocalDateTime stringToTime(final String date) throws DateTimeParseException {
//... same as previous posted
}
}
And this is where the query parser is used:
//...
final SearchQuery<POSProcessInventory> result = searchSession.search(POSProcessInventory.class).extension(LuceneExtension.get())
.where(f -> f.bool(b -> {
b.must(f.bool(b1 -> {
//...
try {
if (searchWord.contains("createDate:")) {
logger.info("doing queryParser for LocalDateTime: {}", () -> searchWord);
b1.should(f.fromLuceneQuery(queryLocalDateTime("createDate", searchWord)));
}
} catch (ParseException ex) {
logger.error("#3 this is not localDateTime");
}
And the helper method:
private org.apache.lucene.search.Query queryLocalDateTime(final String field, final String dateTime)
throws org.apache.lucene.queryparser.classic.ParseException {
final LongPointLocalDateTimeRangeQueryParser createDateQ = new LongPointLocalDateTimeRangeQueryParser(accessCompanyInfo.getDateTimeFormat().substring(0, 10), field, new KeywordAnalyzer());
createDateQ.setAllowLeadingWildcard(false);
final org.apache.lucene.search.Query queryLocalDate = createDateQ.parse(dateTime);
logger.debug(field + "query field: {} query Str: {}", () -> field, () -> queryLocalDate);
return queryLocalDate;
}
Related
I have implemented a MyBatis interceptor, but we found a problem: executing the interceptor throws a large number of IllegalAccessExceptions, which hurts CPU performance.
Shown below is where the problem lies. Why doesn't it check the field's access permission before executing field.get(target)?
public class GetFieldInvoker implements Invoker {
private final Field field;
public GetFieldInvoker(Field field) {
this.field = field;
}
@Override
public Object invoke(Object target, Object[] args) throws IllegalAccessException {
try {
return field.get(target);
} catch (IllegalAccessException e) {
if (Reflector.canControlMemberAccessible()) {
field.setAccessible(true);
return field.get(target);
} else {
throw e;
}
}
}
@Override
public Class<?> getType() {
return field.getType();
}
}
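One way to avoid this exception-driven control flow is to make the field accessible eagerly, in the constructor, so that the hot path never throws. This is only a sketch of a hypothetical variant, not the actual MyBatis class; it trades MyBatis's lazy setAccessible call for an upfront one:

public class EagerGetFieldInvoker implements Invoker {
    private final Field field;

    public EagerGetFieldInvoker(Field field) {
        this.field = field;
        // make the field accessible once, up front, so invoke() does not
        // rely on catching IllegalAccessException on every call
        if (!field.isAccessible() && Reflector.canControlMemberAccessible()) {
            field.setAccessible(true);
        }
    }

    @Override
    public Object invoke(Object target, Object[] args) throws IllegalAccessException {
        return field.get(target); // may still throw if accessibility could not be granted
    }

    @Override
    public Class<?> getType() {
        return field.getType();
    }
}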
My interceptor:
@Intercepts({
@Signature(
type = StatementHandler.class,
method = "prepare",
args = {Connection.class, Integer.class})
})
public class SqlIdInterceptor implements Interceptor {
private static final int MAX_LEN = 256;
private final RoomboxLogger logger = RoomboxLogManager.getLogger();
@Override
public Object intercept(Invocation invocation) throws Throwable {
StatementHandler statementHandler = realTarget(invocation.getTarget());
MetaObject metaObject = SystemMetaObject.forObject(statementHandler);
BoundSql boundSql = (BoundSql) metaObject.getValue("delegate.boundSql");
String originalSql = boundSql.getSql();
MappedStatement mappedStatement =
(MappedStatement) metaObject.getValue("delegate.mappedStatement");
String id = mappedStatement.getId();
if (id != null) {
int len = id.length();
if (len > MAX_LEN) {
logger.warn("too long id", "id", id, "len", len);
}
}
String newSQL = "# " + id + "\n" + originalSql;
metaObject.setValue("delegate.boundSql.sql", newSQL);
return invocation.proceed();
}
@SuppressWarnings("unchecked")
public static <T> T realTarget(Object target) {
if (Proxy.isProxyClass(target.getClass())) {
MetaObject metaObject = SystemMetaObject.forObject(target);
return realTarget(metaObject.getValue("h.target"));
}
return (T) target;
}
}
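As a side note, StatementHandler exposes getBoundSql() directly on its interface, so at least the first reflective read in intercept() could be replaced with a plain method call (a sketch; the MappedStatement still has to be fetched through the MetaObject):

StatementHandler statementHandler = realTarget(invocation.getTarget());
// getBoundSql() is part of the StatementHandler interface, so no reflection is needed here
BoundSql boundSql = statementHandler.getBoundSql();
String originalSql = boundSql.getSql();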
Flame graph (screenshots omitted).
How can I avoid these exceptions being thrown? Is there any other way to resolve this problem?
Thanks.
I am new to Drools and I'm using Drools 7.12.0 to try to validate a set of meter readings, which look like this:
public class MeterReading {
private long id;
private LocalDate readDate;
private int value;
private String meterId;
private boolean valid;
/* Getters & Setters omitted */
}
As part of the validation I need to compare the values of each MeterReading with its immediate predecessor by readDate.
I first tried using 'accumulate'
when $mr: MeterReading()
$previousDate: LocalDate() from accumulate(MeterReading($pdate: readDate < $mr.readDate ), max($pdate))
then
System.out.println($mr.getId() + ":" + $previousDate);
end
but then discovered that this only returns the date of the previous meter read, not the object that contains it. I then tried a custom accumulate with
when
$mr: MeterReading()
$previous: MeterReading() from accumulate(
$p: MeterReading(id != $mr.id),
init( MeterReading prev = null; ),
action( if( prev == null || $p.readDate < prev.readDate) {
prev = $p;
}),
result(prev))
then
System.out.println($mr.getId() + ":" + $previous.getId() + ":" + $previous.getReadDate());
end
but this selects the earliest read in the whole set of meter readings, not the immediate predecessor. Can someone point me in the right direction as to what I should be doing or reading in order to select the immediate predecessor of each individual meter read?
Regards
After further research I found this article http://planet.jboss.org/post/how_to_implement_accumulate_functions which I used to write my own accumulate function:
public class PreviousReadFinder implements AccumulateFunction {
@Override
public Serializable createContext() {
return new PreviousReadFinderContext();
}
@Override
public void init(Serializable context) throws Exception {
PreviousReadFinderContext prfc = (PreviousReadFinderContext) context;
prfc.list.clear();
}
@Override
public void accumulate(Serializable context, Object value) {
PreviousReadFinderContext prfc = (PreviousReadFinderContext) context;
prfc.list.add((MeterReading) value);
}
@Override
public void reverse(Serializable context, Object value) throws Exception {
PreviousReadFinderContext prfc = (PreviousReadFinderContext) context;
prfc.list.remove((MeterReading) value);
}
@Override
public Object getResult(Serializable context) throws Exception {
PreviousReadFinderContext prfc = (PreviousReadFinderContext) context;
return prfc.findLatestReadDate();
}
@Override
public boolean supportsReverse() {
return true;
}
@Override
public Class<?> getResultType() {
return MeterReading.class;
}
@Override
public void writeExternal(ObjectOutput out) throws IOException {
}
@Override
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
}
private static class PreviousReadFinderContext implements Serializable {
List<MeterReading> list = new ArrayList<>();
public Object findLatestReadDate() {
Optional<MeterReading> optional = list.stream().max(Comparator.comparing(MeterReading::getReadDate));
if (optional.isPresent()) {
MeterReading to = optional.get();
return to;
}
return null;
}
}
}
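For the rule below to resolve previousReading, the accumulate function has to be registered with the rule builder under that name. A minimal sketch of the programmatic route (a drools.accumulate.function.* property in kmodule.xml works as well; the package name of PreviousReadFinder is whatever you chose):

// register the custom accumulate function under the name used in the DRL
KnowledgeBuilderConfiguration conf = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration();
conf.setOption(AccumulateFunctionOption.get("previousReading", new PreviousReadFinder()));
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(conf);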
and my rule is now
rule "Opening Read With Previous"
dialect "mvel"
when $mr: MeterReading()
$pmr: MeterReading() from accumulate($p: MeterReading(readDate < $mr.readDate ), previousReading($p))
then
System.out.println($mr.getId() + ":" + $pmr.getReadDate());
end
How do I write a rule to select the earliest meter reading in the set, i.e. the one which does not have a previous read?
I have implemented a custom MongoDB CodecProvider to map to my Java objects, using this GitHub gist. However, I cannot deserialize Date values; null values are returned instead. Here is the snippet of my custom encoder implementation for my POJO, AuditLog:
public void encode(BsonWriter writer, AuditLog value, EncoderContext encoderContext) {
Document document = new Document();
DateCodec dateCodec = new DateCodec(); // note: this codec is never actually used below
ObjectId id = value.getLogId();
Date timestamp = value.getTimestamp();
String deviceId = value.getDeviceId();
String userId = value.getUserId();
String requestId = value.getRequestId();
String operationType = value.getOperationType();
String message = value.getMessage();
String serviceName = value.getServiceName();
String className = value.getClassName();
if (null != id) {
document.put("_id", id);
}
if (null != timestamp) {
document.put("timestamp", timestamp);
}
if (null != deviceId) {
document.put("deviceId", deviceId);
}
if (null != userId) {
document.put("userId", userId);
}
if (null != requestId) {
document.put("requestId", requestId);
}
if (null != operationType) {
document.put("operationType", operationType);
}
if (null != message) {
document.put("message", message);
}
if (null != serviceName) {
document.put("serviceName", serviceName);
}
if (null != className) {
document.put("className", className);
}
documentCodec.encode(writer, document, encoderContext);
}
and the decoder:
public AuditLog decode(BsonReader reader, DecoderContext decoderContext) {
Document document = documentCodec.decode(reader, decoderContext);
System.out.println("document " + document);
AuditLog auditLog = new AuditLog();
auditLog.setLogId(document.getObjectId("_id"));
auditLog.setTimestamp(document.getDate("timestamp"));
auditLog.setDeviceId(document.getString("deviceId"));
auditLog.setUserId(document.getString("userId"));
auditLog.setRequestId(document.getString("requestId"));
auditLog.setOperationType(document.getString("operationType"));
auditLog.setMessage(document.getString("message"));
auditLog.setServiceName(document.getString("serviceName"));
auditLog.setClassName(document.getString("className"));
return auditLog;
}
and the way I am reading:
public void getAuthenticationEntries() {
Codec<Document> defaultDocumentCodec = MongoClient.getDefaultCodecRegistry().get(Document.class);
AuditLogCodec auditLogCodec = new AuditLogCodec(defaultDocumentCodec);
CodecRegistry codecRegistry = CodecRegistries.fromRegistries(MongoClient.getDefaultCodecRegistry(),
CodecRegistries.fromCodecs(auditLogCodec));
MongoClientOptions options = MongoClientOptions.builder().codecRegistry(codecRegistry).build();
MongoClient mc = new MongoClient("1.2.3.4:27017", options);
MongoCollection<AuditLog> collection = mc.getDatabase("myDB").getCollection("myCol",
AuditLog.class);
BasicDBObject neQuery = new BasicDBObject();
neQuery.put("myFiltr", new BasicDBObject("$eq", "mystuffr"));
FindIterable<AuditLog> cursor = collection.find(neQuery);
List<AuditLog> cleanList = new ArrayList<AuditLog>();
for (AuditLog object : cursor) {
System.out.println("timestamp: " + object.getTimestamp());
}
}
My pojo:
public class AuditLog implements Bson {
@Id
private ObjectId logId;
@JsonProperty("@timestamp")
private Date timestamp;
@JsonProperty("deviceId")
private String deviceId;
@JsonProperty("userId")
private String userId;
@JsonProperty("requestId")
private String requestId;
@JsonProperty("operationType")
private String operationType;
@JsonProperty("message")
private String message;
@JsonProperty("serviceName")
private String serviceName;
@JsonProperty("className")
private String className;
After thorough research, I fixed the problem of the returned null values. The mongoimport command had been used to import the log files into MongoDB from Elasticsearch; however, the time format was not converted to ISODate during the import. What I had to do was update the time field to an ISODate using the command below:
db.Collection.find().forEach(function (doc) {
doc.time = new ISODate(doc.time); // convert the stored string to an ISODate
db.Collection.save(doc); // persist the updated document
});
Here is a related question that tackles a similar challenge.
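As an aside, with Java driver 3.5+ the hand-written codec can often be replaced by the driver's built-in POJO support, which maps BSON dates to java.util.Date for you. A minimal sketch (this still requires the stored values to be real ISODates, as fixed above):

CodecRegistry pojoCodecRegistry = CodecRegistries.fromRegistries(
        MongoClient.getDefaultCodecRegistry(),
        CodecRegistries.fromProviders(PojoCodecProvider.builder().automatic(true).build()));
MongoClientOptions options = MongoClientOptions.builder().codecRegistry(pojoCodecRegistry).build();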
routes.conf
GET /api/v1/jurisdictions controllers.v1.JurisdictionController.getJurisdictions()
JurisdictionController
def getJurisdictions() = Action { implicit request =>
// this is returning None
val filters = request.queryString.get("filters")
val result = jurisdictionService.getJurisdictions()
Ok(serializer.serialize(result)).as("application/json")
}
Relevant request URI:
http://localhost:9000/api/v1/jurisdictions?filter[name]=Ryan&filter[number]=333333
How can I grab this query string filter?
You have to create a custom binder. This is a Java implementation, but it follows the same principle:
public class AgeRange implements QueryStringBindable<AgeRange> {
public Integer from;
public Integer to;
//A simple example of the binder’s use binding the :from and :to query string parameters:
@Override
public Optional<AgeRange> bind(String key, Map<String, String[]> data) {
try{
from = new Integer(data.get("from")[0]);
to = new Integer(data.get("to")[0]);
return Optional.of(this);
} catch (Exception e){ // no parameter match return None
return Optional.empty();
}
}
@Override
public String unbind(String key) {
return new StringBuilder()
.append("from=")
.append(from)
.append("&to=")
.append(to)
.toString();
}
}
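Alternatively, note that Play does not parse the brackets: filter[name] is kept as a literal key in the query string map, which is why request.queryString.get("filters") returns None. You can read the bracketed keys directly without a binder; a minimal sketch as a hypothetical Java controller method:

public Result getJurisdictions(Http.Request request) {
    // "filter[name]" is a literal map key, not a nested structure
    String name = request.getQueryString("filter[name]");     // "Ryan"
    String number = request.getQueryString("filter[number]"); // "333333"
    return ok(name + ":" + number);
}

In the Scala controller from the question, the equivalent lookup is request.queryString.get("filter[name]").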
Java documentation
Scala documentation
I know how to implement it using field assist and a search pattern, but that mechanism triggers a new search every time. I am not sure how this is implemented in Open Type, for example (I think with indexes). How can I use this cache to make classpath searches fast?
This is almost my entire solution. Each time, I call createProposalData:
private TreeSet<String> data;
private SearchParticipant[] participants = new SearchParticipant[] { SearchEngine
.getDefaultSearchParticipant() };
private SearchPattern pattern;
private IJavaProject prj;
private JavaSearchScope scope;
private SearchEngine searchEngine = new SearchEngine();
private SearchRequestor requestor = new SearchRequestor() {
@Override
public void acceptSearchMatch(SearchMatch match) throws CoreException {
String text = getText(match.getElement());
if (text != null) {
data.add(text);
}
}
public String getText(Object element) {
...
}
};
public ProposalEngine(IJavaProject prj) {
super();
this.prj = prj;
scope = new JavaSearchScope();
try {
scope.add(prj);
} catch (JavaModelException e) {
//
}
}
public Collection<String> createProposalData(final String patternText) {
data = new TreeSet<String>();
try {
pattern = getPatternForSearch(patternText);
searchEngine.search(pattern, participants, scope, requestor, null);
} catch (Exception e) {
// skip
}
return data;
}
protected SearchPattern getPatternForSearch(String patternText) {
return SearchPattern.createPattern(patternText,
IJavaSearchConstants.CLASS_AND_INTERFACE,
IJavaSearchConstants.DECLARATIONS,
SearchPattern.R_CAMELCASE_MATCH);
}
I believe that you are doing exactly what the Open Type dialog is doing. Indexing to speed up search happens underneath JDT API.
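If the remaining cost is repeating identical searches rather than the index lookup itself, a thin memoization layer over createProposalData is a cheap option. A sketch, assuming cached results may go stale until explicitly invalidated (e.g. from an IElementChangedListener when the classpath changes):

private final Map<String, Collection<String>> proposalCache = new ConcurrentHashMap<>();

public Collection<String> createProposalDataCached(final String patternText) {
    // reuse earlier results for the same pattern text; call proposalCache.clear()
    // whenever the project's classpath changes
    return proposalCache.computeIfAbsent(patternText, this::createProposalData);
}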