CacheBuilder using Guava cache for query result - guava

To reduce DB hits when reading data with this query, I am planning to keep the result in a cache. To do this I am using Guava caching.
studentController.java
public Map<String, Object> getSomeMethodName(Number departmentId, String departmentType) {
    ArrayList<Student> studentList = studentManager.getStudentListByDepartmentType(departmentId, departmentType);
    ----------
    ----------
}
StudentHibernateDao.java (Criteria query)
@Override
public ArrayList<Student> getStudentListByDepartmentType(Number departmentId, String departmentType) {
    Criteria criteria = sessionFactory.getCurrentSession().createCriteria(Student.class);
    criteria.add(Restrictions.eq("departmentId", departmentId));
    criteria.add(Restrictions.eq("departmentType", departmentType));
    ArrayList<Student> studentList = (ArrayList<Student>) criteria.list();
    return studentList;
}
To cache the Criteria query result I started off by building a CacheBuilder, like below.
private static LoadingCache<Number departmentId, String departmentType, ArrayList<Student>> studentListCache = CacheBuilder
        .newBuilder().expireAfterAccess(1, TimeUnit.MINUTES)
        .maximumSize(1000)
        .build(new CacheLoader<Number departmentId, String departmentType, ArrayList<Student>>() {
            public ArrayList<Student> load(String key) throws Exception {
                return getStudentListByDepartmentType(departmentId, departmentType);
            }
        });
Here I don't know where to put the CacheBuilder code and how to pass multiple key parameters (i.e. departmentId and departmentType) to the CacheLoader and call it.
Is this the correct way of caching using Guava? Am I missing anything?

Guava's cache only accepts two type parameters, a key and a value type. If you want your key to be a compound key then you need to build a new compound type to encapsulate it. Effectively it would need to look like this (I apologize for my syntax, I don't use Java that often):
// Compound key type (fields left non-private so the CacheLoader below can read them)
class CompoundDepartmentId {
    final Long departmentId;
    final String departmentType;

    public CompoundDepartmentId(Long departmentId, String departmentType) {
        this.departmentId = departmentId;
        this.departmentType = departmentType;
    }
}
private static LoadingCache<CompoundDepartmentId, ArrayList<Student>> studentListCache =
        CacheBuilder
                .newBuilder().expireAfterAccess(1, TimeUnit.MINUTES)
                .maximumSize(1000)
                .build(new CacheLoader<CompoundDepartmentId, ArrayList<Student>>() {
                    public ArrayList<Student> load(CompoundDepartmentId key) throws Exception {
                        return getStudentListByDepartmentType(key.departmentId, key.departmentType);
                    }
                });
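Two practical details worth adding, since the question also asks where the cache should live and how to call it: the cache can simply be a field on whichever class owns the query (the DAO or the manager), and the compound key must override equals() and hashCode(), otherwise every lookup is a cache miss and the DB is hit anyway. Below is a minimal sketch under those assumptions; Student is the Hibernate entity from the question, and getCachedStudentList is a hypothetical wrapper method name.
import java.util.ArrayList;
import java.util.Objects;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

final class CompoundDepartmentId {
    final Number departmentId;
    final String departmentType;

    CompoundDepartmentId(Number departmentId, String departmentType) {
        this.departmentId = departmentId;
        this.departmentType = departmentType;
    }

    // Required so that two keys built from the same id/type pair hit the same cache entry.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CompoundDepartmentId)) return false;
        CompoundDepartmentId other = (CompoundDepartmentId) o;
        return Objects.equals(departmentId, other.departmentId)
                && Objects.equals(departmentType, other.departmentType);
    }

    @Override
    public int hashCode() {
        return Objects.hash(departmentId, departmentType);
    }
}

public class StudentHibernateDao {

    // The loader falls back to the existing Criteria query on a cache miss.
    private final LoadingCache<CompoundDepartmentId, ArrayList<Student>> studentListCache =
            CacheBuilder.newBuilder()
                    .expireAfterAccess(1, TimeUnit.MINUTES)
                    .maximumSize(1000)
                    .build(new CacheLoader<CompoundDepartmentId, ArrayList<Student>>() {
                        @Override
                        public ArrayList<Student> load(CompoundDepartmentId key) {
                            return getStudentListByDepartmentType(key.departmentId, key.departmentType);
                        }
                    });

    // Callers go through the cache; the DB is only queried on a miss or after expiry.
    public ArrayList<Student> getCachedStudentList(Number departmentId, String departmentType) {
        return studentListCache.getUnchecked(new CompoundDepartmentId(departmentId, departmentType));
    }

    // Existing Criteria-query method from the question, unchanged.
    public ArrayList<Student> getStudentListByDepartmentType(Number departmentId, String departmentType) {
        // ... see the question's implementation ...
        return new ArrayList<Student>();
    }
}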


Storing object in Cosmos DB returns bad request?

I seem to be unable to store a simple object in Cosmos DB.
This is the database model:
public class HbModel
{
    public Guid id { get; set; }
    public string FormName { get; set; }
    public Dictionary<string, object> Form { get; set; }
}
and this is how I store the data into the database
private static async Task SeedData(HbModelContext dbContext)
{
    var cosmosClient = dbContext.Database.GetCosmosClient();
    cosmosClient.ClientOptions.AllowBulkExecution = true;
    if (dbContext.Set<HbModel>().FirstOrDefault() == null)
    {
        // No items could be picked, hence try seeding.
        var container = cosmosClient.GetContainer("hb", "hb_forms");
        HbModel first = new HbModel()
        {
            Id = Guid.NewGuid(), //Guid.Parse(x["guid"] as string),
            FormName = "asda",   //x["name"] as string,
            Form = new Dictionary<string, object>() //
        };
        string partitionKey = await GetPartitionKey(container.Database, container.Id);
        var response = await container.CreateItemAsync(first, new PartitionKey(partitionKey));
    }
    else
    {
        Console.WriteLine("Already have data");
    }
}
private static async Task<string> GetPartitionKey(Database database, string containerName)
{
    var query = new QueryDefinition("select * from c where c.id = @id")
        .WithParameter("@id", containerName);
    using var iterator = database.GetContainerQueryIterator<ContainerProperties>(query);
    while (iterator.HasMoreResults)
    {
        foreach (var container in await iterator.ReadNextAsync())
        {
            return container.PartitionKeyPath;
        }
    }
    return null;
}
but when creating the item I get this error message
A host error has occurred during startup operation '3b06df1f-000c-4223-a374-ca1dc48d59d1'.
[2022-07-11T15:02:12.071Z] Microsoft.Azure.Cosmos.Client: Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 24bac0ba-f1f7-411f-bc57-3f91110c4528; Reason: ();.
Value cannot be null. (Parameter 'provider')
I have no idea why it fails; the data should not be formatted incorrectly, should it? It also fails when there is data in the dictionary. What is going wrong?
There are several things wrong with the attached code.
You are enabling Bulk but you are not following the Bulk pattern
cosmosClient.ClientOptions.AllowBulkExecution = true is being set, but you are not parallelizing work. If you are going to use Bulk, make sure you are following the documentation and creating lists of concurrent Tasks. Reference: https://learn.microsoft.com/azure/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import#step-6-populate-a-list-of-concurrent-tasks. Otherwise don't use Bulk.
You are blocking threads.
The call to container.CreateItemAsync(first, new PartitionKey("/__partitionKey")).Result; is a blocking call, this can lead you to deadlocks. When using async operations (such as CreateItemAsync) please use the async/await pattern. Reference: https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait
The PartitionKey parameter should be the value not the definition.
On the call container.CreateItemAsync(first, new PartitionKey("/__partitionKey")) the Partition Key (second parameter) should be the value. Assuming your container has a Partition Key Definition of /__partitionKey then your documents should have a __partitionKey property and you should pass the Value in this parameter of such property in the current document. Reference: https://learn.microsoft.com/azure/cosmos-db/sql/troubleshoot-bad-request#wrong-partition-key-value
Optionally, if your documents do not contain such a value, just remove the parameter from the call:
container.CreateItemAsync(first)
Be advised though that this solution will not scale, you need to design your database with Partitioning in mind: https://learn.microsoft.com/azure/cosmos-db/partitioning-overview#choose-partitionkey
Missing id
The model has Id but Cosmos DB requires id, make sure the content of the document contains id when serialized.

How to optimize an SQL query in AnyLogic

I am generating agents in AnyLogic with parameter values coming from an SQL table. When an agent is generated at the Source, I do a lookup in the table and extract the corresponding values. For now it is working perfectly, but it is slowing down performance.
The structure of the table looks like this:
I am querying the data from this table with the code below:
double value_1 = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.avg_value)).get(0);

double value_min = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.min_value)).get(0);

double value_max = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.max_value)).get(0);

// Fetch the cluster number from the account table
int cluster_num = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.cluster)).get(0);

int act_no = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.actno)).get(0);

String pay_term = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.pay_term)).get(0);

String pay_term_prob = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.pay_term_prob)).get(0);
But this is very slow and I want to improve the performance. Someone mentioned that we can create a Java class and then load the table into a collection. Is there an example I can refer to? I am finding it difficult to put the whole thing together.
I have created a class using the code below:
public class Customer {
    private String act_code;
    private int actno;
    private double avg_value;
    private String pay_term;
    private String pay_term_prob;
    private int cluster;
    private double min_value;
    private double max_value;

    public String getact_code() {
        return act_code;
    }
    public void setact_code(String act_code) {
        this.act_code = act_code;
    }
    public int getactno() {
        return actno;
    }
    public void setactno(int actno) {
        this.actno = actno;
    }
    public double getavg_value() {
        return avg_value;
    }
    public void setavg_value(double avg_value) {
        this.avg_value = avg_value;
    }
    public String getpay_term() {
        return pay_term;
    }
    public void setpay_term(String pay_term) {
        this.pay_term = pay_term;
    }
    public String getpay_term_prob() {
        return pay_term_prob;
    }
    public void setpay_term_prob(String pay_term_prob) {
        this.pay_term_prob = pay_term_prob;
    }
    public int getcluster() {
        return cluster;
    }
    public void setcluster(int cluster) {
        this.cluster = cluster;
    }
    public double getmin_value() {
        return min_value;
    }
    public void setmin_value(double min_value) {
        this.min_value = min_value;
    }
    public double getmax_value() {
        return max_value;
    }
    public void setmax_value(double max_value) {
        this.max_value = max_value;
    }
}
I have created the collection object like this:
Please provide a reference for adding this database table into the collection as a next step; then I want to query the collection based on a condition.
You are on the right track here!
Every time you access the database to read data there is a computational overhead. So the best option is to access the database only once, at the start of the model. Create all the objects you need, store other data you will need later into Java classes, and then use the Java classes.
My suggestion is to create a Java class for each row in your table, like you have done. And then create a map object - like you have done, but with the key as String and the value as this new object.
Then on model start you can populate this map as follows:
List<Tuple> rows = selectFrom(customer).list();

for (Tuple row : rows) {
    Customer customerData = new Customer(
            row.get(customer.act_code),
            row.get(customer.actno),
            row.get(customer.avg_value)
    );
    mapOfCustomerData.put(customerData.getact_code(), customerData);
}
Where mapOfCustomerData is a LinkedHashMap<String, Customer> and customer is the name of the table; a sketch of the constructor this loop assumes follows below.
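For completeness, the asker's Customer class above only has setters, so the loop needs a matching constructor. Here is a minimal sketch of that constructor, covering just the three columns used in the loop (add the remaining columns the same way if you need them), together with the per-agent lookup that replaces the repeated selectFrom(...) calls; the field and column names are taken from the question.
// Constructor to add to the Customer class (covers the three columns used above).
public Customer(String act_code, int actno, double avg_value) {
    this.act_code = act_code;
    this.actno = actno;
    this.avg_value = avg_value;
}

// At the Source, one map lookup replaces the repeated database queries:
Customer c = mapOfCustomerData.get(z);
double value_1 = c.getavg_value();
int act_no = c.getactno();
// ... once the constructor also stores min_value, max_value, cluster, pay_term and
// pay_term_prob, the remaining values become getter calls in the same way.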
See the model created in this blog post for more details and an example of using a scenario object to store all the data from the database in a separate object.
Note: the code above is just an example; read the blog post for more details on using the AnyLogic Internal Database.
Before using Java classes, try this first: click the "index" tickbox for all columns that you query with a WHERE clause.

.NET Core MongoDB Update/Replace exclude fields

I'm trying to build a generic repository for all of the entities in my application. I have a BaseEntity with the properties Id, CreatorId and LastModifiedUserId. Now I'd like to update a record in a collection without having to modify the CreatorId field, so I have (from the client) an entity populated with some updated fields that I want to save.
I have 2 ways:
UpdateOneAsync
ReplaceOneAsync
The repo is created like this:
public class BaseRepository<T> : IRepository<T> where T : BaseEntity
{
    public async Task<T> Replace/Update(T entity) { ... }
}
So it's very hard to use Update (1), since I would have to retrieve all the fields of T with reflection and exclude the ones that I don't want to update.
With Replace (2) I cannot find a way to specify which fields to exclude when replacing an object with another. The Projection property in FindOneAndReplaceOptions<T>() only excludes fields on the document that is returned after the update.
Am I missing a way to exclude fields in the replace method, or should I retrieve the fields with reflection and use an Update?
I don't know if this solution is OK for you, but what I do is:
Declare in the base repo a method like:
public virtual bool Update(TEntity entity, string key)
{
    var result = _collection.ReplaceOne(x => x.Id.Equals(key), entity, new UpdateOptions
    {
        IsUpsert = false
    });
    return result.IsAcknowledged;
}
Then in your controller, when you want to update your entities, that is where you set the properties you want to change, like:
[HttpPut]
[ProducesResponseType(typeof(OrderDTO), 200)]
[ProducesResponseType(400)]
public async Task<ActionResult<bool>> Put([FromBody] OrderDTO value)
{
    try
    {
        if (!ModelState.IsValid) return BadRequest(ModelState);
        var orderOnDb = await _orderService.FindAsync(xx => xx.Id == value.Id);
        if (orderOnDb == null) return BadRequest(Constants.Error.NOT_FOUND_ON_MONGO);
        // SET PROPERTIES TO UPDATE (MANUALLY)
        orderOnDb.LastUpdateDate = DateTime.Now;
        orderOnDb.PaymentMethod = value.PaymentMethod;
        orderOnDb.StateHistory = value.StateHistory;
        // Save to db
        var res = await _orderRepo.UpdateAsync(orderOnDb, orderOnDb.Id);
        return res;
    }
    catch (Exception ex)
    {
        _logger.LogCritical(ex, ex.Message);
        throw; // rethrow without resetting the stack trace
    }
}
Hope it helps you!!!

Get and Set attribute values of a class using aspectJ

I am using AspectJ to add some fields to an existing class and to annotate them as well. I am using load-time weaving.
Example: I have a class Customer to which I am adding 3 String attributes. My issue is that I have to set some values and also read them before my business call.
I am trying the approach below.
In my .aj file I have added the following; my problem is in the Around advice: how do I get and set the attributes?
public String net.customers.PersonCustomer.getOfflineRiskCategory() {
    return OfflineRiskCategory;
}

public void net.customers.PersonCustomer.setOfflineRiskCategory(String offlineRiskCategory) {
    OfflineRiskCategory = offlineRiskCategory;
}

public String net.customers.PersonCustomer.getOnlineRiskCategory() {
    return OnlineRiskCategory;
}

public void net.customers.PersonCustomer.setOnlineRiskCategory(String onlineRiskCategory) {
    OnlineRiskCategory = onlineRiskCategory;
}

public String net.customers.PersonCustomer.getPersonCommercialStatus() {
    return PersonCommercialStatus;
}

public void net.customers.PersonCustomer.setPersonCommercialStatus(String personCommercialStatus) {
    PersonCommercialStatus = personCommercialStatus;
}
#Around("execution(* net.xxx.xxx.xxx.DataMigration.populateMap(..))")
public Object invoke(ProceedingJoinPoint joinPoint) throws Throwable {
Object arguments[] = joinPoint.getArgs();
if (arguments != null) {
HashMap<String, String> hMap = (HashMap) arguments[0];
PersonCustomer cus = (PersonCustomer) arguments[1];
return joinPoint.proceed();
}
If anyone has ideas please let me know.
regards,
FT
First suggestion: I would avoid mixing code-style AspectJ with annotation-style, i.e. instead of @Around, use around.
Second, instead of getting the arguments from the joinPoint, you should bind them in the pointcut:
Object around(Map map, PersonCustomer cust) :
        execution(* net.xxx.xxx.xxx.DataMigration.populateMap(Map, PersonCustomer)) && args(map, cust) {
    ...
    return proceed(map, cust);
}
Now, to answer your question: you also need to use intertype declarations to add new fields to your class, so do something like this:
private String net.customers.PersonCustomer.OfflineRiskCategory;
private String net.customers.PersonCustomer.OnlineRiskCategory;
private String net.customers.PersonCustomer.PersonCommercialStatus;
Note that the private keyword here means private to the aspect, not to the class that you declare it on.
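To directly address the "how do I get and set the attribute" part of the question: once the fields and their getters/setters are introduced through intertype declarations, the bound PersonCustomer argument can be used like any other object inside the advice. Below is a minimal code-style sketch combining the pieces above; it assumes the question's PersonCustomer and DataMigration classes, and the aspect name and map keys are hypothetical placeholders.
public aspect PersonCustomerAspect {

    // Intertype field declarations (private to the aspect, as noted above).
    private String net.customers.PersonCustomer.OfflineRiskCategory;
    private String net.customers.PersonCustomer.OnlineRiskCategory;
    private String net.customers.PersonCustomer.PersonCommercialStatus;

    // ... intertype getters/setters from the question go here ...

    Object around(Map map, PersonCustomer cust) :
            execution(* net.xxx.xxx.xxx.DataMigration.populateMap(Map, PersonCustomer)) && args(map, cust) {
        // Read an attribute before the business call.
        System.out.println("previous status: " + cust.getPersonCommercialStatus());

        // Set the new attributes from the bound map (keys are hypothetical).
        cust.setOfflineRiskCategory((String) map.get("OFFLINE_RISK"));
        cust.setOnlineRiskCategory((String) map.get("ONLINE_RISK"));
        cust.setPersonCommercialStatus((String) map.get("COMMERCIAL_STATUS"));

        return proceed(map, cust);
    }
}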

Case-insensitive indexing with Hibernate-Search?

Is there a simple way to make Hibernate Search index all its values in lower case, instead of the default mixed case?
I'm using the @Field annotation, but I can't seem to find an application-level setting for this.
Fool that I am! The StandardAnalyzer class is already indexing in lowercase. It's just a matter of setting the search terms in lowercase too; I was assuming the query would do that.
However, if a different analyzer were to be used, application-wide, then it can be set using the property hibernate.search.analyzer.
Lowercasing, term splitting, removing common terms and many more advanced language processing functions are applied by the Analyzer.
Usually you should process user input meant to match indexed strings with the same Analyzer used at indexing; configuring hibernate.search.analyzer sets the default (global) Analyzer, but you can customize it per index, per entity type, per field and even on different entity instances.
It is for example useful to have language specific analysis, so to process Chinese descriptions with Chinese specific routines, Italian descriptions with Italian tokenizers.
The default analyzer is ok for most use cases, and does lowercasing and splits terms on whitespace.
Consider as well that when using the Lucene QueryParser, the API requires you to provide the appropriate Analyzer.
When using the Hibernate Search QueryBuilder it attempts to apply the correct Analyzer on each field; see also http://docs.jboss.org/hibernate/search/4.1/reference/en-US/html_single/#search-query-querydsl .
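To make the "use the same Analyzer at query time" advice concrete, here is a minimal sketch using the Hibernate Search 4.x QueryBuilder; Book and its @Field-annotated title property are assumed example mappings, not anything from the question. The keyword query analyzes the user input with the same analyzer that was applied to the field at indexing time, so "HIBERNATE" matches documents indexed as "hibernate".
import java.util.List;

import org.apache.lucene.search.Query;
import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;
import org.hibernate.search.query.dsl.QueryBuilder;

public class TitleSearch {

    @SuppressWarnings("unchecked")
    public List<Book> findByTitle(Session session, String userInput) {
        FullTextSession fullTextSession = Search.getFullTextSession(session);

        QueryBuilder qb = fullTextSession.getSearchFactory()
                .buildQueryBuilder()
                .forEntity(Book.class)
                .get();

        // keyword() runs userInput through the analyzer configured for "title".
        Query luceneQuery = qb.keyword()
                .onField("title")
                .matching(userInput)
                .createQuery();

        return fullTextSession.createFullTextQuery(luceneQuery, Book.class).list();
    }
}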
There are multiple ways to make sorting case-insensitive on a String-type field.
1. The first way is to add the @Fields annotation to the field/property on the entity, like:
@Fields({@Field(index = Index.YES, analyze = Analyze.YES, store = Store.YES),
         @Field(index = Index.YES, name = "nameSort", analyzer = @Analyzer(impl = KeywordAnalyzer.class), store = Store.YES)})
private String name;
Suppose you have a name property with a custom analyzer and you want to sort on it; that is not possible directly, so you add a second field to the index, nameSort, and apply the sort to that field. Use the KeywordAnalyzer class for it because it does not tokenize the field; to make the sort fully case-insensitive, the analyzer for that field should also apply a lowercase filter factory.
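As a usage sketch for the two-field mapping above: the query can still target the analyzed name field while the sort is applied to the extra nameSort field. MyEntity is a placeholder for the entity declaring the property, the APIs are Hibernate Search 4.x / Lucene 3.x, and the ordering is only case-insensitive if nameSort is indexed lowercased as described.
import java.util.List;

import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.hibernate.search.FullTextQuery;
import org.hibernate.search.FullTextSession;

public class SortedNameQuery {

    @SuppressWarnings("unchecked")
    public List<MyEntity> listSortedByName(FullTextSession fullTextSession,
                                           org.apache.lucene.search.Query luceneQuery) {
        FullTextQuery query = fullTextSession.createFullTextQuery(luceneQuery, MyEntity.class);

        // Sort on the un-tokenized "nameSort" field rather than the analyzed "name" field.
        query.setSort(new Sort(new SortField("nameSort", SortField.STRING)));

        return query.list();
    }
}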
2. The second way is to implement your own comparison class for sorting, like:
@Override
public FieldComparator newComparator(String field, int numHits, int sortPos, boolean reversed) throws IOException {
    return new StringValComparator(numHits, field);
}
Make one class that extends FieldComparatorSource and implement the method above. Then create a new class named StringValComparator that extends FieldComparator and implements the following methods:
class StringValComparator extends FieldComparator {

    private String[] values;
    private String[] currentReaderValues;
    private final String field;
    private String bottom;

    StringValComparator(int numHits, String field) {
        values = new String[numHits];
        this.field = field;
    }

    @Override
    public int compare(int slot1, int slot2) {
        final String val1 = values[slot1];
        final String val2 = values[slot2];
        if (val1 == null) {
            if (val2 == null) {
                return 0;
            }
            return -1;
        } else if (val2 == null) {
            return 1;
        }
        return val1.toLowerCase().compareTo(val2.toLowerCase());
    }

    @Override
    public int compareBottom(int doc) {
        final String val2 = currentReaderValues[doc];
        if (bottom == null) {
            if (val2 == null) {
                return 0;
            }
            return -1;
        } else if (val2 == null) {
            return 1;
        }
        return bottom.toLowerCase().compareTo(val2.toLowerCase());
    }

    @Override
    public void copy(int slot, int doc) {
        values[slot] = currentReaderValues[doc];
    }

    @Override
    public void setNextReader(IndexReader reader, int docBase) throws IOException {
        currentReaderValues = FieldCache.DEFAULT.getStrings(reader, field);
    }

    @Override
    public void setBottom(final int bottom) {
        this.bottom = values[bottom];
    }

    @Override
    public String value(int slot) {
        return values[slot];
    }
}
Apply the sort on the field like:
new SortField("name", new StringCaseInsensitiveComparator(), true);
where StringCaseInsensitiveComparator is the FieldComparatorSource subclass from this second approach that returns StringValComparator.