How to optimize an SQL query in AnyLogic

I am generating agents in AnyLogic with parameter values coming from an SQL table. When an agent is generated at the source, I do a lookup in the table and extract the corresponding values. For now it works correctly, but it is slowing down the performance.
The structure of the table looks like this:
I am querying the data from this table with the code below:
// Each selectFrom(...) below is a separate query against the database
double value_1 = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.avg_value)).get(0);
double value_min = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.min_value)).get(0);
double value_max = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.max_value)).get(0);
// Fetch the cluster number from the account table
int cluster_num = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.cluster)).get(0);
int act_no = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.actno)).get(0);
String pay_term = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.pay_term)).get(0);
String pay_term_prob = (selectFrom(account_details)
        .where(account_details.act_code.eq(z))
        .list(account_details.pay_term_prob)).get(0);
But this is very slow, and I want to improve the performance. Someone mentioned that we can create a Java class and then load the table into a collection. Is there an example I can refer to? I am finding it difficult to put the whole thing together.
I have created a class using the code below:
public class Customer {
    private String act_code;
    private int actno;
    private double avg_value;
    private String pay_term;
    private String pay_term_prob;
    private int cluster;
    private double min_value;
    private double max_value;

    public String getact_code() {
        return act_code;
    }
    public void setact_code(String act_code) {
        this.act_code = act_code;
    }
    public int getactno() {
        return actno;
    }
    public void setactno(int actno) {
        this.actno = actno;
    }
    public double getavg_value() {
        return avg_value;
    }
    public void setavg_value(double avg_value) {
        this.avg_value = avg_value;
    }
    public String getpay_term() {
        return pay_term;
    }
    public void setpay_term(String pay_term) {
        this.pay_term = pay_term;
    }
    public String getpay_term_prob() {
        return pay_term_prob;
    }
    public void setpay_term_prob(String pay_term_prob) {
        this.pay_term_prob = pay_term_prob;
    }
    public int getcluster() {
        return cluster;
    }
    public void setcluster(int cluster) {
        this.cluster = cluster;
    }
    public double getmin_value() {
        return min_value;
    }
    public void setmin_value(double min_value) {
        this.min_value = min_value;
    }
    public double getmax_value() {
        return max_value;
    }
    public void setmax_value(double max_value) {
        this.max_value = max_value;
    }
}
I have also created the collection object (a map).
Please provide a reference for adding this database table into the collection as the next step; after that I want to query the collection based on a condition.

You are on the right track here!
Every time you access the database to read data there is computational overhead, so the best option is to access the database only once, at the start of the model: create all the objects you need, store the data you will need later in Java classes, and then work with those classes.
My suggestion is to create a Java class for each row in your table, as you have done, and then create a map object - again as you have done - with a String key and this new object as the value.
Then on model start you can populate this map as follows:
List<Tuple> rows = selectFrom(customer).list();
for (Tuple row : rows) {
    // Assumes you add a matching constructor to Customer (only three
    // columns shown here; pass the remaining ones the same way)
    Customer customerData = new Customer(
            row.get( customer.act_code ),
            row.get( customer.actno ),
            row.get( customer.avg_value )
    );
    mapOfCustomerData.put(customerData.getact_code(), customerData);
}
Where mapOfCustomerData is a LinkedHashMap<String, Customer> and customer is the name of the table.
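With the map populated once at model startup, each agent's lookup becomes a single in-memory call. A minimal sketch, using the getter names from the Customer class above:

// One map lookup replaces the seven selectFrom() round trips per agent
Customer c = mapOfCustomerData.get(z);
double value_1 = c.getavg_value();
double value_min = c.getmin_value();
double value_max = c.getmax_value();
int cluster_num = c.getcluster();
int act_no = c.getactno();
String pay_term = c.getpay_term();
String pay_term_prob = c.getpay_term_prob();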
See the model created in this blog post for more details and an example of using a scenario object to store all the data from the database in a separate object.
Note: the code above is just an example - read the blog post for more details on using the AnyLogic internal database.

Before using Java classes, try this first: tick the "index" checkbox for all columns that you query with a WHERE clause.


Can Kaitai Struct be used to describe TLV data without creating new types for each field?

I'm reverse engineering a file format that stores each field as TLV blocks (type, length, value).
The fields do not have to be in order, or even present at all. Each field's presence is denoted by a sentinel, which is a 16-bit type identifier and a 32-bit end offset. There are hundreds of unique identifiers, but a decent chunk of them are just single primitive values. Aside from denoting the type, they also identify which field the data should be stored in.
It is also worth noting that there will never be a duplicate id on a parent structure. The only time it can occur is when there are multiple objects of the same type in an array/list.
I have successfully written a Kaitai definition for one of them:
meta:
  id: struct_02ea
  endian: le
seq:
  - id: unk_00
    type: s4
  - id: fields
    type: field_block
    repeat: eos
types:
  sentinel:
    seq:
      - id: id
        type: u2
      - id: end_offset
        type: u4
  field_block:
    seq:
      - id: sentinel
        type: sentinel
      - id: value
        type:
          switch-on: sentinel.id
          cases:
            0xF0: u1
            0xF1: u1
            0xF2: u1
            0xF3: u1
            0xF4: u4
            0xF5: u4
        size: sentinel.end_offset - _root._io.pos
Handling things this way does work, and I could likely map out the entire format like this. However, when it comes time to compile this definition into another format, things get nasty.
Since I am wrapping each field in a field_block, the generated code stores these values in that type of object. This is incredibly inefficient when half of the generated field_block objects store a single integer. It would also require the consuming code to iterate through a list of each field block in order to get the actual field's value.
Ideally, I would like to define this structure so that the sentinels are only parsed while Kaitai is reading the data, and each value would be mapped to a field on the parent structure.
Is this possible? This technology is really cool, and I'd love to use it in my project, but I feel like the overhead that this is generating is a lot more trouble than it's worth.
Here's an example of the definition when compiled into C#:
using System.Collections.Generic;

namespace Kaitai
{
    public partial class Struct02ea : KaitaiStruct
    {
        public static Struct02ea FromFile(string fileName)
        {
            return new Struct02ea(new KaitaiStream(fileName));
        }

        public Struct02ea(KaitaiStream p__io, KaitaiStruct p__parent = null, Struct02ea p__root = null) : base(p__io)
        {
            m_parent = p__parent;
            m_root = p__root ?? this;
            _read();
        }

        private void _read()
        {
            _unk00 = m_io.ReadS4le();
            _fields = new List<FieldBlock>();
            {
                var i = 0;
                while (!m_io.IsEof) {
                    _fields.Add(new FieldBlock(m_io, this, m_root));
                    i++;
                }
            }
        }

        public partial class Sentinel : KaitaiStruct
        {
            public static Sentinel FromFile(string fileName)
            {
                return new Sentinel(new KaitaiStream(fileName));
            }

            public Sentinel(KaitaiStream p__io, Struct02ea.FieldBlock p__parent = null, Struct02ea p__root = null) : base(p__io)
            {
                m_parent = p__parent;
                m_root = p__root;
                _read();
            }

            private void _read()
            {
                _id = m_io.ReadU2le();
                _endOffset = m_io.ReadU4le();
            }

            private ushort _id;
            private uint _endOffset;
            private Struct02ea m_root;
            private Struct02ea.FieldBlock m_parent;
            public ushort Id { get { return _id; } }
            public uint EndOffset { get { return _endOffset; } }
            public Struct02ea M_Root { get { return m_root; } }
            public Struct02ea.FieldBlock M_Parent { get { return m_parent; } }
        }

        public partial class FieldBlock : KaitaiStruct
        {
            public static FieldBlock FromFile(string fileName)
            {
                return new FieldBlock(new KaitaiStream(fileName));
            }

            public FieldBlock(KaitaiStream p__io, Struct02ea p__parent = null, Struct02ea p__root = null) : base(p__io)
            {
                m_parent = p__parent;
                m_root = p__root;
                _read();
            }

            private void _read()
            {
                _sentinel = new Sentinel(m_io, this, m_root);
                switch (Sentinel.Id) {
                case 243: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 244: {
                    _value = m_io.ReadU4le();
                    break;
                }
                case 245: {
                    _value = m_io.ReadU4le();
                    break;
                }
                case 241: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 240: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 242: {
                    _value = m_io.ReadU1();
                    break;
                }
                default: {
                    _value = m_io.ReadBytes((Sentinel.EndOffset - M_Root.M_Io.Pos));
                    break;
                }
                }
            }

            private Sentinel _sentinel;
            private object _value;
            private Struct02ea m_root;
            private Struct02ea m_parent;
            public Sentinel Sentinel { get { return _sentinel; } }
            public object Value { get { return _value; } }
            public Struct02ea M_Root { get { return m_root; } }
            public Struct02ea M_Parent { get { return m_parent; } }
        }

        private int _unk00;
        private List<FieldBlock> _fields;
        private Struct02ea m_root;
        private KaitaiStruct m_parent;
        public int Unk00 { get { return _unk00; } }
        public List<FieldBlock> Fields { get { return _fields; } }
        public Struct02ea M_Root { get { return m_root; } }
        public KaitaiStruct M_Parent { get { return m_parent; } }
    }
}
Affiliation disclaimer: I'm a Kaitai Struct maintainer (see my GitHub profile).
Since I am wrapping each field in a field_block, the generated code stores these values in that type of object. This is incredibly inefficient when half of the generated field_block objects store a single integer. It would also require the consuming code to iterate through a list of each field block in order to get the actual field's value.
I think that rather than trying to describe the entire format with an ultimate Kaitai Struct specification, it's better not to let the generated code parse all the fields automatically. Move the parsing control to your application code, where you use the type Struct02ea.FieldBlock that represents an individual field, and basically replicate the "repeat until end of stream" loop that the generated code you posted does:
_fields = new List<FieldBlock>();
{
    var i = 0;
    while (!m_io.IsEof) {
        _fields.Add(new FieldBlock(m_io, this, m_root));
        i++;
    }
}
The advantage of doing so is that you can adjust the loop to fit your needs. To avoid the overhead you describe, you'll probably want to keep the Struct02ea.FieldBlock object in a local variable inside the loop body, pull only the values you care about (save them in your compact, consumer-friendly output structures) and let it leave the scope after the loop iteration ends. This will allow each original FieldBlock object to get garbage-collected once you process it, so the overhead they have will be limited to a single instance and not multiplied by the number of fields in the file.
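For illustration, here is a minimal sketch of such a consumer-controlled loop in Java (Kaitai Struct generates Java too; the generated class and accessor names below mirror the C# output above and are assumptions, as are CompactRecord and the specific field ids handled):

import io.kaitai.struct.ByteBufferKaitaiStream;
import io.kaitai.struct.KaitaiStream;

public class FieldReader {
    // Hypothetical compact structure the consumer actually wants
    static class CompactRecord { int flagA; long countB; }

    public static CompactRecord readFields(String fileName) throws Exception {
        KaitaiStream io = new ByteBufferKaitaiStream(fileName);
        Struct02ea root = new Struct02ea(io); // with if: false, this reads only unk_00
        CompactRecord out = new CompactRecord();
        while (!io.isEof()) {
            // Local variable: each block becomes garbage-collectable after its iteration
            Struct02ea.FieldBlock block = new Struct02ea.FieldBlock(io, root, root);
            switch (block.sentinel().id()) {
                case 0xF0: out.flagA = (Integer) block.value(); break; // a u1 field
                case 0xF4: out.countB = (Long) block.value(); break;   // a u4 field
                default: break; // unknown id: its bytes were already consumed via end_offset
            }
        }
        return out;
    }
}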
The most straightforward and seamless way to prevent the Kaitai Struct-generated code from parsing the fields (but otherwise keep everything the same) is to add if: false in the KSY specification, as @webbnh suggested in a GitHub issue:
seq:
  - id: unk_00
    type: s4
  - id: fields
    type: field_block
    repeat: eos
    if: false # add this
The if: false works better than omitting the attribute from seq entirely, because kaitai-struct-compiler has occasional trouble with unused types (when compiling a KSY spec with unused types, you may get an error "Unable to derive _parent type in ..." due to a compiler bug). With the if: false trick you can't run into that, because the field_block type is no longer unused.

JPA : Update operation without JPA query or entitymanager

I am learning JPA. I found out that JpaRepository already provides methods such as save, saveAll, find, findAll, etc., but there is nothing like update.
I came across a scenario where I need to update a row in a table if the value is already present, and otherwise insert a new record.
I created:
@Repository
public interface ProductInfoRepository
    extends JpaRepository<ProductInfoTable, String>
{
    Optional<ProductInfoTable> findByProductName(String productname);
}

public class ProductServiceImpl
    implements ProductService
{
    @Autowired
    private ProductInfoRepository productRepository;

    @Override
    public ResponseMessage saveProductDetail(ProductInfo productInfo)
    {
        Optional<ProductInfoTable> productInfoinTable =
            productRepository.findByProductName(productInfo.getProductName());
        ProductInfoTable productInfoDetail;
        Integer quantity = productInfo.getQuantity();
        if (productInfoinTable.isPresent())
        {
            quantity += productInfoinTable.get().getQuantity();
        }
        productInfoDetail =
            new ProductInfoTable(productInfo.getProductName(), quantity + productInfo.getQuantity(),
                productInfo.getImage());
        productRepository.save(productInfoDetail);
        return new ResponseMessage("product saved successfully");
    }
}
As you can see, I can save the record if it is new, but when I try to save a record that is already present in the table I get a primary-key-violation error, which is expected. From what I've read, the update can be done with an EntityManager object or a JPA query, but what if I don't want to use either of them? Is there any other way to do this?
Update: I also added an EntityManager instance and tried to merge the entity:
@Override
public ResponseMessage saveProductDetail(ProductInfo productInfo)
{
    Optional<ProductInfoTable> productInfoinTable =
        productRepository.findByProductName(productInfo.getProductName());
    ProductInfoTable productInfoDetail;
    Integer price = productInfo.getPrice();
    if (productInfoinTable.isPresent())
    {
        price = productInfoinTable.get().getPrice();
    }
    productInfoDetail =
        new ProductInfoTable(productInfo.getProductName(), price, productInfo.getImage());
    em.merge(productInfoDetail);
    return new ResponseMessage("product saved successfully");
}
But there is no error and no UPDATE statements in the log. Any possible reasons for that?
I suspect you need code like this to solve the problem
public ResponseMessage saveProductDetail(ProductInfo productInfo)
{
    Optional<ProductInfoTable> productInfoinTable =
        productRepository.findByProductName(productInfo.getProductName());
    final ProductInfoTable productInfoDetail;
    if (productInfoinTable.isPresent()) {
        // to edit
        productInfoDetail = productInfoinTable.get();
        Integer quantity = productInfoDetail.getQuantity() + productInfo.getQuantity();
        productInfoDetail.setQuantity(quantity);
    } else {
        // to create new
        productInfoDetail = new ProductInfoTable(productInfo.getProductName(),
            productInfo.getQuantity(), productInfo.getImage());
    }
    productRepository.save(productInfoDetail);
    return new ResponseMessage("product saved successfully");
}
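As for the updated question (em.merge producing no SQL and no error): a likely cause, assuming the configuration not shown here, is that the merge runs outside an active transaction, so the persistence context is never flushed. A minimal sketch under that assumption, using Spring's @Transactional with the body trimmed to the merge call:

@Override
@Transactional // org.springframework.transaction.annotation.Transactional
public ResponseMessage saveProductDetail(ProductInfo productInfo)
{
    ProductInfoTable productInfoDetail =
        new ProductInfoTable(productInfo.getProductName(), productInfo.getPrice(),
            productInfo.getImage());
    em.merge(productInfoDetail); // flushed as INSERT/UPDATE when the transaction commits
    return new ResponseMessage("product saved successfully");
}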

Nullpointer exception in CompositePlanningValueRangeDescriptor.extractValues

I'm facing an NPE when trying to solve my solution:
Exception in thread "main" java.lang.NullPointerException
at java.util.ArrayList.addAll(ArrayList.java:472)
at org.drools.planner.core.domain.variable.CompositePlanningValueRangeDescriptor.extractValues(CompositePlanningValueRangeDescriptor.java:46)
at org.drools.planner.core.domain.variable.PlanningVariableDescriptor.extractPlanningValues(PlanningVariableDescriptor.java:259)
at org.drools.planner.core.heuristic.selector.variable.PlanningValueSelector.initSelectedPlanningValueList(PlanningValueSelector.java:91)
at org.drools.planner.core.heuristic.selector.variable.PlanningValueSelector.phaseStarted(PlanningValueSelector.java:73)
at org.drools.planner.core.heuristic.selector.variable.PlanningValueWalker.phaseStarted(PlanningValueWalker.java:64)
at org.drools.planner.core.heuristic.selector.variable.PlanningVariableWalker.phaseStarted(PlanningVariableWalker.java:62)
at org.drools.planner.core.constructionheuristic.greedyFit.decider.DefaultGreedyDecider.phaseStarted(DefaultGreedyDecider.java:62)
at org.drools.planner.core.constructionheuristic.greedyFit.DefaultGreedyFitSolverPhase.phaseStarted(DefaultGreedyFitSolverPhase.java:112)
at org.drools.planner.core.constructionheuristic.greedyFit.DefaultGreedyFitSolverPhase.solve(DefaultGreedyFitSolverPhase.java:57)
at org.drools.planner.core.solver.DefaultSolver.runSolverPhases(DefaultSolver.java:190)
at org.drools.planner.core.solver.DefaultSolver.solve(DefaultSolver.java:155)
at de.haw.dsms.applicationcore.planning.BalancingApp.main(BalancingApp.java:47)
I have annotated my planning entity with the following annotations to collect the value range from two lists in the solution:
@PlanningEntity
public class ScheduleItem implements Cloneable {

    private ChangeOfferEvent item;

    @PlanningVariable()
    @ValueRanges({
        @ValueRange(type = ValueRangeType.FROM_SOLUTION_PROPERTY, solutionProperty = "offers"),
        @ValueRange(type = ValueRangeType.FROM_SOLUTION_PROPERTY, solutionProperty = "dummies")
    })
    public ChangeOfferEvent getItem() {
        return item;
    }

    public void setItem(ChangeOfferEvent item) {
        this.item = item;
    }

    public ScheduleItem() {
        this.item = null;
    }
    ...
This is the solution:
public class ProductionConsumptionBalancing implements Solution<HardAndSoftLongScore> {

    /*
     * Problem facts
     */

    // The grid entity offers
    private List<ChangeOfferEvent> offers;
    // Placeholder events to represent "not used schedule items"
    private List<PlaceholderOfferEvent> dummies;
    // The total energy consumption in the grid [Watt]
    private TotalEnergyConsumption totalElectricityConsumption;
    // The total energy production in the grid [Watt]
    private TotalEnergyProduction totalElectricityProduction;

    public List<ChangeOfferEvent> getOffers() {
        return offers;
    }
    public void setOffers(List<ChangeOfferEvent> offers) {
        this.offers = offers;
    }
    public List<PlaceholderOfferEvent> getDummies() {
        return dummies;
    }
    public void setDummies(List<PlaceholderOfferEvent> dummies) {
        this.dummies = dummies;
    }
    public TotalEnergyConsumption getTotalElectricityConsumption() {
        return totalElectricityConsumption;
    }
    public void setTotalElectricityConsumption(
            TotalEnergyConsumption totalElectricityConsumption) {
        this.totalElectricityConsumption = totalElectricityConsumption;
    }
    public TotalEnergyProduction getTotalElectricityProduction() {
        return totalElectricityProduction;
    }
    public void setTotalElectricityProduction(
            TotalEnergyProduction totalElectricityProduction) {
        this.totalElectricityProduction = totalElectricityProduction;
    }

    /*
     * Problem entities
     */

    private List<ScheduleItem> schedule;

    @PlanningEntityCollectionProperty
    public List<ScheduleItem> getSchedule() {
        return schedule;
    }
    public void setSchedule(List<ScheduleItem> schedule) {
        this.schedule = schedule;
    }
    ...
The strange thing is that during debugging I discovered that it is the parameter "planningEntity" that is null, not the values in the solution.
Has anybody encountered the same issue, or does anyone know how to solve this?
Thanks and best regards!
PS:
It seems like this is coming from the method initSelectedPlanningValueList:
private void initSelectedPlanningValueList(AbstractSolverPhaseScope phaseScope) {
    if (planningVariableDescriptor.isPlanningValuesCacheable()) {
        Collection<?> planningValues = planningVariableDescriptor.extractPlanningValues(
                phaseScope.getWorkingSolution(), null);
        cachedPlanningValues = applySelectionOrder(planningValues);
    } else {
        cachedPlanningValues = null;
    }
}
PPS:
Problem solved.
The issue appeared because I forgot to link the clone's dummies attribute to the original dummies list, so the dummies list in the cloned solution was null.
@Override
public Solution<HardAndSoftLongScore> cloneSolution() {
    ProductionConsumptionBalancing clone = new ProductionConsumptionBalancing();
    // Transfer consumption and production values
    clone.totalElectricityConsumption = this.totalElectricityConsumption;
    clone.totalElectricityProduction = this.totalElectricityProduction;
    // Shallow copy offer list (shouldn't change)
    clone.offers = this.offers;
    // Shallow copy of dummy list
    clone.dummies = this.dummies;
    // Deep copy schedule
    ...
Starting from 6.0.0.Beta1, OptaPlanner (formerly Drools Planner) supports automatic cloning out of the box, so you no longer need to implement the cloneSolution() method: the planner figures it out automatically. And because you no longer need to implement the method, you can't implement it incorrectly.
Note that you can still implement a custom clone method if you really want to.

Case-insensitive indexing with Hibernate-Search?

Is there a simple way to make Hibernate Search index all its values in lower case, instead of the default mixed case?
I'm using the @Field annotation, but I can't seem to find an application-level setting for this.
Fool that I am! The StandardAnalyzer class already indexes in lowercase; it's just a matter of lowercasing the search terms too. I was assuming the query would do that.
However, if a different analyzer were to be used application-wide, it can be set with the property hibernate.search.analyzer.
Lowercasing, term splitting, removal of common terms, and many more advanced language-processing functions are applied by the Analyzer.
Usually you should process user input meant to match indexed strings with the same Analyzer used at indexing time. Configuring hibernate.search.analyzer sets the default (global) Analyzer, but you can customize it per index, per entity type, per field, and even on different entity instances.
This is useful, for example, for language-specific analysis: processing Chinese descriptions with Chinese-specific routines, Italian descriptions with Italian tokenizers.
The default analyzer is fine for most use cases; it lowercases and splits terms on whitespace.
Consider as well that when using the Lucene QueryParser, the API asks you for the appropriate Analyzer.
When using the Hibernate Search QueryBuilder, it attempts to apply the correct Analyzer on each field; see also http://docs.jboss.org/hibernate/search/4.1/reference/en-US/html_single/#search-query-querydsl .
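For example, a minimal sketch (the entity, field name, and entityManager variable are hypothetical): the QueryBuilder DSL runs the input through the field's analyzer, so the term is lowercased the same way it was at indexing time.

FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder()
        .forEntity(Book.class).get();
// "Hibernate" is analyzed with the same analyzer used to index "title",
// so it matches the lowercased terms in the index
org.apache.lucene.search.Query query =
        qb.keyword().onField("title").matching("Hibernate").createQuery();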
There are multiple ways to make sorting case-insensitive on a string field.
1. The first way is to add the @Fields annotation to the field/property of the entity, like:

@Fields({@Field(index = Index.YES, analyze = Analyze.YES, store = Store.YES),
         @Field(index = Index.YES, name = "nameSort", analyzer = @Analyzer(impl = KeywordAnalyzer.class), store = Store.YES)})
private String name;

Suppose you have a name property with a custom analyzer and want to sort on it. That is not possible directly, but you can add a second field, nameSort, to the index and apply the sort to that field.
Use KeywordAnalyzer for it because it does not tokenize the field; if you also need the sort key lowercased, define a custom analyzer that combines a keyword tokenizer with a lowercase filter.
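With that in place, the sort might look like this (a sketch against the Lucene 3.x-era API used below; "nameSort" is the extra field from the annotation above, and fullTextQuery is an assumed org.hibernate.search.FullTextQuery):

// Sort on the untokenized copy of the field, not the analyzed one
Sort sort = new Sort(new SortField("nameSort", SortField.STRING));
fullTextQuery.setSort(sort);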
2. The second way is to implement your own comparison for sorting:

@Override
public FieldComparator newComparator(String field, int numHits, int sortPos, boolean reversed) throws IOException {
    return new StringValComparator(numHits, field);
}

Create a class (here called StringCaseInsensitiveComparator) that extends FieldComparatorSource and implements the method above. Then create the StringValComparator class, which extends FieldComparator and implements the following methods:
class StringValComparator extends FieldComparator {

    private String[] values;
    private String[] currentReaderValues;
    private final String field;
    private String bottom;

    StringValComparator(int numHits, String field) {
        values = new String[numHits];
        this.field = field;
    }

    @Override
    public int compare(int slot1, int slot2) {
        final String val1 = values[slot1];
        final String val2 = values[slot2];
        if (val1 == null) {
            if (val2 == null) {
                return 0;
            }
            return -1;
        } else if (val2 == null) {
            return 1;
        }
        return val1.toLowerCase().compareTo(val2.toLowerCase());
    }

    @Override
    public int compareBottom(int doc) {
        final String val2 = currentReaderValues[doc];
        if (bottom == null) {
            if (val2 == null) {
                return 0;
            }
            return -1;
        } else if (val2 == null) {
            return 1;
        }
        return bottom.toLowerCase().compareTo(val2.toLowerCase());
    }

    @Override
    public void copy(int slot, int doc) {
        values[slot] = currentReaderValues[doc];
    }

    @Override
    public void setNextReader(IndexReader reader, int docBase) throws IOException {
        currentReaderValues = FieldCache.DEFAULT.getStrings(reader, field);
    }

    @Override
    public void setBottom(final int bottom) {
        this.bottom = values[bottom];
    }

    @Override
    public String value(int slot) {
        return values[slot];
    }
}
Apply the sort to the field like this:

new SortField("name", new StringCaseInsensitiveComparator(), true);

Subreports within a subreport (IReport-JasperReports)

I have a requirement for a subreport within a subreport. Is there sample code I can refer to?
Thanks in advance.
You don't actually need any code at all to generate a subreport within a subreport; this can be done with reports that have no dynamic content (nothing in the detail band). Of course, the resulting report won't be much use for anything interesting.
If you want a more interesting report than that, you'll need to provide data for the report and/or subreport. At that point the code will vary depending on where your data is coming from. If you can provide more information on what you are trying to do, we can perhaps be more help.
If the subreport has dynamic content, you will need to pass it an object that implements JRDataSource.
For example, I recently created a one-page report that had multiple "clauses" in it. To make my life simpler, I stored the clauses in a Map and derived the JRDataSource object using the following code. The JRDataSource objects were then passed in as a field for the main report.
private static class ListMapDataSource implements JRRewindableDataSource {

    private Map<String, ? extends Object> currentMap = null;
    private int currentRow;
    private int numberOfMoveFirsts = 0;
    private List<Map<String, ? extends Object>> rowList;

    ListMapDataSource(List<Map<String, ? extends Object>> rowList) {
        this.rowList = rowList;
        moveFirst();
    }

    ListMapDataSource(Map<String, ? extends Object> singleRow) {
        this.rowList = new ArrayList<Map<String, ? extends Object>>(1);
        this.rowList.add(singleRow);
        moveFirst();
    }

    public boolean next() throws JRException {
        if (currentRow >= rowList.size() - 1) {
            return false;
        }
        currentRow++;
        currentMap = rowList.get(currentRow);
        return true;
    }

    public Object getFieldValue(JRField jrField) throws JRException {
        String name = jrField.getName();
        Class<?> valueClass = jrField.getValueClass();
        if (JasperReport.class.isAssignableFrom(valueClass)) {
            // hook for JasperReport-valued fields; empty in this example
        }
        return currentMap.get(name);
    }

    public void moveFirst() {
        numberOfMoveFirsts++;
        if (numberOfMoveFirsts > 10) {
            System.out.println("Exceeded 10 moveFirst() calls. Aborting.");
            System.exit(1);
        }
        currentRow = -1;
        currentMap = null;
    }
}
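A short usage sketch (the loader, parameter name, and report file name are hypothetical): the data source is built from the clause rows and handed to the main report via a parameter, from which the subreport element can read it.

// Build the rewindable data source from a list of row maps
List<Map<String, ? extends Object>> clauseRows = loadClauseRows(); // hypothetical loader
JRDataSource clauses = new ListMapDataSource(clauseRows);

// Hand it to the main report; the subreport element reads it from this parameter
Map<String, Object> params = new HashMap<String, Object>();
params.put("ClauseDataSource", clauses);
JasperPrint print = JasperFillManager.fillReport(
        "main_report.jasper", params, new JREmptyDataSource());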