I'm experimenting with Drools' backward chaining mechanism and some simple Web Ontology Language (OWL) RL logic. OWL supports inverse properties, which means I have to generate recursive queries from my TBox. The Drools documentation states that "The algorithm uses stacks to handle recursion, so the method stack will not blow up.", but when I invoke my query, CPU usage goes to 100% and the stack grows without bound. I have three queries for the two inverse properties "tsEquivalent" and "phxEquivalent". The invoked query is the "bind_tsEquivalent_value" query.
query "bind_tsEquivalent_value"(Resource $subject, Resource $object)
@Abductive(target=ObjectPropertyQueryResult.class)
Statement(subject == $subject, predicate == tsEquivalent, $object := object)
or
$object := bind_phxEquivalent_inverse_value($subject;)
end
query "bind_tsEquivalent_inverse_value"(Resource $subject, Resource $object)
@Abductive(target=ObjectPropertyQueryResult.class)
Statement($object := subject, predicate == tsEquivalent, object == $subject)
or
$object := bind_phxEquivalent_inverse_value($subject;)
end
query "bind_phxEquivalent_inverse_value"(Resource $subject, Resource $object)
@Abductive(target=ObjectPropertyQueryResult.class)
Statement($object := subject, predicate == phxEquivalent, object == $subject)
or
$object := bind_tsEquivalent_inverse_value($subject;)
end
My ObjectPropertyQueryResult looks like this:
import com.hp.hpl.jena.rdf.model.Resource;
public class ObjectPropertyQueryResult {
private Resource subject;
private Resource object;
public ObjectPropertyQueryResult() {
super();
}
public ObjectPropertyQueryResult(Resource subject, Resource object) {
this.subject = subject;
this.object = object;
}
public Resource getSubject() {
return subject;
}
public void setSubject(Resource subject) {
this.subject = subject;
}
public Resource getObject() {
return object;
}
public void setObject(Resource object) {
this.object = object;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((object == null) ? 0 : object.hashCode());
result = prime * result + ((subject == null) ? 0 : subject.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
ObjectPropertyQueryResult other = (ObjectPropertyQueryResult) obj;
if (object == null) {
if (other.object != null)
return false;
} else if (!object.equals(other.object))
return false;
if (subject == null) {
if (other.subject != null)
return false;
} else if (!subject.equals(other.subject))
return false;
return true;
}
}
I think this query combination is, if you'll pardon the expression, a wash-out. Running
bind_tsEquivalent_value(sub, obj)
without a matching Statement leads to an evaluation of
bind_phxEquivalent_inverse_value(sub)
and this delegates to
bind_tsEquivalent_inverse_value(sub)
and then to
bind_phxEquivalent_inverse_value(sub)
and now you are caught in an infinite recursive loop.
A logical "or" without a constraint condition on the non-terminal branch is inadequate for breaking recursion. It is possible that the DRL compiler should terminate recursion when being caught in a loop like this, but basically (I think) this is a programmer's error.
"the CPU usage goes to 100% and the stack grows to infinity."
The heap or the stack? If your query never returns, it will recurse until you run out of heap space.
If it's a stack error, can you paste the trace?
I have these two operations:
public class Car {
...
public void delete() throws SQLException {
Connection c = Db.getConnection();
c.setAutoCommit(false);
if (c.getTransactionIsolation() == 0) {
c.setTransactionIsolation(c.TRANSACTION_READ_COMMITTED);
}
String sql = "DELETE FROM cars WHERE id = ?";
try(PreparedStatement s = c.prepareStatement(sql)) {
s.setInt(1, id);
s.executeUpdate();
c.commit();
}
}
}
and second one:
public class CarTransfer {
public static boolean transfer(int person_id, int car_id, int other_shop_id) throws SQLException, Exception {
Car car = FindCar.getInstance().findById(car_id);
Person person = FindPerson.getInstance().findById(person_id);
try {
if (car == null) {
throw new CallException("Car doesn't exist");
}
} catch (CallException e) {
System.out.println(e.getMessage());
}
Connection c = Db.getConnection();
c.setAutoCommit(false);
if (c.getTransactionIsolation() == 0) {
c.setTransactionIsolation(c.TRANSACTION_READ_COMMITTED);
}
String sql = "";
try {
sql = "UPDATE car_belongs_shop SET shop_id = "+other_shop_id+" WHERE car_id = "+car.getId();
} catch (NullPointerException e) {
System.out.println("Did not find a car / shop");
return false;
}
try(PreparedStatement s = c.prepareStatement(sql)) {
s.executeUpdate();
try {
if (person.getCredit() < 100) {
c.rollback();
throw new CallException("Not enough credit");
}
else {
if (car == null) {
c.rollback();
throw new CallException("Car doesn't exist");
}
else {
c.commit();
person.buy(100);
}
}
} catch (CallException | NullPointerException e) {
System.out.println(e.toString());
return false;
}
}
c.setAutoCommit(true);
return true;
}
}
So what I want to do is transfer a car from one shop to another. But during that time, another transaction may run on the other side, for example someone removing that car from the database (that's the first method, delete()). What I want is to block any call to delete() while the transfer is running. I'm trying to do that with this code and its transaction isolation (level READ COMMITTED). However, it does not work as intended: it is still possible to remove a car while the transfer method is running. Can you tell me whether I'm using a sufficient isolation level, and whether the transactions are in the right places in the code?
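For reference, the kind of blocking I have in mind would look roughly like row-level locking. This is only a sketch and not part of my current code, assuming the same Db.getConnection() helper and a cars table with an id column:
Connection c = Db.getConnection();
c.setAutoCommit(false);
try (PreparedStatement lock = c.prepareStatement("SELECT id FROM cars WHERE id = ? FOR UPDATE")) {
    // Locks the car row until commit()/rollback(), so a concurrent DELETE has to wait.
    lock.setInt(1, car_id);
    lock.executeQuery();
    // ... perform the UPDATE on car_belongs_shop and the credit check here ...
    c.commit();
} catch (SQLException e) {
    c.rollback();
    throw e;
}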
IRepository.cs
public interface IRepository<T>
{
Task<int> CountAsync(Expression<Func<T, bool>> filter = null, Func<IQueryable<T>,IOrderedQueryable<T>> orderBy = null,List<Expression<Func<T, object>>> includes = null);
}
Repository.cs:
public class Repository<T> : IRepository<T> where T : class, new()
{
protected readonly MyDbContext _context;
protected readonly ILogger<Repository<T>> _logger;
protected readonly DbSet<T> _dbSet;
public Repository(MyDbContext context, ILogger<Repository<T>> logger)
{
_context = context;
_logger = logger;
if (_context != null)
{
_dbSet = _context.Set<T>();
}
else
{
}
}
internal IQueryable<T> _Select(Expression<Func<T, bool>> filter = null
, Func<IQueryable<T>, IOrderedQueryable<T>> orderBy = null
, List<Expression<Func<T, object>>> includes = null
, int? pageIndex = null
, int? pageSize = null)
{
IQueryable<T> query = _dbSet;
if (includes != null)
{
query = includes.Aggregate(query, (current, include) => current.Include(include));
}
if (orderBy != null)
{
query = orderBy(query);
}
if (filter != null)
{
query = query.Where(filter);
}
if (pageIndex != null && pageSize != null)
{
query = query.Skip((pageIndex.Value - 1) * pageSize.Value).Take(pageSize.Value);
}
return query;
}
public async Task<int> CountAsync(Expression<Func<T, bool>> filter = null
, Func<IQueryable<T>, IOrderedQueryable<T>> orderBy = null
, List<Expression<Func<T, object>>> includes = null)
{
var query = _Select(filter, orderBy, includes);
return await query.CountAsync();
}
}
Usage (controller):
var singleCheckTask = _Repo.CountAsync(x => x.id == item.id);
var nameCheckTask = _Repo.CountAsync(x => x.name == item.name);
var ipCheckTask = _Repo.CountAsync(x => x.ip == item.ip);
await Task.WhenAll(singleCheckTask, nameCheckTask, ipCheckTask);
And the exception thrown:
Microsoft.EntityFrameworkCore.Query.Internal.SqlServerQueryCompilationContextFactory|ERROR|An exception occurred in the database while iterating the results of a query.
System.InvalidOperationException: Can not start another operation while there is an asynchronous operation pending.
I've tested that if I do not use Task.WhenAll and instead call var testSingleCheck = _Repo.CountAsync(x => x.id == item.id).Result; everything works fine.
It's simple: you can't run queries in parallel on the same context with EF (neither EF6 nor EF Core).
One reason for this is that EF isn't thread-safe.
EF 6 on Task-based pattern
Thread Safety
While thread safety would make async more useful it is an orthogonal feature. It is unclear that we could ever implement support for it in the most general case, given that EF interacts with a graph composed of user code to maintain state and there aren't easy ways to ensure that this code is also thread safe.
For the moment, EF will detect if the developer attempts to execute two async operations at one time and throw.
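The simplest fix is to await the queries one at a time on the same context; a minimal sketch of the controller code under that assumption:
// Only one async operation is in flight on the DbContext at any moment.
var singleCheck = await _Repo.CountAsync(x => x.id == item.id);
var nameCheck = await _Repo.CountAsync(x => x.name == item.name);
var ipCheck = await _Repo.CountAsync(x => x.ip == item.ip);
If you really need the three counts to run in parallel, each query has to use its own DbContext instance (for example, a repository resolved from its own dependency-injection scope).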
I've implemented a custom autocomplete text field in a Codename One (cn1) app, but I've noticed it only loads the suggestions list once; after that, any change in the text doesn't trigger a change in the list, and getSuggestionModel() is never called again. How can I achieve this (in my mind, basic) functionality?
This is my autocomplete class:
public class ForumNamesAutocomplete extends AutoCompleteTextField {
List<String>suggestions = new LinkedList<String>();
List<Map<String,Object>> fData;
StateMachine mac;
int currentIndex;
String prevText;
public static final String KEY_FORUM_NAME = "name";
public static final String KEY_FORUM_ID = "id";
public static final String KEY_FORUM_DESC = "desc";
public ForumNamesAutocomplete(StateMachine sm){
super();
mac = sm;
if(sm.forumData != null){
fData = mac.forumData;
}
}
@Override
protected boolean filter(String text) {
if(text.equals(prevText)){
return false;
}
setSuggestionList(text);
fireDataChanged(DataChangedListener.CHANGED, text.length());
prevText = text;
return true;
}
@Override
public void fireDataChanged(int type, int index) {
super.fireDataChanged(type, index);
}
public void setSuggestionList(String s){
if(suggestions == null){
suggestions = new LinkedList<String>();
}else{
suggestions.clear();
}
LinkedList<String> descList = new LinkedList<String>();
for(int i = 0;i<fData.size();i++){
boolean used = false;
Map<String,Object> forumMap = fData.get(i);
if(((String)forumMap.get(KEY_FORUM_NAME)).indexOf(s) != -1){
suggestions.add((String)forumMap.get(KEY_FORUM_NAME));
used = true;
}
if(!used && ((String)forumMap.get(KEY_FORUM_DESC)).indexOf(s) != -1){
descList.add((String)forumMap.get(KEY_FORUM_NAME));
}
}
suggestions.addAll(descList);
}
@Override
protected ListModel<String> getSuggestionModel() {
return new DefaultListModel<String>(suggestions);
}
}
This used to be simpler and seems to be a bit problematic now, as explained in this issue.
Technically what you need to do is return one model and then mutate said model/fire modified events so everything will refresh. This is non-trivial and might not work correctly for all use cases so ideally we should have a simpler API to do this as we move forward.
After additional debugging, I saw that the getSuggestionModel() method was being called only during initialization, and whatever the suggestion list (in suggestion object) was at that point, it remained so. Instead I needed to manipulate the underlying ListModel object:
public class ForumNamesAutocomplete extends AutoCompleteTextField {
ListModel<String> myModel = new DefaultListModel<String>();
...
@Override
protected boolean filter(String text) {
if(text.length() > 1){
return false;
}
setSuggestionList(text);
return true;
}
private void setSuggestionList(String s){
if(myModel == null){
myModel = new DefaultListModel<String>();
}else{
while(myModel.getSize() > 0)
myModel.removeItem(0);
}
for(int i = 0;i<fData.size();i++){
boolean used = false;
Map<String,Object> forumMap = fData.get(i);
if(((String)forumMap.get(KEY_FORUM_NAME)).indexOf(s) != -1){
myModel.addItem((String)forumMap.get(KEY_FORUM_NAME));
used = true;
}
if(!used && ((String)forumMap.get(KEY_FORUM_DESC)).indexOf(s) != -1){
myModel.addItem((String)forumMap.get(KEY_FORUM_NAME));
}
}
}
...
}
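For completeness, with this approach getSuggestionModel() simply hands back the single mutable model, so the component keeps reading from the instance that filter() mutates; a minimal sketch of that override, assuming the DefaultListModel field above:
@Override
protected ListModel<String> getSuggestionModel() {
    // Always return the same model instance; filter() mutates it in place and the
    // suggestion popup refreshes from the model's data-changed events.
    return myModel;
}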
When running the solver on my problem, I get the following error message:
Exception executing consequence for rule "addMarks" in com.abcdl.be.solver: [Error: getEndTime(): null]
[Near : {... getEndTime() ....}]
...
The message says that the method getEndTime() in the rule "addMarks" returns null.
Here's the Drools file:
// ############################################################################
// Hard constraints
// ############################################################################
rule "RespectDependencies" // Respect all the dependencies in the input file
when
Dependency(respected() == false)
then
scoreHolder.addHardConstraintMatch(kcontext, 0, -1);
end
rule "addMarks" //insert a Mark each time a process chain starts or ends
when
Node($startTime : getStartTime(), $endTime : getEndTime())
then
insertLogical(new Mark($startTime));
insertLogical(new Mark($endTime));
end
rule "resourcesLimit" // At any time, The number of resources used must not exceed the total number of resources available
when
Mark($startTime: time)
Mark(time > $startTime, $endTime : time)
not Mark(time > $startTime, time < $endTime)
$total : Number(intValue > Global.getInstance().getAvailableResources() ) from
accumulate(Node(getEndTime() >=$endTime, getStartTime()<= $startTime, $res : resources), sum($res))
then
scoreHolder.addHardConstraintMatch(kcontext, 1, (Global.getInstance().getAvailableResources() - $total.intValue()) * ($endTime - $startTime));
end
rule "masterDataManagement" // Parallel loading is forbidden
when
$n1 : Node(md != "", $md : md, $id : id)
$n2 : Node(id > $id, md == $md) // We make sure to check only different nodes through the condition "id > $id"
eval(Graph.getInstance().getPaths($n1, $n2).size() == 0)
then
scoreHolder.addHardConstraintMatch(kcontext, 2, -1);
end
// ############################################################################
// Soft constraints
// ############################################################################
rule "MaximizeResources" //Maximize use of available resources at any time
when
Mark($startTime: time)
Mark(time > $startTime, $endTime : time)
not Mark(time > $startTime, time < $endTime)
$total : Number(intValue < Global.getInstance().getAvailableResources() ) from
accumulate(Node(getEndTime() >=$endTime, getStartTime()<= $startTime, $res : resources), sum($res))
then
scoreHolder.addHardConstraintMatch(kcontext, 0, ($total.intValue() - Global.getInstance().getAvailableResources()) * ($endTime - $startTime));
end
rule "MinimizeTotalTime" // Minimize the total process time
when
Problem($totalTime : getTotalTime())
then
scoreHolder.addSoftConstraintMatch(kcontext, 1, -$totalTime);
end
Node is the planning entity, and the methods getStartTime() and getEndTime() that return null are defined in the planning entity.
The planning entity code:
@PlanningEntity(difficultyComparatorClass = NodeDifficultyComparator.class)
public class Node extends ProcessChain {
private Node parent; // Planning variable: changes during planning, between score calculations
private int id; // Used as an identifier for each node. Different nodes cannot have the same id
public Node(String name, String type, int time, int resources, String md, int id)
{
super(name, "", time, resources, "", type, md);
this.delay = "";
this.id = id;
}
public Node()
{
super();
this.delay = "";
}
@PlanningVariable(valueRangeProviderRefs = {"parentRange"}, nullable = false)
public Node getParent() {
return parent;
}
public void setParent(Node parent) {
this.parent = parent;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String toString()
{
if(this.type.equals("AND"))
return delay;
if(!this.md.isEmpty())
return Tools.excerpt(name+" : "+this.md);
return Tools.excerpt(name);
}
public boolean equals( Object o ) {
if (o == this)
return true;
if (o instanceof Node) {
return
this.name.equals(((Node)o).name);
} else {
return false;
}
}
// ************************************************************************
// Complex methods
// ************************************************************************
public int getStartTime()
{
return Graph.getInstance().getNode2times().get(this).getFirst();
}
public int getEndTime()
{
return Graph.getInstance().getNode2times().get(this).getSecond();
}
@ValueRangeProvider(id = "parentRange")
public Collection<Node> getPossibleParents()
{
Collection<Node> nodes = Graph.getInstance().getNodes();
nodes.remove(this); // We remove this node from the list
nodes.remove(Graph.getInstance().getParents(this)); // We remove its parents from the list
return nodes;
}
/**
* The normal methods {@link #equals(Object)} and {@link #hashCode()} cannot be used because the rule engine already
* requires them (for performance in their original state).
* @see #solutionHashCode()
*/
public boolean solutionEquals(Object o) {
if (this == o) {
return true;
} else if (o instanceof Node) {
Node other = (Node) o;
return new EqualsBuilder()
.append(name, other.name)
.isEquals();
} else {
return false;
}
}
/**
* The normal methods {@link #equals(Object)} and {@link #hashCode()} cannot be used because the rule engine already
* requires them (for performance in their original state).
* @see #solutionEquals(Object)
*/
public int solutionHashCode() {
return new HashCodeBuilder()
.append(name)
.toHashCode();
}
This is very strange because node2times().get() does not return null for any of the nodes in the Graph class. I did a test to make sure:
public class Graph {
private ArrayList<Node> nodes;
...
public void test()
{
for(Node node : nodes)
{
int time = 0;
try{
time = getNode2times().get(node).getFirst();
System.out.print(node+" : "+"Start time = "+time);
}
catch(NullPointerException e)
{
System.out.println("StartTime is null for node : " +node);
}
try{
time = node.getEndTime();
System.out.println(" End time = "+time);
}
catch(NullPointerException e)
{
System.out.println("EndTime is null for node : " +node);
}
}
}
...
}
You are overriding Node.equals() but not Node.hashCode().
You are using a map from Node to times (if I may trust the name you have used).
This violates the contract for using an object as a key in a HashMap.
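A minimal sketch of a hashCode() consistent with the posted equals(), which compares nodes by name only (assuming name is never null, as the existing equals() already assumes):
@Override
public int hashCode() {
    // Must agree with equals(): Nodes that are equal (same name) get the same hash code,
    // so HashMap lookups such as getNode2times().get(node) can find their entries.
    return name.hashCode();
}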
What is the best way to make a deep copy of a GWT overlay type?
I'm looking for a function or library that inspects a GWT overlay and clones it. It must be able to clone contained arrays or objects.
Thanks
There are two ways I would consider. Most of the time overlay objects are used in conjunction with JSON, so you could just stringify the object and parse the result:
public native MyOverlayType deepCopy()/*-{
return JSON.parse(JSON.stringify(this));
}-*/;
OR
public static native MyOverlayType fromJson(String json)/*-{
return JSON.parse(json);
}-*/;
public native String getJson()/*-{
return JSON.stringify(this);
}-*/;
public MyOverlayType deepCopy(){
return fromJson(getJson());
}
The other option is a pure JavaScript approach, which will preserve other things such as function references and will probably be more efficient.
public class JsoUtils
{
@SuppressWarnings("unchecked")
public static <T extends JavaScriptObject> T deepCopy(T obj)
{
return (T) deepCopyImpl(obj);
}
private static native JavaScriptObject deepCopyImpl(JavaScriptObject obj)/*-{
if (typeof obj !== 'object' || obj === null) {
return obj;
}
var c = obj instanceof Array ? [] : {};
for (var i in obj) {
if (obj.hasOwnProperty(i)) {
if (typeof obj[i] !== 'object' || obj[i] === null)
c[i] = obj[i];
else
c[i] = @com.example.gwt.client.JsoUtils::deepCopyImpl(Lcom/google/gwt/core/client/JavaScriptObject;)(obj[i]);
}
}
return c;
}-*/;
}
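A usage sketch, assuming MyOverlayType is the overlay type from the first snippet and someOverlay() is a hypothetical factory standing in for however the overlay was obtained:
MyOverlayType original = someOverlay(); // hypothetical factory
MyOverlayType copy = JsoUtils.deepCopy(original);
// 'copy' shares no structure with 'original', so changes to one do not affect the other.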
Based on Lineman78's answer, and taking into consideration this other answer from A. Levy, I created the following function:
public class JsoUtils {
@SuppressWarnings("unchecked")
public static <T extends JavaScriptObject> T deepCopy(T obj)
{
return (T) deepCopyImpl(obj);
}
private static native JavaScriptObject deepCopyImpl(JavaScriptObject obj) /*-{
if (obj == null) return obj;
var copy;
if (obj instanceof Date) {
// Handle Date
copy = new Date();
copy.setTime(obj.getTime());
} else if (obj instanceof Array) {
// Handle Array
copy = [];
for (var i = 0, len = obj.length; i < len; i++) {
if (obj[i] == null || typeof obj[i] != "object") copy[i] = obj[i];
else copy[i] = @com.amindea.noah.client.utils.JsoUtils::deepCopyImpl(Lcom/google/gwt/core/client/JavaScriptObject;)(obj[i]);
}
} else {
// Handle Object
copy = {};
for (var attr in obj) {
if (obj.hasOwnProperty(attr)) {
if (obj[attr] == null || typeof obj[attr] != "object") copy[attr] = obj[attr];
else copy[attr] = @com.amindea.noah.client.utils.JsoUtils::deepCopyImpl(Lcom/google/gwt/core/client/JavaScriptObject;)(obj[attr]);
}
}
}
return copy;
}-*/;
}
It supports deep copies of Object, Array, Date, String, Number, and Boolean. As explained by A. Levy, the function will work as long as the data in the objects and arrays forms a tree structure.
I found the simplest way to clone a JavaScriptObject is to use the JsonUtils class provided by GWT:
import com.google.gwt.core.client.JsonUtils;
final String taskJson = JsonUtils.stringify(selectedTask);
TaskJso task = JsonUtils.safeEval(taskJson).cast();