I need to solve a sequence of Google or-tools cp_model SAT problems, in which only one constraint of many changes on each step through the sequence. Building a new model for each case is extremely inefficient. Is there a way to vary a parameter in the model, like Parameter() in cvxpy? Otherwise, is there a way to remove constraint and add a new one in its place?
The underlying storage of the model is a protobuf (the CpModel message, defined in cp_model.proto).
You need to keep a reference to the Constraint object.
To clear a constraint, just access the underlying proto object (Proto() in Python, getBuilder() in Java, for instance) and call Clear() or clear() on it.
You can overwrite a constraint in place, but you will need to understand how model objects are rewritten into constraints (look at the cp_model.cc, cp_model.py, CpModel.java, CpModel.cs files). The simpler approach is to append a new constraint using the standard model.AddXXX APIs.
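Putting the two together, here is a minimal sketch in Java (the variable names and bounds are illustrative, not from the original question): keep a reference to the constraint that varies, clear its proto between solves, and append the replacement with the normal APIs.

import com.google.ortools.Loader;
import com.google.ortools.sat.Constraint;
import com.google.ortools.sat.CpModel;
import com.google.ortools.sat.CpSolver;
import com.google.ortools.sat.IntVar;

public class SwapConstraint {
    public static void main(String[] args) {
        Loader.loadNativeLibraries();
        CpModel model = new CpModel();
        IntVar x = model.newIntVar(0, 100, "x");
        model.addLessOrEqual(x, 60); // a constraint that never changes

        // Keep a reference to the constraint that varies between solves.
        Constraint varying = model.addLessOrEqual(x, 10);
        CpSolver solver = new CpSolver();
        solver.solve(model);

        // Clear the old constraint's proto in place (it becomes a no-op)...
        varying.getBuilder().clear();
        // ...and append its replacement with the standard model APIs.
        varying = model.addGreaterOrEqual(x, 42);
        solver.solve(model);
    }
}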
C# 9 introduces record reference types. A record provides some synthesized methods like a copy constructor, a clone operation, hash code calculation, and comparison/equality operations. It seems to me convenient to use records instead of classes in general. Are there reasons not to do so?
It seems to me that Visual Studio as an editor does not currently support records as well as it supports classes, but this will probably change in the future.
Firstly, be aware that if it's possible for a class to contain circular references (which is true for most mutable classes), then many of the auto-generated record members can throw a StackOverflowException. So that's a pretty good reason not to use records for everything.
So when should you use a record?
Use a record when an instance of a class is entirely defined by the public data it contains, and has no unique identity of its own.
This means that the record is basically just an immutable bag of data. I don't really care about that particular instance of the record at all, other than that it provides a convenient way of grouping related bits of data together.
Why?
Consider the members a record generates:
Value Equality
Two instances of a record are considered equal if they have the same data (by default: if all fields are the same).
This is appropriate for classes with no behavior, which are just used as immutable bags of data. However this is rarely the case for classes which are mutable, or have behavior.
For example if a class is mutable, then two instances which happen to contain the same data shouldn't be considered equal, as that would imply that updating one would update the other, which is obviously false. Instead you should use reference equality for such objects.
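To make that distinction concrete, here is a small sketch in Java, whose records behave analogously on this point (the class and field names are illustrative):

record Point(int x, int y) {}

class MutablePoint {
    int x, y;
    MutablePoint(int x, int y) { this.x = x; this.y = y; }
}

public class EqualityDemo {
    public static void main(String[] args) {
        // Records compare by data: same fields, equal instances.
        System.out.println(new Point(1, 2).equals(new Point(1, 2))); // true

        // Mutable classes fall back to reference equality: same data,
        // but updating one instance would not update the other.
        System.out.println(new MutablePoint(1, 2).equals(new MutablePoint(1, 2))); // false
    }
}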
Meanwhile if a class is an abstraction providing a service you have to think more carefully about what equality means, or if it's even relevant to your class. For example imagine a Crawler class which can crawl websites and return a list of pages. What would equality mean for such a class? You'd rarely have two instances of a Crawler, and if you did, why would you compare them?
with blocks
with blocks provide a convenient way to copy an object and update specific fields. However, this is only safe if the object has no identity, as copying it then doesn't lose any information. Copying a mutable class loses the identity of the original object, as updating the copy won't update the original. As such, you have to consider whether this really makes sense for your class.
ToString
The generated ToString prints out the values of all public properties. If your class is entirely defined by the properties it contains, then this makes a lot of sense. However if your class is not, then that's not necessarily the information you are interested in. A Crawler for example may have no public fields at all, but the private fields are likely to be highly relevant to its behavior. You'll probably want to define ToString yourself for such classes.
All properties of a record are public by default
All properties of a record are immutable by default
By default, I mean when using the simple record definition syntax.
Also, records can only derive from records and you cannot derive a regular class from a record.
I'm using RxJava2 and Android's Room framework (v2.1.0). Ultimately, I'm using the Flowable from a @RawQuery-annotated abstract method in my @Dao class. When I updated a row/column of one of the referenced tables (using an @Update method in my @Dao on the root entity), I was expecting the Flowable to re-trigger when any of the referenced tables in the @RawQuery were touched. However, that didn't seem to be the case.
After digging into the generated code for my @Dao class, I noticed that the Room return value is wrapped in a call to RxRoom::createFlowable. I noticed that the tableNames argument only contained a subset of the expected table names, so it made more sense why my Flowable was not re-triggering, since I had updated one of the tables outside of the specified subset.
Upon further reflection, it made more sense why the code generator for Room couldn't derive the full set of table names, since all the table names were only available at runtime. (I wish the RxRoom documentation made it more plainly obvious that observing a @RawQuery would be flaky without using the observedEntities annotation argument!)
However, it's still a mystery to me how that subset of table names was even generated. While I probably could dive into the code base, it'd be great if someone knowledgeable could summarize how RxRoom derives the table names from a @RawQuery. My guess is that RxRoom is using the "leaf" joined tables of the root entity being queried, but I don't really understand why that's a reasonable default. IMHO, a safer default would be to NOT observe any referenced tables in a @RawQuery unless observedEntities is specified.
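For reference, the observedEntities workaround looks roughly like this (a sketch; User, Order, and Report are hypothetical classes, not from my actual code):

import androidx.room.Dao;
import androidx.room.RawQuery;
import androidx.sqlite.db.SupportSQLiteQuery;
import io.reactivex.Flowable;
import java.util.List;

@Dao
public abstract class ReportDao {
    // Listing every table the raw SQL can touch in observedEntities makes
    // Room register invalidation trackers for all of them, so the Flowable
    // re-triggers whenever any of those tables changes.
    @RawQuery(observedEntities = {User.class, Order.class})
    public abstract Flowable<List<Report>> reports(SupportSQLiteQuery query);
}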
We're using polymorphic associations in our application. We've run into the classic problem: we encountered an invalid foreign key reference, and we can't create a foreign key constraint, because it's a polymorphic association.
That said, I've done a lot of research on this. I know the downsides of using polymorphic associations, and the upsides. But I found what seems to be a decent solution:
http://blog.metaminded.com/2010/11/25/stable-polymorphic-foreign-key-relations-in-rails-with-postgresql/
This is nice, because you get the best of both worlds. My concern is the data duplication. I don't have deep enough knowledge of PostgreSQL to completely understand the cost of this solution.
What are your thoughts? Should this solution be completely avoided? Or is it a good solution?
The only alternative, in my opinion, is to create a foreign key for each association type. But then you run into validating that only one association exists. It's a "pick your poison" situation. Polymorphic associations clearly describe intent, and also make this scenario impossible. In my opinion that is the most important. The database foreign key constraint is a behind the scenes feature, and altering "intent" to work with database limitations feels wrong to me. This is why I'd like to use the above solution, assuming there is not a glaring "avoid" with it.
The biggest problem I have with PostgreSQL's INHERITS implementation is that you can't set a foreign key reference to the parent table. There are a lot of cases where you need to do that. See the examples at the end of my answer.
The decision to create tables, views, or triggers outside of Rails is the crucial one. Once you decide to do that, then I think you might as well use the very best structure you can find.
I have long used a base parent table, enforcing disjoint subtypes using foreign keys. This structure guarantees only one association can exist, and that the association resolves to the right subtype in the parent table. (In Bill Karwin's slideshow on SQL antipatterns, this approach starts on slide 46.) This doesn't require triggers in the simple cases, but I usually provide one updatable view per subtype, and require client code to use the views. In PostgreSQL, updatable views require writing either triggers or rules. (Versions before 9.1 require rules.)
In the most general case, the disjoint subtypes don't have the same number or kind of attributes. That's why I like updatable views.
Table inheritance isn't portable, but this kind of structure is. You can even implement it in MySQL. In MySQL, you have to replace the CHECK constraints with foreign key references to one-row tables. (MySQL parses and ignores CHECK constraints.)
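To make that structure concrete, here is a minimal sketch of the disjoint-subtype pattern, expressed as DDL issued over JDBC (all table and column names are illustrative assumptions): the parent table carries a type discriminator, each subtype table pins that discriminator to a single value through a composite foreign key, and other tables reference the parent directly with an ordinary foreign key.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class DisjointSubtypeSchema {
    static void create(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // Parent table: one row per item, with a type discriminator.
            st.execute("CREATE TABLE media ("
                    + " media_id integer PRIMARY KEY,"
                    + " media_type char(1) NOT NULL CHECK (media_type IN ('B','D')),"
                    + " UNIQUE (media_id, media_type))");
            // Subtype tables: the composite foreign key plus the CHECK pin
            // the discriminator, so a parent row resolves to at most one subtype.
            st.execute("CREATE TABLE books ("
                    + " media_id integer PRIMARY KEY,"
                    + " media_type char(1) NOT NULL CHECK (media_type = 'B'),"
                    + " page_count integer,"
                    + " FOREIGN KEY (media_id, media_type)"
                    + "   REFERENCES media (media_id, media_type))");
            st.execute("CREATE TABLE dvds ("
                    + " media_id integer PRIMARY KEY,"
                    + " media_type char(1) NOT NULL CHECK (media_type = 'D'),"
                    + " region integer,"
                    + " FOREIGN KEY (media_id, media_type)"
                    + "   REFERENCES media (media_id, media_type))");
            // The formerly polymorphic association now references the parent,
            // so the database can enforce it with a plain foreign key.
            st.execute("CREATE TABLE reviews ("
                    + " review_id integer PRIMARY KEY,"
                    + " media_id integer NOT NULL REFERENCES media (media_id),"
                    + " body text)");
        }
    }
}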
I don't think you have to worry about data duplication. In the first place, I'm pretty sure data isn't duplicated between parent tables and inheriting tables. It just appears that way. In the second place, duplicated or derived data whose integrity is completely controlled by the dbms is not an especially bitter pill to swallow. (But uncontrolled duplication is.)
Give some thought to whether deletes should cascade.
A publications example with SQL code.
A "parties" example with SQL code.
You cannot enforce that in a database in an easy way, so this is a really bad idea. The best solution is usually the simple one: forget about the polymorphic associations; they have the taste of an antipattern.
This question concerns using JPA to manage some data where some scenarios benefit from the full object model and others seem to be better implemented by a much flatter model. I'm therefore inclined to create two models. I get the feeling that this is not a good idea but I'm hard-pressed to see exactly why, or what the alternatives may be.
The basic scenario is that there is an entity, let's call it A, which is on the many side of a relationship with entity B. So in the database A has a foreign key field, and in the full object model we see (simplified, getters/setters removed):
public class A {
    public int aKey;
    public B b;
    // more attributes
}

public class B {
    public int bKey;
    public List<A> collectionOfA;
    // and more
}
One particular scenario is handling the arrival of new As into the system. They come from some external source in the form of, say, CSV text files. The insertion code needs to:
for each CSV record
get the bKey from the record
find the B, or manage any error
create the A, setting the B
persist
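In JPA terms, that loop looks roughly like this under the full object model (a sketch; em is the EntityManager, while CsvRecord, handleMissingB, and the accessor names are assumptions):

for (CsvRecord record : records) {
    // find the B - this typically issues a SELECT (or hits the
    // persistence context's cache) for every record
    B b = em.find(B.class, record.getBKey());
    if (b == null) {
        handleMissingB(record); // manage the error as appropriate
        continue;
    }
    A a = new A();
    a.b = b;
    // ... set more attributes from the record ...
    em.persist(a);
}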
Now in fact my scenario is more complex, there are several such relationships, so that find/set pairing is repeated several times.
Alternatively, I could create (and in fact have created) a second mapping for the A table:
public class Ainserter {
    public int aKey;
    public int bKey;
    // more attributes
}
Now I just set the two values and persist. This does assume that the DB will have the referential integrity constraints, but with the tooling I'm using that is the case. In this, and in many legacy systems the DB pre-exists and may be accessed from both the new JPA code and other even non-Java code. I therefore don't see a reason to put the referential integrity checking in the JPA code in such simple cases.
I can see that potentially there are opportunities for aspects of the full model to become stale with respect to my insertions, but in a legacy environment there could be insertions happening in the DB itself at any time. So I don't see a new problem here.
I can also see potential for confusion if the same Entity Context were used for both models, but that can be avoided by suitable encapsulation.
Any other thoughts?
Edit:
There is a suggestion from axtavt to use EntityManager.getReference(B.class, bKey) to get the B instance. My understanding is that if I do this, then to conform properly to the JPA programming model I am supposed to set both sides of the relationship, hence I would need to visit the "referenced" B object and add my A into its collection.
Edited again:
I was concerned that visiting B would cause a database lookup, so in performance terms I would not get the win. I have it on very good authority that OpenJPA, at least, will in fact not need to "inflate" B if we only access B's key and the collection of As, and so getReference() is a good suggestion. It seems reasonable that a well designed JPA implementation would have such optimisations.
JPA has an EntityManager.getReference() method, which basically combines the approaches you describe.
It takes a primary key and returns a proxy object with that primary key without hitting the database. So, you can use that object to initialize the relationship field, exactly as you want to do in your second approach.
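Roughly like this (a sketch, reusing the classes from the question):

// No SELECT is issued here: getReference() returns an uninitialized
// proxy that carries only the primary key.
B b = em.getReference(B.class, bKey);

A a = new A();
a.b = b;           // initialize the relationship from the proxy
// ... more attributes ...
em.persist(a);     // the INSERT uses the proxy's key as the foreign key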
What is the best way to use application constants?
What I usually do is create a separate table of constants in the database and reference them as foreign keys in other tables.
In Java I use an enum.
But how do I keep a single place of authority for constants in the application, and what are the different ways I can do that (like a table or an enum)?
What you described is a usual approach. You keep "constants" in the database layer and mirror them in the application using enumerations. The only trouble is keeping them in sync. Following a strict process can help here. For example, you always update values on both levels immediately one after another, not interrupting the process for any purpose, and you check in the changes immediately after it's done.
Another idea would be to only keep the constants in the database. You also assign names to them. Whenever you use a constant in your application (by name), it is transparently loaded from the database. This way any change you introduce will immediately be seen by any user connecting to the database. The only error may be caused by an update happening in the middle of a transaction.
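One way to make the first approach's "keep them in sync" rule checkable is to mirror the lookup table as an enum that records its row ids, and fail fast at startup if the two have drifted apart. A sketch; the table, column, and value names are assumptions:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

// Application-side mirror of a hypothetical "order_status" lookup table.
enum OrderStatus {
    NEW(1), SHIPPED(2), CANCELLED(3);

    final int id; // primary key of the corresponding database row

    OrderStatus(int id) { this.id = id; }
}

class ConstantSync {
    // Throw at startup if the enum and the lookup table disagree.
    static void verify(Connection con) throws SQLException {
        Map<Integer, String> rows = new HashMap<>();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM order_status")) {
            while (rs.next()) {
                rows.put(rs.getInt("id"), rs.getString("name"));
            }
        }
        for (OrderStatus s : OrderStatus.values()) {
            if (!s.name().equals(rows.get(s.id))) {
                throw new IllegalStateException("Out of sync with DB: " + s);
            }
        }
        if (rows.size() != OrderStatus.values().length) {
            throw new IllegalStateException("Lookup table has rows the enum does not know about");
        }
    }
}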