Pass polymorphic record to foreign function - PureScript

I have record types R1 and R2, and I need to pass either one to a foreign function (it can handle either record structure). Is it possible to do this, perhaps via conversion to a Foreign Object?
Or do I need to declare two different foreign imports (with different type signatures for R1 and R2) pointing to the same JS function?
Another way I found is to use unsafeCoerce for the type conversion:
import Unsafe.Coerce (unsafeCoerce)
-- An opaque foreign type standing for "either R1 or R2".
foreign import data R1orR2 :: Type
fromR1 :: R1 -> R1orR2
fromR1 = unsafeCoerce
fromR2 :: R2 -> R1orR2
fromR2 = unsafeCoerce
So maybe there are other ways?

When writing FFI bindings, unsafeCoerce is quite OK: after all, foreign import has all the same drawbacks already, so you're not really losing anything.
And yes, what you came up with - R1orR2 - is the right approach; it's used quite frequently in FFI bindings.
You may also want to check out the undefined-is-not-a-problem and untagged-union libraries. They offer some more advanced and generalized techniques in this area.

Related

Cannot change the type of an instance of parent A to subclass B in the JPA join table strategy

We use EclipseLink 2.6 with a WildFly 8 server in a Java EE 7 application.
We have three JPA entities A, B, and C.
B and C extend A.
In order to change the type of an object "myObjectId" from A to B, we try to:
1- Change the dtype value from "a" to "b" in table "A" for the instance "myObjectId" using a criteria query.
2- Create a new row in table "B" in the database for the same id "myObjectId", also with a criteria query.
3- Clear the cache with evictAll as well as the EntityManager using clear.
Afterwards, when I try to find all data of type B, the object "myObjectId" comes back in the list, but with type A!
After restarting the WildFly server and calling findAll, however, the data comes back with type B!
Why didn't myObjectId change its type even though the first- and second-level caches were cleared?!
See https://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
Essentially, EclipseLink maps the JPA cache eviction call to its own invalidation logic, which seems to keep a reference to the object using a soft reference so that object identity is maintained. This prevents A1->B1->A1' from happening in cycles with lazy relationships.
Try calling ((JpaEntityManager) em.getDelegate()).getServerSession().getIdentityMapAccessor().initializeAllIdentityMaps(), as suggested in the doc, and then reading in the changed class.

How to deal with automatically generated values in strongly typed languages when defining a generic CRUD?

So I imagine that a generic CRUD has all the basic operations on some type: insert, read, delete, etc.
Yet how do I define an "insert" operation on such a generic type if the CRUD will internally generate some of the values?
trait GenericCrud[E] {
  def insert(value: E): Unit // but for insert I really don't want to provide a full E, only some incomplete version of it
}
Normally you can use a typeclass in a tagless-final way, where
class (MonadIO mio) => Crud a mio where
  read :: ? -> mio a
Personally, I do not like it.
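Filling in that ? for concreteness, the class could look something like this. This is just a sketch: the Key and New associated types are my own way of modelling the store-generated identifier and the "incomplete" input the question asks about.
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies #-}
import Prelude hiding (read)
import Control.Monad.IO.Class (MonadIO)
class MonadIO m => Crud e m where
  type Key e                   -- identifier type, chosen/generated by the store
  type New e                   -- "incomplete" version of e, without the generated fields
  insert :: New e -> m (Key e) -- caller supplies New e; the generated key comes back
  read :: Key e -> m (Maybe e)
  delete :: Key e -> m ()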
But I would recommend you use a free monad:
data CrudF c a next -- needs DeriveFunctor; 'next' is the continuation slot
  = Read c (Maybe a -> next) -- reading yields its result to the continuation
  | Insert c a next
  | Create a (c -> next) -- the interpreter generates the key and passes it on
  | Delete c next
  deriving Functor
type Crud c a = Free (CrudF c a) -- the free monad over the algebra
Then interpret the algebra into IO.
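For illustration, here is a minimal runnable sketch of that, using the free package with an in-memory Data.Map standing in for the store. The smart constructors and the fresh-key scheme inside Create are my own assumptions for the example; the algebra is repeated so the snippet stands alone.
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free (..), liftF)
import Data.IORef (IORef, modifyIORef', newIORef, readIORef)
import qualified Data.Map.Strict as Map

data CrudF c a next
  = Read c (Maybe a -> next)
  | Insert c a next
  | Create a (c -> next)
  | Delete c next
  deriving Functor

type Crud c a = Free (CrudF c a)

-- Smart constructors.
readC :: c -> Crud c a (Maybe a)
readC c = liftF (Read c id)

createC :: a -> Crud c a c
createC a = liftF (Create a id)

-- Interpret the algebra into IO against an in-memory Map.
runIO :: IORef (Map.Map Int a) -> Crud Int a r -> IO r
runIO ref = go
  where
    go (Pure r) = pure r
    go (Free (Read k next)) = readIORef ref >>= go . next . Map.lookup k
    go (Free (Insert k v next)) = modifyIORef' ref (Map.insert k v) >> go next
    go (Free (Create v next)) = do
      m <- readIORef ref
      -- The interpreter, not the caller, picks the fresh key.
      let k = maybe 0 ((+ 1) . fst) (Map.lookupMax m)
      modifyIORef' ref (Map.insert k v)
      go (next k)
    go (Free (Delete k next)) = modifyIORef' ref (Map.delete k) >> go next

main :: IO ()
main = do
  ref <- newIORef Map.empty
  out <- runIO ref (createC "hello" >>= readC)
  print out -- Just "hello"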

ER Diagram to Database conversion

Suppose I have two strong entities E1 and E2 connected by a one-to-many relationship R.
E1 <--------- R ---------- E2
How many tables will be created when I convert the above ER diagram into a database schema?
I know that when E2 is in total participation, the answer is 2, since E2's primary key merges perfectly. I am not sure about the case above. I have looked in multiple places and found different answers. I am looking for a solid argument along with the answer.
The answer can be 2 or 3. I want to know which is more correct.
Chen's original method mapped every entity relation and relationship relation to a separate table. This would produce 3 tables:
E1 (e1 PK)
E2 (e2 PK)
R (e2 PK, e1)
Full participation by either E1 or E2 can be handled by an FK constraint.
As you can see, E2 and R have the same determinant / PK. This allows us to combine the two relations into one table, using a nullable e1 column if E2 participates partially in the relationship, non-nullable if it participates fully. Full participation by E1 still requires an FK constraint:
E1 (e1 PK)
E2 (e2 PK, e1)
I want to know which is more correct.
Logically, the two solutions are pretty much equivalent.
Making 3 tables maintains the structure of the conceptual (ER) model, but produces more tables which increases complexity in one way. On the other hand, it avoids nulls which create their own complexity.
Making 2 tables reduces the number of tables but introduces nulls. In addition, we have to resort to different mechanisms (nullable columns vs FK constraints) to implement a single concept (full participation).
Other requirements can also affect the decision. If I have 50 optional attributes, I certainly don't want to deal with 50 distinct tables! However, if I wanted to create another relationship (R2) which only applies to values in E2 which are already participating in R, I could enforce that constraint in the first design using an FK constraint: R2 (e2) referencing R (e2). In the second design, I would need to use a trigger since I only want to allow references to e2 which have non-null e1 values.
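Using the notation from above, the two designs would enforce the R2 rule differently (R2's other attributes are elided):
First design: R2 (e2 PK), with an FK from R2 (e2) to R (e2).
Second design: R2 (e2 PK), but there is no R table to reference, so a trigger must check that the matching E2 row has a non-null e1.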
There is no ultimately correct answer. Conceptual, logical and physical modeling address different concerns, and as-yet unknown requirements will affect your model and contradict your decisions. As in programming, try to keep things simple, refactor continuously and hope for the best.

Normalized and immutable data model

How does Haskell solve the "normalized immutable data structure" problem?
For example, let's consider a data structure representing ex girlfriends/boyfriends:
{-# LANGUAGE DuplicateRecordFields #-} -- both records share field names
data Man = Man { name :: String, exes :: [Woman] }
data Woman = Woman { name :: String, exes :: [Man] }
What happens if a Woman changes her name and she has been with 13 men? Then all 13 men should be "updated" (in the Haskell sense) too? Some kind of normalization is needed to avoid these "updates".
This is a very simple example, but imagine a model with 20 entities and arbitrary relationships between them. What to do then?
What is the recommended way to represent complex, normalized data in an immutable language?
For example, a Scala solution can be found here (see code below), and it uses references. What to do in Haskell?
class RefTo[V](val target: ModelRO[V], val updated: V => AnyRef) {
def apply() = target()
}
I wonder if more general solutions like the one above (in Scala) don't work in Haskell, or whether they are simply not necessary. If they don't work, then why not? I have tried to search for Haskell libraries that do this, but they don't seem to exist.
In other words, if I want to model a normalized SQL database in Haskell (for example, to be used with acid-state), is there a general way to describe foreign keys? By general I mean without hand-coding the IDs, as suggested by chepner in the comments below.
EDIT:
Yet in other words: is there a library (for Haskell or Scala) that implements an SQL/relational database in memory (possibly also using event sourcing for persistence), such that the database is immutable and most of the SQL operations (query/join/insert/delete/etc.) are implemented and type-safe? If there is no such library, why not? It seems like a pretty good idea. How would I go about creating such a library?
EDIT 2:
Some related links:
https://realm.io/news/slug-peter-livesey-managing-consistency-immutable-models/
https://tonyhb.gitbooks.io/redux-without-profanity/content/normalizer.html
https://github.com/agentm/project-m36
https://github.com/scalapenos/stamina
http://www.haskellforall.com/2014/12/a-very-general-api-for-relational-joins.html
The problem is that you are storing data and relationships in the same type. To normalise, you need to separate them. Relational databases 101.
newtype Id a = Id Int -- Type-safe ID.
data Person = Person { id :: Id Person, name :: String }
data Ex = Ex { personId :: Id Person, exId :: Id Person }
Now if a person changes their name, only a single Person value is affected. The Ex entries don't care about people's names.
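To make that concrete, here is a minimal sketch with Data.Map values standing in for tables. The DB record and the field names are my own for the example (I avoid a field called id so nothing shadows Prelude.id):
import qualified Data.Map.Strict as Map

newtype Id a = Id Int deriving (Eq, Ord, Show)

data Person = Person { personName :: String } deriving Show
data Ex = Ex { personId :: Id Person, exId :: Id Person }

-- The whole "database": one Map per entity, a list for the join table.
data DB = DB
  { people :: Map.Map (Id Person) Person
  , exes :: [Ex]
  }

-- Renaming touches exactly one row; the Ex table is untouched.
rename :: Id Person -> String -> DB -> DB
rename pid newName db =
  db { people = Map.adjust (\p -> p { personName = newName }) pid (people db) }

-- A join: follow the foreign keys from the Ex table back into people.
exesOf :: Id Person -> DB -> [Person]
exesOf pid db =
  [ p | Ex owner ex <- exes db, owner == pid
      , Just p <- [Map.lookup ex (people db)] ]

main :: IO ()
main = do
  let alice = Id 1 :: Id Person
      bob = Id 2 :: Id Person
      db = DB (Map.fromList [(alice, Person "Alice"), (bob, Person "Bob")])
              [Ex bob alice]
  print (exesOf bob (rename alice "Alicia" db)) -- [Person {personName = "Alicia"}]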
Project M36 comes pretty close to what I was looking for. It is written in Haskell.
A more lightweight Haskell solution is outlined in Gabriel Gonzalez's post "A very general API for relational joins".

Functional dependencies - BCNF normalization issue

I need help with a normalization issue.
Consider a relation R(ABC)
with the following functional dependencies:
AB --> C
AC --> B
How can I convert this to Boyce–Codd normal form?
If I leave it like this, it's a relation with a key attribute transitionally-dependent of a key-candidate.
I tried splitting it into several relations, but that way I lose information.
A relational schema R is in Boyce–Codd normal form if and only if for
every one of its dependencies X → Y, at least one of the following
conditions hold:
X → Y is a trivial functional dependency (Y ⊆ X)
X is a superkey for schema R
From Wikipedia
R has two candidate keys, AB and AC. It's clear that the second rule above applies here. So R is in BCNF.
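You can check this by computing attribute closures:
(AB)+ = {A, B, C}, since AB → C adds C. That is all of R, so AB is a superkey.
(AC)+ = {A, B, C}, since AC → B adds B. That is all of R, so AC is a superkey.
Every nontrivial dependency here (AB → C, AC → B) thus has a superkey on its left-hand side, which is exactly the BCNF condition, so no decomposition is needed.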
If I leave it like this, it's a relation with a key attribute
transitionally-dependent of a key-candidate. I tried splitting into
several relations but that way I lose information.
I'm not quite sure what you're getting at here, but I think the terminology in English includes
prime attribute (an attribute that's part of some candidate key)
transitively dependent (but that refers to non-prime attributes)
candidate key (not key-candidate)
This relation is in BCNF.
AB and AC are superkeys, and the attributes B and C depend on those superkeys, so the relation is in BCNF.
Also, there is no transitive dependency in this relation.
Hope this helps.