Multivalued Dependency transitivity property - database-normalization

I understand the MVD complementation and augmentation rules. However, I am finding the MVD transitivity rule difficult to understand.
For reference I have read https://en.m.wikipedia.org/wiki/Multivalued_dependency
Given X →→ Y and Y →→ Z, how can we conclude X →→ (Z − Y)?
Please also clarify the coalescence rule.
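For concreteness, here is my restatement of the two rules in the usual notation (W, X, Y, Z are attribute sets over one relation schema; this is just my reading of the standard textbook formulation, so please correct me if I have it wrong):

Transitivity: $X \twoheadrightarrow Y \;\wedge\; Y \twoheadrightarrow Z \;\Rightarrow\; X \twoheadrightarrow (Z \setminus Y)$
Coalescence: $X \twoheadrightarrow Y \;\wedge\; Z \subseteq Y \;\wedge\; W \cap Y = \emptyset \;\wedge\; W \rightarrow Z \;\Rightarrow\; X \rightarrow Z$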

Related

Coq: How are the equality tactics symmetry and transitivity defined?

I'm interested in how the Coq tactics symmetry and transitivity actually work. I've read the Coq manual, but it only describes what they do, not how they operate on a proof and change the proof state. As an example of what I'm looking for, in Interactive Theorem Proving and Program Development, the authors state that
"The reflexivity tactic actually is synonymous with apply refl_equal" (p. 124).
But for symmetry and transitivity the authors refer the reader to the reference manual, and I haven't found a comparably concrete description of what these two tactics are synonymous with.
For clarification, the reason why I ask is that I have defined a path space paths {A : UU} : A -> A -> UU notated a = b (as in UniMath), which acts like an equivalence relation except that a = b is not a proposition but a type. For this reason I was unable to register this relation as an equivalence relation using Add Parametric Relation. I'm trying to cook up versions of symmetry and transitivity with Ltac for this path space relation, but I don't know how to change the proof state; knowing how symmetry and transitivity actually work might help.
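Roughly, assuming I had already proved the usual inverse and concatenation lemmas for my path space, the mock-up below is the kind of thing I am trying to write (everything here is a simplified stand-in for my actual development, not UniMath's real definitions):

(* simplified stand-ins for my development *)
Definition UU := Type.

Inductive paths {A : UU} (a : A) : A -> UU :=
  | idpath : paths a a.

Lemma path_sym {A : UU} {a b : A} : paths a b -> paths b a.
Proof. intros p. destruct p. exact (idpath a). Qed.

Lemma path_trans {A : UU} {a b c : A} : paths a b -> paths b c -> paths a c.
Proof. intros p q. destruct q. exact p. Qed.

(* the tactics I am trying to cook up *)
Ltac path_symmetry := apply path_sym.
Ltac path_transitivity y := apply (@path_trans _ _ y _).

Example use_them {A : UU} (a b c : A) (p : paths a b) (q : paths b c) : paths a c.
Proof.
  path_transitivity b.
  - exact p.
  - exact q.
Qed.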
These tactics only apply the lemmas corresponding to symmetry and transitivity of the relation they find in the goal.
These are found using the type classes mechanism.
For instance, you could declare:
From Coq Require Import RelationClasses.
Instance trans : Transitive R.
which would ask you to prove that R is transitive; you would then be able to use the transitivity tactic on goals of the form R x y.
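For completeness, here is a small self-contained sketch of that recipe (R below is just a toy relation on nat, standing in for whatever relation you are actually working with):

From Coq Require Import RelationClasses PeanoNat.

(* toy relation standing in for "R" from the answer above *)
Definition R (x y : nat) : Prop := x <= y.

(* declaring the instance is what lets the transitivity tactic find R *)
Instance R_trans : Transitive R.
Proof.
  intros x y z Hxy Hyz.
  unfold R in *.
  apply (Nat.le_trans x y z); assumption.
Qed.

Example R_1_3 : R 1 3.
Proof.
  transitivity 2.                  (* resolved via the Transitive instance *)
  - unfold R; apply le_S, le_n.    (* R 1 2 *)
  - unfold R; apply le_S, le_n.    (* R 2 3 *)
Qed.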

How to define custom functions in Maple?

I'm new to Maple and I'm looking for a simple way to automate some tasks. In particular, I'm looking for a way to define custom "actions" that perform some steps automatically.
As an example, I would like to define a quick way to compute the determinant of the Hessian of a polynomial. Currently I do this by opening Maple, creating a new worksheet and then running the following commands:
p := (x, y) -> x^2*y + 3*x^3 + y^3
with(VectorCalculus):
h := Hessian(p(x, y), [x, y])
Determinant(h)
What I would like to do is compute the Hessian determinant directly with something like
HessDet(p)
where HessDet would be a custom command that performs the operations above. How does one achieve something like this in Maple?
First things first: the value assigned to your p is a procedure, which can return a polynomial expression but is not itself a polynomial. It's important not to muddle expressions and procedures; doing so is a common cause of problems for new users.
Being able to throw around p(x,y) may be visually pleasing to your eye, but it serves little programmatic purpose here. The fact that the formal parameters of procedure p happen to be called x and y, along with the fact that you called procedure p with arguments x and y, is actually just another common source of confusion. Don't create procedures merely to call them in this way.
Also, your call p(x,y) makes it seem like magic that your code snippet "knows" how many arguments procedure p requires. So it's already a muddle to have your candidate HessDet accept p as a procedure.
So instead let's keep it straightforward, by writing HessDet to accept a polynomial rather than a procedure. We can programmatically ascertain the names in which this expression is of type polynom.
restart;
HessDet := proc(p::algebraic)
  local H, vars;
  vars := indets(p,
                 And(name, Non(constant),
                     satisfies(u -> type(p, polynom(anything, u)))));
  H := VectorCalculus:-Hessian(p, [vars[]]);
  LinearAlgebra:-Determinant(H);
end proc:
Now some examples of using it,
P := x^2*y + 3*x^3 + y^3;
HessDet(P);
p := (x, y) -> x^2*y + 3*x^3 + y^3;
HessDet(p(x,y));
HessDet(x^3-x^2+4*x);
HessDet(s^2*t + 3*s^3 + t^3);
HessDet(s[r]^2*t[r] + 3*s[r]^3 + t[r]^3);
You might also wonder how you could re-use this custom procedure across sessions, without having to type it in each time. Two reasonable ways are:
1) Put the (above) defining plaintext definition of HessDet inside a personal initialization file.
2) Create a (.mla) Maple Library Archive file, then Save your HessDet to it, and then augment the library search path in your initialization file.
It might look like 2) is more effort, but only the Save step is needed for repeats, and you can store many custom procedures to the same archive. Your choice...
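A rough sketch of option 2), with a placeholder path (adjust it to your own setup; LibraryTools is the stock package for managing .mla archives, but double-check its calling sequences in your version's help pages):

# run once, in a session where HessDet is already defined
LibraryTools:-Create("C:/mylibs/myprocs.mla");
LibraryTools:-Save(HessDet, "C:/mylibs/myprocs.mla");

# then, in your Maple initialization file, prepend the directory
# holding the archive to the library search path
libname := "C:/mylibs", libname: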
[edit] The OP has asked for clarification of the first part of the above procedure HessDet, which I suspect means the call to indets.
If P is assigned an expression then the call indets(P,name) will return a set of all the names present in that expression. Basically, it returns the set of all indeterminate subexpressions of the expression which are of type name in Maple's technical sense.
For example,
P := x*y + sin(a*Pi)*x;
x y + sin(a Pi) x
indets( P, name );
{Pi, a, x, y}
Perhaps the name of the constant Pi is not wanted here. Ie,
indets( P,
And( name,
Non(constant) ) );
{a, x, y}
Perhaps we want only the non-constant names in which the expression is a polynomial? Ie,
indets( P,
And( name,
Non(constant),
satisfies(u->type(P,polynom(anything,u))) ) );
{x, y}
That last result is an advanced way of using the following tests:
type(P, polynom(anything, x));
true
type(P, polynom(anything, y));
true
type(P, polynom(anything, a));
false
A central issue here is that the OP made no mention of what kind of polynomials are to be handled by the custom procedure. So I guessed, with some defensive coding, in the hope of fewer surprises later on. The original Question states that the input could be a "polynomial", but we weren't told what kind of coefficients there might be.
Perhaps the coefficients will always be real and exact or numeric. Perhaps the custom procedure should throw an error when it is not supplied such input. These details weren't mentioned in the Question.
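For example, if the coefficients were supposed to be exact rationals, a hypothetical stricter wrapper might look like the following sketch (adjust the type check to whatever is actually required):

HessDetStrict := proc(p)
  # reject anything that is not a polynomial with rational coefficients
  if not type(p, polynom(rational)) then
    error "expected a polynomial with rational coefficients, got %1", p;
  end if;
  HessDet(p);
end proc: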

Change of variable in MATLAB?

I have defined several anonymous functions which normally depend on three variables eta1, eta2, y. There is the following relationship between eta1, eta2 and y:
eta1=@(y) ((i*alpha1*lambda_0)^(1/3))*y+eta01;
eta2=@(y) ((i*alpha2*lambda_0)^(1/3))*y+eta02;
So I basically give values for y and then I am able to plot h1b(y=whatever) via arrayfun:
DW1=@(eta) blablabla
N3Y=@(y) i*alpha1*(DW1(eta1(y))*conj(U2(eta2(y)))+W1(eta1(y))...
*conj(DU2(eta2(y))));
h1b=@(y) -(1/(lambda_0*alphats))*(betats*N3Y(y));
vec=arrayfun(h1b,eta1(0:0.01:N));
plot(abs(vec),0:0.01:N)
My question: is there a way to obtain h1b as an anonymous function that formally depends on eta1 instead of y, without evaluating y, then eta1 and eta2, and then h1b, which is what I currently do?
Let's clarify. So you currently have:
syms y eta1 eta2;
eta1(y), eta2(y)
W1(eta1), DW1(eta1)
U2(eta2), DU2(eta2)
Hence you also have:
N3Y(W1,DW1,U2,DU2)
or:
N3Y(eta1,eta2)
or:
N3Y(y)
Hence, you have:
h1b(N3Y)
or:
h1b(eta1,eta2)
or:
h1b(y)
So h1b ultimately depends only on eta1 and eta2. If you don't want to wrestle with awkward eval and simplify calls, why not just create two versions of your functions, one in terms of y and the other in terms of the etas?
You don't need to rewrite anything; the y versions are just evaluations of the eta versions, as sketched below.
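A rough sketch of that idea, reusing your names (the constants and the W1/DW1/U2/DU2 definitions below are only placeholders; substitute your own):

% placeholder constants and inner functions -- replace with your own
alpha1 = 1; alpha2 = 2; lambda_0 = 1; alphats = 1; betats = 1;
eta01 = 0; eta02 = 0; N = 1;
W1  = @(e) e;      DW1 = @(e) 1 + 0*e;
U2  = @(e) e.^2;   DU2 = @(e) 2*e;

eta1 = @(y) ((1i*alpha1*lambda_0)^(1/3))*y + eta01;
eta2 = @(y) ((1i*alpha2*lambda_0)^(1/3))*y + eta02;

% version written directly in terms of the etas ...
h1b_eta = @(e1, e2) -(1/(lambda_0*alphats)) * betats * ...
    (1i*alpha1*(DW1(e1).*conj(U2(e2)) + W1(e1).*conj(DU2(e2))));

% ... and the y-version is just an evaluation of the eta-version
h1b_y = @(y) h1b_eta(eta1(y), eta2(y));

vec = arrayfun(h1b_y, 0:0.01:N);
plot(abs(vec), 0:0.01:N)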

Computing mixed derivatives in MATLAB using syms and diff

I'm using MATLAB 2012b.
I want to get d²/dxdy of a simple function:
f(x,y) = (x-1)² + 2y²
The documentation states that I can use syms and diff as in the following example:
> syms x y
> diff(x*sin(x*y), x, y)
ans =
2*x*cos(x*y) - x^2*y*sin(x*y)
But doing the same I got the wrong answer:
> syms x y
> f = (x-1)^2 + 2*y^2;
> diff(f,x,y)
ans =
4*y
The answer is right if I use diff like this:
diff(diff(f,x),y)
Well, it's not a problem for me to use it this way, but nevertheless why is the first variant not working? Is it a version issue?
The actual documentation from R2010a:
diff(expr) differentiates a symbolic expression expr with respect to its free variable as determined by symvar.
diff(expr, v) and diff(expr, sym('v')) differentiate expr with respect to v.
diff(expr, n) differentiates expr n times. n is a positive integer.
diff(expr, v, n) and diff(expr, n, v) differentiate expr with respect to v n times.
So the command diff(f,x,y) falls into that last case: it would mean differentiating f with respect to x, y times, or with respect to y, x times.
For some reason I don't quite understand, you don't get a warning or error, but one of the syms variables gets interpreted as n = 1, and then the differentiation is carried out. In this case, what diff seems to do is basically diff(f, y, 1).
In any case, it seems that the behavior changed from version to version, because in the documentation you link to (R2016b), there is an additional case:
diff(F,var1,...,varN) differentiates F with respect to the variables var1,...,varN
So I suspect you're running into a version issue.
If you want to differentiate twice, both w.r.t x and y, your second attempt is indeed the correct and most portable way to do that:
diff( diff(f,x), y )
or equivalently
diff( diff(f,y), x )
NB
I checked the R2010a code for symbolic/symbolic/@sym/diff.m and indeed, n defaults to 1 and is only changed if one of the input arguments is a double, and the variable to differentiate over is set to the last symbolic variable in the argument list. The multiple-symbolic-variable call is not supported, nor detected and error-trapped.
syms only creates symbolic variables.
The first code you execute performs only a single differentiation, while the second differentiates twice. So I think you simply forgot to differentiate a second time in the first piece of code you provided.
I am also wondering what answer you expect. If you want 4*y as the answer, then you should use
diff(f,y)
and not
diff(f,x,y)
Taking the mixed second derivative gives zero:
diff(diff(f,x),y)
If you want 4 as the answer, then you have to do the following:
diff(diff(f,y),y)
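Putting those calls side by side as a quick check (the expected values follow directly from f(x,y) = (x-1)^2 + 2*y^2 having no mixed x*y term):

syms x y
f = (x-1)^2 + 2*y^2;

diff(diff(f, x), y)   % mixed derivative d^2f/(dx dy) -> 0
diff(f, y)            % df/dy                         -> 4*y
diff(diff(f, y), y)   % d^2f/dy^2                     -> 4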

Mix-up of bool and Datatypes.bool after Require Import Omega

I'm going through Software Foundations and ran into an error.
ERROR : The term "true" has type "bool" while it is expected to have type "Datatypes.bool"
for the proof below.
Theorem beq_nat_true : forall n m,
beq_nat n m = true -> n = m.
I found out that this is happening when I use Require Import Omega.
Any tips, suggestions or explanations of what Omega introduces into the environment?
The Omega module indirectly imports many definitions from the standard library that manipulate natural numbers, some of which end up shadowing parts of Software Foundations. The beq_nat function is one of them. The problem arises because the standard library's beq_nat returns standard booleans, whereas SF's version returns its own redefined booleans.
We noticed this problem a while ago, and have already fixed it in the current version. If you don't want to redownload everything (or if you have imported Omega yourself), you can also just qualify beq_nat to use the right version. My guess is that Basics.beq_nat should work.
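A minimal sketch of that workaround (assuming the Basics chapter has been compiled and Required as usual in the SF setup, so the qualified name resolves):

Require Import Omega.

(* Qualifying beq_nat pins it to the SF definition, so both sides of the
   equality use the booleans defined in Basics. *)
Theorem beq_nat_true : forall n m,
  Basics.beq_nat n m = true -> n = m.
Proof.
  (* the chapter's original proof should go through unchanged *)
Admitted.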