LogicBlox simple rule intermediate representation

I have a logiql file with many "complicated" rules.
Here are some examples:
tuple1(x), tuple2(x), function1[y, z] = x <- in_tuple1(x), in_tuple2(x, y), in_tuple3[x, y] = z.
tuple1(x,y) <- (in_tuple1(x,z), in_tuple2(y,z)); in_tuple2(x,y)
For my purposes it would be much better to have only rules in the simple form: only one derived tuple per rule and no "OR" combinations of rules.
Does logicblox offer some intermediate representation output that only consists of the simpler rules?

I think intermediate representations are created, but I don't know how to unearth them. Even if I did, my first advice would still be to write the simpler rules you want yourself.
I'm quite confident that the first example can be re-written as follows.
Example 1 Before
tuple1(x),
tuple2(x),
function1[y, z] = x
<-
in_tuple1(x),
in_tuple2(x, y),
in_tuple3[x, y] = z.
Example 1 After
tuple1(x) <- in_tuple1(x), in_tuple2(x, y), in_tuple3[x, y] = _.
tuple2(x) <- in_tuple1(x), in_tuple2(x, y), in_tuple3[x, y] = _.
/** alternatively
tuple1(x) <- function1[_, _] = x.
tuple2(x) <- function1[_, _] = x.
**/
function1[y, z] = x
<-
in_tuple1(x),
in_tuple2(x, y),
in_tuple3[x, y] = z.
I'm a little less confident about the second one. No conflicts between the two rules jump out at me, but if there is a problem you may get a functional dependency violation, which shows up in the output or logs as "Error: Function cannot contain conflicting records."
Example 2 Before (assumed complete clause with "." at end)
tuple1(x,y)
<-
(
in_tuple1(x,z),
in_tuple2(y,z)
)
;
in_tuple2(x,y).
Example 2 After
tuple1(x,y)
<-
in_tuple1(x,z),
in_tuple2(y,z).
tuple1(x,y)
<-
in_tuple2(x,y).


Conditional Future in Scala

Given these two futures, I need to run the first one only if a condition is true (see if y>2). But I get the exception "Future.filter predicate is not satisfied". What does this mean, and how can I fix the example?
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object TestFutures extends App {
  val f1 = Future {
    1
  }
  val f2 = Future {
    2
  }
  val y = 1
  val x = for {
    x1 <- f1 if y > 2
    x2 <- f2
  } yield x1 + x2
  Thread.sleep(5000)
  println(x)
}
The question contains some terminology issues.
Given the requirement, "I need to run the first (future) one only if a condition is true", then one possible implementation of the requirement would be:
val f1 = if (cond) Some(Future(op)) else None
This is because a Future starts executing as soon as it is created.
Going back to the expression in the question:
val x = for {
  x1 <- f1 if y > 2
  x2 <- f2
} yield x1 + x2
This is saying "I want the result of f1 if(cond)" and not "I want to execute f1 if(cond)".
One way to do that is the following (note how the futures are created inside the for-comprehension, while the condition sits outside):
val x = if (y > 2) {
  for {
    x1 <- Future(op1)
    x2 <- Future(op2)
  } yield x1 + x2
} else ???
The proper use of guards in for-comprehensions is to evaluate an expression against values coming from the computation expressed by the for-comprehension. For example:
"I want to execute f2 only if the result of f1 is greater than y"
val x = for {
  x1 <- f1
  x2 <- Future(op) if x1 > y
} yield x1 + x2
Note how the condition here involves an intermediate result of the computation (x1 in this case).
One side note: to wait for the result of a future, use Await.result(fut, duration) instead of Thread.sleep(duration).
filter is not really something you should be able to do on a Future - what would a Future that didn't pass the condition return? In your example, we still need a value for x1 (even if it fails the if) to use in yield x1 + x2.
Therefore, the filter method on Future is designed to fail hard when the predicate evaluates to false. It is an "assert" of sorts. You probably want something like this instead (it provides a default value for x1 if the condition fails):
val x = for {
  x1 <- if (y > 2) f1 else Future.successful(/* some-default-value-for-x1 */)
  x2 <- f2
} yield x1 + x2
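Putting the pieces together, here is a minimal runnable sketch of the default-value approach (the default value 0, the concrete futures, and the helper name conditionalSum are my own; Await is used instead of Thread.sleep, as suggested above):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Run the first future only when the condition holds; otherwise fall
// back to a default value (0 here) so the for-comprehension still composes.
def conditionalSum(y: Int): Int = {
  val x = for {
    x1 <- if (y > 2) Future(1) else Future.successful(0)
    x2 <- Future(2)
  } yield x1 + x2
  Await.result(x, 5.seconds)
}

println(conditionalSum(1)) // condition false: the default is used, prints 2
println(conditionalSum(3)) // condition true: the first future runs, prints 3
```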

Function Overloading Mechanism

class X
class Y extends X
class Z extends Y

class M {
  def f(x: X): String = "f with X at M"
  def f(x: Y): String = "f with Y at M"
}

class N extends M {
  override def f(x: Y): String = "f with Y at N"
  def f(x: Z): String = "f with Z at N"
}
val z: Z = new Z
val y: Y = z
val x: X = y
val m: M = new N
println(m.f(x))
// m dynamically matches as type N and sees x as type X thus goes into class M where it calls "f with X at M"
println(m.f(y))
// m dynamically matches as type N and sees y as type Y where it calls "f with Y at N"
println(m.f(z))
// m dynamically matches as type N and sees z as type Z where it calls "f with Z at N"
Consider this code. I don't understand why the final call println(m.f(z)) doesn't behave as I wrote in the comments - is there a good resource for understanding how overloading works in Scala?
Thanks!
Firstly, overloading in Scala works the same as in Java.
Secondly, it's about static and dynamic binding. Let's find out what the compiler sees. You have an object m: M. Class M has the methods f(X) and f(Y). When you call m.f(z), the compiler resolves that f(Y) should be called, because Z is a subclass of Y. This is the important point: the compiler doesn't know the real class of the m object, which is why it knows nothing about the method N.f(Z). This is called static binding: the compiler resolves the method's signature. Later, at runtime, dynamic binding happens: the JVM knows the real class of m and calls f(Y), which is overridden in N.
I hope my explanation is clear enough.
Because the overload is resolved statically against type M, m.f(z) matches f(Y) (since Z is a subclass of Y); at runtime, N's override of f(Y) is the one that actually runs.
When you write
val m: M = new N
it means that m is capable of doing everything that class M can. M has two methods: one that takes an X, the other a Y.
Hence, when you call
m.f(z)
the compiler searches for a method that can accept z (of type Z). The method in N is not a candidate here, for two reasons:
The reference is of type M.
Your N does not override any method of M that accepts an argument of type Z. You do have a method in N that accepts a Z, but it is not a candidate because it does not override anything from M.
The best match is the f in M that accepts a Y, because Z is-a Y.
You can get the behavior your last comment describes if:
You define a method in M that takes an argument of type Z and then override it in N, or
You declare the reference with type N, e.g. val m: N = new N.
I think the existing questions on SO already elaborate this point.
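To make the resolution concrete, here is the example reduced to the single confusing call, runnable as-is (a sketch; the comments restate the static-then-dynamic resolution described above):

```scala
class X
class Y extends X
class Z extends Y

class M {
  def f(x: X): String = "f with X at M"
  def f(x: Y): String = "f with Y at M"
}

class N extends M {
  override def f(x: Y): String = "f with Y at N"
  def f(x: Z): String = "f with Z at N"
}

val z: Z = new Z
val m: M = new N

// The compiler only sees M's overloads and picks f(Y), since Z <: Y;
// at run time the JVM dispatches to N's override of f(Y).
println(m.f(z)) // prints "f with Y at N", not "f with Z at N"
```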

How to write/code several functions as one

I am trying to write a line composed of two segments as a single equation:
y = m1*x + c1 , for x<=x1
y = m2*x + c2 , for x>=x1
My questions are:
How can I write the function of this combined line as a single equation?
How can I write multiple functions (valid in separate regions of a linear parameter space) as a single equation?
Please explain both how to express this mathematically and how to program this in general and in Matlab specifically.
You can write this equation as a single line by using the Heaviside step function, https://en.wikipedia.org/wiki/Heaviside_step_function.
Combining two functions into one:
In fact, what you are trying to do is
f(x) = a(x) (for x < x1)
f(x) = q (for x = x1), where q = a(x1) = b(x1)
f(x) = b(x) (for x > x1)
The (half-maximum) Heaviside function is defined as
H(x) = 0 (for x < 0)
H(x) = 0.5 (for x = 0)
H(x) = 1 (for x > 0)
Hence, your function will be
f(x) = H(x1-x) * a(x) + H(x-x1) * b(x)
and, therefore,
f(x) = H(x1-x) * (m1*x+c1) + H(x-x1) * (m2*x+c2)
If you want to implement this, note that many programming languages will allow you to write something like
f(x) = (x<x1)?a(x):b(x)
which means if x<x1, then return value a(x), else return b(x), or in your case:
f(x) = (x<x1)?(m1*x+c1):(m2*x+c2)
Matlab implementation:
In Matlab, you can write simple functions such as
a = @(x) m1.*x+c1;
b = @(x) m2.*x+c2;
assuming that you have previously defined m1, m2, and c1, c2.
There are several ways to implement the Heaviside function:
If you have the Symbolic Math Toolbox for Matlab, you can directly use heaviside() as a function.
@AndrasDeak (see comments below) pointed out that you can write your own half-maximum Heaviside function H in Matlab by entering
iif = @(varargin) varargin{2 * find([varargin{1:2:end}], 1, 'first')}();
H = @(x) iif(x<0,0,x>0,1,true,0.5);
If you want a continuous function that approximates the Heaviside function, you can use a logistic function H defined as
H = @(x) 1./(1+exp(-100.*x));
Independently of your implementation of the Heaviside function H, you can create a one-liner in the following way (I am using x1=0 for simplicity):
a = @(x) 2.*x + 3;
b = @(x) -1.5.*x + 3;
Which allows you to write your original function as a one-liner:
f = @(x) H(-x).*a(x) + H(x).*b(x);
You can then plot this function, for example from -10 to 10, by writing plot(-10:10, f(-10:10)).
Generalization:
Imagine you have
f(x) = a(x) (for x < x1)
f(x) = q (for x = x1), where q = a(x1) = b(x1)
f(x) = b(x) (for x1 < x < x2)
f(x) = r (for x = x2), where r = b(x2) = c(x2)
f(x) = c(x) (for x2 < x < x3)
f(x) = s (for x = x3), where s = c(x3) = d(x3)
f(x) = d(x) (for x3 < x)
By multiplying Heaviside functions, you can now determine zones where specific functions will be computed.
f(x) = H(x1-x)*a(x) + H(x-x1)*H(x2-x)*b(x) + H(x-x2)*H(x3-x)*c(x) + H(x-x3)*d(x)
PS: I just realized that one of the comments above mentions the Heaviside function, too. Kudos to @AndrasDeak.
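The same construction works outside Matlab as well; here is a sketch in Scala of the x1 = 0 example above (the names heaviside, a, b, f mirror the text and are otherwise my own):

```scala
// Half-maximum Heaviside step function: 0 below 0, 1 above 0, 0.5 at 0.
def heaviside(x: Double): Double =
  if (x < 0) 0.0 else if (x > 0) 1.0 else 0.5

def a(x: Double): Double = 2.0 * x + 3.0  // left segment
def b(x: Double): Double = -1.5 * x + 3.0 // right segment

// f(x) = H(x1 - x) * a(x) + H(x - x1) * b(x), with x1 = 0
def f(x: Double): Double = heaviside(-x) * a(x) + heaviside(x) * b(x)

println(f(-2.0)) // left segment:  2 * (-2) + 3 = -1.0
println(f(2.0))  // right segment: -1.5 * 2 + 3 = 0.0
println(f(0.0))  // both segments meet at 3, so the half-maximum average is 3.0
```

The half-maximum value at the breakpoint only matters when the two segments do not meet; here they agree at x = 0, so f is continuous.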

Using a function handle with equation and Simpson's rule

I'm working on a problem that applies a luminosity equation:
E = 64.77 * T^(-4) * ∫ x^(-5) * ( e^(1.432/(T*x)) - 1 )^(-1) dx
where T = 3500,
to Simpson's rule, which is a few sums and such.
problem 17.8 here: http://my.safaribooksonline.com/book/computer-aided-engineering/9780123748836/-introduction-to-numerical-methods/ch17lev1sec10
What I've done is make a function simpson(fn, a, b, h) that runs Simpson's rule correctly.
However, the problem is making that integral equation into a function handle that works. I've gotten it to work for simple function handles like
f = @(x) x.^2
but when I try to make the integral into a function:
fn = @(x)(64.77/T^4).*integral((x.^(-5)).*((exp(((1.432)./(3500.*x)))).^(-1)), 4e-5, 7e-5);
simp(fn, 5, 15, 1)
function s = simp(fn, a, b, h)
    x1 = a + 2*h : 2*h : b - 2*h;
    sum1 = sum(feval(fn, x1));
    x2 = a + h : 2*h : b - h;
    sum2 = sum(feval(fn, x2));
    s = h/3 * (feval(fn, a) + feval(fn, b) + 4*sum2 + 2*sum1);
end
It doesn't work; the error message is "Integral: first input must be function handle."
Any help appreciated.
You're supposed to evaluate the integral using Simpson's rule, but you are using integral to compute it, so fn is not a function of x at all. You want to do this:
fn = @(x)(x.^(-5)).*((exp(((1.432)./(3500.*x)))).^(-1));
I = simp(fn, a, b, h);
E = (64.77/T^4)*I;

Isabelle: degree of polynomial multiplied with constant

I am working with the library HOL/Library/Polynomial.thy.
A simple property didn't work, e.g. that the degree of 2x * 2 is equal to the degree of 2x.
How can I prove the lemmas (i.e., remove "sorry"):
lemma mylemma:
fixes y :: "('a::comm_ring_1 poly)" and x :: "('a::comm_ring_1)"
shows "1 = 1" (* dummy *)
proof-
have "⋀ x. degree [: x :] = 0" by simp
from this have "⋀ x y. degree (y * [: x :] ) = degree y" sorry
(* different notation: *)
from this have "⋀ x y. degree (y * (CONST pCons x 0)) = degree y" sorry
.
From Manuel's answer, the solution I was looking for:
have 1: "⋀ x. degree [: x :] = 0" by simp
{
fix y :: "('a::comm_ring_1 poly)" and x :: "('a::comm_ring_1)"
from 1 have "degree (y * [: x :]) ≤ degree y"
by (metis Nat.add_0_right degree_mult_le)
}
There are a number of issues here.
First of all, the statement you are trying to show simply does not hold for all x. If x = 0 and y is nonconstant, e.g. y = [:0,1:], you have
degree (y * [: x :]) = degree 0 = 0 ≠ 1 = degree y
The obvious way to fix this is to assume x ≠ 0.
However, this is not sufficient either, since you only assumed 'a to be a commutative ring. However, in a commutative ring, in general, you can have zero divisors. Consider the commutative ring ℤ/4ℤ. Let x = 2 and y = [:0,2:].
Then y * [:x:] = [:0,4:], but 4 = 0 in ℤ/4ℤ. Therefore y * [:x:] = 0, and therefore, again,
degree (y * [: x :]) = degree 0 = 0 ≠ 1 = degree y
So, what you really need is one of the following two:
the assumption x ≠ 0 and 'a::idom instead of 'a::comm_ring. idom stands for "integral domain", which is simply a commutative ring with a 1 and without zero divisors
more generally, the assumption that x is not a zero divisor
even more generally, the assumption that x * y ≠ 0 or, equivalently, x times the leading coefficient of y is not 0
Also, the usage of ⋀ in Isar proofs is somewhat problematic at times. The "proper" Isar way of doing this would be:
fix x :: "'a::idom" and y :: "'a poly"
assume "x ≠ 0"
hence "degree (y * [:x:]) = degree y" by simp
The relevant lemmas are degree_mult_eq and degree_smult_eq; you will see that they require the coefficient type to be an idom. This works for the first case I described above; the other two will require some more manual reasoning, I think.
EDIT: just a small hint: you can find theorems like this by typing
find_theorems "degree (_ * _)"
If you try to apply the degree_mult_eq that find_theorems shows to your situation (with comm_ring), you will find that it fails even though the terms seem to match. When that happens, it is usually a type issue, so you can write something like
from [[show_sorts]] degree_mult_eq
to see what the types and sorts required by the lemma are, and it says idom.