I want to modify the datatype that is currently defined as follows:
datatype assn =
Aemp (*r Empty heap *)
| Apointsto exp exp (infixl "⟼" 200) (*r Singleton heap *)
| Astar assn assn (infixl "**" 100) (*r Separating conjunction *)
| Awand assn assn (*r Separating implication *)
| Apure bexp (*r Pure assertion *)
| Aconj assn assn (*r Conjunction *)
| Adisj assn assn (*r Disjunction *)
| Aex "(nat ⇒ assn)" (*r Existential quantification *)
I want to modify the last line to allow a more flexible existential definition, something like Aex "('a ⇒ assn)". However, the IDE gives me the error message Extra type variables on right-hand side: "'a". By the way, in Coq I can write Aex (A: Type) (pp: A -> assn). Therefore, I wonder whether I can do the same in Isabelle, and how?
For the sake of putting the answer here too: in Isabelle you have to specify the type variables a datatype depends on, on its left-hand side. Therefore, you need to write:
datatype 'a assn =
...
And if you had several ones, you would write:
datatype ('a, 'b, 'c, 'd) assn =
...
Note that the error message "Extra XXX on right-hand side" means there is something on the right-hand side that is missing from the left-hand side of the equality. Depending on the context, this could be an extra variable, an extra type variable (like here), or something else.
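Applied to the question, here is a minimal sketch of the fixed declaration (assuming the other constructors stay as before; note that every recursive occurrence of assn must now be written 'a assn):
datatype 'a assn =
Aemp (*r Empty heap *)
| Apointsto exp exp (infixl "⟼" 200) (*r Singleton heap *)
| Astar "'a assn" "'a assn" (infixl "**" 100) (*r Separating conjunction *)
| Awand "'a assn" "'a assn" (*r Separating implication *)
| Apure bexp (*r Pure assertion *)
| Aconj "'a assn" "'a assn" (*r Conjunction *)
| Adisj "'a assn" "'a assn" (*r Disjunction *)
| Aex "'a ⇒ 'a assn" (*r Existential quantification *)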
When using the jooq-postgres-extension and inserting a row with a field value IntegerRange.integerRange(10, true, 20, true), the query translates it to cast('[10,20]' as int4range).
It's interesting that if I run the query select cast('[10,20]' as int4range) I get [10,21) which is not an inclusive interval anymore.
My problem is: when I read the row back in Jooq the integerRange.end is now 21 and not 20.
Is this a known issue, and is there a workaround other than the obvious one of subtracting 1 from the upper boundary?
From the PostgreSQL documentation on Range Types:
The built-in range types int4range, int8range, and daterange all use a canonical form that includes the lower bound and excludes the upper bound; that is, [). User-defined range types can use other conventions, however.
So the cast transforms '[10,20]' to '[10,21)'.
You can do:
select upper_inc(cast('[10,20]' as int4range));
upper_inc
-----------
f
to test the upper bound for inclusivity and modify:
select upper(cast('[10,20]' as int4range));
upper
-------
21
accordingly.
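If you want the inclusive upper bound back in one step, here is a small sketch combining the two functions (the CASE also covers range types whose canonical form keeps an inclusive upper bound):
SELECT CASE WHEN upper_inc(r) THEN upper(r)
            ELSE upper(r) - 1
       END AS inclusive_upper
FROM (SELECT cast('[10,20]' as int4range) AS r) AS t;
 inclusive_upper
-----------------
              20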
The jOOQ 3.17 RANGE type support (#2968) distinguishes between:
- discrete ranges (e.g. DateRange, IntegerRange, LongRange, LocalDateRange)
- non-discrete ranges (e.g. BigDecimalRange, LocalDateTimeRange, OffsetDateTimeRange, TimestampRange)
Much like in PostgreSQL, jOOQ treats these as the same:
WITH r (a, b) AS (
SELECT '[10,20]'::int4range, '[10,21)'::int4range
)
SELECT a, b, a = b
FROM r;
The result being:
|a |b |?column?|
|-------|-------|--------|
|[10,21)|[10,21)|true |
As you can see, PostgreSQL itself doesn't distinguish between the two identical ranges. While jOOQ maintains the information you give it, they're the same value in PostgreSQL. PostgreSQL itself won't echo back [10,20]::int4range to jOOQ, so you wouldn't be able to maintain this value in jOOQ.
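For instance, even a bare literal is echoed back in canonical form:
SELECT '[10,20]'::int4range;
 int4range
-----------
 [10,21)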
If you need the distinction, then why not use BigDecimalRange instead, which corresponds to numrange in PostgreSQL:
WITH r (a, b) AS (
SELECT '[10,20]'::numrange, '[10,21)'::numrange
)
SELECT a, b, a = b
FROM r;
Now, you're getting:
|a |b |?column?|
|-------|-------|--------|
|[10,20]|[10,21)|false |
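On the Java side, the value could then be built in the same shape as the question's call; the factory below is an assumption mirroring the IntegerRange.integerRange pattern, so check it against the jooq-postgres-extensions version you're on:
// Hypothetical mirror of the question's IntegerRange.integerRange(10, true, 20, true)
BigDecimalRange.bigDecimalRange(new BigDecimal("10"), true, new BigDecimal("20"), true)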
Let's assume I have the production:
Expression // These are my semantic actions
: Expression PLUS_TOKEN Expression ( create_node(Expression, Expression) )
| SimpleExpression ( SimpleExpression ) (* Returns a node of type expression *)
Notice how I can't tell which Expression is which in my topmost production's semantic action. How do I refer to the left and right Expression? What if three or more Expressions appear in the same production?
Reference: http://www.smlnj.org/doc/ML-Yacc/mlyacc002.html
According to the ML-Yacc documentation, we refer to non-terminals with the following notation:
{non-terminal}{n+1}
where n is the number of occurrences of the non-terminal to the left of the symbol. If the non-terminal occurs only once in the rule, we can just use its name.
Hence, the above example would look like this:
Expression // These are my semantic actions
: Expression PLUS_TOKEN Expression ( create_node(Expression1, Expression2) )
| SimpleExpression ( SimpleExpression ) (* Returns a node of type expression *)
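With three or more occurrences, the same left-to-right numbering continues. A hypothetical production reusing the question's create_node:
Expression
: Expression PLUS_TOKEN Expression PLUS_TOKEN Expression
( create_node(Expression1, create_node(Expression2, Expression3)) )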
Reading about compound types in Programming Scala, 2nd Edition, and I'm left with more questions than answers.
When you declare an instance that combines several types, you get a compound type:
trait T1
trait T2
class C
val c = new C with T1 with T2 // c's type: C with T1 with T2
In this case, the type of c is C with T1 with T2. This is an alternative to declaring a type that extends C and mixes in T1 and T2. Note that c is considered a subtype of all three types:
val t1: T1 = c
val t2: T2 = c
val c2: C = c
The question that comes to mind is: why the alternative? If you add something to a language, it is supposed to add some value; otherwise it is useless. Hence, what's the added value of compound types, and how do they compare to mixins, i.e. extends ... with ...?
Mixins and compound types are different notions:
https://docs.scala-lang.org/tour/mixin-class-composition.html
vs.
https://docs.scala-lang.org/tour/compound-types.html
Mixins are traits
trait T1
trait T2
class C
class D extends C with T1 with T2
val c = new D
A special case of that is when an anonymous class is used instead of D:
trait T1
trait T2
class C
val c = new C with T1 with T2 // (*)
Compound types are types
type T = Int with String with A with B with C
Type of c in (*) is a compound type.
The notion of mixins is from the world of classes, inheritance, OOP etc. The notion of compound types is from the world of types, subtyping, type systems, type theory etc.
The authors of "Programming Scala" mean that there is an alternative:
either to introduce D (then D extends two mixins, namely T1 and T2, and the type of c is D),
or not (then an anonymous class is used instead of D, and the type of c is a compound type).
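A small sketch of where the compound type itself adds value: it can appear anywhere a type is expected, without declaring a named class like D first (names here are illustrative):
trait T1 { def t1: String = "t1" }
trait T2 { def t2: String = "t2" }
class C
// The parameter type is a compound type; no named subclass is needed.
def useBoth(x: C with T1 with T2): String = x.t1 + x.t2
val c = new C with T1 with T2 // anonymous class; c's type is C with T1 with T2
useBoth(c) // "t1t2"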
I am looking through a book about PostgreSQL and find the following example:
SELECT
'a '::VARCHAR(2) = 'a '::TEXT AS "Text and varchar",
'a '::CHAR(2) = 'a '::TEXT AS "Char and text",
'a '::CHAR(2) = 'a '::VARCHAR(2) AS "Char and varchar";
which yields:
 Text and varchar | Char and text | Char and varchar
------------------+---------------+------------------
 t                | f             | t
which is strange; the book says:
The preceding example shows that 'a '::CHAR(2) equals 'a '::VARCHAR(2), but both have different lengths, which isn't logical. Also, it shows that 'a'::CHAR(2) isn't equal to 'a '::text. Finally, 'a '::VARCHAR(2) equals 'a'::text. The preceding example causes confusion because if variable a is equal to b, and b is equal to c, a should be equal to c according to mathematics.
But there is no explanation of why. Is this something about how the data is stored, or maybe some legacy behavior of the CHAR type?
The behavior you are seeing appears to be explained by Postgres doing some implicit casting behind the scenes. Consider the following slightly modified version of your query:
SELECT
'a '::VARCHAR(2) = 'a '::TEXT AS "Text and varchar",
'a '::CHAR(2) = CAST('a '::TEXT AS CHAR(2)) AS "Char and text",
'a '::CHAR(2) = 'a '::VARCHAR(2) AS "Char and varchar";
This returns true for all three comparisons. Apparently Postgres is casting both sides of the second comparison to text, and a[ ] ([ ] indicates a space) is not the same thing coming from a CHAR(2) as from text.
Recall that in order to do an A = B comparison in a SQL database, the types of A and B have to be the same. If they are not, then even if the equality comparison appears to work on its own, there is most likely an implicit cast happening under the hood.
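A quick way to see the two comparison semantics side by side (bpchar comparison treats trailing spaces as insignificant, text comparison does not):
SELECT
'a '::CHAR(2) = 'a'::CHAR(2) AS "Char and char",  -- t: trailing spaces ignored
'a '::TEXT = 'a'::TEXT AS "Text and text";        -- f: trailing space significant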
I'm trying to use Spark's PrefixSpan algorithm but it is comically difficult to get the data in the right shape to feed to the algo. It feels like a Monty Python skit where the API is actively working to confuse the programmer.
My data is a list of rows, each of which contains a list of text items.
a b c c c d
b c d e
a b
...
I have made this data available in two ways: an SQL table in Hive (where each row has an array of items) and text files where each line contains the items above.
The official example creates a Seq of Array(Array).
If I use sql, I get the following type back:
org.apache.spark.sql.DataFrame = [seq: array<string>]
If I read in text, I get this type:
org.apache.spark.sql.Dataset[Array[String]] = [value: array<string>]
Here is an example of an error I get (if I feed it data from sql):
error: overloaded method value run with alternatives:
[Item, Itemset <: Iterable[Item], Sequence <: Iterable[Itemset]](data: org.apache.spark.api.java.JavaRDD[Sequence])org.apache.spark.mllib.fpm.PrefixSpanModel[Item] <and>
[Item](data: org.apache.spark.rdd.RDD[Array[Array[Item]]])(implicit evidence$1: scala.reflect.ClassTag[Item])org.apache.spark.mllib.fpm.PrefixSpanModel[Item]
cannot be applied to (org.apache.spark.sql.DataFrame)
new PrefixSpan().setMinSupport(0.5).setMaxPatternLength(5).run( sql("select seq from sequences limit 1000") )
^
Here is an example if I feed it text files:
error: overloaded method value run with alternatives:
[Item, Itemset <: Iterable[Item], Sequence <: Iterable[Itemset]](data: org.apache.spark.api.java.JavaRDD[Sequence])org.apache.spark.mllib.fpm.PrefixSpanModel[Item] <and>
[Item](data: org.apache.spark.rdd.RDD[Array[Array[Item]]])(implicit evidence$1: scala.reflect.ClassTag[Item])org.apache.spark.mllib.fpm.PrefixSpanModel[Item]
cannot be applied to (org.apache.spark.sql.Dataset[Array[String]])
new PrefixSpan().setMinSupport(0.5).setMaxPatternLength(5).run(textfiles.map( x => x.split("\u0002")).limit(3))
^
I've tried to mold the data by using casting and other unnecessarily complicated logic.
This can't be so hard. Given a list of items (in the very reasonable format described above), how the heck do I feed it to PrefixSpan?
edit:
I'm on spark 2.2.1
Resolved:
A column in the table I was querying had collections in each cell. This was causing the returned result to be inside a WrappedArray. I changed my query so the result column only contained a string (by concat_ws). This made it MUCH easier to deal with the type error.