Is the colon symbol interchangeable with the set membership symbol in the Z specification language?

In the Z specification language, is the colon symbol ':' a mere variation of the set membership symbol '∈'? Can they always be used interchangeably? In particular, are the following expressions legal in Z?

IMHO, No.
Although every set can be used as a type [source: page 11], the syntax for declaring a variable x of type T is well-defined and it unambiguously uses only the colon symbol : [source: page 15].
The subtle distinction between the notion of Type and the notion of membership is that a variable x can only be of one given Type, but it may belong to (possibly infinitely) many Sets.
Since this kind of writing qualifies as an abuse of notation, you may be required to include a disclaimer in your research paper/document that clarifies the notation you are using and what the advantage of doing so is (I see none). In this regard, I would invite you to look at the research literature and at your peers, and to stick to the convention used by the surrounding environment.
This
{ x ∈ Z | x < 100 }
does not respect the syntax for set comprehensions:
{ x : T | pred(x) ● expr(x) }
The set of all elements that result from evaluating
expr(x) for all x of type T for which pred(x)
holds.
When expr(x) is equal to x, i.e. when we return
the element itself, we can omit expr(x) and write
{ x : T | pred(x) }
[source: pages 7,8]
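As a loose analogy only (Python, not Z), the comprehension form { x : T | pred(x) ● expr(x) } lines up with a set comprehension, where the declaration, the predicate, and the term each have a direct counterpart; here a finite range stands in for the integers:

```python
# Rough Python analogue of the Z set comprehension
#   { x : T | pred(x) . expr(x) }
# with the (infinite) integers played by a finite range.
T = range(-10, 10)

# { x : T | x < 5 . x * x }  --  squares of every x in T below 5
squares = {x * x for x in T if x < 5}

# When expr(x) is just x itself, the term part is omitted:
# { x : T | x < 5 }
small = {x for x in T if x < 5}

print(sorted(small))
```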
This
λx ∈ Z • x + 1
does not respect the syntax for lambda functions [source: page 24]:
λa : S | p • e
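Again only as a loose analogy (Python, not Z), the lambda form builds an anonymous function, with the declaration part fixing the domain:

```python
# Python stand-in for the Z lambda  λ x : Z • x + 1
succ = lambda x: x + 1
print(succ(41))  # -> 42
```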

Related

Variable scope propagation in k

I've seen a variable scope propagation to the inner function in previous versions of k. See eval: {[t;c]{x*t+y}/c} in http://www.math.bas.bg/bantchev/place/k.html
But if I try to do the same in modern k, I get an error:
KDB+ 3.6 2018.05.17 Copyright (C) 1993-2018 Kx Systems
q)\
{[k]{x*x}k}3
9
{[k]{x*x+k}k}3
'k
[2] k){x*x+k}
^
)
So why does this error happen? Is such variable scope propagation 'banned' in modern q?
Indeed, k4, the most recent implementation of k by Kx, does not support closures. In fact, the article you refer to mentions that in a section called "Changes to the Language":
K4/q is a change over K3 in a number of significant ways, such as:
...
Nested functions in K4 and q cannot refer to surrounding function's local
variables. (Often, the lack of this ability can be circumvented by
making use of function projection.)
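The projection workaround the article alludes to is essentially partial application: instead of the inner function capturing the outer local, the outer function passes it in explicitly. A rough sketch of the two styles in Python (not q), using the eval example from the question:

```python
from functools import partial, reduce

# Closure style: the inner function reads t from the enclosing
# scope (what k2 allowed and what k4/q forbids).
def eval_closure(t, c):
    return reduce(lambda x, y: x * t + y, c)

# "Projection" style: t is passed in explicitly and fixed up front
# by partial application, mirroring q's function projection.
def step(t, x, y):
    return x * t + y

def eval_projected(t, c):
    return reduce(partial(step, t), c)

print(eval_closure(10, [1, 2, 3]))    # -> 123
print(eval_projected(10, [1, 2, 3]))  # -> 123
```

Both evaluate the polynomial with coefficients c at t Horner-style; only the second avoids referring to a surrounding function's local from inside the inner function.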
It turns out the lack of support for lexical scoping has not always been the case. Although the only officially documented language nowadays is q, one can still find a reference manual for k2, an implementation of k circa 1998, for example here: http://www.nsl.com/k/k2/k295/kreflite.pdf. Section "Local functions" on page 158 reads:
Local Functions
Suppose that the function g is defined within the body of another
function f and uses the variable x in its definition, where x is local
to f. Then x is a constant in g, not a variable, and its value is the
current one when g is defined. For example, if:
f:{b:3; g:{b}; b:4; g[]}
The value of f is the value of the local function g, which turns out to be 3, the value of b when g is defined,
not the subsequent value 4.
f[]
3
(I highly recommend reading the whole document, by the way).
I don't know why support for closures was dropped, but I suspect it was for performance reasons, especially during interprocess communication.

What does the pipe character (|) mean when part of a purescript type signature?

I've been unable to figure this out exactly from reading the available docs.
In the records section of the type docs, it seems to have to do with row polymorphism, but I don't understand its general usage. What does it mean when a type signature contains a | symbol?
For example:
class Monad m <= MonadTell w m | m -> w where
  tell :: w -> m Unit
The pipe in PureScript is not used "generally"; there are multiple uses of it depending on the context. One, as you mentioned, is in row types. Another is in function guards.
The specific syntax you're quoting is called a "functional dependency". It is a property of a multi-parameter type class, and it specifies that some variables must be unambiguously determined by others.
In this particular case, the syntax means "for every m there can be only one w".
Or, in plainer language, a given m cannot be MonadTell for several different ws.
Functional dependencies show up in many other places. For example:
-- For every type `a` there is only one generic representation `rep`
class Generic a rep | a -> rep where
-- Every newtype `t` wraps only one unique inner type `a`
class Newtype t a | t -> a where
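One way to internalise a functional dependency is to think of the instance table as a mathematical function from the determining type variables to the determined ones. A toy Python model (purely illustrative; this is nothing like how the PureScript compiler works) of the m -> w constraint:

```python
# Toy model of a type class with functional dependency  m -> w :
# each m determines exactly one w, so registering a conflicting
# second w for the same m is rejected, just as the compiler would
# reject the overlapping instances.
class FunDepTable:
    def __init__(self):
        self.table = {}

    def register(self, m, w):
        if m in self.table and self.table[m] != w:
            raise TypeError(
                f"{m} already determines {self.table[m]}, not {w}")
        self.table[m] = w

instances = FunDepTable()
instances.register("Writer String", "String")   # fine
instances.register("Writer String", "String")   # re-stating is fine
try:
    instances.register("Writer String", "Int")  # violates m -> w
except TypeError as err:
    print("rejected:", err)
```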

What exactly is a Set in Coq

I'm still puzzled what the sort Set means in Coq. When do I use Set and when do I use Type?
In HoTT, a Set is defined as a type where identity proofs are unique.
But I think in Coq it has a different interpretation.
Set means rather different things in Coq and HoTT.
In Coq, every object has a type, including types themselves. Types of types are usually referred to as sorts, kinds or universes. In Coq, the (computationally relevant) universes are Set, and Type_i, where i ranges over natural numbers (0, 1, 2, 3, ...). We have the following inclusions:
Set <= Type_0 <= Type_1 <= Type_2 <= ...
These universes are typed as follows:
Set : Type_i for any i
Type_i : Type_j for any i < j
As in HoTT, this stratification is needed to ensure logical consistency. As Antal pointed out, Set behaves mostly like the smallest Type, with one exception: it can be made impredicative when you invoke coqtop with the -impredicative-set option. Concretely, this means that forall X : Set, A is of type Set whenever A is. In contrast, forall X : Type_i, A is of type Type_(i + 1), even when A has type Type_i.
The reason for this difference is that, due to logical paradoxes, only the lowest level of such a hierarchy can be made impredicative. You may then wonder why Set is not made impredicative by default. This is because an impredicative Set is inconsistent with a strong form of the axiom of the excluded middle:
forall P : Prop, {P} + {~ P}.
What this axiom allows you to do is to write functions that can decide arbitrary propositions. Note that the {P} + {~ P} type lives in Set, and not Prop. The usual form of the excluded middle, forall P : Prop, P \/ ~ P, cannot be used in the same way, because things that live in Prop cannot be used in a computationally relevant way.
In addition to Arthur's answer:
From the fact that Set is located at the bottom of the hierarchy,
it follows that Set is the type of the “small” datatypes and function types, i.e. the ones whose values do not directly or indirectly involve types.
That means the following will fail:
Fail Inductive Ts : Set :=
| constrS : Set -> Ts.
with this error message:
Large non-propositional inductive types must be in Type.
As the message suggests, we can amend it by using Type:
Inductive Tt : Type :=
| constrT : Set -> Tt.
Reference:
The Essence of Coq as a Formal System by B. Jacobs (2013), pdf.

What does to_unsigned do?

Could someone please explain to me how VHDL's to_unsigned works, or confirm that my understanding is correct?
For example:
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31))
Here is my understanding:
-30 is a signed value, represented in bits as 1111111111100010
all bits should be inverted and '1' added to build the value of C
0000000000011101 + 0000000000000001 == 0000000000011110
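The inversion-plus-one mechanics described here can be checked mechanically; a quick Python sketch (independent of what to_unsigned actually does):

```python
# Two's-complement negation of -30 in 16 bits: invert, then add one.
x = -30
bits16 = x & 0xFFFF
print(format(bits16, '016b'))    # -> 1111111111100010
negated = (~bits16 + 1) & 0xFFFF
print(format(negated, '016b'))   # -> 0000000000011110  (= 30)
```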
In IEEE package numeric_std, the declaration for TO_UNSIGNED:
-- Id: D.3
function TO_UNSIGNED (ARG, SIZE: NATURAL) return UNSIGNED;
-- Result subtype: UNSIGNED(SIZE-1 downto 0)
-- Result: Converts a non-negative INTEGER to an UNSIGNED vector with
-- the specified SIZE.
You won't find a declared function to_unsigned with an ARG or SIZE parameter declared as type INTEGER. What is the consequence?
Let's put that in a Minimal, Complete, and Verifiable example:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity what_to_unsigned is
end entity;
architecture does of what_to_unsigned is
signal C: std_logic_vector (31 downto 0);
begin
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31));
end architecture;
A VHDL analyzer will give us an error:
ghdl -a what_to_unsigned.vhdl
what_to_unsigned.vhdl:12:53: static constant violates bounds
ghdl: compilation error
And it tells us that -30 (line 12, character 53) has a bounds violation: the numerical literal, converted to universal_integer, doesn't convert to type natural in the function to_unsigned.
A different tool might tell us a bit more graphically:
nvc -a what_to_unsigned.vhdl
** Error: value -30 out of bounds 0 to 2147483647 for parameter ARG
File what_to_unsigned.vhdl, Line 12
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31));
^^^
And actually tells us where in the source code the error is found.
It's safe to say what you think to_unsigned does is not what the analyzer thinks it does.
VHDL is a strongly typed language; you tried to provide a value in a place where that value is out of range for the argument ARG in function TO_UNSIGNED, declared in IEEE package numeric_std.
The type NATURAL is declared in package standard and is made visible by an implicit declaration library std; use std.standard.all; in the context clause. (See IEEE Std 1076-2008, 13.2 Design libraries):
Every design unit except a context declaration and package STANDARD is
assumed to contain the following implicit context items as part of its
context clause:
library STD, WORK; use STD.STANDARD.all;
The declaration of natural found in 16.3 Package STANDARD:
subtype NATURAL is INTEGER range 0 to INTEGER'HIGH;
A value declared as a NATURAL is a subtype of INTEGER that has a constrained range excluding negative numbers.
And about here you can see that, with access to a VHDL standard-compliant tool and to IEEE Std 1076-2008, the IEEE Standard VHDL Language Reference Manual, you could have answered this question yourself.
The TL;DR detail
You could note that 9.4 Static expressions, 9.4.1 General gives permission to evaluate locally static expressions during analysis:
Certain expressions are said to be static. Similarly, certain discrete ranges are said to be static, and the type marks of certain subtypes are said to denote static subtypes.
There are two categories of static expression. Certain forms of expression can be evaluated during the analysis of the design unit in which they appear; such an expression is said to be locally static.
Certain forms of expression can be evaluated as soon as the design hierarchy in which they appear is elaborated; such an expression is said to be globally static.
There may be some standard compliant tools that do not evaluate locally static expressions during analysis. "can be" is permissive not mandatory. The two VHDL tools demonstrated on the above code example take advantage of that permission. In both tools the command line argument -a tells the tool to analyze the provided file which is if successful, inserted into the current working library (WORK by default, see 13.5 Order of analysis, 13.2 Design libraries).
Tools that evaluate bounds checking at elaboration for locally static expressions are typically purely interpretive and even that can be overcome with a separate analysis pass.
The VHDL language can be used for formal specification of a design model used in formal proofs within the bounds specified by Annex D Potentially nonportable constructs and when relying on pure functions only (See 4.Subprograms and packages, 4.1 General).
VHDL compliant tools are guaranteed to give the same results, although there is no standardization of error messages nor limitations placed on tool implementation methodology.
to_unsigned is for converting between different types:
signal i : integer := 2;
signal u : unsigned(3 downto 0);
...
u <= i; -- Error, incompatible types
u <= to_unsigned(i, 4); -- OK, conversion function does the right thing
If you try to convert a negative integer, this is an error.
u <= to_unsigned(-2, 4); -- Error, does not work with negative numbers
If you simply want to negate an integer, i.e. 2 becomes -2, 5 becomes -5, just use the - operator:
u <= to_unsigned(-i, 4); -- OK as long as `i` was negative or zero
If you want the absolute value, a function for this is provided by the numeric_std library.
u <= to_unsigned(abs(i), 4);
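To sum up the contract, here is a rough Python model of to_unsigned's bounds rules (an illustration only, not the IEEE source; note one simplification: real numeric_std truncates oversize values with a warning, whereas this sketch rejects them outright):

```python
def to_unsigned(arg, size):
    # ARG is declared NATURAL, so a negative value violates its
    # bounds (caught at analysis time for locally static expressions).
    if arg < 0:
        raise ValueError(f"value {arg} out of bounds for parameter ARG")
    # Simplification: numeric_std would truncate with a warning here.
    if arg >= 2 ** size:
        raise ValueError(f"value {arg} does not fit in {size} bits")
    return format(arg, "0{}b".format(size))

print(to_unsigned(30, 8))  # -> 00011110
try:
    to_unsigned(-30, 31)   # the expression from the question
except ValueError as err:
    print("error:", err)
```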

What are some examples of type-level programming? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I do not understand what "type-level programming" means, nor can I find a suitable explanation using Google.
Could someone please provide an example that demonstrates type-level programming? Explanations and/or definitions of the paradigm would be useful and appreciated.
You're already familiar with "value-level" programming, whereby you manipulate values such as 42 :: Int or 'a' :: Char. In languages like Haskell, Scala, and many others, type-level programming allows you to manipulate types like Int :: * or Char :: *, where * is the kind of a concrete type (Maybe a or [a] are concrete types, but not Maybe or [], which have kind * -> *).
Consider this function
foo :: Char -> Int
foo x = fromEnum x
Here foo takes a value of type Char and returns a new value of type Int using the Enum instance for Char. This function manipulates values.
Now compare foo to this type family, enabled with the TypeFamilies language extension.
type family Foo (x :: *)
type instance Foo Char = Int
Here Foo takes a type of kind * and returns a new type of kind * using the simple mapping Char -> Int. This is a type level function that manipulates types.
This is a very simple example and you might wonder how this could possibly be useful. Using more powerful language tools, we can begin to encode proofs of the correctness of our code at the type level (for more on this, see the Curry-Howard correspondence).
A practical example is a red-black tree that uses type level programming to statically guarantee that the invariants of the tree hold.
A red-black tree has the following simple properties:
A node is either red or black.
The root is black.
All leaves are black. (All leaves are the same colour as the root.)
Every red node must have two black child nodes.
Every path from a given node to any of its descendant leaves contains the same number of black nodes.
We'll use DataKinds and GADTs, a very powerful type level programming combination.
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import GHC.TypeLits
First, some types to represent the colours.
data Colour = Red | Black -- promoted to types via DataKinds
This defines a new kind Colour inhabited by two types: Red and Black. Note that there are no values (ignoring bottoms) inhabiting these types, but we aren't going to need them anyway.
The red-black tree nodes are represented by the following GADT
-- 'c' is the Colour of the node, either Red or Black
-- 'n' is the number of black child nodes, a type level Natural number
-- 'a' is the type of the values this node contains
data Node (c :: Colour) (n :: Nat) a where
-- all leaves are black
Leaf :: Node Black 1 a
-- black nodes can have children of either colour
B :: Node l n a -> a -> Node r n a -> Node Black (n + 1) a
-- red nodes can only have black children
R :: Node Black n a -> a -> Node Black n a -> Node Red n a
The GADT lets us express the Colour of the R and B constructors directly in the types.
The root of the tree looks like this
data RedBlackTree a where
RBTree :: Node Black n a -> RedBlackTree a
Now it is impossible to create a well-typed RedBlackTree that violates any of the properties mentioned above.
The first constraint is obviously true, there are only 2 types inhabiting Colour.
From the definition of RedBlackTree the root is black.
From the definition of the Leaf constructor, all leaves are black.
From the definition of the R constructor, both its children must be Black nodes. As well, the number of black child nodes of each subtree is equal (the same n is used in the type of both the left and right subtrees).
All these conditions are checked at compile time by GHC, meaning that we will never get a runtime exception from some misbehaving code invalidating our assumptions about a red-black tree. Importantly, there is no runtime cost associated with these extra benefits, all the work is done at compile time.
In most statically typed languages you have two "domains": the value level and the type level (some languages have even more). Type-level programming involves encoding logic (often function abstraction) in the type system, which is evaluated at compile time. Examples include C++ template metaprogramming and Haskell type families.
A few language extensions are needed to do this example in Haskell, but you can ignore them for now and just look at the type family as being a function, only over type-level numbers (Nat).
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}
import GHC.TypeLits
import Data.Proxy
-- value-level
odd :: Integer -> Bool
odd 0 = False
odd 1 = True
odd n = odd (n-2)
-- type-level
type family Odd (n :: Nat) :: Bool where
Odd 0 = False
Odd 1 = True
Odd n = Odd (n - 2)
test1 = Proxy :: Proxy (Odd 10)
test2 = Proxy :: Proxy (Odd 11)
Here instead of testing whether a natural number value is an odd number, we're testing whether a natural number type is an odd number and reducing it to a type-level Boolean at compile-time. If you evaluate this program the types of test1 and test2 are computed at compile-time to:
λ: :type test1
test1 :: Proxy 'False
λ: :type test2
test2 :: Proxy 'True
That's the essence of type-level programming, depending on the language you may be able to encode complex logic at the type-level which have a variety of uses. For example to restrict certain behavior at the value-level, manage resource finalization, or store more information about data-structures.
The other answers are very nice, but I want to emphasize one point. Our programming language theory of terms is based strongly on the Lambda Calculus. A "pure" Lisp corresponds (more or less) to a heavily-sugared untyped Lambda Calculus. The meaning of programs is defined by the evaluation rules that say how the Lambda Calculus terms are reduced as the program runs.
In a typed language, we assign types to terms. For every evaluation rule, we have a corresponding type rule that shows how the types are preserved by evaluation. Depending on the type system, there are also other rules defining how types relate to one another. It turns out that once you get a sufficiently interesting type system, the types and their system of rules also correspond to a variant of the Lambda Calculus!
Although it's common to think of Lambda Calculus as a programming language now, it was originally designed as a system of logic. This is why it is useful for reasoning about the types of terms in a programming language. But the programming language aspect of Lambda Calculus allows one to write programs that are evaluated by the type checker.
Hopefully you can see now that "type-level programming" is not a substantially different thing from "term-level programming"; it's just that it's not very common to have a language whose type system is powerful enough that you'd have a reason to write programs in it.