In the R6RS grammar for numbers there is this rule:
<complex r> => .... | <real r> # <real r>
If I evaluate the "number" 2#2 in MIT Scheme, I get this strange complex number:
1 ]=> 2#2
;Value: -.8322936730942848+1.8185948536513634i
I have not found it documented anywhere what this rule means or what kinds of numbers one can write with this syntax. Where could I find a definition of this? Where does this notation come from?
EDIT:
I found this link. The notation dates back to 1985.
It's polar notation for complex numbers <magnitude>#<angle>. I've never found documentation for it other than the syntax but I'd guess that <angle> is in radians.
(magnitude 2#2) => 2.
(angle 2#2) => 2.
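As a quick cross-check outside Scheme, here is a minimal Python sketch (assuming the <magnitude>#<angle> reading above) that converts magnitude 2 and angle 2 radians to rectangular form; it reproduces the value MIT Scheme printed:
import cmath

z = cmath.rect(2, 2)    # magnitude 2, angle 2 radians -> rectangular form
print(z)                # approx. (-0.8322936730942848+1.8185948536513634j)
print(abs(z))           # magnitude, approx. 2.0
print(cmath.phase(z))   # angle in radians, approx. 2.0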
I am analyzing some MATLAB code where there is the following operator: ../. I cannot find any documentation on this operator explaining what it does. Can anyone explain it to me?
sp(it,:) = (ww).*(1../sigt).*exp(-.5*(e(it,:).^2)./(sigt.^2))*srpfrac);
Just being pedantic.
There is no ../ operator. The first . is associated with the 1, indicating a radix point, and the ./ is element-wise division. This was likely written by someone who is used to Python, where numeric literals are integers unless a radix point is explicitly included. The more verbose equivalent is:
1.0 ./ sigt
In your case, the 0 has been omitted as it is optional.
To improve readability and avoid future confusion, I would just change it to the following:
1 ./ sigt
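For comparison, and since Python habits were mentioned above, here is a small NumPy sketch of the same element-wise idea; the sigt values are made up for illustration:
import numpy as np

sigt = np.array([0.5, 1.0, 2.0])   # hypothetical stand-in for MATLAB's sigt vector
print(1.0 / sigt)                  # element-wise reciprocal, like MATLAB's 1 ./ sigt -> [2. 1. 0.5]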
How can I convert nat to Q (Rational) in Coq?
I want to be able to write things like this:
Require Import Coq.QArith.QArith.
Open Scope Q_scope.
Definition a := 2/3.
When I try to do this, Coq tells me:
Error: The term "2" has type "nat" while it is expected to have type "Q".
You can write something like:
Definition a := Z.of_nat 2 # Pos.of_nat 3.
The # operator is just notation for the Qmake constructor of the Q type. That constructor takes elements of Z and positive as arguments, so you need the casts to be able to put nats in there.
If you're using literal number syntax, you can also use Z and positive directly:
Definition a := 2 # 3.
The difference is that this definition won't mention the conversions from nat; the numbers will already be in the right type, because Coq interprets the number notation as a Z and a positive directly.
I personally don't like the standard Coq rational number library very much, because it uses equivalence rather than Leibniz equality; that is, the elements 1 # 1 and 2 # 2 of Q are equivalent as rational numbers, but they are not equal according to Coq's equality:
Goal (1 # 1 <> 2 # 2).
congruence.
Qed.
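As a rough analogy in Python rather than Coq (a sketch only, with an invented non-normalizing pair type), the same distinction looks like this: equivalence compares cross-products, while structural equality compares the representations themselves.
from dataclasses import dataclass

@dataclass(frozen=True)
class Q:
    num: int   # numerator, playing the role of Coq's Z
    den: int   # denominator, playing the role of Coq's positive

def qeq(a, b):
    # Equivalence of rationals: the values agree iff the cross-products agree
    return a.num * b.den == b.num * a.den

print(qeq(Q(1, 1), Q(2, 2)))   # True  -- equivalent as rational numbers
print(Q(1, 1) == Q(2, 2))      # False -- not equal as representations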
There's a feature called setoid rewrite that allows you to pretend that they are equal. It works by only allowing you to rewrite under functions that you have proved to be compatible with the notion of equivalence on Q. However, there are still cases where it is harder to use than Leibniz equality.
You can also try the rat library of the Ssreflect and MathComp packages (see the documentation here). It has a definition of rationals that works with Leibniz equality, and it is more comprehensive than Coq's.
I'm trying to teach basic set theory to Prover9. The following definition of membership seems to work very well (the second axiom is just to make lists unordered):
member(x,[x:y]).
[x,y]=[y,x].
With this, I can have Prover9 prove 'complicated' things like member([A,B],[C,[A,B]]) and others.
However, I must be doing something wrong when I use it to define subsets:
subset(x,y) <-> (member(z,x) -> member(z,y)).
Prover9 clausifies this as subset(x,y) | -member(z,y) and uses it to prove false clauses, like subset([A],[B,C]).
What am I missing?
Your "second axiom ... just to make lists unordered" looks suspicious.
Note that [x,y] is a two-element list, so your axiom says nothing about lists in general. Your 'complicated' examples are still two-element lists, so not very complicated. I think you'll be unable to prove member(A, [C, B, A]).
Contrast that with [x:y] in your first axiom, which is a list of one or more elements: the y might be nil, or any number of elements. In other words, y there is a list, whereas in [x,y], y is an element of the list.
See http://www.cs.unm.edu/~mccune/prover9/manual/2009-11A/, 'Clauses & Formulas' at 'list notation'.
I'd go:
member(x, [x:y]).
member(x, z) -> member(x, [y:z]).
(But that defines a bag, not a set.)
I think the quantification over variables is a red herring.
Edit: Errk. I'm wrong. So I don't know why I got this result:
The example that @Doug points to doesn't need to quantify z 'inside' the rhs of the equivalence. You can just remove all the explicit quantification, and the proof still works.
OK. Let's apply the rewrite rules per the Manual 'Clauses & Formulas' at "If non-clausal formulas are entered ...".
That definition of subset is an equivalence (aka bi-implication); rewrite it as two separate axioms, the 'forwards' and 'backwards' implications, then rewrite each of those using p -> q ==> -p | q.
In the 'forwards' direction we get:
-subset(x, y) | (member(z, x) -> member(z, y)).
It doesn't matter whether the z is quantified narrowly or widely.
In the 'backwards' direction:
-(member(z, x) -> member(z, y)) | subset(x, y).
Here, if we quantify z narrowly around the inner implication, that quantifier sits inside the scope of the negation, and so it is not the same as quantifying z across the whole formula (which is what Prover9 does with a free variable). Clausifying with z quantified across the whole formula yields, among other clauses, -member(z, y) | subset(x, y): if anything at all is a member of y, then subset(x, y) holds for any x, which is exactly how a 'false' clause like subset([A],[B,C]) gets proved.
Conclusion: both your definition of member( ) and of subset( ) are wrong.
BTW, are you sure you need to define member? The proof @Doug points to just takes member(x,y) as primitive.
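For intuition outside Prover9, the intended reading of subset keeps z quantified inside the right-hand side, as in this small Python paraphrase (the list values are chosen only for illustration):
def member(z, xs):
    return z in xs

def subset(xs, ys):
    # z is quantified *inside* the definition: every member of xs is a member of ys
    return all(member(z, ys) for z in xs)

print(subset(["A"], ["B", "C"]))        # False, as it should be
print(subset(["A"], ["C", "A", "B"]))   # True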
I'm interested in multi-level data integrity checking and correcting, where multiple error-correcting codes are used (they can be two codes of the same type). I'm under the impression that a system using two codes would achieve maximum effectiveness if the two hash codes being used were orthogonal to each other.
Is there a list of which codes are orthogonal to what? Or do you need to use the same hashing function but with different parameters or usage?
I expect that the first-level ECC will be a Reed-Solomon code, though I do not actually have control over this first function, hence I cannot use a single code with improved capabilities.
Note that I'm not concerned with encryption security.
Edit: This is not a duplicate of
When are hash functions orthogonal to each other?, because that question essentially asks what the definition of orthogonal hash functions is. I want examples of hash functions that are orthogonal.
I'm not certain it is even possible to enumerate all orthogonal hash functions. However, you only asked for some examples, so I will endeavour to provide some as well as some intuition as to what properties seem to lead to orthogonal hash functions.
From a related question, these two functions are orthogonal to each other:
Domain: Reals --> Codomain: Reals
f(x) = x + 1
g(x) = x + 2
This is a pretty obvious case. It is easier to determine orthogonality if the hash functions are (both) perfect hash functions such as these are. Please note that the term "perfect" is meant in the mathematical sense, not in the sense that these should ever be used as hash functions.
It is a more or less trivial case for perfect hash functions to satisfy orthogonality requirements. Whenever the functions are injective they are perfect hash functions and are thus orthogonal. Similar examples:
Domain: Integers --> Codomain: Integers
f(x) = 2x
g(x) = 3x
In the previous case, the functions are injective but not bijective: each element of the domain maps to a distinct element of the codomain, but many elements of the codomain are not mapped to at all. These are still adequate for both perfect hashing and orthogonality. (Note that if the domain/codomain were Reals, these would be bijections.)
Functions that are not injective are more tricky to analyze. However, it is always the case that if one function is injective and the other is not, they are not orthogonal:
Domain: Reals --> Codomain: Reals
f(x) = e^x // Injective -- every x produces a unique value
g(x) = x^2 // Not injective -- every positive number is produced by two different x's
So one trick is thus to know that one function is injective and the other is not. But what if neither is injective? I do not presently know of an algorithm for the general case that will determine this other than brute force.
Domain: Naturals --> Codomain: Naturals
j(x) = ceil(sqrt(x))
k(x) = ceil(x / 2)
Neither function is injective, in this case because of the obviously non-injective ceil combined with a restricted domain. (In practice most hash functions will not have a domain more permissive than integers.) Testing out values will show that j has non-unique results where k does not, and vice versa:
j(1) = ceil(sqrt(1)) = ceil(1) = 1
j(2) = ceil(sqrt(2)) = ceil(~1.41) = 2
k(1) = ceil(1 / 2) = ceil(0.5) = 1
k(2) = ceil(2 / 2) = ceil(1) = 1
But what about these functions?
Domain: Integers --> Codomain: Reals
m(x) = cos(x^3) % 117
n(x) = ceil(e^x)
In these cases, neither of the functions is injective (due to the modulus and the ceil), but when do they have a collision? More importantly, for which pairs of inputs do both functions have a collision? These questions are hard to answer. I would suspect they are not orthogonal, but without a specific counterexample, I'm not sure I could prove that.
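As a sketch of the brute-force approach mentioned above (the helper name and the sampled range are my own choices), you can search a finite slice of the domain for inputs that collide under both functions at once:
import math
from itertools import combinations

def m(x):
    return math.cos(x ** 3) % 117

def n(x):
    return math.ceil(math.exp(x))

def simultaneous_collisions(f, g, domain):
    # Brute force: pairs x != y that collide under BOTH f and g at the same time
    return [(x, y) for x, y in combinations(domain, 2)
            if f(x) == f(y) and g(x) == g(y)]

# An empty result over a finite sample is only evidence, not a proof.
print(simultaneous_collisions(m, n, range(-200, 201)))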
These are not the only hash functions you could encounter, of course. So the trick to determining orthogonality is: first, see if both functions are injective; if so, they are orthogonal. Second, see if exactly one is injective; if so, they are not orthogonal. Third, look at the pieces of each function that cause it to not be injective, see if you can determine their periods or special cases (such as x = 0), and try to come up with counterexamples. Fourth, visit Math Stack Exchange and hope someone can tell you where they break orthogonality, or prove that they don't.
I'm working on a legacy COM C++ project that makes use of Systems Hungarian notation. Because it's maintenance of legacy code, the convention is to code in the original style it was written in - our newer code isn't coded this way. So I'm not interested in changing that standard or having a discussion of our past sins =)
Is there an online cheat-sheet available out there for systems hungarian notation?
The best I can find thus far is a pre-Stack Overflow discussion post, but it doesn't quite have everything I've needed in the past. Does anyone have any other links?
(making this community wiki in the hope this becomes a self-populating list)
If this is for a legacy COM project, you'll probably want to follow Microsoft's Hungarian Notation specifications, which are documented on MSDN.
Note that this is Apps Hungarian, i.e. the "good" kind of Hungarian Notation. Systems Hungarian is the "bad" kind, where names are prefixed with their compiler types, e.g. i for int.
Tables from the MSDN article
Table 1. Some examples for procedure names
Name Description
InitSy Takes an sy as its argument and initializes it.
OpenFn fn is the argument. The procedure will "open" the fn. No value is returned.
FcFromBnRn Returns the fc corresponding to the bn,rn pair given. (The names cannot tell us what the types sy, fn, fc, and so on, are.)
The following is a list of standard type constructions. (X and Y stand for arbitrary tags. According to standard punctuation, the actual tags are lowercase.)
Table 2. Standard type constructions
pX Pointer to X.
dX Difference between two instances of type X. X + dX is of type X.
cX Count of instances of type X.
mpXY An array of Ys indexed by X. Read as "map from X to Y."
rgX An array of Xs. Read as "range X." The indices of the array are called:
iX index of the array rgX.
dnX (rare) An array indexed by type X. The elements of the array are called:
eX (rare) Element of the array dnX.
grpX A group of Xs stored one after another in storage. Used when the X elements are of variable size and standard array indexing would not apply. Elements of the group must be referenced by means other than direct indexing. A storage allocation zone, for example, is a grp of blocks.
bX Relative offset to a type X. This is used for field displacements in a data structure with variable size fields. The offset may be given in terms of bytes or words, depending on the base pointer from which the offset is measured.
cbX Size of instances of X in bytes.
cwX Size of instances of X in words.
The following are standard qualifiers. (The letter X stands for any type tag. Actual type tags are in lowercase.)
Table 3. Standard qualifiers
XFirst The first element in an ordered set (interval) of X values.
XLast The last element in an ordered set of X values. XLast is the upper limit of a closed interval, hence the loop continuation condition should be: X <= XLast.
XLim The strict upper limit of an ordered set of X values. Loop continuation should be: X < XLim.
XMax Strict upper limit for all X values (excepting Max, Mac, and Nil) for all other X: X < XMax. If X values start with X=0, XMax is equal to the number of different X values. The allocated length of a dnx vector, for example, will be typically XMax.
XMac The current (as opposed to constant or allocated) upper limit for all X values. If X values start with 0, XMac is the current number of X values. To iterate through a dnx array, for example:
for x=0 step 1 to xMac-1 do ... dnx[x] ...
or
for ix=0 step 1 to ixMac-1 do ... rgx[ix] ...
XNil A distinguished Nil value of type X. The value may or may not be 0 or -1.
XT Temporary X. An easy way to qualify the second quantity of a given type in a scope.
Table 4. Some common primitive types
f Flag (Boolean, logical). If qualifier is used, it should describe the true state of the flag. Exception: the constants fTrue and fFalse.
w Word with arbitrary contents.
ch Character, usually in ASCII text.
b Byte, not necessarily holding a coded character, more akin to w. Distinguished from the b constructor by the capital letter of the qualifier immediately following.
sz Pointer to first character of a zero terminated string.
st Pointer to a string. First byte is the count of characters cch.
h pp (in heap).
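To see how the tags, constructions, and qualifiers above compose, here is a small illustrative sketch (the names are invented for the example, and the language is incidental):
rgch = list("legacy")         # rgch: rg (array) of ch (characters)
cch = len(rgch)               # cch:  c (count) of ch
ichFirst, ichLim = 0, cch     # ich:  i (index) into rgch; Lim = strict upper limit
for ich in range(ichFirst, ichLim):   # loop continuation: ich < ichLim
    ch = rgch[ich]                    # ch: a single character element of rgch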
Here's one for 'Systems Hungarian', which in my experience was the more commonly used (and less useful):
http://web.mst.edu/~cpp/common/hungarian.html
But how universally followed this is, I have no idea.
The other form of Hungarian Notation is "Apps Hungarian", which apparently is Simonyi's original intent (the use of the variable was encoded rather than the type). See http://en.wikipedia.org/wiki/Hungarian_notation for some details.
Because this is a legacy project, your software department manager should have a copy of the style guide for whatever version of Hungarian Notation the original programmers used. (I'm assuming that the original programmers have long since fled to more enlightened workplaces.)
You really should reconsider your use of Hungarian notation. It was originally a patch for the lack of strong typing (and compiler type-checking) in C. Modern compilers enforce type-correctness, making Hungarian notation redundant at best, and erroneous otherwise.
There doesn't seem to be any one exhaustive resource for looking up Hungarian Notation prefixes, probably because a lot of it varied from code base to code base. There, of course, were a lot of very commonly used ones.
The best list I could find was here
The rest cover the commonly used conventions such as this entry
MSDN's entry on Hungarian Notation is here
and a couple of short papers on the subject (overlapping each other perhaps) here and here
Your best bet would be to see how the variables are used; that may help you figure out the definitions of the prefixes (though in practice the naming rarely reflected the use of the variable, sadly).
You might be able to piece together some semblance of notation from those various links.
Just to be complete(!), how about Hungarian Object Notation for Visual Basic, from Microsoft Support no less.