Right shift operator

I spent several hours today hunting for a bug until I found
something I don't understand.
This is the code I've been working with:
unsigned long k, l, m;
k = 1000;
l = 33;
m = k >> l;
It gives m = 500, i.e. it apparently shifts by l mod 32!
I would logically have expected 0.
Is this something I have simply overlooked and never noticed?

In C, I believe it is not specified what happens if you shift by an amount that is too large for the width of the value you are shifting, so having the shift count wrap is perfectly acceptable (and is what x86 and, IIRC, PowerPC do). In Java, the wrapping behavior you saw is mandatory.

The ANSI C standard says:
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
So, shifting by 33 isn't the same as 33 times shifting by one ...
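To make the masking concrete, here is a small C sketch. It assumes a platform where unsigned long is 32 bits wide and where the hardware masks the shift count to 5 bits (as x86 does); the mask is written out explicitly so the program itself stays well-defined:
#include <stdio.h>

int main(void) {
    unsigned long k = 1000, l = 33;

    /* k >> l itself is undefined behavior when unsigned long is 32
     * bits wide. x86 masks the shift count to the low 5 bits for
     * 32-bit operands, so the hardware effectively computes
     * k >> (l % 32) == k >> 1 == 500; other platforms may differ. */
    unsigned long masked = k >> (l & 31UL);   /* well-defined: 500 */
    printf("k >> (l & 31) = %lu\n", masked);
    printf("k >> 1        = %lu\n", k >> 1);  /* also 500 */
    return 0;
}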

Is there a possibility you are confusing 1 (one) and l (letter L)? This has happened to me before! (Double-check, perhaps, by changing it to x = 33; m = k >> x.)

Related

How to "absolute-ly" take from a number? (How to offset a number?)

Essentially what I need to do is a - b, but I don't know in advance what either value will be; whatever the sign of a, how can I take b from a in an "absolute" way?
What I mean is, for example, A = 10 and B = 5. The answer is obviously 5. If A were instead -10, the answer would be -5. The result leans towards 0 no matter what the numbers are. I strongly want to avoid using an if statement if I can.
My original idea was a - (b * (b / abs(b))), but b can be 0, and then a division-by-zero error occurs.
EDIT: A better way of saying it is that I want to offset the result towards zero by an amount, rather than do plain subtraction.
So do I understand correctly that whatever the value of A, you want it to bias towards zero (based on the initial value of A) by the value of B?
So given:
A=10 and B=5, the result will be 5.
A=-10 and B=5, the result will be -5.
A=-5 and B=-10, the result will be 5 (because it is offset by 10, towards zero from a starting point of -5).
Effectively the sign of B is immaterial because it specifies an offset towards zero in all cases.
What of the case where A=0 and B is non-zero? Is the result supposed to be undefined (because the appropriate direction of offset cannot be inferred), or is it supposed to be zero?
The formula for the latter case would be (ABS(A) - ABS(B)) * SIGN(A) (assuming that the sign function returns 0 when A is zero).
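For what it's worth, here is a minimal C sketch of that formula; the helper names sign and offset_toward_zero are mine, not from the question:
#include <math.h>
#include <stdio.h>

/* sign(x) returns -1, 0, or +1; it is 0 when x is 0, matching the
 * assumption stated above. */
static double sign(double x) { return (double)((x > 0) - (x < 0)); }

/* Offset a towards zero by the magnitude of b, with no if statement. */
static double offset_toward_zero(double a, double b) {
    return (fabs(a) - fabs(b)) * sign(a);
}

int main(void) {
    printf("%g\n", offset_toward_zero(10, 5));    /* 5  */
    printf("%g\n", offset_toward_zero(-10, 5));   /* -5 */
    printf("%g\n", offset_toward_zero(-5, -10));  /* 5  */
    printf("%g\n", offset_toward_zero(0, 5));     /* 0 (may print as -0) */
    return 0;
}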

MATLAB decimal places in array

Consider the following example:
Bathymetry = [0,4134066;
3,3817906;
6,3343666;
9,2978725;
12,2742092;
14,2584337;
16,2415355;
18,2228054;
20,2040753;
23,1761373;
26,1514085];
Depth = [0;1;2;3;5;8;10;11.6;15];
newDepth = min(Bathymetry(:,1)):0.1:max(Bathymetry(:,1));
From this I want to find which column of 'newDepth' corresponds to 'Depth'. For example:
dd = find(newDepth==Depth(1))
dd =
1
Showing that Depth == 0, is located in the first column of newDepth. When I apply this to all of the entries of 'Depth'
for i = 1:length(Depth)
    dd(i) = find(newDepth == Depth(i));
end
I receive an error:
Improper assignment with rectangular empty matrix.
Initially I couldn't understand why, but by looking at the array for newDepth, especially column 117 where newDepth == 11.6, I noticed that the value isn't equal to 11.6 but to 11.600000000000001, and thus differs from Depth(8). How can I fix this? And why does MATLAB not just write the value as 11.6? Nowhere have I specified to include the .000000000000001.
This is because there is no exact representation of 0.1 in binary. Read the wiki for more background. In binary, representing 0.1 is something like trying to write out all the decimals of one-third:
1/3 == 0.333333333333333333...
it will never be exact, no matter how many 3's you add.
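Here is a small C illustration of the effect (MATLAB's colon operator may compute the elements slightly differently, but the rounding story is the same): conceptually the 117th element is 116 * 0.1, which lands on a different double than the literal 11.6:
#include <stdio.h>

int main(void) {
    /* 0.1 has no exact binary representation, so stepping 116 times
     * by 0.1 does not reproduce the double nearest to 11.6. */
    double stepped = 116 * 0.1;
    double literal = 11.6;

    printf("stepped = %.17g\n", stepped);          /* 11.600000000000001 */
    printf("literal = %.17g\n", literal);          /* 11.6 */
    printf("equal   = %d\n", stepped == literal);  /* 0 */
    return 0;
}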
For this (and many other) reasons, I'd suggest you do not use == (which is a very stringent demand), but rather use
for ii = 1:length(Depth)
    [~, dd(ii)] = min(abs(newDepth - Depth(ii)));
end
This problem is to do with floating point arithmetic, which is quite complicated; I recommend you google it and read a bit, as there is plenty out there explaining it. Here is a good start: http://blogs.mathworks.com/loren/2006/08/23/a-glimpse-into-floating-point-accuracy/
To solve it for your case I would suggest rounding
newDepth = round(newDepth * 10) / 10
The 11.600000000000001 is because the number 11.6 is not exactly representable in binary floating point notation. This is to do with the way the hardware works rather than any limitation of Matlab.
You want to change your compare to something like
dd(i) = find(abs(newDepth - Depth(i))<.0000001);

Test if a floating point number is an integer in Matlab

My Question - part 1: What is the best way to test if a floating point number is an "integer" (in Matlab)?
My current solution for part 1: Obviously, isinteger is out, since this tests the type of an element, rather than the value, so currently, I solve the problem like this:
abs(round(X) - X) <= sqrt(eps(X))
But perhaps there is a more native Matlab method?
My Question - part 2: If my current solution really is the best way, then I was wondering if there is a general tolerance that is recommended? As you can see from above, I use sqrt(eps(X)), but I don't really have any good reason for this. Perhaps I should just use eps(X), or maybe 5 * eps(X)? Any suggestions would be most welcome.
An Example: In Matlab, sqrt(2)^2 == 2 returns false. But in practice, we might want that logical condition to return true. One can achieve this using the method described above, since sqrt(2)^2 actually equals 2 + eps(2) (i.e. well within the tolerance of sqrt(eps(2))). But does this mean I should always use eps(X) as my tolerance, or is there good reason to use a larger tolerance, such as 5 * eps(X), or sqrt(eps(X))?
UPDATE (2012-10-31): @FakeDIY pointed out that my question is partially a duplicate of this SO question (apologies, not sure how I missed it in my initial search). Given this, I'd like to emphasize the "tolerance" part of the question (which is not covered in that link), i.e. is eps(X) a sensible tolerance, or should I use something larger, like 5 * eps(X), and if so, why?
UPDATE (2012-11-01): Thanks everyone for the responses. I've +1'ed all three answers as I feel they all contribute meaningfully to various aspects of the question. I'm giving the answer tick to Eric Postpischil as that answer really nailed the tolerance part of the question well (and it has the most upvotes at this point in time).
No, there is no general tolerance that is recommended, and there cannot be.
The difference between a computed result and a mathematically ideal result is a function of the operations that produced the computed result. Because those operations are specific to each application, there is no general rule for testing any property of a computed result.
To design a proper test, you must determine what errors may have occurred during computation, determine bounds on the resulting error in the computed result, and test whether the computed result differs from the ideal result (perhaps the nearest integer) by less than those bounds. You must also decide whether those bounds are sufficiently small to satisfy your application’s requirements. (Using a relaxed test that accepts as an integer something that is not an integer decreases false negatives [incorrect rejections of a result as an integer where the ideal result would be an integer] but increases false positives [incorrect acceptances of a result as an integer where the ideal result would not be an integer].)
(Note that it can even be the case that testing as if the error bounds were zero can produce false positives: it is possible a computation produces a result that is exactly an integer when the ideal result is not an integer, so any error tolerance, even zero, will falsely report this result as an integer. If this is unacceptable for your application, then, in such a case, the computations must be redesigned.)
It is not only not possible to state, without specific knowledge of the application, a numerical tolerance that may be used, it is impossible to state whether the tolerance should be absolute, should be relative to the computed value or to a target value, should be measured in ULPs (units of least precision), or should be set in some other manner. This is because errors may be introduced into computations in a variety of ways. For example, if there is a small relative error in a and a and b are close in value, then a-b has a large relative error. Additionally, if c is large, then (a-b)*c has a large absolute error.
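As a concrete, deliberately non-general C sketch of such a test, with the tolerance left for the caller to supply from their own error analysis (note that MATLAB's eps(2) corresponds to 2 * DBL_EPSILON here):
#include <float.h>
#include <math.h>
#include <stdio.h>

/* The tolerance must come from an error analysis of how x was
 * computed; there is no universal value, per the answer above. */
static int is_near_integer(double x, double tol) {
    return fabs(x - round(x)) <= tol;
}

int main(void) {
    double x = sqrt(2.0) * sqrt(2.0);  /* ideally 2; actually 2 + one ulp */

    printf("x == 2       : %d\n", x == 2.0);                            /* 0 */
    printf("tol = eps(2) : %d\n", is_near_integer(x, 2 * DBL_EPSILON)); /* 1 */
    printf("tol = 0      : %d\n", is_near_integer(x, 0.0));             /* 0 */
    return 0;
}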
It's probably not the most efficient method, but I would use mod for this:
a = 15.0000000000;
b = mod(a,1.0)
c = 15.0000000001;
d = mod(c,1.0)
returns b = 0 and d = 1.0000e-010
There are a number of other alternatives suggested here:
How do I test for integers in MATLAB?
I like the idea of comparing (x == floor(x)) too.
1) I have historically used your method with a simple tolerance, eps(X). The mod methods interested me though, so I benchmarked a couple using Steve Eddins' timeit function.
f = @() abs(X - round(X)) <= eps(X);
g = @() X == round(X);
h = @() ~mod(X,1);
For single values, like X = 1.0, yours appears to be fastest:
timeit(f) = 7.3635e-006
timeit(g) = 9.9677e-006
timeit(h) = 9.9214e-006
For vectors though, like X = 1:0.01:100, the other methods are faster (though round still beats mod):
timeit(f) = 0.00076636
timeit(g) = 0.00028182
timeit(h) = 0.00040539
2) The error bound is really problem dependent. Other answers cover this much better than I am able to.

Set of unambiguous looking letters & numbers for user input

Is there an existing subset of the alphanumerics that is easier to read? In particular, is there a subset that has fewer characters that are visually ambiguous, and by removing (or equating) certain characters we reduce human error?
I know "visually ambiguous" is somewhat waffly of an expression, but it is fairly evident that D, O and 0 are all similar, and 1 and I are also similar. I would like to maximize the size of the set of alpha-numerics, but minimize the number of characters that are likely to be misinterpreted.
The only precedent I am aware of for such a set is the Canada Postal code system that removes the letters D, F, I, O, Q, and U, and that subset was created to aid the postal system's OCR process.
My initial thought is to use only capital letters and numbers as follows:
A
B = 8
C = G
D = 0 = O = Q
E = F
H
I = J = L = T = 1 = 7
K = X
M
N
P
R
S = 5
U = V = Y
W
Z = 2
3
4
6
9
This problem may be difficult to separate from the given type face. The distinctiveness of the characters in the chosen typeface could significantly affect the potential visual ambiguity of any two characters, but I expect that in most modern typefaces the above characters that are equated will have a similar enough appearance to warrant equating them.
I would be grateful for thoughts on the above – are the above equations suitable, or perhaps are there more characters that should be equated? Would lowercase characters be more suitable?
I needed a replacement for hexadecimal (base 16) for similar reasons (e.g. for encoding a key). The best I could come up with is the following set of 16 characters, which can be used as a replacement for hexadecimal:
0 1 2 3 4 5 6 7 8 9 A B C D E F Hexadecimal
H M N 3 4 P 6 7 R 9 T W C X Y F Replacement
In the replacement set, we consider the following:
All characters used have major distinguishing features that would only be omitted in a truly awful font.
Vowels A E I O U omitted to avoid accidentally spelling words.
Sets of characters that could potentially be very similar or identical in some fonts are avoided completely (none of the characters in any set are used at all):
0 O D Q
1 I L J
8 B
5 S
2 Z
By avoiding these characters completely, the hope is that the user will enter the correct characters, rather than trying to correct mis-entered characters.
For sets of less similar but potentially confusing characters, we only use one character in each set, hopefully the most distinctive:
Y U V
Here Y is used, since it always has the lower vertical section, and a serif in serif fonts
C G
Here C is used, since it seems less likely that a C would be entered as G, than vice versa
X K
Here X is used, since it is more consistent in most fonts
F E
Here F is used, since it is not a vowel
In the case of these similar sets, entry of any character in the set could be automatically converted to the one that is actually used (the first one listed in each set). Note that E must not be automatically converted to F if hexadecimal input might be used (see below).
Note that there are still similar-sounding letters in the replacement set; this is pretty much unavoidable. When reading aloud, a phonetic alphabet should be used.
Where characters that are also present in standard hexadecimal are used in the replacement set, they are used for the same base-16 value. In theory mixed input of hexadecimal and replacement characters could be supported, provided E is not automatically converted to F.
Since this is just a character replacement, it should be easy to convert to/from hexadecimal.
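A minimal C sketch of that conversion (the helper hex_to_rep is my naming, just to show the character-for-character mapping):
#include <stdio.h>
#include <string.h>

static const char HEX[] = "0123456789ABCDEF";
static const char REP[] = "HMN34P67R9TWCXYF";

/* Translate an uppercase hex string into the replacement alphabet;
 * characters outside the hex set pass through unchanged. */
static void hex_to_rep(const char *hex, char *out) {
    for (; *hex; ++hex) {
        const char *p = strchr(HEX, *hex);
        *out++ = p ? REP[p - HEX] : *hex;
    }
    *out = '\0';
}

int main(void) {
    char buf[16];
    hex_to_rep("DEADBEEF", buf);
    printf("%s\n", buf);  /* XYTXWYYF */
    return 0;
}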
Upper case seems best for the "canonical" form for output, although lower case also looks reasonable, except for "h" and "n", which should still be relatively clear in most fonts:
h m n 3 4 p 6 7 r 9 t w c x y f
Input can of course be case-insensitive.
There are several similar systems for base 32, see http://en.wikipedia.org/wiki/Base32 However these obviously need to introduce more similar-looking characters, in return for an additional 25% more information per character.
Apparently the following set was also used for Windows product keys in base 24, but again has more similar-looking characters:
B C D F G H J K M P Q R T V W X Y 2 3 4 6 7 8 9
My set of 23 unambiguous characters is:
c,d,e,f,h,j,k,m,n,p,r,t,v,w,x,y,2,3,4,5,6,8,9
I needed a set of unambiguous characters for user input, and I couldn't find anywhere that others have already produced a character set and set of rules that fit my criteria.
My requirements:
No capitals: this is supposed to be used in URIs, and typed by people who might not have a lot of typing experience, for whom even the shift key can slow things down and cause uncertainty. I also want someone to be able to say "all lowercase" so as to reduce uncertainty, so I want to avoid capital letters.
Few or no vowels: an easy way to avoid creating foul language or surprising words is to simply omit most vowels. I think keeping "e" and "y" is ok.
Resolve ambiguity consistently: I'm open to using some ambiguous characters, so long as I only use one character from each group (e.g., out of lowercase s, uppercase S, and five, I might only use five); that way, on the backend, I can just replace any of these ambiguous characters with the one correct character from their group. So, the input string "3Sh" would be replaced with "35h" before I look up its match in my database (see the sketch after the analysis below).
Only needed to create tokens: I don't need to encode information like base64 or base32 do, so the exact number of characters in my set doesn't really matter, besides my wanting it to be as large as possible. It only needs to be useful for producing random UUID-type id tokens.
Strongly prefer non-ambiguity: I think it's much more costly for someone to enter a token and have something go wrong than it is for someone to have to type out a longer token. There's a tradeoff, of course, but I want to strongly prefer non-ambiguity over brevity.
The confusable groups of characters I identified:
A/4
b/6/G
8/B
c/C
f/F
9/g/q
i/I/1/l/7 - just too ambiguous to use; note that european "1" can look a lot like many people's "7"
k/K
o/O/0 - just too ambiguous to use
p/P
s/S/5
v/V
w/W
x/X
y/Y
z/Z/2
Unambiguous characters:
I think this leaves only 9 totally unambiguous lowercase/numeric chars, with no vowels:
d,e,h,j,m,n,r,t,3
Adding back in one character from each of those ambiguous groups (and trying to prefer the character that looks most distinct, while avoiding uppercase), there are 23 characters:
c,d,e,f,h,j,k,m,n,p,r,t,v,w,x,y,2,3,4,5,6,8,9
Analysis:
Using the rule of thumb that a UUID with a numerical equivalent range of N possibilities is sufficient to avoid collisions for sqrt(N) instances:
an 8-digit UUID using this character set should be sufficient to avoid collisions for about 300,000 instances
a 16-digit UUID using this character set should be sufficient to avoid collisions for about 80 billion instances.
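Here is a minimal C sketch of the backend normalization described above, folding each confusable group onto its one canonical member (the fold helper is my naming). Case matters: per the groups listed, lowercase b folds to 6 while uppercase B folds to 8, and lowercase g folds to 9 while uppercase G folds to 6. The fully excluded groups (i/I/1/l/7 and o/O/0) are left untouched here, on the assumption that validation rejects them outright:
#include <stdio.h>

static char fold(char ch) {
    switch (ch) {
    case 'A': case 'a':  return '4';  /* A/4   */
    case 'b': case 'G':  return '6';  /* b/6/G */
    case 'B':            return '8';  /* 8/B   */
    case 'C':            return 'c';  /* c/C   */
    case 'F':            return 'f';  /* f/F   */
    case 'g': case 'q':  return '9';  /* 9/g/q */
    case 'K':            return 'k';  /* k/K   */
    case 'P':            return 'p';  /* p/P   */
    case 's': case 'S':  return '5';  /* s/S/5 */
    case 'V':            return 'v';  /* v/V   */
    case 'W':            return 'w';  /* w/W   */
    case 'X':            return 'x';  /* x/X   */
    case 'Y':            return 'y';  /* y/Y   */
    case 'z': case 'Z':  return '2';  /* z/Z/2 */
    default:             return ch;   /* already canonical, or excluded */
    }
}

int main(void) {
    char token[] = "3Sh";
    for (char *p = token; *p; ++p)
        *p = fold(*p);
    printf("%s\n", token);  /* 35h */
    return 0;
}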
Mainly drawing inspiration from this ux thread, mentioned by @rwb:
Several programs use similar things. The list in your post seems to be very similar to those used in these programs, and I think it should be enough for most purposes. You can always add redundancy (error-correction) to "forgive" minor mistakes; this will require you to space out your codes (see Hamming distance), though.
There are no references as to the particular method used in deriving the lists, except trial and error with humans (which is great for non-OCR use: your users are humans).
It may make sense to use character grouping (say, groups of 5) to increase context ("first character in the second of 5 groups")
Ambiguity can be eliminated by using complete nouns (from a dictionary with few look-alikes; word-edit-distance may be useful here) instead of characters. People may confuse "1" with "i", but few will confuse "one" with "ice".
Another option is to make your code into a (fake) word that can be read out loud. A Markov model may help you there.
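As a crude stand-in for a Markov model, even alternating consonants and vowels yields readable fake words; a C sketch (rand() is for illustration only, not for generating real tokens):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const char cons[] = "bdfghjkmnprstvwz";  /* 16 consonants */
    const char vows[] = "aeiou";             /* 5 vowels */
    char word[9];

    srand((unsigned)time(NULL));
    /* Alternate consonant, vowel, consonant, ... for 8 characters. */
    for (int i = 0; i < 8; ++i)
        word[i] = (i % 2 == 0) ? cons[rand() % 16] : vows[rand() % 5];
    word[8] = '\0';
    printf("%s\n", word);  /* e.g. "bakodemu" */
    return 0;
}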
If you have the option to use only capitals, I created this set based on characters which users commonly mistyped; however, this wholly depends on the font they read the text in.
Characters to use: A C D E F G H J K L M N P Q R T U V W X Y 3 4 6 7 9
Characters to avoid:
B similar to 8
I similar to 1
O similar to 0
S similar to 5
Z similar to 2
What you seek is an unambiguous, efficient human-computer code. What I recommend is to encode the entire data with literal (meaningful) words, nouns in particular.
I have been developing software to do just that, and most efficiently. I call it WCode. Technically it's just Base-1024 encoding, wherein you use words instead of symbols.
Here are the links:
Presentation: https://docs.google.com/presentation/d/1sYiXCWIYAWpKAahrGFZ2p5zJX8uMxPccu-oaGOajrGA/edit
Documentation: https://docs.google.com/folder/d/0B0pxLafSqCjKOWhYSFFGOHd1a2c/edit
Project: https://github.com/San13/WCode (Please wait while I get around to uploading...)
This would be a general problem in OCR. Thus, for end-to-end solutions where the OCR encoding is controlled, specialised fonts have been developed to solve the "visual ambiguity" issue you mention.
See: http://en.wikipedia.org/wiki/OCR-A_font
As additional information: you may want to know about Base32 encoding, wherein the symbol for the digit '1' is not used, as it may be confused with the symbol for the letter 'l'.
Unambiguous looking letters for humans are also unambiguous for optical character recognition (OCR). By removing all pairs of letters that are confusing for OCR, one obtains:
!+2345679:BCDEGHKLQSUZadehiopqstu
See https://www.monperrus.net/martin/store-data-paper
It depends how large you want your set to be. For example, just the set {0, 1} will probably work well. Similarly the set of digits only. But probably you want a set that's roughly half the size of the original set of characters.
I have not done this, but here's a suggestion. Pick a font, pick an initial set of characters, and write some code to do the following. Draw each character to fit into an n-by-n square of black and white pixels, for n = 1 through (say) 10. Cut away any all-white rows and columns from the edge, since we're only interested in the black area. That gives you a list of 10 codes for each character. Measure the distance between any two characters by how many of these codes differ. Estimate what distance is acceptable for your application. Then do a brute-force search for a set of characters which are that far apart.
Basically, use a script to simulate squinting at the characters and see which ones you can still tell apart.
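A toy C sketch of that idea, using rough hand-drawn 5x5 bitmaps rather than glyphs rendered from a real font; the pairwise distance is simply the number of differing pixels:
#include <stdint.h>
#include <stdio.h>

/* Each glyph is a 5x5 bitmap packed into 25 bits (rows top to bottom,
 * 5 bits per row, MSB of a row = leftmost pixel). These bitmaps are
 * rough approximations for illustration, not taken from any font. */
struct glyph { char ch; uint32_t bits; };

static int popcount(uint32_t x) {  /* number of set bits */
    int n = 0;
    for (; x; x &= x - 1) n++;
    return n;
}

int main(void) {
    const struct glyph g[] = {
        {'O', 0xE8C62E},  /* 01110 10001 10001 10001 01110 */
        {'0', 0xE9D72E},  /* 01110 10011 10101 11001 01110 */
        {'I', 0xE2108E},  /* 01110 00100 00100 00100 01110 */
        {'1', 0x46108E},  /* 00100 01100 00100 00100 01110 */
    };
    const int n = sizeof g / sizeof g[0];

    /* Pairwise "squint distance": differing pixels between bitmaps.
     * O vs 0 and I vs 1 come out close (distance 3), while O vs I is
     * far apart (distance 9), matching intuition. */
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            printf("%c vs %c: %d\n", g[i].ch, g[j].ch,
                   popcount(g[i].bits ^ g[j].bits));
    return 0;
}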
Here's some Python I wrote to encode and decode integers using the system of characters described above.
def base20encode(i):
    """Convert integer into base20 string of unambiguous characters."""
    if not isinstance(i, int):
        raise TypeError('This function must be called on an integer.')
    chars, s = '012345689ACEHKMNPRUW', ''
    while i > 0:
        i, remainder = divmod(i, 20)
        s = chars[remainder] + s
    return s

def base20decode(s):
    """Convert string to unambiguous chars and then return integer from resultant base20."""
    if not isinstance(s, str):
        raise TypeError('This function must be called on a string.')
    s = s.translate(bytes.maketrans(b'BGDOQFIJLT7KSVYZ', b'8C000E11111X5UU2'))
    chars, i, exponent = '012345689ACEHKMNPRUW', 0, 1
    for number in s[::-1]:
        i += chars.index(number) * exponent
        exponent *= 20
    return i
base20decode(base20encode(10))
base58:123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz

What does 8.309618000000001D-02 mean in QBASIC

I have a QBASIC program that basically consists of formulas and constants, and I want to translate the formulas and constants into a C++ program. Since the formulas are not rocket science and the program is well documented, I have no problem translating it, although I have not used or seen QBASIC before.
However, there is an initialization of a variable that reads abc(15) = 9.207134000000001D-02, and I am not sure how to interpret the D-02. I guess I should translate it as abc[15] = 0.09207134..., but I'd like to verify that this is correct.
If I recall correctly D-02 means times ten raised to the power minus 2.
So 8.309618000000001D-02 = 8.309618000000001 x 10^(-2)
which is roughly 0.08309618
I also think the D means the type of the number is a double.
EDIT: It's been ages since I wrote any QBASIC code
Yes, he is right: the D means that the number is a double, and the -2 after the D means it is multiplied by 10 to the power of negative 2, which makes it 0.08309618 to the precision of QBASIC's double-precision numbers (a 53-bit significand, if I remember correctly).
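A one-line C translation for verification (the variable name abc15 stands in for abc(15) from the original program):
#include <stdio.h>

int main(void) {
    /* QBASIC's D exponent marker makes the literal double precision;
     * C's e marker on a plain double literal plays the same role,
     * so abc(15) = 9.207134000000001D-02 becomes: */
    double abc15 = 9.207134000000001e-02;

    printf("%.8f\n", abc15);  /* 0.09207134 */
    return 0;
}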