For PixelSearch/ImageSearch, is there a difference between "If (ErrorLevel = 1)" and "If (OutputX="")" for checking if the pixel/image is not found? - autohotkey

I notice when using PixelSearch or ImageSearch, if I want to check if the image/pixel is not found, I can use either:
PixelSearch, OutputX, OutputY, 0, 0, 1920, 1080, 0xFFFFFF, 0, Fast RGB
If (ErrorLevel = 1)
{
; Code
}
or this:
PixelSearch, OutputX, OutputY, 0, 0, 1920, 1080, 0xFFFFFF, 0, Fast RGB
If (OutputX = "")
{
; Code
}
This also seems to work: testing whether OutputX equals "" (the empty string) appears to indicate that the pixel/image cannot be found.
The AutoHotkey documentation (https://www.autohotkey.com/docs/commands/PixelSearch.htm) does say the output variables are "made blank" if the coordinates cannot be found. I think additionally testing whether OutputY is empty is redundant, since if one of them is blank then both of them are, correct?
As far as I can tell, the only difference is the restrictions required for the two checks to be equivalent:
You have to test ErrorLevel = 1 rather than just ErrorLevel, because ErrorLevel can also be 2: a value of 2 means the search itself could not be conducted, while 1 means the pixel/image was not found. For the output-variable method, I think you also have to specify the output variables in the command, because OutputX and OutputY may otherwise hold values from a previous search.
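To make the comparison concrete, here is a sketch of the combined check I have in mind (the coordinates and color are just placeholders):
PixelSearch, OutputX, OutputY, 0, 0, 1920, 1080, 0xFFFFFF, 0, Fast RGB
if (ErrorLevel = 2)
{
    ; The search itself could not be conducted
}
else if (ErrorLevel = 1) ; equivalent, as far as I can tell, to: else if (OutputX = "")
{
    ; Pixel/image not found
}
else
{
    ; Found at OutputX, OutputY
}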
If there is no functional difference given those restrictions, are there any differences in performance?

Related

What is the swift way of creating an Array of numbers of a certain pattern?

The pattern of numbers I'm looking to create follows the pattern:
[100, 200, 400, 800, 1600, 3200, 6400]
I know I can accomplish this with a for loop, but coming from languages like Python, I wonder if there is a Swift way to do this. Thanks.
Use the sequence(first:next:) function. It creates an infinite sequence. You can then use prefix to get a finite number of elements.
sequence(first: 100) { $0 * 2 }.prefix(7)
If you then convert that to an array, you can print it out in a human readable format:
// [100, 200, 400, 800, 1600, 3200, 6400]
print(Array(sequence(first: 100) { $0 * 2 }.prefix(7)))
Another way is to map over a ClosedRange. Technically it's still a loop, but in one line:
(0...6).map { pow(2, $0) * 100 } // Foundation's pow(Decimal, Int); use 100 << $0 for Int results

PDF417 decode and generate the same barcode using Swift

I have the following example of a PDF417 barcode:
which can be decoded with an online tool like zxing
as the following result: 5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e�¼ô���������¬‚C`Ìe%�æ‹�ÀsõbÿG)=‡x‚�qÀ1ß–[FzùŽûVû�É�üæ±RNI�Y[.H»Eàó¼åñüì²�tØ¿ªWp…Ã�{�Õ*
or online-qrcode-generator
as 5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e~|~~~~~~~~~~d~C`~e%~~~~;To~B~{~dj9v~~Z[Xm~~"HP3~~LH~~~O~"S~~,~~~~~~~k1~~~u~Iw}SQ~fqX4~mbc_ (I don't know which encoding is used to encode this)
The first part of the encoded key that the barcode contains is always known, and it is 5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e
The second part can be decoded from the base64 string and always contains 88 bytes. In my case it is:
Frz0DAAAAAAAAAAArIJDYMxlJQDmiwHAc/Vi/0cpPYd4ghlxwDHflltGevmO+1b7GckT/OZ/sVJOSRpZWy5Iu0Xg87zl8fzssg502L+qV3CFwxZ/ewjVKg==
I'm using Swift on iOS device to generate this PDF417 barcode by decoding the provided base64 string like this:
let base64Str = "Frz0DAAAAAAAAAAArIJDYMxlJQDmiwHAc/Vi/0cpPYd4ghlxwDHflltGevmO+1b7GckT/OZ/sVJOSRpZWy5Iu0Xg87zl8fzssg502L+qV3CFwxZ/ewjVKg=="
let knownKey = "5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e"
let decodedData = Data(base64Encoded: base64Str.replacingOccurrences(of: "-", with: "+")
                                               .replacingOccurrences(of: "_", with: "/"))
var codeData = knownKey.data(using: String.Encoding.ascii)
codeData?.append(decodedData!) // decodedData is optional and must be unwrapped
let image = generatePDF417Barcode(from: codeData!)
let imageView = UIImageView(image: image!)
// The function to generate a PDF417 UIImage from the assembled Data
func generatePDF417Barcode(from codeData: Data) -> UIImage? {
    if let filter = CIFilter(name: "CIPDF417BarcodeGenerator") {
        filter.setValue(codeData, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
But I always get wrongly generated barcodes; the difference is visible by eye.
Please help me correct the code to get the same result as the first barcode image.
I also have the another example of barcode:
The first part of the key is the same, but its second part is given as an int8 byte array, and I also have no idea how to generate the PDF417 barcode from it (with the key prepended) correctly.
Here's how I try:
let knownKey = "5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e"
let secretArray: [Int8] = [22, 124, 24, 12, 0, 0, 0, 0, 0, 0, 0, 0, 100, 127, 67, 96, -52, 101, 37, 0, -85, -123, 1, -64, 111, -28, 66, -27, 123, -25, 100, 106, 57, 118, -4, 16, 90, 91, 88, 109, -105, 126, 34, 72, 80, 51, -116, 28, 76, 72, -37, -24, -93, 79, -115, 34, 83, 18, -61, 44, -12, -13, -8, -59, -107, -9, -128, 107, 49, -50, 126, 13, -59, 50, -24, -43, 127, 81, -85, 102, 113, 88, 52, -60, 109, 98, 99, 95]
let secretUInt8 = secretArray.map { UInt8(bitPattern: $0) }
let secretData = Data(secretUInt8)
let keyArray: [UInt8] = Array(knownKey.utf8)
var keyData = Data(keyArray)
keyData.append(secretData)
let image = generatePDF417Barcode(from: keyData) // keyData is non-optional here
let imageView = UIImageView(image: image!)
There are a lot of things going on here. Gereon is correct that there are a lot of parameters. Choosing different parameters can lead to very different bar codes that decode identically. Your current barcode is "correct" (though a bit messy due to an Apple bug). It's just different.
I'll start with the short answer of how to make your data match the barcode you have. Then I'll walk through what you should probably actually do, and finally I'll get to the details of why.
First, here's the code you're looking for (but probably not the code you want, unless you have to match this barcode):
filter.setValue(codeData, forKey: "inputMessage")
filter.setValue(3, forKey: "inputCompactionMode") // This is good (and the big difference)
filter.setValue(5, forKey: "inputDataColumns") // This is fine, but probably unneeded
filter.setValue(0, forKey: "inputCorrectionLevel") // This is bad
PDF 417 defines several "compaction modes" to let it pack a truly impressive amount of information into a very small space while still offering excellent error detection and correction, and handling a lot of real-world scanning concerns. The default compaction mode only supports Latin text and basic punctuation. (It compacts even more if you only use uppercase Latin letters and space.) The first part of your string can be stored with text compaction, but the rest can't, so it has to switch to byte compaction.
Core Image actually does this switch shockingly badly by default (I opened FB9032718 to track). Rather than encoding in text and then switching to bytes, or just doing it all in bytes, it switches to bytes over and over again unnecessarily.
There's no way for you to configure multiple compaction methods, but you can just set it to byte, which is what value 3 is. And that's also how your source is doing it.
The second difference is the number of data columns, which drive how wide the output is. Your source is using 5, but Core Image is choosing 6 based on its default rules (which aren't fully documented).
Finally, your source has set the error correction level to 0, which is not recommended. For a message of this size, the minimum recommended error correction level is 3, which is what Core Image chooses by default.
If you just want a good barcode, and don't have to match this input, my recommendation would be to set inputCompactionMode to 3, and leave the rest as defaults. If you want a different aspect ratio, I'd use inputPreferredAspectRatio rather than modifying the number of data columns directly.
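For reference, here is a minimal Swift sketch of that recommended configuration. It mirrors the generator function from the question; generateByteCompactedPDF417 is just an illustrative name, not an existing API, and inputCompactionMode/inputPreferredAspectRatio are documented CIPDF417BarcodeGenerator keys:
import UIKit

func generateByteCompactedPDF417(from codeData: Data) -> UIImage? {
    guard let filter = CIFilter(name: "CIPDF417BarcodeGenerator") else { return nil }
    filter.setValue(codeData, forKey: "inputMessage")
    filter.setValue(3, forKey: "inputCompactionMode") // byte compaction throughout
    // Optional: shape the symbol without setting inputDataColumns directly:
    // filter.setValue(3.0, forKey: "inputPreferredAspectRatio")
    guard let output = filter.outputImage?.transformed(by: CGAffineTransform(scaleX: 3, y: 3)) else {
        return nil
    }
    return UIImage(ciImage: output)
}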
You may want to stop reading now. This was a very enjoyable puzzle to spend the morning on, so I'm going to dump a lot of details here.
If you want a deep dive into how this format works, I don't know anything currently available other than the ISO 15438 Spec, which will cost you around US$200. But there used to be some pages at GeoCities that explained a lot of this, and they're still available through the Wayback Machine.
There also aren't a lot of tools for decoding this stuff on the command line, but pdf417decode does a reasonable job. I'll use output from it to explain how I knew all the values.
The last tool you need is a way to turn jpeg output into black-and-white pbm files so that pdf417decode can read them. For that, I use the following (after installing netpbm):
cat /tmp/barcode.jpeg | jpegtopnm | ppmtopgm | pamthreshold | pamtopnm > new.pbm && ./pdf417decode -c -e new.pbm
With that, let's decode the first three rows of your existing barcode (with my commentary to the side). Everywhere you see "function output," that means this value is the output of some function that takes the other thing as the input:
0 7f54 0x02030000 (0) // Left marker
0 6a38 0x00000007 (7) // Number of rows function output
0 218c 0x00000076 (118) // Total number of non-error correcting codewords
0 0211 0x00000385 (901) // Latch to Byte Compaction mode
0 68cf 0x00000059 (89) // Data
0 18ec 0x0000021c (540)
0 02e7 0x00000330 (816)
0 753c 0x00000004 (4) // Number of columns function output
0 7e8a 0x00030001 (1) // Right marker
1 7f54 0x02030000 (0) // Left marker
1 7520 0x00010002 (2) // Security Level function output
1 704a 0x00010334 (820) // Data
1 31f2 0x000101a7 (423)
1 507b 0x000100c9 (201)
1 5e5f 0x00010319 (793)
1 6cf3 0x00010176 (374)
1 7d47 0x00010007 (7) // Number of rows function output
1 7e8a 0x00030001 (1) // Right marker
2 7f54 0x02030000 (0) // Left marker
2 6a7e 0x00020004 (4) // Number of columns function output
2 0fb2 0x0002037a (890) // Data
2 6dfa 0x000200d9 (217)
2 5b3e 0x000200bc (188)
2 3bbc 0x00020180 (384)
2 5e0b 0x00020268 (616)
2 29e0 0x00020002 (2) // Security Level function output
2 7e8a 0x00030001 (1) // Right marker
The next 3 rows continue this pattern of function outputs. Note that the same information is encoded on the left and right, but in a different order. The system has a lot of redundancy and can detect that it's seeing a mirror image of the barcode.
We don't care about the number of rows for this purpose, but given a current row of n and a total number of rows of N, the function is:
30 * (n/3) + ((N-1)/3)
Where / always means "integer, truncating division." Given there are 24 rows, on row 0, this is 0 + (24-1)/3 = 7.
The security level function's output is 2. Given a security level of e, the function is:
30 * (n/3) + 3*e + (N-1) % 3
=> 0 + 3*e + (23%3) = 2
=> 3*e + 2 = 2
=> 3*e = 0
=> e = 0
Finally, the number of columns can just be counted off in the output. For completeness, given a number of columns c, the function is:
30 * (n/3) + (c - 1)
=> 0 + c - 1 = 4
=> c = 5
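If you want to check those three formulas yourself, here is a quick Swift sanity check using the values derived above (N = 24 rows, c = 5 columns, e = 0; Swift's / and % are the spec's truncating division and remainder):
let N = 24, c = 5, e = 0
func rowIndicators(forRow n: Int) -> (rows: Int, security: Int, columns: Int) {
    return (30 * (n / 3) + (N - 1) / 3,
            30 * (n / 3) + 3 * e + (N - 1) % 3,
            30 * (n / 3) + (c - 1))
}
print(rowIndicators(forRow: 0)) // (rows: 7, security: 2, columns: 4), matching rows 0-2 above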
If you look at the Data lines, you'll notice that they don't match your input data at all. That's because they have a complex encoding that I won't detail here. But for Byte compaction, you can think of it as similar to Base64 encoding, but instead of 64, it's Base900. Where Base64 encodes 3 bytes of data into 4 characters, Base900 encodes 6 bytes of data into 5 codewords.
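As a sketch of what that conversion looks like for one full 6-byte group (this is my reading of the ISO 15438 byte compaction rule; a trailing partial group is handled differently):
func byteCompactionCodewords(for group: [UInt8]) -> [Int] {
    precondition(group.count == 6) // full groups only in this sketch
    // Treat the 6 bytes as one big-endian 48-bit integer...
    var n = group.reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
    // ...and write it out as 5 base-900 digits (900^5 > 2^48, so 5 always suffice)
    var codewords: [Int] = []
    for _ in 0..<5 {
        codewords.append(Int(n % 900))
        n /= 900
    }
    return codewords.reversed()
}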
In the end, all these codewords get converted to symbols (actual lines and spaces). Which symbol is used depends on the line. Lines divisible by 3 use one symbol set, the lines after use a second, and the lines after that use a third. So the same codewords will look completely different on line 7 than on line 8.
Taken together, all these things make it very difficult to look at a barcode and decide how "different" it is from another barcode in terms of content. You just have to decode them and see what's going on.
CIPDF417BarcodeGenerator has a few more input parameters besides inputMessage that can have an influence on how the generated barcode looks - see the documentation. Visual inspection/comparison of two codes only makes sense when you know that all these parameters, most importantly inputCorrectionLevel were equal for both generators.
So, instead of a visual comparison, simply try decoding the barcodes using one of the many scanner apps out there, and compare the decoded bytes.
For your second example, try this:
// ...
var keyData = knownKey.data(using: .isoLatin1)!
keyData.append(secretData)
let image = generatePDF417Barcode(from: keyData)

Tracking the point of intersection of a simulated time series with a specific value over many runs in OpenBUGS

I have an OpenBUGS model that uses observed data (y.values) over time (x.values) to simulate many runs (~100000) with new estimates of y-values (y.est) for each run. The observed data exhibit a pronounced decline from a maximum value.
I want to keep track of the length of time it takes for each run to decline from the maximum abundance (T.max) to 10% of the maximum abundance (T.10%). Because the maximum abundance value changes from run to run, 10% of that maximum will also vary from run to run, and thus T.10% will vary from run to run.
Setting a parameter to store T.max is easy enough; it doesn't vary from run to run, because the maximum value is sufficiently greater than any other value.
What I can't figure out is how to store the intersection of the y.est values and T.10%.
My first attempt was to determine whether each y.est value is above or below T.10% using the step() function:
above.below[i] <- step(T.10% - y.est[i])
This generates a string of ones and zeros across the y.est values (e.g., 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, etc.). If each run simply declined continuously from a maximum to a minimum, I could use the rank() function to determine how many above.below[i] values occur above T.10%:
decline.length <- rank(above.below[1:N], 0)
In this example, decline.length would equal the number of '0's in the string above, which is 9. Unfortunately, the y.est values occasionally display periods of growth after declining below T.10%, so the vector of above.below values can look like this: 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, etc. Thus, decline.length would equal 14 rather than 9, given the subsequent 0s in the vector.
What I want to do is figure out how to store only the number of '0's in above.below prior to the first '1': above.below[1:10] rather than above.below[1:N]. Unfortunately, the first '1' doesn't always occur at the 10th time step, so I need the upper bound of the range of above.below to vary from run to run during the simulation.
I'm struggling to accomplish this in OpenBUGS since it's a non-procedural language, but I think it can be done; I just don't know the trick. I'm hoping someone more familiar with the step() and rank() functions can lend some expert advice.
Any guidance is much appreciated!
Two solutions were offered to me:
1) Calculate the cumulative sum up to each time step:
for (i in 1:N){
    above.below[i] <- step(T.10% - y.est[i])
    cum.above.below[i] <- sum(above.below[1:i])
}
decline.length <- rank(cum.above.below[1:N], 0)
2) Calculate whether each year is above or below the threshold directly, without 1s and 0s:
for (i in 1:N){
    above.below[i] <- step(T.10% - y.est[i])
    dummy[i] <- above.below[i] * i + (1 - above.below[i]) * (N+1)
}
decline.length <- ranked(dummy[], 1)
So dummy is i when above.below is 1, and dummy is N+1 when above.below is 0.
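To see why the cumulative-sum trick in solution 1 works, here is the same logic in plain Python (just an illustration outside OpenBUGS, using the example vector from the question):
above_below = [0]*9 + [1]*13 + [0, 0, 0, 1, 0, 0, 1, 1, 1, 1]
cum = [sum(above_below[:i+1]) for i in range(len(above_below))]
# The cumulative sum never returns to 0 once a 1 has occurred, so counting
# the zeros in cum counts only the time steps before the first crossing:
print(cum.count(0))  # 9, not 14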

Average saved output from multiple runs of function

I have a function that has 11 input parameters.
MyFunction(40, 40, 1, 1, 1, 5, 0, 1, 0, 1500, 'MyFile');
The input parameter 'MyFile', when passed through MyFunction, saves a text file using the save command that is 6 columns by the 10th input parameter of rows (e.g. 1500). I usually then load these files back into MATLAB when I am ready to analyze different runs.
I'd like to run MyFunction m times and ultimately have the 'MyFile' be a measure of central tendency (e.g. mean or median) of those m runs.
m = 10;
for i = 1:m
    MyFunction(40, 40, 1, 1, 1, 5, 0, 1, 0, 1500, 'MyFile');
end
I could use the for loop to generate a new 'MyFile' name for each iteration (e.g. MyFile1, MyFile2, ..., MyFileM) with something like MyFile = sprintf('MyFile%d', i); and then load all of the MyFiles back into MATLAB, take their average, and save it as an UltimateMyFile, but this seems cumbersome. Is there a better method to average these output files more directly? Should I store the files as an object, use dlmwrite, or -append?
Thanks.
Since you are trying to find the median, you need access to all the data.
You can define a 3-dimensional array, say
data = zeros(1500,6,m);
and then update it at each step of the for loop:
data(:,:,i) = MyFunction(40, 40, 1, 1, 1, 5, 0, 1, 0, 1500);
Of course, you will need to redefine your function to return the right output.
However, if you need to access the data at some other time, then you are better off writing it to a file and reading it from there.
In case you are only interested in the average, you can keep a running total as each case is analyzed and then just divide it by the number of cases (m), as sketched below.
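Here is a minimal MATLAB sketch of that running-total idea; it assumes MyFunction has been modified to return its 1500-by-6 result instead of only saving it:
m = 10;
runningTotal = zeros(1500, 6);
for i = 1:m
    % Accumulate each run's 1500-by-6 output
    runningTotal = runningTotal + MyFunction(40, 40, 1, 1, 1, 5, 0, 1, 0, 1500);
end
avgOutput = runningTotal / m;
save('UltimateMyFile.txt', 'avgOutput', '-ascii');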

perfect hash function

I'm attempting to hash the values
10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0
I need a function that will map them to an array that has a size of 13 without causing any collisions.
I've spent several hours thinking this over and googling and can't figure this out. I haven't come close to a viable solution.
How would I go about finding a hash function of this sort? I've played with gperf, but I don't really understand it and I couldn't get the results I was looking for.
If you know the exact keys, then it is trivial to produce a perfect hash function:
int hash(int n) {
    switch (n) {
        case 10: return 0;
        case 100: return 1;
        case 32: return 2;
        // ...
        default: return -1;
    }
}
Found One
I tried a few things and found one semi-manually:
(n ^ 28) % 13
The semi-manual part was the following Ruby script, which I used to test candidate functions over a range of parameters:
t = [10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0]
(1..200).each do |i|
  t2 = t.map { |e| (e ^ i) % 13 }
  puts i if t2.uniq.length == t.length
end
On some platforms (e.g. embedded), the modulo operation is expensive, so % 13 is best avoided. But an AND of the low-order bits is cheap, and is equivalent to modulo by a power of 2.
I tried writing a simple program (in Python) to search for a perfect hash of your 11 data points, using simple forms such as ((x << a) ^ (x >> b)) & 0xF (where & 0xF is equivalent to % 16, giving a result in the range 0..15, for example). I was able to find the following collision-free hash, which gives an index in the range 0..15 (expressed as a C macro):
#define HASH(x) ((((x) << 2) ^ ((x) >> 2)) & 0xF)
Here is the Python program I used:
data = [10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0]

def shift_right(value, shift_value):
    """Shift right that allows for negative values, which shift left
    (Python's shift operator doesn't allow negative shift counts)."""
    if shift_value is None:
        return 0
    if shift_value < 0:
        return value << (-shift_value)
    else:
        return value >> shift_value

def find_hash():
    def hashf(val, i, j=None, k=None):
        return (shift_right(val, i) ^ shift_right(val, j) ^ shift_right(val, k)) & 0xF

    for i in range(-7, 8):
        for j in range(i, 8):
            # for k in range(j, 8):  # enable to try three-term hashes
            k = None
            outputs = set()
            for val in data:
                hash_val = hashf(val, i, j, k)
                if hash_val >= 13:
                    pass  # change to "break" to require indices below 13
                if hash_val in outputs:
                    break
                outputs.add(hash_val)
            else:
                print(i, j, k, outputs)

if __name__ == '__main__':
    find_hash()
Bob Jenkins has a program for this too: http://burtleburtle.net/bob/hash/perfect.html
Unless you're very lucky, there's no "nice" perfect hash function for a given dataset. Perfect hashing algorithms usually use a simple hashing function on the keys (using enough bits so it's collision-free) then use a table to finish it off.
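To picture the "simple hash, then a table" idea with the numbers from this question: the shift-based hash found above is collision-free over 0..15, and a small lookup table finishes the mapping (a Python sketch of the idea, not a general-purpose scheme):
keys = [10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0]
h = lambda x: ((x << 2) ^ (x >> 2)) & 0xF   # the HASH macro from the earlier answer
table = {h(k): i for i, k in enumerate(keys)}
assert len(table) == len(keys)              # collision-free over these keys
print([table[h(k)] for k in keys])          # [0, 1, 2, ..., 10]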
Just some quasi-analytical ramblings:
In your set of numbers, eleven in all, three are odd and eight are even.
Looking at the simplest form of hashing, % 13, gives the following hash values:
10 - 10,
100 - 9,
32 - 6,
45 - 6,
58 - 6,
126 - 9,
3 - 3,
29 - 3,
200 - 5,
400 - 10,
0 - 0
Which, of course, is unusable due to the number of collisions. Something more elaborate is needed.
Why state the obvious?
Considering that the numbers are so few, any elaborate - or rather, "less simple" - algorithm will likely be slower than either the switch statement or (which I prefer) simply searching through an unsigned short/long vector of eleven positions and using the index of the match.
Why use a vector search?
You can fine-tune it by placing the most often occurring values towards the beginning of the vector.
I assume the purpose is to plug in the hash index into a switch with nice, sequential numbering. In that light it seems wasteful to first use a switch to find the index and then plug it into another switch. Maybe you should consider not using hashing at all and go directly to the final switch?
The switch version of hashing cannot be fine-tuned and, due to the widely differing values, will cause the compiler to generate a binary search tree. That results in many comparisons and conditional/other jumps (especially costly), which take time (I've assumed you've turned to hashing for its speed) and require space.
If you want to speed up the vector search additionally and are using an x86-system you can implement a vector search based on the assembler instructions repne scasw (short)/repne scasd (long) which will be much faster. After a setup time of a few instructions you will find the first entry in one instruction and the last in eleven followed by a few instructions cleanup. This means 5-10 instructions best case and 15-20 worst. This should beat the switch-based hashing in all but maybe one or two cases.
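For what it's worth, the plain vector search described above is also tiny in portable C (order the keys by expected frequency if you want to fine-tune it):
#include <stdio.h>

static const unsigned short keys[11] = {10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0};

int lookup(unsigned short n) {
    for (int i = 0; i < 11; i++)
        if (keys[i] == n) return i;   /* the index doubles as the "hash" */
    return -1;                        /* not one of the known keys */
}

int main(void) {
    printf("%d\n", lookup(58)); /* prints 4 */
    return 0;
}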
I did a quick check: using the SHA256 hash function and then taking the result modulo 13 worked when I tried it in Mathematica. For C++ this function is available in the OpenSSL library. See this post.
If you were doing a lot of hashing and lookups, though, modular division is a pretty expensive operation to perform repeatedly. There is another way of mapping an n-bit hash value to i-bit indices; see this post by Michael Mitzenmacher about how to do it with a bit-shift operation in C. Hope that helps.
Try the following, which maps your n values to unique indices between 0 and 12:
(1369%(n+1))%13
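A quick Python check confirms the mapping is collision-free for these keys:
keys = [10, 100, 32, 45, 58, 126, 3, 29, 200, 400, 0]
idx = [(1369 % (n + 1)) % 13 for n in keys]
assert len(set(idx)) == len(keys)   # 11 distinct indices
print(sorted(idx))                  # [0, 1, 3, 4, 5, 6, 7, 8, 9, 10, 12]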