AutoHotkey: Most numpad keys don't work

Here is my script. When I try it, only the NumpadClear (Num5) key works. I use Windows 7 64-bit.
#SingleInstance force
#UseHook
#InstallKeybdHook
*~NumpadIns::MouseMove, 0, 1, 0, R
*~NumpadClear::MouseMove, 0, -1, 0, R
*~NumpadEnd::MouseMove, -1, 0, 0, R
*~NumpadPgDn::MouseMove, 1, 0, 0, R
*~NumpadDown::Click
*~NumpadEnter::Click Right

This AutoHotkey script should achieve what you're looking for.
It will work whether NumLock is turned off or on, because it binds both the NumLock-off key names (NumpadIns, NumpadClear, ...) and the NumLock-on names (Numpad0, Numpad5, ...).
#SingleInstance force
#UseHook
#InstallKeybdHook
NumpadIns::MouseMove, 0, 1, 0, R
NumpadClear::MouseMove, 0, -1, 0, R
NumpadEnd::MouseMove, -1, 0, 0, R
NumpadPgDn::MouseMove, 1, 0, 0, R
NumpadDown::Click
NumpadEnter::Click Right
Numpad0::MouseMove, 0, 1, 0, R
Numpad5::MouseMove, 0, -1, 0, R
Numpad1::MouseMove, -1, 0, 0, R
Numpad3::MouseMove, 1, 0, 0, R
Numpad2::Click
;NumpadEnter::Click Right

Related

HLSL Array Indexing Returning Unexpected Values

I am executing some HLSL code on the GPU in Unity, but I am having issues with getting values out of an array. Here is my simplified code example.
C#
ComputeBuffer meshVerticesBuffer = new ComputeBuffer(
    15 * 1,
    sizeof(float) * 3
);
marchingCubesShader.SetBuffer(0, "MeshVertices", meshVerticesBuffer);
marchingCubesShader.Dispatch(0, 1, 1, 1);
Vector3[] meshVertices = new Vector3[15 * 1];
meshVerticesBuffer.GetData(meshVertices);
meshVerticesBuffer.Release();
HLSL
#pragma kernel ApplyMarchingCubes
int EDGE_TABLE[][15] = {
    {-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
    ...255 more irrelevant entries
};
RWStructuredBuffer<float3> MeshVertices;
[numthreads(4, 4, 4)]
void ApplyMarchingCubes(uint3 id : SV_DispatchThreadID)
{
    MeshVertices[0] = float3(0, 0, EDGE_TABLE[0][0]);
}
I am watching meshVertices on the C# side through the debugger, and the first item is always a Vector3(0, 0, 0). I am expecting a result of Vector3(0, 0, -1). What am I doing wrong?
I figured out why the array was not putting out the right values.
In HLSL, when declaring and initializing a global array in the same statement, you must include the static keyword; without it, the initializer is ignored and the array reads back as zeroes.
My HLSL code should have been:
#pragma kernel ApplyMarchingCubes
static int EDGE_TABLE[][15] = {
    {-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1},
    ...255 more irrelevant entries
};
RWStructuredBuffer<float3> MeshVertices;
[numthreads(4, 4, 4)]
void ApplyMarchingCubes(uint3 id : SV_DispatchThreadID)
{
    MeshVertices[0] = float3(0, 0, EDGE_TABLE[0][0]);
}
(Note the static keyword on the EDGE_TABLE declaration; it was not there before.)

AutoHotkey: Ctrl + comma opens tab in new window within browser

Below is the AutoHotkey script, which works perfectly. However, when I click using Ctrl + comma, a new tab gets opened in the browser. How do I avoid this and get the default behaviour?
here is the script
;The Offset variable controls pointer speed
;Ctrl + Arrow keys = move mouse
;Ctrl + comma = left click
;Ctrl + period = right click
#SingleInstance force
Offset = 20
^Up::MouseMove, 0, (Offset * -1), 0, R
^Down::MouseMove, 0, Offset, 0, R
^Left::MouseMove, (Offset * -1), 0, 0, R
^Right::MouseMove, Offset, 0, 0, R
^.::click right
;This allows to press and hold the left mouse button instead of just clicking it once. Needed for drag and drop operations.
;snippet by x79animal at https://autohotkey.com/board/topic/59665-key-press-and-hold-emulates-mouse-click-and-hold-win7/
^,::
    If (A_PriorHotKey = A_ThisHotKey)
        return
    click down
return
^, up::click up
On which program window?
^Up::MouseMove, 0, -Offset, 0, R
^Down::MouseMove, 0, Offset, 0, R
^Left::MouseMove, -Offset, 0, 0, R
^Right::MouseMove, Offset, 0, 0, R
is a simpler way to do it, using expression syntax (-Offset) instead of (Offset * -1).

Swift: BLE 16 bytes to Int

I'm getting a byte array like this one:
[60, 2, 0, 0, 0]
In the documentation there is written this:
uint16 -> heartBeatNum;
uint8 -> rawDataFilesNum;
uint8 -> alertNum
uint8 -> fallsNum
I will explain a little about the device so that you understand and then I ask my question.
The Bluetooth device sends an object every minute that is called a heartbeat. When the device has just started, the array looks like this:
After the first minute:
[1, 0, 0, 0, 0]
After two minutes:
[2, 0, 0, 0, 0]
After three minutes:
[3, 0, 0, 0, 0]
After four minutes:
[4, 0, 0, 0, 0]
...
Now more than 12 hours have passed and the array is:
[60, 2, 0, 0, 0]
From the documentation I understand that the heartbeat count is the first 16 bits (the first two bytes). I cannot figure out how to combine the 60 and the 2 to get the exact heartbeat number.
How does this work?
According to my calculation, 60 * 12 = 720, so I should have about 700.
Can someone enlighten me on how to combine those 16 bits into an Int?
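For what it's worth, a minimal sketch of the combination (shown here in Python rather than Swift; the assumption is that the two bytes are little-endian, which is the usual Bluetooth convention for multi-byte fields):

```python
data = [60, 2, 0, 0, 0]  # byte array received from the device

# heartBeatNum is a uint16 spanning the first two bytes.
# Little-endian: the first byte is the low byte, the second the high byte.
heart_beat_num = data[0] | (data[1] << 8)  # 60 + 2 * 256

raw_data_files_num = data[2]
alert_num = data[3]
falls_num = data[4]

print(heart_beat_num)  # 572
```

Under this interpretation the counter reads 572 minutes (about 9.5 hours): each time the low byte passes 255 it wraps to 0 and the high byte increments. In Swift the equivalent combination would be UInt16(bytes[0]) | UInt16(bytes[1]) << 8.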

Distribute elements evenly (adding ints values to array of int values)

Say I have an array of Ints and all elements equal zero. It'd look something like this:
let arr: [Int] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
There are 11 elements in this array in total. I want three of the elements in this array to be the number one. I want these one values to be distributed evenly throughout the array so that it looks something like this:
[0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
I want to be able to add however many ones and distribute them evenly (or as close to evenly as possible) no matter how many total elements there are. How could I do this?
Note: For anyone wondering why I need this, I have a collection of strings that when joined together make up a large body of text. Think of the zeroes as the pieces of text and think of the ones as advertisements I am adding in between the text. I wanted to distribute these ads as evenly as possible. I figured this would be a simple way of expressing what I needed.
Maybe you can try this.
var arr: [Int] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
let distribution = arr.count / 3 // 3 is the number of 1s
for (index, value) in arr.enumerated() {
    arr[index] = (index + 1) % distribution == 0 ? 1 : value
}
print(arr) // [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
(This assumes that the value of distribution is greater than 1.)
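For comparison, the same modulo idea sketched in Python (same assumption: distribution > 1):

```python
arr = [0] * 11
ones = 3
distribution = len(arr) // ones  # 11 // 3 = 3

for index in range(len(arr)):
    # Place a 1 at every distribution-th position (1-based index).
    if (index + 1) % distribution == 0:
        arr[index] = 1

print(arr)  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
```

Note that because distribution comes from integer division, the number of ones actually placed is len(arr) // distribution, which can differ from the requested count when the sizes do not divide evenly.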

What should the results of a convolution be?

I'm using convnetjs to build an interactive tutorial (as I learn it myself). I have a simple 9x9 image of an 'X' and a convolutional layer with one of the filters as a 3x3 '\'...
I expected the results to be different. I expected the circled result on the right to be (-1+1+1+1+1+1+1+1+1)/9 = 0.77 instead of 7.1.
What else is happening that results in 7.1? Is this due to biases? I also expected the whole result to show highest numbers along the '\' diagonal, since the filter is that shape which would match the '\' part of the 'X'.
UPDATE: I expected the results to be the following. The biases appear to be an array [0.1, 0.1, 0.1]. What is the calculation that yields the above results (for at least the upper-left pixel), instead of the below?
<html>
<head>
<script src="http://cs.stanford.edu/people/karpathy/convnetjs/build/convnet-min.js"></script>
</head>
<body>
<script>
// Initialize an input that is 9x9 and initialized with zeroes.
let inputVol = new convnetjs.Vol(9, 9, 1, 0.0);
// Manually set the input weights from zeroes to a 'X'...
inputVol.w = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, 1, -1, -1, -1, 1, -1, -1, -1, -1, -1, 1, -1, 1, -1, -1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, -1, 1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, 1, -1, -1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1];
// Define the layers
let layers = [];
layers.push({
    type: 'input',
    out_sx: 9,
    out_sy: 9,
    out_depth: 1
});
layers.push({
    type: 'conv',
    sx: 3,
    pad: 0,
    filters: 3,
    stride: 1,
    activation: 'relu'
});
let net = new convnetjs.Net();
net.makeLayers(layers);
let convLayer = net.layers[1];
let convLayerFilters = convLayer.filters;
// Set filters manually
// looks like a '\'
convLayerFilters[0].w = [1, -1, -1, -1, 1, -1, -1, -1, 1];
// looks like a 'X'
convLayerFilters[1].w = [1, -1, 1, -1, 1, -1, 1, -1, 1];
// looks like a '/'
convLayerFilters[2].w = [-1, -1, 1, -1, 1, -1, 1, -1, -1];
// Run the net
net.forward(inputVol);
// Prints '7.1' instead of '0.77'. Why???
console.log(net.layers[1].out_act.w[0]);
</script>
</body>
</html>
Yes, it is happening because of the bias, which keeps the result within the defined range.
Bias is the value that you can add to each element in a convolution result to add additional influence from neighbouring pixels. Since with certain convolutions it is possible to get negative numbers (which are not representable in a 0–255 format), bias prevents the signal from drifting out of range. You can choose to add a bias of 127 or 128 to allow some negative numbers to be representable (with an implicit +127 or +128 in their value).
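The 7.1 in the question can also be reproduced by hand. A convolution output is the sum of element-wise products of the image patch and the filter, not the average, so there is no division by 9. A sketch in Python, assuming the bias of 0.1 from the update:

```python
# Top-left 3x3 patch of the 9x9 'X' image (from inputVol.w, row-major)
patch = [
    [-1, -1, -1],
    [-1,  1, -1],
    [-1, -1,  1],
]
# The '\' filter (convLayerFilters[0].w)
kernel = [
    [ 1, -1, -1],
    [-1,  1, -1],
    [-1, -1,  1],
]

# Sum of element-wise products: eight terms are +1, one is -1.
total = sum(patch[r][c] * kernel[r][c] for r in range(3) for c in range(3))

print(total)        # 7
print(total + 0.1)  # 7.1, matching convnetjs's output for the upper-left pixel
```

Dividing the sum by 9 would give the ~0.77 the question expected, but convolution layers do not normalize by the filter size.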