Let's say I have this problem and want to solve it using DIMACS and MaxSAT solvers.
There are 10 police patrols and I want the solver to pick the best patrol to send to an intervention. Each patrol is described by 3 variables (status, distance, district),
so there will be 3 groups of clauses.
For example, the first patrol is PP1 = x1, x11, x21; PP2 = x2, x12, x22; PP3 = x3, x13, x23; ...; PP10 = x10, x20, x30.
Group 1 describes the police patrol status (300 is the clause weight):
300 C1 - (x1 v x2 v x3)
50 C2 - (x4 v x5)
10 C3 - (x6 v x7 v x8 v x9 v x10)
C1 means their status is the best and C3 means it's the worst.
Group 2 describes the patrol's distance to the incident or crime that is happening:
300 C4 - (x11 v x12 v x13)
50 C5 - (x14 v x15)
10 C6 - (x16 v x17 v x18 v x19 v x20 )
C4 means they are closest to the incident, and C6 means they are farthest.
Group 3 describes which district they are in:
300 C7 - (x21 v x22 v x23)
50 C8 - (x24 v x25)
10 C9 - (x26 v x27 v x28 v x29 v x30)
C7 is the safest district, and so on.
So this is my WCNF file in DIMACS format. I don't know if it's correct, but I'd appreciate it if you could point out what's wrong with it:
p wcnf 30 9
300 1 2 3 0
50 4 5 0
10 6 7 8 9 10 0
300 20 11 14 0
50 15 16 17 0
10 12 13 18 19 0
300 29 21 27 0
50 22 23 24 25 0
10 26 28 30 0
I tested it in 2 solvers, the RC2 MaxSAT solver and EvalMaxSAT, and the output was like this:
EvalMaxSAT
s OPTIMUM FOUND
o 0
v -1 2 -3 4 -5 6 -7 -8 -9 -10 11 12 -13 -14 15 -16 -17 -18 -19 -20 21 22 -23 -24 -25 26 -27 -28 -29 -30
c Total time: 335 µs
-
rc2
c formula: 30 vars, 0 hard, 9 soft
s OPTIMUM FOUND
o 0
v 1 -2 -3 4 -5 6 -7 -8 -9 -10 11 12 -13 -14 15 -16 -17 -18 -19 -20 21 22 -23 -24 -25 26 -27 -28 -29 -30
But looking at my WCNF file, I think the ideal output should set variables 1, 11, 21 to true, because they appear in the clauses with the highest weight.
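For reference, here is a minimal Python sketch (not part of the original post) of how such a file can be loaded and solved, assuming the rc2 above is PySAT's RC2 and the file is saved as patrols.wcnf (a hypothetical name):

from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# parse the 'p wcnf' header and the weighted clauses shown above
wcnf = WCNF(from_file='patrols.wcnf')

with RC2(wcnf) as rc2:
    model = rc2.compute()     # list of signed literals, like the 'v' line in the solver output
    print('cost:', rc2.cost)  # 0 for this file, matching the 'o 0' reported by both solvers
    print('model:', model)

Printing the cost makes it easy to see that all nine soft clauses can be satisfied at the same time, which is why both solvers report o 0.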
How can one compute a weighted median in KDB?
I can see that there is a function med for a simple median, but I could not find something like wmed similar to wavg.
Thank you very much for your help!
For values v and weights w, med v where w gobbles space for larger values of w.
Instead, sort w into ascending order of v and look for where cumulative sums reach half their sum.
q)show v:10?100
17 23 12 66 36 37 44 28 20 30
q)show w:.001*10?1000
0.418 0.126 0.077 0.829 0.503 0.12 0.71 0.506 0.804 0.012
q)med v where "j"$w*1000
36f
q)w iasc v / sort w into ascending order of v
0.077 0.418 0.804 0.126 0.506 0.012 0.503 0.12 0.71 0.829
q)0.5 1*(sum;sums)#\:w iasc v / half the sum and cumulative sums of w
2.0525
0.077 0.495 1.299 1.425 1.931 1.943 2.446 2.566 3.276 4.105
q).[>]0.5 1*(sum;sums)#\:w iasc v / compared
1111110000b
q)v i sum .[>]0.5 1*(sum;sums)#\:w i:iasc v / weighted median
36
q)\ts:1000 med v where "j"$w*1000
18 132192
q)\ts:1000 v i sum .[>]0.5 1*(sum;sums)#\:w i:iasc v
2 2576
q)wmed:{x i sum .[>]0.5 1*(sum;sums)#\:y i:iasc x}
Some vector techniques worth noticing:
Applying two functions with Each Left, (sum;sums)#\:, and using Apply (.) with an operator on the result, rather than setting a variable, e.g. (0.5*sum yi)>sums yi:y i, or defining an inner lambda {sums[x]<0.5*sum x}y i
Grading one list with iasc to sort another
Multiple mappings through juxtaposition: v i sum ..
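For comparison only (not part of the original answer), the same crossing-point idea sketched in Python/NumPy; the function name wmed_py is made up here:

import numpy as np

def wmed_py(v, w):
    v, w = np.asarray(v), np.asarray(w)
    i = np.argsort(v)    # analogue of iasc v
    wi = w[i]            # analogue of w iasc v
    # count how many cumulative sums still fall below half the total weight
    k = int(np.sum(0.5 * wi.sum() > np.cumsum(wi)))
    return v[i][k]

With the example vectors shown above this returns 36, matching wmed.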
You could effectively weight the median by duplicating (using where):
q)med 10 34 23 123 5 56 where 4 1 1 1 1 1
10f
q)med 10 34 23 123 5 56 where 1 1 1 1 1 4
56f
q)med 10 34 23 123 5 56 where 1 2 1 3 2 1
34f
If your weights are percentages (e.g. 0.15 0.10 0.20 0.30 0.25) then convert them to equivalent whole/counting numbers
q)med 1 2 3 4 5 where "i"$100*0.15 0.10 0.20 0.30 0.25
4f
I'm trying to perform some simple logic on a table, but I'd like to verify that the columns exist prior to doing so, as a validation step. My data uses standard column names, though they are not always present in each data source.
While the following seems to work (just validating AAA at present), I need to expand it to ensure that PRI_AAA (and eventually many other variables) is present as well.
t: $[`AAA in cols `t; temp: update AAA_VAL: AAA*AAA_PRICE from t;()]
Two-part question:
This seems quite tedious for each variable (imagine AAA-ZZZ inputs and their derivatives). Is there a clever way to leverage a dictionary (or table) to see if a number of variables exist, or to insert a placeholder column of zeros if they do not?
Similarly, can we store a formula or instructions to apply within a dictionary (or table) to validate and return a calculation (e.g. BBB_VAL: BBB*BBB_PRICE)? Some calculations would depend on others (e.g. BBB_Tax_Basis = BBB_VAL - BBB_COSTS), so there could be iterative issues.
Thanks in advance!
A functional update may be the best way to achieve this if your intention is to update many columns of a table in a similar fashion.
func:{[t;x]
  if[not x in cols t;t:![t;();0b;(enlist x)!enlist 0]];
  :$[x in cols t;
    ![t;();0b;(enlist`$string[x],"_VAL")!enlist(*;x;`$string[x],"_PRICE")];
    t;
    ];
  };
This function will update t with *_VAL columns for any column you pass as an argument, while first also adding a zero column for any missing columns passed as an argument.
q)t:([]AAA:10?100;BBB:10?100;CCC:10?100;AAA_PRICE:10*10?10;BBB_PRICE:10*10?10;CCC_PRICE:10*10?10;DDD_PRICE:10*10?10)
q)func/[t;`AAA`BBB`CCC`DDD]
AAA BBB CCC AAA_PRICE BBB_PRICE CCC_PRICE DDD_PRICE AAA_VAL BBB_VAL CCC_VAL DDD DDD_VAL
---------------------------------------------------------------------------------------
70 28 89 10 90 0 0 700 2520 0 0 0
39 17 97 50 90 40 10 1950 1530 3880 0 0
76 11 11 0 0 50 10 0 0 550 0 0
26 55 99 20 60 80 90 520 3300 7920 0 0
91 51 3 30 20 0 60 2730 1020 0 0 0
83 81 7 70 60 40 90 5810 4860 280 0 0
76 68 98 40 80 90 70 3040 5440 8820 0 0
88 96 30 70 0 80 80 6160 0 2400 0 0
4 61 2 70 90 0 40 280 5490 0 0 0
56 70 15 0 50 30 30 0 3500 450 0 0
As you've already mentioned, to cover point 2, a dictionary of functions might be the best way to go.
q)dict:raze{(enlist`$string[x],"_VAL")!enlist(*;x;`$string[x],"_PRICE")}each`AAA`BBB`DDD
q)dict
AAA_VAL| * `AAA `AAA_PRICE
BBB_VAL| * `BBB `BBB_PRICE
DDD_VAL| * `DDD `DDD_PRICE
And then a slightly modified function...
func:{[dict;t;x]
  if[not x in cols t;t:![t;();0b;(enlist x)!enlist 0]];
  :$[x in cols t;
    ![t;();0b;(enlist`$string[x],"_VAL")!enlist(dict`$string[x],"_VAL")];
    t;
    ];
  };
yields a similar result.
q)func[dict]/[t;`AAA`BBB`DDD]
AAA BBB CCC AAA_PRICE BBB_PRICE CCC_PRICE DDD_PRICE AAA_VAL BBB_VAL DDD DDD_VAL
-------------------------------------------------------------------------------
70 28 89 10 90 0 0 700 2520 0 0
39 17 97 50 90 40 10 1950 1530 0 0
76 11 11 0 0 50 10 0 0 0 0
26 55 99 20 60 80 90 520 3300 0 0
91 51 3 30 20 0 60 2730 1020 0 0
83 81 7 70 60 40 90 5810 4860 0 0
76 68 98 40 80 90 70 3040 5440 0 0
88 96 30 70 0 80 80 6160 0 0 0
4 61 2 70 90 0 40 280 5490 0 0
56 70 15 0 50 30 30 0 3500 0 0
Here's another approach which handles dependent/cascading calculations and also figures out which calculations are possible or not depending on the available columns in the table.
q)show map:`AAA_VAL`BBB_VAL`AAA_RevenueP`AAA_RevenueM`BBB_Other!((*;`AAA;`AAA_PRICE);(*;`BBB;`BBB_PRICE);(+;`AAA_Revenue;`AAA_VAL);(%;`AAA_RevenueP;1e6);(reciprocal;`BBB_VAL));
AAA_VAL | (*;`AAA;`AAA_PRICE)
BBB_VAL | (*;`BBB;`BBB_PRICE)
AAA_RevenueP| (+;`AAA_Revenue;`AAA_VAL)
AAA_RevenueM| (%;`AAA_RevenueP;1000000f)
BBB_Other | (%:;`BBB_VAL)
func:{c:{$[0h=type y;.z.s[x]each y;-11h<>type y;y;y in key x;.z.s[x]each x y;y]}[y]''[y]; / recursively expand any symbol that is itself a key of the map into its definition
 ![x;();0b;where[{all in[;cols x]r where -11h=type each r:(raze/)y}[x]each c]#c]};        / apply only those calculations whose remaining symbols are all columns of the table
q)t:([] AAA:1 2 3;AAA_PRICE:1 2 3f;AAA_Revenue:10 20 30;BBB:4 5 6);
q)func[t;map]
AAA AAA_PRICE AAA_Revenue BBB AAA_VAL AAA_RevenueP AAA_RevenueM
---------------------------------------------------------------
1 1 10 4 1 11 1.1e-05
2 2 20 5 4 24 2.4e-05
3 3 30 6 9 39 3.9e-05
/if the right columns are there
q)t:([] AAA:1 2 3;AAA_PRICE:1 2 3f;AAA_Revenue:10 20 30;BBB:4 5 6;BBB_PRICE:4 5 6f);
q)func[t;map]
AAA AAA_PRICE AAA_Revenue BBB BBB_PRICE AAA_VAL BBB_VAL AAA_RevenueP AAA_RevenueM BBB_Other
--------------------------------------------------------------------------------------------
1 1 10 4 4 1 16 11 1.1e-05 0.0625
2 2 20 5 5 4 25 24 2.4e-05 0.04
3 3 30 6 6 9 36 39 3.9e-05 0.02777778
The only caveat is that your map can't have the same column name both as a key and in a value of your map, i.e. you cannot re-use column names. It's also assumed that all symbols in your map are column names (not global variables), though it could be extended to cover that.
EDIT: if you have a large number of column maps then it will be easier to define it in a more vertical fashion like so:
map:(!). flip(
(`AAA_VAL; (*;`AAA;`AAA_PRICE));
(`BBB_VAL; (*;`BBB;`BBB_PRICE));
(`AAA_RevenueP;(+;`AAA_Revenue;`AAA_VAL));
(`AAA_RevenueM;(%;`AAA_RevenueP;1e6));
(`BBB_Other; (reciprocal;`BBB_VAL))
);
AWK experts, I have a file as described below and I wonder if it is possible to easily convert it to the form that I want.
The file contains multiple variables over one month (ONLY one observation per day, but some days may be missing). The format for each day is the same except for the date/values. However, there are some description lines (containing words and numbers) at the end of each day, and the number of description lines varies between days.
KBO BTA Observations at 12Z 01 Feb 2020
-----------------------------------------------------------------------------
PRES HGHT TEMP DWPT RELH MIXR DRCT SKNT THTA THTE THTV
hPa m C C % g/kg deg knot K K K
-----------------------------------------------------------------------------
1000.0 92
925.0 765
850.0 1516
754.0 2546 13.0 9.3 78 9.85 150 2 310.2 340.6 312.0
752.0 2569 14.0 9.2 73 9.80 149 2 311.5 342.0 313.4
700.0 3173 -9.20 7.5 89 9.38 120 6 312.6 341.9 314.4
Station information and sounding indices
Station elevation: 2546.0
Lifted index: 1.83
Pres [hPa] of the Lifted Condensation Level: 693.42
1000 hPa to 500 hPa thickness: 5798.00
Precipitable water [mm] for entire sounding: 21.64
8022 KBO BTA Observations at 00Z 02 Feb 2020
-----------------------------------------------------------------------------
PRES HGHT TEMP DWPT RELH MIXR DRCT SKNT THTA THTE THTV
hPa m C C % g/kg deg knot K K K
-----------------------------------------------------------------------------
1000.0 97
925.0 758
850.0 1515
753.0 2546 10.8 6.8 76 8.30 190 3 307.9 333.4 309.5
750.0 2580 12.6 7.9 73 8.99 186 3 310.2 338.1 311.9
Here is what I want: remove all the description lines, read the date/time information, and put it as the first column.
Time PRES HGHT TEMP DWPT RELH MIXR DRCT SKNT THTA THTE THTV
20200201t12Z 754.0 2546 13.0 9.3 78 9.85 150 2 310.2 340.6 312.0
20200201t12Z 752.0 2569 14.0 9.2 73 9.80 149 2 311.5 342.0 313.4
20200201t12Z 700.0 3173 -9.2 7.5 89 9.38 120 6 312.6 341.9 314.4
20200202t00Z 753.0 2546 10.8 6.8 76 8.30 190 3 307.9 333.4 309.5
20200202t00Z 750.0 2580 12.6 7.9 73 8.99 186 3 310.2 338.1 311.9
Any help is appreciated.
Kelly
something like this...
$ awk 'function m(x)
{return sprintf("%02d",int(index("JanFebMarAprMayJunJulAugSepOctNovDec",x)-1)/3+1)}
NR==1 {print "time PRES TEMP WDIR WSPD RELH"}
/^-+$/ {f=!f}
f {date=p[n] m(p[n-1]) p[n-2]}
!f {n=split($0,p)}
NF==11 && !/[^ 0-9.-]/ {print date,$0}' file | column -t
time PRES TEMP WDIR WSPD RELH
20200201 1000 10 230 5 90
20200201 900 9 200 6 85
20200201 800 9 100 6 87
20200202 1000 9.2 233 5 90
20200202 900 9.1 200 4 80
20200202 800 9 176 2 80
Explanation
The function m just returns the month number from the month abbreviation by looking up its index in the month-name string and converting it to a zero-padded number; e.g. index("JanFeb...","Feb") is 4, so int(4-1)/3+1 = 2, printed as "02".
f keeps track of the dashed lines so that the date can be parsed from the line just before them.
Finally, to find the data lines, the heuristic is the number of fields (NF==11) and the absence of anything other than digits, spaces, dots, or minus signs.
$ cat tst.awk
/^-+$/ && ( ((++dashCnt) % 2) == 1 ) {
    mthNr = (index("JanFebMarAprMayJunJulAugSepOctNovDec",p[n-1])+2)/3
    time = sprintf("%04d%02d%02d", p[n], mthNr, p[n-2])
}
/^[[:upper:][:space:]]+$/ && !doneHdr++ { print "Time", $0 }
/^[0-9.[:space:]]+$/ { print time, $0 }
{ n = split($0,p) }
$ awk -f tst.awk file | column -t
Time PRES TEMP WDIR WSPD RELH
20200001 1000 10 230 5 90
20200001 900 9 200 6 85
20200001 800 9 100 6 87
20200002 1000 9.2 233 5 90
20200002 900 9.1 200 4 80
20200002 800 9 176 2 80
I have a question about intensity inhomogeneity. I read a paper that defined a way to calculate the intensity inhomogeneity based on an average filter:
Let's look at my problem: I have an image I (in the code below) and an average filter with r=3. I want to calculate the image transformation J based on formula (17). Could you help me implement it in MATLAB code? Thank you so much.
This is my code
%Create image I
I=[3 5 5 2 0 0 6 13 1
0 3 7 5 0 0 2 8 6
4 5 5 4 2 1 3 5 9
17 10 3 1 3 7 9 9 0
7 25 0 0 5 0 10 13 2
111 105 25 19 13 11 11 8 0
103 105 15 26 0 12 2 6 0
234 238 144 140 51 44 7 8 8
231 227 150 146 43 50 8 16 9
];
%% Create filter AF
size=3; % scale parameter in Average kernel
AF=fspecial('average',[size,size]); % Average kernel
%%How to calculate CN and J
CN=mean(I(:));%Correct?
J=???
You're pretty close! The mean intensity is calculated correctly; all you are missing to calculate J is to apply the filter defined with fspecial to your image.
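In other words (inferring formula (17) from the code below, since the equation itself isn't quoted here), the transformation is J = (CN * I) ./ imfilter(I, AF), where CN = mean(I(:)): each pixel is scaled by the ratio of the global mean intensity to the local r-by-r average around that pixel.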
Here is the code:
clc
clear
%Create image I
I=[3 5 5 2 0 0 6 13 1
0 3 7 5 0 0 2 8 6
4 5 5 4 2 1 3 5 9
17 10 3 1 3 7 9 9 0
7 25 0 0 5 0 10 13 2
111 105 25 19 13 11 11 8 0
103 105 15 26 0 12 2 6 0
234 238 144 140 51 44 7 8 8
231 227 150 146 43 50 8 16 9
];
% Create filter AF
size=3; % scale parameter in Average kernel
AF=fspecial('average',[size,size]); % Average kernel
%%How to calculate CN and J
CN=mean(I(:)); % This is correct
J = (CN*I)./imfilter(I,AF); % Apply the filter to the image
figure;
subplot(1,2,1)
image(I)
subplot(1,2,2)
image(J)
The result is a figure showing I and J side by side.
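If it helps, here is a rough Python/NumPy sketch of the same computation (an illustration only; it assumes scipy.ndimage.uniform_filter with zero padding in place of imfilter with the 3x3 average kernel):

import numpy as np
from scipy.ndimage import uniform_filter

# the same 9x9 test image as in the MATLAB code above
I = np.array([[  3,   5,   5,   2,  0,  0,  6, 13, 1],
              [  0,   3,   7,   5,  0,  0,  2,  8, 6],
              [  4,   5,   5,   4,  2,  1,  3,  5, 9],
              [ 17,  10,   3,   1,  3,  7,  9,  9, 0],
              [  7,  25,   0,   0,  5,  0, 10, 13, 2],
              [111, 105,  25,  19, 13, 11, 11,  8, 0],
              [103, 105,  15,  26,  0, 12,  2,  6, 0],
              [234, 238, 144, 140, 51, 44,  7,  8, 8],
              [231, 227, 150, 146, 43, 50,  8, 16, 9]], dtype=float)

CN = I.mean()                                                     # mean intensity
local_avg = uniform_filter(I, size=3, mode='constant', cval=0.0)  # 3x3 moving average, zero-padded like imfilter's default
J = CN * I / local_avg                                            # J = (CN*I) ./ imfilter(I,AF)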
I'm trying to implement the Baker map.
Is there a function that would allow one to divide an 8 x 8 matrix by providing, for example, a sequence of divisors 2, 4, 2 and rearrange the pixels in the order shown in the matrices below?
X = reshape(1:64,8,8);
After applying divisors 2,4,2 to the matrix X one should get a matrix like A shown below.
A=[31 23 15 7 32 24 16 8;
63 55 47 39 64 56 48 40;
11 3 12 4 13 5 14 6;
27 19 28 20 29 21 30 22;
43 35 44 36 45 37 46 38;
59 51 60 52 61 53 62 54;
25 17 9 1 26 18 10 2;
57 49 41 33 58 50 42 34]
The link to the document which I am working on is:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.39.5132&rep=rep1&type=pdf
Edit: a little more generic solution:
%function Z = bakermap(X,divisors)
function Z = bakermap()
X = reshape(1:64,8,8)'
divisors = [ 2 4 2 ];
[x,y] = size(X);
offsets = sum(divisors)-fliplr(cumsum(fliplr(divisors)));
if any(mod(y,divisors)) && ~(sum(divisors) == y)
    disp('invalid divisor vector')
    return
end
blocks = @(div) cell2mat( cellfun(@mtimes, repmat({ones(x/div,div)},div,1),...
                                  num2cell(1:div)',...
                                  'UniformOutput',false) );
%create index matrix
I = [];
for ii = 1:numel(divisors)
    I = [I, blocks(divisors(ii))+offsets(ii)];
end
%create Baker map
Y = flipud(X);
Z = [];
for jj = 1:I(end)
    Z = [Z; Y(I==jj)'];
end
Z = flipud(Z);
end
returns:
index matrix:
I =
1 1 3 3 3 3 7 7
1 1 3 3 3 3 7 7
1 1 4 4 4 4 7 7
1 1 4 4 4 4 7 7
2 2 5 5 5 5 8 8
2 2 5 5 5 5 8 8
2 2 6 6 6 6 8 8
2 2 6 6 6 6 8 8
Baker map:
Z =
31 23 15 7 32 24 16 8
63 55 47 39 64 56 48 40
11 3 12 4 13 5 14 6
27 19 28 20 29 21 30 22
43 35 44 36 45 37 46 38
59 51 60 52 61 53 62 54
25 17 9 1 26 18 10 2
57 49 41 33 58 50 42 34
But have a look at the if-condition: it only covers these cases, and I don't know if that's enough. I also tried something like divisors = [ 1 4 1 2 ] and it worked. As long as the sum of all divisors equals the row length and each divisor divides it evenly, there shouldn't be problems.
Explanation:
% definition of anonymous function with input parameter div: the divisor vector
blocks = @(div) cell2mat( ...        % converts final result into a matrix
    cellfun(@mtimes, ...             % multiplies the next two inputs A,B
        repmat( ...                  % A...
            {ones(x/div,div)},...    % a cell with a matrix of ones the size of
                                     % one subblock, e.g. [1,1,1,1;1,1,1,1]
            div,1),...               % which is replicated div times, according to
                                     % the divisor currently processed by cellfun
        num2cell(1:div)',...         % creates a vector [1,2,...,div], so finally
                                     % every block A gets an increasing factor
        'UniformOutput',false...     % necessary additional property of cellfun
    ));
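For example, with x = 8, blocks(2) is an 8-by-2 matrix whose first four rows are all 1 and last four rows are all 2; with its offset of 0 added, that is exactly the first two columns of the index matrix I shown above.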
Also have a look at this revision for a simpler insight into what is happening. You requested a generic solution; that's the one above, while the linked one used more manual inputs.