Is there a way to only show the first record in a Crystal Report that does not meet a specified condition?

Say I have data formatted as follows in a Crystal Report:
Job: 1
Asm Opr LbrQty
0 10 0.0
0 10 60.0
0 10 60.0
0 20 65.0
0 30 0.0
0 30 20.0
0 30 40.0
Job: 2
Asm Opr LbrQty
0 10 60.0
0 10 60.0
0 10 75.0
0 20 0.0
0 20 165.0
0 30 0.0
0 30 20.0
0 30 40.0
0 40 60.0
1 10 60.0
1 10 60.0
1 10 75.0
1 20 0.0
1 20 165.0
1 30 0.0
1 30 20.0
1 40 0.0
1 40 60.0
I only want the report to show, for each Asm and Opr, the first record where LbrQty is NOT zero, as below:
Job: 1
Asm Opr LbrQty
0 10 60.0
0 20 65.0
0 30 20.0
Job: 2
Asm Opr LbrQty
0 10 60.0
0 20 165.0
0 30 20.0
0 40 60.0
1 10 60.0
1 20 165.0
1 30 20.0
1 40 60.0
I've attempted to use the following as my Suppression Formula, which works for the most part, but still occasionally displays multiple records with the same Opr:
(
Previous ({OprSeq}) = ({OprSeq}) and
Previous ({JobNum}) = ({JobNum}) and
Previous ({LaborQty}) <> 0
) or
(
({LaborQty}) = 0
)
How can I change my formula to give me the behavior I require?

Try the following approach:
Create a running total with the following criteria:
In "Field to summarize" take lbrqty, and choose Count as the summary option.
In "Evaluate" use the formula option and write the code below:
{lbrqty} > 0
In "Reset" use the option "On change of field" with the opr field.
Now use this running total to suppress. In the suppress formula of the section, write the code below:
if {#RTotal1}=1
then false
else true

Related

HALog - Connect and response times percentiles

When I run the following command to parse haproxy logs, the output doesn't contain any headers, and I'm not able to understand the meanings of the numbers in each of the columns.
Command: halog -pct < haproxy.log > percentiles.txt
The output that I see is:
0.1 3493 18 0 0 0
0.2 6986 25 0 0 0
0.3 10479 30 0 0 0
0.4 13972 33 0 0 0
0.5 17465 37 0 0 0
0.6 20958 40 0 0 0
0.7 24451 43 0 0 0
0.8 27944 46 0 0 0
0.9 31438 48 0 0 0
1.0 34931 49 0 0 0
1.1 38424 50 0 0 0
1.2 41917 51 0 0 0
1.3 45410 52 0 0 0
1.4 48903 53 0 0 0
1.5 52396 55 0 0 0
1.6 55889 56 0 0 0
1.7 59383 57 0 0 0
1.8 62876 58 0 0 0
1.9 66369 60 0 0 0
2.0 69862 61 0 0 0
3.0 104793 74 0 0 0
4.0 139724 80 0 1 0
5.0 174656 89 0 1 0
6.0 209587 94 0 1 0
7.0 244518 100 0 1 0
8.0 279449 106 0 1 0
9.0 314380 112 0 1 0
10.0 349312 118 0 1 0
15.0 523968 144 0 1 0
20.0 698624 168 0 1 0
25.0 873280 180 0 2 0
30.0 1047936 190 0 2 0
35.0 1222592 200 0 3 0
40.0 1397248 210 0 3 0
45.0 1571904 220 0 4 0
50.0 1746560 230 0 6 0
55.0 1921216 241 0 7 0
60.0 2095872 258 0 9 0
65.0 2270528 279 0 10 0
70.0 2445184 309 0 16 0
75.0 2619840 354 1 18 0
80.0 2794496 425 1 20 0
85.0 2969152 545 1 22 0
90.0 3143808 761 1 39 1
91.0 3178740 821 1 80 1
92.0 3213671 921 1 217 1
93.0 3248602 1026 1 457 1
94.0 3283533 1190 1 683 1
95.0 3318464 1408 1 889 1
96.0 3353396 1721 1 1107 1
97.0 3388327 2181 1 1328 1
98.0 3423258 2902 1 1555 1
98.1 3426751 3000 1 1580 1
98.2 3430244 3094 1 1607 1
98.3 3433737 3196 1 1635 1
98.4 3437231 3301 1 1666 1
98.5 3440724 3420 1 1697 1
98.6 3444217 3550 1 1731 1
98.7 3447710 3690 1 1770 1
98.8 3451203 3848 1 1815 1
98.9 3454696 4030 1 1864 1
99.0 3458189 4249 1 1923 2
99.1 3461682 4490 1 1993 2
99.2 3465176 4766 2 2089 2
99.3 3468669 5085 2 2195 2
99.4 3472162 5441 3 2317 97
99.5 3475655 5899 5 2440 365
99.6 3479148 6517 11 2567 817
99.7 3482641 7403 14 2719 1555
99.8 3486134 8785 16 2992 2779
99.9 3489627 11650 997 3421 4931
100.0 3493121 85004 4008 20914 71716
The first column looks to be the percentile (like P50, P90, P99, etc.), but what are the values in the 2nd, 3rd, 4th, 5th and 6th columns? Also, are they total values (halog reports total times when provided with other options), or average values, or maximum values?
The columns are:
<percentile> <request count> <Request Time*> <Connect Time**> <Response Time***> <Data Time****>
* Referred to as TR in the documentation.
** Referred to as Tc in the documentation.
*** Referred to as Tr in the documentation.
**** Referred to as Td in the documentation.
Per this mapping, each value is the percentile threshold for that timer (presumably in milliseconds, the unit haproxy logs these timers in), not a total or an average: for example, the line 90.0 3143808 761 1 39 1 says that 90% of the requests (3,143,808 of them) had a request time of at most 761 ms, a connect time of at most 1 ms, a response time of at most 39 ms, and a data time of at most 1 ms.
The source provides some good pointers.

How to calculate the volume of the ship under a surface

I am now working on calculating the volume of a ship. The shape of the ship under the water can be described by the following points (x y z):
0 0 0
-20 12 0
-20 18 0
0 30 0
0 10 -5
0 20 -5
0 30 02
20 0 0
20 10 -5
20 20 -5
20 30 0
40 0 0
40 10 -5
40 20 -5
40 30 0
60 0 0
60 10 -5
60 20 -5
60 30 0
80 0 0
80 10 -5
0 20 -5
80 30 0
00 0 0
100 10 -5
100 20 -5
100 30 0
101 15 0
100 0 0
then
text = load('---.txt')
x = text(:,1) ;
y = text(:,2) ;
z = text(:,3) ;
tri = delaunay(x,y);
tmp=trisurf(tri,x,y,z);
and I get the approximate shape of the ship, but how do I calculate the volume of it under z = 0?
Firstly, your 3D object is not really defined until you provide both vertices (which you have) and faces, which are approximated (badly, for this purpose) by the Delaunay triangulation.
With the correct triangulation you can use the "Volume of a triangulated surface mesh" submission found on the MathWorks File Exchange.
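Alternatively, if you are happy to treat z(x, y) as a single-valued height field over the Delaunay triangulation of the (x, y) points, a rough prism-rule sketch of the underwater volume could look like the code below. The file name hull.txt is only a placeholder for your own data file.
% Rough sketch (assumption: z <= 0 below the waterline, one z value per (x, y)):
% integrate the draft -z over the Delaunay triangulation of the waterplane.
data = load('hull.txt');              % hypothetical file name, use your own
x = data(:,1); y = data(:,2); z = data(:,3);
tri = delaunay(x, y);                 % triangulate in the x-y plane
vol = 0;
for k = 1:size(tri,1)
    idx = tri(k,:);
    % area of the triangle projected onto the plane z = 0
    a = [x(idx(2))-x(idx(1)), y(idx(2))-y(idx(1))];
    b = [x(idx(3))-x(idx(1)), y(idx(3))-y(idx(1))];
    area = 0.5 * abs(a(1)*b(2) - a(2)*b(1));
    % mean draft (depth below the waterline) at the triangle's vertices
    draft = mean(max(-z(idx), 0));
    vol = vol + area * draft;         % volume of the triangular prism
end
disp(vol)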

Netlogo BehaviourSpace weird Output and Multiple Runs

For some weird reason, BehaviorSpace in NetLogo runs the model with the same value pairs multiple times, even though repetitions is set to 1. I can't understand why. The output file in table format looks like this; I don't know what's up with the double quotes.
BehaviorSpace results (NetLogo 5.2.0)
/home/abhishekb/new_models/basic/try4.nlogo
experiment1
10/26/2015 02:34:28:770 +0530
min-pxcor max-pxcor min-pycor max-pycor
-7 7 -7 7
[run number] knt k threshold scale [step]
16 0 75 0.1 1 2535
7 0 54 0.1 1 0 8715
5 0 47 0.3 1 0 9374
10 0 61 0.1 1 0 8841
"22 0 89 0.1 1 0 3664"8" 0 54 0.3 1 0 12001" 0 40 0.1 1 0 10727
2" 013" 0 68 0.1 1 0"22 0 89 0.1 1 0 4449
128" 0 103 0.1 1 0 2805
4 0 47 0.1 1 0 1200119" 0 82 0.1 1 0 12001"1 0 40 0.1 1 0 12001
3"
26" 0 96 0.3 1 0 4800
9" 0 103 0.3 1 0 43321" 0 82 0.6 1 0 7385
31 0 110 0.1 1 0 2976
1" 0 40 0.1 1 0 12001
4" 0 89 0.6 1 0 6389
25 0 96 0.1 1 0 7517
26 0 96 0.3 1 0 5479
9""27" 0 96 0.6 1 0 6117
28 0 103 0.1 1 0 2219
29 0 103 0.3 1 0 6411
30 0 103 0.6 1 0 5693
31 0 110 0.1 1 0 3985
78 500 61 0.6 1 0 9720
79 500 68 0.1 1 0 6067
80 500 68 0.3 1 0 6795
81 500 68 0.6 1 0 8305
82 500 75 0.1 1 0 4416
83 500 75 0.3 1 0 5742
84 500 75 0.6 1 0 7399
85 500 82 0.1 1 0 5306
86 500 82 0.3 1 0 5388
01"
87 500 82 0.6 1 0 6869
88 500 89 0.1 1 0 12001
89 500 89 0.3 1 0 5097
90 500 89 0.6 1 0 6478
91 500 96 0.1 1 0 2275
92 500 96 0.3 1 0 4693" 500 96 0.6 1 0 6395
94"94 500 103 0.1 1 0 12001
95 500 103 0.3 1 0 3984
96 500 103 0.6 1 0 5440
97 500 110 0.1 1 0 1893
98 500 110 0.3 1 0 37299" 500 110 0.6 1 0 5275
100 750 40 0.1 1 0 12001
101" 750 40 0.3 1 0 12001
102 750 40 0.6 1 0 12001
"
103""" 750 47 0.1 1 0 11911
"
104""" 750 47 0.3 1 0 12001
105 750 47 0.6 1 0 11821
750" 54 0.1 1 0 12001
107 750 54 0.3 1 0 8703
811108" 750 54 0.6 1 0 10099
5"
,61" 0.1 1 0 12001
110 750 61 0.3 1 0 7453
0111" 750111" 750 61 0.6 1 0 9051
112 750 68 0.1 1 0 12001
68 0.3 1 0 12001
BehaviourSpace Code:
@#$#@#$#@
NetLogo 5.2.1
@#$#@#$#@
@#$#@#$#@
@#$#@#$#@
<experiments>
<experiment name="experiment1" repetitions="1" runMetricsEveryStep="false">
<setup>check-setup-percent
file-write-values</setup>
<go>go</go>
<final>write-to-file</final>
<timeLimit steps="12000"/>
<exitCondition>count inboxturtles with[exit = 1 and exited = false] = 0</exitCondition>
<steppedValueSet variable="knt" first="0" step="250" last="2500"/>
<steppedValueSet variable="k" first="40" step="7" last="110"/>
<enumeratedValueSet variable="threshold">
<value value="0.1"/>
<value value="0.3"/>
<value value="0.6"/>
</enumeratedValueSet>
<enumeratedValueSet variable="scale">
<value value="1"/>
</enumeratedValueSet>
</experiment>
</experiments>
@#$#@#$#@
@#$#@#$#@
default
0.0
-0.2 0 0.0 1.0
0.0 1 1.0 0.0
0.2 0 0.0 1.0
link direction
true
0
Line -7500403 true 150 150 90 180
Line -7500403 true 150 150 210 180
@#$#@#$#@
1
@#$#@#$#@

Is it possible to rotate a matrix by 45 degrees in matlab

I.e. so that it appears like a diamond (it's a square matrix), with each row having one more element than the row before, up until the middle row, which has a number of elements equal to the dimension of the original matrix, and then back down again with each row until the last has one element?
A rotation is of course not possible, as the "grid" a matrix is based on is regular.
But I remember what your initial idea was, so the following will help you:
%example data
A = magic(5);
A =
17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9
d = length(A)-1;
diamond = zeros(2*d+1);
for jj = d:-2:-d
    ii = (d-jj)/2+1;                        % row of A to place
    kk = (d-abs(jj))/2;                     % number of zeros to pad on each side
    D{ii} = { [zeros(1,kk) A(ii,:) zeros(1,kk)] };
    diamond = diamond + diag(D{ii}{1},jj);  % put the padded row on the jj-th diagonal
end
will return the diamond:
diamond =
0 0 0 0 17 0 0 0 0
0 0 0 23 0 24 0 0 0
0 0 4 0 5 0 1 0 0
0 10 0 6 0 7 0 8 0
11 0 12 0 13 0 14 0 15
0 18 0 19 0 20 0 16 0
0 0 25 0 21 0 22 0 0
0 0 0 2 0 3 0 0 0
0 0 0 0 9 0 0 0 0
Now you can again search for words or patterns row by row or column by column; just remove the zeros first.
Imagine you extract a single row:
row = diamond(5,:)
you can extract the non-zero elements with find:
rowNoZeros = row( find(row) )
rowNoZeros =
11 12 13 14 15
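If you need this for every row, a small sketch that collects the non-zero entries of each row of the diamond into a cell array (one "diagonal word" per cell) could look like this:
% Minimal sketch: strip the padding zeros from every row of the diamond.
words = cell(size(diamond,1), 1);
for r = 1:size(diamond,1)
    v = diamond(r,:);
    words{r} = v(v ~= 0);   % e.g. words{5} is 11 12 13 14 15
end
celldisp(words)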
Not a real diamond, but probably useful as well:
(Idea from the comments by @beaker. I will remove this part if he posts it himself.)
B = spdiags(A)
B =
11 10 4 23 17 0 0 0 0
0 18 12 6 5 24 0 0 0
0 0 25 19 13 7 1 0 0
0 0 0 2 21 20 14 8 0
0 0 0 0 9 3 22 16 15

matrix from a matrix matlab

I am trying to get the function to output an array T that has each value inside the fixed outer rows and columns averaged with itself and the 4 numbers surrounding it. I made X to receive all 9 of the values from my larger array, S to select only the ones I wanted, and A to use when averaging, yet it will not work. I believe the problem lies in the line X(ii,jj) = T((ii-1):(ii+1), (jj-1):(jj+1)). Any help much appreciated.
function T = tempsim(rows, cols, topNsideTemp, bottomTemp, tol)
T = zeros(rows,cols);
T(1,:) = topNsideTemp;
T(:,1) = topNsideTemp;
T(:,rows) = topNsideTemp;
T(rows,:) = bottomTemp;
S = [0 1 0; 1 1 1; 0 1 0];
X = zeros(3,3);
A = zeros(3,3);
for ii = 2:(cols-1);
jj = 2:(rows-1);
X(ii,jj) = T((ii-1):(ii+1), (jj-1):(jj+1))
A = X.*S;
T = (sum(sum(A)))/5
What you are doing looks like a convolution, as Jouni points out. Using that knowledge, I came up with the following code:
function T = tempsim(rows, cols, topNsideTemp, bottomTemp, tol)
sz = [rows,cols];
topEdge = sub2ind(sz, ones(1,cols) , 1:cols);
bottomEdge = sub2ind(sz, ones(1,cols)*rows, 1:cols);
leftEdge = sub2ind(sz, 1:rows , ones(1,rows));
rightEdge = sub2ind(sz, 1:rows , ones(1,rows)*cols);
otherEdges = [topEdge leftEdge rightEdge];
edges = [bottomEdge otherEdges];
%% set initial grid
T0 = zeros(sz);
T0(otherEdges) = topNsideTemp;
T0(bottomEdge) = bottomTemp;
%% average filter
F = [0 1 0
1 1 1
0 1 0];
F = F/sum(F(:));
%% simulation
T = T0; % initial condition
T = conv2(T, F, 'same');
T(edges) = T0(edges); % this keeps the edges set to the initial values
If you run this, you will get the following results:
T = tempsim(10,10,100,-100)
T0 =
100 100 100 100 100 100 100 100 100 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 0 0 0 0 0 0 0 0 100
100 -100 -100 -100 -100 -100 -100 -100 -100 100
T =
100 100 100 100 100 100 100 100 100 100
100 40 20 20 20 20 20 20 40 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 20 0 0 0 0 0 0 20 100
100 0 -20 -20 -20 -20 -20 -20 0 100
100 -100 -100 -100 -100 -100 -100 -100 -100 100
I also showed T0 for clarity, so you can see that T(2,2) == 40, which equals (100 + 100 + 0 + 0 + 0)/5, the average of the corresponding neighborhood in T0.
From the context, I guess you'll be studying the convergence of this problem. If that's the case, you will have to repeat the last 2 lines until it converges.
But depending on your actual problem, I think you can improve the initial conditions to speed up convergence by initializing the grid to a temperature different from 0. In the current code your boundary conditions will heat up the complete grid, which takes some time. If you just provide a proper guess for the bulk temperature (in lieu of 0), this can speed up convergence considerably. In my example I need about 40 steps to converge to a certain tolerance; with a proper guess (50 in my case) this can be reduced to about 20 steps for the same tolerance level. For larger grids, I expect to see even larger gains in efficiency.
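A possible shape for that convergence loop (just a sketch, reusing F, T0 and edges from the function above and the so-far-unused tol argument):
% Sketch: repeat the smoothing step until the largest change drops below tol.
T = T0;
maxChange = Inf;
while maxChange > tol
    Tnew = conv2(T, F, 'same');
    Tnew(edges) = T0(edges);                 % keep the boundary values fixed
    maxChange = max(abs(Tnew(:) - T(:)));    % largest pointwise update
    T = Tnew;
end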
This converges to the following values (and the mirror image for the other values):
100 100 100 100 100
100 96.502 93.464 91.254 90.097
100 92.989 86.925 82.533 80.245
100 89.229 79.995 73.386 69.974
100 84.579 71.615 62.556 57.963
100 77.78 59.86 47.904 42.037
100 66.515 41.786 26.614 19.565
100 45.939 13.075 -4.3143 -11.72
100 3.4985 -32.392 -46.997 -52.455
100 -100 -100 -100 -100
You can check that this solution is an approximate fixed point by verifying that each element in the bulk equals the average of its neighbors to within a certain tolerance.
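One quick way to do that check (a sketch, reusing T, T0, F and edges from above):
% Apply one more smoothing step and measure how much the grid still moves.
Tcheck = conv2(T, F, 'same');
Tcheck(edges) = T0(edges);           % boundary stays fixed
max(abs(Tcheck(:) - T(:)))           % should be below the chosen tolerance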