Missing data (JasperReports Server reports) - jasper-reports

When I run my report in iReport I only get one row as output:
100 100 - BA - 7294 - 1 - 3
But when I copy the query created by the report out of the server logs and run it directly, I get 80 rows as output:
100 100 - BA - 7294 - 1 - 3
100 101 - BA - 7294 - 1 - 3
100 102 - BA - 7294 - 1 - 3
100 103 - BA - 7294 - 1 - 3
100 104 - BA - 7294 - 1 - 3
100 106 - BA - 7294 - 1 - 3
100 107 - BA - 7294 - 1 - 3
100 108 - BA - 7294 - 1 - 3
100 109 - BA - 7294 - 1 - 3
100 110 - BA - 7294 - 1 - 3
etc...
I have done these kinds of reports a hundred times and this has never happened, and I can't seem to find a solution.
What could be the cause of this missing data?
Here is the query I copied from the logs (it's a simple query, nothing fancy):
select
`quallev_qualificationlevel`.`quallev_fieldofstudy` as `quallev_qualificationlevel_quallev_fieldofstudy`,
`qde_qualificationdescription`.`qde_shortdescription` as `qde_qualificationdescription_qde_shortdescription`,
`qde_qualificationdescription`.`qde_namepurpose_cd_id` as `qde_qualificationdescription_qde_namepurpose_cd_id`,
`cn_campusname`.`cn_campusid` as `cn_campusname_cn_campusid`,
`qde_qualificationdescription`.`qde_langauge_id` as `qde_qualificationdescription_qde_langauge_id`,
`fv_factorvalue_ft`.`fv_1` as `fv_factorvalue_ft_fv_1`,
`fv_factorvalue_ft`.`fv_2` as `fv_factorvalue_ft_fv_2`,
`fv_factorvalue_ft`.`fv_3` as `fv_factorvalue_ft_fv_3`,
`fosfte_fieldofstudyftefactor`.`fosfte_factor_cd_id` as `fosfte_fieldofstudyftefactor_fosfte_factor_cd_id`,
`fv_factorvalue_ft`.`fv_4` as `fv_factorvalue_ft_fv_4`,
`fv_factorvalue_ft`.`fv_5` as `fv_factorvalue_ft_fv_5`,
`fv_factorvalue_ft`.`fv_6` as `fv_factorvalue_ft_fv_6`,
`fv_factorvalue_ft`.`fv_7` as `fv_factorvalue_ft_fv_7`,
`fv_factorvalue_ft`.`fv_8` as `fv_factorvalue_ft_fv_8`,
`fv_factorvalue_ft`.`fv_9` as `fv_factorvalue_ft_fv_9`,
`fos_fieldofstudy`.`fos_code` as `fos_fieldofstudy_fos_code`,
`fosfte_fieldofstudyftefactor`.`fosfte_ftefactorvalue` as `fosfte_fieldofstudyftefactor_fosfte_ftefactorvalue`,
`qual_qualification`.`qual_code` as `qual_qualification_qual_code`,
`oun_organisationunitname`.`oun_ou_id` as `oun_organisationunitname_oun_ou_id`,
`fos_fieldofstudy`.`fos_startdate` as `fos_fieldofstudy_fos_startdate`,
`qh_qualificationhemis`.`qh_weight` as `qh_qualificationhemis_qh_weight`
from `qo_qualificationorganisation`
inner join `quallev_qualificationlevel` on (`qo_qualificationorganisation`.`qo_quallev_id` = `quallev_qualificationlevel`.`quallev_id`)
inner join `org_organisation` on (`org_organisation`.`org_be_id` = `qo_qualificationorganisation`.`qo_org_be_id`)
inner join `oun_organisationunitname` on (`oun_organisationunitname`.`oun_ou_id` = `org_organisation`.`org_ou_id`)
inner join `cn_campusname` on (`org_organisation`.`org_campusid` = `cn_campusname`.`cn_campusid`)
inner join `fos_fieldofstudy` on (`quallev_qualificationlevel`.`quallev_fos_id` = `fos_fieldofstudy`.`fos_id`)
inner join `qual_qualification` on (`quallev_qualificationlevel`.`quallev_qual_id` = `qual_qualification`.`qual_id`)
inner join `qh_qualificationhemis` on (`qh_qualificationhemis`.`qh_qual_id` = `qual_qualification`.`qual_id`)
inner join `fosfte_fieldofstudyftefactor` on (`fosfte_fieldofstudyftefactor`.`fosfte_fos_id` = `fos_fieldofstudy`.`fos_id`)
inner join `qde_qualificationdescription` on (`qde_qualificationdescription`.`qde_qual_id` = `qual_qualification`.`qual_id`)
inner join `fv_factorvalue_ft` on (`fos_fieldofstudy`.`fos_id` = `fv_factorvalue_ft`.`fv_fos_id`)
where `qde_qualificationdescription`.`qde_langauge_id` = 3
and `fos_fieldofstudy`.`fos_startdate` <= '2013-11-08'
and `qde_qualificationdescription`.`qde_namepurpose_cd_id` = 7294
and `cn_campusname`.`cn_campusid` = '1'
and `oun_organisationunitname`.`oun_ou_id` = '11'
and `fosfte_fieldofstudyftefactor`.`fosfte_factor_cd_id` = 7699
group by `quallev_qualificationlevel`.`quallev_fieldofstudy`
Note: I am using iReport 5.0.0 on Win7 Pro 64bit for JasperReports Server 5.0.1 on Tomcat 7

OK, so I finally found the solution.
The problem was that JasperReports Server had a default row limit of 200,000.
I raised that limit to 2,000,000 and now it works.

Postgresql get the distinct values with another max value

I have a table with these values:
id - idProduction - historical - idVehicle - km
1 - 200 - 1 - 258 - 100
2 - 200 - 2 - 258 - 100
3 - 200 - 2 - 259 - 150
4 - 200 - 3 - 258 - 120
5 - 200 - 3 - 259 - 170
6 - 300 - 1 - 100 - 80
7 - 100 - 1 - 258 - 140
8 - 300 - 2 - 325 - 50
I need to get the rows with the max historical value for each distinct idProduction. In this case:
4 - 200 - 3 - 258 - 120
5 - 200 - 3 - 259 - 170
7 - 100 - 1 - 258 - 140
8 - 300 - 2 - 325 - 50
This is my first time working with PostgreSQL, so I don't have much of an idea how to do it. Can anyone help me?
Thank you!
I think I can solve my problem with this, but I'm not sure:
SELECT id, productions_vehicles.id_production, nu_task_number, id_historical, id_vehicle
FROM productions_vehicles
INNER JOIN
(SELECT id_production, MAX(id_historical) AS idHistorico
FROM productions_vehicles
GROUP BY id_production) topHistorico
ON productions_vehicles.id_production = topHistorico.id_production
AND productions_vehicles.id_historical = topHistorico.idHistorico;
You effectively need two queries, and your solution looks good. You can also use a WITH clause (common table expression) for the first query:
WITH topHistorico AS (
SELECT id_production, MAX(id_historical) AS idHistorico
FROM productions_vehicles
GROUP BY id_production)
SELECT id, pv.id_production, nu_task_number, id_historical, id_vehicle
FROM productions_vehicles pv
INNER JOIN topHistorico th ON pv.id_production = th.id_production AND pv.id_historical = th.idHistorico
PostgreSQL: Documentation: 9.1: WITH Queries (Common Table Expressions)
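If you prefer a single query, a window function gives the same result; this is a sketch assuming the column names from the sample data (add nu_task_number the same way if you need it):
SELECT id, id_production, id_historical, id_vehicle, km
FROM (
    SELECT *, MAX(id_historical) OVER (PARTITION BY id_production) AS max_historical
    FROM productions_vehicles
) t
WHERE id_historical = max_historical;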

Kafka Streams: Kafka Streams application stuck rebalancing

After all Kafka brokers were restarted to change the offset.retention.minutes setting (increasing it to 60 days), the Kafka Streams application consuming from them got stuck, and the consumer group shows as rebalancing:
bin/kafka-consumer-groups.sh --bootstrap-server ${BOOTSTRAP_SERVERS} --describe --group stream-processor | sort
Warning: Consumer group 'stream-processor' is rebalancing.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
customers 0 84263 84288 25 - - -
customers 1 71731 85068 13337 - - -
customers 10 71841 84801 12960 - - -
customers 11 84273 84336 63 - - -
customers 12 84189 84297 108 - - -
customers 13 83969 84051 82 - - -
customers 14 84693 84767 74 - - -
customers 15 84472 84556 84 - - -
customers 2 84175 84239 64 - - -
customers 3 71719 71719 0 - - -
customers 4 71446 84499 13053 - - -
customers 5 84291 84361 70 - - -
customers 6 71700 71700 0 - - -
customers 7 72003 85235 13232 - - -
customers 8 84521 84587 66 - - -
customers 9 71513 71513 0 - - -
customers-intermediate 0 102774 102792 18 - - -
customers-intermediate 1 102931 103028 97 - - -
customers-intermediate 10 102883 102965 82 - - -
customers-intermediate 11 102723 102861 138 - - -
customers-intermediate 12 102797 102931 134 - - -
customers-intermediate 13 102339 102460 121 - - -
customers-intermediate 14 103227 103321 94 - - -
customers-intermediate 15 103231 103366 135 - - -
customers-intermediate 2 102637 102723 86 - - -
customers-intermediate 3 84620 103297 18677 - - -
customers-intermediate 4 102596 102687 91 - - -
customers-intermediate 5 102980 103071 91 - - -
customers-intermediate 6 84385 103058 18673 - - -
customers-intermediate 7 103559 103652 93 - - -
customers-intermediate 8 103243 103312 69 - - -
customers-intermediate 9 84211 102772 18561 - - -
events 15 11958555 15231834 3273279 - - -
events 3 1393386 16534651 15141265 - - -
events 4 1149540 15390069 14240529 - - -
visitors 15 2774874 2778873 3999 - - -
visitors 3 603242 603324 82 - - -
visitors 4 565266 565834 568
The streams application was restarted too, and afterwards I could see processing logs for about 20 hours; then it stopped processing.
It's been like this for two days. It is also worth mentioning that all the topics shown above have 16 partitions, but for some of them (events, visitors) only three partitions appear. However, when I describe the topics, their partitions are well distributed as usual and I can find nothing strange there.
What could have happened?
After restarting the application, I can see all partitions again, with the application consuming from them. However, many (most) partitions had lost their offsets. Since I changed the offset.retention.minutes setting, this should not have happened.
events 0 - 14538261 - stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer-a8a6e989-d6c1-472f-aec5-3ae637d87b9e /ip2 stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer
events 1 49070 13276094 13227024 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 10 - 15593746 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 11 - 15525487 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 12 - 21863908 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 13 - 15810925 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 14 - 13509742 - stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer-69da45eb-b0b2-4ad8-a831-8aecf6849892 /ip1 stream-processor-44fefb47-23b1-4597-8a6b-d7f1c364c316-StreamThread-1-consumer
events 15 11958555 15231834 3273279 stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer-a8a6e989-d6c1-472f-aec5-3ae637d87b9e /ip2 stream-processor-01f7ecea-4e50-4505-a8e7-8c536058b7bc-StreamThread-1-consumer
...
Kafka 1.1.0, Kafka Streams 1.0
UPDATE: Still happening on 2.1.1.
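For reference, the broker-side property involved is offsets.retention.minutes (note the plural) in server.properties, and 60 days corresponds to 86,400 minutes, so the change behind the restarts would look something like this:
# server.properties on each broker: keep committed consumer offsets for 60 days
offsets.retention.minutes=86400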

Matlab programming, Matrix sorting and ranking

Consider the example given below. There are 3 customers: A, B and C.
The first row of each matrix is the demand of the respective customer and the second row is the day on which they need it.
For example: demand A=[10,40,50;15,45,75]; customer A needs 10 items on day 15, 40 items on day 45 and 50 items on day 75. Similarly for B and C.
demand A=[10,40,50;15,45,75];
demand B=[80,30,20;05,35,80];
demand C=[50,40,30;20,47,88];
Now I need to rank the customers on the basis of days. So the answer should be like:
rank 1: 5th day customer B 80 items
rank 2: 15th day customer A 10 items
rank 3: 20th day customer C 50 items.
and so on.
How can I do it in MATLAB, so that when I rank on the basis of the day I also know how many items and which customer it is?
The output should be like this:
Rank Customer items day
1 B 80 05
2 A 10 15
3 C 50 20
4 B 30 35
5 A 40 45
6 C 40 47
7 A 50 75
8 B 20 80
9 C 30 88
I suggest the following approach:
First stage
Generate a new matrix, which is the concatenation of A, B and C, such that:
The first column represents the day.
The second column represents the requested amount.
The third column is the customer index (A=1, B=2, C=3).
% stack the transposed matrices and append the customer index (A=1, B=2, C=3)
res = [A',ones(size(A',1),1); B',ones(size(B',1),1)*2; C',ones(size(C',1),1)*3];
% swap the first two columns so each row becomes [day, amount, customer]
res(:,[2,1]) = res(:,[1,2]);
Second stage
Sort the matrix according to the first column, which represents the day:
% sort by day (first column) and reorder the rows accordingly
[~,sortedDaysIndices] = sort(res(:,1));
res = res(sortedDaysIndices,:);
Third stage: print the results
for ii = 1:size(res,1)
    % map the numeric customer index back to a letter
    if res(ii,3)==1
        customerStr = 'A';
    elseif res(ii,3)==2
        customerStr = 'B';
    else
        customerStr = 'C';
    end
    % print: rank, customer, items, day
    fprintf('%s\n',[num2str(ii) ' ' customerStr ' ' num2str(res(ii,2)) ' ' num2str(res(ii,1))])
end
Result
1 B 80 5
2 A 10 15
3 C 50 20
4 B 30 35
5 A 40 45
6 C 40 47
7 A 50 75
8 B 20 80
9 C 30 88
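For what it's worth, the same ranking can be produced a bit more compactly with sortrows; this is a sketch assuming the same A, B and C matrices from the question:
% build rows of [day, items, customerIndex], sort by day, then print the ranking
M = [A'; B'; C'];                              % each row: [items, day]
cust = [ones(3,1); 2*ones(3,1); 3*ones(3,1)];  % 1=A, 2=B, 3=C
T = sortrows([M(:,2), M(:,1), cust], 1);       % reorder to [day, items, customer], sort by day
labels = 'ABC';
for ii = 1:size(T,1)
    fprintf('%d %s %d %d\n', ii, labels(T(ii,3)), T(ii,2), T(ii,1));
end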

JSX/Photoshop: layer opacity issue

Photoshop CS6/JSX. I'm changing the opacity of the selected layer by increasing or reducing it by 10. The problem I'm getting:
When reducing the value by 10 I get this sequence of reductions:
100 - 90 - 80 - 71 - 61 - 51 - 41 - 31 - 22 - 12 - 2
When increasing, the results are:
0 - 10 - 20 - 31 - 41 - 51 - 61 - 71 - 82 - 92
The code is something like this:
var opc = app.activeDocument.activeLayer.opacity;
desc2.putUnitDouble(cTID('Opct'), cTID('#Prc'), opc - 10.0);
/* or
desc2.putUnitDouble(cTID('Opct'), cTID('#Prc'), opc + 10.0); */
Any idea on how to fix it in order to get only multiples of 10?
Thanks in advance
Math.round() does the trick. First, force the opacity of the layer to be a rounded value:
var opc = Math.round(app.activeDocument.activeLayer.opacity)
Now you can change the opacity by adding or subtracting the desired value:
app.activeDocument.activeLayer.opacity = opc - 10; // or + 10
Thanks to Anna Forrest for the help.
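Putting it together, a minimal sketch (nudgeOpacity is just an illustrative name) that rounds first and also clamps the result to the valid 0-100 range:
// round the current opacity so repeated +/-10 steps stay on multiples of 10,
// then clamp to Photoshop's 0-100 opacity range; step is +10 or -10
function nudgeOpacity(step) {
    var layer = app.activeDocument.activeLayer;
    var opc = Math.round(layer.opacity) + step;
    layer.opacity = Math.max(0, Math.min(100, opc));
}
nudgeOpacity(-10); // or nudgeOpacity(10)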

cut specific columns from several files and reshape using unix tools

I have several hundred files in a folder. Each of these files is a tab-delimited text file that contains more than a million rows and 27 columns. From each file, I want to extract only specific columns (say columns 1, 2, 11, 12 and 13); columns 3:10 and 14:27 can be ignored. I want to do this for all files in the folder (say 2,300 files). The columns from each of the 2,300 files look like this:
Sample.ID SNP.Name col3 col10 Sample.Index Allele1...Forward Allele2...Forward col14 ....col27
1234567890_A rs758676 - - 1 T T - ....col27
1234567890_A rs3916934 - - 1 T T - ....col27
1234567890_A rs2711935 - - 1 T C - ....col27
1234567890_A rs17126880 - - 1 - - - ....col27
1234567890_A rs12831433 - - 1 T T - ....col27
1234567890_A rs12797197 - - 1 T C - ....col27
The cut columns from the 2nd file may look like this....
Sample.ID SNP.Name col3 col10 Sample.Index Allele1...Forward Allele2...Forward col14 ....col27
1234567899_C rs758676 - - 100 T A - ....col27
1234567899_C rs3916934 - - 100 T T - ....col27
1234567899_C rs2711935 - - 100 T C - ....col27
1234567899_C rs17126880 - - 100 C G - ....col27
1234567899_C rs12831433 - - 100 T T - ....col27
1234567899_C rs12797197 - - 100 T C - ....col27
The cut columns from the 3rd file may look like this....
Sample.ID SNP.Name col3 col10 Sample.Index Allele1...Forward Allele2...Forward col14 ....col27
1234567999_F rs758676 - - 256 A A - ....col27
1234567999_F rs3916934 - - 256 T T - ....col27
1234567999_F rs2711935 - - 256 T C - ....col27
1234567999_F rs17126880 - - 256 C G - ....col27
1234567999_F rs12831433 - - 256 T T - ....col27
1234567999_F rs12797197 - - 256 C C - ....col27
The widths of the Sample.ID and Sample.Index columns are the same within each file but can change between files. The value of Sample.ID is the same within each file but differs between files. Each of the cut files has the same values under the "SNP.Name" column. The Sample.Index column may sometimes be the same across different files. The values of the other two columns (Allele1...Forward & Allele2...Forward) may change, and are pasted with a space separator under each SNP.Name for each Sample.ID.
I finally want to merge (tab-delimited) all the cut columns from the 2,300 files into this format:
Sample.Index Sample.ID rs758676 rs3916934 rs2711935 rs17126880 rs12831433 rs12797197
1 1234567890_A T T T T T C 0 0 T T T C
200 1234567899_C T A T T T C C G T T T C
256 1234567999_F A A T T T C C G T T C C
In simple terms, I want to convert a long format into a wide format based on the Sample.ID column. This is similar to the reshape function in R. I tried this with R, but it runs out of memory and is really slow. Can anyone help with Unix tools?
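For the column-extraction step on its own, something like the following would pull the five columns from every file (a sketch; the *.txt glob and the cut_ output prefix are just placeholders for your actual file names):
# keep only tab-delimited columns 1, 2, 11, 12 and 13 from each input file
for f in *.txt; do
    cut -f1,2,11,12,13 "$f" > "cut_$f"
done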
When reshape.sh was applied to 20 files, it produced a spurious "Sample" line in the output. Only the first 4 fields are shown here.
Sample.Index Sample.ID rs476542 rs7073746
1234567891_A 11 C C A G
1234567892_A 191 T C A G
1234567893_A 204 T C G G
1234567894_A 15 T C A G
1234567895_A 158 T T A A
1234567896_A 208 T C A A
1234567897_A 111 T T G G
1234567898_A 137 T C G G
1234567899_A 216 T C A G
1234567900_A 113 T C G G
1234567901_A 152 T C A G
1234567902_A 178 C C A A
1234567903_A 135 C C A A
1234567904_A 125 T C A A
1234567905_A 194 C C A A
1234567906_A 110 C C G G
1234567907_A 126 C C A A
Sample -
1234567908_A 169 C C G G
1234567909_A 173 C C G G
1234567910_A 168 T C A A
#!/bin/bash
awk '
BEGIN {
    # widest Sample.Index / Sample.ID seen so far, used for column alignment
    maxInd = length("Sample.Index")
    maxID = length("Sample.ID")
}
# skip the first 11 header lines of each file; keep only rows whose 2nd field is an rs SNP name
FNR>11 && $2 ~ "^rs" {
    SNP[$2]                          # remember every SNP name seen
    key[$11,$1]                      # remember every (Sample.Index, Sample.ID) pair
    val[$2,$11,$1] = $12" "$13       # the two alleles for this SNP and sample
    maxInd = (len=length($11)) > maxInd ? len : maxInd
    maxID = (len=length($1)) > maxID ? len : maxID
}
END {
    # header row: Sample.Index, Sample.ID, then one column per SNP
    printf("%-*s\t%*s\t", maxInd, "Sample.Index", maxID, "Sample.ID")
    for (rs in SNP)
        printf("%s\t", rs)
    printf("\n")
    # one output row per (Sample.Index, Sample.ID) pair
    for (pair in key) {
        split(pair, a, SUBSEP)
        printf("%-*s\t%*s\t", maxInd, a[1], maxID, a[2])
        for (rs in SNP) {
            ale = val[rs,a[1],a[2]]
            # missing or "- -" genotypes are written as "0 0"
            out = ale == "- -" || ale == "" ? "0 0" : ale
            printf("%*s\t", length(rs), out)
        }
        printf("\n")
    }
}' DNA*.txt
Proof of Concept
$ ./reshapeDNA
Sample.Index Sample.ID rs2711935 rs10829026 rs3924674 rs2635442 rs715350 rs17126880 rs7037313 rs11983370 rs6424572 rs7055953 rs758676 rs7167305 rs12831433 rs2147587 rs12797197 rs3916934 rs11002902
11 1234567890_A T T 0 0 C C 0 0 0 0 T C 0 0 C C T G 0 0 C C 0 0 T C A G T T T C G G
111 1234567892_A T T T C C C 0 0 0 0 C C T C C C T T 0 0 C C 0 0 T T A A T T T T G G
1 1234567894_A T T 0 0 T C C C A G C C 0 0 C C 0 0 T C C C T T T T A G T T C C G G
12 1234567893_A T T 0 0 C C T C A A T C 0 0 C C 0 0 T T C C T G T C A G T T T C G G
15 1234567891_A T T C C C C 0 0 0 0 C C C C C C T T 0 0 C C 0 0 T C A G T T T T G G