Jasper report - connection reset while filling report

I am new to JasperReports and am currently evaluating it as a replacement for our existing reporting engine.
The report works fine for smaller datasets, but I am facing this issue while generating a report for a large dataset (around 50k records). While filling the report, the error below is encountered:
2021-06-24 17:20:26,039+05:30 WARN net.sf.jasperreports.data.DataFileUtils [pool-7-thread-1] - Failed to dispose stream for net.sf.jasperreports.data.http.HttpDataConnection#3cddfb2
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210) ~[?:1.8.0_252]
at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_252]
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) ~[?:1.8.0_252]
at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593) ~[?:1.8.0_252]
at sun.security.ssl.InputRecord.read(InputRecord.java:532) ~[?:1.8.0_252]
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990) ~[?:1.8.0_252]
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948) ~[?:1.8.0_252]
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) ~[?:1.8.0_252]
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) ~[httpcore-4.4.13.jar:4.4.13]
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) ~[httpcore-4.4.13.jar:4.4.13]
at org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:205) ~[httpcore-4.4.13.jar:4.4.13]
at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:188) ~[httpcore-4.4.13.jar:4.4.13]
at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:210) ~[httpcore-4.4.13.jar:4.4.13]
at org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:312) ~[httpcore-4.4.13.jar:4.4.13]
at org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:142) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.client.entity.LazyDecompressingInputStream.close(LazyDecompressingInputStream.java:97) ~[httpclient-4.5.13.jar:4.5.13]
at java.io.FilterInputStream.close(FilterInputStream.java:181) ~[?:1.8.0_252]
at net.sf.jasperreports.data.DataFileStream.dispose(DataFileStream.java:87) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.data.json.JsonDataAdapterService.dispose(JsonDataAdapterService.java:142) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.engine.fill.JRFillDataset.disposeParameterContributors(JRFillDataset.java:1196) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.engine.fill.JRBaseFiller.fill(JRBaseFiller.java:649) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.engine.fill.BaseReportFiller.fill(BaseReportFiller.java:433) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.engine.fill.JRFiller.fill(JRFiller.java:162) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.engine.fill.JRFiller.fill(JRFiller.java:145) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.engine.JasperFillManager.fill(JasperFillManager.java:758) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:1074) [jasperreports-6.17.0.jar:6.17.0-6d93193241dd8cc42629e188b94f9e0bc5722efd]
The fill then continues to export the report, and the final report generated is incomplete.
I have also tried using JRSwapFileVirtualizer as shown below, but I am still getting the same error:
JRSwapFile swapFile = new JRSwapFile(getReportOutputDir(sReport.getId()).toString(), 100, 10); // also tried with arguments 1024, 1024
JRSwapFileVirtualizer virtualizer = new JRSwapFileVirtualizer(20, swapFile);
Map<String, Object> paramMap = new HashMap<>();
paramMap.put(JRParameter.REPORT_VIRTUALIZER, virtualizer);
JasperPrint jPrint = JasperFillManager.fillReport(jreport, paramMap, new JREmptyDataSource());
Am I not using the virtualizer correctly?
If the problem is not with virtualizer usage, then can someone please help me with the root cause and possible solution to this problem?
Any help is greatly appreciated. Thanks in advance.

I found the problem, though I do not know why it was happening.
Basically, I had two problems:
a connection reset error in the logs
an incomplete (HTML) report being generated
As posted in the question, I was using JREmptyDataSource in the fillReport call. The connection reset error disappeared after I changed the code to
JasperPrint jPrint = JasperFillManager.fillReport(jreport, paramMap);
Removing it had no other impact.
For the second problem, I was generating an HTML report, and whenever I opened it in the UI it rendered incompletely. So I downloaded the report to my local system and opened it in the browser, which worked fine. This means the report was not being generated incomplete; it was being rendered incomplete, probably because of the large amount of data.
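For reference, a minimal sketch of how the fill can look with the swap-file virtualizer once JREmptyDataSource is removed, so the report's configured JSON data adapter supplies the data and its HTTP stream is consumed only once (`jreport` and the swap directory are placeholders; the `cleanup()` call is not in the original code and is an assumption about tidy resource release):

```java
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JRParameter;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.fill.JRSwapFileVirtualizer;
import net.sf.jasperreports.engine.util.JRSwapFile;

// Swap file backing the virtualizer; directory is a placeholder.
JRSwapFile swapFile = new JRSwapFile(System.getProperty("java.io.tmpdir"), 1024, 1024);
JRSwapFileVirtualizer virtualizer = new JRSwapFileVirtualizer(20, swapFile);

Map<String, Object> paramMap = new HashMap<>();
paramMap.put(JRParameter.REPORT_VIRTUALIZER, virtualizer);
try {
    // No JREmptyDataSource here: the data adapter attached to the report
    // provides the records, so the HTTP stream is opened exactly once.
    JasperPrint jPrint = JasperFillManager.fillReport(jreport, paramMap);
    // ... export jPrint to HTML/PDF here ...
} finally {
    virtualizer.cleanup(); // release the swap-file pages after export
}
```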
Posting this just in case anyone else faces a similar issue.

Related

R2DBC MSSQL r2dbc.mssql.client.ReactorNettyClient : Connection has been closed by peer

I have started working on a Spring WebFlux and R2DBC project. For the most part, my code works fine.
But after some number of elements I receive this warning:
r2dbc.mssql.client.ReactorNettyClient : Connection has been closed by peer
After this warning I get the following exception, and the program stops reading from the Flux whose source is the R2DBC driver.
ReactorNettyClient$MssqlConnectionClosedException: Connection unexpectedly closed
My main pipeline looks like this:
Sinks.Empty<Void> completionSink = Sinks.empty();
Flux<Event> events = service.getPairs(
        taskProperties.A,
        taskProperties.B);
events
    .flatMap(/* some operation */)
    .doOnComplete(() -> {
        log.info("Finished Job");
        completionSink.emitEmpty(Sinks.EmitFailureHandler.FAIL_FAST);
    })
    .subscribe();
completionSink.asMono().block();
After the run, flatMap requests 256 elements by default, then after fetching calls request(1) for the next signal.
Somewhere between the 280th and the 320th element it hits the above error. It is not deterministic: sometimes it reads 280 elements, sometimes 303, 315, etc.
I think it may be network-related, but I am not sure and cannot find the reason. Do I need a pool or something different?
Sorry if I missed anything; if you need more details, I will update here.
Thank you in advance.
I have tried changing the flatMap request size to unbounded, adding a scheduler, and the default r2dbc pool, but so far I don't have any clue.
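If the server or an intermediate firewall is dropping connections it considers idle, a pool that validates and recycles connections may help. A sketch using r2dbc-pool (the connection URL is a placeholder, and the maxIdleTime and validationQuery values are assumptions chosen to keep connections younger than any network idle timeout, not values from the question):

```java
import java.time.Duration;

import io.r2dbc.pool.ConnectionPool;
import io.r2dbc.pool.ConnectionPoolConfiguration;
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;

// Placeholder URL: substitute real host, port, credentials, and database.
ConnectionFactory factory = ConnectionFactories.get(
        "r2dbc:mssql://user:password@localhost:1433/mydb");

ConnectionPool pool = new ConnectionPool(
        ConnectionPoolConfiguration.builder(factory)
                .initialSize(2)
                .maxSize(10)
                .maxIdleTime(Duration.ofMinutes(5)) // assumption: below the firewall's idle timeout
                .validationQuery("SELECT 1")        // verify a connection before handing it out
                .build());

// Use `pool` wherever a ConnectionFactory is expected (e.g. in a DatabaseClient).
```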

"invalid %N$ use detected" vlfeat sift error

I have used the vlfeat toolbox and vl_sift many times with no problem. I decided to use it in Matlab R2009a on Ubuntu 14.04. This is the error I get when running vl_setup():
??? XML-file failed validation against schema located in:
/home/tonystark/Matlab_prg/sys/namespace/info/v1/info.xsd
XML-file name:
/home/tonystark/freelancing/Content_based_image_retrieval/code/new_code/vlfeat-0.9.18/toolbox/info.xml
To retest the XML-file against the schema, call the following java method:
com.mathworks.xml.XMLValidator.validate(...
'/home/tonystark/freelancing/Content_based_image_retrieval/code/new_code/vlfeat-0.9.18/toolbox/info.xml',...
'/home/tonystark/Matlab_prg/sys/namespace/info/v1/info.xsd', true)
Errors:
org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found
starting with element 'help_contents_icon'. One of '{product_name_ends_with_type,
help_addon, preference_panel, dialogpref_registrar, project_plugin, state_item, list}'
is expected.
But when I run the same code again, the error does not appear and vl_setup completes, but no output is displayed.
And when I run this sample code
clc;
close all;
clear all;
I1 = imread('/home/tonystark/freelancing/Content_based_image_retrieval/Dataset/all_souls_000140.jpg');
I2 = imread('/home/tonystark/freelancing/Content_based_image_retrieval/Dataset/all_souls_000146.jpg');
[f1 d1] = vl_sift(single(rgb2gray(I1)));
[f2 d2] = vl_sift(single(rgb2gray(I2)));
Matlab crashes with the following error on the terminal:
*** invalid %N$ use detected ***
Aborted (core dumped)
I have been stuck here for quite some time without any specific direction. If somebody can point me in the right direction or solve this, it would be a great help. Your help is much appreciated.
UPDATE 1
The vlfeat readme says it requires at least Matlab R2009b for the toolbox to work. Could that be the reason?

Siemens S7-1200. TRCV_C. Error code: 893A; Event ID 02:253A

Please help me solve a problem with establishing communication between a PC and a 1211C (6ES7-211-1BD30-0XB0, firmware V2.0.2). I feel that I've made a stupid mistake somewhere, but I can't figure out where exactly it is.
So, I'm using the function TRCV_C...
The configuration seems to be okay:
When I set CONT=1, the connection is established without any problems...
But when I set EN_R=1, I get error 893A.
This is what I have in my diagnostic buffer (DB9 is the block where the received data is supposed to be written):
The manuals explain 893A as: the parameter contains the number of a DB that is not loaded. The diagnostic buffer also says that DB9 is not loaded. But in my case it is loaded! So what should I do?
It seems the DBs were created or edited manually, which leaves them misaligned with the FB instances. Try removing the DBs and FB instances, then add the FB instances again with automatically created instance DBs, and download again.

Authentication error in moodle when entering administration logins

Every time I enter my administration username and password, I get this error message:
Fatal error: Maximum execution time of 30 seconds exceeded in /home/gosmartm/public_html/moodle/mod/quiz/lang/en/quiz.php on line 870
The file and line are different every time; I've increased the maximum execution time to 120 seconds but still get the same problem.
Can someone help me?
Thank you in advance
Can you switch debugging on and report the result? Edit config.php and add these two lines after $CFG = new stdClass();
$CFG->debug = 32767;
$CFG->debugdisplay = true;
If you can, report the full PHP trace, not just the last line. It's unlikely a language file is causing the timeout; it's probably something happening before that.

Problem loading range_slices in Cassandra

I'm having a little trouble getting data out of Cassandra. The main problem is this exception:
ERROR 15:45:07,037 Internal error processing get_range_slices
java.lang.AssertionError: (162293240116362681726824838407749997815,35552186147124906726154103286687761342]
at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1251)
at org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:428)
at org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:513)
at org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.process(Cassandra.java:2868)
at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
So what do I do? I use describe_ring to get the topology of the ring, then ask each of the nodes for describe_splits, which gives me the tokens I should use to fetch the ranges, and then I just start asking for them, making sure that I set start_token and end_token on the key ranges.
Any ideas?
That's a bug, fixed in 0.6.9 and 0.7rc2.
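A note on what the assertion is complaining about: the token pair in the AssertionError appears to be a "wrapped" range, i.e. the start token is numerically greater than the end token, which is what the buggy range-slice code choked on. A quick check (class name and helper are illustrative; the token values are copied from the stack trace above):

```java
import java.math.BigInteger;

public class WrappedRangeCheck {
    // A range (start, end] on the token ring "wraps" around zero when start > end.
    static boolean isWrapped(BigInteger start, BigInteger end) {
        return start.compareTo(end) > 0;
    }

    public static void main(String[] args) {
        BigInteger start = new BigInteger("162293240116362681726824838407749997815");
        BigInteger end = new BigInteger("35552186147124906726154103286687761342");
        System.out.println(isWrapped(start, end)); // prints true: the range from the error wraps
    }
}
```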