I am working on an app that handles a lot of asynchronous messages, hosted by HornetQ 2.3.21. Through some process the message size grows beyond 2 GB, and the message starts failing with this error on the server:
HQ212017: error adding packet: java.lang.IllegalStateException: Maximum size of 2gb exceeded
at org.jboss.netty.buffer.DynamicChannelBuffer.ensureWritableBytes(DynamicChannelBuffer.java:82) [netty-3.6.9.Final-redhat-1.jar:3.6.9.Final-redhat-1]
at org.jboss.netty.buffer.DynamicChannelBuffer.writeByte(DynamicChannelBuffer.java:205) [netty-3.6.9.Final-redhat-1.jar:3.6.9.Final-redhat-1]
at org.hornetq.core.buffers.impl.ChannelBufferWrapper.writeByte(ChannelBufferWrapper.java:539) [hornetq-commons-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.buffers.impl.ResetLimitWrappedHornetQBuffer.writeByte(ResetLimitWrappedHornetQBuffer.java:249) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.client.impl.ClientLargeMessageImpl$HornetQOutputStream.write(ClientLargeMessageImpl.java:205) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at java.io.OutputStream.write(OutputStream.java:116) [rt.jar:1.8.0_251]
at org.hornetq.utils.InflaterWriter.doWrite(InflaterWriter.java:106) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.utils.InflaterWriter.write(InflaterWriter.java:63) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at java.io.OutputStream.write(OutputStream.java:116) [rt.jar:1.8.0_251]
at java.io.OutputStream.write(OutputStream.java:75) [rt.jar:1.8.0_251]
at org.hornetq.core.client.impl.LargeMessageControllerImpl.addPacket(LargeMessageControllerImpl.java:185) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.client.impl.ClientConsumerImpl.handleLargeMessageContinuation(ClientConsumerImpl.java:748) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.client.impl.ClientSessionImpl.handleReceiveContinuation(ClientSessionImpl.java:935) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.client.impl.ClientSessionPacketHandler.handlePacket(ClientSessionPacketHandler.java:65) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:641) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:557) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:533) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1693) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.core.remoting.impl.invm.InVMConnection$1.run(InVMConnection.java:165) [hornetq-server-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105) [hornetq-core-client-2.3.21.Final-redhat-1.jar:2.3.21.Final-redhat-1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_251]
This message is stored on the filesystem in the messaginglargemessages folder.
I want to read the message to see which process is causing the issue, but it is encoded in a binary format. How can I read this file? I tried several utilities from the Internet that convert .msg files, changed the encoding, and fed the message to code that reads binary files, but no luck. I am not sure whether the contents of the message will be human readable. We need to get the module name, which should be part of this message; once we have the module name we might find other ways around the 2 GB error.
The operating system is Windows but we see the same messages and issues on Linux as well.
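Since the large-message file is raw binary, one low-tech way to hunt for the module name is to scan it for printable ASCII runs, the way the Unix `strings` utility does. Below is a minimal Python sketch of that approach (the file name `12345.msg` is hypothetical); using `mmap` keeps it workable even for a file approaching 2 GB:

```python
import mmap
import re

def extract_strings(path, min_len=8):
    """Yield printable ASCII runs of at least min_len bytes from a binary file."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)  # space through tilde
    with open(path, "rb") as f:
        # mmap avoids loading the whole (potentially 2 GB) file into memory
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for match in pattern.finditer(mm):
                yield match.group().decode("ascii")

if __name__ == "__main__":
    # Hypothetical file name from the messaginglargemessages folder
    for s in extract_strings("12345.msg"):
        print(s)
```

If the message body was compressed or serialized, the readable runs may be sparse, but property names and module identifiers often survive as plain text.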
I see this error when I stage an object that I imported from 3ds Max. The error I get is:
[Warning] CGF Upload failed : Directory stream 8 cannot be from 32-bit to 16-bit format because it contains directory 65535 [File=demo/3D/Sofa.cgf].
What method can I use to solve the problem?
The problem disappears when you transfer the object after splitting it into several groups. I'm open to other suggestions.
I recently reinstalled the NetBeans IDE on my Windows 10 PC in order to restore some unrelated configurations. When I check for new plugins (so I can download the Sakila sample database), I get this error.
I've tested the connection on both No Proxy and Use Proxy Settings, and both connection tests seem to end successfully.
I have allowed Netbeans through my firewall, but this has changed nothing either.
I haven't touched my proxy configuration, so it's on the default (autodetect). Switching autodetect off doesn't change anything either, no matter what proxy configuration I use in NetBeans.
Here's part of my log file that might be helpful:
Compiler: HotSpot 64-Bit Tiered Compilers
Heap memory usage: initial 32,0MB maximum 910,5MB
Non heap memory usage: initial 2,4MB maximum -1b
Garbage collector: PS Scavenge (Collections=12 Total time spent=0s)
Garbage collector: PS MarkSweep (Collections=3 Total time spent=0s)
Classes: loaded=6377 total loaded=6377 unloaded 0
INFO [org.netbeans.core.ui.warmup.DiagnosticTask]: Total memory 17.130.041.344
INFO [org.netbeans.modules.autoupdate.updateprovider.DownloadListener]: Connection content length was 0 bytes (read 0bytes), expected file size can`t be that size - likely server with file at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b is temporary down
INFO [org.netbeans.modules.autoupdate.ui.Utilities]: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.doCopy(DownloadListener.java:155)
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.streamOpened(DownloadListener.java:78)
at org.netbeans.modules.autoupdate.updateprovider.NetworkAccess$Task$1.run(NetworkAccess.java:111)
Caused: java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.notifyException(DownloadListener.java:103)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.copy(AutoupdateCatalogCache.java:246)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.writeCatalogToCache(AutoupdateCatalogCache.java:99)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogProvider.refresh(AutoupdateCatalogProvider.java:154)
at org.netbeans.modules.autoupdate.services.UpdateUnitProviderImpl.refresh(UpdateUnitProviderImpl.java:180)
at org.netbeans.api.autoupdate.UpdateUnitProvider.refresh(UpdateUnitProvider.java:196)
[catch] at org.netbeans.modules.autoupdate.ui.Utilities.tryRefreshProviders(Utilities.java:433)
at org.netbeans.modules.autoupdate.ui.Utilities.doRefreshProviders(Utilities.java:411)
at org.netbeans.modules.autoupdate.ui.Utilities.presentRefreshProviders(Utilities.java:405)
at org.netbeans.modules.autoupdate.ui.UnitTab$14.run(UnitTab.java:806)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:1423)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:2033)
It might be that the update server is down just right now; I haven't been able to test this either. But it also might be something wrong with my configuration. I'm going crazy!
Something that worked for me was changing the "http:" to "https:" in the update urls.
i.e., change "http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz"
to "https://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz"
No idea why that makes it work on my end. I'm running Linux Mint 19.1.
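If you maintain several update-center URLs, the http-to-https switch is easy to script. This small helper (the function name is mine, not part of NetBeans) rewrites only the scheme and leaves the host, path, and query string untouched:

```python
from urllib.parse import urlsplit, urlunsplit

def to_https(url: str) -> str:
    """Rewrite an http:// URL to https://, leaving other URLs unchanged."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        # SplitResult is a namedtuple, so _replace swaps just the scheme
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

This is purely a convenience for editing the URL list; the actual change still has to be entered under Tools > Plugins > Settings in NetBeans.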
I want to copy the sent message to the Sent folder using appendMessageOperation.
When the message is small (below about 3 KB?), the operation finishes successfully.
But if the message is larger, the operation times out with this error:
Error Domain=MCOErrorDomain Code=1 "A stable connection to the server could not be established." UserInfo={NSLocalizedDescription=A stable connection to the server could not be established.}
Is there any limitation on appending messages?
Environment: iPhone/iOS 12, iCloud
It was simply my own mistake:
the instance was being destroyed while appendMessageOperation was still working...
ETL jobs are failing with this Greenplum error:
Interconnect error writing an outgoing packet: Operation not permitted (seg20 slice1 NVMBD2AMK050D00:40002 pid=863)
Has anyone faced this error before?
I checked the logs but didn't find anything specific.
I also tried the solution given on Pivotal's site:
https://support.pivotal.io/hc/en-us/articles/202967503-Pivotal-Greenplum-GPDB-query-failed-with-Interconnect-error-writing-an-outgoing-packet-Operation-not-permitted-
Please help
An error has crashed my application server, and I can't figure out what is causing the issue. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.
Most probably this is a MongoDB bug with GridFS (it has since been fixed):
Writing two or more different files concurrently from different node processes using the GridStore.writeFile command results in some files not being correctly written (ending up with a number of corrupt files in the gridstore), even with all writeFile calls being successful and no indication of error. writeFile occasionally fails with the error "chunks out of order", but this happens very rarely (something like 1 failed writeFile per 100 or more corrupt files).
Based on the comments in that discussion, the problem will be fixed if you update MongoDB (the corrupt GridFS files should be removed).
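Before dropping files, you can identify the corrupt ones by comparing each file's expected chunk count (derived from its `length` and `chunkSize` fields) with the number of chunks actually stored. The helpers below sketch that arithmetic; the pymongo queries that would feed them are only hinted at in the trailing comment:

```python
import math

def expected_chunks(length: int, chunk_size: int) -> int:
    """Number of chunks GridFS should have stored for a file of `length` bytes."""
    return math.ceil(length / chunk_size)

def is_corrupt(length: int, chunk_size: int, actual_chunks: int) -> bool:
    """Flag a file whose stored chunk count doesn't match the expected count.

    The "no chunks found" error corresponds to actual_chunks == 0 with length > 0.
    """
    return actual_chunks != expected_chunks(length, chunk_size)

# With pymongo, the inputs would come from something like:
#   for f in db.fs.files.find():
#       n = db.fs.chunks.count_documents({"files_id": f["_id"]})
#       if is_corrupt(f["length"], f["chunkSize"], n):
#           print("corrupt:", f["filename"])
```

This only detects missing or surplus chunks, not corrupted chunk contents, but it is enough to find the files behind the "no chunks found" error.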
Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out the file sought by the GridFS read stream had actually been deleted; in my case it wasn't corrupt, it was gone! Above is a log from when that happened.