Error: No chunks found for a file with Mongo GridFS - mongodb

An error has crashed my application server and I can't seem to figure out what could be causing the issue. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.

Most probably this is a MongoDB bug with GridFS that has since been fixed:
Writing two or more different files concurrently from different node processes using the GridStore.writeFile command results in some files not being correctly written (ending up with a number of corrupt files in the gridstore). You end up with corrupt files even with all writeFile calls being successful and no indication of error. writeFile occasionally fails with the error "chunks out of order", but this happens very rarely (something like 1 failed writeFile per 100 or more corrupt files).
Based on the comments in that discussion, updating MongoDB will fix the problem (the GridFS files themselves should be removed, as they are corrupt).
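If you need to clean up after upgrading, a corrupt file of this kind can be detected by comparing the number of chunks a file should have against what fs.chunks actually holds. Below is a minimal sketch using the modern MongoDB Java driver; the connection string, database name, and default "fs" bucket are assumptions, so adapt them to your deployment (and back up before deleting anything):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import org.bson.Document;
import org.bson.types.ObjectId;

import static com.mongodb.client.model.Filters.eq;

public class GridFsCleanup {
    public static void main(String[] args) {
        // Placeholder connection string and database name.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("mydb");
            GridFSBucket bucket = GridFSBuckets.create(db); // default "fs" bucket
            MongoCollection<Document> files = db.getCollection("fs.files");
            MongoCollection<Document> chunks = db.getCollection("fs.chunks");

            for (Document file : files.find()) {
                ObjectId id = file.getObjectId("_id");
                long length = ((Number) file.get("length")).longValue();
                long chunkSize = ((Number) file.get("chunkSize")).longValue();
                long expected = (length + chunkSize - 1) / chunkSize; // chunks needed, rounded up
                long actual = chunks.countDocuments(eq("files_id", id));
                if (actual < expected) {
                    // Missing chunks: this is the kind of document that triggers
                    // "no chunks found for file, possibly corrupt".
                    System.out.println("Removing corrupt file: " + file.getString("filename"));
                    bucket.delete(id); // removes the files doc and any remaining chunks
                }
            }
        }
    }
}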

Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out that the file sought in a GFS read stream had actually been deleted - so in my case it wasn't corrupt, but gone! Above is a log from when that happened.
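For that case, a cheap guard is to check that the files document still exists before opening the read stream. A minimal sketch under the same assumptions as above (modern MongoDB Java driver, an already-created bucket); the class and method names are hypothetical:

import java.io.IOException;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSDownloadStream;
import com.mongodb.client.gridfs.model.GridFSFile;
import com.mongodb.client.model.Filters;

public final class GridFsRead {
    static void readIfPresent(GridFSBucket bucket, String filename) throws IOException {
        GridFSFile found = bucket.find(Filters.eq("filename", filename)).first();
        if (found == null) {
            // The file was deleted (or never written): fail gracefully instead of
            // letting "no chunks found for file, possibly corrupt" surface downstream.
            System.err.println("File not found in GridFS: " + filename);
            return;
        }
        try (GridFSDownloadStream in = bucket.openDownloadStream(found.getObjectId())) {
            // ... consume the stream ...
        }
    }
}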

Related

How to find the offending file for 'fopen failed for data file: errno = 2 (No such file or directory)'

I have a Swift/SpriteKit project with an App Clip. The App Clip compiles and runs in the Simulator (though it crashes when tested via TestFlight), but I receive the error fopen failed for data file: errno = 2 (No such file or directory). However, the offending file is not named.
There are numerous SO questions on the topic of this error. But as far as I can tell, those questions do not address how to find the offending file when it's not named in the logs.
My question is simple: What's the easiest way to find the name/location of the file in question when receiving this error?
Thank you!
Edit: I'm also getting the following message:
Errors found! Invalidating cache...

Problems with Chronicle Map on Windows

I am trying to use ChronicleMap for my index structure. This seems to work fine on Linux, but when I run my JUnit test on Windows (which is my development environment), I keep getting an error: java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute.
Here's the code snippet that is problematic:
File file = new File(idxFullPath);
ChronicleMap<Integer, int[]> idx =
        ChronicleMapBuilder.of(Integer.class, int[].class)
                .averageValue(getSampleIdxList())
                .entries(IDX_MAX_SIZE)
                .createPersistedTo(file);
The following exception is thrown:
[2016-06-17 14:32:47.779] ERROR main com.mcm.op.persistence.Persistence ERR java.io.IOException: Unable to wait until the file is ready, likely the process which created the file crashed or hung for more than 1 minute
at net.openhft.chronicle.map.ChronicleMapBuilder.waitUntilReady(ChronicleMapBuilder.java:1520)
at net.openhft.chronicle.map.ChronicleMapBuilder.openWithExistingFile(ChronicleMapBuilder.java:1583)
at net.openhft.chronicle.map.ChronicleMapBuilder.createWithFile(ChronicleMapBuilder.java:1444)
at net.openhft.chronicle.map.ChronicleMapBuilder.createPersistedTo(ChronicleMapBuilder.java:1405)
at com.mcm.op.persistence.Persistence.initIdx(Persistence.java:131)
at com.mcm.op.persistence.Persistence.init(Persistence.java:177)
at com.mcm.op.persistence.PersistenceTest.initPersist(PersistenceTest.java:47)
at com.mcm.op.persistence.PersistenceTest.setUp(PersistenceTest.java:29)
Indeed, it is likely that the process which created the file crashed, or was terminated while you were debugging, or something like that.
If it's OK to have a fresh index from one unit-test run to the next, I recommend either deleting the file at idxFullPath before creating a Chronicle Map, or randomizing the mapping file via something like File.createTempFile(). In either case, File.deleteOnExit() could prove helpful.
If you want to keep the index between unit-test runs and always use the same file at idxFullPath for persistence, you could try builder.createOrRecoverPersistedTo() instead of the plain createPersistedTo() map-creation method. However, this might slow down map creation.
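A minimal sketch of both options, assuming Chronicle Map 3.x (createOrRecoverPersistedTo() does not exist in 2.x); the class name, IDX_MAX_SIZE value, and sample value stand in for the asker's own getSampleIdxList() and sizing:

import java.io.File;
import java.io.IOException;
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public final class IdxFactory {
    static final long IDX_MAX_SIZE = 100_000; // placeholder sizing

    // Option 1: a fresh, throwaway index per test run.
    static ChronicleMap<Integer, int[]> freshIdx() throws IOException {
        File file = File.createTempFile("idx", ".dat");
        file.deleteOnExit(); // best-effort cleanup when the JVM exits
        return ChronicleMapBuilder.of(Integer.class, int[].class)
                .averageValue(new int[] {1, 2, 3}) // stands in for getSampleIdxList()
                .entries(IDX_MAX_SIZE)
                .createPersistedTo(file);
    }

    // Option 2: keep the index between runs, recovering if the previous
    // process died or was killed while holding the file.
    static ChronicleMap<Integer, int[]> persistentIdx(String idxFullPath) throws IOException {
        return ChronicleMapBuilder.of(Integer.class, int[].class)
                .averageValue(new int[] {1, 2, 3})
                .entries(IDX_MAX_SIZE)
                .createOrRecoverPersistedTo(new File(idxFullPath));
    }
}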

OrientDB 2.1.9 crashes with OStorageException EOFException when running SQL script in console

I've been using my SQL database initialization script for a while, but recently the database has started crashing in the middle of execution and I don't know why. Here are some details:
I am running OrientDB on Ubuntu 14 Trusty x64 (via Vagrant)
It always seems to crash while the script attempts to create a UNIQUE_HASH_INDEX, but doesn't always crash at the same UNIQUE_HASH_INDEX instruction
The script creates a lot of vertices and edges, but for example, it will crash here (see line with UNIQUE_HASH_INDEX):
CREATE CLASS Channel EXTENDS V;
CREATE PROPERTY Channel.version LONG;
CREATE PROPERTY Channel.channelId STRING;
CREATE INDEX Channel.uq_channelId ON Channel(channelId) UNIQUE_HASH_INDEX;
The database crashes entirely with the following error:
Creating index... Error:
com.orientechnologies.orient.core.exception.OStorageException: Error on executing command: sql.create INDEX Channel.uq_channelId ON Channel(channelId) UNIQUE_HASH_INDEX
Error: java.io.EOFException
Looking at the log files, the only hints I get are the last two lines:
2016-01-14 17:17:05:437 INFO Received signal: SIGTERM [OSignalHandler]
2016-01-14 17:17:05:454 INFO Received signal: SIGTERM [OSignalHandler]
How can I resolve this issue, or at least get better hints as to what is making the database crash?
I also tested with OrientDB 2.1.6, as I was running that older version initially. Same problem.
Sorry, false alarm -- this is a Vagrant issue, not an OrientDB issue. Running the exact same script on a 32-bit instance instead of a 64-bit one solved my problem, and running the same script on a real 64-bit server also works.

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally (installed using Homebrew) to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: I just checked a direct download (vs. Homebrew), and it doesn't seem to work either.
You should check that your Hadoop configuration files contain valid configuration data.
Have a look in your hadoop/conf directory, inside:
hdfs-site.xml
mapred-site.xml
core-site.xml
Finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process. This revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, it all fell into place quickly.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in them. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
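If you hit something similar, a quick way to pinpoint the bad value is to scan the generated config (e.g. the job.xml that dtruss revealed) for characters, or numeric character references, that XML 1.0 forbids. A minimal sketch; the class name is hypothetical and the file path comes in as an argument:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class FindBadXmlChars {
    public static void main(String[] args) throws IOException {
        String text = Files.readString(Path.of(args[0])); // e.g. the generated job.xml

        // Raw control characters that XML 1.0 does not allow.
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            boolean legal = c == 0x9 || c == 0xA || c == 0xD
                    || (c >= 0x20 && c <= 0xD7FF)
                    || (c >= 0xE000 && c <= 0xFFFD);
            if (!legal) {
                System.out.printf("Illegal raw char U+%04X at offset %d%n", (int) c, i);
            }
        }

        // Numeric character references to forbidden characters, like the &#2 above.
        Matcher m = Pattern.compile("&#([0-9]+);").matcher(text);
        while (m.find()) {
            int cp = Integer.parseInt(m.group(1));
            if (cp < 0x20 && cp != 0x9 && cp != 0xA && cp != 0xD) {
                System.out.printf("Illegal character reference %s at offset %d%n",
                        m.group(), m.start());
            }
        }
    }
}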

ClickOnce: DeploymentDownloadException: The operation has timed out

Symptom: the ClickOnce installation starts, then stops after around 600 kB (out of 2 MB).
The progress bar always stops at the same value (tried ten times).
The error log says that The operation has timed out (in the inner exception) and the install fails with "DeploymentDownloadException (Unknown subtype)".
Error log details (irrelevant information trimmed):
ERROR DETAILS
Following errors were detected during this operation.
System.Deployment.Application.DeploymentDownloadException (Unknown subtype)
- Downloading http://fullpath/name.dll.deploy did not succeed.
- Source: System.Deployment
- Stack trace:
    at System.Deployment.Application.SystemNetDownloader.DownloadSingleFile(DownloadQueueItem next)
    at System.Deployment.Application.SystemNetDownloader.DownloadAllFiles()
    at System.Deployment.Application.FileDownloader.Download(SubscriptionState subState)
--- Inner Exception ---
System.Net.WebException
- The operation has timed out.
- Source: System
- Stack trace:
    at System.Net.ConnectStream.Read(Byte[] buffer, Int32 offset, Int32 size)
    at System.Deployment.Application.SystemNetDownloader.DownloadSingleFile(DownloadQueueItem next)
This only happens for two customers; the install works fine for thousands of others. I have found numerous posts via Google, but with no answer beyond the generic "firewall is the issue" or "customer was using dial-up".
Has anyone solved this? Is this a ClickOnce bug?
Disabling firewall software on the machine did not help, because the cause turned out to be a hardware firewall installed on the network (a FortiGate 30B).
I doubt that it's a bug. However, it seems like the download gets stuck on one file in the deployment path. Maybe it is a type of file that is blocked by a firewall.
I would remove all files but one from the build and see if that one downloads OK, then add the rest of the files back one by one (or maybe type by type) and see at which file ClickOnce gets stuck.
If that doesn't turn up anything, I'd build a dummy app, deploy it with ClickOnce, and see if it installs at all on the customer's box.