CryEngine 3 CGF Upload Failed - import

I see this error when I place an object that I imported from 3ds Max into my level. The error I get is:
[Warning] CGF Upload failed : Index stream 8 cannot be converted from 32-bit to 16-bit format because it contains index 65535 [File=demo/3D/Sofa.cgf].
What method can I use to solve the problem?

The error most likely means a single mesh addresses more vertices than a 16-bit index buffer can hold (65535 is the largest 16-bit value), so the exporter cannot convert the 32-bit index stream. If you export the object split into several smaller groups, each piece stays under that limit and the problem disappears. I'm open to other suggestions.
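For illustration, here is the arithmetic behind that workaround as a minimal Python sketch (the constant and helper are hypothetical, not part of CryEngine or the exporter):

MAX_16BIT_INDEX = 0xFFFF  # 65535: the largest value a 16-bit index can hold

def groups_needed(vertex_count: int) -> int:
    """Minimum number of sub-objects so each fits a 16-bit index buffer."""
    return -(-vertex_count // MAX_16BIT_INDEX)  # ceiling division

print(groups_needed(150_000))  # -> 3: a 150k-vertex mesh needs at least 3 groups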

Related

STM32CubeIDE data mismatch

I am trying to work with the TouchGFX Designer demo on my STM32G071RB and the X-NUCLEO-GFX01M2.
The example project works fine if I flash it via TouchGFX designer.
When I import it into STM32CubeIDE, after setting up the external loader found in the TouchGFX directory, I get the following error:
Erasing memory corresponding to segment 0:
Erasing internal memory sectors [0 34]
Erasing memory corresponding to segment 1:
Erasing external memory sector 0
Download in Progress:
File download complete
Time elapsed during download operation: 00:00:02.135
Verifying ...
Error: Data mismatch found at address 0x90000000 (byte = 0x00 instead of 0xD2)
Error: Download verification failed
Shutting down...
Exit.
I've also tried using STM32CubeProgrammer, but I get the same error.
How can I fix this?
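The verifier only reports the first mismatching byte. If you read the external flash region back as a binary dump and diff it against the image you flashed, you can at least see whether the region is entirely unwritten or only partially wrong. A small comparison sketch in Python, assuming you already have both files (the file names are placeholders you supply yourself):

import sys

BASE = 0x90000000  # external flash base address, from the error log

# argv[1]: the image that was flashed; argv[2]: a dump read back from the device
with open(sys.argv[1], "rb") as f:
    image = f.read()
with open(sys.argv[2], "rb") as f:
    dump = f.read()

for offset, (expected, actual) in enumerate(zip(image, dump)):
    if expected != actual:
        print(f"mismatch at 0x{BASE + offset:08X}: "
              f"byte = 0x{actual:02X} instead of 0x{expected:02X}")
        break
else:
    print("no mismatch in the compared range")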

How to find the offending file for 'fopen failed for data file: errno = 2 (No such file or directory)'

I have a Swift/SpriteKit project with an App Clip. The App Clip builds and runs on the Simulator (though it crashes when tested via TestFlight), but I receive the error fopen failed for data file: errno = 2 (No such file or directory). However, the offending file is not named.
There are numerous SO questions on the topic of this error. But as far as I can tell, those questions do not address how to find the offending file when it's not named in the logs.
My question is simple: What's the easiest way to find the name/location of the file in question when receiving this error?
Thank you!
Edit: I'm also getting the following message:
Errors found! Invalidating cache...
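One low-tech way to narrow this down is to list everything that actually shipped inside the built bundle and compare it with the files your code opens. A Python sketch, assuming you point it at the .app produced by your build (the path is a placeholder):

import os
import sys

# Walk a built .app bundle and print every file in it, relative to the
# bundle root, so a data file that never made it into the bundle stands out.
bundle = sys.argv[1]  # path to the built App Clip .app (placeholder)
for root, _dirs, files in os.walk(bundle):
    for name in files:
        print(os.path.relpath(os.path.join(root, name), bundle))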

PhpStorm FTP 425 Unable to build data connection: Cannot assign requested address

PhpStorm FTP upload failed.
[17-1-16 5:17 PM] Failed to transfer file '/a': cant open output connection for file "ftp://192.168.1.229:21/a". Reason: "425 Unable to build data connection: Cannot assign requested address".
[17-1-16 5:17 PM] Upload to server completed in less than a minute: 108 files transferred, 3 items failed (541.1 Kb/s)
My PhpStorm is running on deepin (Linux). These are my kernel network settings:
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.ip_local_port_range = 10000 65000
I tried changing the local port range, but the upload still fails.
Can anyone help?
I ran into the same problem in IntelliJ IDEA.
Changing the connection type from FTP to SFTP fixed it for me. I hope that's an option for you too.
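If you need to stay on plain FTP, it may also be worth checking whether the failure is tied to the data-connection mode, since a 425 is raised while opening the data channel. A minimal diagnostic sketch using Python's ftplib (the host is taken from the log; the credentials are placeholders):

from ftplib import FTP, error_temp

ftp = FTP()
ftp.connect("192.168.1.229", 21)  # host/port from the log
ftp.login("user", "password")     # placeholder credentials

# In passive mode the client opens the data connection; in active mode the
# server connects back to the client. Trying both shows which side fails
# to build the data connection.
for passive in (True, False):
    ftp.set_pasv(passive)
    try:
        ftp.nlst()
        print(f"passive={passive}: data connection OK")
    except error_temp as exc:  # 4xx replies such as 425
        print(f"passive={passive}: {exc}")

ftp.quit()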

Error: No chunks found for a file with Mongo gridFS

An error has crashed my application server and I can't seem to figure out what could be causing the issue. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.
Most probably this is a MongoDB bug with GridFS (it has since been fixed):
Writing two or more different files concurrently from different node processes using the GridStore.writeFile command results in some files not being correctly written (ending up with a number of corrupt files in the gridstore), even with all writeFile calls being successful and no indication of error. writeFile occasionally fails with the error "chunks out of order", but this happens very rarely (something like 1 failed writeFile per 100 or more corrupt files).
Based on the comments in that discussion, the problem should be fixed if you update MongoDB (the GridFS files should be removed, as they are corrupt).
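To find which GridFS entries are affected before removing them, you could look for fs.files documents that no fs.chunks document references. A sketch using pymongo (the connection URI and database name are placeholders; the collection names assume the default fs prefix):

from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder URI and db name

# A file in this corrupt state has a metadata document in fs.files but no
# chunk documents in fs.chunks pointing back at it.
for f in db["fs.files"].find({}, {"filename": 1, "length": 1}):
    chunks = db["fs.chunks"].count_documents({"files_id": f["_id"]})
    if chunks == 0 and f.get("length", 0) > 0:  # zero-length files own no chunks
        print("no chunks for:", f["_id"], f.get("filename"))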
Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out the file sought in a GFS read stream had actually been deleted, so in my case it wasn't corrupt, but gone! Above is a log from when that happened.

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally, installed using Homebrew, to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt (pig -x local):
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: Just checked a direct download (vs. Homebrew) and it doesn't seem to work either.
You should check that your Hadoop configuration files contain valid configuration data.
Have a look in your hadoop/conf directory, inside:
hdfs-site.xml
mapred-site.xml
core-site.xml
Finally worked out what the problem was. I ended up having to use dtruss -p on the Pig/Java process. This revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, it all fell quickly into place.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in it. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
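If you run into something similar, one way to locate the offending file is to scan candidate configs (hand-written, or generated like the job.xml above) for characters that XML 1.0 forbids, whether they appear raw or as numeric character references. A standard-library Python sketch; pass the files to check as arguments:

import re
import sys

# XML 1.0 allows only tab (0x09), LF (0x0A) and CR (0x0D) among the control
# characters; anything else is invalid whether it appears raw or as a
# numeric character reference such as &#2;
RAW = re.compile(r"[\x00-\x08\x0B\x0C\x0E-\x1F]")
REF = re.compile(r"&#(\d+);?")

for path in sys.argv[1:]:
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            for m in RAW.finditer(line):
                print(f"{path}:{lineno}: raw control char 0x{ord(m.group()):02X}")
            for m in REF.finditer(line):
                code = int(m.group(1))
                if code < 0x20 and code not in (0x09, 0x0A, 0x0D):
                    print(f"{path}:{lineno}: invalid character reference &#{code};")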