How to find the offending file for 'fopen failed for data file: errno = 2 (No such file or directory)' - swift

I have a Swift/SpriteKit project with an App Clip. The App Clip builds and runs in the Simulator (though it crashes when tested via TestFlight), but I receive the error fopen failed for data file: errno = 2 (No such file or directory). However, the offending file is not named.
There are numerous SO questions on this error, but as far as I can tell none of them address how to find the offending file when it isn't named in the logs.
My question is simple: What's the easiest way to find the name/location of the file in question when receiving this error?
Thank you!
Edit: I'm also getting the following message:
Errors found! Invalidating cache...

Related

How to use a custom DevTools frontend for Brave Browser?

I'm debugging wasm code in DevTools. There is a line limit for displaying the wasm source code, 1000*1000 lines by default; when the module exceeds the limit, it shows ";; .... text is truncated due to the size":
My wasm is larger than that, so I'm trying to build a custom DevTools frontend. My environment is Kali Linux + Brave Browser.
According to this article, I commented out the lines:
The build succeeded; the only difference is that the output directory in the article is ".../resources/inspector" while mine is ".../resources/inspector_overlay".
I tried launching it with --custom-devtools-frontend=file:///root/.../inspector_overlay, but when I press F12 the DevTools fails to open and shows this error:
[82506:82549:0121/223355.672780:ERROR:devtools_ui_data_source.cc(306)] Failed to read /root/Downloads/devtools-frontend/out/Default/resources/inspector_overlay/devtools_app.html
[82506:82506:0121/223355.693282:ERROR:CONSOLE(67)] "Uncaught TypeError: Cannot read properties of undefined (reading 'setInspectedTabId')", source: devtools://devtools/bundled/devtools_compatibility.js (67)
How to make it work?
I found from this discussion that the correct path should be gen/front_end, so the param is: --custom-devtools-frontend=file:///root/Downloads/devtools-frontend/out/Default/gen/front_end
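As a quick sanity check, the directory you pass should contain the entry point Brave failed to read in the error above (devtools_app.html). A minimal sketch, assuming the build layout described in the question:

# Sketch: verify the directory handed to --custom-devtools-frontend
# actually contains the entry point Brave tried to read. The path
# below mirrors the question and is illustrative.
from pathlib import Path

frontend = Path("/root/Downloads/devtools-frontend/out/Default/gen/front_end")
entry = frontend / "devtools_app.html"
print(entry, "exists:", entry.exists())

If the file doesn't exist there, Brave has nothing to load, which matches the "Failed to read ... devtools_app.html" error above.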

Commands invalid after 'import_board_preset' command

Currently I am trying to follow the MathWorks tutorial to register a TE0720 with a TE0701-6 carrier board in MATLAB. I followed the instructions, created the block design, and exported it as advised. Using the MATLAB HDL Workflow Advisor I can follow along until step 4.1, Create Project. There, I get the following error message:
invalid command name "CONFIG.PCW_INCLUDE_ACP_TRANS_CHECK"
while executing
"CONFIG.PCW_INCLUDE_ACP_TRANS_CHECK {0} CONFIG.PCW_IOPLL_CTRL_FBDIV {30} CONFIG.PCW_IO_IO_PLL_FREQMHZ {1000.000} CONFIG.PCW_IRQ_F2P_INTR {1} CONFIG..."
(procedure "create_root_design" line 49)
invoked from within
"create_root_design """
(file "vivado_custom_block_design.tcl" line 986)
while executing
"source vivado_custom_block_design.tcl"
(file "vivado_create_prj.tcl" line 15)
This refers to the exported block design in the corresponding *.tcl file.
If I delete the line mentioned in the error, the error persists, but for the following line. This holds true until I have deleted all lines following
CONFIG.PCW_IMPORT_BOARD_PRESET {preset}
It seems to me that once the preset for the board is imported, all subsequent commands are seen as invalid. If I put this line at the end of the list instead, I get the error
ERROR [Common 17-69] Command failed: Missing name/value pair in -dict argument.
If I remove this line, I get the error
ERROR [BD 41-1811] The interconnect </axi_interconnect_0> is missing a valid master interface connection
ERROR [Common 17-39] 'validate_bd_design' failed due to earlier errors.
Is there a way to fix this, or what is the problem here?
EDIT: I am using Vivado 2017.4 from the Vivado HL WebPACK. Could it be that a feature MATLAB needs to rebuild the project is not available in this version?
EDIT 2: I started the complete tutorial from scratch again, and now I only get the error
ERROR: [BD 41-1811] The interconnect </axi_interconnect_0> is missing a valid master Interface connection
when going through the HDL Workflow Advisor. As far as I understand the issue, Vivado searches for something to connect the axi_interconnect to. But isn't that the interface port (DUT) described later in the tutorial (end of step 2 in "Register the custom reference design in HDL Workflow Advisor"), where the compiled Simulink model is supposed to be connected?

Error: Load failed, save is disabled

I'm completely new to IPython/Jupyter. I literally just typed in my first example and went to save it, and now I'm getting this error:
Error: Load failed, save is disabled
Also, I don't know if it's related, but I can't left-click into cells; I have to right-click and then press Escape (very annoying).
Thanks
Open up the JavaScript developer console and read the error message.
In my case it looked like:
Failed to load resource: the server responded with a status of 404 (Not Found)
main.min.js:12623 Object
main.min.js:12625 API request failed (404): No such file or directory: code/pcc/Prototype%%20Pattern.ipynb
main.min.js:26690 Error stack trace while loading notebook was:
main.min.js:26691 XhrError: No such file or directory: code/pcc/Prototype%%20Pattern.ipynb
at wrap_ajax_error (http://localhost:8888/static/notebook/js/main.min.js?v=40e10638fcf65fc1c057bff31d165e9d:12671:29)
at Object.settings.error (http://localhost:8888/static/notebook/js/main.min.js?v=40e10638fcf65fc1c057bff31d165e9d:12692:24)
at l (http://localhost:8888/static/notebook/js/main.min.js?v=40e10638fcf65fc1c057bff31d165e9d:89:24882)
at Object.c.fireWith (http://localhost:8888/static/notebook/js/main.min.js?v=40e10638fcf65fc1c057bff31d165e9d:89:25702)
at k (http://localhost:8888/static/notebook/js/main.min.js?v=40e10638fcf65fc1c057bff31d165e9d:91:5373)
at XMLHttpRequest.<anonymous> (http://localhost:8888/static/notebook/js/main.min.js?v=40e10638fcf65fc1c057bff31d165e9d:91:9152)
main.min.js:12030 Loaded extension: widgets/notebook/js/extension
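The 404 means the notebook server could not find the file at the path it resolved relative to the directory it was started from (the doubled %% in the log is most likely just the logger escaping a percent sign; %20 itself decodes to a space). A minimal sketch to check whether the file really is where the server expects it, assuming the server was launched from your home directory:

# Sketch: check that the notebook exists where the server is looking.
# The server root below is an assumption; use the directory you ran
# `jupyter notebook` from. The relative path mirrors the log above.
from pathlib import Path

server_root = Path.home()  # assumption: launch directory of the server
target = server_root / "code" / "pcc" / "Prototype Pattern.ipynb"
print(target, "exists:", target.exists())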

Error: No chunks found for a file with Mongo gridFS

An error has crashed my application server and I can't seem to figure out what could be causing the issue. My application is built with Meteor and hosted on modulus.io. Here are my application logs:
Error: no chunks found for file, possibly corrupt
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:817:20
at /mnt/data/2/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:594:7
at /mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:35
at Cursor.close (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:989:5)
at Cursor.nextObject (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:758:17)
at commandHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/cursor.js:727:14)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/db.js:1916:9
at Server.Base._callHandler (/mnt/data/2/node_modules/mongodb/lib/mongodb/connection/base.js:448:41)
at /mnt/data/2/node_modules/mongodb/lib/mongodb/connection/server.js:481:18
at [object Object].MongoReply.parseBody (/mnt/data/2/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
[2015-03-29T22:05:57.573Z] Application CRASH detected. Exit code 8.
Most probably this is a MongoDB bug with GridFS (it has since been fixed):
Writing two or more different files concurrently from different node processes using the GridStore.writeFile command results in some files not being correctly written (ending up with a number of corrupt files in the gridstore), even with all writeFile calls being successful and no indication of error. writeFile occasionally fails with the error "chunks out of order", but this happens very rarely (something like 1 failed writeFile for 100 corrupt files or more).
Based on the comments in that discussion, the problem should be fixed if you update MongoDB (the corrupt GridFS files should be removed).
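If you want to identify the affected files before removing them, here is a minimal sketch using PyMongo; the connection string and database name are assumptions to adapt to your deployment. GridFS keeps metadata in fs.files and the data in fs.chunks, keyed by files_id:

# Sketch: list GridFS files that have a metadata document but no
# chunks backing them -- the condition behind "no chunks found".
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumption: local instance
db = client["mydatabase"]                          # assumption: your db name

for f in db.fs.files.find():
    if db.fs.chunks.count_documents({"files_id": f["_id"]}) == 0:
        print("no chunks for:", f.get("filename"), f["_id"])
        # Uncomment to remove the orphaned metadata document:
        # db.fs.files.delete_one({"_id": f["_id"]})

Note that count_documents requires PyMongo 3.7 or later.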
Error: no chunks found for file, possibly corrupt
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:808:20
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/gridfs/gridstore.js:586:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/collection/query.js:164:5
at /home/developer/rundir/node_modules/mongoose/node_modules/mongodb/lib/mongodb/cursor.js:778:35
I had a similar occurrence, but it turned out the file sought in a GridFS read stream had actually been deleted; so in my case it wasn't corrupt, but gone! Above is a log from when that happened.

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally, installed using Homebrew, to test a script. However, I get the following error when I attempt to run a simple DUMP from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: I just checked a direct download (vs. Homebrew) and it doesn't seem to work either.
You should check that your Hadoop configuration files contain valid configuration data. Have a look in your hadoop/conf directory, inside:
hdfs-site.xml
mapred-site.xml
core-site.xml
I finally worked out what the problem was. I ended up having to use dtruss -p on the Pig/Java process, which revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, everything quickly fell into place.
Pig was picking up the proxy excludes from my network connections, which, as far as I can tell, had &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in them. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
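If you need to hunt down the same kind of bad value, here is a rough sketch for locating numeric character references that are invalid in XML 1.0 (which only allows #x9, #xA, #xD, and most characters from #x20 upward, excluding surrogates); the file path is illustrative:

# Sketch: scan a generated config file (e.g. the job.xml above) for
# numeric character references that decode to invalid XML 1.0 characters.
import re

def valid_xml_char(cp: int) -> bool:
    return cp in (0x9, 0xA, 0xD) or 0x20 <= cp <= 0xD7FF \
        or 0xE000 <= cp <= 0xFFFD or 0x10000 <= cp <= 0x10FFFF

path = "/tmp/job.xml"  # illustrative: point this at the generated file
ref = re.compile(r"&#(x[0-9A-Fa-f]+|[0-9]+);?")
with open(path, encoding="utf-8", errors="replace") as fh:
    for lineno, line in enumerate(fh, 1):
        for m in ref.finditer(line):
            tok = m.group(1)
            cp = int(tok[1:], 16) if tok.startswith("x") else int(tok)
            if not valid_xml_char(cp):
                print(f"line {lineno}: invalid reference {m.group(0)}")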