I am getting what looks to be a "classic" error when starting an Ember-CLI app:
EEXIST, file already exists.
I have consulted
Starting ember server with ember cli
And it seems like the issue in Broccoli has been fixed?
I tried deleting the node_modules and tmp folders, then running npm cache clear and npm install, but to no avail.
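For reference, the cleanup amounted to roughly this (shell syntax shown for brevity; I'm actually on Windows, so adjust accordingly):

# remove generated/installed artifacts and reinstall dependencies
rm -rf node_modules tmp
npm cache clear
npm install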
The weird thing is that there was never a file at templates/application.js (the file referenced in the error below).
ember server
version: 0.1.5
Using `app.import` with a file in the root of `vendor/` causes a significant performance penalty. Please move `bower_components\modernizr\modernizr.js` into a subdirectory.
Using `app.import` with a file in the root of `vendor/` causes a significant performance penalty. Please move `bower_components\fastclick\lib\fastclick.js` into a subdirectory.
Using `app.import` with a file in the root of `vendor/` causes a significant performance penalty. Please move `bower_components\foundation\js\foundation.js` into a subdirectory.
Livereload server on port 35729
Serving on http://0.0.0.0:4200/
EEXIST, file already exists 'C:\file-path\tmp\template_compiler-tmp_dest_dir-0waBduix.tmp\ember-base\templates\application.js'
Error: EEXIST, file already exists 'C:\file-path\tmp\template_compiler-tmp_dest_dir-0waBduix.tmp\ember-base\templates\application.js'
    at Object.fs.openSync (fs.js:432:18)
    at Object.fs.writeFileSync (fs.js:971:15)
    at Object.copyPreserveSync (C:\file-path\node_modules\ember-cli-emblem\node_modules\broccoli-emblem-compiler\node_modules\broccoli-filter\node_modules\broccoli-kitchen-sink-helpers\index.js:150:8)
    at C:\file-path\node_modules\ember-cli-emblem\node_modules\broccoli-emblem-compiler\node_modules\broccoli-filter\index.js:41:19
    at C:\file-path\node_modules\ember-cli-emblem\node_modules\broccoli-emblem-compiler\node_modules\broccoli-filter\node_modules\promise-map-series\index.js:11:14
    at $$$internal$$tryCatch (C:\file-path\node_modules\ember-cli-emblem\node_modules\broccoli-emblem-compiler\node_modules\broccoli-filter\node_modules\rsvp\dist\rsvp.js:490:16)
    at $$$internal$$invokeCallback (C:\file-path\node_modules\ember-cli-emblem\node_modules\broccoli-emblem-compiler\node_modules\broccoli-filter\node_modules\rsvp\dist\rsvp.js:502:17)
    at $$$internal$$publish (C:\file-path\node_modules\ember-cli-emblem\node_modules\broccoli-emblem-compiler\node_modules\broccoli-filter\node_modules\rsvp\dist\rsvp.js:473:11)
    at Object.$$rsvp$asap$$flush [as _onImmediate] (C:\file-path\node_modules\ember-cli-emblem\node_modules\broccoli-emblem-compiler\node_modules\broccoli-filter\node_modules\rsvp\dist\rsvp.js:1581:9)
    at processImmediate [as _immediateCallback] (timers.js:336:15)
Found the problem!
I was using ember-cli-emblem and I had both an application.hbs and an application.emblem in the templates folder.
Apparently this is not allowed; you must have one or the other.
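In case anyone else hits this, the fix was simply to delete one of the two templates; for example, assuming the default app/templates layout and that you want to keep the Emblem version:

# keep app/templates/application.emblem and drop the Handlebars duplicate
rm app/templates/application.hbs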
Related
I am running the following command in the directory where my root composer.json file is located:
./vendor/bin/typo3 extension:activate slickcarousel
However, I get the following error in return:
In ConnectionPool.php line 110: The requested database connection named "Default" has not been configured.
This happens even though I have configured my database in my LocalConfiguration.php. I also cannot find the ConnectionPool.php file in the vendor directory. How do I fix this error?
Do you use a different TYPO3 context, maybe? Then you need to set that as well when running the CLI command:
TYPO3_CONTEXT=Development ./vendor/bin/typo3 extension:activate slickcarousel
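If that is the case and you always run in that context on this machine, you can also export the variable once per shell session instead of prefixing every command:

# make the context stick for the rest of the session
export TYPO3_CONTEXT=Development
./vendor/bin/typo3 extension:activate slickcarousel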
I have a conflict between a number of install files.
I am getting the below error:
Transaction Summary
================================================================================
Install 612 Packages
Total size: 110 M
Installed size: 403 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Error: Transaction check error:
  file /etc/iproute2/rt_protos conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/iproute2/rt_tables conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/sysctl.conf conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and procps-3.3.12-r0.aarch64

Error Summary
-------------

ERROR: amlogic-image-headless-sd-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/user/amlogic-bsp/build/tmp/work/nexbox_a95x_s905x-poky-linux/amlogic-image-headless-sd/1.0-r0/temp/log.do_rootfs.29264
ERROR: Task (/home/user/amlogic-bsp/meta-meson/recipes-core/images/amlogic-image-headless-sd.bb:do_rootfs) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3131 tasks of which 3130 didn't need to be rerun and 1 failed.
I have seen somewhere that I should pin a file, but how do I do this? I can't find a tutorial or any reference to what that means.
I am also getting the below warning. Is this related?
WARNING: Layer meson should set LAYERSERIES_COMPAT_meson in its conf/layer.conf file to list the core layer names it is compatible with.
I'm new to OE, coming over from OpenWRT.
For BitBake, I've added the layers for the packages below:
meta-openwrt: OE/Yocto metadata layer for OpenWRT
superna9999/meta-meson: Upstream Linux Amlogic Meson Yocto/OpenEmbedded layer
I then tried compiling the nexbox-a95x-s905x image.
I think the problem is that /etc/iproute2/rt_protos is provided by base-files, which comes from meta-openwrt, as well as by the iproute2 package, which comes from other OE layers. It's not clear to the image builder which one to use, hence the conflict.
You can solve it by defining an iproute2_%.bbappend file in meta-openwrt in which the conflicting files are deleted from the iproute2 package, so that preference is given to the ones OpenWRT provides:
do_install_append() {
    # drop the copies shipped by iproute2 so the base-files versions win
    rm -f ${D}${sysconfdir}/iproute2/rt_protos
    rm -f ${D}${sysconfdir}/iproute2/rt_tables
}
should help.
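The error output also lists /etc/sysctl.conf conflicting between base-files and procps. If that conflict remains after the iproute2 fix, a similar bbappend for procps should clear it the same way (the exact location inside meta-openwrt below is an assumption, adjust to wherever your bbappends live):

# procps_%.bbappend -- hypothetical path: meta-openwrt/recipes-extended/procps/procps_%.bbappend
do_install_append() {
    # drop procps' copy so the base-files version wins
    rm -f ${D}${sysconfdir}/sysctl.conf
}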
I am trying to deploy my application code with Chef, which is working for one node and failing on another. I cannot determine why it works for one node and not another when they have the exact same config, but I can at least try to debug the problem on the node that fails.
deploy_revision app_config['deploy_dir'] do
scm_provider Chef::Provider::Git
repo app_config['deploy_repo']
revision app_config['deploy_branch']
if secrets["deploy_key"]
git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
end
enable_submodules true
shallow_clone false
symlink_before_migrate({}) # Symlinks to add before running db migrations
purge_before_symlink [] # Directories to delete before adding symlinks
create_dirs_before_symlink [] # Directories to create before adding symlinks
# symlinks()
action :deploy
restart_command do
service "apache2" do action :restart; end
end
end
This is my recipe for deploying the code. Notice that I have tried disabling symlinking entirely, as Chef always jams its own defaults in. Even with this I get the error:
================================================================================
Error executing action `deploy` on resource 'deploy_revision[/var/www]'
================================================================================
Chef::Exceptions::FileNotFound
------------------------------
Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/kapture/recipes/api.rb
68:
69: deploy_revision app_config['deploy_dir'] do
70: scm_provider Chef::Provider::Git
71: repo app_config['deploy_repo']
72: revision app_config['deploy_branch']
73: if secrets["deploy_key"]
74: git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
75: end
76: enable_submodules true
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/kapture/recipes/api.rb:69:in `from_file'
deploy_revision("/var/www") do
destination "/var/www/shared/cached-copy"
symlink_before_migrate {"config/database.yml"=>"config/database.yml"}
updated_by_last_action true
restart_command #<Proc:0x00007f40f366e5a0@/var/chef/cache/cookbooks/kapture/recipes/api.rb:82>
repository_cache "cached-copy"
retries 0
keep_releases 5
create_dirs_before_symlink ["tmp", "public", "config"]
updated true
provider Chef::Provider::Deploy::Revision
enable_submodules true
deploy_to "/var/www"
current_path "/var/www/current"
recipe_name "api"
revision "HEAD"
scm_provider Chef::Provider::Git
purge_before_symlink ["log", "tmp/pids", "public/system"]
git_ssh_wrapper "/var/www/git-ssh-wrapper"
remote "origin"
shared_path "/var/www/shared"
cookbook_name "kapture"
symlinks {"log"=>"log", "system"=>"public/system", "pids"=>"tmp/pids"}
action [:deploy]
repo "git#github.com:kapture/api.git"
retry_delay 2
end
[2012-09-24T15:42:07+00:00] ERROR: Running exception handlers
[2012-09-24T15:42:07+00:00] FATAL: Saving node information to /var/chef/cache/failed-run-data.json
[2012-09-24T15:42:07+00:00] ERROR: Exception handlers complete
[2012-09-24T15:42:07+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2012-09-24T15:42:07+00:00] FATAL: Chef::Exceptions::FileNotFound: deploy_revision[/var/www] (kapture::api line 69) had an error: Chef::Exceptions::FileNotFound: Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Here you can see it mentions database.yml and the tmp/, system/, and pids folders, all of which are defaults that Chef likes to set (see the related bug).
Question 1
What are these symlinks for, and how do I know if I even need any? What sort of things am I symlinking? I will be using migrations, so if they are useful for the migrations then I'll need them working.
I have read the documentation many times and it just doesn't explain this in plain English - at least not anywhere I have found.
Question 2
If I do not require them, how can I disable symlinking entirely? Following the examples in that bug report has not helped.
Clear out all the symlink attributes.
deploy_revision("/var/www") do
# ...
symlink_before_migrate.clear
create_dirs_before_symlink.clear
purge_before_symlink.clear
symlinks.clear
end
Make sure the deployment directory in shared has the proper directory structure (/var/www/shared/[log,pids,system,config]) and that all config files necessary for your application are in the config directory.
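For example, on the failing node (directory names are taken from the error and compiled resource above; the source location of database.yml is hypothetical, use wherever your real config lives):

# create the shared directory structure the deploy resource expects
mkdir -p /var/www/shared/log /var/www/shared/pids /var/www/shared/system /var/www/shared/config
# put your real database.yml in place (source path is a placeholder)
cp /path/to/your/database.yml /var/www/shared/config/database.yml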
Your recipe for your application's cookbook should have an array of directory names to create (recursively) so that you won't run into this error again.
The symlinks are there so that, while your application code continues to evolve, you can share the log, pids, and system folders across releases by symlinking shared/log to current/log, and so on.
Chef happens to cache the directory structure somewhere (I haven't dug into exactly where) when this application cookbook is used. I believe it's something in the deploy resource, which I never use, but you can fix it by deleting the directory structure it creates under your deploy directory (/var/www here). Also make sure your tmp directory is set up.
A couple of reasons this may be an issue:
File permissions are incorrect
The currently running Chef user cannot access the file
You are using the application cookbook in a configuration meant for Rails deploys and your application does not have the same directory structure
It's definitely caused by Chef caching the state of the deploy somewhere, then reading that state back out of its cache (wherever that is) and reusing it. I would look at the application cookbook first for any persistence, and if you don't find it there, look in the deploy resource in Chef itself.
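If you just need to get the node unstuck, a rough sketch of clearing that cached state by hand, using the paths shown in the compiled resource above (this throws away the cached checkout and old releases, so only do it if that's acceptable):

# paths come from the compiled resource output; adjust if your deploy_to differs
rm -rf /var/www/shared/cached-copy   # the cached git checkout (repository_cache)
rm -rf /var/www/releases             # previously deployed releases
rm -f  /var/www/current              # symlink to the active release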
I'm trying to run pig locally, installed using homebrew, to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both version 0.9.2 and 0.10.0
What am I missing?
Edit: Just checked a direct download (vs. homebrew) and it doesn't seem to work either
You should check that your Hadoop configuration files have correct configuration data.
Have a look in your hadoop/conf directory.
Have a look inside (a quick way to sanity-check them is shown after the list):
hdfs-site.xml
mapred-site.xml
core-site.xml
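A minimal way to do that check, assuming xmllint is installed (run from the hadoop/conf directory):

# fails loudly on any malformed XML, including invalid character references
xmllint --noout hdfs-site.xml mapred-site.xml core-site.xml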
Finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process. This revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, it all fell quickly into place.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, a stray control character (U+0002, http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in it. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
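One way to spot the culprit (a rough sketch; the job.xml path will differ per run) is to scan the generated file for non-printable characters:

# flag lines containing control characters that are not ordinary whitespace
LC_ALL=C grep -n '[^[:print:][:space:]]' /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml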
I am getting the following warning in my code:
Warning: is_readable() [function.is-readable]: open_basedir restriction in effect. File(/usr/share/php/./views/helpers/Doctype.php) is not within the allowed path(s): (/var/www/virtual/example.com/:/usr/share/pear/) in /var/www/virtual/example.com/htdocs/rockhopper-v2/library/Zend/Loader.php on line 198
or
Warning: is_readable() [function.is-readable]: open_basedir restriction in effect. File(/usr/share/php//var/www/virtual/example.com/htdocs/rockhopper-v2/application/modules/default/views/helpers/Layout.php) is not within the allowed path(s): (/var/www/virtual/example.com/:/usr/share/pear/) in /var/www/virtual/example.com/htdocs/rockhopper-v2/library/Zend/Loader.php on line 198
What is the problem, and will it cause problems in the deployment and production stages of my application?
Thank you
This message appears because, since Zend FW 1.10.1, the autoloader builds the path to those files differently. You can find some more information here: Zend FW Bug Report
To get rid of this message you can edit the file index.php and change the set_include_path to this:
set_include_path(
APPLICATION_PATH.'/../library'.PATH_SEPARATOR.
APPLICATION_PATH.'/../library/Zend'
);
open_basedir is set, which means PHP is running in a restricted (safe-mode-style) configuration: file access is limited to the folders specified in the list. This might help: http://blog.php-security.org/archives/72-Open_basedir-confusion.html
And yes, you will need to change it on any server where you want to access files outside the allowed folders.
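If you do control the server configuration, the restriction itself can be widened instead. A hedged sketch for an Apache vhost running mod_php (the added /usr/share/php/ entry is taken from the path in the warning; your hosting setup may require editing php.ini instead):

# append the Zend library location to the allowed paths
php_admin_value open_basedir "/var/www/virtual/example.com/:/usr/share/pear/:/usr/share/php/"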