I am trying to download the following GitHub issue so that I can view it offline.
The address https://github.com/twitter/bootstrap/issues/5982 is accessible to everyone (no login is needed), but when I run the following command
httrack https://github.com/twitter/bootstrap/issues/5982 -O ~/Sites/github/ -v
I get the following message:
mirroring https://github.com/twitter/bootstrap/issues/5982 with the wizard help..
18:28:22 Warning: Cache: damaged cache, trying to repair
18:28:22 Warning: Cache: 0 bytes successfully recovered in 0 entries
18:28:22 Warning: Cache: error trying to open the cache
18:28:22 Info: No data seems to have been transfered during this session! : restoring previous one!
Done.
Thanks for using HTTrack!
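The "damaged cache" warnings usually mean that the hts-cache directory inside the output folder was corrupted by an earlier interrupted run. A possible fix, assuming nothing else in ~/Sites/github/ needs to be preserved, is to remove the cache and re-run the mirror:
rm -rf ~/Sites/github/hts-cache
httrack https://github.com/twitter/bootstrap/issues/5982 -O ~/Sites/github/ -v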
I have a conflict between a number of install files.
I am getting the below error:
Transaction Summary
================================================================================
Install  612 Packages

Total size: 110 M
Installed size: 403 M
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Error: Transaction check error:
  file /etc/iproute2/rt_protos conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/iproute2/rt_tables conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and iproute2-4.14.1-r0.aarch64
  file /etc/sysctl.conf conflicts between attempted installs of base-files-3.0.14-r89.nexbox_a95x_s905x and procps-3.3.12-r0.aarch64

Error Summary
-------------

ERROR: amlogic-image-headless-sd-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/user/amlogic-bsp/build/tmp/work/nexbox_a95x_s905x-poky-linux/amlogic-image-headless-sd/1.0-r0/temp/log.do_rootfs.29264
ERROR: Task (/home/user/amlogic-bsp/meta-meson/recipes-core/images/amlogic-image-headless-sd.bb:do_rootfs) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3131 tasks of which 3130 didn't need to be rerun and 1 failed.
I have seen somewhere that I should pin a file, but how do I do this? I can't find a tutorial or any reference to what that means.
I am also getting the below warning. Is this related?
WARNING: Layer meson should set LAYERSERIES_COMPAT_meson in its conf/layer.conf file to list the core layer names it is compatible with.
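For the LAYERSERIES_COMPAT warning, the meson layer's conf/layer.conf is expected to declare which core release series it supports, along these lines (the release name below is only an illustration; use the one matching your Poky checkout):
LAYERSERIES_COMPAT_meson = "sumo"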
I'm new to OE, coming over from OpenWRT.
For bitbake, I've added the layers for the packages below:
meta-openwrt: OE/Yocto metadata layer for OpenWRT
superna9999/meta-meson: upstream Linux Amlogic Meson Yocto/OpenEmbedded layer
and tried compiling the nexbox-a95x-s905x image.
I think the problem is that /etc/iproute2/rt_protos is provided both by base-files, which comes from meta-openwrt, and by the iproute2 package, which comes from the other OE layers. It is not clear to the image builder which one to use, hence the conflict.
You can solve it by defining an iproute2_%.bbappend file in meta-openwrt which deletes this file from the iproute2 package, so preference is given to the one OpenWRT provides:
do_install_append() {
    # drop iproute2's copy so only the base-files version ends up in the image;
    # the same line with rt_tables would cover the other reported conflict
    rm -f ${D}${sysconfdir}/iproute2/rt_protos
}
should help.
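The /etc/sysctl.conf conflict with procps can presumably be handled the same way, e.g. with a procps_%.bbappend in meta-openwrt (a sketch following the same pattern; adjust it if you would rather keep the procps copy instead):
do_install_append() {
    # drop procps' sysctl.conf in favour of the one from base-files
    rm -f ${D}${sysconfdir}/sysctl.conf
}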
I have a locally sourced, enterprise-installed extension which is installed via the ExtensionInstallForcelist policy. The policy is visible on the chrome://policy page with a status of OK. The URL to the update manifest XML file is of the form "file:///c:/program%20files/xxx/updates.xml". The .crx file is also located in the same folder, "file:///c:/program%20files/xxx/myextension.crx". I can successfully browse to both of those files. Yet the extension does not load.
Is there any way to determine the reason that Chrome is not loading the extension? I do not see any indication of error. I have opened the inspect developer window on the extension page, but see no console messages or exceptions. Is there a log file I could look at, or some other means of determining why the extension is not loading?
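One way to get more detail is to start Chrome with verbose logging enabled; the output then goes to chrome_debug.log in the Chrome user data directory (a sketch; the flags are the standard ones, the executable path is whatever applies on your machine):
chrome.exe --enable-logging --v=1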
UPDATE: I turned on logging and see the following:
[3752:3156:0327/171545:WARNING:extension_error_reporter.cc(79)] Extension error: Expected ID "kfegeekbdleinhdfillngiggbjiflghe", but ID was "ijdpkgandgfnpbammiehlfpfpboclodn".
[3752:3440:0327/172253:WARNING:extension_protocols.cc(422)] Failed to GetPathForExtension: kfegeekbdleinhdfillngiggbjiflghe
[3752:3440:0327/172253:WARNING:url_request_job_manager.cc(89)] Failed to map: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/
[3752:3440:0327/172253:VERBOSE1:resource_loader.cc(364)] OnResponseStarted: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/
[3752:3440:0327/172253:VERBOSE1:resource_loader.cc(778)] ResponseCompleted: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/
[3752:3156:0327/172253:VERBOSE1:navigator_impl.cc(298)] Failed Provisional Load: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/, error_code: -2, error_description: Unknown error., showing_repost_interstitial: 0, frame_id: 1
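The first warning is the important one: the ID Chrome found in the package ("ijdpkgandgfnpbammiehlfpfpboclodn") does not match the ID it expected from the policy ("kfegeekbdleinhdfillngiggbjiflghe"). For reference, the update manifest for a force-installed extension typically looks roughly like this (version and path are illustrative; the appid must match both the policy entry and the ID the .crx was actually packed with):
<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
  <app appid='kfegeekbdleinhdfillngiggbjiflghe'>
    <updatecheck codebase='file:///c:/program%20files/xxx/myextension.crx' version='1.0'/>
  </app>
</gupdate>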
Please help me understand the output of an unsuccessful Scalding run on Hadoop.
I got the latest Scalding distribution from git:
git clone https://github.com/twitter/scalding.git
After running sbt assembly from the scalding directory, I tried to run the tutorial with the command:
scripts/scald.rb --hdfs tutorial/Tutorial0.scala
As a result I got the following errors:
scripts/scald.rb:194: warning: already initialized constant SCALA_LIB_DIR
rsyncing 19.8M from scalding-core-assembly-0.10.0.jar to my.host.here in background...
downloading hadoop-core-1.1.2.jar from http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-core/1.1.2/hadoop-core-1.1.2.jar...
ssh: Could not resolve hostname my.host.here: Name or service not known
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
Successfully downloaded hadoop-core-1.1.2.jar!
downloading commons-codec-1.8.jar from http://repo1.maven.org/maven2/commons-codec/commons-codec/1.8/commons-codec-1.8.jar...
Successfully downloaded commons-codec-1.8.jar!
downloading commons-configuration-1.9.jar from http://repo1.maven.org/maven2/commons-configuration/commons-configuration/1.9/commons-configuration-1.9.jar...
Successfully downloaded commons-configuration-1.9.jar!
downloading jackson-asl-0.9.5.jar from http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-asl/0.9.5/jackson-asl-0.9.5.jar...
Successfully downloaded jackson-asl-0.9.5.jar!
downloading jackson-mapper-asl-1.9.13.jar from http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar...
Successfully downloaded jackson-mapper-asl-1.9.13.jar!
downloading commons-lang-2.6.jar from http://repo1.maven.org/maven2/commons-lang/commons-lang/2.6/commons-lang-2.6.jar...
Successfully downloaded commons-lang-2.6.jar!
downloading slf4j-log4j12-1.6.6.jar from http://repo1.maven.org/maven2/org/slf4j/slf4j-log4j12/1.6.6/slf4j-log4j12-1.6.6.jar...
Successfully downloaded slf4j-log4j12-1.6.6.jar!
downloading log4j-1.2.15.jar from http://repo1.maven.org/maven2/log4j/log4j/1.2.15/log4j-1.2.15.jar...
Successfully downloaded log4j-1.2.15.jar!
downloading commons-httpclient-3.1.jar from http://repo1.maven.org/maven2/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar...
Successfully downloaded commons-httpclient-3.1.jar!
downloading commons-cli-1.2.jar from http://repo1.maven.org/maven2/commons-cli/commons-cli/1.2/commons-cli-1.2.jar...
Successfully downloaded commons-cli-1.2.jar!
downloading commons-logging-1.1.1.jar from http://repo1.maven.org/maven2/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar...
Successfully downloaded commons-logging-1.1.1.jar!
downloading zookeeper-3.3.4.jar from http://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.3.4/zookeeper-3.3.4.jar...
Successfully downloaded zookeeper-3.3.4.jar!
compiling tutorial/Tutorial0.scala
scalac -classpath /tmp/temp_scala_home_2.9.3_654763/scala-library-2.9.3.jar:/tmp/temp_scala_home_2.9.3_654763/scala-compiler-2.9.3.jar:/home/test/Cascading/scalding/scalding-core/target/scala-2.9.3/scalding-core-assembly-0.10.0.jar:/tmp/maven/hadoop-core-1.1.2.jar:/tmp/maven/commons-codec-1.8.jar:/tmp/maven/commons-configuration-1.9.jar:/tmp/maven/jackson-asl-0.9.5.jar:/tmp/maven/jackson-mapper-asl-1.9.13.jar:/tmp/maven/commons-lang-2.6.jar:/tmp/maven/slf4j-log4j12-1.6.6.jar:/tmp/maven/log4j-1.2.15.jar:/tmp/maven/commons-httpclient-3.1.jar:/tmp/maven/commons-cli-1.2.jar:/tmp/maven/commons-logging-1.1.1.jar:/tmp/maven/zookeeper-3.3.4.jar -d /tmp/script-build tutorial/Tutorial0.scala
ssh: Could not resolve hostname my.host.here: Name or service not known
rsyncing 1.5K from job-jars/Tutorial0.jar to my.host.here in background...
Waiting for 2 background threads...
ssh: Could not resolve hostname my.host.here: Name or service not known
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
Could not rsync: /home/test/Cascading/scalding/scalding-core/target/scala-2.9.3/scalding-core-assembly-0.10.0.jar to my.host.here:scalding-core-assembly-0.10.0.jar
Could not rsync: /tmp/Tutorial0.jar to my.host.here:job-jars/Tutorial0.jar
UPDATE:
After changing the host in scald.rb, I get the following authentication problem:
$ scripts/scald.rb --hdfs tutorial/Tutorial0.scala
scripts/scald.rb:194: warning: already initialized constant SCALA_LIB_DIR
rsyncing 19.8M from scalding-core-assembly-0.10.0.jar to node7.test.net in background...
The authenticity of host 'node7.test.net (10.1.21.32)' can't be established.
RSA key fingerprint is fa:41:31:ab:b0:46:08:8f:2b:75:0a:18:24:f9:d5:ec.
Are you sure you want to continue connecting (yes/no)? The authenticity of host 'node7.test.net (10.1.21.32)' can't be established.
RSA key fingerprint is fa:41:31:ab:b0:46:08:8f:2b:75:0a:18:24:f9:d5:ec.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node7.test.net' (RSA) to the list of known hosts.
test@node7.test.net's password: Please type 'yes' or 'no':
Permission denied, please try again.
test@node7.test.net's password:
I enter the correct password, but the authentication error persists. How should I configure rsync?
You did change this
https://github.com/twitter/scalding/blob/develop/scripts/scald.rb#l27
right?
The default host is: my.host.here.
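Also, regarding the password prompts in your update: scald.rb drives rsync over ssh, so the usual fix is key-based authentication to the remote node (a sketch, assuming public-key login is permitted on node7.test.net):
ssh-keygen -t rsa                 # accept the defaults if you do not already have a key
ssh-copy-id test@node7.test.net   # after this, rsync/ssh should stop asking for a password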
I'm new to web development and I wanted to get started with some RoR (using Locomotive CMS).
One of the things Locomotive asks for is MongoDB. I installed it using Homebrew by following this link: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/
It installs fine, but then I'm not able to run it!
When I type mongo in the terminal, I get the following output:
"MongoDB shell version: 2.4.3
connecting to: test
Mon May 6 11:12:28.927
JavaScript execution failed:
Error: couldn't connect to server
127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
exception: connect failed"
BACKGROUND TO HELP DEBUGGING (in Terminal):
1. When I type mongod I get the following:
all output going to: /usr/local/var/log/mongodb/mongo.log
Ownership of mongo.log:
-rw-r--r-- 1 username admin 22133 May 6 11:13 mongo.log
2. When I run mongod --fork I get the following:
about to fork child process, waiting until server is ready for connections.
forked process: 77566
all output going to: /usr/local/var/log/mongodb/mongo.log
ERROR: child process failed, exited with error number 100
3. Typing mongod --help gives the following warning:
* WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
4. I have a folder called data (which acts as a MongoDB database; is this where it should be?) in root (PATH: /data). Ownership of the data folder:
drwxr-xr-x 3 username wheel 102 Apr 23 21:38 data
5. Checking whether the port is free with lsof -i :27017 gives no output. I've also tried to check for a running mongo process using Activity Monitor and found nothing.
6. I've also tried mongo --repair. It didn't help!
I've been stuck on this for a while. I've looked at most responses on Stack Overflow and searched around for a solution, but nothing has helped so far!
UPDATE:
When I tried to start the mongo shell, I was getting the following log message in mongo.log:
5/6/13 1:33:27.616 PM com.apple.launchd:
(org.mongodb.mongod[79133])
open("/private/var/log/mongodb/output.log", ...): Permission denied
So I did a chmod 777 on that folder and the shell launches!
Although I still get a warning when it launches:
Server has startup warnings:
Mon May  6 13:33:27.693 [initandlisten]
Mon May  6 13:33:27.693 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
Any idea how I can silence these warnings?
To get the information needed to determine the cause of the failure, you need to look at (and post for us) the output in /usr/local/var/log/mongodb/mongo.log from when it is trying to start.
However, the most common reason for the failure is the lack of the default database path - at /data/db. Either create that folder (and don't forget to make sure your user has permission to read/write to it) or specify a different path with the --dbpath option.
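For example, a minimal sketch for the default path (assuming a single-user development machine):
sudo mkdir -p /data/db
sudo chown $(whoami) /data/db
mongod                # or: mongod --dbpath /path/of/your/choice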
UPDATE: as you have since found, bad permissions on the log file can cause the issue, in a similar way to bad permissions on the data path.
In terms of the warning, the information you need is here:
https://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
It is just that though, a warning - you can run MongoDB without an issue with those limits as long as it is not under heavy load. So, if this is a development environment, unless you plan on load testing, you should be fine.
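If you do want to silence the warning, raising the open-file limit in the shell that starts mongod should do it (a sketch; 2048 is an arbitrary value above the suggested minimum, and it only applies to that shell session):
ulimit -n 2048
mongod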
When I try to push an update to Heroku in one of my PHP apps I get the following problem:
Counting objects: 25, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (13/13), 1.20 KiB, done.
Total 13 (delta 10), reused 0 (delta 0)
-----> Heroku receiving push
-----> Fetching custom buildpack... done
-----> PHP app detected
-----> Run Sitebase buildpack
-----> Bundling Apache version 2.2.22
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Exiting with failure status due to previous errors
! Heroku push rejected, failed to compile Php app
To git@heroku.com:x
! [remote rejected] feature-removeapi -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:x'
I've never had this problem before, so I totally don't have a clue what the problem can be.
Is it possible that this is a bug on Heroku's side?
If I look in the Heroku logs I also see the following line:
Slug compilation failed: failed to compile Php app
All help is welcome.
In the cases where I had this problem, it seemed to be an issue on Heroku's side. Just waiting 10 minutes or so did the trick for me.
After so many years, this issue still occurs.
By the way, my fix was to pin the buildpack to a specific version, as mentioned here:
heroku buildpacks:set https://github.com/heroku/heroku-buildpack-nodejs#v75 -a my-app
The same issue occurred for my Java application, which was built with Maven.
It got fixed by configuring the Java buildpack provided by Heroku (earlier I was using a custom buildpack which used to work on Heroku for the same application).
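For reference, switching an app back to the official buildpack is a one-liner (the app name is a placeholder, as above):
heroku buildpacks:set heroku/java -a my-app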