Octopress FSSM issue with rake preview

Hey guys, I am currently running an Octopress blog and unfortunately I ran into an FSSM "dead" message. I tried the suggestions from "Octopress failed on running rake preview" and "Rake Preview not working in Octopress",
and many others, but I still get the following whenever I try to run rake preview.
$ rake preview
## Set the codepage to 65001 for windows machines
Starting to watch source with Jekyll and Compass. Starting Rack on port 4000
Configuration from C:/Users/Welcome/Documents/GitHub/octopress/_config.yml
>>> Change detected at 10:11:01 to: screen.scss
Auto-regenerating enabled: source -> public
[2014-05-07 10:11:01] regeneration: 93 files changed
Thin web server (v1.6.2 codename Doc Brown)
Maximum connections set to 1024
Listening on 0.0.0.0:4000, CTRL+C to stop
identical public/stylesheets/screen.css
Dear developers making use of FSSM in your projects,
FSSM is essentially dead at this point. Further development will
be taking place in the new shared guard/listen project. Please
let us know if you need help transitioning! ^_^b
- Travis Tilley
Help would be appreciated
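For what it's worth, the log shows Thin listening on port 4000 despite the FSSM notice, so it is worth checking whether the preview is actually being served (a quick check, assuming the default port from the log above):
curl -I http://localhost:4000
# an HTTP/1.1 200 OK here would mean the preview works and the FSSM message is only a deprecation notice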

Related

How come Netlify CMS on localhost is not working?

I’m from Taiwan and not good at English, but I’ll try my best to explain my question.
I use Netlify CMS to build my Hugo website.
Everything runs well, but whenever I want to change the CMS's configuration or style and check the result, I have to push my repository to GitHub first, which bothers me a lot.
So I tried to use local_backend: true in admin/config.yml, then used this command: npx netlify-cms-proxy-server.
Then the terminal window showed:
info: Netlify CMS File System Proxy Server configured with C:\Users\june.wu\Documents\GitHub\house-blog-hugo-cms
info: Netlify CMS Proxy Server listening on port 8081
It looked like it ran successfully on localhost:8081.
But when I typed localhost:8081 in the browser, it showed: Cannot GET /
How can I solve this problem?
I can confirm what was said in the previous answer. Run npx netlify-cms-proxy-server and go to localhost:8080/admin, or also try localhost:8080/admin/ (with the ending slash). It should work if you are using the default 8080 port.
Ignore the fact that the CMS proxy says it is on port 8081 in the script log.
For anyone looking at this later and still finding it unclear: as the docs say, you need to go to your regular development server for the SSG (static site generator) you are using, once you have started both the Netlify CMS proxy and the SSG dev server. See point 5 in the beta features docs here.
You are not supposed to visit the Netlify CMS proxy's default port 8081 to use the CMS locally.
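With Hugo (the asker's SSG), the local round trip looks roughly like this (a sketch; Hugo's default dev port 1313 is assumed):
# terminal 1: start the Hugo dev server (serves on http://localhost:1313 by default)
hugo server
# terminal 2: start the Netlify CMS proxy (it listens on 8081, but you never browse to it directly)
npx netlify-cms-proxy-server
# then open the CMS through the SSG dev server, not the proxy:
# http://localhost:1313/admin/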
I have also seen the behaviour, mentioned by a few people, that it only works with a / at the end of the path. So, for example, if you are using Astro as your SSG (which uses port :3000 for its dev server), you should try both:
http://localhost:3000/admin/
http://localhost:3000/admin
Although I have still been having issues with the Netlify CMS dev server with Astro...

Heroku App Only Working On Local Machine

I have something really odd going on with Heroku.
I have an application built in React/JS/Node with Mongo.
If I pull up the link to my app on my local machine, https://obscure-crag-61417.herokuapp.com/, I can see a version of my website, but it does not update for any changes that I push to Heroku.
Even stranger, on a non-local machine, if I visit the aforementioned link, I get the boilerplate 'Express' page.
I've tried clearing the cache and exiting the browsers on both PCs, but it's the same old story.
I have the MongoDB config set in Heroku.
Not sure what could be going on here.
Any ideas?
PS--here's my code: https://github.com/pythoncreate/twit-stocks
Okay, I figured this one out. I'm pretty sure it was how I was setting the ports on the backend. Heroku has some specific rules about this:
Heroku + node.js error (Web process failed to bind to $PORT within 60 seconds of launch)
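For reference, the usual shape of the fix is to bind to the port Heroku assigns at runtime instead of a hard-coded one (a minimal sketch assuming an Express backend; the names are illustrative):
// Heroku injects the port via the PORT environment variable;
// binding to a fixed port makes the dyno fail its bind check.
const express = require('express');
const app = express();

const port = process.env.PORT || 3000; // fall back to 3000 for local development
app.listen(port, () => console.log(`Listening on ${port}`));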

Divolte-collector with MAPR, Storm, Kafka and Cassandra

I am not sure if I can get help for this on here, but I thought it was worth a try.
I have a 3-node cluster on AWS running MapR M3, and I installed Storm, Kafka, Divolte Collector and Cassandra. I would like to try some of the clickstream examples, and I am running into an issue with the tcp-consumer example. Being quite new to Java and distributed processing, I also have some clarification questions. Again, I am not quite sure where to post this, because I feel this is Divolte Collector specific, and I also have some gaps in my understanding of the Javadoc concept and of building and running jar files; but I figured someone could point me to some resources or help with some clarifications. I can't get the JSON string to appear in the console running the netcat socket listening for clicks:
Divolte tcp-kafka-consumer example
Everything works until the netcat part (step 7), and my knowledge gap is with step 6.
Step 1: install and configure Divolte Collector
The install works, and the hello-world click collection is promising :-)
Step 2: download, unpack and run Kafka
# In one terminal session
cd kafka_2.10-0.8.1.1/bin
./zookeeper-server-start.sh ../config/zookeeper.properties
# Leave Zookeeper running and in another terminal session, do:
cd kafka_2.10-0.8.1.1/bin
./kafka-server-start.sh ../config/server.properties
No errors, plus I tested the Kafka examples, so that seems to be working as well.
Step 3: start Divolte Collector
Go into the bin directory of your installation and run:
cd divolte-collector-0.2/bin
./divolte-collector
Step 3 went off without a hitch; I can load the default divolte-collector test page.
Step 4: host your Javadoc files
Set up an HTTP server that serves the Javadoc files that you generated or downloaded for the examples. If you have Python installed, you can use this:
cd <your-javadoc-directory>
python -m SimpleHTTPServer
OK, so I can reach the Javadoc pages.
Step 5: listen on TCP port 1234
nc -kl 1234
Note: when using netcat (nc) as TCP server, make sure that you configure the Kafka consumer to use only 1 thread, because nc won't handle multiple incoming connections.
I tested netcat by opening the port and sending messages, so I figured I don't have any port issues on AWS.
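That self-test is just two terminals (a sketch; port 1234 is the one from the example, and the EC2 hostname is elided):
# terminal 1: listen on 1234 and keep accepting connections
nc -kl 1234
# terminal 2 (possibly from another machine): push a test line through the port
echo "test" | nc <ec2-host> 1234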
Step 6: run the example
cd divolte-examples/tcp-kafka-consumer
mvn clean package
java -jar target/tcp-kafka-consumer-*-jar-with-dependencies.jar
Note: for this to work, you need to have the avro-schema project installed into your local Maven repository.
I installed the avro-schema project with mvn clean install in the avro project that comes with the examples, as per the instructions here.
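For anyone following along, that install step is simply (the directory name is assumed from the examples layout):
cd divolte-examples/avro-schema
mvn clean install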
Step 7: click around and check that you see events being flushed to the console where you run netcat
When you click around the Javadoc pages, your console should show events in JSON format similar to the sample in the instructions.
I don't see the clicks in my netcat window :(
Investigating the issue, I viewed the console and network tabs in the Chrome developer tools; it seems Divolte is running, but I am not sure how to dig further. This is the console view (below). Any ideas or pointers?
Thanks anyway
Initializing Divolte.
divolte.js:140 Divolte base URL detected http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8290/
divolte.js:280 Divolte party/session/pageview identifiers ["0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh"]
divolte.js:307 Module initialized. Object {partyId: "0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", sessionId: "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", pageViewId: "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh", isNewPartyId: false, isFirstInSession: false…}
divolte.js:21 Signalling event: pageView 0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh0
allclasses-frame.html:9 GET http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8000/resources/fonts/dejavu.css
overview-summary.html:200 GET http://localhost:8290/divolte.js net::ERR_CONNECTION_REFUSED
(Intro: I work on Divolte Collector)
It seems that you are running the example on an AWS instance somewhere. If you are using the pre-packaged Javadoc files that come with the examples, they have the Divolte location hard-coded as http://localhost:8290/divolte.js. So if you are running somewhere other than localhost, you should create your own Javadoc for the example, using the correct hostname for the Divolte Collector server.
You can do so using this command. Be sure to run it from the directory where your source tree is rooted. And of course, change localhost to the hostname where you are running the collector.
javadoc -d YOUR_OUTPUT_DIRECTORY \
-bottom '<script src="//localhost:8290/divolte.js" defer async></script>' \
-subpackages .
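After regenerating, serve the new tree the same way as in step 4:
cd YOUR_OUTPUT_DIRECTORY
python -m SimpleHTTPServer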
As an alternative, you could also just try to run the examples locally first (possibly in a virtual machine, if you are on a Windows machine).
It doesn't seem there is anything MapR-specific about the issue you are seeing so far. The Kafka-based examples and pipeline should work in any environment that has the required components installed. This doesn't touch MapR-FS or anything else MapR specific. Writing to the distributed filesystem is another story.
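If you want to rule the consumer side in or out, you can also watch the topic directly (a sketch, assuming Divolte's default topic name divolte and the Kafka 0.8 console consumer from step 2):
cd kafka_2.10-0.8.1.1/bin
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic divolte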
We don't compile Divolte Collector against MapR Hadoop currently, but incidentally I have given it a run on the MapR sandbox VM. When installing from the RPM distribution, create a /etc/divolte/divolte-env.sh with the following env var setting:
HADOOP_CONF_DIR=/usr/share/divolte/lib/guava-18.0.jar:/usr/share/divolte/lib/avro-1.7.7.jar:$(hadoop classpath)
Obviously this is a bit of a hack to get around classpath peculiarities and we hope to provide a distribution compiled against MapR that works out of the box in the future.
Also, you need Java 8 to run Divolte. If you install this from the Oracle RPM, add the proper JAVA_HOME to divolte-env.sh as well, e.g.:
JAVA_HOME=/usr/java/jdk1.8.0_31
With these settings I'm able to run the server, collect Avro files on MapR-FS, create an external Hive table on those files, and run a query.
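For completeness, the Hive side of that is nothing exotic (a sketch only; the HDFS paths and schema location are assumptions, not taken from an actual setup):
-- external Hive table over the Avro files Divolte writes, using the Avro SerDe
CREATE EXTERNAL TABLE divolte_clicks
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/divolte/published'
TBLPROPERTIES ('avro.schema.url'='hdfs:///divolte/DefaultEventRecord.avsc');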

fluxbox couldn't connect to XServer - CentOS 6.4

I'm setting up some new VNC servers. I already have this setup working with CentOS 6.3, although I'm not certain that this difference is the real problem.
One of the window managers I'm making available is fluxbox, but when I start it, I always get the following: Error: Couldn't connect to XServer. Here's my setup:
fluxbox: fluxbox-1.1.1-5.el6.x86_64
vnc : tigervnc-server-1.1.0-5.el6_4.1.x86_64
OS : CentOS 6.4
Note that I can start other window managers: Gnome, KDE, openbox, xfce4, etc.
I gutted my ~/.vnc/xstartup script so it only loads an xterm. Then, I tried running startfluxbox &, but still got the error. Obviously, VNC is working, since my xterm opened up OK. I can start firefox, another xterm or other app requiring X, and even fluxbox comes up, but it is worthless in its current state, since it is not connected to the X session.
What is fluxbox looking for? Are there some log files I can look at to give me some clues?
Thanks,
David
CentOS/RHEL 6.4 and up have upgraded libX11 and Xorg.
The $DISPLAY var handling has changed in libX11.
This one in particular is described in this git commit:
http://cgit.freedesktop.org/xorg/lib/libX11/commit/?id=f92e754297ec5fdb81068b56a4435026666224fa
We run our fluxbox with this line in our VNC configs now:
/usr/bin/fluxbox -display "$DISPLAY.0"
OK, I think I've figured out the problem, so I'm answering my own question.
In VNC, I usually specify a display number. (Note, however, that the problem occurs even if vncserver uses the first available display number.) So, I start the vncserver as:
vncserver :17
This should create an X session where my $DISPLAY is set to :17.0, but in CentOS 6.4, the $DISPLAY is set to :17 instead. Apparently, unlike other window managers, fluxbox is unable to handle this inaccuracy. The problem, then, was that fluxbox was trying to connect to :17 and was unable to do so.
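You can see the difference directly inside the VNC session (a quick check; display number 17 is the one from my example):
echo $DISPLAY
# CentOS 6.3 prints :17.0, and fluxbox connects fine
# CentOS 6.4 prints :17, and fluxbox fails with "Couldn't connect to XServer"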
My solution, as suggested by someone answering a different problem, was to set $DISPLAY as part of the invocation of fluxbox. So, in my ~/.vnc/xstartup file, I have:
DISPLAY=$DISPLAY.0 startfluxbox &
Note that this may not work for other releases of CentOS, so you might wish to test the release of the box you are using before adding the DISPLAY=... setting to the command.

Using MPI with two RaspberryPi

I am trying to make a 'dual core' Raspberry Pi for a project I am working on. I followed this tutorial by Simon Cox. Unfortunately, I could not get the two Raspberry Pis to talk to each other. (This was using Hydra as the process manager.)
After looking more carefully at the MPICH installer's guide, which can be found here, I tried to use -phrase to pass the passphrase I had created. However, I could not find it among the Hydra commands. So I re-installed with smpd and, after many compiling attempts, configured with:
./configure --prefix=/home/pi/mpich-install --with-pm=smpd --with-pmi=smpd
I also had to install libssl-dev to get the MD5 support that smpd requires. I also exported the path that the mpiexec and mpicc commands are in. After setting the passphrase, I copied the image to a second SD card and put it in a second Raspberry Pi. I then set up the passphrase using ssh-keygen.
I was able to run the cpi program on the master Pi and the slave Pi individually, but when I tried to run multiple processes on both at the same time, I got the error:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(392).................:
MPID_Init(139)........................: channel initialization failed
MPIDI_CH3_Init(38)....................:
MPID_nem_init(196)....................:
MPIDI_CH3I_Seg_commit(366)............:
MPIU_SHMW_Hnd_deserialize(324)........:
MPIU_SHMW_Seg_open(863)...............:
MPIU_SHMW_Seg_create_attach_templ(637): open failed - No such file or directory
Can someone please suggest how I can either fix this problem or get the RaspberryPis to communicate using MPICH?
Thanks
E.Lee
If anyone else has this problem, make sure your hosts don't have the same name!
You can change it by following this tutorial http://raspi.tv/2012/how-to-change-the-name-of-your-raspberry-pi-new-hostname
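For reference, the rename itself is small (a sketch for Raspbian; it assumes the default hostname raspberrypi and is run on the second node):
# give the second Pi a distinct hostname, then reboot for it to take effect
sudo sed -i 's/raspberrypi/raspberrypi2/' /etc/hostname /etc/hosts
sudo reboot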