I'm new to Sinatra and just got a development server up and running, but rackup is using WEBrick instead of Thin, even though the Thin gem is already installed. This has to be a simple configuration tweak, but I don't know where. Also, while you're at it: does Thin auto-refresh when I change the source code? It appears that I have to stop and restart WEBrick whenever I make source code changes.
EDIT
As suggested, thin start works with a tweak for my setup. By itself it throws the error "start_tcp_server": no acceptor (RuntimeError), which means another service is already running on that port. To resolve the issue, I simply run thin start -p 9292. Hope this helps someone else.
I believe you'll likely just want to start up thin via something like:
bundle exec rackup -s thin
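If you start the app directly with ruby app.rb instead of rackup, Sinatra's :server setting can also express the preference; a minimal sketch, assuming a classic-style app in a file named app.rb:
# app.rb - classic-style Sinatra app that prefers Thin
require 'sinatra'

set :server, 'thin'   # honoured by `ruby app.rb`; with rackup, pass -s thin instead

get '/' do
  'Hello from Thin'
end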
If you're on OSX you might want to check out Pow for your development environment.
For reloading files between requests: How to get Sinatra to auto-reload the file after each change?
You can start the server with Thin using just $ thin start.
If you want code reloading, use one of the several reloading libraries in the wild: Shotgun (which forks and exits for every request and does not work on Windows), Rack::Reloader (a Rack middleware), or Sinatra::Reloader. I personally favor Sinatra::Reloader, since it only reloads files that have changed and is therefore faster. It also lets you list additional files that should be reloaded, as well as files that must not be reloaded.
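A minimal sketch of that setup, assuming the sinatra-contrib gem (which provides sinatra/reloader) and made-up file paths:
# requires gem 'sinatra-contrib' in the Gemfile
require 'sinatra'
require 'sinatra/reloader' if development?

also_reload 'lib/*.rb'          # extra files to watch for changes
dont_reload 'lib/constants.rb'  # files that must not be reloaded

get '/' do
  'hello'
end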
Related
I always work on Windows for web app development, and sometimes I need to set up a simple web server. I always use Python's SimpleHTTPServer module, which is nice:
python -m SimpleHTTPServer
and of course, the default port is 8000.
But I've found one problem: sometimes, after I cancel or stop SimpleHTTPServer, localhost:8000 still works and the page can still be loaded.
What is the reason for this issue, and how can I clean it up?
Regards
Edit
I found one solution: clear it from the DevTools of the Chrome browser. It just needs a simple setting.
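If the cause ever turns out to be a leftover process still bound to the port rather than the browser cache, a quick check on Windows looks roughly like this (the PID is whatever netstat reports):
netstat -ano | findstr :8000
taskkill /PID <pid> /F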
As a Vagrant user, when trying Docker I noticed one significant difference between the development workflow with Vagrant and with Docker: with Docker I need to rebuild my image from scratch every time, even if I only made minor changes to the code.
This is a major problem for me, because the image rebuilding process is often very redundant and time consuming.
Perhaps some smart Docker workflows have already been invented; if so, what are they?
I filed a feature request for the vagrant-cachier plugin to save docker build data and attached a bash workaround for that process. If you're okay with hacking around it yourself, you can implement the scripts in Vagrant.
caching docker build data with vagrant
Note that this procedure requires the vagrant-cachier plugin to be installed and has to save and load 300MB+ files from disk if they are new to the machine. It's therefore really slow if your Dockerfiles have just 1-5 lines of code, but fast if your Dockerfiles have a lot of LOCs or pull images that have to be downloaded from the net.
Also note that this approach saves every intermediate build step. So if you build an image, change a line in the middle of the Dockerfile, and build again, the docker build process will reuse all cached intermediate containers up to the changed line.
Using base images is still the preferred way, but you can combine both procedures.
Feel free to post improvements and subscribe, so that fgrehm might implement this natively in his plugin.
As Mark O'Connor suggested, one tip is to build a base image for your container(s). This image should contain the dependencies, package installation, downloads, and any other time-consuming activity, and it should need rebuilding less frequently than the other image(s). Similarly, if the final state of each step of your Dockerfile doesn't change, Docker doesn't build that layer again. So try to run the commands that change this state on almost every run (e.g. apt-get update) as late as you can, so Docker doesn't have to rebuild the preceding steps, and prefer editing your Dockerfiles in the later steps rather than the earlier ones.
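A hypothetical Dockerfile laid out along those lines (the base image, package names, and paths are only for illustration):
FROM ubuntu:14.04
# Rarely-changing, expensive steps first so their layers stay cached
RUN apt-get update && apt-get install -y python python-pip
# The dependency manifest changes less often than the source itself
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
# The source code changes most often, so it is copied last
COPY . /app/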
Another option, if you compile/download something inside the container, is to download or compile it in a host folder instead and attach it to the container using the -v or --volume option of docker run.
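For example (paths and image name are hypothetical):
docker run -v /home/me/myapp/build:/opt/myapp/build myimage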
Finally, there are other approaches to this issue, such as the one used by Chef with knife container. In this approach you build the container using Chef cookbooks, and each time you rebuild it (because you have edited your cookbooks...) the changes are applied as a new Docker layer (AUFS layer), so you don't have to repeat the whole process. I wouldn't recommend this solution unless you have experience with Chef and already have cookbooks to manage your software. You would have to work harder to get it running, and if you want Chef only to manage Docker containers I don't think it's worth it (although Chef is a great option for managing infrastructure).
To automate the build process when you have several images that depend on each other, you can use a bash script to help with that task (credits to smola#github):
#!/bin/bash
# Images to build, in dependency order; override via the IMAGES environment variable
IMAGES="${IMAGES:-stratio/base:test stratio/mesos:test stratio/spark-mesos:test stratio/ingestion:test}"
LATEST_TAG="${LATEST_TAG:-test}"

for image in $IMAGES ; do
  # Split user/name:tag into its parts
  USER=${image/\/*/}
  aux=${image/*\//}
  NAME=${aux/:*/}
  TAG=${aux/*:/}
  # Each image is expected to live in a NAME/TAG directory containing its Dockerfile
  DIR=${NAME}/${TAG}
  pushd $DIR
  docker build --tag=${USER}/${NAME}:${TAG} .
  # Images built with LATEST_TAG are also tagged as latest
  if [[ $TAG = $LATEST_TAG ]] ; then
    docker tag ${USER}/${NAME}:${TAG} ${USER}/${NAME}:latest
  fi
  popd
done
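Assuming the script is saved as build-images.sh (the name is arbitrary) and each image lives in a NAME/TAG directory next to it, it can run with its defaults or be overridden per invocation:
IMAGES="myuser/base:test myuser/api:test" LATEST_TAG=test ./build-images.sh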
There are a couple of tricks that might improve your workflow (very web-focused).
Docker caching
Always make sure you add your source to your Docker image at the very end of the Dockerfile.
Example:
COPY data/package.json /data/
RUN cd /data && npm install
COPY data/ /data
This makes sure you get optimal caching when building the image, and that Docker doesn't have to rebuild the npm packages when you change your source.
Also, make sure you don't have a base image that adds folders/files that change often (like base images doing COPY . /data/).
fig mount
Use fig (or another tool), and mount your source directory when developing. This way, you can develop with instant changes and still use the current version of your code when building the image.
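A rough fig.yml sketch of that idea (the service name, paths, and port are assumptions):
web:
  build: .
  volumes:
    - ./src:/data        # host source mounted over the copy baked into the image
  ports:
    - "8080:8080"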
development server
You can start your development web server while developing, and nginx when not (if you are developing a www app, but the same idea applies to other apps).
For example, in your startup script, do something like:
if [[ $DEBUG ]]; then
  /usr/bin/supervisorctl start gulp
else
  /usr/bin/supervisorctl start nginx
fi
And have autostart=false in your supervisord.conf files.
auto-refresh app
If you are developing a web app, use tools like gulp and e.g. gulp-connect; if you are developing a Python/Django app, use the runserver utility. Both reload the server when they detect changes in the files.
If you are using the if [[ $DEBUG ]] ... trick, make them listen on the same port as your normal instance (nginx). That way you can have one configuration for your reverse proxy, i.e. just send the traffic to, for example, www:8080, and it will hit your web page both in production and while you are developing.
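For instance, the reverse-proxy side could stay as simple as this nginx sketch (the www host name and port 8080 come from the example above; everything else is assumed):
server {
    listen 80;
    location / {
        proxy_pass http://www:8080;   # same upstream whether gulp or nginx answers inside the container
    }
}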
Create a base image that holds the bulk of your application's dependencies. This will significantly reduce your docker build times.
I'm trying to mimic some basic functionality of the Todos example. After reading spinejs.com and many articles, and after a few attempts that never got off the ground, I need to ask for some help here. I wish this were more clear-cut, and I'd like to help others as well. I'm on Windows 7 and I'm using spine.app to create my app, controllers, and models; I'm also using jQuery.tmpl.
I'm using CoffeeScript, but I'm pretty new to it.
I'm not really sure where I need to use require (if at all). I'm using a module.exports = ... statement in all my models and controllers,
so index.coffee should be able to find them, I assume.
Maybe this is not the case: I see that even though controllers/contacts uses a module.exports statement, the index still uses a require.
Is index.coffee just particular about visibility?
I see Contacts uses Contact without any require statement.
I've seen the main App controller be instantiated from CoffeeScript, as in Todos,
or from the jQuery() script in the HTML, as in Contacts.
I'm assuming you should either
- build the whole thing and include application.js, OR
- use the jQuery() function to create your App via JavaScript.
If this does compile, will it end up in public/application.js?
I'm getting a nasty parse error,
and yes, I'm aware you have to consistently use spaces (no tabs).
With that out of the way, I'm getting hung up on the first require line:
require('lib/setup')
Am I going to need some Cygwin stuff? I can get it if it helps.
I've already looked at the Google Groups, guillaume86's comments, the contrib, and the CoffeeScript IRC channel.
I'm not sure which (date) version of hem I have,
but I did try the minify: false option and a few other things to try to debug this.
The good news: I'm pretty stubborn and will get this to work, if I can get a little help here.
More to come, but I'm going to close at this point.
Thanks in advance for your suggestions.
I don't think this will help the OP too much, but thought I'd write this up to help anyone else who is looking to get started with these awesome tools.
Before you go further: I've rewritten this with updates at How to manage client-side JavaScript dependencies?
Here's a basic list for getting set up with a Spine, hem, and CoffeeScript app. I only develop on Linux, so I'm not sure whether some of these steps would have problems on Windows, namely the npm commands. It should work fine on Mac; I know others who use the same toolchain.
Install NPM: curl http://npmjs.org/install.sh | sh on a *nix system. I'll assume it's available from the command line.
npm install -g spine.app will make spine available as a global command
spine app folder will make a Spine project called app in folder, generating the right directory structure and a bunch of skeleton files to get started.
cd to folder and edit dependencies.json for the libraries you need. Add them to slug.json so that hem knows where to find them as well. You can install hem globally (npm install -g hem) or locally, as in the next step.
npm install . to download all the dependencies you just entered in, including hem.
If you take a look at the default Spine config, there is an app/lib/setup.coffee where you require all the libraries you need from your dependencies. Examples:
# Spine.app had these as dependencies by default
require('json2ify')
require('es5-shimify')
require('jqueryify')
require('spine')
require('spine/lib/local')
require('spine/lib/ajax')
require('spine/lib/manager')
require('spine/lib/route')
# d3 was installed via dependencies.json
require 'd3/d3.v2'
In index.coffee, you just require lib/setup and load the main controller for your app. In addition, you need to require any other classes in those other controllers.
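A rough sketch of what index.coffee ends up looking like (the controller name and path are assumptions based on the generated skeleton):
require('lib/setup')

App = require('controllers/app')   # whatever module exports your main controller

jQuery ->
  new App(el: $('body'))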
The default generated index.html will usually be fine for loading your app, but modify as necessary.
From folder, run node_modules/hem/bin/hem server to start a hem server, and navigate to localhost:9294 to see your app. If you installed hem globally (npm install -g hem), then hem server by itself may work, but sometimes it gets confused about the path.
Build the rest of your app using proper MVC techniques, and use stylus for CSS and eco for views.
One more thing: normally, hem server will update automatically as you update your code and save files, which makes it a cinch to debug. Running hem build will compile your app into two files: application.js, which is minified, and application.css. If you run hem server after this, it will use those files and no longer update automatically. So don't run hem build until you actually need a minified version of your app for deployment.
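In short:
hem server   # development: recompiles on the fly, served at localhost:9294
hem build    # deployment: writes the minified application.js plus application.css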
See this other thread about that: Spine.js & hem getting started
Windows is supported (there were concerns in the past, but they have been resolved). There is actually a branch of hem that is being more actively developed, since the original branch is no longer maintained by its developer. You can check out the version0_2 or version0_3 branches, which have been getting updates and may eventually get Windows support.
HTH.
So I am attempting to modify an application written by another programmer. The program is written in Perl and apparently uses the Catalyst framework, neither of which I have any experience with.
The code is well documented and my modifications seem pretty straightforward; however, when I try to change something (in the controllers, to be specific), the changes seem to take no effect. Am I missing a step? I open the file, edit it, save it, and try to load the web app in my browser. I've even deleted the entire contents of one of the controllers to see if it would break the application, and it did not.
Please Help.
Thanks,
Ken
If the application was set up in a sane way (using uri_for(_action) in templates and not specifically relying on the server/env/etc.), you should be developing with the dev server. There are some practices that can make this difficult or impossible without modifications. This is all you should have to do:
cd {APPLICATION DIRECTORY}
# Read about it-
perldoc script/*_server.pl
# Run it-
script/*_server.pl -r -d
Unless there is something wonky in the setup, you’ll get http://localhost:3000/ running with your app.
Or, what is probably a good idea, run the application as the web user from your Apache setup. If there are files or access expected to belong to that user, it can matter (e.g., if session or cache files are used and restricted to that user):
sudo -u www script/*_server.pl -r -d
The flags turn on debugging output and the restarter so that every time you change files in the application, the server will restart automatically (if it compiles).
Catalyst is a joy to develop with and the dev server is part of why.
I have a small Sinatra app I'm running on a shared hosting account using Passenger. However, the first time the app is accessed after a while, I get a Passenger error page saying the application could not be started, usually because Sinatra could not be found.
I'm assuming this is just the normal delay when a new instance is spawned. However, is there a way to delay loading Sinatra until Passenger has fully loaded?
I seem to have solved the issue by setting the GEMS_PATH environment variable in the .htaccess file. I haven't encountered the error again. YET!
I took this up with Dreamhost support recently (not a great experience) and eventually they recommended freezing the gems into the application. This is at best a partial solution, because it seems to work for some gems and not for others.
Instead of
require 'sinatra'
I have:
require 'vendor/gems/sinatra-0.9.4/lib/sinatra'
Freezing gems is covered elsewhere, but briefly: to prepare for this, one needs to do
mkdir vendor/gems
cd vendor/gems
gem unpack sinatra
As a result of this change, I never get the startup failure screen naming sinatra as the file it can't load. However, I still get it for some other gems that require themselves or parts of other gems. I'm not too clear about the details, but I'm working on the idea of hacking the installed gems so that every "require" uses a path directly out of my "vendor" library.
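A sketch of one way to do that without hacking each gem: prepend every unpacked gem's lib directory to Ruby's load path before any requires run (assuming the vendor/gems layout above, e.g. at the top of config.ru):
# add each vendored gem's lib directory to the load path
Dir.glob(File.expand_path('vendor/gems/*/lib', File.dirname(__FILE__))).each do |lib|
  $LOAD_PATH.unshift(lib)
end

require 'sinatra'   # now resolves from vendor/gems/sinatra-0.9.4/lib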
I think you may need to add Gem.clear_paths! in there
I had a similar problem a long time ago. Updating to a newer Sinatra gem helped me - what version are you running?