I'm trying to mimic some basic functionality of the Todos example. After reading spinejs.com and many articles, and after a few attempts that didn't get off the ground, I need to ask for some help here. I wish this were more clear-cut, and I'd like to help others as well. I'm on Windows 7 and I'm using spine.app to create my app, controllers, and models; I'm also using jQuery.tmpl.
I'm using CoffeeScript, but I'm pretty new to it.
I'm not really sure where I need to use require (if at all) - I'm using a module.exports = ... statement on all my models and controllers,
so index.coffee should be able to find them, I assume.
Maybe this is not the case - I see that even though controllers/contacts used a module.exports statement, the index still used a require.
Is index.coffee just particular about visibility?
I see Contacts uses Contact without any require statement.
I've seen the main App controller instantiated from CoffeeScript, as in Todos, or in a jQuery() call in a script tag in the HTML, as in Contacts.
I'm assuming you should either:
- build the whole thing and include application.js, OR
- use the jQuery() function to create your App via JavaScript.
If this does compile, will it end up in public/application.js?
I'm getting a nasty parse error, and yes, I'm aware you consistently have to use spaces (no tabs).
That being out of the way, I'm getting hung up on the first require line:
require('lib/setup')
Am I going to need some Cygwin stuff? I can get it if it helps.
I've also been through the Google Groups, guillaume86's comments, contrib, and the CoffeeScript IRC channel.
I'm not sure which version of hem I have, but I did try the minify: false option and a few other things to try to debug this.
The good news: I'm pretty stubborn and will get this to work, if I can get a little help here.
More to come, but I'm going to close at this point.
Thanks in advance for your suggestions.
I don't think this will help the OP too much, but thought I'd write this up to help anyone else who is looking to get started with these awesome tools.
Before you go further: I've rewritten this with updates at How to manage client-side JavaScript dependencies?
Here's a basic list for getting set up with a Spine, hem, and CoffeeScript app. I only develop on Linux, so I'm not sure if some of these steps would have problems on Windows, namely the npm commands. It should work fine on Mac; I know others who use the same toolchain.
Install NPM: curl http://npmjs.org/install.sh | sh on a *nix system. I'll assume it's available from the command line.
npm install -g spine.app will make spine available as a global command
spine app folder will make a Spine project called app in folder, generating the right directory structure and a bunch of skeleton files to get started.
cd to folder and edit dependencies.json for the libraries you need. Add them to slug.json so that hem knows where to find them as well. You can install hem globally (npm install -g hem) or locally, as in the next step.
npm install . to download all the dependencies you just entered in, including hem.
If you take a look at the default Spine config, there is an app/lib/setup.coffee where you require all the libraries you need from your dependencies. Examples:
# Spine.app had these as dependencies by default
require('json2ify')
require('es5-shimify')
require('jqueryify')
require('spine')
require('spine/lib/local')
require('spine/lib/ajax')
require('spine/lib/manager')
require('spine/lib/route')
# d3 was installed via dependencies.json
require 'd3/d3.v2'
In index.coffee, you just require lib/setup and load the main controller for your app. Any other classes need to be required from within the controllers that use them.
The default generated index.html will usually be fine for loading your app, but modify as necessary.
From folder, run node_modules/hem/bin/hem server to start a hem server, and navigate to localhost:9294 to see your app. If you installed hem globally (npm install -g hem), then hem server by itself may work, but sometimes it gets confused about the path.
Build the rest of your app using proper MVC techniques, and use stylus for CSS and eco for views.
One more thing: normally, hem server will update automatically as you update your code and save files, which makes it a cinch to debug. Running hem build will compile your app into two files: application.js (minified) and application.css. If you run hem server after this, it will use those files and no longer update automatically. So don't hem build until you actually need a minified version of your app for deployment.
See this other thread about that: Spine.js & hem getting started
Windows is supported (there were concerns in the past, but they have been resolved). There is actually a branch of hem that is being more actively developed, since the original branch is no longer maintained by its developer. You can check out the version0_2 or version0_3 branches, which have been getting updates and may eventually get full Windows support.
HTH.
This was originally a GitHub issue in the Dart-Code repository.
1. Context
I've been working on a package that has hundreds of tests, so an easy way of visualizing code coverage would be incredibly handy.
I would like to run my tests with, say, a .vscode configuration with an lcov.info output which would automatically be recognized by VS Code and highlighted on the respective editors with either red or green.
2. What I've Already Tried
I've tried many different solutions in the past few days — months actually — but none of them worked as the ideal one described above:
flutter test --coverage --coverage-path=lcov.info does work to generate the necessary file, but it's clunky to have to visualize it through a 3rd party program such as genhtml, all the more if you're on Windows.
And it does need Flutter in the end, which should not be necessary if you're working on pure Dart...
IntelliJ would supposedly work ideally, but I just can't seem to enable the Run with Coverage button on mine, even after installing the test_coverage package.
Though one person on Gitter told me he has it working on his IntelliJ.
Both the coverage and the test_coverage packages offer something close to what I described above, but their solutions are way clunkier — and on Windows they are tough to set up...
codecov.io is an alternative with a 3rd party, but it's annoying to have to handle this externally when the editor offers a much more flexible and faster experience.
And there is also the problem of ambiguous coverage, which is not clear with respect to codecov.io. For example, if one folder tests stuff that indirectly calls another folder, does that count as coverage for the indirectly called folder as well? That's almost always undesirable.
3. Other Resources
There's this old question on StackOverflow that was helpful initially.
You can take the genhtml.perl script here.
If you have Git for Windows installed on your machine, you already have Perl installed; it should be at <git-install-dir>\usr\bin\perl.exe
Replace backslash characters (\) with slash characters (/) in all file path lines (prefixed with SF:) in the lcov.info file (a small script for this is sketched after these steps).
Run the genhtml.perl script. For example, assuming the current working directory is the root directory of your project:
<git-install-dir>\usr\bin\perl.exe C:\Scripts\genhtml.perl -o .\coverage\html .\coverage\lcov.info
Note: it may also be useful to add the --prefix option.
As a result of these actions, you should get a generated HTML report in the .\coverage\html directory. Open the .\coverage\html\index.html file in your browser to see the report.
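If you'd rather not edit lcov.info by hand, a small script can do the path rewriting for you. This is just a sketch of the idea (the coverage/lcov.info location and script name are assumptions based on the steps above):
# fix_lcov_paths.py - hypothetical helper (not part of the original steps):
# rewrite backslashes to forward slashes in the SF: path lines of lcov.info.
from pathlib import Path

lcov = Path("coverage/lcov.info")  # adjust if your file lives elsewhere
lines = lcov.read_text().splitlines()

fixed = [
    "SF:" + line[3:].replace("\\", "/") if line.startswith("SF:") else line
    for line in lines
]

lcov.write_text("\n".join(fixed) + "\n")
print("Rewrote", sum(1 for line in lines if line.startswith("SF:")), "SF: lines")
Run it once after generating the coverage file and before invoking genhtml.perl.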
I hope this helps — at least, it worked for me.
I have an Ionic app running with the basics of Ionic, and I run it in the browser with ionic serve, but I want some new stuff: running it through the grunt serve command, which also gives me JSLint support. I'm already using SCSS. I see that https://github.com/diegonetto/generator-ionic/ has everything I want; how do I install that in my project?
Take into account that my project is almost done; about 85% is already complete.
Is this the part I need to follow?
Upgrading
Make sure you've committed (or backed up) your local changes and install the latest version of the generator via npm install -g generator-ionic, then go ahead and re-run yo ionic inside your project's directory.
The handsome devil is smart enough to figure out what files he is attempting to overwrite and prompts you to choose how you would like to proceed. Select Y for overwriting your Gruntfile.js and bower.json to stay up-to-date with the latest workflow goodies and front-end packages.
Will this bring some complications? Is there something else I need to know?
I use the same generator and enjoy using it. With that said, I would not recommend starting to use a generator until you've made a complete backup of your project.
Even then, I'd recommend creating a brand new project using the generator then migrating your existing code into the newly generated project. While migrating, you should be modifying your code to match the generator conventions as you go. This gives you the most control and will make sure that you learn the conventions of the new project structure. Upgrading instructions are really meant for people who already use the generator and are just upgrading to a new version of the generator. They are not applicable to you.
So I am attempting to modify an application written by another programmer. The program is written in Perl and apparently uses the Catalyst framework, neither of which I have any experience with.
The code is well documented and my modifications seem pretty straightforward; however, when I try to change something (in the controllers, to be specific), the changes seem to take no effect. Am I missing a step? I open the file, edit it, save it, and try to load the web app in my browser. I've even deleted the entire contents of one of the controllers to see if it would break the application, and it did not.
Please Help.
Thanks,
Ken
If the application was set up in a sane way (using uri_for(_action) in templates and not specifically relying on the server/env/etc.) you should be developing with the dev server. There are some practices that can make this difficult or impossible without modifications. This is all you should have to do:
cd {APPLICATION DIRECTORY}
# Read about it-
perldoc script/*_server.pl
# Run it-
script/*_server.pl -r -d
Unless there is something wonky in the setup, you’ll get http://localhost:3000/ running with your app.
Or, what is probably a good idea, run the application as the web user in your Apache setup. If there are files or access expected to be for that user, it might be important (e.g., if session or cache files are used and restricted to that user):
sudo -u www script/*_server.pl -r -d
The flags turn on debugging output and the restarter so that every time you change files in the application, the server will restart automatically (if it compiles).
Catalyst is a joy to develop with and the dev server is part of why.
I have been trying out web2py for a couple of days now and I've decided it's a keeper. But there is one thing that concerns me a lot and that might be a showstopper in the end: I need a nice development environment and setup I can trust and be productive with. Coming from the MS Visual Studio world, I'm looking for something with good autocomplete/IntelliSense plus functions for versioning and deployment.
I made some attempts to edit my code in Eclipse, but it needs additional setup to run with autocomplete, and I don't know if debugging is possible. (I noticed there was a Django project template in Eclipse, which is a bit tempting, I must say.)
Wing IDE has instructions on how to get web2py up and running, and I'm about to test that one. Not free, but very cheap compared to much in the Windows world.
I also want a good versioning (hg) setup, and preferably a semi-automatic FTP-deployment-method.
What IDE do web2py developers use, and what does your setup look like?
A complete setup script for a project in a good IDE would be awesome! (Just like the installation is, one click to get it running 100%).
PyCharm looks good; perhaps they can add web2py support: http://youtrack.jetbrains.net/issue/PY-1648
Thanks a lot!
OS: Windows 7/Windows XP
IDE: NetBeans
Version control: TortoiseHg/NetBeans
Debugger: winpdb
Shell: IPython
Publish: WinSCP/PuTTY/TortoiseHg
Scripts
Once I create a new project in web2py I add a few scripts to my main app folder:
web2py\applications\myapp\DebugWinpdb.bat:
C:\Python25\Scripts\winpdb.bat ..\..\web2py.py -i 127.0.0.1 -p 8000 -a mypassword
web2py\applications\myapp\DebugShell.bat:
C:\Python25\Scripts\winpdb.bat ..\..\web2py.py -S myapp -M
web2py\applications\myapp\Shell.bat:
python ..\..\web2py.py -S myapp -M
IDE
As others have stated you need to do some extra stuff to get autocomplete/intellisense for web2py no matter what IDE you use.
For me NetBeans was a good compromise between does-everything-if-only-you-can-figure-out-how (Eclipse/PyDev) and does-the-basics-but-few-extras (PyScripter).
NetBeans Setup (Project Properties):
Python Category
Python Platform: Python 2.x (default is Jython)
Run Category
Main Module: web2py.py
Application Arguments: -i 127.0.0.1 -p 8000 -a mypassword
NetBeans Pros:
Tight Mercurial integration
Highlights which lines have been added, changed, or deleted in your source file as you edit it
Selective rollback of individual changes you've made since your last commit
One of the nicest visual diff viewers I have used
Python PEP8 style hints (fully customizable)
Name "foo" is not a valid class name according to your code style (CapitalizedWords)
Name "Bar" is not a valid function name according to your code style (lowercase_with_underscores)
Auto-format hotkey (corrects spacing around operators, etc)
Navigation within source file
semantically indexes current source file
organizes alphabetically by type (Class, method, attribute, etc)
makes even enormous style sheets manageable
NetBeans Cons:
Integrated Debugger doesn't work with web2py (that one really hurts)
Long startup time (but acceptably snappy for me once up and running)
Version Control
I use and highly recommend Mercurial for source control. As mentioned earlier, NetBeans has great support for Mercurial but there are some things I'd just rather do in TortoiseHg.
TortoiseHg Pros:
Shell overlay icons
Repository Explorer
view repos history with graphical display of branching/merging
one stop shop for Incoming, Outgoing, Push, Pull, Update, etc with button for Commit tool
Commit tool
Hunk Selection: cherry pick changes from within a file for more focused Commits
Add, Remove, Diff, Revert, Move, Forget
TortoiseHg Cons:
No easy way to drop directly into a command line
Bug that regularly prevents files from being removed during commit (it waits indefinitely for a lock to be released; running hg addremove from the command line is a reliable workaround)
Publishing
I use a combination of WinSCP (for browsing), PuTTY (for terminal commands), and TortoiseHg (for push/pull of my repos) to work with my shared hosting account on Webfaction.
The first thing I do is set up public/private key encryption. If you are having trouble getting this set up between Windows and Linux, try these instructions from Andre Molnar. Short story is you need to generate your private key using ssh-keygen on Linux, copy it down to your Windows machine using WinSCP, then convert it for use with WinSCP and PuTTY.
Then add the following lines to your global mercurial.ini file:
[ui]
ssh = "C:\Program Files\TortoiseHg\TortoisePlink.exe" -ssh -2 -i "c:\path\to\your\privatekey.ppk"
Even if you have to connect to multiple servers, you need only copy your public key to each of the different servers. You'll also want to let WinSCP and PuTTY know where your private key is located, but those are fairly easy to figure out.
Try the new web2py admin interface in trunk. It has a web-based Mercurial interface and a Google deployment interface.
In web2py you can edit applications/admin/models/0.py and set
TEXT_EDITOR = 'amy'
And you will get the web-based Amy editor with autocompletion. It is not the default because it does not work with some browsers and because the autocompletion is not as good as Eclipse's. It may work for you.
You can use web2py with Eclipse but you need a minor workaround to let Eclipse know about the web2py environment. It is explained here.
I know other users have used other IDEs with web2py, for example Wing IDE and PyCharm. I suggest you ask on the web2py mailing list, where people are very helpful.
I'm pretty sure that the 'one-click setup script' to do all that you are looking for does not exist (at the moment). But don't be put off - you can achieve a nice development environment to suit your needs and there are lots of choices.
Although I develop on Windows, I like the setup I have as it's more of a 'Unixy' way of thinking whereby I have a number of tools each doing a specific task. Once you get a workflow setup you can be very productive - although I realise it may look a bit confusing initially coming from a Visual Studio world.
Below I outline what I've settled on. I'm sure others will have their own recommendations. Pick the options you like best.
(There should be hyperlinks to useful software below but I don't have enough reputation to include more than 1 link...)
For developing on Windows I'm enjoying using PyScripter. It's free, fast (compared to Aptana/Eclipse/NetBeans etc.) and has some nice features (a dark theme, an integrated Python console, and a code explorer, to name a few).
To get code completion / intellisense to work for web2py you need to add some code to your model / controller files because of the way web2py works. There are some instructions in this discussion topic on the web2py group.
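For reference, the general shape of that trick (this is my own sketch of the idea, not the exact snippet from that discussion) is a block of imports and dummy assignments that never runs, so the IDE can resolve the names web2py injects at runtime:
# IDE-only hints for web2py's injected globals; this block never executes.
if False:
    from gluon import *             # DAL, Field, SQLFORM, redirect, ...
    from gluon.html import *        # HTML helpers: DIV, A, FORM, ...
    from gluon.validators import *  # IS_NOT_EMPTY, IS_IN_DB, ...
    # Rough stand-ins for the objects web2py provides to models/controllers:
    request = response = session = cache = T = None
    db = DAL()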
web2py has a great error ticketing system built in (see the web2py book chapter 3). For more comprehensive debugging, pydb seems to be the way to go. Just put the code below as a breakpoint:
import pydb
pydb.debugger()
and it'll take you to the debugger.
I use TortoiseHg for Mercurial integration and it works wonderfully. Combine that with winscp and you can deploy easily.
Caveats: I work in OS X, and do most of my coding in BBEdit.
That said, I've used both Wing and Komodo IDE for web2py debugging, and they've both worked quite well for me. I haven't tried NetBeans in a while now; when I did the Python support seemed a little rough around the edges. And I've never had the time or patience to come up to speed with Eclipse; it's filed in my mental file cabinet with Emacs, no doubt unjustly to Eclipse and/or Emacs.
(And I'll echo mdipierro's recommendation to try the web2py mailing list; it's really indispensable--one of web2py's strongest points.)
Have you considered using fewer tools? Both Python and web2py don't require a whole lot of code to get a lot accomplished. web2py only adds 10 or 15 new function calls (besides the HTML helpers and validators). You might find that Eclipse and other IDEs actually get in the way. Setting up new apps in web2py is simple through the admin system. Since the new app scaffolding copies the welcome app, you can customize new app setup by editing the welcome app. With Mercurial (or Git, Subversion or Bazaar) you can set up a server on your machine or with one of the public sites and either push or pull updates to your production server. Keep it simple, I say.
We are using the web2py framework for all our web application needs, and this is our setup:
OS - Ubuntu up-to-date
IDE - Aptana Studio 3.0 with pyDev
Version Control - git
Python 2.7
Browser for developing phase : Chrome
I've found the Wing IDE debugger to be very useful. It's a powerful debugger across the board, and also can be configured to do remote debugging, which is really important when you are running web2py on a no-GUI remote machine (e.g. at Amazon Web Services).
I know there are posts that ask how one stores third-party libraries in source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / lib / JAR ... so that it's ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically these frameworks/middleware need to generate their own code before your application can link to it.
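To make that extra step concrete, here is a rough sketch of what a checked-in pre-build script could do; the tools are the real generators mentioned above, but the paths and file names are invented for illustration:
# prebuild.py - illustrative only: run middleware code generators before the
# regular compile step. Paths and file names are made up for this example.
import subprocess

# Qt: moc turns headers containing Q_OBJECT classes into extra C++ sources.
subprocess.check_call(["moc", "src/MyWidget.h", "-o", "generated/moc_MyWidget.cpp"])

# ZeroC Ice: slice2cpp turns a .ice interface into C++ stubs and skeletons.
subprocess.check_call(["slice2cpp", "--output-dir", "generated", "slice/MyService.ice"])
The catch, of course, is that moc and slice2cpp must already be installed and on the PATH before such a script can run after a fresh checkout, which is exactly the setup problem I'm asking about.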
From the point of view of the developer, ideally he wants to just check out, and everything should be ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd party libraries in source control. I have a branch for all the libraries.
2) Back up the entire toolchain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMware/VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. 'CORBA_COMPILER_HOME' not set, please set and try again).
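As a rough illustration of the "fail with a clear indication" idea (CORBA_COMPILER_HOME is taken from my example above; the script itself is just a sketch):
# check_env.py - fail fast with a clear message if build prerequisites are
# missing. Add whatever variables and tool checks your build actually needs.
import os
import sys

REQUIRED_VARS = ["CORBA_COMPILER_HOME"]

missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    for name in missing:
        print("ERROR: '%s' not set, please set it and try again." % name)
    sys.exit(1)

print("All build prerequisites found.")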
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that is applicable in the general case; how would you feel about that type of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
An NAnt script which is started with a bat file: the developer would only have to execute one .bat file, the bat file could start NAnt, and the NAnt script could be made to do anything you need.
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages of this approach for us are:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change middleware code and recompile it frequently. But this issue could also be solved via a Continuous Integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff, if so, you need a different IDE or a system that generates/modifies IDE files for you. (See CMake for a cross-platform c/c++ project file generator.)
I've written a system (first in Ant/BeanShell at two different places, then rewritten in Python at my current job) where third-party libraries are compiled separately (by someone), stored, and shared via HTTP.
Somewhat hurried description follows:
On startup, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and Visual Studio is launched with /useenv.
Each module's file checks for the stuff it needs; anything that requires installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it isn't there, the cmd file fails with a nice error message. This way, you can also check that the correct version is in there.
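To give an idea of the shape of such a setup step, here is a heavily stripped-down sketch; the URL, package name and directory layout are invented, and our real system differs in the details:
# setup.py for one module - illustrative sketch only. Fetch the pinned
# third-party zip, cache and unzip it locally, then extend the environment
# before Visual Studio is launched with /useenv.
import os
import urllib.request
import zipfile

WORK = os.environ["WORK"]                      # the %work% root described below
PACKAGE = "boost-1.37.zip"                     # version pinned by this revision
URL = "http://build.example.com/third-party/" + PACKAGE

cache_path = os.path.join(WORK, "_cache", PACKAGE)
unzip_dir = os.path.join(WORK, "_unzip", PACKAGE[:-len(".zip")])

# Download once; every working copy on this machine reuses the cached zip.
if not os.path.exists(cache_path):
    os.makedirs(os.path.dirname(cache_path), exist_ok=True)
    urllib.request.urlretrieve(URL, cache_path)

# Unzip once as well.
if not os.path.isdir(unzip_dir):
    with zipfile.ZipFile(cache_path) as archive:
        archive.extractall(unzip_dir)

# Make the headers visible to the /useenv Visual Studio session.
os.environ["INCLUDE"] = unzip_dir + os.pathsep + os.environ.get("INCLUDE", "")
The per-working-copy stores and the _local fallback described below are left out to keep the sketch short.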
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzip, and for each working copy's temp files
%work%/_cache <- downloaded zips (2 gb)
%work%/_local <- local zips (for development, or retrieved by other means while travelling)
%work%/_unzip <- unzips of files in _cache (10 gb)
%work%/_content <- textures/3d models and other big files (synchronized manually; this is 5 gb today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using bat files from d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 gb of packed data, which is unzipped to 10 gb (pdb files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or anything. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads and unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential to test new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.