How do I use pulp for PureScript development *without* an internet connection? - purescript

This morning I wanted to try a quick example; however, I noticed I couldn't do a "pulp init" without an internet connection.
Is there any way I could cache the pulp packages somehow so that creating a project doesn't fundamentally require an internet connection?

Pulp uses Bower for package management, so you can use Bower's facilities for offline work, including the --offline flag, or pointing to local dependencies on disk.
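For example, a minimal offline workflow might look like this (run the first part once while you still have a connection; project and package names are illustrative):

    # While online: create a project once so Bower populates its local cache
    pulp init
    bower install

    # Later, offline: resolve dependencies from the cache only
    bower install --offline

Bower also accepts plain filesystem paths in bower.json's dependencies section (e.g. "purescript-prelude": "../mirror/purescript-prelude"), which avoids the network entirely.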

Related

Upgrading to LibMan from NuGet

We have a web app project that still uses NuGet for content package management (jQuery, Knockback, knockoutjs, etc.). We are trying to convert to LibMan, and we're running into an issue where some older packages do not exist (for instance, walltime-js). How do we work around this?
Try using a different provider. The current default, Cdnjs, is a curated catalog; the other two providers, JSDelivr and Unpkg, host any package that's available in NPM and thus have much broader catalogs.
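For example, in libman.json you can set a project-wide default provider and override it per library (the version number below is illustrative):

    {
      "version": "1.0",
      "defaultProvider": "cdnjs",
      "libraries": [
        {
          "provider": "unpkg",
          "library": "walltime-js@0.2.0",
          "destination": "wwwroot/lib/walltime-js"
        }
      ]
    }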

Keep cache between recompiles in Play Framework 2.2 / Scala

Is it possible to keep the cache 'loaded' between recompiles?
In auto-compile mode (play ~run), the app calls out to several external APIs to build the response. If I am just tweaking code, it is a pain to have to wait for the whole page to rebuild every time.
That's the nature of development mode. The server is restarted for every recompile, and the EhCachePlugin is reinitialized. In production, however, you shouldn't be using the EhCachePlugin anyway, as it is not designed for a distributed environment (each instance has its own local cache).
I use the Play2-Memcached plugin for my production servers, and after a lot of similar frustration I just decided to install memcached on my local machine and use that in development mode as well. I'm only kicking myself for not doing it sooner. It also comes with the added bonus of being able to flush the cache (flush_all) from the command line.
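If you go the same route, the local setup is small; a sketch for Ubuntu (the host/port is memcached's default, and the config key follows the Play2-Memcached docs; you also need the plugin registration described in its README):

    # Install and start memcached locally
    sudo apt-get install memcached

    # conf/application.conf: point the Play2-Memcached plugin at it
    #   memcached.host="127.0.0.1:11211"

    # Flush the entire cache from the command line when needed
    echo flush_all | nc 127.0.0.1 11211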

Introduction to Erlang/OTP production applications deployment

I would like to develop and deploy an Erlang/OTP application into production on a VPS.
I am pretty familiar with developing Erlang code on a local machine and my question is about deployment.
Basically, I would like to know what steps I should take in order to move Erlang code from a local machine to a production server and make it run, i.e. be available for users.
Note: I have read some documentation about Erlang and command line, Erlang code module, Erlang releases, but I am still not sure how to pursue the required task.
However, I guess that deploying Erlang-based software on a server is a bit more tricky than doing sudo tasksel for a LAMP stack.
I plan to have an Erlang/OTP application which has Mochiweb, CouchDB (couchbeam) and boss_db as dependencies.
So, my newbie questions about deploying all that stuff on a production server are the following:
I plan to use Ubuntu Server 12.04; is there any better choice for a Linux distro to use for Erlang/OTP in production?
How should all the code be organized? Should I put my application into a /home/myapp/ dir and then put all the dependencies into /home/myapp/deps? Or should I put all dependencies into /usr/local/lib/erlang/lib (as returned by code:get_path())? Should I update the dependencies regularly, or should I freeze them?
How do I make the whole application start once the server starts? Should it be some kind of bash script or anything else?
I know that Erlang allows hot code upgrades, but how should I organize that? On Rails I could update the code with git; does anything similar exist in the Erlang world?
There are two types of dependencies: Internal and External. If you want to do it the right way(tm), it takes a bit of time to get working:
External dependencies:
Taking the latter first: an external dependency is some other thing that has to run before your application can run, for instance a PostgreSQL database or a Riak cluster. For those, you usually just use the usual Ubuntu mechanisms to make them start up properly. I've had good experience using monit for these tasks:
http://mmonit.com/monit/
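A monit entry for an external dependency is only a few lines. A sketch for PostgreSQL might look like this (pidfile and init-script paths vary by distribution and version):

    check process postgresql with pidfile /var/run/postgresql/9.1-main.pid
      start program = "/etc/init.d/postgresql start"
      stop program  = "/etc/init.d/postgresql stop"
      if failed host 127.0.0.1 port 5432 then restart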
Internal Dependencies:
For internal dependencies, you need to arrange your program into applications inside the Erlang VM. These can depend on each other, just like the external dependencies; your main application may need a logger running before it starts, for instance. Then you create a release. A release copies the Erlang binaries and the necessary libraries/beams/applications into a release directory, forming a self-contained Erlang system. It contains a boot script that tells the runtime how to start the applications in the right order and keep them running. You can then tar up this release, copy it to the server, and start it. There are some basics covered here:
http://learnyousomeerlang.com/release-is-the-word
but do also read the chapters before it on applications. You can also get rebar to call reltool for you to build a release. This is what I usually do.
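With rebar2, the release step is mostly configuration. A minimal rel/reltool.config sketch, consumed by rebar generate (the application name and version are illustrative, and a real file usually also sets lib_dirs and an erts section):

    {sys, [
      {rel, "myapp", "1.0.0", [kernel, stdlib, sasl, myapp]},
      {boot_rel, "myapp"},
      {app, myapp, [{incl_cond, include}]}
    ]}.

rebar generate then assembles the self-contained system under rel/myapp, which you can tar up, copy to the server, and start with the generated start script.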
Hot upgrades:
Handling hot upgrades in production can be done in a couple of ways. You can copy the new .beam file to the machine, attach to the running system's shell, and call l(Module) to load it into the running system. This works for smaller fixes. For large, systematic upgrades you can do a release upgrade, which will upgrade the running system on the fly without stopping service. But if your system is mostly shared-nothing, it is usually not worth it. Instead, you can have multiple machines and upgrade them in sequence.
For instance, you can upgrade a machine and then use a system like HAProxy to send 2% of all requests to the new system. Then systematically turn up the request load weight.
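For the small-fix case, the whole dance can be as short as this (paths, node name, and cookie are illustrative):

    # Copy the recompiled module into the running release's ebin
    scp ebin/my_module.beam server:/opt/myapp/lib/myapp-1.0.0/ebin/

    # Attach a remote shell to the live node ...
    erl -sname fixer -remsh myapp@server -setcookie mycookie

    # ... and reload the module in the running system
    (myapp@server)1> l(my_module).
    {module,my_module}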
While @I GIVE CRAP ANSWERS gave a pretty thorough summary, I feel compelled to throw in the use of sync, which helps to automate the hot-recompiling and reloading of modules.
The simple way is to specify sync as a rebar dependency; then, when you're getting ready to deploy an upgrade, you can run sync:go() on the Erlang node. This starts the sync engine, which watches for filesystem changes. Then you can use git to push to your server. Sync will notice the files change, recompile them, and load the new beams automatically.
Then you can run sync:stop() right away to tell the system to stop watching for filesystem changes (it's generally not recommended to keep sync running on a live server, to prevent accidental recompiles if a source file changes unintentionally).
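Assuming a rebar2-style project, the pieces look roughly like this (the git URL points at the usual upstream; adjust to taste):

    %% rebar.config: pull in sync as a dependency
    {deps, [
      {sync, ".*", {git, "https://github.com/rustyio/sync.git", {branch, "master"}}}
    ]}.

    %% On the running node, around a deploy:
    1> sync:go().      % start watching the filesystem; recompile + reload on change
    %% ... git push to the server; sync picks the changes up ...
    2> sync:stop().    % stop watching once the upgrade is done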

Checkout from SVN to remote location with Eclipse

I need to set up Eclipse in a way that lets me connect to an SVN repository and check out projects or files to a remote location. The remote location is Linux-based; the clients run Windows.
I read a few threads, and it seems this works on the console with svn+ssh, but I am struggling badly to make this scenario work in Eclipse.
Any hints? I appreciate your help.
Philipp
Your question sounds to me like you are trying to solve something we don't know about yet. So I'll speculate here a little, and I will change my answer if the question indicates I was wrong.
(Part of your) development has to live on the server, so there are resources on the server which you have to use during development.
Possibly these resources are (only) necessary for testing (unit tests?) or for functional tests.
You have experience with Eclipse and want to use that.
So here are sketches of possible solutions that may work for you.
Using Eclipse on the server
You install an appropriate Eclipse distro on the Linux machine you have to develop on.
You install locally, e.g., Cygwin with the XWin packages, which allow you to start an X Window server locally.
You open up an xterm locally (just to get the display variable correct).
You start, from that xterm, the Eclipse installed on the Linux machine: ssh <user-id>@<ip-of-linux-server> <path to eclipse> -display $DISPLAY
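Alternatively, you can let ssh set up the X forwarding itself instead of passing -display explicitly (user, host, and Eclipse path are illustrative):

    ssh -Y alice@linux-server /opt/eclipse/eclipse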
Pros and cons
+ You work on the machine and have the display locally.
+ You are able to checkout directly on the machine, no need of a local copy.
- You are not able to work without the connection to the Linux machine.
Using Eclipse locally
There are two variants, and both are valuable:
Have the sources on the server (only)
Have the sources locally
Sources on server, Eclipse locally
The easiest way is to mount the file system of the server, so you have access to the sources locally through a different drive letter. Ask your system administrator how that could be accomplished.
Pros and cons
+ Everything works as normal.
+ You don't have to install Subversion on the server.
- Latency for the remote file system may be annoying.
- You are only able to work with network connection to the server.
Sources locally, Eclipse locally
That is the normal way to do it. Install Eclipse with Subversion plugin as usual, checkout from the repository, work locally (even disconnected), commit your changes.
You are then able to test by doing a checkout on the server, build the system there, and do your unit and integration tests there.
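The server-side test cycle is then just a checkout and a build; something like this (repository URL and build commands are illustrative):

    svn checkout svn+ssh://user@devserver/var/svn/myproject/trunk myproject
    cd myproject
    make && make test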
Pros and cons
+ Easier to install and maintain.
- No tests during development without a build process in between.
- Tests can only be done with committed code, not with changes that are not committed.
My recommendation
I like the Eclipse-on-the-server solution best: you use everything that is available on the server, and Eclipse under Linux is totally the same as under Windows. You don't have any steps in between for doing tests; everything is done locally (on the server).
See as well the following questions (and answers):
Is it possible to work on remote files in Eclipse?
PS: What I forgot: I think svn+ssh is just a different Subversion protocol for doing checkout, update, and commit. It is in no way different from using the protocols file://, svn://, http://, or even https://.

Is it possible to work on remote files in Eclipse?

I'm looking into using Eclipse as a dev environment for PHP projects, but it's pretty huge and I'm not sure where to look for answers. I want to be able to work on remote files from within the client - i.e., rather than using an FTP client to download copies from our remote development server, working on them locally, and then having to upload them to test, I want to be able to work directly on the remote files. I know many development environments allow this - my colleagues who work on Macs use Coda, which allows them to define sites and access all files via an explorer tree. I'm currently running Bluefish on Ubuntu, and it also allows this.
I've downloaded and installed Helios, but can't seem to find an obvious menu entry for handling remote files. Can anyone point me in the right direction?
Edited to add: we don't use version control at this point, so I'm not looking for any kind of Subversion tie-in.
The RSE (Remote System Explorer) may be what you're looking for. It's an implementation of the Eclipse File System framework which allows resources in your workspace to be backed in reality by remote resources.
Since you are working on Ubuntu, you can have a look over here
Perhaps this will be of some help
http://www.jcraft.com/eclipse-sftp/
I've never used it myself, but it seems to do what you're looking for: syncing and editing files over SFTP.
I usually develop using remote Eclipse. Use ssh -Y user@server to log in and try executing eclipse in that shell; it should open on your computer if you have X properly configured.
Of course, this Eclipse instance will have access to the remote server files.
This is more general than Eclipse: http://curlftpfs.sourceforge.net. I usually use the SSH/SFTP version (safer): http://fuse.sourceforge.net/sshfs.html
Both are based on FUSE (http://fuse.sourceforge.net/)
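For example, with sshfs (paths and host are illustrative):

    sudo apt-get install sshfs           # Ubuntu package for the FUSE SFTP filesystem
    mkdir -p ~/mnt/devserver
    sshfs user@devserver:/var/www/site ~/mnt/devserver
    # ... point Eclipse at ~/mnt/devserver and edit as if local ...
    fusermount -u ~/mnt/devserver        # unmount when finished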
Install the Eclipse plugin Remote System Explorer End-User Runtime.
See video at How to Edit codes and files remotely with Eclipse.
Also, check out how to configure the remote connection: Using Remote Connections.