Can I open a lein repl connection, or cider-jack-in in Emacs, without a network connection?
The computer that needs lein repl is behind a network that blocks some IPs, so it cannot reach the (lein?) server, and it cannot use a VPN to get around this either.
So is there a way to start lein repl without a network connection?
Thanks
You can tell lein not to try to do things requiring an internet connection with the -o flag:
lein -o repl
You need, of course, to make sure the dependencies are available before you do this. And you should most definitely always run your production stuff in this mode if you run it from lein, because fetching dependencies as your service starts in production is crazy (and I've been burned by this more than once).
Lein will, by default, try to go online to do things like checking for new snapshot dependencies (you should not use these).
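To make sure the dependencies are available, you can prime the local repository while you still have connectivity and then work offline afterwards:
lein deps      # while online: download all dependencies into ~/.m2
lein -o repl   # later: start the REPL fully offline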
I want to develop a new feature or change an existing program of the FreeBSD distribution, specifically in user space¹. To do so, I need to make changes to the FreeBSD code base and then compile and test them.²
Doing so on the tree in /usr/src and installing the result on the system seems like a bad idea, given that it requires you to run your development machine on CURRENT and to develop with root privileges, and it hoses your system if you make a mistake. I suppose there must be a better way, and possibly a standard setup that FreeBSD developers use.³
What is the recommended workflow to develop the FreeBSD code base?
¹ so considerations specific to kernel development aren't terribly important
² I'm familiar with the process to submit changes after I have developed them
³ I have previously read both the development handbook and the FreeBSD handbook chapter on building the source but neither seem to recommend a specific process.
I am a src committer.
I often start with the lowest release that I intend to backport to (e.g., RELENG_11_3).
I would then do (before or after making changes):
make buildworld
then deploy to a jail directory:
make DESTDIR=/usr/jails/test installworld
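On a brand-new jail directory you will typically also need the distribution target, which populates /etc and friends (standard procedure for any fresh installworld destination):
make DESTDIR=/usr/jails/test distribution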
This jail directory, as the first responder hinted, can be used with bhyve, but I find it easier to configure a jail or even just use chroot.
I like to configure my jails in /etc/rc.conf instead of /etc/jail.conf:
Example /etc/rc.conf contents:
jail_enable="YES"
jail_list="test"
jail_test_rootdir="/usr/jails/test"
jail_test_hostname="test"
jail_test_devfs_enable="YES"
I can provide more in-depth examples, ones where the jail has a private networking stack so you can SSH into it, for example, but I don't get the sense that a networking stack is important to your testing from the posted question.
You can see the running jail with "jls" and enter it with "jexec test bash".
Inside the jail you can test your changes.
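A typical session then looks roughly like this (assuming the jail is named test, as above):
service jail start test   # start the jail defined in rc.conf
jls                       # confirm it is running
jexec test /bin/sh        # get a shell inside it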
When doing this kind of sandboxing, jails work so long as the /usr/src that you built/installed to the jail is from a release that is:
Older than the host OS, or
In the same STABLE branch as the host OS, or
At the very least binary-compatible with the host OS
Situations 1 and 2 are pretty safe, while situation 3 (e.g., running a newer /usr/src than the host OS) can get dodgy: consider trying to run /usr/src head (13.0-CURRENT) on a 12.0-RELEASE-pX host, where the KBI, KPI, and API can all differ between kernel and userland (with jails, each jail runs under the host's kernel).
If you find that you have to run the newest sources against an older host OS, then bhyve is definitely the solution. You would take that jail directory and, instead of running a jail with that root directory, run a bhyve instance with the jail directory as its root. I don't use bhyve that often, so I can't recall whether you first have to deposit the contents inside a disk image and point bhyve at that image -- others and/or Google would know.
I'm a ports committer, not a src one, but AFAIK running CURRENT is a common practice amongst developers.
Another way to work is to set up a CURRENT VM, share its filesystem over NFS, mount it from the host, and install into it by running make install DESTDIR=/mnt/current. You can use bhyve for the virtualization, by the way.
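A sketch of that setup (the VM address and mount point here are made up; it assumes the VM exports its root file system over NFS):
# on the host
mount_nfs 192.168.56.10:/ /mnt/current
cd /usr/src && make install DESTDIR=/mnt/current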
I've been working on a small set of command line programs in Scala. While developing I used SBT, and tested the programs with run within the console. At that point the programs had a fast startup time when re-run after the initial compilation; nearly instant, even with additional dependencies.
Now that I'm trying to actually use them on my system outside of SBT, startup has noticeable lag. I'm looking for ways to reduce this, since the nature of these utilities requires little to no delay.
The best speeds I've achieved so far have been through Drip. I include all dependencies in a lib directory using Pack, and then run the program by executing a shell script like this:
#!/bin/sh
SCRIPT=$(readlink -f "$0")
SCRIPT_PATH=$(dirname "$SCRIPT")
PROG_HOME=$(cd "$SCRIPT_PATH/.." && pwd)
CLASSPATH_SUFFIX=""
# Path separator used in EXTRA_CLASSPATH
PSEP=":"
# Add the lib directory to the classpath; TagWorkspace is the main class
exec drip \
  -cp "${PROG_HOME}/lib/*${CLASSPATH_SUFFIX}" \
  TagWorkspace "$@"
This is still noticeably slower than invoking run from within SBT.
I'm curious why SBT is able to start the application so much faster, and whether there is some way for me to leverage its strategy, or SBT itself, even if that means keeping a long-lived process around to actually run commands through.
Unless you have forking turned on for your run task, this is likely due to VM startup time. When you run from inside an active SBT session, you have an already initialized VM pointing at your classes - all SBT needs to do is create a new ClassLoader and point it at your build output directory. This bypasses all of the other (not insignificant) stuff that happens when you fire up a new VM.
Have you tried using the client VM to start your utility from the command line? Sadly, this isn't an option with 64-bit Java, since Oracle apparently doesn't want to support it, but if you're using a 32-bit VM, try adding the -client argument to the list that you give the VM from the command line.
If you are using a 64-bit VM, some googling will find you some unofficial forks of OpenJDK that have the client VM re-enabled. It's really just a #define in the JVM build itself - it works fine once it's been compiled in.
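If you go that route, the launcher above only changes in its exec line; roughly (reusing the variables from the script):
exec java -client \
  -cp "${PROG_HOME}/lib/*${CLASSPATH_SUFFIX}" \
  TagWorkspace "$@"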
The only slowness I have is launching SBT itself. Running a hello-world Scala app with java (no Drip) version 1.8 on a 7381-bogomips CPU takes only 0.2 seconds.
If you're not in that magnitude, I suspect your application startup requires loading thousands of classes and creating instances of them.
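One quick way to check which side of that line you're on (the classpath here is illustrative):
time java -version                   # baseline JVM startup cost
time java -cp "lib/*" TagWorkspace   # app startup; the difference is mostly class loading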
So I've been playing with Akka Actors for a while now, and have written some code that can distribute computation across several machines in a cluster. Before I run the "main" code, I need to have an ActorSystem waiting on each machine I will be deploying over, and I usually do this via a Python script that SSH's into all the machines and starts the process by doing something like cd /into/the/proper/folder/ and then sbt 'run-main ActorSystemCode'.
I run this Python script on one of the machines (call it "Machine X"), so I will see the output of SSH'ing into all the other machines in my Machine X SSH session. Whenever I do run the script, it seems all the machines are re-compiling the entire code before actually running it, making me sit there for a few minutes before anything useful is done.
My question is this:
Why do they need to re-compile at all? The same JVM is available on all machines, so shouldn't it just run immediately?
How do I get around this problem of making each machine compile its own copy?
sbt is a build tool, not an application runner. Use sbt-assembly to build an all-in-one jar, put the jar on each machine, and run it with the scala or java command.
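That workflow looks roughly like this (the jar name and paths are illustrative, and sbt-assembly must first be added to project/plugins.sbt):
sbt assembly                                       # build one jar containing all dependencies
scp target/scala-2.12/app-assembly-0.1.jar node1:/opt/app/
ssh node1 java -cp /opt/app/app-assembly-0.1.jar ActorSystemCode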
It's usual for a cluster to have a single partition mounted on every node (via NFS or Samba). You just need to copy the artifact onto that partition and it will be directly accessible on each node. If that's not the case, you should ask your sysadmin to set one up.
Then you will need to launch the application. Again, most clusters come with MPI. The tools mpirun (or mpiexec) are not restricted to real MPI applications and will launch any script you want on several nodes.
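For example (host names are made up, and exact flags vary between MPI implementations):
mpirun -np 4 -host node1,node2,node3,node4 \
    java -jar /shared/app-assembly.jar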
Is there a way (a plugin) in Eclipse to run a (console) application over ssh (after first synchronizing with something like rsync, of course) and display the results in the standard console?
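What I have in mind is essentially automating something like this (a rough sketch; the paths are made up):
rsync -az --exclude .git ./ user@server:~/project/   # push sources to the server
ssh user@server 'cd ~/project && ./run.sh'           # run there; output appears locally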
Try following:
best-sftp-plugin-eclipse
eclipse-sftp
eclipse-cvsssh2
eclipsesshconsole
Also take a look at: are-there-any-good-ssh-consoles-for-eclipse
Hope this helps.
You could also run Eclipse remotely.
It should be fairly easy: use ssh -Y server_ip to connect to it, execute eclipse & and it will have direct access to the filesystem.
Of course, this solution is only practical if you have a decent network connection to the remote PC since GUI processing will be handled by the other machine (not yours).
Anyway, someone asked a similar question here. Take a look at the answer.
I have a server that I can ssh into but that's it. Otherwise it is totally closed off from the outside world. I'm trying to deploy some scripts I wrote to it but they have several Perl dependencies (Params::Validate, XML::Simple, etc.) I can't get them via yum since our satellite server doesn't have those packages and the normal CPAN install won't work either since the host is so restricted. Moving the module sources over and compiling is extremely tedious. I've been doing that for over a day, trying to resolve dependencies, with no end in sight. Is there any way around this? Any help is greatly appreciated.
If you can, set up a parallel system as close as possible (as far as architecture, and perl version) to your closed system, and install all your dependencies there into a separate lib directory using local::lib. Then you can simply scp over that directory to your closed system, and again using local::lib (and setting up some environment variables), your code will be able to utilize that directory.
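A sketch of that workflow, using cpanm (mentioned in another answer below) to fill the directory; the paths are made up, and the module names are the ones from the question:
# on the build machine (same architecture and perl version as the closed host)
cpanm -L ~/deps Params::Validate XML::Simple
scp -r ~/deps user@closedhost:
# on the closed host, point perl at the copied directory
export PERL5LIB=$HOME/deps/lib/perl5
perl your_script.pl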
See this; it explains multiple methods that you can use to get CPAN modules into production.
Have you tried cpan minus? If not, here's how to get it.
curl -L http://cpanmin.us | perl - App::cpanminus
You can use it with local::lib. :-D
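For example (~/perl5 is local::lib's default location):
cpanm --local-lib ~/perl5 Params::Validate       # install into ~/perl5
eval "$(perl -I ~/perl5/lib/perl5 -Mlocal::lib)" # set PERL5LIB, PATH, and friends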
Chromatic has a great post on how to get a newer (and even multiple) version(s) of Perl onto a restricted system.
If you can change your hosting provider, this would be a good time to switch ;-) (I personally think Linode rocks!).
Assuming that is not the case, you can try the option of setting up a parallel system, as @Ether suggested.
On the other hand, if the modules you are using and their dependencies are pure Perl modules, you should be able to use PAR::Packer to package your script and its dependencies and scp a single file over to the host.
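Roughly (pp ships with PAR::Packer; the file names are made up):
pp -o myscript.bin myscript.pl    # bundle the script and its pure-Perl dependencies
scp myscript.bin user@closedhost: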
I use SSH tunneling to tunnel from the remote server back to a local proxy server. That way you can install whatever modules you need.
Just set the http_proxy variable to the local port that is remote-forwarded from your local machine (if that makes sense).
i.e.
ssh user@remote -R 3128:proxy_ip:3128 (tunnelling to a Squid setup)
then on the remote server in cpan
o conf http_proxy=http://localhost:3128
o conf commit