What is the recommended workflow and environment for working on the FreeBSD code base?

I want to develop a new feature or change an existing program of the FreeBSD distribution, specifically in user space¹. To do so, I need to make changes to the FreeBSD code base and then compile and test them.²
Doing so on the tree in /usr/src and installing the result on the system seems like a bad idea: it requires running your development machine on CURRENT, developing with root privileges, and it hoses your system if you make a mistake. I suppose there must be a better way, and possibly a standard setup that FreeBSD developers use.³
What is the recommended workflow to develop the FreeBSD code base?
¹ so considerations specific to kernel development aren't terribly important
² I'm familiar with the process to submit changes after I have developed them
³ I have previously read both the development handbook and the FreeBSD handbook chapter on building the source but neither seem to recommend a specific process.

I am a src committer.
I often start with the lowest release that I intend to back-port to (e.g., RELENG_11_3).
I would then do (before or after making changes):
make buildworld
then deploy to a jail directory:
make DESTDIR=/usr/jails/test installworld
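In practice a couple of extra steps are usually needed around installworld; a rough sketch, keeping the same jail path as above (the distribution target populates /etc and friends inside the new tree):
mkdir -p /usr/jails/test                      # the target tree has to exist
make DESTDIR=/usr/jails/test installworld
make DESTDIR=/usr/jails/test distribution     # populate /etc, /var, etc. inside the jail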
This jail directory, as another answer hinted, can be used with bhyve, but I find it easier to configure a jail or even just use chroot.
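For the plain chroot route, a minimal sketch (mounting devfs is an assumption on my part, but many programs expect /dev to be populated):
mount -t devfs devfs /usr/jails/test/dev      # optional, but lots of tools want /dev
chroot /usr/jails/test /bin/sh                # run your rebuilt binaries inside the tree
umount /usr/jails/test/dev                    # when done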
I like to configure my jails in /etc/rc.conf instead of /etc/jail.conf:
Example /etc/rc.conf contents:
jail_enable="YES"
jail_list="test"
jail_test_rootdir="/usr/jails/test"
jail_test_hostname="test"
jail_test_devfs_enable="YES"
I can provide more in-depth examples, ones where the jail has a private networking stack so you can SSH into it, for example, but I don't get the sense that a networking stack is important to your testing from the posted question.
You can see the running jail with "jls" and you can enter the running jail with "jexec test bash".
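Putting those pieces together, the sequence looks roughly like this (a sketch; bash is only available if it is installed inside the jail, otherwise use /bin/sh):
service jail start test     # start the jail defined in /etc/rc.conf
jls                         # confirm it is running
jexec test /bin/sh          # get a shell inside it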
Inside the jail you can test your changes.
When doing this kind of sandboxing, jails work so long as the /usr/src that you built/installed to the jail is from a release that is:
Older than the guest OS, or
In the same STABLE branch as the guest OS, or
At the very least binary-compatible with the guest OS
Situations 1 and 2 are pretty safe, while situation 3 (e.g., running a newer /usr/src than the guest OS) can get dodgy. For example, trying to run a /usr/src head (13.0-CURRENT) userland on a 12.0-RELEASE-pX guest OS can break, because the KBI, KPI, and API can all differ between kernel and userland (with jails, each jail runs under the guest OS's kernel).
If you find that you have to run the newest sources against an older guest OS, then bhyve is definitely the solution. You would take that jail directory and instead of running a jail with that root directory, run a bhyve instance with the jail directory as its root. I don't use bhyve that often, so I can't recall if you first have to deposit the contents inside a disk image and point bhyve at the disk image first -- others and/or Google would know the answer to that.
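One possible path (a sketch only; the answer above doesn't confirm these exact steps): install a kernel into the tree as well, pack the tree into a UFS image with makefs, and boot that with the example vmrun.sh wrapper. You will likely also need an /etc/fstab inside the image pointing the root filesystem at the virtio disk.
make DESTDIR=/usr/jails/test installkernel            # the VM needs a kernel, unlike the jail
echo '/dev/vtbd0 / ufs rw 1 1' > /usr/jails/test/etc/fstab
makefs -t ffs -s 8g test.img /usr/jails/test          # build a UFS image from the tree
sh /usr/share/examples/bhyve/vmrun.sh -c 2 -m 2048M -d test.img testvm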

I'm a ports committer, not a src one, but AFAIK running CURRENT is a common practice amongst developers.
Another way to work is to set up a CURRENT VM, share it over NFS, mount it from the host, and install into it by running make install DESTDIR=/mnt/current. You can use bhyve for virtualization, by the way.
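A rough sketch of that setup (the VM's hostname, the export options, and the example build directory are all assumptions):
# on the CURRENT VM, export the root filesystem in /etc/exports, e.g.:
#   /  -alldirs -maproot=root  devhost
# on the host/development machine:
mount -t nfs currentvm:/ /mnt/current
cd /usr/src/bin/ls && make && make install DESTDIR=/mnt/current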

Booting a clone of another board's Mendel system with already installed libraries

I posted this question on Unix Stack Exchange (after quite a bit of research, I must say), weighing options like Remastersys and Respin, Clonezilla, dd, and so on. Now I am asking on SO because the google-coral tag might be helpful. I have found a few posts here about backing up with dd (mostly what I expected from the questions I had seen), but also some errors that other users are running into.
After getting everything working, I installed several libraries (via apt, and via git clone and make install). So now I would like to have that same system, with the same libraries, on 3 different boards.
My main idea was that the optimal path is to clone the whole system: instead of installing Mendel on a new board following the tutorial and then running an install.sh script (which can take some time due to downloading, installing, etc.), wouldn't it be easier to just boot an image of a Mendel system that already has my needed libraries? (That is, make a PRIME SYSTEM and clone it out to several boards.)
The problem is that the paths already mentioned (Respin, Clonezilla, etc.) don't seem to work on this system in this way. So booting a new Mendel from scratch and backing up the drive with dd seems like a doable option, though some paths and names (like usernames) would probably need to be changed afterwards. If all boards were named the same it wouldn't be a problem; in my case it's rather the opposite, since names are assigned randomly to avoid collisions when several boards are up at the same time.
Is there a simple way to install a Mendel system on a Google Coral board that is a clone of the system on another Coral (let's call it the PRIME SYSTEM)? That would mean the exact same system Google provides, but with specific libraries already installed. Or is the only path to do a normal installation and then restore a backup of the PRIME SYSTEM, via dd (or some other tool I don't know of), with the appropriate name and path changes once it is done?
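For reference, the dd route discussed above would look roughly like this (a heavily hedged sketch: the device names are assumptions, and the Coral's storage may not be exposed as a plain block device, so verify with lsblk before running anything):
sudo dd if=/dev/mmcblk0 of=prime-system.img bs=4M status=progress    # image the PRIME SYSTEM
sudo dd if=prime-system.img of=/dev/mmcblk1 bs=4M status=progress    # write it to the next board's media
# then adjust hostnames/usernames on the copy before first boot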

How can I copy a new version of libc?

I have a computer which is not to be connected to the internet, for security reasons. It is running Linux. On a separate computer, I have code and a cross compiler for Linux. When I move the program over to the offline Linux computer, I cannot execute it, due to the error "version `GLIBC_2.17' not found".
After checking /lib, I see that I do indeed only have version 2.13 of libc. On my development computer, I have all the relevant files for the 2.22 version which is being used to compile the program. I would like to somehow copy this version of libc on my development computer over to my offline computer so that I can run my program.
The problem is, I cannot seem to copy the files. Attempting to do so gives an error:
mv: error writing ‘./libc.so.6’: No space left on device
or something similar. I know this is not actually due to there not being space left on the device, because I don't have much of anything ON the device, and I can copy the files to other places in the filesystem, just not to the /lib directory. How can I migrate my newer version of libc over to the offline computer?
I would like to somehow copy this version of libc on my development computer over to my offline computer so that I can run my program.
Incorrectly updating libc on your "offline" computer is a very easy way to render it unbootable. You should not attempt this unless you understand what you are doing (which you clearly do not), and unless you know how to restore the "offline" computer in case of failure.
The best approach is to use the package manager on the "offline" computer to install the libc package correctly (details vary depending on which package manager is being used).
If the package manager approach doesn't work, you can copy individual files. However, note that the system ld-linux.so and libc.so.6 must match at all times, or every dynamically linked program (your shell, cp, mv, etc.) will break.
Since you need to update two files simultaneously, you need a statically linked copy of cp. It is best to have statically linked copies of $SHELL and ln as well (in case you make a mistake).
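A heavily hedged sketch of what that switch might look like; the exact sonames, the loader's name, and /lib vs. /lib64 are assumptions (check with ldd ./yourprogram first), and busybox here is just one example of a statically linked toolset:
cp libc-2.22.so ld-2.22.so /lib/               # stage the new files under new names
# flip both symlinks with statically linked tools, so they keep working even
# while the loader and libc are momentarily mismatched
/bin/busybox ln -sf ld-2.22.so /lib/ld-linux.so.2
/bin/busybox ln -sf libc-2.22.so /lib/libc.so.6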

Deployment strategies for Go services?

I'm writing some new web services in Go.
What are some deployment strategies I can use, regardless of the target platform? For example, I'm developing on a Mac, but the staging/production servers will be running Linux.
Are there some existing deployment tools I can use that support Go? If not, what are some things I can do to streamline the process?
I use LiteIDE for development. Is there any way to hook LiteIDE into the deployment process?
Unfortunately, since Go is such a young language, not much exists yet, or at least such tools have been hard to find. I would also be interested in the development of such tools for Go.
What I have found is that some people have been doing it themselves, or they've adapted other tools, such as Capistrano, to do it for them.
Most likely it's something you'll have to do yourself. And you don't have to limit yourself to shell scripts - do it in Go! In fact many of the Go tools are written in Go. You should avoid compiling on the target system as it's usually a bad practice to have build tools on your production system. Go makes it really easy to cross compile binaries. For example, this is how you compile for ARM & Linux:
GOARCH=arm GOOS=linux go build myapp
One thing you could do is hop on the #go-nuts freenode IRC channel or join the Go mailing list and ask other Gophers what they're doing.
Capistrano sounds like a good idea for deployment alone. You can also do cross-compilation as Luke suggested. Both will work just fine.
More generally though... I'm also kind of torn between OS X (development) and Linux (deployment), and in fact I ended up just developing in a virtual machine via VirtualBox and Vagrant. I'm using TextMate 2 for text editing, but installing many development tools on a Mac is just a major PITA, and I'm more comfortable with having Debian or the like running somewhere in the background. The bonus is that this virtual environment can mirror the deployment environment, so I can avoid surprises when I deploy my code, whatever the language.
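A minimal sketch of that Vagrant/VirtualBox setup (the box name and the build command are assumptions; pick whatever Debian/Ubuntu box you prefer):
vagrant init debian/bookworm64        # write a Vagrantfile for a stock Debian box
vagrant up                            # boot the VM under VirtualBox
vagrant ssh -c 'cd /vagrant && go build ./... && ./myapp'   # build/run in the shared folder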
I haven't tried it myself, but it appears you can cross compile golang (either with goxc or Dave Cheney's golang-crosscompile), albeit with some caveats.
But if you need to match the environment with production, which you probably should most of the time, it's safest to go the route Marcin suggested.
You can find some prebuilt VirtualBox images on http://virtualboxes.org/images/ although creating one yourself is pretty easy.
what are some things I can do to streamline the process?
The cross-compilation idea should be even more appealing with Go 1.5 (Q3 2015), as Dave Cheney details in "Cross compilation just got a whole lot better in Go 1.5":
Before:
For successful cross compilation you would need
compilers for the target platform, if they differed from your host platform, i.e., you're on darwin/amd64 (6g) and you want to compile for linux/arm (5g).
a standard library for the target platform, which included some files generated at the point your Go distribution was built.
After (Go 1.5+):
With the plan to translate the Go compiler into Go coming to fruition in the 1.5 release, the first issue is now resolved.
package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Printf("Hello %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
build for darwin/386
% env GOOS=darwin GOARCH=386 go build hello.go
# scp to darwin host
$ ./hello
Hello darwin/386
Or build for linux/arm
% env GOOS=linux GOARCH=arm GOARM=7 go build hello.go
# scp to linux host
$ ./hello
Hello linux/arm
I'm developing on a Mac, but the staging/production servers will be running Linux.
Considering the compiler for Go is in Go, the process to produce a Linux executable from your Mac should become straightforward.

What are codepad.org's Perl runner limitations?

Sometimes I see people use http://codepad.org as a way to quickly run/test their Perl snippets (it supports doing that with a wide variety of languages, from C to Scheme to Perl).
It's pretty obvious that there must be some limitations as to what code/features can be tested with codepad - does anyone know what those limitations are for the Perl runner?
I'll get the ball rolling on my own observation: not every CPAN module is available :(
Mostly based on their "about" page:
codepad only supports Perl 5.8.0
Presumably, like any Perl install, not every module (CPAN or otherwise) is present.
As a specific example, List::MoreUtils is missing.
As a sub-limitation, they seem to run on Linux. So any Windows specific modules would certainly be out.
It's in a chroot jail with system call restrictions. Among other things, this seems to prevent file creation (my snippets that created files in the current directory or /tmp both errored out, as did File::Temp calls).
codepad code is executed on a virtual machine. Behind firewalls. And buried in a bunker. So certain functionality is probably disabled - especially networking/internet functionality. The exact "about" quote is:
The supervisor processes run on virtual machines, which are firewalled such that they are incapable of making outgoing connections.
The machines that run the virtual machines are also heavily firewalled, and restored from their source images periodically.
It's easier to just run Perl code locally. It's easy to install multiple versions of Perl and to track separate module repositories. It's also not hard to run just about any operating system you want in a virtual machine. Why you'd need anyone else's service to do what you can do better yourself is beyond me.
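If you go the local route, one common setup is perlbrew plus cpanm; a sketch (the version number is just an example, not what codepad runs):
perlbrew install perl-5.36.0     # build and install a private perl
perlbrew switch perl-5.36.0      # make it the active one for your shell
perlbrew install-cpanm           # get a module installer for that perl
cpanm List::MoreUtils            # install modules codepad lacks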

Best practices for deploying tools & scripts to production?

I've got a number of batch processes that run behind the scenes for a Linux/PHP website. They are starting to grow in number and complexity, so I want to bring a small amount of process to bear on them.
My source tree has a bunch of cpp files and scripts, organized with development but not deployment in mind. After compiling all the executables, I need to put various scripts and binaries on a cluster of machines. Different machines need different executables, scripts, and config files for their batch processes. I also have a few tools that I've written that belong on every machine. At the moment, this deployment process is manual and error prone.
I'm guessing I'm just going to end up with a script that runs at the root of the source tree and builds a smaller tree of everything necessary for any of the machines. Then, I'll just rsync that to the appropriate machines. But I'm curious how other people are managing this type of problem. Any ideas?
There are several categories of tools here. Some people use a combination of tools from these categories. I sometimes use, for example, both Puppet and Capistrano. See Puppet or Capistrano - Use the Right Tool for the Job for a discussion.
Scripting Tools aimed at Deploying an Application:
The general pattern with tools in this category is that you create a script and/or config file, often with sets of commands similar to a Makefile, and the tool will ssh over to your production box, do a checkout of your source, and run whatever other steps are necessary.
Tools in this area usually have facilities for rolling back to a previous version. So they'll check out your source into a releases/ directory, and create a symbolic link from "current" to that releases/ directory if all goes well. If there's a problem, you can revert to the previous version by running a command that removes "current" and links it to the previous releases/ directory.
Capistrano comes from the Rails community but is general-purpose. Users of Capistrano may be interested in deprec, a set of deployment recipes for Capistrano.
Vlad the Deployer is an alternative to Capistrano, again from the Rails community.
Write your own shell script or Makefile.
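The releases/current pattern those tools manage boils down to something like this (a sketch only; the host, paths, and repository URL are made up for illustration):
ssh deploy@prod.example.com '
  set -e
  ts=$(date +%Y%m%d%H%M%S)
  git clone --depth 1 https://example.com/myapp.git /srv/myapp/releases/$ts
  ln -sfn /srv/myapp/releases/$ts /srv/myapp/current
'
# rollback = point "current" back at the previous releases/ directory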
Options for getting the files to the production box:
Direct checkout from source. Not always possible if your production boxes lack development tools, specifically source code management tools.
Checkout source locally, then tar/zip it up. Use scp or rsync to copy the tarball over. This is sometimes preferred for something like an Amazon EC2 deployment, where a compressed tarball can save time/bandwidth.
Checkout source locally, then rsync it over to the production box.
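For the tarball and rsync options above, a minimal sketch (paths and hostnames are placeholders):
tar czf myapp.tar.gz bin/ scripts/ conf/          # package the build output
scp myapp.tar.gz user@prod:/tmp/                  # option 2: ship a compressed tarball
rsync -az --delete build/ user@prod:/srv/myapp/   # option 3: sync the tree directly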
Packaging Tools
Use your OS's packaging system to generate packages containing the files for your app. Create a master package that has as dependencies the other packages you need. The RubyWorks system is an example of this, used to deploy a Rails stack and sample application. Then it's a matter of using apt, yum/rpm, Windows msi, or whatever to deploy a given version. Rollback involves uninstalling and reinstalling an old version.
General Tools Aimed at Installing Apps/Configs and Maintaining a Set of Systems
These tools do not specifically target the problem of deploying a web app, but rather the more general problem of deploying/maintaining Apps/Configs for a set of servers, or an entire company's workstations. They are aimed more at the system administrator than the web developer, though either can find them useful.
Cfengine is a tool in this category.
Puppet aims to improve on Cfengine. It's got a learning curve but many find it worth the time to figure out how to do the configs. Once you've got it going, each box checks the central server periodically and makes sure everything is up to date. If someone edits a file or changes a permission, this is detected and corrected. So, unlike the deployment tools above, Puppet not only puts files in the right place for you, it ensures they stay that way.
Chef is a little younger than Puppet with a similar approach.
Smartfrog is another tool in this category.
Ansible works with plain YAML files and does not require agents running on the servers it manages.
For a comparison of these and many more tools in this category, see the Wikipedia article, Comparison of open source configuration management software.
Take a look at the cfengine tutorial to see if cfengine looks like the right tool for your situation. It may be a little too complicated for a small website, but if it is going to involve more computers and more configuration in the future, at some point you will end up using cfengine or something like that.
Create your own packages in the format your distribution uses, e.g. Debian packages (.deb). These can either be copied to each machine and installed manually, or you can set up your own repository, and add it to your list of sources.
Your packages should be set up so that the scripts they contain consult a configuration file, which is different on each host, depending on what scripts need to be run on each.
To tie it all together, you can create a meta package that just depends on each of the other packages you create. That way, when you set up a new server, you install that one meta package, and the other packages are brought in as dependencies.
Although this process sounds a bit complicated, if you have many scripts and many hosts to deploy them to, it can really pay off in the long run.
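A minimal sketch of the packaging step with dpkg-deb (the package names, paths, and dependencies here are made up; a real setup would use proper Debian tooling and a repository):
mkdir -p mytools-1.0/DEBIAN mytools-1.0/usr/local/bin
cp myscript.sh mytools-1.0/usr/local/bin/
cat > mytools-1.0/DEBIAN/control <<'EOF'
Package: mytools
Version: 1.0
Architecture: all
Maintainer: you <you@example.com>
Depends: mytools-batch, mytools-config
Description: site batch scripts (pulls in the other pieces as dependencies)
EOF
dpkg-deb --build mytools-1.0                      # produces mytools-1.0.deb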
I have to roll out PHP scripts and Apache configurations to several customers on a frequent basis. Since they all run Debian Linux, I've set up a Debian package repository on my server, and all the customer has to do is type apt-get upgrade and they get the latest version.
The first thing to do is get all these scripts into a source control repository (svn or git are good) so that you can track changes to these scripts over time.
If you are interested in Ruby, check out Capistrano; it is well suited to deploying things to multiple machines in a cluster, and is fairly easy to set up. It can read files directly from your version control system.
Puppet is another tool that can be used in this situation. It is similar to cfengine - you create a model of the desired deployment and Puppet figures how to get the environment to this state.