This is probably a really silly question, but my workplace is kind of new to embedded Linux and we aren't really sure how we should put our code under source control.
We'll be getting a package from Freescale, and if it's anything like our OMAP package it'll probably be pretty big. Is it a good idea to just put everything under version control, or split it up into different repos? Should we leave some stuff out?
We do have some experience with Windows CE; we never really versioned everything, just the stuff we used in the board support package, and checked it out over the WINCE600 folder.
There are too many variables to give a definitive answer. The correct answer depends on:
your application and how heavy your changes are
your build system
You're talking about Freescale and embedded Linux, so maybe you want to check out OpenEmbedded. If you use this or any similar solution, then you don't need to put anything under version control except your own sources and your recipes.
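To make that concrete, here is a rough sketch (repository and directory names are made up, and git is just an example VCS) of what "only your sources and recipes" means in practice: you version your own layer and your application, while the vendor BSP and the OpenEmbedded metadata are fetched by the build system rather than re-committed by you.

    # hypothetical layout; only these two repositories are yours to version
    git init meta-mycompany      # your layer: recipes, machine config, patches
    git init myapp               # your application sources
    # the Freescale BSP and upstream OpenEmbedded metadata are pulled in as
    # separate upstream repositories by the build setup, not committed here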
But if you're customizing the way you build the system, then I'd definitely put the sources and the build scripts under version control.
I am writing a Perl program that I want to share with others, eventually via CPAN. It's getting to the point where I should start thinking about this on a bigger scale.
A decade ago, I used the h2xs skeleton generator once. Is this still the most recommended way to get started? There used to be a couple of alternatives. Because I am starting from scratch with very little recollection, anything simple will do at this point.
I need to read a few long text files (not Perl modules) for configuration. Where do I put them, and how do I access them no matter where the module is installed? (FindBin?) __DATA__ is inconvenient.
I need to provide an executable (Linux and OS X). Can putting an executable into the user's path be part of the module installation? (How?)
I would like to be able to continue developing it, run it for test purposes, produce a new version, repack it, and re-upload it easily.
Before uploading to CPAN, can I share a CPAN bundle with downloaders and testers for easy local installation?
# cpan < mybundle.cpanbundle
Advice appreciated.
Regards,
/iaw
If anything I say conflicts with Andy Lester, listen to him instead. He knows more than I ever will.
Module::Starter is a good, simple way to generate module scaffolding. My take is it's been the default for this sort of thing for a few years now.
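For instance, the module-starter command that ships with Module::Starter lays out the skeleton in one go (the module name, author and email below are placeholders):

    module-starter --module=My::Module \
        --author="Your Name" --email=you@example.com
    # creates a My-Module/ directory with lib/, t/, Makefile.PL, Changes, etc.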
For configuration/support files, I think you probably want File::ShareDir. Might be worth considering Data::Section if it's just a matter of needing multiple __DATA__ sections though.
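As a rough sketch: files you put in the distribution's share/ directory get installed alongside the module (assuming the build tool is told about them, e.g. via File::ShareDir::Install or Module::Build's share_dir), and File::ShareDir locates them at runtime. The dist and file names here are invented:

    # locate an installed config file regardless of where the dist was installed
    perl -MFile::ShareDir=dist_file -E 'say dist_file("My-Module", "config.txt")'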
You can certainly put scripts in the bin subdirectory of your distribution; the build tool will put them in the right place at install time.
A build tool will take care of the workflow you describe.
Bundles are something different. You make a distribution and share the tarball/archive.
If you set up PERL5LIB appropriately, then repeat make test, make install, make dist to your heart's content. For development/sharing purposes, a lot of projects do their work on GitHub or similar - it makes sharing easy. GitHub has private accounts for business purposes too. Very useful if you want to rewind and see where/when a problem was introduced.
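With an ExtUtils::MakeMaker-based distribution the loop looks roughly like this (the module name in the comment is an example):

    export PERL5LIB=$PWD/lib     # so ad-hoc runs pick up the in-progress code
    perl Makefile.PL
    make test                    # run the test suite
    make dist                    # produce My-Module-0.02.tar.gz for sharing
    make install                 # or install it locally to try it out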
If you get a copy of cpanm (simple to install, fairly lightweight) then it can install from a tar.gz file or even direct from a git repository. You can also tell it to install to a local dir (local::lib compatible - another utility that's very useful).
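For example (the file name, repository URL and local::lib path are placeholders):

    cpanm My-Module-0.02.tar.gz                      # install from a tarball
    cpanm git://github.com/you/My-Module.git         # install straight from git
    cpanm --local-lib ~/perl5 My-Module-0.02.tar.gz  # install into a local::lib dir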
Hopefully that's reasonably up-to-date as of 2014. You may see Dist::Zilla mentioned for module development. My understanding is that it's most useful for those with a large family of CPAN distributions to manage. Oh - if you (or other readers) aren't aware of them, do check out autodie and Try::Tiny around errors and exceptions, Moose (for a full-featured object-oriented framework) and Moo (for a smaller lightweight version).
I think that advice is all reasonably non-controversial. I find cpanm to be much more pleasant than the "full" cpan client, and Moo seems pretty popular nowadays too.
Take a look at Module::Starter and its much more capable (and complex) successor Dist::Zilla.
Whatever you do, don't use h2xs. Module::Starter was created specifically because h2xs was such an inappropriate tool for creating distributions.
I'm currently working on a project that's been going on for several years straight. The development team is small (fewer than 5 programmers), source control is virtually non-existent, and the deployment process as it stands is just based on manually moving files from one server to another. The project is in classic ASP, so building isn't an issue, as both deployment and testing are just about getting the files to where they need to be and pointing the browser at the correct location.
Currently all development is done on a network drive which is also the test server. The test server is only available from inside the local network (it can be accessed through VPN), and is reachable at the address 'site.test' in the browser (this requires editing the hosts file on all the clients, but since there are so few of us that hasn't proven to be a problem at all). All development is done in Visual Studio.
Whenever a file is changed, the developer who changed it is required to write the file name into a Word document and include a small description of what was changed and why. Then, whenever there's supposed to be a version bump (deployment), our lead developer goes through the Word document and copies every file that has changed (file by file) over to the production server. Now, I don't think I need to tell you that this method is very error-prone (a developer might, for instance, forget to note that he changed some dependency, and that might cause problems when deployed), and there's a lot of work involved in deploying.
And here comes the main question. I've been asked by the lead developer to spend some time and see if I can come up with a simple solution that can simplify and automate the "version control" and the deployment. Now, the important thing is that it's as easy as possible for the developers to use. Two of the existing developers have worked with computers for a long time and are pretty stuck in their routines, so for instance changing to something like Git Bash wouldn't work at all. Don't get me wrong, I love git, but the first time one of them got a merge conflict they wouldn't know what to do at all. Also, it would be ideal to change to a more distributed development process where the developers wouldn't need to be logged into the VPN (or need internet at all) to develop, and the changes they made offline could be synced up when they were done with them.
Now, I've looked at Team Foundation Server from Microsoft because of its strong integration with Visual Studio. As far as I've tested, it seems possible to make Visual Studio prompt the user to check in changes whenever they close Visual Studio. Using TFS for source control would probably eliminate most of the problems with development, but how about deployment? Not to mention versioning? As far as I've understood (I've only looked briefly at TFS), TFS has a running number for every check-in, but is it possible to tell TFS that this check-in should be version 2.0.1 of the system (for example), and then have it deploy that to the web server? And another problem: the whole solution consists of about 10 directories with hundreds of files in them, though the system itself (without images and such) is only 5 directories, and only these 5 should be deployed to the server; is this possible to automate?
I know there are a lot of questions here, but what is most important is that I want to automate the development process (not the coding, but the managing of the code) and the deployment process, and I want to make it as simple as possible to use. I don't care if the setup is a bit of work, because I have enough time at hand to set up whatever system fits our needs, but the other devs should not have to do a lot of setup. If all of the machines that will use the system need to be set up once, that's no problem at all, because I can do that, but there shouldn't be any need to do config and setup as we go.
Now, do any of you have any suggestions for which systems to use and how to use them, in order to simplify the processes described above? I've worked with several types of SCM systems before (Git, Hg and Subversion), but I don't have any experience with build systems at all (if one is even needed). Articles and discussion on how to efficiently set up systems like this would be greatly appreciated. Thanks in advance.
This is pretty subjective territory, but I think you need to get some easy wins first. The developers who are "stuck in their ways" are the main roadblock here. They are going to see change as disruptive and not worth it. You need to go slowly and carefully for the easy wins.
First, TFS is probably not going to be a good choice. It's expensive, heavy, and the source control in TFS is pretty lousy. Go for Subversion: it's easy to set up, easy to use, and free. Get that in place first, and get the devs using it. Much easier said than done.
Later (possibly much later), once the devs are using it and couldn't imagine life without a VCS, then you could switch to Hg or Git if you need first class branching and all those other nice features.
Once you have Subversion in place, you can use something like JetBrains TeamCity or Jenkins, both of which are free and easy to use. However, I'm just assuming you don't have a lot of tests and build scripts that the CI server is really going to be running, so it's far more important that you get VCS first. In all things: keep it as simple as possible. Baby steps. Get some wins, build trust, repeat.
I can't even begin to think where to start with this! No offense intended, but apart from the mention of git and Hg, this post could have been written 10 years ago.
1) Source control - How can a team of developers possibly work effectively without some form of source control? Hell, even if it's Visual SourceSafe (*shudder*) at least it would be something. You have to insist that the team implement source control. You know what's available, so I won't get into preaching about that. (However, Subversion with TortoiseSVN has worked quite well for me; there's a small command-line sketch below, after point 2.)
2) "write the file he changed into a Word document and include a small description of what was changed and why"
You have got to be kidding... What happens if two developers change the same file? Does the lead then have to manually merge two changes that s/he extracts from the word doc? Please see #1 and explain to them how commit comments work.
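To give a sense of how little is involved, a minimal Subversion round trip from the command line looks like this (repository path, URLs and the commit message are invented); the -m comment is exactly what replaces the Word document:

    svnadmin create /srv/svn/site                       # create the repository
    svn import ./site file:///srv/svn/site/trunk -m "Initial import"
    svn checkout file:///srv/svn/site/trunk site-wc     # each developer's working copy
    cd site-wc
    svn commit -m "Fixed the date formatting on the invoice page"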
Since you don't really need to "build" (i.e. compile, etc.) anything, you should be able to solve most of your problems with some simple tools. First and foremost, you need to use a source control solution. Yes, the developers would have to learn how to use another tool (EEEK!). You could do the initial legwork of getting the code into the repository. If you have file access to the other developers' machines, you could even copy a checked-out working copy to their machines so they wouldn't have to do the checkout themselves (not really that hard).
You could then use all the creamy goodness of version control to create version branches when each deployment needs to be done. You could write simple scripts using the command-line SVN tools to check out said branches and automatically copy the files to the target server(s). Using a tool like Beyond Compare, the copy process could be restricted to only the files that are different (plus BC can handle an FTP target if that is an issue). By enforcing commit comments on the SVN repo, you'll guarantee that the developers provide comments, and for each set of changes between releases you can very easily generate a list of all those comments using the SCM's log retrieval features.
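A rough sketch of such a deployment script (repository URL, branch name and server path are placeholders, and rsync stands in here for the Beyond Compare/FTP copy step purely for illustration):

    # export a clean copy of the release branch (no .svn metadata)
    svn export http://svnserver/repo/branches/2.0.1 /tmp/release-2.0.1
    # copy only the files that actually differ to the production server
    rsync -r --checksum /tmp/release-2.0.1/ webuser@prodserver:/var/www/site/
    # collect the commit comments for the release notes
    svn log --stop-on-copy http://svnserver/repo/branches/2.0.1 > /tmp/release-2.0.1-notes.txt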
I started using Dist::Zilla several months ago. However, at YAPC::NA someone mentioned that they use ShipIt instead. Then today I noticed a .shipit file in miyagawa's cpanminus directory on GitHub, so I decided to look into it some more...
My initial impression is that ShipIt has a subset of what is available with Dist::Zilla, but I don't want to jump to conclusions. So, for those who have had experience with both, what are the strengths/weaknesses of ShipIt vs Dist::Zilla?
Cross-posted at PerlMonks.
I'm the author of Dist::Zilla.
I evaluated ShipIt pretty extensively before choosing to go ahead and write Dist::Zilla, and initially they covered almost exactly the same problem space: doing all the boring grunt work of building and uploading a CPAN distribution. All of the features that Dist::Zilla now has beyond ShipIt are later additions, more or less.
If you only need the features of ShipIt, I still advise you to strongly consider Dist::Zilla, for one very simple reason: hackability. If I had been able to not write something new, I would've used ShipIt, but I found it to be underdocumented and difficult to extend. Its plugins were not generic enough and the core behavior made too many assumptions about how you'd like to work.
Dist::Zilla was inspired specifically by this problem: it turned everything into a plugin, and every plugin was given a very, very small interface so that its assumptions would be forcibly limited.
One benefit of ShipIt over Dist::Zilla is that ShipIt has (to the best of my knowledge) no plugins that will alter the way you actually write your code. This means your documentation will still look the same, you will still have a Makefile.PL, and so on. Some hackers don't like that so many DZ-based dists fundamentally change the assumptions of how to test and build CPAN code from its source repository. ShipIt will never change that.
It's possible to avoid using any such plugins with Dist::Zilla, but in general my experience is that people do use them, almost always, in one form or another.
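For a feel of how plugin-centric it is, a near-minimal dist.ini looks something like this (the names are placeholders, and [@Basic] is just the stock plugin bundle people typically start from):

    cat > dist.ini <<'EOF'
    name             = My-Module
    author           = Your Name <you@example.com>
    license          = Perl_5
    copyright_holder = Your Name

    ; everything beyond the basics is done by adding or swapping plugins
    [@Basic]
    EOF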
As far as I can tell, my initial impressions were correct.
ShipIt provides functionality for releasing distributions:
keeping track of version numbers
integrating with version control
uploading to CPAN
displaying the changelog file in an editor so that you can edit it before release.
Dist::Zilla, by default, provides the ability to upload distributions to CPAN with a single command (i.e. dzil release). Dist::Zilla also has functionality for creating new distributions (i.e. dzil new My::New::Module). It also automatically generates so many of the files that I used to have to maintain by hand.
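In day-to-day terms (the module name is an example):

    dzil new My::New::Module    # scaffold a new My-New-Module distribution
    cd My-New-Module
    dzil test                   # build into a temporary directory and run the tests
    dzil release                # build the tarball and upload it to CPAN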
Using plugins, Dist::Zilla seems able to provide most, if not all, of the functionality available with ShipIt. It is also relatively easy to add brand new features using plugins.
I'm assuming Mercurial is for keeping the website updated while archiving the old stuff? And for making it easy to test things and such?
My question is, how exactly should I get started, and can somebody give me a crash course in using Mercurial alongside the following technologies:
Notepad++ for coding
FTP
PHP/MySQL
jQuery & other JS libraries
I use Windows and would like to keep things fairly simple. I'm developing one website currently and want some kind of version-control system in place. Or should I just stick to my current method of editing files in Notepad++, uploading via FTP, and making a backup copy of everything every once in a while?
Any thoughts?
EDIT: I'm working through http://bugtracker.gttools.com/public/wiki/bluehost/Mercurial right now to try and 'install' it.
I'd definitely recommend reading through the excellent Hg Init site (http://hginit.com/) in addition to the official guide.
Definitely try to move across to some form of version control (SVN, Git or Mercurial), as it'll save you time down the line.
Mercurial is a distributed version control system, much like Git but allegedly slightly simpler.
A good tutorial by Joel Spolsky can be found at hginit.com.
If you read up on https://www.mercurial-scm.org/guide under Basic Workflow you should be able to figure out how to work with it while editing files using Notepad++ etc.
From your question it sounds like you don't know much about version control (like you have only a very basic grasp of what it is and why it is useful). So perhaps the first thing you should do is read up on that in general.
But in terms of using Mercurial, I don't think you will find a better insight into how to use it and why it is so good than Joel's hginit tutorial (http://hginit.com/).
Hg Init is a good tutorial for Mercurial. Basically you have to run hg init in the directory you want to put under version control. If you don't like the command line, you can also use GUI tools like TortoiseHg.
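The whole basic workflow from the command line is only a handful of commands (file names are examples); TortoiseHg wraps the same operations in a GUI:

    hg init                        # turn the current directory into a repository
    hg add index.php style.css     # tell Mercurial which files to track
    hg commit -m "First version of the site"
    hg status                      # see what changed since the last commit
    hg log                         # browse the history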
If I'm not mistaken, you also want to upload the latest version to the website. If that's right, FTP access alone won't be enough for this (unless you define a post-commit hook that uploads the directory using FTP).
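A sketch of what wiring up such a hook might look like; the upload script name is invented, and hanging it on the commit hook (rather than, say, changegroup) is just one option:

    cat >> .hg/hgrc <<'EOF'
    [hooks]
    commit = /path/to/upload-site-via-ftp.sh
    EOF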
"I'm assuming Mercurial is for keeping the website updated while archiving the old stuff? Easy to test things and such?"
Not sure what you mean here. Mercurial (and all version control systems) lets you archive your changes as you make them, label and manage your releases so you can track code that goes together, and branch when you need to do parallel development.
You should be checking in your changes as you make them. If anything goes wrong during development, you can roll back to the last good version you had. It's a great way to make sure that you don't lose days and days of work because you forgot to check in.
Check in early, check in often.
Putting development tools (compilers, IDEs, editors, ...) and runtime environments (JRE, .NET Framework, interpreters, ...) under version control has a couple of nice benefits. First, you can easily compile/run your program just by checking out your repository; you don't need anything else. Second, the combination of tools, runtime, and code is certain to be version-compatible, since you tested it together. However, it has its own drawbacks. The main one is the large volume of big binary files that must be put under version control, which may make the VCS slower and the backup process harder. What's your view?
Tools and dependencies actually used to compile and build the project, absolutely - it is very useful if you ever have to debug an issue or develop a fix for an older version and you've moved on to newer versions that aren't quite compatible with the old ones.
IDEs & editors, no: ideally your project should be buildable from a script, so these would not be necessary. The generated output should be the same regardless of what you used to edit the source.
I include a text (and thus easily diff-able) file in every project root called "How-to-get-this-project-running" that lists any and all things necessary, including the correct .NET version and service packs.
Also, for proprietary IDEs (e.g. Visual Studio), there can be licensing issues, as this makes it difficult to manage who is using which pieces of software.
Edit:
We also used to keep, in source control, batch files that automatically checked out the source code (and all dependencies). Developers just check out the "Setup" folder and run the batch scripts, instead of having to search the repository for the appropriate bits and pieces.
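Roughly what those scripts did, written here as a shell sketch rather than a Windows batch file (repository URLs and paths are placeholders, and Subversion is just an example VCS):

    svn checkout http://svnserver/repo/trunk/src       ~/work/src
    svn checkout http://svnserver/repo/trunk/3rdparty  ~/work/3rdparty
    svn checkout http://svnserver/repo/trunk/tools     ~/work/tools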
What I find very nice and common (in the .NET projects I have experience with, anyway) is including any "non-default install" dependencies in a lib or dependencies folder under source control. The runtime is provided by the GAC and more or less assumed.
First, you can easily compile/run your program just by checking out your repository.
Not true: it often isn't enough to just get/copy/check out a tool; the tool must also be installed on the workstation.
Personally I've seen libraries and 3rd-party components in the source version control system, but not the tools.
I keep all dependencies in a folder under source control named "3rdParty". I agree that this is very convenient and you can just pull down the source and get going. It really shouldn't affect the performance of the source control.
The only real drawback is that the initial size to pull down can be fairly large. In my situation anyone who pulls down the code usually runs it as well, so that's OK. But if you expect many people to pull down the source just to read it, then this can be annoying.
I've seen this done in more than one place where I worked. In all cases, I've found it to be pretty convenient.