I'm working a lot with virtual machines and am looking for an efficient way to manipulate the files on a VM while keeping them in sync with my local filesystem, from where I commit them to the VCS.
I tried the Remote Systems Explorer for Eclipse. It gives me easy access to the files on the remote system, but it has no synchronize option, so I can work directly on the remote files but still have to copy them back to my local directory in order to commit them.
Basically I need some kind of rsync (on a Windows machine, though) so that I only have to edit my local files and sync them to the VM, or vice versa.
Can anybody help with that issue?
Looking at your reputation, I guess you can probably create a script that synchronizes the local files on the command line outside Eclipse. If so, you can have this script invoked automatically from Eclipse as part of your normal (automated) project build. To do so, you only have to add a new builder to the project that invokes the script.
This tutorial shows how to invoke an Ant script during each build. You can restrict the builder to changes in specific files and working sets if you don't want to trigger the script on every file change.
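If the VM's files are reachable as a network share, the script invoked by the builder could be as small as a one-line mirror; here is a rough sketch using robocopy (the local and share paths are placeholders):
rem Mirror the local working copy onto the VM's share, pruning files deleted locally
robocopy "C:\work\myproject" "\\my-vm\share\myproject" /MIR /XD .metadata .svn
Mirroring in the other direction is the same command with the paths swapped.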
There are 5 modules inside a branch in Perforce, and each module has many directories with Java files and build.xml files. How should I trigger a particular build.xml file when there is a check-in inside that directory?
Example: below is the sample structure; if there is any check-in in cord9aif, the build.xml file inside it should be triggered.
AMX->oms->bb->cord9aif->jjpj.java,build.xml
bbbbbbb-> kkhfdh->hgkjgh.java,build.xml
test
A really easy way is to set up a commit trigger on the Perforce server:
change-commit //AMX/oms/bb/cord9aif/... "perl run-my-build.pl"
This has the benefit of automatically happening the instant a change is committed to that directory, but it depends on you having infrastructure set up where a script that's run on your server can trigger a build (presumably on some other machine).
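For reference, such a trigger is added as a line in the server's triggers table (edited with p4 triggers); the trigger name, script location, and the %change% argument below are only an illustration:
Triggers:
	build-cord9aif change-commit //AMX/oms/bb/cord9aif/... "perl /opt/build/run-my-build.pl %change%"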
If you don't have access to the Perforce server, or the Perforce server doesn't have access to where the build needs to happen, or both, you can do it by polling from any client machine (e.g. your build machine, which I'd assume is set up as a Perforce client).
The way I'd probably do it would be to have a job that runs p4 sync on the affected directory and looks at the output:
Any errors or otherwise unexpected output? Page a human.
Any files updated? Kick off a build.
Otherwise, do nothing.
Another option would be to run p4 changes against that directory and keep track of the most recently seen change -- whenever there's a new change, tell your build machine to p4 sync and kick off the build.
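A rough sketch of that polling approach, run periodically on the build machine (it assumes a client workspace that maps this depot path; the state file and the Ant invocation are placeholders):
#!/bin/sh
# Poll Perforce for new submitted changes under the module and build when one appears.
DEPOT_PATH='//AMX/oms/bb/cord9aif/...'
STATE_FILE="$HOME/.last-built-change"

latest=$(p4 changes -m1 "$DEPOT_PATH" | awk '{print $2}')   # newest change number
last=$(cat "$STATE_FILE" 2>/dev/null || echo 0)

if [ -n "$latest" ] && [ "$latest" != "$last" ]; then
    p4 sync "$DEPOT_PATH"                        # update the workspace
    ant -f AMX/oms/bb/cord9aif/build.xml         # trigger that module's build
    echo "$latest" > "$STATE_FILE"
fi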
I'm currently helping to maintain a project for a client remotely. I'm the only developer, hence some of my unorthodox approaches/thinking.
the problem
The client is using Visual Studio 2010 + Team Foundation Server for their source control. I am working on a Mac over VPN and have tried several approaches to make committing to their TFS workable. I've tried the TFS plugin for Eclipse with no luck (the VPN really hoses the connection to TFS). Currently I have to do a full "checkout for edit" through a virtual machine connected to TFS, then transfer the project over the VPN to overwrite those files. Not a sustainable solution, to say the least.
the solution?
I'm wondering if there is a way to:
get a list of changed files from Git (I think this is the solution: How to list all the files in a commit?)
then use that list to fetch those files, maintaining their folder structure
from there I can do my dump over VPN into the VM that has the project mapped in TFS.
Or if there is something I've overlooked or haven't thought of, please do recommend it; I'm all ears.
First, I'm assuming you are running the VM on or near the TFS server, not on your Mac. If not, you can just share a directory using VMware/VirtualBox and edit away on your Mac...
It sounds like you could achieve what you want with plain old Git. If you:
Create a bare repository on the VM (git init --bare)
Add a post-receive hook to copy the files from the master branch (for example) into the TFS directory, overwriting merrily; a minimal example follows this list (http://git-scm.com/book/en/Customizing-Git-Git-Hooks)
Initialise your local copy of the source as a Git repository (git init)
Add the remote repository. Assuming it's a Windows box, you can use an SMB shared folder over the VPN so your remote is "local" as far as Git is concerned. (git remote add tfsserver file:///Volumes/tfsmount/code)
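For step 2, a minimal post-receive hook could be as small as this (it goes into hooks/post-receive in the bare repo and must be executable; the work-tree path below is just a stand-in for whatever directory the VM has mapped into TFS):
#!/bin/sh
# Check out the newly pushed code into the TFS-mapped directory, overwriting what's there
GIT_WORK_TREE=/c/tfs/MyProject git checkout -f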
Your first push will be expensive (but you could prepopulate the remote repo to get around that), but subsequent pushes would be just the changesets. The post-receive hook would then take care of updating the files, and you're laughing.
Of course, you then get to impress them with how amazing Git is, get them to migrate, and your problem goes away forever :).
Update: Here's a link which describes these steps in more detail, under the guise of updating a remote website: http://toroid.org/ams/git-website-howto.
I am using NetBeans to write a program. I have come a long way writing different classes on one of two computers; however, I need to work on the same classes from both. My NetBeans directory is on Dropbox. I set the project's working directory as
C:\Documents and Settings\damadr01\Dokumenter\Dropbox\Me, Myself & David\Activity_Calibrator
However, this will break as soon as I try to run from the other computer. Is there a way of specifying a relative working directory?
I suggest you use a VCS, like Git, Mercurial, Subversion, CVS, etc.
All of these are supported in NetBeans. You can put the main repository on Dropbox and clone repositories or check out working copies into a local folder on each machine. This way, you also get the benefit of having the commit history.
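With Git, for example, that setup could look roughly like this (the Dropbox path and project name are placeholders):
# One-time: create a bare "main" repository inside the Dropbox folder
git init --bare ~/Dropbox/repos/Activity_Calibrator.git

# On each computer: clone it into a plain local folder and work from there
git clone ~/Dropbox/repos/Activity_Calibrator.git ~/projects/Activity_Calibrator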
I was wondering how to get my web projects deployed using FTP and/or SSH.
We currently have a self-made deployment system which is able to handle this, but I want to switch to Jenkins.
I know there are publishing plugins and they work well when it comes to uploading build artifacts. But they can't delete or move files.
Do you have any hints, tips or ideas regarding my problem?
The Publish Over SSH plugin enables you to send commands over SSH to the remote server. This works very well; we also move/delete some files before deploying the new version and have had no problems whatsoever with this approach.
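As an illustration, the plugin's Exec command field could run something like the following on the remote host as part of the deployment (the paths are placeholders):
# Keep the previous release around and start from a clean target directory
mv /var/www/myapp /var/www/myapp.previous
mkdir -p /var/www/myapp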
The easiest way to handle deleting and moving items is to delete everything on the server before you deploy a new release using one of the 'Publish Over ...' extensions. I'd say that really is the only way to be sure the deployed version is the one you want. If you want more version-control-style behavior, you either need to use a version control system, or perhaps rsync, which will cover part of it.
If your demands are very specific, you could develop your own convention to mark deletions and have them performed by a separate script (like you would for database changes using Liquibase or something similar).
By the way, I would recommend not automatically updating your live sites after every build using the 'Publish Over ...' extension. When we really want a live site updated automatically, we rely on the Promoted Builds Plugin to keep it nearly fully automated while adding a little safety.
I came up with a simple solution to remove deleted files and upload changes to a remote FTP server as a build action in Jenkins, using a simple lftp mirror script (see the lftp manual page).
In short, you create a config file, ~/.netrc, in your Jenkins user's home directory and populate it with your FTP credentials:
machine ftp.remote-host.com
login mySuperSweetUsername
password mySuperSweetPassword
Create an lftp script deploy.lftp and drop it in the root of your Git repo:
set ftp:list-options -a
set cmd:fail-exit true
open ftp.remote-host.com
mirror --reverse --verbose --delete --exclude .git/ --exclude deploy.lftp --ignore-time --recursion=always
Then add an "Exec Shell" build action to execute lftp on the script.
lftp -f deploy.lftp
The lftp script will:
mirror: copy all changed files
reverse: push local files to the remote host; a regular mirror pulls from the remote host to local.
verbose: write details about which files were copied where to the build log
delete: remove remote files no longer present in the Git repo
exclude: don't publish the .git directory or the deploy.lftp script.
ignore-time: don't decide what to publish based on file timestamps. Without this, in my case, all files got published because a fresh clone of the Git repo reset the file timestamps. It still works quite well, though; even files modified by adding a single space were identified as different and uploaded.
recursion: analyze every file rather than relying on folder timestamps to decide whether the files inside might have been modified. This isn't strictly necessary since we're ignoring timestamps, but I have it in here anyway.
I wrote an article explaining how I keep FTP in sync with Git for a WordPress site I could only access via FTP. The article explains how to sync from FTP to Git, then how to use Jenkins to build and deploy back to FTP. This approach isn't perfect, but it works: it only uploads changed files, and it deletes files off the host that have been removed from the Git repo (and vice versa).
I am working on a project that depends on external programs, and needs to know the paths to them. I develop and use the project on several machines, using mercurial for version control. The paths are machine-dependent, so I keep them in a machine-specific config file. I would like the config file for each host to be version-controlled, but I need to ensure that the config file from one host would never overwrite the config file for another host when pushing or pulling between hosts. Is there any way to accomplish this?
In principle, Wim is right: machine-specific configurations shouldn't be part of the project's source control. As long as you work alone, this isn't a real problem, but once you want to provide generic releases of your project, you have to get rid of them. At that point you might not be happy about the fact that the change history contains files with machine-specific data.
Nevertheless, it may make sense to have machine-specific data in version-controlled files (personally I do this for my dot-rc files and shell scripts). In that case I would suggest separating generic and specific configurations into different files and including/using the specific one at build time or runtime, depending on the machine currently in use.
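For example, a small shell snippet could pick the machine-specific file at runtime by hostname (assuming one specific-<hostname>.conf per machine, matching the layout below):
# Load the generic settings first, then the overrides for this machine
. ./generic.conf
. "./specific-$(hostname -s).conf"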
If it is not possible to detect the current machine automatically, you could still create an unversioned symbolic link on each machine, pointing to the appropriate specific configuration file. For instance, on the machine foo the file layout could look like this:
generic.conf                          version-controlled
specific-foo.conf                     version-controlled
specific-bar.conf                     version-controlled
specific.conf → specific-foo.conf     unversioned symbolic link
An alternative to symbolic links is to use a hook which automatically creates specific.conf, e.g. on each invocation of hg update. As hooks are set in a repository's hgrc file, they can be defined individually on each machine. Here's an example of a corresponding hooks section in the .hg/hgrc file of a repository clone on the machine foo:
[hooks]
update = cp specific-foo.conf specific.conf
Machine-specific configuration settings should not be version-controlled in the same repository as the project code.
However, it is still a good idea to put an inactive sample configuration file in your code repository. This sample could show a bunch of typical locations for the external program paths you mentioned, as commented-out lines. That way you make it easier to get your project running on new machines.
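Such a sample could look roughly like this (the variable names and paths are purely illustrative; you would copy it to the real, unversioned config file name and uncomment what applies):
# sample.conf -- copy to local.conf and adjust for this machine
#DOT_PATH=/usr/bin/dot
#DOT_PATH=C:/Program Files/Graphviz/bin/dot.exe
#FFMPEG_PATH=/usr/local/bin/ffmpeg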