Simple yet fast deployment tool or solution for FTP-based server deployment

I have a simple task for which some simple solution should exist, yet I cannot come across one.
I have a huge file tree on computer A (development). I have multiple copies of the same file tree on computer B (let's call it production). Computer B runs FTP and PHP, not much else.
I need to move the changed files from the tree on A to the tree on B as efficiently as possible, i.e. if just one file changes, only that one file gets transferred. It would be enough to "compare" the local and remote trees using last-modification dates; nothing else is needed.
I tried to use the good old Ant for it, but that really does not work, because its FTP task is a really bad one (it does not preserve modification dates on PUT, and so on). What other options are there if I do not want to write the code for such a task myself? I'd expect there is some tool out there that would take a remote dir listing, download it to the local computer, select only the changed files, and transfer them to the destination. Do you know how I could do it? Some sort of FTP- or PHP-based distributed robocopy?
EDIT: I should have added that I mean doing it on a Windows 10 computer syncing to some FTP/PHP server using an automated command-line script, not a GUI.

Actually, I solved the issue using WinSCP. I managed to integrate it into Ant by calling it through the exec task and using WinSCP's synchronize command. For my current folder size it is fast enough; let's see later. The FTP task in Ant was not useful since it does not preserve the modification dates.
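For reference, a minimal sketch of the WinSCP invocation (the host, credentials, and both directory paths are placeholders to adapt):

    rem one-shot WinSCP sync, runnable from a batch file or an Ant exec task
    winscp.com /command ^
      "open ftp://user:password@ftp.example.com/" ^
      "synchronize remote C:\dev\tree /htdocs/tree" ^
      "exit"

"synchronize remote" uploads local changes to the remote side; "synchronize local" and "synchronize both" cover the other directions.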

Related

How to periodically 'touch' temporary files on remote server so the files don't get deleted

I log on to the remote server using SAS/Emacs. On the server there is a space where I can save files for about a week. Unless I refresh or 'touch' those files again, they get deleted after a week. Is there a macro or some code that I can execute whenever I open SAS/Emacs so that these files stay updated?
So far, I have used SSH to go onto the server and typed 'touch /*' to keep them 'touched', but I am hoping there is a better/more efficient way to keep those files touched.
Assuming you're using Emacs Speaks Statistics (ESS) to connect to SAS, you have a couple of different options.
One is to modify ess-sas-submit-command to point to a script that first runs your "touch" command and then starts SAS.
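A minimal sketch of such a wrapper script; the scratch directory and the SAS executable path are placeholders:

    #!/bin/sh
    # refresh the timestamps on the scratch files, then hand off to SAS
    touch /path/to/scratch/*      # adjust to the directory that gets purged
    exec /usr/local/bin/sas "$@"  # adjust to the real SAS executable

Point ess-sas-submit-command at this script and the files get touched every time a SAS session starts.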
Another is to create an autoexec for SAS to do that for you, assuming you have rights to do so; you can add that to various locations in Unix or to the command line itself (depending on how you're launching SAS).
Even if you're not using ESS, the Autoexec method may work for you.
Note that, of course, your system administrator may not appreciate this, so do make sure it is permissible (unless that sysadmin is you!).

Simple and easy to use tool for managing different versions of files

I want to manage different sets of file versions locally on a machine without using complex version control tools like TFS/Git/SVN, etc. Here is my use case:
I have a Windows virtual machine that contains many XML, XSLT, XSL, TXT, etc. files; the virtual machine gets updated with every release of my product.
Often I need to analyze errors in this virtual machine, so I change many files, run the product, and start analyzing; let us call these file changes FileChangeSet1.
Based on the results above, I need to change other files (and maybe some of the files in FileChangeSet1) and do another test.
Again based on the results, I need to change more files; eventually I end up with FileChangeSet1, FileChangeSet2, ..., FileChangeSet(n).
I want to:
be able to switch between these file change sets easily and quickly, e.g. have a GUI that shows me a tree of FileChangeSets, where I can click one of them and all files of that change set are used.
create file change sets from other file change sets, e.g. copy FileChangeSet1 into FileChangeSet2 and change only one file in set 2
I don't want to configure and install a complex version/source control system like TFS/Git/SVN where I have to create a database of all my files first.
Making snapshots of the virtual machine is not an option because it is extremely slow.
I think you would not gain much from version control tools anyway, because they are made to version text files. For binary files, I think you would end up managing several different copies of them regardless (at least with older tools such as CVS and SVN).
If you are running on Linux, you may want to use the cmp/diff tools. Take a look at incremental diffing and diff tools such as patchutils.
Also consider creating checksums of huge files to avoid comparing them for nothing.
PS: also take a look at this: http://jojodiff.sourceforge.net/ - I haven't tried it, but it seems simple to use and promising.
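To illustrate the diff/patch approach, here is a minimal sketch; the directory names are placeholders, and it assumes each change set is captured as a patch against a pristine baseline copy:

    # capture the current state as a named change set
    diff -ruN baseline/ current/ > FileChangeSet1.patch

    # later: restore the pristine baseline copy, then re-apply any saved change set
    patch -p1 -d baseline/ < FileChangeSet1.patch

    # optional: checksum huge files first so unchanged ones are never diffed
    md5sum baseline/big.bin current/big.bin

Switching between change sets then amounts to restoring the baseline and applying a different patch file.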
Mercurial is the right tool for me. With it I can handle my business case easily, as follows:
Install Mercurial on Windows; it integrates into Windows File Explorer.
Create a local Mercurial version control database by right-clicking my root folder.
Now I can open all the files under my root folder in different text editors, e.g. Notepad++, and modify them.
When I want to save/remember a specific state, I simply commit the files to Mercurial by right-clicking the root folder; I can provide a commit note.
Later I can change my files in a different way and test how my system reacts to them; again, I can commit these files locally.
Over time I build up a history of change sets in Mercurial, and I can go back to any change set, branch it, merge it, etc.
I have a huge and complex system that contains thousands of files (my root folder is actually the C:\ drive), and I can easily and quickly turn C:\ into a version control database using Mercurial.
All with a simple and intuitive GUI; no command-line learning needed.
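For completeness, the same workflow from the command line is only a few commands (the repository path is a placeholder):

    hg init C:/work                    # one-time: turn the folder into a repository
    hg commit -A -m "FileChangeSet1"   # snapshot the current state of all files
    hg update -r 0                     # switch back to any earlier change set

"hg update -r" is what makes switching between FileChangeSets quick: Mercurial rewrites the working folder to match the chosen revision.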

Hybrid version control & sync system?

Is anyone aware of a hybrid version control and synchronising system?
I'm currently a happy mercurial user, but my projects usually contain a mixture of files.
Most of these (code, documentation, ...) I want to be version-controlled. This is why I use mercurial.
However, on rare occasions I have files that I would like to synchronise between my working copies, but not version control.
For example, I version control the code I write to do image processing. This code can produce a whole bunch of output images which I'd like to have synchronised so I don't have to remember to shuffle them around my various computers, but there's no point having these version controlled.
To clarify - I am aware of extensions to Mercurial such as bfiles and bigfiles, which are handy for my image example, but I was just wondering if anyone out there knows of alternative ways to handle this. I just want one system that I can tell "version control all files except those ones, which should be synced but have no history".
cheers!
EDIT: I could do something like adding an hg marksync <filename> command that adds <filename> to a list of files to be synced, and then adding a hook to hg push/hg pull that would (say) run rsync (or whichever sync tool) in the background, but I wondered if there was a less hacky solution (I think bfiles/bigfiles do something along these lines anyway).
A version control system (any of them) doesn't care about synchronizing unversioned data outside its default paths. If you want to sync arbitrary files, use tools specially designed for this task, e.g. rsync.
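As a sketch of how the two can be combined without any Mercurial extension (the host and paths are placeholders), a small wrapper can run the sync right after every push:

    #!/bin/sh
    # push the versioned code, then mirror the unversioned output images
    hg push "$@" && \
      rsync -avz --delete output/images/ user@otherhost:project/output/images/

The same rsync line could instead be attached as a Mercurial post-push hook in .hg/hgrc, which is essentially the less hacky version of the idea in the question's edit.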
This code can produce a whole bunch of output images which I'd like to have synchronised
Is this DATA or part of your CODE?
If data: keep it out of your versioning system, just don't go there. If it is part of your code (like layout images), check it in. Those are the only generally accepted ways.
A nice solution for the data would be syncing OR generating them. So you might add a step after deployment to a server: GenerateImages().
Edit: in addition to the comment made by the thread starter:
If the images are data and you need to process them on a different system, don't think about the version control of your code. It is unrelated. The steps that would make sense to me, in order of processing:
Start by updating your image code and checking it into version control. Then deploy (yes, this is deployment) the updated code to the cruncher computer. Now the code is done.
Then you have tasks which the number cruncher should handle, like processing the images. So start that processing either from the cruncher itself (probably some queue happens there) or from a central dispatcher.
Then you have the results locally at the cruncher. Now something has to happen with that data, so that's also part of your software. Decide whether you want the cruncher to send the results to some central storage, your workstation, or another location, and let the software handle that. This is the hardest part as I read through your question. Many solutions are possible, from plain FTP/network transfers to specific storage solutions. I'm willing to help, but I'd need more info about the real constraints, amounts, sizes, etc. of these parts.
If a new, updated version of the image processor makes the old generated images obsolete, implement that in your code as well, for example by attaching an attribute to the generated files, using a separate folder, or some other indication. That way you could ask the cruncher, after an update, to regenerate any obsolete files.

Making and Interfacing with Custom Services

I've been searching for this for a while now, and I am not sure if I am just not using the correct search terms or if the answer is really that hard to find.
What I am trying to do is to create a new Windows service for a game server from a batch file, and then have a task run another batch file every 30 minutes or more that would run two commands on the game server's command line and do some file work.
Specifically, I am running a Minecraft server using Bukkit for a gaming community I help run, and I want to make sure that the thing is always up unless I specifically tell it to stop (like a service). Bukkit is run directly from a batch file and has its own command line running in it.
I am told that you CAN run this type of thing as a service, but the command line will be hidden from view and/or interaction. This is the second part of my query. I have a handy little backup.bat file that copies all the world files and userdata files into a backup directory, 7-zips it, and deletes the directory. The only thing is that Minecraft likes to keep the worlds' region files open and writing at all times, which could cause map corruption if I just run the backup straight off. To compensate, I need to run the command "save-off" on the server to disable the file hooks temporarily, run the backup, and as soon as it finishes, run "save-on" so that the game can continue without lost data.
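For reference, the file-work part of such a backup.bat might look like the sketch below (all paths are placeholders, and it assumes 7z.exe is on the PATH); the "save-off"/"save-on" steps are exactly the part that still needs a way to reach the server console:

    rem copy the live files into a staging directory
    xcopy /E /I C:\server\world C:\backup\staging\world
    xcopy /E /I C:\server\userdata C:\backup\staging\userdata
    rem archive the staging directory, then remove it
    7z a C:\backup\world-backup.7z C:\backup\staging\*
    rmdir /S /Q C:\backup\staging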
What I would like to know about this second part is: is it possible to interface with the game service through a batch file, or do I need to create an application to do that? If the latter, how exactly does one go about it? I have moderate C++ knowledge (through my second OO C++ course in college) and can learn another language if absolutely necessary.
So, in short, two questions:
1. Is it possible to run a BAT file as a Windows service, and if so, how?
2. How do I interface with said service via BAT files? If that's not possible, what kind of application do I need to write (a pointer to a tutorial, or a written one, works for me)?
Thank you in advance for any and all help!
Old question, user account doesn't seem active on SO anymore, but hey, if you stumble upon this because you have a similar problem:
Since we are speaking about a Bukkit Minecraft server, turn to the "Essentials" plugin for Bukkit.
It now includes a backup function that does exactly what the OP asks for: it stops saving so the files can be manipulated without corruption, launches a script, and then starts saving again.
The script can be a backup one (examples are provided on the linked page), but it can be used to run any operation on the world's files.

Identifying files for a hot fix/patch

We (occasionally!) have to issue hot fixes for our product and do this by reissuing the affected files directly rather than with a new installer. The product has a large number of pieces, some managed code, some unmanaged.
Currently, development flags which build artifacts (EXEs, DLLs) need to be shipped in a hot fix. We'd like to be able to identify these automatically by comparing them to the previous build. A simple binary diff doesn't work, since the version numbers on all the files have changed: stamping the files with a new number is part of the build.
Are there any tools that will do a more intelligent comparison and decide which files should be included? We'd still have a developer check the list, this is more to catch files the developer didn't think of than the other way around.
(Note: changing the hot fix/build process is not an immediate option, whether or not we should be shipping individual files is a different discussion!)
These are the options I see:
On your build machine, get a report of the files that were changed and use the directory structure of the file paths to determine which DLLs were really updated. Not sure if this breaks your "no build process changes" rule or not.
If you want to wait until after the build, I would recommend using a binary file diff tool like http://www.romeotango.com/Downloads/FileCompReadMe.txt. Using that, you can get back a set of diffs, so your script that drives the tool just needs to ignore the diffs that occur as a result of the version number. You can figure out the pattern of where the version number appears by using a controlled scenario where you know the two binary files are the same except for the version number, and noting where the differences are. Do that for a few of your DLLs, and hopefully a pattern emerges clearly enough that you can script it.
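On Windows, the stock fc tool is enough to sketch that script; the file names and the two offsets below are hypothetical, with the offsets coming from the controlled comparison described above:

    rem byte-level compare: fc /b prints one "OFFSET: oldbyte newbyte" line per difference;
    rem keep only the lines that start with a hex offset, drop the hypothetical
    rem version-stamp offsets, and count what is left; nonzero = a real change
    fc /b old\Product.dll new\Product.dll ^
      | findstr /r "^[0-9A-F][0-9A-F][0-9A-F][0-9A-F]" ^
      | findstr /v /b /c:"000001A0:" /c:"000001A4:" ^
      | find /c ":"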