For our ERP platform (NetSuite), the customization code lives in the cloud. We (several different entities) can make changes to it directly, but there is no source control available to us in the cloud.
It is possible to fetch the code files through a SOAP API.
I was wondering whether it is possible to fetch the files through that API using Apache Ant and push them into TFS/SVN.
I am not familiar with Apache Ant, so I do not know whether it is capable of fetching information through an API.
(You can also suggest any better approach to putting the cloud-hosted code under source control.)
Ant has several third-party task plugins for version control tasks. Plus, you can always use the <exec/> task to build an equivalent command-line checkout. However, I do not recommend using Ant to fetch versions from your version control system. It ends up being a chicken-and-egg problem.
Your build script is in version control. You need to fetch it in order to run Ant against it. If you're fetching your build script, why not the rest of your project?
Once you've checked out your project into a working directory and want to do an update, why not let Ant do the update? Because your build script is also under version control. Updating and building at the same time could leave you running the wrong version of your build script against your build.
Maybe you're going to check in the files that were modified by the build system. Not a good idea. You should rarely, if ever, check in files you built. If you need them, rebuild them. Built files are usually binary in nature and can vary greatly from one version to another. In most cases, your version control system will store a completely new copy of the built object instead of a diff, and that takes up a lot of room in your version control system.
Even worse, you can't diff the built objects, so you can't really verify their content or trace their history. And built objects tend to age very quickly: something built last month is already obsolete. Within a year, the vast majority of the data in your version control system will be nothing but obsolete binaries, and very little of what is stored will be useful code.
Besides, your version control system has nothing to do with building your files. Imagine that between Release 2.1 and Release 2.2 you switch version control systems from Subversion to Git. Now a bug in Release 2.1 needs to be fixed and you need to create Release 2.1.1, but the checkout code in your build scripts no longer works.
If you're using the NetSuite IDE, you're using Eclipse, and Eclipse is great at handling version control. Eclipse can handle both SVN and TFS (although I don't know why anyone would use TFS). Eclipse tracks file changes quite nicely. In fact, Eclipse gets confused when you change files behind its back (such as when you do an update outside of Eclipse).
Let Eclipse handle your version control issues. It presents a common interface to almost all version control systems. This way, your build system can handle the builds.
I'm not sure what other requirements you might have, but if you use the NetSuite IDE (Eclipse + bundled plugin), you can use it to pull and push files to NetSuite. And then you can use any source control system you like (we use SVN, for instance).
I'm developing an Eclipse plugin and I've run into this problem several times already.
I always keep my Target Platform updated to the latest (stable) Eclipse release so that I test my code against all the recent updates, fixes, etc.
However, this may (and has) resulted in accidental breakage of my plugin's backward compatibility, e.g. when I accidentally use new API that did not exist in the Eclipse version I aim to support.
Or, a sneakier example: in 4.6 Eclipse moved to Java 8, and some interface methods got default implementations. Now when I implement those interfaces, my IDE doesn't automatically generate empty implementations for them and no error is reported. If I install and run this code against a previous Eclipse version, those methods throw AbstractMethodError since no implementation has been provided.
So my question is: is there a tool to restrict the API my Target Platform provides to some earlier Eclipse API version?
Is an API Baseline an appropriate tool for this? I couldn't get it to work that way. (It allowed even non-baseline method calls, not to mention the more complex default-methods example.)
You can use multiple target platforms; switching between them doesn't take long. For testing Stack Overflow questions, I have one Eclipse install with 10 target platforms.
So have a target platform for the oldest release you want to support, as well as your current-release target platform, and check that the code runs against it.
It is particularly important to test with the actual Target Platform if you want to support Eclipse 3 releases, as there were large changes going from Eclipse 3 to 4.
I'm hoping someone can confirm whether or not the following scenario is an issue when deploying updates to WordPress sites and, if so, whether you have a solution for how best to manage it.
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed, and deployed. This means that I have to log in and activate plugins initially - they are simply installed via the Capistrano deploy.
The databases on development, staging, and production are different, and I'm not concerned about trying to sync them up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when doing an auto-update via the admin. I am updating WordPress core and plugins locally on my development install. The code for these updates ends up being committed, pushed, and deployed. However, when the code is deployed, it simply adds/deletes/replaces changed files on the staging and production servers. Production and staging therefore miss any of the updates to the database, since those are usually applied as part of the auto-update process - e.g., deactivate, update, activate (run any database updates).
My Questions:
Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress, e.g., 3.2 to 3.3?
If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance-mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so that any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when upgrading the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some of them may have an upgrade process built into their own upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes, when updating, if there is a change to the database schema, then WordPress will not function properly unless the new schema exists. When attempting to access the admin side of WordPress, if the db version is lower than your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains migration scripts to upgrade the database incrementally from each previous version to the next until the version number is up to date, as seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
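To make the mechanism concrete, here is a minimal sketch (my own illustration, not code from WordPress or from this answer) of how the stored database version can be compared with the version the deployed code expects; it assumes it runs inside a loaded WordPress environment:

    <?php
    // Sketch: compare the database schema version recorded in the options table
    // with the version the deployed WordPress code expects.
    // Assumes WordPress is already loaded (e.g. this runs from a plugin).

    global $wp_db_version;                          // defined in wp-includes/version.php
    $installed_db_version = (int) get_option( 'db_version' );

    if ( $installed_db_version < $wp_db_version ) {
        // The stored schema version is behind what the deployed code expects;
        // this mismatch is what makes wp-admin redirect you to its database
        // upgrade page.
        error_log( sprintf(
            'Database schema is behind: stored %d, code expects %d.',
            $installed_db_version,
            $wp_db_version
        ) );
    }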
2. If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless, I suppose, the plugin specifically requires it. If schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When that plugin is upgraded, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to happen does.
However, you may need to deactivate/reactivate plugins separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database from the one it will ultimately be used with.
Perhaps the best thing to do would be to have your deployment script hit a URI handled by a plugin within WordPress: either a plugin you write yourself that deactivates/reactivates plugins, or an existing one that already does it.
It's possible some existing plugins handle parts of what you're looking for, but I take the key component of your question to be automation - avoiding having to log into each environment and upgrade plugins for each one - so developing a plugin yourself that does exactly what you need might be the way to go. Developing such a plugin is quite possible if you make use of the tools WordPress already provides:
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what else you might find useful. Additionally, check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php, just to see how it's done. You may want to stop the deactivation hooks from wiping out plugin configuration for plugins that clean up after themselves, so consider passing $silent as true when deactivating a plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins are currently active, and only run your script on those (and make sure the plugin excludes itself from the process).
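As a rough sketch only (this is my illustration, not code from the answer; the query-string trigger, the shared secret, and the plugin name are all assumptions), such a plugin could look something like this:

    <?php
    /*
    Plugin Name: Deploy Plugin Cycler (sketch)
    Description: Hypothetical example - deactivates and reactivates every active
                 plugin when a secret URI is hit, so activation hooks run after a deploy.
    */

    add_action( 'init', function () {
        // The 'cycle_plugins' query var and the secret value are assumptions for
        // this sketch; use whatever trigger/authentication suits your deploy script.
        if ( ! isset( $_GET['cycle_plugins'] ) || $_GET['cycle_plugins'] !== 'my-deploy-secret' ) {
            return;
        }

        // activate_plugin() / deactivate_plugins() live in an admin include.
        require_once ABSPATH . 'wp-admin/includes/plugin.php';

        $self = plugin_basename( __FILE__ );

        foreach ( (array) get_option( 'active_plugins', array() ) as $plugin ) {
            if ( $plugin === $self ) {
                continue; // never cycle this plugin itself
            }
            // $silent = true skips the deactivation hooks, so plugins that clean up
            // after themselves don't wipe their own configuration.
            deactivate_plugins( $plugin, true );
            // Reactivate normally so the activation hook (and any upgrade routine
            // attached to it) actually runs.
            activate_plugin( $plugin );
        }

        wp_die( 'Plugins cycled.' );
    } );

Note that on a Multisite install, network-activated plugins are stored in the active_sitewide_plugins site option rather than in active_plugins, so a real implementation would need to handle those separately. Your Capistrano deploy could then simply request the secret URI (e.g. with curl) as a final step.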
3. What about changes in WordPress, e.g., 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way, either. You should just need to hit a URI handled by the plugin, and that should get things rolling within the application. The important thing is that all of those functions obviously need to be available, so you can't just run this as an independent script.
I'm working with an embedded system software project right now, and we're facing some problems dealing with some precompiled binaries living inside our repository.
We have several repositories for different parts of our project: one for the application itself, one for the OS, one for the bootloader, and several for libraries. All of them, except the one for our application, are shared with other teams for other projects. We are using Git (and changing is not an option right now), but I think we'd have the same problem with any VCS.
Right now, we have a precompiled binary for each of those components living inside our application repository. The idea was to speed up the build time, since the OS alone takes about 20 minutes to build from scratch and most guys work only with the application.
The problem is, there are several bug fixes/features in those binaries (and the related application code) to be integrated at any time and, as you know, diffing and merging binaries won't work.
So, how do you guys do when you have to work with those external dependencies?
Thanks a lot =)
One viable solution is to use an external binary repository like Nexus.
It is not linked to a VCS, meaning you can easily clean up old versions of those binaries that you don't need anymore.
It is lightweight (a simple HTTP client-server protocol; there is no need to clone the whole repo with all of its versioned binaries, as you would with a DVCS such as Git or Mercurial).
I'm looking for a version control system just for myself, on my Windows computer, to integrate into Eclipse. I was thinking of using Mercurial instead of Subversion, but I have doubts about the Mercurial Eclipse plugin. Any input on this that you can help me with?
Is it worth having a version control system when you're working alone, and how much is it going to complicate matters? I don't think I need a remote repository since it's just for me. And what is known to work well in Eclipse?
Is it worth having a version control system for just yourself? Absolutely. Why?
You can retrieve old versions of code - for reference, or to revert changes.
You can branch and tag to create different versions and checkpoints for releases.
Your continuous integration system (you do have one, don't you?) can tag successful builds, allowing you to identify particular intermediate builds.
You can record in the logs why you've changed stuff (as opposed to what you've changed), and meta-information surrounding those changes.
So version control isn't just about coordinating multiple developers; it's about managing the codebase itself.
What works well in Eclipse? I can vouch for Subversion; I've used it successfully for a couple of years. I'm not sure I'd use a distributed system like Mercurial unless I were in the habit of, say, developing on a laptop on the move and on a desktop at home.
I have used the Subclipse plugin for SVN and it worked like a charm. As to whether or not you need a version control system while working alone: it's still a good idea. It will save your version history and allow for easy rollbacks. Also, if you ever bring another person onto the project, it will be easy to get them up and going.
Is it worth it to have a version control system when you're working alone
Yes, of course. You will always run into cases like: "The application was running yesterday, but I don't know what I did!"
how much is it going to complicate matters?
It will not complicate anything. You will just need to spend half an hour at the beginning to set it up; then committing, branching, uploading, and sharing will all be one click away in Eclipse.
I don't think I need a remote repository since it's just for me.
I work alone as well, but you never know. If you work from different sites, it is good to have all your work in the cloud, and sometimes you run into cases where you want to share a project with a friend - and what's better than SVN in that case?
And what is known to work well in eclipse?
I use Subversion inside Eclipse, and TortoiseSVN in Explorer.
If you want to set up your own SVN server (with the benefit of not needing to upload/download over the internet each time), check here:
Create SVN Server on Windows
Create SVN Server on Mac
It IS important to have an SCM system even when working alone.
I'd suggest creating a project on code.google.com or sf.net (unless, of course, you don't want an open-source license).
Eclipse has a built-in "Local History"; you can check whether it is sufficient for your needs. Otherwise, you can simply install an SVN server from Collab.net and use it on localhost with Subclipse.
There is also a Git plugin for Eclipse. The only problem is that merge is not integrated yet, but it is planned for the near future.
egit
The Mercurial plugin for Eclipse seems to work fine. I don't think it has all of the features that Subclipse has for Subversion though.
If you are working alone on a project, though, you need to ask yourself why you are using a distributed version control system. For a project by yourself, Subversion works great. I use Subversion and Eclipse for projects I work on by myself. It gives you history and rollback capabilities even with a single person. It is nice sometimes to be able to see what you did before that you might have since deleted.
In a team environment though, the Mercurial plugin for Eclipse works fine.
What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact configuration of Visual Studio, InstallShield, and other tools (including the correct patch versions) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control and then building with them. This would also simplify setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software into source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account the non-file parts of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems like such a basic part of configuration management, but I've been unable to track down any resources on how to do this. What are your suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software that happened to be installed (Office). Your first two options don't solve anything that involves shared COM components. If you don't have any shared-component issues, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical tiers in your architecture, like a mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control, stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that has a list of the ISO files that will be needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
It is a good solution, but it implies packaging (i.e., reducing the number of files in that library to a minimum).
It does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that this set will evolve with each new version of your project. UCM and its 'composite baseline' might provide the beginning of an answer to that.
The packaging aspect (minimum number of files) is important because:
You do not want to access your libraries over the network (such as through a dynamic view), because compilation times are much longer than when you use locally stored library files.
You do want to get those libraries onto your disk, meaning a snapshot view, meaning downloading those files... and this is where you will appreciate having packaged your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem where everything is put into releases and versions. Release links (essentially symlinks) point to the version being used by your project. When a new version comes along, it is simply added to the filesystem and you can swing your symlink over to it. There is a full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps, which tend to use machine-local facilities such as the registry to store things like configuration.
Are you using a build or continuous integration (CI) tool like NAnt to do your builds?
As a .NET example, you can specify a specific framework version for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
Following up on my own question, I came across this posting referenced in the answer to another question. Although it is more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system at any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem of archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.