Monorepo - lock app to stay on a specific package version

I have a monorepo that contains packages, and apps that use those packages.
I am using pnpm workspaces, and all apps depend on the packages via workspace:*.
Now I've updated one of the packages with breaking changes, and one app is not ready for it. Is there any way to tell that app to use the previous version, other than dropping the workspace alias, i.e. using ^1.2.3 instead of workspace:*?
I am just wondering if there is a rule or common practice.
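
For illustration, a minimal sketch of what that looks like in the one app that has to stay behind (the package name @acme/shared-lib and the version are placeholders); the other apps keep "workspace:*":

    {
      "dependencies": {
        "@acme/shared-lib": "^1.2.3"
      }
    }

Whether a plain range like this resolves from the registry or still gets linked to the local workspace copy depends on your pnpm configuration (for example the link-workspace-packages setting), so it's worth checking how your workspace is set up.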

Related

Upgrading to LibMan from NuGet

We have a web app project that still uses NuGet for content package management (jQuery, Knockback, knockoutjs, etc.). We are trying to convert to LibMan, and are running into an issue where some older packages do not exist (for instance walltime-js). How do we work around this issue?
Try using a different provider. The current default, Cdnjs, is a curated catalog; the other two providers, JSDelivr and Unpkg, host any package that's available in NPM and thus have much broader catalogs.
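
For example, a libman.json that makes jsdelivr the default provider while still pulling one library from cdnjs (the version numbers below are placeholders - check what each provider actually hosts):

    {
      "version": "1.0",
      "defaultProvider": "jsdelivr",
      "libraries": [
        {
          "library": "walltime-js@0.2.0",
          "destination": "wwwroot/lib/walltime-js"
        },
        {
          "provider": "cdnjs",
          "library": "jquery@3.7.1",
          "destination": "wwwroot/lib/jquery"
        }
      ]
    }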

Using ANT to source control the code in cloud (NetSuite)

For our ERP platform (NetSuite), the customization code rests in the cloud. We (different entities) can directly make changes to it, but there is no source control available to us in the cloud.
It is possible to fetch the code files through a SOAP API.
I was wondering if it is possible to get the files through the API using Apache Ant and push them into TFS/SVN.
I am not familiar with Apache Ant, so I do not know whether it is capable of fetching information through an API.
(You can also suggest a better approach to source controlling the code in the cloud.)
Ant has several third-party task plugins for version control tasks. Plus, you can always use the <exec/> task to build an equivalent command-line checkout. However, I do not recommend using Ant to fetch versions from your version control system. This ends up being a chicken-and-egg issue.
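For completeness, here is a minimal sketch of what such an <exec/>-based checkout could look like (the repository URL and target directory are hypothetical):

    <target name="checkout">
        <!-- Shell out to the svn command-line client; URL and paths are hypothetical. -->
        <exec executable="svn" failonerror="true">
            <arg value="checkout"/>
            <arg value="https://svn.example.com/netsuite/trunk"/>
            <arg value="build/workdir"/>
        </exec>
    </target>

That said, here is why I advise against it.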
Your build script is in version control. You need to fetch it in order to run Ant against it. If you're fetching your build script, why not the rest of your project?
Once you checked out your project in a working directory, and want to do an update, why not let Ant do the update? Because your build script is also version controlled. Doing an update and build at the same time could have you running the wrong version of your build script against your build.
Maybe you're going to check in the files that were modified by the build system. Not a good idea. You should rarely, if ever, check in files you built. If you need them, rebuild them. Built files are usually binary in nature, and can vary greatly from one version to another. In most cases, your version control system will be checking in completely new versions of the built object instead of using diff format. That takes up a lot of room in your version control system.
Even worse, you can't diff the built objects, so you can't really verify their content or trace their history. And built objects tend to age very quickly: something built last month is already obsolete. Within a year, the vast majority of the information in your version control system will be nothing but obsolete binaries, and very little of what is stored will be useful code.
Besides, your version control system has nothing to do with building your files. Imagine between Release 2.1 and Release 2.2, you change version control systems from Subversion to Git. Now, a bug in Release 2.1 needs to be fixed, and you need to create Release 2.1.1. Your checkout code in your build scripts will no longer work.
If you're using the NetSuite IDE, you're using Eclipse, and Eclipse is great at handling version control. Eclipse can handle both SVN and TFS (although I don't know why anyone would use TFS). Eclipse tracks file changes quite nicely. In fact, Eclipse gets confused when you change files behind its back (such as when you do an update outside of Eclipse).
Let Eclipse handle your version control issues. It presents a common interface to almost all version control systems. This way, your build system can handle the builds.
I'm not sure what other requirements you might have, but if you use the NetSuite IDE (Eclipse + bundled plugin), you can use it to pull and push files to NetSuite. And then you can use any source control system you like (we use SVN, for instance).

How to upgrade Wordpress and plugins when deploying using Capistrano?

I'm hoping someone can confirm whether or not the following scenario is an issue when deploying updates to WordPress sites and, if so, suggest how best to manage it.
The basics:
I have a local development WordPress Multisite project for which I use GIT and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially - they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync these up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when doing an auto-update via the admin. I am updating WordPress core and plugins locally on my development install. The code for these updates ends up being committed, pushed and deployed. However, when the code is deployed it is simply adding/deleting/replacing changed files on the staging and production servers. Production and staging are missing any of the updates to the database, since those are usually part of the auto-update process - e.g., deactivate, update, activate (which runs any database updates).
My Questions:
Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress, e.g., 3.2 to 3.3?
If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when you upgrade the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some of them may have an upgrade process built into their own upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes, when updating, if there is a change to the database schema, then WordPress will not function properly unless the new schema exists. When attempting to access the admin side of WordPress, if the db version is lower than your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains the migration scripts that upgrade the database incrementally from each previous version to the next until the version number is up to date, as seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
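As a simplified illustration (a sketch of the idea, not the actual core source), the check amounts to comparing the stored option against that global:

    <?php
    // Sketch only, not the actual core code. $wp_db_version is defined in
    // /wp-includes/version.php; 'db_version' is the option WordPress records
    // after each successful database upgrade.
    global $wp_db_version;

    if ( (int) get_option( 'db_version' ) < $wp_db_version ) {
        // WordPress redirects admin requests to wp-admin/upgrade.php, which
        // applies the incremental migrations until the two versions match.
    }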
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose the plugin specifically requires that you do so. If the schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When that plugin is upgraded, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to happen does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database from the one it will ultimately be used on.
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress - a plugin you would write that deactivates/activates plugins, or an existing one that already does this.
It's possible some existing plugins might handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins for each one, so developing one yourself that does exactly what you need might be the way to go. Developing a plugin is feasible if you make use of the tools WordPress already provides:
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side, in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivation hooks from wiping out plugin configuration for plugins that clean up after themselves, so consider passing $silent as true when deactivating a plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins are already activated, and only run your script on those (make sure the plugin excludes itself from the process).
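As a rough sketch of that idea (not tested code - the cycle_plugins query variable is a placeholder, and you would want to add some kind of authentication before exposing anything like this):

    <?php
    /*
     * Rough sketch only. Lives in a small custom plugin and is triggered by the
     * deploy script hitting a URL such as /?cycle_plugins=1. The query variable
     * name is a placeholder, and there is no authentication here - add your own.
     */
    add_action( 'init', function () {
        if ( ! isset( $_GET['cycle_plugins'] ) ) {
            return;
        }

        require_once ABSPATH . 'wp-admin/includes/plugin.php';

        // Plugins currently active on this site.
        $active = (array) get_option( 'active_plugins', array() );

        // Exclude this plugin itself so it doesn't switch itself off mid-run.
        $active = array_diff( $active, array( plugin_basename( __FILE__ ) ) );

        // $silent = true skips the deactivation hooks, so plugins that clean up
        // after themselves don't wipe their own configuration.
        deactivate_plugins( $active, true );

        foreach ( $active as $plugin ) {
            // Re-activate with hooks enabled so any upgrade routines get to run.
            activate_plugin( $plugin );
        }
    } );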
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way either. You should just need to be able to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that, obviously, all those WordPress functions need to be available, so you can't just run this as an independent script.
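For the Capistrano side, something as small as this (Capistrano 2-style syntax; the task name, :site_url variable, and URL are placeholders) could hit that URI after each deploy:

    # config/deploy.rb - hypothetical hook; adjust names and URL to your setup
    after "deploy", "wordpress:cycle_plugins"

    namespace :wordpress do
      task :cycle_plugins, :roles => :web do
        # Hit the plugin endpoint (see the PHP sketch above) so plugins are
        # deactivated/reactivated against the database the site actually uses.
        run "curl -fsS '#{fetch(:site_url)}/?cycle_plugins=1'"
      end
    end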

All in one installers or Core + Plugin Sub installers?

Build everything into one installable unit
Pros
One package to test and deploy to all environments
All plugins installed but possibly not registered for use in the config
Cons
Plugins may be in various states of the pipeline - how do you deploy only the good ones?
How do you handle registering which plugins to deploy to which environment?
Hard to change your mind - it might be a month between the dev build and the prod push
Build Core Installer (no plugins) + Sub installers (only the plugin)
Pros
Smaller footprint pushed to prod - less room for errors
Cons
Possibility of integration errors between plugins since they might be installed in various orders
How to roll back a deployment when the previous deployment could be a strange assortment of core and sub-installers? You need a way to track what a specific installation contains.
How to reproduce errors in QA when QA has all plugins and prod may have a smaller, possibly older, subset?
These are my thoughts on the issue. I have been struggling to have my cake and eat it too, but I seem to be stuck with these two choices. Has anybody else struggled with this issue, and how did you resolve it? Any other pros and cons that I missed? So far I have chosen the all-or-nothing approach, but I am open to ideas.
Thanks in advance.
Building everything into one package is easier to test and to deploy. At build time, and through testing, you guarantee all the plugins are compatible with each other. Depending on the nature of the product, you can create bundles of plugins that can be selected during installation.
Of course, there should be an option to remove plugins that are not yet production-ready from the installation package. But ensure QA gets exactly what goes to customers or stakeholders.
With the separate-packages approach, you have to implement dependency tracking and so forth. It is more flexible, which results in lots of possible combinations to support.
I'd choose the first option: one single package with everything and ability to fine-tune the selected features/plugins.
There's also one more option: a combination of the two approaches above. Consider the Eclipse project: it has a common platform and plugins. One can download a package with the set of plugins that are usually used in a particular environment. Other plugins can be installed later, if needed. So you combine your core with several logically connected plugins; other plugins can be added to the installation later.

Version control workflow with 'external' binary files

I'm working with an embedded system software project right now, and we're facing some problems dealing with some precompiled binaries living inside our repository.
We have several repositories for different parts of our project: One for the application itself, one for the OS, one for the bootloader and several libraries. All of them, except the one for our application, are shared with other teams, for other projects. We are using git (and changing is not an option right now), but I think we'd have the same problem with any VCS.
Right now, we have a precompiled binary for each of those components living inside our application repository. The idea was to speed up the build time, since the OS alone takes about 20 minutes to build from scratch and most guys work only with the application.
The problem is that there are several bugs/features in those binaries (and in the related application code) to be integrated at any time and, as you know, diffing and merging binaries won't work.
So, how do you guys do when you have to work with those external dependencies?
Thanks a lot =)
One viable solution is to use an external binary repository like Nexus.
It is not linked to a VCS, meaning you can easily clean up old versions of those binaries when you don't need them anymore.
It is lightweight (a simple HTTP client-server protocol; there is no need to clone the whole repo with all the versioned binaries, as you would with a DVCS like git or mercurial).
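
To give a feel for how light that HTTP workflow is, fetching or publishing one of those prebuilt images from a build script might look roughly like this (the repository name, URL layout, and credentials are hypothetical and depend on how your Nexus instance is set up):

    # Download the prebuilt OS image the application build needs
    curl -fsS -o os-image.bin \
        https://nexus.example.com/repository/firmware/os/1.4.2/os-image.bin

    # Publish a freshly built image (raw-repository style upload)
    curl -fsS -u ci-user:secret --upload-file os-image.bin \
        https://nexus.example.com/repository/firmware/os/1.4.3/os-image.bin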