I've developed a Composer package that's required by many Laravel projects hosted in GitHub repos. It's pinned to a specific version, and even if I loosen the constraint slightly by using an asterisk for the patch version, I still need to run composer update in each project that requires the package so that composer install on a server installs the correct version.
The issue I'm facing is that when I release a new package version, I have to run composer update locally once for each of, say, 15 projects, then commit all 15 changes, open pull requests for all 15, and so on, which makes the process incredibly slow.
Is there a better way to handle composer update? Maybe I need to develop a little CLI application that talks to the GitHub API to open PRs and merge them?
What you are describing is exactly what's expected and intended to happen. Full projects with committed lock files are supposed to install the locked version unless updated.
You could use something like this composer update action to run regularly and create commits when necessary, or use the GitHub-provided Dependabot.
But if this is not coupled with a robust test suite and finely tuned version constraints, you could end up breaking already-working projects because some random dependency introduced an unexpected change in behavior.
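If you'd rather drive it yourself, as you suggest, most of that little CLI already exists in GitHub's gh tool. A minimal sketch, assuming gh is authenticated, with made-up repo and package names:

    # Bump the package in each project and open a PR for it.
    # The repo list and package name below are placeholders.
    for repo in myorg/project-1 myorg/project-2; do
      git clone "git@github.com:$repo.git" work && cd work
      git checkout -b bump-my-package
      composer update myvendor/my-package   # refresh just this package in the lock file
      git commit -am "Bump myvendor/my-package"
      git push -u origin bump-my-package
      gh pr create --fill                   # opens the PR via the GitHub API
      cd .. && rm -rf work
    done

You could extend this with gh pr merge --auto per project, but the warning above about test coverage applies doubly once merges are automated.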
I have a GitHub workflow which builds and deploys a snapshot version of a library as a GitHub package, e.g., mycompany.mytool.1.0.0-SNAPSHOT.jar. Whenever I make a new build and deploy, a new asset is created instead, e.g., mycompany.mytool.1.0.0-20210723.145233-1.jar, which is then somehow associated with the SNAPSHOT tag. This all seems to work, and I can access mycompany.mytool.1.0.0-SNAPSHOT.jar without problems.
My question now is: how can I get rid of all these older versions of this jar? I really just want to keep the latest version. I can delete them manually via the web interface, but that is a more than awkward task. I would like to somehow automate this too.
This is not possible as of this writing. GitHub staff member Jamie Cansdale wrote this in their community forum:
SNAPSHOT versions are exposed as artifacts inside a regular version. There isn't an API for cleaning up artifacts, only whole versions.
(source)
This means that a single SNAPSHOT version (like 1.0.0-SNAPSHOT) will accumulate all the builds you make, and all their artifacts will show up in the Assets list on the right of the web page.
The only practical solution I can think of is to delete the whole version from a script before publishing each build's artifacts. You'd then end up with a single set of artifacts stored under the 1.0.0-SNAPSHOT version name.
However, this solution is not ideal: public package versions cannot be deleted once they are popular enough (probably to prevent squatting attacks):
If the package is public and the package version has more than 5,000 downloads, you cannot delete the package version. In this scenario, contact GitHub support for further assistance.
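That said, a sketch of the "delete the whole version before publishing" approach, using the GitHub Packages REST API through the gh CLI. The owner, package, and version names are placeholders, and the token needs the delete:packages scope:

    OWNER=mycompany
    PKG=mycompany.mytool
    SNAPSHOT=1.0.0-SNAPSHOT

    # Find the id of the SNAPSHOT version, if it already exists...
    id=$(gh api "/orgs/$OWNER/packages/maven/$PKG/versions" --paginate \
          --jq ".[] | select(.name == \"$SNAPSHOT\") | .id")

    # ...and delete the whole version, with all its accumulated assets,
    # before publishing the new build.
    [ -n "$id" ] && gh api -X DELETE "/orgs/$OWNER/packages/maven/$PKG/versions/$id"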
I am using Sitecore 6.6.0, and we have multiple environments:
Local
DEV
QA
PROD
I have to deploy a few changes directly from Local to PROD (don't ask me why directly to PROD; even if it were to QA, my question would remain the same). What I am doing is creating a package on my local machine with all the items, separately creating a folder structure for all the files related to the fix, and deploying that to PROD.
There is always a chance of human error, since I have to remember all the items and files associated with a fix. So is there a better, automated way that will not skip any changed items or files?
On another note, I am using Bitbucket for source-controlling the Sitecore code, but what about the Sitecore databases? Most Sitecore development lives in the databases. What is the best approach to source-controlling Sitecore databases?
Update
I installed the packages from NuGet.
After installing Unicorn and unicorn.default.config from NuGet, I get the following error:
Attempt by method 'Unicorn.Data.DataProvider.UnicornDataProvider..ctor(Unicorn.Data.ITargetDataStore, Unicorn.Data.ISourceDataStore, Unicorn.Predicates.IPredicate, Rainbow.Filtering.IFieldFilter, Unicorn.Data.DataProvider.IUnicornDataProviderLogger, Unicorn.Data.DataProvider.IUnicornDataProviderConfiguration, Unicorn.Predicates.PredicateRootPathResolver)' to access method 'System.Action`1<System.__Canon>..ctor(System.Object, IntPtr)' failed.
Further, after following the README on GitHub, when I do a sync on site/unicorn.aspx I get:
[P] Auto-publishing of synced items is beginning.
ERROR: Method not found: 'Sitecore.Publishing.Pipelines.Publish.PublishResult Sitecore.Publishing.Publisher.PublishWithResult()'. (System.MissingMethodException)
at Unicorn.Publishing.ManualPublishQueueHandler.PublishQueuedItems(Item triggerItem, Database[] targets, IProgressStatus progress)
at Unicorn.Pipelines.UnicornSyncEnd.TriggerAutoPublishSyncedItems.Process(UnicornSyncEndPipelineArgs args)
at (Object , Object[] )
at Sitecore.Pipelines.CorePipeline.Run(PipelineArgs args)
at Unicorn.ControlPanel.SyncConsole.Process(IProgressStatus progress)
Solution:
For older Sitecore versions (pre-7.2, IIRC) you need to disable the auto-publish config file, as it relies on a method added later by Sitecore.
https://github.com/kamsar/Unicorn/issues/103
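If the file layout matches what I remember of Unicorn 3, disabling it is just a rename, since Sitecore only loads include files ending in .config (verify the exact file name against the linked issue):

    # File name is from memory; check your App_Config/Include folder first.
    mv App_Config/Include/Unicorn/Unicorn.AutoPublish.config \
       App_Config/Include/Unicorn/Unicorn.AutoPublish.config.disabled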
In order to track the database changes you are making, you will first need to install software that can serialize your changes and store them in source control. Team Development for Sitecore (TDS) and Unicorn are the two most popular options.
You will also want to make sure you have your own local database where you make your changes, so you can isolate them from QA, PROD, etc., maintaining the same level of isolation you have when developing code.
Automation of this process helps reduce the human error you mention for the deployment by introducing a repeatable and known process. Here are a few blogs that can help you get started:
Jason Bert - Continuous Deployment (Git/TDS/TeamCity)
Jason St-Cyr - Automating with TeamCity and TFS (TFS/TDS/Team Build)
Andrew Lansdowne - Auto deploy Sitecore Items using Unicorn and TeamCity (Unicorn/TeamCity)
Brian Beckham - TDS and Build Configurations
You may also want to look into configuration transforms to support different values in your Sitecore Include patch files. The SlowCheetah plugin will let you create the transforms in Visual Studio (it might be built into Visual Studio 2015 now...). TDS can pick up those transforms automatically and execute them on the build server for you, or you can do it with Visual Studio itself to create published packages.
For Sitecore versioning and deployment, Unicorn is also a good option.
https://github.com/kamsar/Unicorn
Cheers,
Bo
I have read this article (http://guides.beanstalkapp.com/version-control/branching-best-practices.html), which gives some good "best practice" advice about deploying bug fixes and feature requests, and I tend to agree with everything written there. But I have one major problem that I can't seem to work around:
How do I deploy only the features that are fully tested, without also deploying files that are currently being tested?
Example: Bug #1 affects file1.php. The bug fix is coded in a bug branch, tested locally by the dev, and merged back into the dev branch. The dev branch is deployed to a Testing environment.
Feature #1 also affects file1.php. It is coded, merged, and deployed to Testing.
I need to deploy the bug fix (and 100 other fixes that may have similar conflicts) to Staging. I DO NOT want to deploy the new features yet, because documentation, training, etc. haven't been completed.
How do I only deploy the bug fixes? How would I deploy just certain bug fixes and certain feature requests, but not all of them?
I've thought about tracking every file change and linking it to the bug ticket, compiling the list of files from the bug ticket, and manually choosing each of those files. But that seems mistake-prone and more work than should be required.
What am I missing? How do I deploy just the bug fixes and feature sets that I want deployed?
A usual way, as I understand it, is:
You have a tag/branch (same thing in SVN) for every version deployed (say 1.1).
If critical bugs appear, you fix the bug in a branch off (1.1), as you said.
The fix is merged into (1.1), creating (1.1.1), and also merged into dev. (1.1.1) goes to Testing, NOT Dev.
In the meantime
Features go to Dev.
When you want to release features, you create a branch for that (say 1.2) that goes to Testing and then to the rest.
Note:
Bugfixes can sometimes be done in Dev and backported to branches.
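A rough sketch of that flow in git terms (branch and tag names are illustrative, not prescribed by the article):

    # Release line 1.1 is deployed; a critical bug is found.
    git checkout -b bugfix/issue-1 release/1.1   # branch off the release line
    # ...fix, commit, test locally...

    git checkout release/1.1
    git merge bugfix/issue-1                     # merge the fix into the release line
    git tag 1.1.1                                # 1.1.1 goes to Testing

    git checkout dev
    git merge bugfix/issue-1                     # also merge to dev so it isn't lost

    # Features keep landing on dev; when it's time to release them:
    git checkout -b release/1.2 dev              # 1.2 goes to Testing, then the rest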
I have a CakePHP website that is currently live. I would like to keep working on the site without impacting the deployed site.
What is the best way to keep the development version separate from the deployed version, and then merge the two when appropriate?
Currently, I am using Git for version control.
Thanks!
First, get to know a version control system; Subversion, Git, Bazaar, and Mercurial are some examples. They are a safety net that can save your bacon, because they record EVERY change to EVERY file in your fileset.
Typically, I have a local development server and also a subdomain (staging.example.com) on the production server. I do my heavy development on the local development server and use SVN to record all my site changes. Then, using a shell account on the production server, I check out the new version of the software to the staging subdomain. If it works OK there, I can update the live site with a single SVN checkout.
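In commands, that flow looks roughly like this (the repository URL and paths are invented for the example):

    # On the production server: check the new version out to staging.
    svn checkout https://svn.example.com/mysite/trunk /var/www/staging.example.com

    # Once staging checks out fine, update the live working copy
    # from the same repository.
    svn update /var/www/example.com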
I've also heard of people placing a symbolic link in the location where the site root should be (/var/www/public_html) that points to the live directory (/var/www/site_ver_01234), then setting up the new version in a parallel directory (/var/www/site_ver_23456), and finally recreating the symbolic link to point to the new version's directory. The switch is instantaneous and transparent. I'm sorry I'm not clearer on this method; I read about it a while back but never tried it myself.
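From what I remember, the mechanics are roughly the following (paths as in the description above; test before relying on it):

    # The docroot is a symlink pointing at the currently live version.
    ln -s /var/www/site_ver_01234 /var/www/public_html

    # Deploy the new version into a parallel directory, then repoint
    # the link. -f -n replaces the existing link in place; for a truly
    # atomic swap, create a temporary link and mv it over the old one.
    ln -sfn /var/www/site_ver_23456 /var/www/public_html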
I've also looked at Bazaar (another version control system), which has a plugin that automatically FTPs any changed files to a given server every time a version is checked in.
The general idea, first of all, is to use a version control system. With one, you develop your site on your local machine, possibly with several people, with a central repository somewhere.
When you're happy with a certain revision and would like to deploy it, you "tag" it. That means you freeze the state of that revision and separate it from the continually evolving "trunk". What that means specifically depends on your version control system.
You then take that tagged revision and copy it to the live server. Possibly you may copy it to a "staging server" before to test it in another environment. This copying can be as simple as overwriting all existing files using FTP, or it can involve automated deployment systems which will take care of the details for you and allow you to roll back an unsuccessful deployment. If a database is involved as well, you're probably also looking at database schema migration scripts that need to be run.
Each of these steps can be done in many different ways, and you'll have to figure out the best approach for you. If you're not doing so already, start using a version control system such as SVN or Git. Do it now! Then you might want to Google, or search on SO, for different techniques to tag and branch with that system. For serious deployment, start with a keyword like Capistrano or one of its PHP clones.
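As a concrete example of the tag-then-copy part in git (the version number and remote name are made up):

    git tag -a v1.4 -m "Release 1.4"   # freeze the tested revision
    git push origin v1.4

    # on the staging, then production, server:
    git fetch --tags
    git checkout v1.4                  # deploy exactly the tagged state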
Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them are mapping their Eclipse projects directly to the web server. I'd rather them create a local project and use that to sync to the web server project directory they are working on.
The issue is that there aren't any good solutions for this, which is just appalling given the popularity of the two.
The FileSync plugin for Eclipse is only one-way, meaning that if one developer changes a file on the server, another dev isn't even notified and could overwrite the change.
The File Transfer option in Aptana 2.0 doesn't support any sort of Sync, just manually uploading/downloading files.
The Sync option in Aptana 1.5.1 doesn't allow you to merge files when they differ; you can only update one or the other. It does, however, let you view a diff (but only by right-clicking and selecting it), and in that diff you can't make any changes.
I did find a way to allow files to be uploaded to their Sync repositories in Aptana using Eclipse Monkey. However, it doesn't work if a user saves multiple files at once ('Save All'). Additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey, but I couldn't find any listener in the Eclipse API to do it, and Eclipse Monkey documentation is few and far between.
My only solution at this point is to let them continue mapping directly to the server, or to ask them to do a manual download before they do any work (but again, what if someone uploads a change right after they do that?).
Anyone have any ideas?
April 2010
Add EGit to your Eclipse+Aptana setup, and:
let developers push their work to a local bare repo (see also this post)
let your local project be updated by a git pull from that same local bare repo, creating/updating a local working directory with merged/updated sources (or use a post-update hook, as described in my previous SO link)
let your local Aptana+Eclipse(+EGit) setup reference that local working directory, which is also used by your web server.
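A bare-bones version of that setup on one machine (paths are illustrative):

    # The shared local bare repo that developers push to:
    git init --bare /srv/git/project.git

    # Each developer publishes their work there:
    git push /srv/git/project.git my-feature

    # The working directory used by the web server (and referenced by
    # Aptana+Eclipse) pulls the merged sources from it:
    cd /var/www/project
    git pull /srv/git/project.git master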
In short, when you are talking about file synchronization + merges, this is a job for a (D)VCS (Version Control System: Centralized or Distributed).
Oct 2011: as xmedeko mentions in the comments, Aptana3 has its own Git plugin.
And it isn't very compatible with EGit: See bug 1988.
Adding to VonC's answer (which is correct, IMHO): what probably lies beneath this scenario is that the process you've adopted is not correct in itself, apart from the tools used.
If I understand correctly, you should neither allow nor perform a direct upload from a development version of the project to the web server. Merging is not a job for remote synchronization tools; it should happen well before the deployment phase (an upload to the web server is practically a deploy).
You should have a dedicated repository taken from some point in your development history (according to your release timeline), a point where the merge has already happened. Then deploy it (by means of file synchronization if you want, but that is not mandatory) to a local/staging web server.
There, perform any tests you run against the actively running site (i.e., integration and/or functional tests). If bugs turn up and need fixing, there are different ways to apply the fixes to the development and staging code repositories. Only after that do you deploy the staging repository to the production web server (again, synchronization tools are one way to do that).