Running Strapi in production and version control sync issues - deployment

I'm wondering what the best practice is for running Strapi in production. I noticed that Strapi generates new files when a content type is added. This means that the production environment's files will become out of sync with version control. Is there a recommended deployment process? Am I supposed to commit changes from production to my git repo after making changes in the admin?

File generation, which is done primarily by the content-type-builder, and other settings editing are disabled in production mode (NODE_ENV=production).
The admin panel is supposed to be built already in production, so you only add the necessary data to the database based on the given data structure.
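For example, a minimal sketch, assuming the default scripts a generated Strapi project ships with (adjust to your own package.json):
# Build the admin panel ahead of deployment, then start in production mode
NODE_ENV=production npm run build
NODE_ENV=production npm run start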
TL;DR:
Summarizing the answer to your question from github.com/strapi/strapi/issues/1986:
emadicio commented on 20 Sep 2018
If you run your app with NODE_ENV=production you'll notice that plugins that actually edit or create files are disabled. So that means you cannot create or edit content types in prod
Downloaddave commented on 22 Sep 2018:
I had deployed Strapi locally and then to a prod environment, and was confused since I didn't see the content-type-builder in the production CMS.
I'm trying to understand the deployment and update process as well...
Developer sets Strapi up locally
Creates content-types using the content-type-builder
Strapi makes updates to the file structure locally and on the local MongoDB
On production we will have to push both the code and db updates?
I understand that making changes to the content-type-builder reboots the service, and we don't want production to go down during the rebuild, but it seems like data would get really out of sync between production and development.
Aurelsicoko commented on 2 Oct 2018
You're right! The Content-Type Builder is a development plugin. Its goal is to speed up the development of your project. It should not be used in production; we didn't design this plugin for that usage.
The real pain is migrating the development configuration to production, and vice versa. We plan to offer a new CLI command called strapi migrate to easily migrate from one environment to another. I can't give you a release date though...

Any news on this strapi migrate command? It is a major thing for me and my team in order to move forward with continuous integration and delivery.
I hope it is not going to be the same as with WordPress, which still has no native solution for migrating between prod and stage...
Appreciate the answer. Greetings

Related

Best strategy for deploying test and dev app versions to Google Cloud Platform

Google Cloud Platform
I've got an Angular 2 app and a Node.js middleware (LoopBack) running as services in App Engine in a single project.
For the database, we have a Compute Engine instance running PostgreSQL in that same project.
What we want
The testing has gone well, and we now want to have a test version (for ongoing upgrade testing/demo/etc) and a release deployment that is more stable for our initial internal clients.
We are going to use a different database in psql for the release version, but could use the same server for our test and deployed apps.
Should we....?
create another GCP project and another gcloud setup on my local box to deploy to that new project for our release deployment,
or is it better to deploy multiple versions of the services to the single project with different prefixes - and how do I do that?
Cost is a big concern for our little nonprofit. :)
My recommendation is the following:
Create two projects, one for each database instance. You can mess around all you want in the test project, and don't have to worry about messing up your prod deployment. You would need to store your database credentials securely somewhere. A possible solution is to use Google Cloud Project Metadata, so your code can stay the same between projects.
When you are ready to deploy to production, I would recommend deploying a new version of your App Engine app in the production project, but not promoting it to the default.
gcloud app deploy --no-promote
This means customers will still go to the old version, but the new version will be deployed so you can make sure everything is working. After that, you can slowly (or quickly) move traffic over to the new version.
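For example, to shift 10% of traffic to the new version (the service and version names here are placeholders):
# Route 90% of requests to the old version and 10% to the new one
gcloud app services set-traffic default --splits=old-version=0.9,new-version=0.1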
At about 8:45 into this video, traffic splitting is demoed:
https://vimeo.com/180426390
Also, I would recommend aggressively shutting down unused App Engine Flexible deployments to save costs. You can read more here.
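For instance (the version name is a placeholder):
# Stop the instances backing an unused flexible-environment version
gcloud app versions stop old-version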

How can I set up continuous deployment with TFSBuild for an MVC app?

I have some questions around the best mechanism to deploy MVC web applications to different environments. Previously I used setup projects (.msi's) but as these have been discontinued in VS2012 I am looking to move to an alternative.
Let me explain my current setup. I currently have a CI setup using TFSBuild 2010 with Team Foundation Server for source control.
A number of developers work on their local machines and check in to the TFS Server. We regularly deploy to a single-server dev environment and a load-balanced QA environment with 2 servers. Our current process includes installing an MSI which carries out some of the following custom actions:
brings the current app offline with the app_offline.htm file
runs database scripts (from the database project in the solution)
modifies web.config (different for each QA web server)
labels the code
warms up each deployed page via HTTP request
etc
This is the current process. Now I would like to make some changes. Firstly, I need an alternative to MSIs. From some research I believe that Web Deploy via IIS (MSDeploy) is the best alternative, and I can use web.config transforms for the web.config modifications. Is this correct, and if so, could I get an outline of what I need to do?
Secondly, I want to set up continuous delivery via TFSBuild. I have no idea how this may be achieved; would it be possible to get an outline of how it can be integrated into my current setup? Rather than being check-in driven, I would like deployment to be user-triggered following a check-in. Also, would it be possible for this to also run database scripts from a database project in the solution?
Finally, there is also a production environment, but I would like to manually deploy this - can my process also produce an artifact that I can manually install?
Vishal Joshi has some information on his blog that is reasonably good: http://vishaljoshi.blogspot.com/2010/11/team-build-web-deployment-web-deploy-vs.html. It does have the downside that your deployment password is included in the properties you pass to msbuild.
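As a rough sketch of what the MSBuild arguments can look like in a build definition (a hedged example: these are the VS2012-era publish-profile properties, and the project, profile name and credentials are placeholders):
REM Build and publish via Web Deploy in one step
msbuild MyWebApp.csproj /p:DeployOnBuild=true /p:PublishProfile=QA /p:Configuration=Release /p:UserName=deployuser /p:Password=%DEPLOY_PASSWORD%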
Sayed Hashimi has also posted some information on this in another question: Team Build: Publish locally using MSDeploy.

How to upgrade Wordpress and plugins when deploying using Capistrano?

I'm hoping someone can confirm whether or not the following scenario is an issue with deploying updates to WordPress sites and, if so, do you have a solution on how to best manage this?
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially; they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync them up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when doing an auto-update via the admin. I am updating WordPress core and plugins locally on my development install. The code for these updates ends up being committed, pushed and deployed. However, when the code is deployed it simply adds/deletes/replaces changed files on the staging and production servers. Production and staging are missing any updates to the database, since these are usually part of the auto-update process - eg, deactivate, update, activate (run any database updates).
My Questions:
Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress core, eg, 3.2 to 3.3?
If Capistrano isn't the tool for this, and I need to do it more "manually" by logging into the admin, is there a maintenance-mode tool/plugin that will somewhat automate the deactivation/activation of plugins so any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when you're upgrading the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some of them may have an upgrade process built into their upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your three questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes, when updating, if there is a change to the database schema, then WordPress will not function properly unless the new schema exists. When attempting to access the admin side of WordPress, if the db version is lower than your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains migration scripts to upgrade the database incrementally from each previous version to the next until the version number is up to date, seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose the plugin specifically requires that you do so. If the schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When upgrading that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database from the one it will ultimately be used with.
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress, a plugin you would write which would deactivate/activate plugins, or an existing one that already does it.
It's possible some existing plugins might handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins for each one, so developing one yourself that does exactly what you need might be the way to go. Developing a plugin is possible if you make use of the tools WordPress already provides.
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php, just to see how it's done. You may want to stop the deactivate_plugin hooks from wiping out plugin configuration for plugins that clean up after themselves, so consider passing $silent as true when deactivating the plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins were already activated, and only run your script on those (make sure the plugin excludes itself from the process).
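If WP-CLI happens to be available on your servers (an assumption on my part; the approach above describes writing a plugin instead), the same bounce could be scripted from your deploy along these lines:
# Hypothetical deploy hook: record the active plugins, deactivate them,
# run any pending database updates, then reactivate the recorded set
wp plugin list --status=active --field=name > /tmp/active-plugins.txt
wp plugin deactivate --all
wp core update-db
xargs wp plugin activate < /tmp/active-plugins.txt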
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here, but it's certainly not in the way either. You should just need to be able to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that obviously all those functions need to be available, so you can't just run it as an independent script.

Deploy Entity Framework Code First

I guess I should have thought of this before I started my project but I have successfully built and tested a mini application using the code-first approach and I am ready to deploy it to a production web server.
I have moved the folder to my staging server and everything works well. I am just curious if there is a suggested deployment strategy?
If I make a change to the application I don't want to lose all the data if the application is restarted.
Should I just generate the DB scripts from the code-first project and then move it to my server that way?
Any tips and guide links would be useful.
Thanks.
Actually, the database initializer is only for development. Deploying such code to production is a good way to run into trouble. Code First currently doesn't have any approach for database evolution, so you must manually build change scripts for your database for each new version. The easiest approach is using the Database tools in Visual Studio 2010 Premium and Ultimate: if you have a database with the old schema and a database with the new schema, VS will prepare a change script for you.
Here are the steps I follow.
Comment out any Initialization strategy I'm using.
Generate the database scripts for schema + data for all the tables EXCEPT the EdmMetadata table and run them on the web server, as sketched after these steps. (Of course, if it's a production server, BE CAREFUL about this step. In my case, during development, the data in production and development are identical.)
Commit my solution to subversion which then triggers TeamCity to build, test, and deploy to the web server (of course, you will have your own method for this step, but somehow deploy the website to the web server).
You're all done!
The initializer and the EdmMetadata table are needed for development only.
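For step 2, applying the generated script might look like this (a sketch; the server and database names are placeholders, and sqlcmd is just one way to run it):
REM Apply the generated schema + data script to the target database
sqlcmd -S PRODSERVER -d MyAppDb -E -i schema_and_data.sql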

What's best Drupal deployment strategy? [closed]

I am working on my first Drupal project, on XAMPP on my MacBook. It's a prototype and has received positive feedback from my client.
I am going to deploy the project on a Linux VPS in two weeks. Is there a better way than re-doing everything on the server from scratch?
install Drupal
download modules (CCK, Views, Date, Calendar)
create the content
...
Thanks
A couple of tips:
Use source control, NOT FTP/etc., for the files. It doesn't matter what you use; we tend to spin up an Unfuddle.com subversion account for each client so they have a place to log bugs as well, but the critical first step is getting the full source tree of your site into version control. When changes are made on the testing server or staging server, you see if they work, you commit, then you update on the live server. Rollbacks and deployment gets a lot, lot simpler. For clusters of multiple webheads you can repeat the process, or rsync from a single 'canonical' server.
If you use SVN, though, you can also use CVS checkouts of Drupal and other modules/themes and the SVN/CVS metadata will be able to live beside each other happily.
For bulky folders like the files directory, use a symlink in the 'proper' location to point to a server-side directory outside of the webroot. That lets your source control repo include all the code and a symlink, instead of all the code and all the files users have uploaded.
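For example (the paths are placeholders):
# Keep user uploads outside the webroot and symlink them into place
mkdir -p /srv/sitefiles
ln -s /srv/sitefiles /var/www/example.com/sites/default/files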
Databases are trickier; cleaning up the dev/staging DB and pushing it to live is easiest for the initial rollout but there are a few wrinkles when doing incremental DB updates if users on the live site are also generating content.
I did a presentation on Drupal deployment best practices last year. Feel free to check the slides out.
Features.module is an extremely powerful tool for managing Drupal configuration changes.
Content Types, CCK settings, Views, Drupal Variables, Contexts, Imagecache presets, Menus, Taxonomies, and Permissions can all be rolled into a feature, which can be checked into version control. From there, deploying a new site, or pushing changes to an existing one, is easily managed with the Features UI or Drush.
Make sure you install Strongarm.module for exporting Drupal config that gets stored in your variables table. You can also export static content/nodes (ie: about us, faqs, etc.) into Features by installing uuid_features.module.
Hands down, this is the best way to work with other developers on the same site, and to move your site from Development to Testing to Staging and Production.
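A sketch of that workflow with Drush (the feature name is a placeholder; the features-* commands come with the Features module):
# On development, after changing config in the admin UI: write the
# changes back into the feature module's code, then commit it
drush features-update -y my_site_config
# On staging/production, after deploying the code: make the database
# match what is in code
drush features-revert -y my_site_config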
We've had an extensive discussion on this at my workplace, and the way we finally settled on was pushing code updates (including modules and themes) from development to staging to production. We're using Subversion for this, and it's working well so far.
What's particularly important is that you automate a process for pushing the database back from production, so that your developers can keep their copies of the database as close to production as possible. In a mission-critical environment, you want to be absolutely certain a module update isn't going to hose your database. The process we use is as follows:
Install a module on the development server.
Take note of whatever changes and updates were necessary. If there are any hitches, revert and do again until you have a solid, error-free process.
Test your changes! Repeat your testing process as a normal, logged-in user, and again as an anonymous user.
If the update process involved anything other than running update.php, then write a script to do it.
Copy the production database to your staging server, and perform the same steps immediately. If it fails, diagnose the failure and return to step 1. Otherwise, continue.
Test your changes!
BACK UP YOUR PRODUCTION DATABASE and TAKE NOTE OF THE REVISION YOU HAVE CHECKED OUT FROM SVN.
Put your production Drupal in maintenance mode, run "svn update" on your production tree, and go through your update process (sketched after this list).
Take Drupal out of maintenance mode and test everything (as admin, regular user, and anonymous)
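Steps 8 and 9 might look like this (a sketch that assumes Drush is available on the production host; site_offline is the Drupal 6-era variable name):
# Put the site into maintenance mode, update the code, run the updates
drush vset -y site_offline 1
svn update /var/www/drupal
drush updatedb -y
drush vset -y site_offline 0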
And that's it. One thing you can never really expect for a community framework such as Drupal is to be able to move your database from testing to production after you go live. From then on, all database moves are from production to testing, which complicates the deployment process somewhat. Be careful! :)
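For the production-to-testing database pull mentioned above, a scheduled job can be as simple as the following sketch (hosts, users and database names are placeholders; use a credentials file rather than the interactive -p prompt if you run it from cron):
mysqldump -u produser -h prod-db-host -p proddb > /tmp/prod.sql
mysql -u devuser -h dev-db-host -p devdb < /tmp/prod.sql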
We use the Features module extensively to capture features and then install them easily at the production site.
I'm surprised that no one mentioned the Deployment module. Here is an excerpt from its project page:
... designed to allow users to easily stage content from one Drupal site to another. Deploy automatically manages dependencies between entities (like node references). It is designed to have a rich API which can be easily extended to be used in a variety of content staging situations.
I don't work with Drupal, but I do work with Joomla a lot. I deploy by archiving all the files in the web root (tar and gzip in my case, but you could use zip) and then uploading and expanding that archive on the production server. I then take a SQL dump (mysqldump -u user -h host -p databasename > dump.sql), upload that, and use the reverse command to insert the data (mysql -u produser -h prodDBserver -p prodDatabase < dump.sql). If you don't have shell access you can upload the files one at a time and write a PHP script to import dump.sql.
Any version control system (GIT, SVN) + Features module to deploy Drupal code + custom settings (content types, custom fields, module dependencies, views etc.).
As the Deploy module is still in development, you may like to use the Node export module in Drupal 7 to deploy your content/nodes.
If you're new to deployment (and/or Drupal) then be sure to do everything in one lump.
You have to be quite careful once there are users affecting content while you are working on another copy.
It is possible to leave the tables that relate to actual content, taxonomy, users, etc. rather than their structure, and push only the ones relating to configuration. However, this adds an order of magnitude of complexity.
Apologies if deployment is old hat to you, in which case this may read as vaguely insulting.
A good strategy that I have found and am currently implementing is to use a combination of the Deploy module to migrate my content, and Drush along with dbscripts to merge and update core and modules. It takes care of database merging even if you have live content, handles security and module updates, and I currently have mine set up to work with SVN.