I have been trying to get a staging server working on Heroku, where my production app runs. I thought the issue was related to a misconfigured database. There were some odd things visible in the Heroku dashboard, which showed multiple instances of PostgreSQL add-ons with different names. The production database showed that it was also attached to my staging app - which did not seem right - so I deleted that (I want my staging app to have a clean DB and not be attached to production).
BUT... now it appears my production database is GONE. I have a support ticket in to Heroku. But would LOVE some help - I am pretty desperate at this point!
I have gone into the Activity menu for my Heroku app and rolled back to a point before the screwup - but I am pretty sure that just restores code & settings. So I am still dead as the database is not there.
To make matters worse, I quickly checked pg:backups and the list it returns is quite old. That doesn't seem right - I think I should have automatic backups - so that adds to the fear that I may have royally screwed this up.
I expected the database to detach from my staging app but remain intact for my production app. Now all that is showing is the "old" version of the database.
UPDATE: I was able to promote an instance of an older database that was still attached to my app, so the app itself is up and running (but with older data). Still waiting on Heroku support to respond to my trouble ticket and hopefully help me restore this database.
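For anyone else in the same spot, promoting another database that is still attached to the app is a one-liner in the Heroku CLI (the attachment and app names here are placeholders):

    # Make the older attached database the primary DATABASE_URL
    heroku pg:promote HEROKU_POSTGRESQL_SILVER_URL --app my-production-app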
RESOLUTION: Heroku tech support responded overnight and was able to recover/restore my production database as hoped, so only minor damage was done, with users unable to access recent data for about 6 hours. Two learnings from this.
First, and fairly obvious: be very careful with how you manage your production DB, and make sure you have regularly scheduled backups occurring (I thought I did, but after an automatic Postgres upgrade it appears those went away and I did not know to re-schedule them); the relevant CLI commands are sketched below.
Second, Heroku does continuously back up your app/data and can help you recover from this kind of disaster, as long as you are on one of their paid plans. But you have to contact them through tech support and request this; there is no way to recover from these automatic backups via the dashboard or CLI.
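For reference, checking and re-creating scheduled backups from the Heroku CLI looks roughly like this (the app name is a placeholder):

    # List existing backups and any backup schedules
    heroku pg:backups --app my-production-app
    heroku pg:backups:schedules --app my-production-app

    # Take a manual backup right now
    heroku pg:backups:capture --app my-production-app

    # Re-create a daily scheduled backup
    heroku pg:backups:schedule DATABASE_URL --at '02:00 UTC' --app my-production-app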
Related
I am looking to have an exact copy of my production database to be used in dev for testing. What I would prefer is that changes made in production are mirrored in dev, but changes made in dev stay in dev only. I was looking into Cluster-to-Cluster Sync, but I am not sure that is exactly what I need, because the use cases I found for it don't mention anything of the sort. Can anyone let me know if there are other avenues I should be looking into? Thanks!
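Not an authoritative answer, but one simple avenue, if a periodic refresh (rather than live mirroring) is acceptable, is a scheduled dump-and-restore from production into dev; the connection strings below are placeholders:

    # Dump production and restore it into dev, replacing whatever is there
    mongodump --uri "mongodb+srv://user:pass@prod-cluster.example.net/mydb" --out /tmp/prod-dump
    mongorestore --uri "mongodb+srv://user:pass@dev-cluster.example.net" --drop /tmp/prod-dump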
I'm looking to amend some tables I have in a PostgreSQL instance on Azure, but I cannot work out how to perform the upgrades with Alembic.
I have been following the tutorial here, which includes a Heroku deployment around the 12:01:00 mark. In that case, once the changes have been defined, we can run heroku run "alembic upgrade head" to perform the upgrade. However, I cannot find the equivalent process for Azure.
My Postgres instance is housed in a VNet and connected to a web app. Until now, I've made code changes to a server running in an attached web app. I push to GitHub, which then deploys the changes in Azure. Obviously, if the table already exists in Postgres, changes I make to the original schema are not reflected. I considered deleting the table and starting again, but this seems a very risky strategy.
A similar question was asked here, but has remained unanswered. I've also checked the documentation for Alembic and Azure but could not find anything.
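For what it's worth, one approach that should be roughly equivalent (resource names are placeholders, and this assumes a Linux App Service): open an SSH session into the web app's container, which already has network access to the VNet-housed database, and run Alembic from there:

    # SSH into the running App Service container
    az webapp ssh --resource-group my-rg --name my-web-app

    # Then, inside the container, from the app directory:
    alembic upgrade head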
In Heroku, connected to GitHub. I want to deploy my Dev branch, and I can select it.
When I manually deploy, it does its thing (deploys my website to Heroku). But my website has Master branch code. I go back to Heroku and it's on Master.
If I select Dev as the branch for either Manual or Automatic, then reload the page, it switches back to Master. Below is a screenshot of me setting the branch to dev. If I do a browser refresh, it resets to Master.
I tried reconnecting GitHub. Not sure what else it could be.
Deploying Dev was working up until yesterday.
Here is a screenshot of how I manually deploy (as opposed to auto deploy) from the Heroku Deployment tab.
Edit: I should also add that I was happily on Dev, and could deploy Dev updates up until recently. I deployed Master by mistake, but can't go back to Dev.
I ended up having a corrupt collection / DB record. I was tipped off on another forum that the symptoms I was seeing (the Nightscout web app not displaying some data - the Heroku deploy was just my attempt to work around that issue) could be caused by that. So as a last resort I dropped the entire Mongo collection, and I can now deploy Master and Dev, and it sticks in Heroku.
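For reference, dropping a collection can be done from the command line with mongosh; the connection string and collection name below are placeholders, and this permanently deletes the data, so take a dump first if in doubt:

    # Drop the suspect collection (back it up with mongodump first!)
    mongosh "mongodb+srv://user:pass@cluster.example.net/nightscout" --eval 'db.entries.drop()'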
I don't know the significance since the data should be separate from the web app source code itself.
The whole reason I wanted to try Dev was for a fix for parts of the app not working. After initialising the Mongo DB Collection, I can use Master, so Dev (and the fix it contained) is not needed.
I know this isn't the exact root cause, but I'll leave this here in case someone comes across it and hasn't thought to look at the data.
I am using Azure websites with staging to warm up changes before swapping to production. However, this solution does not work with Entity Framework migrations.
If I enable a migration on the staging slot, the database is migrated correctly. However, the production website still uses the old migration, is no longer able to connect with the db context, and generates an error.
Therefore, the staging website is OK, but production goes down. I have tried to publish directly to production whenever there was a migration, with bad results (the website going down for 15 minutes as the db struggled to migrate while being hit with multiple requests, because all caching was reset on the website).
The only solution I can figure out is for the website to go down and to display a maintenance message to users.
What I would like to do is redirect users to a maintenance page that would redirect them back to their previous request after 1 minute with JavaScript. However, I would only do this for Entity Framework migration errors (db schema changes). For other errors, they should go to the standard error page.
I can do this all manually by setting up a maintenance message deployment slot and swapping it to production as I manually deploy the upgrades. However, is there an automated way to do this?
Update: Workaround to take users offline while migrating the database:

1. Upload (with FTP) app_offline.htm to production, which displays a simple message to the user and reloads the page every minute with JavaScript.
2. Deploy the migrations to staging.
3. Load staging to execute the migrations and warm up the web app.
4. Swap to production.
5. Remove app_offline.htm (now at staging).
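A rough sketch of steps 3 and 4 with today's Azure CLI (resource names are placeholders; the app_offline.htm upload/removal is done over FTP as described above):

    # Hit the staging slot once so migrations run and the app is warm
    curl -s https://myapp-staging.azurewebsites.net/ > /dev/null

    # Swap the warmed-up staging slot into production
    az webapp deployment slot swap \
        --resource-group my-rg \
        --name myapp \
        --slot staging \
        --target-slot production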
Currently my workflow is as follows:
Locally, I maintain a git repo for each website I am working on. When the time comes to publish something, I compress the folder and upload this single file to the production server via SSH, then I decompress it, test the changes, move them to the live folder, and get rid of the .git folder.
I was wondering if the use of a git repo on the live server is a good idea. It seems to be at first, but it can be problematic if a change doesn't look the same on the production server as on the local development machine... this could start a fire...
What about creating a bare repo in some folder on the production server and cloning from there to the public folder - pushing updates from the local machine to the bare repo, then pulling from the bare repo into the public folder on the production server? Can anyone please provide some feedback?
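For what it's worth, that bare-repo setup usually looks something like this (paths, user, and branch name are placeholders); a post-receive hook can even do the "pull into the public folder" step automatically:

    # On the production server: create the bare repo
    git init --bare /srv/repos/site.git

    # Hook that checks the pushed branch out into the public folder
    cat > /srv/repos/site.git/hooks/post-receive <<'EOF'
    #!/bin/sh
    GIT_WORK_TREE=/var/www/site git checkout -f master
    EOF
    chmod +x /srv/repos/site.git/hooks/post-receive

    # On the local machine: push to deploy
    git remote add production ssh://user@server/srv/repos/site.git
    git push production master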
Later I read about Capistrano (http://capify.org), but I have no experience with this software...
In your experience, what is the best practice/methodology for accomplishing website deployments/updates?
Thanks in advance and for your feedback.
I don't think that our method can be called best practice, but it has served us well.
We have several large databases for our application (20 GB+), so maintaining local copies on each developer's computer has never really been an option, and even though we don't develop against the live database, we do need to develop against a database that is as close to the real thing as possible.
As a consequence we use a central web server as well, and keep a development branch of our Subversion trunk on it. Generally we don't work on the same part of the system at once, but when we do need to, or when someone is making a lot of substantial changes, we branch the trunk and create a new vhost on the dev server.
We also have a checkout of the code on the production servers, so after we're finished testing we simply do an svn update on the production servers. We've implemented a script that executes the update command on all servers over SSH. This is extremely convenient, since our code base is large and takes a lot of time to upload. Subversion only copies the files that have actually changed, so it's a lot faster.
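A minimal sketch of such a script, with hypothetical hostnames and paths:

    #!/bin/sh
    # Run `svn update` on every production server over SSH
    for host in web1.example.com web2.example.com; do
        ssh deploy@"$host" "svn update /var/www/app"
    done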
This has worked really well for us, and the only thing to watch out for is making changes on the production servers directly (which of course is a no-no from the beginning) since it might cause conflicts when updating.
I never thought about having a copy of the repository on the server. After reading this, I thought it might be cool... However, updating the files directly in the live environment without testing is not a great idea.
You should always update a secondary environment that matches the live one exactly (web server + DB version, if any) and test there. If everything goes well, then put the live site into maintenance, update the files, and go live again.
So I wouldn't make the live site a copy of the repository, but you could do that with the test environment. You'll save SSH + compressing time, plus you can check out any specific revision you'd like to test.
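For example, on the test environment (the revision below is a placeholder):

    # Bring the test checkout to the exact revision you want to verify
    git fetch origin
    git checkout 1a2b3c4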
Capistrano is great. The default recipes are geared toward Rails, and the documentation is spotty, but the mailing list is active and getting it set up is pretty easy. Are you running Rails? It has some neat built-in stuff for Rails apps, but it is also used fairly frequently with other types of web apps.
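If you want to try it out, the basic workflow from the local machine (Capistrano 2-era commands) is roughly:

    gem install capistrano
    capify .            # generates Capfile and config/deploy.rb
    cap deploy:setup    # creates the directory structure on the server
    cap deploy          # checks out and publishes the current revision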
There's also Webistrano, which is based on Capistrano but has a web front-end. I haven't used it myself. Another deployment system that seems to be gaining some traction, at least among Rails users, is Vlad the Deployer.