I used automated backups on Google Cloud SQL. That worked perfectly until one week ago.
Suddenly there are no new backups. Automated backups are still enabled, but no new ones are being created.
There is no output in the logging section.
Is there a way to debug this?
Thank you!
We (the CloudSQL service) had some problems with automated backups. The issue should be mitigated now, so your backups should resume now or very shortly. Manual backups still worked throughout, so they are a viable workaround as well.
Just to add: as with any other critical piece of data (managed or not), I'd highly recommend setting up a process of periodic end-to-end disaster restore testing.
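For anyone debugging the same thing: the backup history can be checked, and a manual backup taken, from the gcloud CLI. A minimal sketch, assuming a recent gcloud SDK and a placeholder instance name:

    # List recent backups for the instance to confirm whether automated runs resumed
    gcloud sql backups list --instance=my-instance

    # Inspect the backup configuration (enabled flag, backup window)
    gcloud sql instances describe my-instance --format="value(settings.backupConfiguration)"

    # Take an on-demand backup as a stopgap while automated backups are broken
    gcloud sql backups create --instance=my-instance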
I have been trying to get a staging server working on Heroku, where my production app runs. I thought the issue was related to a misconfigured database. There were some odd things visible in the Heroku dashboard, where it showed multiple PostgreSQL add-ons with different names. The production instance of the database said it was also attached to my staging app - which did not seem right - so I deleted that (I want my staging app to have a clean DB and not be attached to production).
BUT... now it appears my production database is GONE. I have a support ticket in to Heroku. But would LOVE some help - I am pretty desperate at this point!
I have gone into the Activity menu for my Heroku app and rolled back to a point before the screwup - but I am pretty sure that just restores code & settings. So I am still dead as the database is not there.
To make matters worse, I quickly checked pg:backups and the list it returns is quite old. That doesn't seem right - I think I should have automatic backups - so that adds to the fear that I may have royally screwed this up.
I expected the database to detach from my staging app but remain intact for my production app. Now all that is showing is the "old" version of the database.
UPDATE: I was able to promote an instance of an older database that was still attached to my app, so the app itself is up and running (but with older data). Still waiting on Heroku support to respond to my trouble ticket and hopefully help me restore this database.
RESOLUTION: Heroku tech support responded overnight and was able to recover/restore my production database as hoped, so only minor damage was done, with users unable to access recent data for about 6 hours. Two learnings from this.
First, and fairly obvious: be very careful with how you manage your production DB and make sure you have regularly scheduled backups occurring (I thought I did, but after an automatic Postgres upgrade it appears those went away and I did not know to re-schedule them; the relevant CLI commands are sketched after the second point).
Second, Heroku does continuously back up your app/data and can help you recover from this kind of disaster as long as you are on one of their paid plans. But you have to contact them through tech support and request it; there is no way to recover from these continuous backups via the dashboard or CLI.
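On the first learning: the backup schedule can be checked and re-created from the Heroku CLI. A rough sketch using the current pg:backups commands and a placeholder app name:

    # See existing backups and whether any schedule is configured
    heroku pg:backups --app my-production-app
    heroku pg:backups:schedules --app my-production-app

    # Re-create a nightly schedule (e.g. after an upgrade/migration removed it)
    heroku pg:backups:schedule DATABASE_URL --at "02:00 UTC" --app my-production-app

    # Take an immediate manual backup
    heroku pg:backups:capture --app my-production-app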
Why am I getting this error message for my apps deployed with GitHub on Heroku?
There is an issue with the GitHub token for this app. Disconnect and reconnect to restore functionality
We had the same issue before. It happened with private repositories, and disconnecting/reconnecting from time to time seemed to do the trick, but after a while we started to grow and we needed more automation.
We looked into all kinds of CI/CD tools like Codeship, CircleCI, etc. We ended up choosing DeployBot and stuck with it because it works really well for us and fit our needs best.
Either way, these tools can be lifesavers for the team, no matter which one you end up using.
We are currently talking about deploying a website via rsync. However, during the rsync the application is left in an inconsistent state, as some files may already be synced while others are still the old version, right? How do people deal with this issue? I guess the same problem exists when deploying via svn/git/cvs. Should I just close the site, rsync, and open up again? Or do people simply ignore this inconsistency problem?
Use a two-step deployment. rsync to a test directory, ideally test it, then swap the production and test deployments around. The first time you do this, you might not have a ready-to-go test directory, but you can fix this by simply rsync-ing from production to test.
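A minimal sketch of that two-step approach, assuming a layout where the web server serves a "current" symlink and each deploy lands in its own release directory (all paths are illustrative):

    # Sync the new build into a fresh release directory
    RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
    mkdir -p "$RELEASE"
    rsync -av --delete ./build/ "$RELEASE/"

    # ...run your tests against $RELEASE here...

    # Then swap: repoint the symlink the web server serves from
    ln -sfn "$RELEASE" /var/www/current

Visitors only ever see either the old tree or the new one, and a rollback is just repointing the symlink at the previous release directory.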
Does anyone have any experience using version control with a production website? Would it be a terrible idea to run a website from a repository? I just found a related article but I would like to hear your thoughts/comments.
IMHO it makes no sense - it's the cheap person's approach.
In larger scenarios you have develop / test / production, so you version control on the develop side, then publish forward to test and production. There is no need to actually version control once things hit production. You do keep one or two backup versions for a fast rollback, but otherwise - no need.
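Just as a sketch, that "one or two backup versions for a fast rollback" can look like this without any VCS on production (paths are made up):

    # Keep the previous release aside before publishing the new one
    rm -rf /var/www/site.prev
    cp -a /var/www/site /var/www/site.prev

    # Publish the tested build forward from the test environment
    rsync -av --delete /srv/test/site/ /var/www/site/

    # A fast rollback is just restoring the saved copy:
    # rsync -av --delete /var/www/site.prev/ /var/www/site/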
Every production manager will tell you the same thing: a (D)VCS has no place in a production environment.
You can maybe have one "release deployment" server in the production pit, where you do have a VCS allowing you to view the correct delivery, and from that server you copy/rsync it to the right production server.
But on the servers themselves, you only have:
the application itself
monitoring process to follow and report
some diagnostic tools
The reason is that the more elements you have in your release environment, the more chances there are for one of those elements to go wrong.
Adding a VCS to the mix is not worth it.
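To make the release-server idea concrete, a sketch assuming that one box's VCS happens to be git, and with made-up tag, hostnames, and paths: the VCS lives only there, and the production servers only ever receive plain files.

    # On the release server: export a clean tree for the tagged release,
    # so no VCS metadata ever reaches production
    mkdir -p /srv/releases/v1.4.2
    git archive --format=tar v1.4.2 | tar -x -C /srv/releases/v1.4.2

    # Copy the exported tree to each production host
    for host in web1 web2; do
        rsync -av --delete /srv/releases/v1.4.2/ "deploy@$host:/var/www/app/"
    done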
The way I've always done it is to have the live & test versions be checkouts of the repository. Then my workflow is like this:
make changes on my dev checkout
commit changes
update test
make sure everything works
update production
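In command terms that flow is roughly the following, assuming the checkouts happen to be git clones (the same shape works with svn update); server names and paths are placeholders:

    # on the dev checkout: make changes, then commit and push them
    git commit -am "describe the change"
    git push origin master

    # update the test checkout and verify everything works
    ssh deploy@test-server "cd /var/www/test && git pull"

    # once it looks good, update production the same way
    ssh deploy@prod-server "cd /var/www/live && git pull"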
Currently my workflow is as follows:
Locally, I maintain a git repo for each website I am working on. When the time comes to publish something, I compress the folder and upload this single file to the production server via ssh, then I decompress it, test the changes, move them to the live folder, and get rid of the .git folder.
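In other words, roughly this (paths and hostnames are placeholders):

    # locally: package the site folder (including the .git folder for now)
    tar -czf site.tar.gz -C /path/to/site .

    # upload the single file and unpack it into a scratch directory on the server
    scp site.tar.gz user@server:/tmp/
    ssh user@server "mkdir -p /tmp/site-new && tar -xzf /tmp/site.tar.gz -C /tmp/site-new"

    # after testing, move it to the live folder and drop the repo metadata
    ssh user@server "rsync -a --delete /tmp/site-new/ /var/www/live/ && rm -rf /var/www/live/.git"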
I was wondering if the use of a git repo on the live server was a good idea. It seems to be at first, but it can be problematic if a change doesn't look the same on the production server as on the local development machine... this could start a fire...
What about creating a bare repo in some folder on the production server and then cloning from there into the public folder, so I can push updates from the local machine to the bare repo and pull from the bare repo in the public folder of the production server... can anyone please provide some feedback?
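Concretely, what I have in mind is something like this (paths and remote name are made up):

    # on the server: a bare repo to push to, plus a working clone in the public folder
    git init --bare /srv/git/site.git
    git clone /srv/git/site.git /var/www/public

    # on the dev machine: add the server's bare repo as a remote and push to it
    git remote add production ssh://user@server/srv/git/site.git
    git push production master

    # on the server: pull the pushed changes into the public folder
    # (this could also be automated from a post-receive hook in the bare repo)
    cd /var/www/public && git pull origin master

This would still leave a .git folder inside the public directory, so I would need to block web access to it, unlike my current approach of deleting it.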
Later I read about Capistrano (http://capify.org) but I have no experience with this software...
In your experience, what is the best practice/methodology for website deployments/updates?
Thanks in advance for your feedback.
I don't think that our method can be called best practice, but it has served us well.
We have several large databases for our application (20 GB+), so maintaining local copies on each developer's computer has never really been an option, and even though we don't develop against the live database, we do need to do the development against a database that is as close to the real thing as possible.
As a consequence we use a central web server as well, and keep a development branch of our subversion trunk on it. Generally we don't work on the same part of the system at once, but when we do need to do that, or someone is making a lot of substantial changes, we branch the trunk and create a new vhost on the dev server.
We also have a checkout of the code on the production servers, so after we're finished testing we simply do an svn update on the production servers. We've implemented a script that executes the update command on all servers using ssh. This is extremely convenient, since our code base is large and takes a lot of time to upload. Subversion will only copy the files that have actually been changed, so it's a lot faster.
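A script like that can be as small as a loop over the hosts; something like this sketch (hostnames and the path are placeholders):

    #!/bin/sh
    # run the update on every production server in one go
    for host in web1.example.com web2.example.com; do
        ssh deploy@"$host" "cd /var/www/app && svn update"
    done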
This has worked really well for us, and the only thing to watch out for is making changes on the production servers directly (which of course is a no-no from the beginning) since it might cause conflicts when updating.
I never thought about having a repository copy on the server. After reading it, I thought it might be cool... However, updating the files directly in the live environment without testing is not a great idea.
You should always update a secondary environment that exactly matches the live one (web server + DB versions, if any) and test there. If everything goes well, then put the live site under maintenance, update the files, and go live again.
So I wouldn't make the live site a copy of the repository, but you could do so with the test env. You'll save SSH + compressing time, plus you can check out any specific revision you'd like to test.
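Checking out a specific revision on the test box is a one-liner either way, for example (revision identifiers and the path are placeholders):

    # Subversion: bring the test checkout to a particular revision
    svn update -r 4213 /var/www/test

    # git equivalent on a test clone
    cd /var/www/test && git fetch && git checkout 1a2b3c4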
Capistrano is great. The documentation is spotty, but the mailing list is active, and getting it set up is pretty easy. Are you running Rails? It has some neat built-in stuff for Rails apps, but it is also used fairly frequently with other types of web apps.
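For a sense of scale, getting started looks roughly like this, assuming the Capistrano 2.x commands that were current when capify.org was the documentation site:

    # install Capistrano and add the default files to a project
    gem install capistrano
    capify .

    # edit config/deploy.rb, then do the one-time server setup and a deploy
    cap deploy:setup
    cap deploy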
There's also Webistrano, which is based on Capistrano but has a web front-end. Haven't used it myself. Another deployment system that seems to be gaining some traction, at least among Rails users, is Vlad the Deployer.