Hi, I take backups of my Heroku database using PGBackups, but it only gives me backups in its own predefined place. I want to store those backups in my own remote location, such as a folder on S3 or some other remote storage.
How can I do that automatically on a schedule, e.g. every week or month?
My app is Ruby on Rails. Please help me achieve this.
The latest backup is always available to you by entering heroku pgbackups:url. So, set up a cron job or the equivalent that fetches that URL once a week or once a day.
You could write a rake or Ruby script and call it with Heroku Scheduler (a free add-on), use a different remote machine to pull the backup, or use a shell script that runs:
curl -O `heroku pgbackups:url`
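For example, here is a minimal sketch of a Ruby script you could call from cron on your own machine, or from Heroku Scheduler if the heroku CLI is available on the dyno. It assumes the fog gem is installed and that AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and S3_BUCKET are set in the environment; the app name and file names are illustrative:
# backup_to_s3.rb - fetch the latest Heroku backup and copy it to your own S3 bucket
require 'open-uri'
require 'fog'

# Ask Heroku for a temporary URL to the most recent backup
backup_url = `heroku pgbackups:url --app your-app-name`.strip

# Download the dump to a local file
filename = "backup-#{Time.now.strftime('%Y-%m-%d')}.dump"
File.open(filename, 'wb') do |local|
  URI.parse(backup_url).open { |remote| local.write(remote.read) }
end

# Upload the dump to your own bucket
storage = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)
bucket = storage.directories.get(ENV['S3_BUCKET'])
bucket.files.create(:key => filename, :body => File.open(filename), :public => false)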
Here is a Gem that appears to do what you want: https://coderwall.com/p/w4wpvw
I have recently started working on an existing Heroku environment.
How can I tell if there are database backups scheduled?
Assuming you are using Heroku Postgres, you can view backup schedules with the following command:
heroku pg:backups:schedules
You might have to provide the --app argument so Heroku knows which app you're interested in.
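For example, if your app is named foobar:
heroku pg:backups:schedules --app foobar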
I have implemented a JavaScript script for my Mongo database. The script is called getMetrics.js and I can execute it from my computer by running: mongo getMetrics.js
Now I want to automatically execute that script one time per day. To do so, I have created a Heroku app and I added to it the scheduler add-on (https://devcenter.heroku.com/articles/scheduler).
My main problem is that my scheduled task will run the command "mongo getMetrics.js", and it will fail because the mongo command is not installed in my Heroku app.
How can I run this script from Heroku?
Thanks a lot for your help.
I did the below in a similar case:
Download MongoDB for Linux: https://www.mongodb.com/download-center#community
The bin folder contains the mongo binary.
Make this binary available in your Heroku instance (e.g. if your Heroku app is deployed from your git repo, check this binary in alongside your script).
[Make sure the folder where you keep this binary is on the path; a safe path is inside /bin]
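Roughly, the repo layout and the Scheduler task could look like this (just a sketch; bin/mongo, getMetrics.js and the MONGODB_URI config var are example names, not something Heroku sets up for you):
bin/mongo        # the Linux mongo shell binary, checked in and marked executable
getMetrics.js    # your script, in the project root
# Heroku Scheduler task command (the mongo shell accepts a connection string before the script):
bin/mongo "$MONGODB_URI" getMetrics.js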
I am running scheduled backups on my Heroku application (Rails 4). I downloaded the backup that ran last night and I'd like to view the contents of the file. When I open it in TextEdit, I can see the tables, but the rest is just a mix of characters and letters.
I have scheduled backups running every evening on my Heroku PG db so I just had to download the backup from the previous day. To download the backup I ran:
heroku pg:backups public-url --app foobar
I then visited the AWS S3 link provided by Heroku and downloaded the file onto my computer.
How would I go about either viewing the contents of the backup in terminal or any other GUI without restoring the backup into my Heroku application?
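One way to inspect such a file, assuming the Postgres client tools (including pg_restore) are installed locally: Heroku's backups are pg_dump custom-format archives, and pg_restore can convert one back into a plain SQL script that any text editor can display (file names here are illustrative):
pg_restore -f backup.sql latest.dump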
I have a small app, and I run backups manually using heroku pgbackups:capture on my dev machine.
I'd like to use the Heroku Scheduler send these backups to my own S3 bucket.
The thing is: pg_dump is not available on Heroku boxes, and heroku pgbackups:capture is a local CLI command, which is also not available there.
Is there another way to achieve this using Scheduler?
I need to periodically import some data into my rails app on Heroku.
The task to execute is split into the following parts:
* download a big zip file (e.g. ~100mb) from a website
* unzip the file (unzipped space is ~1.50gb)
* run a rake script that reads those file and create or update records using my active record models
* cleanup
How can I do this on Heroku? Is it better to use some external storage (e.g. S3)?
How would you approach such a thing?
Ideally this needs to run every night.
I tried the exact same thing a couple of days back, and the conclusion I came to was that this can't be done because of the memory limits Heroku imposes on each process. (I built a data structure from the files I read from the internet and tried to push it to the DB.)
I was using a rake task that would pull and parse a couple of big files and then populate the database.
As a workaround, I now run this rake task on my local machine, push the database dump to S3, and issue a heroku command from my local machine to restore the Heroku DB instance:
"heroku pgbackups:restore 'http://s3.amazonaws.com/#{yourfilepath}' --app #{APP_NAME} --confirm #{APP_NAME}"
You could push to S3 using the fog library:
require 'rubygems'
require 'fog'

# connect to S3
connection = Fog::Storage.new(
  :provider              => 'AWS',
  :aws_secret_access_key => "#{YOUR_SECRET}",
  :aws_access_key_id     => "#{YOUR_ACCESS_KEY}"
)

directory = connection.directories.get("#{YOUR_BACKUP_DIRECTORY}")

# upload the file
file = directory.files.create(
  :key    => "#{REMOTE_FILE_NAME}",
  :body   => File.open("#{LOCAL_BACKUP_FILE_PATH}"),
  :public => true
)
The command that I use to make a pg backup on my local machine is:
system "PGPASSWORD=#{YOUR_DB_PASSWORD} pg_dump -Fc --no-acl --no-owner -h localhost -U #{YOUR_DB_USER_NAME} #{YOUR_DB_DATABASE_NAME} > #{LOCAL_BACKUP_FILE_PATH}"
I have a rake task that automates all of these steps.
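A rough sketch of what such a rake task might look like, reusing the placeholder names from the snippets above (all paths, credentials, and the app name are things you would fill in yourself):
namespace :db do
  desc "Dump the local DB, upload the dump to S3, and restore it on Heroku"
  task :push_to_heroku => :environment do
    # 1. dump the local database in pg_dump custom format
    system "PGPASSWORD=#{YOUR_DB_PASSWORD} pg_dump -Fc --no-acl --no-owner -h localhost " \
           "-U #{YOUR_DB_USER_NAME} #{YOUR_DB_DATABASE_NAME} > #{LOCAL_BACKUP_FILE_PATH}"

    # 2. upload the dump to S3 with fog (same calls as above)
    connection = Fog::Storage.new(
      :provider              => 'AWS',
      :aws_secret_access_key => "#{YOUR_SECRET}",
      :aws_access_key_id     => "#{YOUR_ACCESS_KEY}"
    )
    directory = connection.directories.get("#{YOUR_BACKUP_DIRECTORY}")
    file = directory.files.create(
      :key    => "#{REMOTE_FILE_NAME}",
      :body   => File.open("#{LOCAL_BACKUP_FILE_PATH}"),
      :public => true
    )

    # 3. tell Heroku to restore from the file's public S3 URL
    system "heroku pgbackups:restore '#{file.public_url}' --app #{APP_NAME} --confirm #{APP_NAME}"
  end
end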
Another thing you might try is using a worker (DelayedJob). I guess you can configure your workers to run every 24 hours. I think workers don't have the 30-second request timeout restriction, but I am not sure about the memory usage.