Heroku Review Apps: copy DB to review app - postgresql

Trying to fully automate Heroku's Review Apps (beta) for an app. Heroku wants us to use db/seeds.rb to seed the recently spun up instance's DB.
We don't have a db/seeds.rb with this app. We'd like to set up a script to copy the existing DB from the current parent (staging) and use that as the DB for the new app under review.
This I can do manually:
heroku pg:copy myapp::DATABASE_URL DATABASE_URL --app myapp-pr-1384 --confirm myapp-pr-1384
But I can't figure out how to get the app name that Heroku creates into the postdeploy script.
Anyone tried this and know how it might be automated?

I ran into this same issue and here is how I solved it.
Set up the database URL you want to copy from as an environment variable on the base app for the pipeline. In my case this is STAGING_DATABASE_URL. The URL format is postgresql://username:password@host:port/db_name.
In your app.json file make sure to copy that variable over.
In your app.json provision a new database which will set the DATABASE_URL environment variable.
Use the following script to copy over the database:
pg_dump $STAGING_DATABASE_URL | psql $DATABASE_URL
Here is my app.json file for reference:
{
  "name": "app-name",
  "scripts": {
    "postdeploy": "pg_dump $STAGING_DATABASE_URL | psql $DATABASE_URL && bundle exec rake db:migrate"
  },
  "env": {
    "STAGING_DATABASE_URL": {
      "required": true
    },
    "HEROKU_APP_NAME": {
      "required": true
    }
  },
  "formation": {
    "web": {
      "quantity": 1,
      "size": "hobby"
    },
    "resque": {
      "quantity": 1,
      "size": "hobby"
    },
    "scheduler": {
      "quantity": 1,
      "size": "hobby"
    }
  },
  "addons": [
    "heroku-postgresql:hobby-basic",
    "papertrail",
    "rediscloud"
  ],
  "buildpacks": [
    {
      "url": "heroku/ruby"
    }
  ]
}

An alternative is to share the database between review apps. You can inherit DATABASE_URL in your app.json file.
PS: This is enough for my case, which is a small team; keep in mind that it may not be enough for yours. And I keep my production and test (or staging, or dev, whatever you call it) data separated.
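For reference, a minimal sketch of that sharing approach in app.json might look like the following. This assumes review apps copy env entries declared without a value from the parent app (the same mechanism the STAGING_DATABASE_URL entry above relies on), and that you do not provision a heroku-postgresql addon for the review app, so DATABASE_URL keeps pointing at the parent's database:
{
  "env": {
    "DATABASE_URL": {
      "required": true
    }
  }
}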

Alternatively:
Another solution using pg_restore, thanks to https://gist.github.com/Kalagan/1adf39ffa15ae7a125d02e86ede04b6f:
{
  "scripts": {
    "postdeploy": "pg_dump -Fc $DATABASE_URL_TO_COPY | pg_restore --clean --no-owner -n public -d $DATABASE_URL && bundle exec rails db:migrate"
  }
}

I ran into problem after problem trying to get this to work. This postdeploy script finally worked for me:
pg_dump -cOx $STAGING_DATABASE_URL | psql $DATABASE_URL && bundle exec rails db:migrate
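For anyone wondering what those short flags do, here is the same pipeline spelled out as a sketch (assuming both URLs are set on the review app):
# -c: emit DROP statements so a re-copy starts from a clean schema
# -O: skip ownership commands (the review app's database user differs from staging's)
# -x: skip GRANT/REVOKE privilege statements
pg_dump -cOx "$STAGING_DATABASE_URL" | psql "$DATABASE_URL"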

I see && bundle exec rails db:migrate as part of the postdeploy step in a lot of these responses.
Should that actually just be bundle exec rails db:migrate in the release section of app.json?
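As far as I understand, app.json has no release section; the release phase lives in the Procfile, while postdeploy only runs once, when the review app is first created. A minimal sketch of a Procfile with a release phase (the web line is just an assumed example):
release: bundle exec rails db:migrate
web: bundle exec puma -C config/puma.rb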

Related

How to connect jupyter to an existing environment WITHOUT altering that environment in any way

One can currently connect jupyter to an existing environment, if one installs ipykernel in that particular environment first (and then creates a "kernel" for that environment).
My question is how can that be achieved without touching the environment.
I tried creating a kernelspec.json file manually:
"argv": [
"/path/to/envs/myenv/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "myenv",
"language": "python",
"metadata": {
"debugger": true
}
}
but that doesn't work.
Any hints (even regarding why my request is not sensible) are appreciated.
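For context, Jupyter discovers kernel specs in the directories reported by jupyter kernelspec list, and the file inside each spec directory must be named kernel.json. A minimal sketch of registering one by hand on Linux (the paths are assumptions); note that -m ipykernel_launcher still requires ipykernel to be importable by that interpreter, which is why a manual spec alone may not be enough without touching the environment:
# create a kernelspec directory in the user data dir (Linux default shown)
mkdir -p ~/.local/share/jupyter/kernels/myenv
cat > ~/.local/share/jupyter/kernels/myenv/kernel.json <<'EOF'
{
  "argv": ["/path/to/envs/myenv/bin/python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
  "display_name": "myenv",
  "language": "python"
}
EOF
# confirm Jupyter can see the new kernel
jupyter kernelspec list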

Running nx target only if file doesn't exist

I have a project which has a build step, however, I need to make sure that the file firebase.config.json exists before running the build command.
With that, I have two NPM scripts:
// package.json
{
  ...,
  "nx": {
    "targets": {
      "prepare": {
        "outputs": ["firebase.config.json"]
      },
      "build": {
        "outputs": ["dist"],
        "dependsOn": [
          {
            "target": "prepare",
            "projects": "self"
          }
        ]
      }
    }
  },
  "scripts": {
    "prepare": "firebase apps:sdkconfig web $FIREBASE_APP_ID_SHOP --json | jq .result.sdkConfig > firebase.config.json",
    "build": "VITE_FIREBASE_CONFIG=$(cat ./firebase.config.json) vite build"
  },
  ...
}
So with the above, every time I run nx build app it will first run prepare and build the firebase.config.json file.
However, every time I make a change to any of the source files inside my project, prepare re-runs even though the firebase.config.json is already present.
Is it possible for nx to only run a target if the file declared under outputs is not present?
If you are in a bash environment you can modify your prepare script to be the following (note the original command has been shortened with ellipses for readability).
// package.json
{
  "scripts": {
    "prepare": "CONFIG=firebase.config.json; [ -f \"$CONFIG\" ] || firebase apps:sdkconfig ... | jq ... > \"$CONFIG\""
  }
}
The above prepare script will still run, but it should not spend any time reproducing the configuration file if it already exists.
CONFIG=firebase.config.json is just putting our file in a bash environment variable so we can use it in multiple places (helps prevent typos). [ -f "$CONFIG" ] will return true if $CONFIG holds a filename which corresponds to an existing file. If it returns true, it will short-circuit the || (OR) command.
If you want further verification of this technique, you can test this concept at the terminal with the command [ -f somefile.txt ] || echo "File does not exist". If somefile.txt does not exist, then the echo will run. If the file does exist, then the echo will not run.
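As a concrete illustration of that test (somefile.txt is just a placeholder name):
[ -f somefile.txt ] || echo "File does not exist"   # prints the message while the file is missing
touch somefile.txt
[ -f somefile.txt ] || echo "File does not exist"   # prints nothing now, so the right-hand side was skipped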
A slightly-related side-note: while you clearly can do this all in the package.json configuration, if your nx workspace is going to grow to include other libraries or applications, I highly recommend splitting up all your workspace configuration into the default nx configuration files: nx.json, workspace.json, and the per-project project.json files for the sake of readability/maintainability.
Best of luck!

ECS Task Definition - When overriding ENTRYPOINT, Docker image's CMD is dropped

I have a Docker Image built with the following CMD
# Dockerfile
...
CMD ["nginx", "-g", "daemon off;"]
When my task definition does not include entryPoint or command the task successfully enters a running state.
{
  "containerDefinitions": [
    {
      "image": "<myregistry>/<image>",
      ...
    }
  ]
}
I need to run an agent in some instances of this container, so I am using an entrypoint for this task to run my agent. The problem is when I add an entryPoint parameter to the task definition, the container starts and immediately stops.
This is what I'm doing to add the entryPoint:
{
  "containerDefinitions": [
    {
      "image": "<myregistry>/<image>",
      ...
      "entryPoint": [
        "custom-entry-point.sh"
      ]
    }
  ]
}
And here is the contents of custom-entry-point.sh:
#!/bin/bash
# start the agent in the background, then hand control to the container's command (CMD)
/myagent &
echo "CMD is: $@"
exec "$@"
To confirm my suspicion that CMD is dropped, the logs just show:
CMD is:
If I add the CMD array from the Dockerfile to the task definition with the command parameter, it works fine and the task starts:
{
  "containerDefinitions": [
    {
      "image": "<myregistry>/<image>",
      ...
      "entryPoint": [
        "custom-entry-point.sh"
      ],
      "command": [
        "nginx",
        "-g",
        "daemon off;"
      ]
    }
  ]
}
And the logs show the expected:
CMD is: nginx -g daemon off;
I have many Docker images with various iterations of CMD, and I do not want to have to copy these into my task definitions. It seems that adding only an entryPoint to a task definition should not override a Docker image's CMD with an empty value.
Hoping some ECS / fargate experts can help shed some light on a path forward.
Some tips:
Check that your entrypoint script is executable.
Use an absolute path to your entrypoint script.
Check the logs to see the error; hopefully you have the awslogs driver configured?
Have you successfully run the entrypoint version locally?
Also have a read of this for some useful background:
https://aws.amazon.com/blogs/opensource/demystifying-entrypoint-cmd-docker/
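To make the first two tips concrete, a small sketch for the image build (the /usr/local/bin location is only an assumption):
COPY custom-entry-point.sh /usr/local/bin/custom-entry-point.sh
RUN chmod +x /usr/local/bin/custom-entry-point.sh
# then reference that absolute path in the task definition:
#   "entryPoint": ["/usr/local/bin/custom-entry-point.sh"]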
I don't think this has anything to do with ECS. This is how Docker behaves, and there's no way to change it as far as I know.
See https://docs.docker.com/engine/reference/builder/
If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
This particular snippet only refers to defining a new ENTRYPOINT in the image, but this Github discussion confirms the same behavior holds when overriding ENTRYPOINT at runtime.
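You can reproduce this with plain Docker locally, which shows it is not ECS-specific (my-image is a placeholder tag for an image built with the CMD above, and the script is assumed to be on the image's PATH):
docker run --rm my-image                                                              # image CMD is used, nginx starts
docker run --rm --entrypoint custom-entry-point.sh my-image                           # overriding ENTRYPOINT clears CMD, so "$@" is empty
docker run --rm --entrypoint custom-entry-point.sh my-image nginx -g 'daemon off;'    # passing the command explicitly works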
I ran into the same problem, with my entryPoint and command attributes being something like sh -c .... I had to remove sh -c, put the commands in directly, and add #!/bin/sh at the top of my scripts.

How to make trigger command using watchman?

Hi, I will be using watchman to upload images. The images are organized into folders, so I am using a .json file to define the trigger command.
These are my images:
/home/user/Documents/Images/folder1/image1.png
/home/user/Documents/Images/folder2/image2.png
I have a list of exported environment variables in watchman_env:
export CONDA_ENV=image_uploader
export IMG_FOLDER=/home/user/Documents/Images
export UPLOADER_SCRIPT=/home/user/Documents/Script/uploader.sh
export PYTHON_UPLOADER=/home/user/Documents/Script/img_uploader.py
export JSON_TRIGGER=/home/user/Documents/Script/uploader.json
This is my uploader script, ~/Script/uploader.sh
. watchman_env
conda run -n $CONDA_ENV python $PYTHON_UPLOADER $IMG_FOLDER
This is my json configuration, ~/Script/uploader.json:
["trigger", "/home/user/Documents/Images", {
"name": "img_uploader",
"expression": ["match", "**/*.png"],
"command": ["/home/user/Documents/Script/uploader.sh"]
}]
I run the command using another bash file, init.sh, since I want to run a few more commands.
. watchman_env
watchman --json-command < $JSON_TRIGGER
When I run uploader.sh directly, my Python script uploads the two images. However, when I run init.sh, the trigger does not fire. What is wrong with my code? And is that the correct way to use a JSON trigger?
watchman version: 4.9.0
I just added wholename to the match expression and it finally worked. (By default, match compares the pattern against the file's basename, so a pattern containing a slash like **/*.png never matches anything; wholename makes it match against the path relative to the watched root.)
["trigger", "/home/user/Documents/Images", {
"name": "img_uploader",
"expression": ["match", "**/*.png", "wholename"],
"command": ["/home/user/Documents/Script/uploader.sh"]
}]
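If it still does not fire, a couple of commands help confirm the watch and trigger are actually registered (paths as in the question):
watchman watch-list                                   # the Images root should appear here
watchman trigger-list /home/user/Documents/Images     # the img_uploader trigger should be listed
touch /home/user/Documents/Images/folder1/test.png    # dropping in a new .png should now invoke uploader.sh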

Wintersmith: error Error loading plugin './node_modules/wintersmith-coffee/': Cannot find module './plugin'

I built a site with wintersmith in November of 2013. It's live at http://powma.com
I'm coming back to it, but it's not building :-{
I don't mind getting my hands dirty, but I don't know where to start. I'm getting this error:
error Error loading plugin './node_modules/wintersmith-coffee/': Cannot find module './plugin'
Any suggestions?
Thanks!
Mike
UPDATE
Hey, this is because the coffeescript wasn't getting compiled.
I installed it globally, but that didn't help.
$ sudo npm install -g coffee-script
I manually compiled it and moved on to other errors. Any suggestions for what's missing?
$ coffee -c plugin.coffee
Here's my config.json:
{
  "locals":
    { "url": "http://localhost:8080"
    , "title": "Powma"
    , "subTitle": "Linking you to technology"
    , "motto": "We build exceptions sites and applications to connect people to products, services, and each other."
    , "owner": "Michael Cole"
    , "profilePicture": "/static/img/profile-professional.jpg"
    , "inlineSpriteMaxBytes" : 10000
    },
  "views": "./views",
  "plugins":
    [ "./node_modules/wintersmith-coffee/"
    , "./node_modules/wintersmith-stylus/"
    ],
  "require": {
    "moment": "moment",
    "_": "underscore",
    "typogr": "typogr"
  },
  "jade": {
    "pretty": true
  },
  "markdown": {
    "smartLists": true,
    "smartypants": true
  },
  "paginator": {
    "perPage": 3
  }
}
And package.json:
{
  "name": "Powma-com",
  "version": "0.1.1",
  "private": true,
  "engines": {
    "node": "0.10.17"
  },
  "dependencies": {
    "moment": "2.0.x",
    "underscore": "1.5.x",
    "typogr": "0.5.x",
    "wintersmith": "2.0.x",
    "wintersmith-stylus": "git://github.com/MichaelJCole/wintersmith-stylus.git#master",
    "wintersmith-coffee": "0.2.x",
    "express": "3.4.x",
    "sendgrid": "~0.3.0-rc.1.7",
    "express-validator": "~0.8.0",
    "underscore-express": "0.0.4"
  }
}
This is a new dev laptop I'm working with so that may be part of the problem.
I worked around the issue, but didn't fix it. Do I really need to manually compile the coffeescript?
Thanks!
I solved this issue by explicitly specifying plugin.coffee in the config.json file.
{
  ...other stuff...
  "plugins":
    [ "./node_modules/wintersmith-coffee/plugin.coffee"
    , "./node_modules/wintersmith-stylus/plugin.coffee"
    ],
  ...more stuff...
}
It looks like you're missing wintersmith-coffee in node_modules; make sure you have it installed locally with npm install wintersmith-coffee. You can also try removing it from config.json if you're not using it anywhere.
It would also be helpful to see both your config.json and package.json. Also run npm install and npm update to make sure everything referenced in package.json is installed and up to date.
Update
Not having CoffeeScript installed could have been the issue. After installing it globally, I'm not sure whether your existing shell sessions will pick up the new command without being restarted, so open a fresh shell session and see if you can build the site. You can also try testing Wintersmith in isolation from your site: generate a sample site with wintersmith new somepath and see if you can run wintersmith build there. That would be a good start for narrowing the issue down to either your site or your workstation setup.
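Concretely, the isolation test could look something like this (the /tmp path is just an example):
npm install && npm update                      # in your site directory: refresh local deps, including wintersmith-coffee
wintersmith new /tmp/wintersmith-sample        # scaffold a fresh sample site
cd /tmp/wintersmith-sample
wintersmith build                              # if this succeeds, the problem is in the site, not the workstation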