How to deploy a Dash app with dash_auth to Heroku through GitHub branch tracking?

I'm building a Dash app that uses dash_auth for basic authentication. Unfortunately, this requires hardcoding a dictionary of usernames and passwords. This is not a huge problem since the app is only for in-house use.
Now, we would like to deploy this to Heroku by automatically tracking one branch of the GitHub repo, because this seems most convenient. The problem is that this would require us to put the hardcoded passwords in the GitHub repository as well.
This post suggested using environment variables for tokens and client keys, but how should I do this for a dictionary of passwords?
I'm open to alternative solutions as well.

Nothing really changes when doing this with a dictionary. You just need to parse the JSON string into a Python data structure.
In your application, instead of hard-coding the dictionary as shown in the documentation:
VALID_USERNAME_PASSWORD_PAIRS = {
    'hello': 'world'
}
pull it in from the environment, e.g. something like this:
import json
import os

# Note: json.loads() will raise an error if the variable is unset;
# see below for a default suitable for local development.
VALID_USERNAME_PASSWORD_PAIRS = json.loads(os.getenv("VALID_USERNAME_PASSWORD_PAIRS"))
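For completeness, the parsed dictionary is then handed to dash_auth as in the documented basic-auth example (a sketch):
import dash
import dash_auth

app = dash.Dash(__name__)
auth = dash_auth.BasicAuth(app, VALID_USERNAME_PASSWORD_PAIRS)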
Then set your username/password pairs as a Heroku config var:
heroku config:set VALID_USERNAME_PASSWORD_PAIRS='{"hello": "world"}'
The single quotes here should avoid most issues with special characters being interpreted by your shell.
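If the passwords themselves contain quotes or other characters with special meaning to the shell, one way to sidestep quoting mistakes is to let Python generate the JSON for you (placeholder credentials here):
python -c 'import json; print(json.dumps({"hello": "world"}))'
Then paste the output into the heroku config:set command above.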
For local development you can set a VALID_USERNAME_PASSWORD_PAIRS environment variable, e.g. via a .env file if you are using tooling that understands that.
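A minimal sketch with python-dotenv, assuming a .env file next to your script (and listed in .gitignore):
# .env (not committed):
#   VALID_USERNAME_PASSWORD_PAIRS={"hello": "world"}

# pip install python-dotenv
import json
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env into the process environment
VALID_USERNAME_PASSWORD_PAIRS = json.loads(os.getenv("VALID_USERNAME_PASSWORD_PAIRS"))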
Another option for local development would be to hard-code just a default value into your script by adding a default argument:
VALID_USERNAME_PASSWORD_PAIRS = json.loads(
    os.getenv("VALID_USERNAME_PASSWORD_PAIRS", default='{"local": "default"}')
)
Note that we give default a string here, not a dict, since we're passing the result into json.loads().
Be careful with this last option since you could accidentally publish the code without setting the environment variable, in which case the local default credentials would work.
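One way to guard against that is to fail fast when running in production without the real variable; a sketch assuming Heroku, which sets a DYNO environment variable on its dynos:
import json
import os

# DYNO is set by Heroku; if we are on Heroku but the credentials are not
# configured, refuse to start instead of falling back to the local default.
if "DYNO" in os.environ and "VALID_USERNAME_PASSWORD_PAIRS" not in os.environ:
    raise RuntimeError("VALID_USERNAME_PASSWORD_PAIRS config var is not set")

VALID_USERNAME_PASSWORD_PAIRS = json.loads(
    os.getenv("VALID_USERNAME_PASSWORD_PAIRS", default='{"local": "default"}')
)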

Related

Access agent hostname for a build variable

I've got release pipelines defined that have worked. I've got a config transform that will write an API URL to a config file (currently with a hardcoded API URL).
What I'd like to do is have the config be rewritten based on the agent it's being deployed on.
E.g. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into the config using that transform step.
The .somedomain.com/api part can be static.
I've tried modifying the pipeline variable's value to be https://${{Environment.Name}}.somedomain.com/api, but it just replaces the API_URL in the config with that literal string (it does not populate the machine name in that variable).
Since variables are the source of the value that is written to configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
I'm using non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
I can't just use localhost, as the configuration is being read into a JavaScript-rich app, which would have JS trying to connect to localhost instead of the server.
I'm interested in any way I could solve this problem.
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
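As a sketch of the YAML case: Environment.Name only exists inside a deployment job, so the environment would have to be named after the target machine for this to yield the URL you want (TEST-1 here is hypothetical):
jobs:
- deployment: Deploy
  environment: TEST-1
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "https://$(Environment.Name).somedomain.com/api"
Alternatively, the predefined $(Agent.MachineName) variable is available in any job and may be closer to what you want, since it holds the name of the machine the agent runs on.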

Is there a way in Terraform Enterprise to read the payload from VCS?

I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to take part of the branch name where the push was made and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object for accessing the payload (or at least the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you have a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance
OK, after some trial and error I found out that it is not possible to get any of this information in the Terraform module if you are using the VCS mode. So, in order to be able to get the branch, I have these options:
Use several workspaces
You can configure a workspace for each branch and create a variable selecting that branch in each workspace. The problem is you will be repeating yourself with this option.
Use Terraform CLI and a GitHub Action
I used this fine tutorial from HashiCorp for creating a GitHub Action that uses Terraform Cloud. It gets 99% of the job done. For passing a variable you must be aware that there are two methods: using a file or using an environment variable (check that information on the HashiCorp site here). So using a:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my GitHub Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > terraform.auto.tfvars <<EOF
    branch = "${GITHUB_REF#refs/*/}"
    EOF
After defining a variable called branch within Terraform, I was able to read and work with this value.
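For reference, the matching declaration on the Terraform side would look something like this (the default is just an assumed fallback for runs where no tfvars file is present):
variable "branch" {
  type        = string
  description = "Branch name written by the GitHub Action into terraform.auto.tfvars"
  default     = "main" # assumed fallback
}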

Standard practice for .wsgi secret key for Flask applications on GitHub repositories [duplicate]

This question already has answers here: Where should I place the secret key in Flask?
I am building a Flask web application and would like to put it on a GitHub repo.
I notice that in the .wsgi file
#!/usr/bin/python
import sys
import logging
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0,"/var/www/hashchain/")
from hashchain import app as application
application.secret_key = 'super secret key'
There is an application.secret_key for encryption...
I am guessing that the standard way of putting a Flask web app on GitHub would include the entire Flask app folder but NOT the .wsgi file?
That way, contributors can freely run Flask in debug mode on their own localhost to develop it further, and if they really want, they can deploy it to their own server (but will have to write their own .wsgi file and config for the server in their control).
Is this the correct way to think about it? I'm guessing that if I put the .wsgi file on GitHub it would be open season feasting for hackers?
I'm also guessing that if I hypothetically already did this, I would need to change the secret key after deleting it from the GitHub repo, because people could just look at the commit history to see it!
The general way to do this is to read it from an environment variable:
import os
application.secret_key = os.getenv('SECRET_KEY', 'for dev')
Note that it also sets a default value for development.
You can set the environment variable SECRET_KEY manually:
$ export SECRET_KEY=your_key_here  # on Windows, use: set SECRET_KEY=your_key_here
Or you can save it in a .env file at project root:
SECRET_KEY=you_key_here
Add it into .gitignore:
.env
Then you can use python-dotenv or something similar to import the variable:
# pip install python-dotenv
import os
from dotenv import load_dotenv
load_dotenv()
application.secret_key = os.getenv('SECRET_KEY', 'for dev')
As commented, the secret or any other sensitive information should never be part of a Git repository.
To illustrate that, see ubuntudesign/git-mirror-service, a simple WSGI server to create a mirror of a remote git repository on another remote.
It does include the step:
Optional secret
By default the server is unsecured - anyone who can access it can use it to mirror to repositories that the server has access to.
To prevent this, you can add a secret:
echo "79a36d50-09be-4bf4-b339-cf005241e475" > .secret
Once this file is in place, the service will only allow requests if the secret is provided.
NB: For this to be an effective security measure, the server should only be accessible over HTTPS.
The file is ignored in .gitignore.
And wsgi.py reads it if present:
import os

# script_dir is the directory containing wsgi.py; roughly:
script_dir = os.path.dirname(os.path.realpath(__file__))

secret_filename = os.path.join(script_dir, ".secret")
if os.path.isfile(secret_filename):
    with open(secret_filename) as secret_file:
        real_secret = secret_file.read().strip()

Setting :deploy_to from server config in Capistrano3

In my Capistrano 3 deployment, I would like to use set :deploy_to, -> { "/srv/www/#{fetch(:application)}" } so that :deploy_to is different for each server it deploys to.
In my staging.rb file I have:
server 'dev.myserver.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/path'
server 'dev.myserver2.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/other/path'
My question is: would it be possible to use the install_path I defined in my :deploy_to? If that's possible, how would you do it?
Finally, after looking around, I came across an issue comment from one of the developers of Capistrano, stating specifically that it can't be done.
Quote from the Github issue:
Not possible, sorry. fetch() (as is documented widely) reads values
set by set(), the only reason to use set() and fetch() over regular
ruby variables is to provide a consistent API between plugins and
extensions, and because set() can take a Proc to be resolved later.
The variables you are setting in the host object via the server()
command belong to an individual host, some of them, user, roles, etc
have special meanings. For more information see
https://github.com/capistrano/sshkit/blob/master/EXAMPLES.md#do-something-different-on-one-host-or-another-depending-on-a-host-property.
If you specifically need to deploy to a different directory on each
machine you probably should not be using the built-in tasks (they
don't fit your needs), and rather copy the deploy.rake from the Gem
into your own project, and modify it as you need. Which in this case
might be to not take fetch(:deploy_to), but to read that from a host
property.
You could try to do something where before doing anything that relies
on calling fetch(:deploy_to), you set() it using the value from
host.someproperty but I'm pretty sure that'll break in exciting and
interesting ways.
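If all you need is to act on the per-host path in your own tasks, here is a minimal sketch along the lines the quote suggests, using the install_path property from the server definitions above (note the built-in deploy tasks will still read fetch(:deploy_to)):
# lib/capistrano/tasks/custom.rake (hypothetical file name)
task :show_install_paths do
  on roles(:app) do |host|
    # Custom keys passed to server() end up in host.properties
    info "#{host}: deploying to #{host.properties.install_path}"
  end
end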

CherryPy: Accessing Global config

I'm working on a CherryPy application based on what I found in that BitBucket repository.
As in this example, there are two config files, server.cfg (aka "global") and app.cfg.
Both config files are loaded in the serve.py file:
# Update the global settings for the HTTP server and engine
cherrypy.config.update(os.path.join(self.conf_path, "server.cfg"))
# ...
# Our application
from webapp.app import Twiseless
webapp = Twiseless()
# Let's mount the application so that CherryPy can serve it
app = cherrypy.tree.mount(webapp, '/', os.path.join(self.conf_path, "app.cfg"))
Now, I'd like to add the Database configuration.
My first thought was to add it in the server.cfg (is this the best place, or should it be located in app.cfg?).
But if I add the Database configuration in the server.cfg, I don't know how to access it.
Using:
cherrypy.request.app.config['Database']
works only if the [Database] section is in app.cfg.
I tried to print cherrypy.request.app.config, and it shows me only the values defined in app.cfg, nothing from server.cfg.
So I have two related questions:
Is it best to put the database connection in the server.cfg or the app.cfg file?
How do I access the server.cfg (aka global) configuration in my code?
Thanks for your help! :)
Put it in the app config. A good question to help you decide where to put such things is, "if I mounted an unrelated blog app at /blogs on the same server, would I want it to share that config?" If so, put it in server config. If not, put it in app config.
Note also that the global config isn't sectioned, so you can't stick a [Database] section in there anyway. Only the app config allows sections. If you wanted to stick database settings in the global config anyway, you'd have to consider config entry names like "database_port" instead. You would then access it directly by that name: cherrypy.config.get("database_port").
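For example, a minimal sketch of the app.cfg route, assuming a hypothetical [Database] section with host and port entries (CherryPy config values are Python literals):
# app.cfg:
# [Database]
# host = "localhost"
# port = 5432

import cherrypy

class Root:
    @cherrypy.expose
    def index(self):
        # Per-application config, including custom sections, lives on the
        # mounted app and is reachable through the current request.
        db_conf = cherrypy.request.app.config['Database']
        return "Would connect to %s:%d" % (db_conf['host'], db_conf['port'])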