Standard practice for .wsgi secret key for Flask applications on GitHub repositories [duplicate]

This question already has answers here:
Where should I place the secret key in Flask?
(2 answers)
Closed 4 years ago.
I am building a Flask web application and would like to put it on a GitHub repo.
I notice that in the .wsgi file
#!/usr/bin/python
import sys
import logging
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0,"/var/www/hashchain/")
from hashchain import app as application
application.secret_key = 'super secret key'
There is an application.secret_key for encryption...
I am guessing that the standard way of putting a Flask web app on GitHub would include cloning the entire Flask app folder in its entirety but NOT the .wsgi file?
That way, contributors can freely run Flask in debug mode on their own localhost to develop it further, and if they really want to, they can deploy it to their own server (but will have to write their own .wsgi file and config for the server in their control).
Is this the correct way to think about it? I'm guessing that if I put the .wsgi file on GitHub it would be open season for hackers?
I'm also guessing that if I hypothetically already did this, I would need to change the secret key after deleting it from the GitHub repo, because people could just look at the commit history to see it!

The general way to do this is to read it from an environment variable:
import os
application.secret_key = os.getenv('SECRET_KEY', 'for dev')
Note that it also sets a default value for development.
You can set the environment variable SECRET_KEY manually:
$ export SECRET_KEY=your_key_here  # use "set ..." instead on Windows
Or you can save it in a .env file at project root:
SECRET_KEY=your_key_here
Add it into .gitignore:
.env
Then you can use python-dotenv or something similar to import the variable:
# pip install python-dotenv
import os
from dotenv import load_dotenv
load_dotenv()
application.secret_key = os.getenv('SECRET_KEY', 'for dev')
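To generate a strong value for SECRET_KEY in the first place, here is a quick sketch using Python's standard secrets module (run it once and store the output in your .env or environment):
import secrets

# Print a random 64-character hex string suitable as a Flask secret key.
print(secrets.token_hex(32))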

As commented, the secret key or any other sensitive information should never be part of a Git repository.
To illustrate that, see ubuntudesign/git-mirror-service, a simple WSGI server to create a mirror of a remote git repository on another remote.
It does include the step:
Optional secret
By default the server is unsecured - anyone who can access it can use it to mirror to repositories that the server has access to.
To prevent this, you can add a secret:
echo "79a36d50-09be-4bf4-b339-cf005241e475" > .secret
Once this file is in place, the service will only allow requests if the secret is provided.
NB: For this to be an effective security measure, the server should be only accessible over HTTPS.
The file is ignored in .gitignore.
And wsgi.py reads it if present:
import os

# script_dir: assumed to be the directory containing wsgi.py (added for context).
script_dir = os.path.dirname(os.path.abspath(__file__))
secret_filename = os.path.join(script_dir, ".secret")
if os.path.isfile(secret_filename):
    with open(secret_filename) as secret_file:
        real_secret = secret_file.read().strip()
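From there, a request-supplied secret can be compared against real_secret. A minimal sketch of such a check (the helper below is illustrative, not necessarily how git-mirror-service implements it):
import hmac

def secret_matches(provided, real_secret):
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(provided, real_secret)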

Related

How to deploy a Dash app with dash_auth to Heroku through GitHub branch tracking?

I'm building a Dash app using the basic authentication from dash_auth. Unfortunately, this requires hardcoding a dictionary of usernames and passwords. This is not a huge problem since the app is only for in-house use.
Now, we would like to deploy this to Heroku by automatically tracking one branch of the GitHub repo, because this seems most convenient. The problem is that this would require us to put the hardcoded passwords in the GitHub repository as well.
This post suggested using environment variables for tokens and client keys but how should I do this for dictionaries of passwords?
I'm open to alternative solutions as well.
Nothing really changes when doing this with a dictionary. You just need to parse the JSON string into a Python data structure.
In your application, instead of hard-coding the dictionary as shown in the documentation:
VALID_USERNAME_PASSWORD_PAIRS = {
    'hello': 'world'
}
pull it in from the environment, e.g. something like this:
import json
import os
VALID_USERNAME_PASSWORD_PAIRS = json.loads(os.getenv("VALID_USERNAME_PASSWORD_PAIRS"))
Then set your usernames as Heroku config vars:
heroku config:set VALID_USERNAME_PASSWORD_PAIRS='{"hello": "world"}'
The single quotes here should avoid most issues with special characters being interpreted by your shell.
For local development you can set a VALID_USERNAME_PASSWORD_PAIRS environment variable, e.g. via a .env file if you are using tooling that understands that.
Another option for local development would be to hard-code just a default value into your script by adding a default argument:
VALID_USERNAME_PASSWORD_PAIRS = json.loads(
    os.getenv("VALID_USERNAME_PASSWORD_PAIRS", default='{"local": "default"}')
)
Note that we give default a string here, not a dict, since we're passing the result into json.loads().
Be careful with this last option since you could accidentally publish the code without setting the environment variable, in which case the local default credentials would work.
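For completeness, a minimal sketch tying the environment-based credentials into dash_auth, following the pattern from the dash_auth documentation (the app setup here is illustrative):
import json
import os

import dash
import dash_auth

app = dash.Dash(__name__)

VALID_USERNAME_PASSWORD_PAIRS = json.loads(
    os.getenv("VALID_USERNAME_PASSWORD_PAIRS", default='{"local": "default"}')
)

# Wrap the app with HTTP basic auth using the parsed credentials.
auth = dash_auth.BasicAuth(app, VALID_USERNAME_PASSWORD_PAIRS)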

How can I import existing password into Pulumi?

I'm trying to import an existing system into Pulumi. In particular, I wish to support generating passwords for any new stacks, but use the existing password for the existing stack. Is this possible?
I've tried the following as per https://www.pulumi.com/docs/reference/pkg/random/randompassword/#import
pulumi new azure-python
pulumi plugin install resource random 3.1.1
pulumi import random:index/randomPassword:RandomPassword password Password123!
This gives the error random:index/randomPassword:RandomPassword resource 'password' has a problem: Required attribute is not set. Examine values at 'RandomPassword.Length'. This makes sense, but it's not clear from the docs I've read whether it is possible to set the value of an attribute when importing a resource.
This is using the latest version of Pulumi (2.32.1), which I'm using local login for.
This is actually for a training exercise I'm writing, so if the answer is too unpleasant (e.g. exporting the state and reimporting it with the real password) it's probably not worth doing.

Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First I have to run the following two commands to set environment variables
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what to replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the "cloudasset.googleapis.com" gcloud service. For this they gave the following command.
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is given in the screenshot attached herewith:
Error in the service enabling command
The next step is to clone the policy library. The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
error on running the directory command
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to put my own credentials; for example, in place of the project ID I wrote my own project ID.
Also, I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID into that variable. This is far from ideal, because Qwiklabs creates a new temporary project for every lab, so this would've only worked if you were still on that project. The project ID can be seen on both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid using the argument --project. According to the documentation, this argument is optional, and if it is omitted the command will use the default project from the configuration settings. You can get the current project by using this:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the directory policy-library/samples. There seems to be a trace that it was once there, but it isn't anymore; they should probably update the lab accordingly.
About your credentials confusion: you don't have to use your own project ID, just the one given in your lab. If I recall properly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged into your temporary project if you are accessing it from Cloud Shell, which is where you should be doing all this.
Adding this for later versions:
In the gcloud shell you can set a temporary variable for the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
then use it like
--project ${PROJECT_ID}

Azure batch Application package not getting copied to Working Directory of Task

I have created an Azure Batch pool with a Linux machine and specified an application package for the pool.
My command line is
command='python $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py',
python3: can't open file '$AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py':
[Errno 2] No such file or directory
When I connect to the node and look at the working directory, none of the application package files are present there.
How do I make sure that files from the application package are available in the working directory, or how can I invoke/execute files under the application package from the command line?
Make sure that your async operations have proper await in place before you start using the package in your code.
Also, please share your design / pseudo-code scenario and how you are approaching it as a design.
Further to add:
It seems like this one is a pool-level package.
The error looks like the application environment variable is either incorrectly used or there is some other user-level issue. Please check out the link below, especially the section where the use of the environment variable is mentioned.
This seems like a user-level issue, because if there is an error downloading the package resource it will be visible to you via an exception handler, at the tool level if you are using Batch Explorer / Batch Labs, or via code-level exception handling.
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Reason / rationale:
If the pool-level or task-level application package has an error, an error list will come back; an error in the application package will be returned as a UserError or an AppPackageError, which will be visible in the exception handling of the code.
Key point: you can always RDP into your node and check the package availability; information here: https://learn.microsoft.com/en-us/azure/batch/batch-api-basics#connecting-to-compute-nodes
I once created a small sample to help people out, so that resource might help you to check out the usage here.
Hope the rest helps.
On Linux, the application package environment variable with version string is formatted as:
AZ_BATCH_APP_PACKAGE_{0}_{1}
where {0} is the application name and {1} is the version. On Windows it is formatted as:
AZ_BATCH_APP_PACKAGE_APPLICATIONID#version
$AZ_BATCH_APP_PACKAGE_scriptv1_1 will take you to the root folder where the application was unzipped.
Does this "exact" path exist in that location?
tasks/XXX/get_XXXXX_data.py
You can see more information here:
https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Edit: Just saw this question: "or can I invoke/execute files under Application Package from command line"
Yes you can invoke and execute files from the application package directory with the environment variable above.
If you type env on the node you will see the environment variables that have been set.
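One more thing worth checking: the error in the question shows the literal, unexpanded string $AZ_BATCH_APP_PACKAGE_scriptv1_1, and Batch task command lines do not run under a shell by default, so environment variables are only expanded if you invoke a shell explicitly. A hedged sketch of the wrapped command line (paths as in the question):
# Wrap the task command in /bin/bash -c so that the node's shell
# expands $AZ_BATCH_APP_PACKAGE_scriptv1_1 before running python3.
command = (
    "/bin/bash -c "
    "'python3 $AZ_BATCH_APP_PACKAGE_scriptv1_1/tasks/XXX/get_XXXXX_data.py'"
)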

CherryPy : Accessing Global config

I'm working on a CherryPy application based on what I found in that BitBucket repository.
As in this example, there are two config files, server.cfg (aka "global") and app.cfg.
Both config files are loaded in the serve.py file:
# Update the global settings for the HTTP server and engine
cherrypy.config.update(os.path.join(self.conf_path, "server.cfg"))
# ...
# Our application
from webapp.app import Twiseless
webapp = Twiseless()
# Let's mount the application so that CherryPy can serve it
app = cherrypy.tree.mount(webapp, '/', os.path.join(self.conf_path, "app.cfg"))
Now, I'd like to add the Database configuration.
My first thought was to add it in the server.cfg (is this the best place, or should it be located in app.cfg?).
But if I add the Database configuration in the server.cfg, I don't know how to access it.
Using:
cherrypy.request.app.config['Database']
works only if the [Database] section is in app.cfg.
I tried to print cherrypy.request.app.config, and it shows me only the values defined in app.cfg, nothing in server.cfg.
So I have two related questions:
Is it best to put the database connection in the server.cfg or the app.cfg file?
How do I access the server.cfg (aka global) configuration in my code?
Thanks for your help! :)
Put it in the app config. A good question to help you decide where to put such things is, "if I mounted an unrelated blog app at /blogs on the same server, would I want it to share that config?" If so, put it in server config. If not, put it in app config.
Note also that the global config isn't sectioned, so you can't stick a [Database] section in there anyway. Only the app config allows sections. If you wanted to stick database settings in the global config anyway, you'd have to consider config entry names like "database_port" instead. You would then access it directly by that name: cherrypy.config.get("database_port").
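A minimal sketch of the app-config approach (section and key names here are illustrative):
# app.cfg (illustrative):
#
#   [Database]
#   host = "localhost"
#   port = 5432

import cherrypy

class Twiseless(object):
    @cherrypy.expose
    def index(self):
        # The mounted app's config includes the [Database] section from app.cfg.
        db = cherrypy.request.app.config['Database']
        return "Would connect to %s:%d" % (db['host'], db['port'])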