Why do I use custom environment variables on a hosting website? - mongodb

I am new to the hosting world (cloudControl), and I have some problems with application credentials, like database administration (MongoHQ) or Google authentication.
So, should I put those variables with some kind of syntax (something like $variable) in the code, and then run a command line with key-value pairs as variable-value?

If you are using Tornado, this is even simpler: use tornado.options and pass environment variables when running the code.
Use the following in your Tornado code:
define("mysql_host", default="127.0.0.1:3306", help="Main user DB")
define("google_oauth_key", help="Client key for Google Oauth")
Then you can access these values in the rest of your code as:
options.mysql_host
options.google_oauth_key
When you are running your Tornado script, pass the environment variables:
python main.py --mysql_host=$MYSQL_HOST --google_oauth_key=$OAUTH_KEY
assuming both $MYSQL_HOST and $OAUTH_KEY are environment variables. Let me know if you need a full working example or any further help.
Example:
First, set an environment variable:
$ export mongo_uri_env='mongodb://alien:12345@kahana.mongohq.com:10067/essog'
and make changes in your Tornado code:
define("mongo_uri", default="127.0.0.1:28017", help="MongoDB URI")
...
...
uri = options.mongo_uri
and you would run your code as
python main.py --mongo_uri=$mongo_uri_env
If you don't want to pass it when running the script, then you have to read the environment variable directly in your code. For that:
import os
...
...
uri = os.environ['mongo_uri_env']
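You can also combine the two approaches by making the environment variable the default for the Tornado option, so the command-line flag stays optional. A minimal sketch, assuming the mongo_uri_env variable from the example above:
import os
from tornado.options import define, options, parse_command_line

# Fall back to the environment variable, then to a local default,
# when --mongo_uri is not passed on the command line.
define("mongo_uri",
       default=os.environ.get("mongo_uri_env", "127.0.0.1:28017"),
       help="MongoDB URI")

parse_command_line()
uri = options.mongo_uri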

Related

How can I run pytesseract / tesseract in Foundry Code Repositories?

I am trying to use the function image_to_string from the library pytesseract in a repository to perform OCR of PDFs. However, I am getting an error, even though from the checks I would assume the library was loaded correctly.
Does anyone have an idea how to troubleshoot here?
It seems like Foundry is not respecting / running the environment activation script
https://github.com/conda-forge/tesseract-feedstock/blob/main/recipe/activate.sh
that sets the TESSDATA_PREFIX environment variable automatically. However, we can infer the value manually and provide it to the pytesseract API calls.
Define the following helper function:
def _get_tessdata_directory_path():
    import sys
    from pathlib import Path

    # The environment root sits two directories above the Python executable,
    # and the conda package installs its data under <envroot>/share/tessdata.
    env_root = Path(sys.executable).parent.parent
    share_dir = env_root / 'share' / 'tessdata'
    assert share_dir.exists(), 'tessdata directory does not exist in <envroot>/share/tessdata'
    return str(share_dir)
and use it as shown in the following snippet:
tessdata_dir_config = f'--tessdata-dir "{_get_tessdata_directory_path()}"'
pytesseract.image_to_string(image, ..., config=tessdata_dir_config)
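Putting it together, a self-contained sketch, assuming the helper above is in scope (the file name page.png and the lang value are placeholders):
import pytesseract
from PIL import Image

# Point tesseract at the language data we located above.
tessdata_dir_config = f'--tessdata-dir "{_get_tessdata_directory_path()}"'

image = Image.open("page.png")
text = pytesseract.image_to_string(image, lang="eng", config=tessdata_dir_config)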

Convert CloudFormation template (YAML) to Troposphere code

I have a large CloudFormation template written in YAML, and I want to start using Troposphere instead. Is there an easy way to convert the CF template to Troposphere code?
I have noticed this script: https://github.com/cloudtools/troposphere/blob/master/troposphere/template_generator.py
It creates a Troposphere Python object, but I am not sure if it's possible to output it as Troposphere code.
You can do it by converting the CF YAML to JSON and running the cfn2py script (https://github.com/cloudtools/troposphere/blob/master/scripts/cfn2py), passing the JSON file in as an argument.
Adding to the good tip from @OllieB:
Install dependencies using pip or poetry:
https://python-poetry.org/docs/#installation
https://github.com/cloudtools/troposphere
https://github.com/awslabs/aws-cfn-template-flip
pip install 'troposphere[policy]'
pip install cfn-flip
poetry add -D 'troposphere[policy]'
poetry add -D cfn-flip
The command line conversion is something like:
cfn-flip -c -j template-vpc.yaml template-vpc.json
cfn2py template-vpc.json > template_vpc.py
WARNING: it appears that the cfn2py script might not be fully unit tested, because it can generate code that does not pass Troposphere validations. I recommend adding a simple round-trip test to the end of the output Python script, e.g.
if __name__ == "__main__":
    import json

    # t is the Template object defined at the top of the generated script.
    template_py = json.loads(t.to_json())
    with open("template-vpc.json", "r") as json_fd:
        template_cfn = json.load(json_fd)
    assert template_py == template_cfn
See also https://github.com/cloudtools/troposphere/issues/1879 for an example of auto-generation of pydantic models from CFN json schemas.
from troposphere.template_generator import TemplateGenerator
import yaml

with open("aws/cloudformation/template.yaml") as f:
    source_content = yaml.load(f, Loader=yaml.BaseLoader)

template = TemplateGenerator(source_content)
This snippet will give you a template object from the Troposphere library. You can then make modifications using the Troposphere API.
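As a small illustration of what such modifications can look like, continuing from the snippet above and using Troposphere's standard Template API:
# Inspect what was parsed...
for name, resource in template.resources.items():
    print(name, resource.resource_type)

# ...and serialize it back out (to_yaml requires the cfn-flip package).
print(template.to_yaml())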

How can I use a system environment variable inside a pyramid ini file?

I exported a variable called DBURL='postgresql://string' and I want to use it in my configuration ini file, e.g.:
[app:kotti]
sqlalchemy.url = %(DBURL)s
That's not working.
Put this in your __init__.py:
import os

def expandvars_dict(settings):
    """Expands all environment variables in a settings dictionary."""
    return dict((key, os.path.expandvars(value))
                for key, value in settings.items())
Then, once you have exported an environment variable in your shell, the proper syntax to reference it in the .ini is this:
sqlalchemy.url = ${DBURL}
Once you have that environment variable set within your .ini, you can use the configparser interpolation syntax:
sqlalchemy.connection = %(sqlalchemy.url)s%(user:pass and other stuff)s
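For completeness, a minimal sketch of wiring expandvars_dict into the usual Pyramid app factory (the function itself is the one defined above):
from pyramid.config import Configurator

def main(global_config, **settings):
    # Expand ${DBURL} and friends before Pyramid sees the settings.
    settings = expandvars_dict(settings)
    config = Configurator(settings=settings)
    return config.make_wsgi_app()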
Idea stolen from https://stackoverflow.com/a/16446566/2214933
PasteDeploy (the ini format Pyramid is using here) does not support reading directly from environment variables. A couple of common options are:
1) Set that option yourself in your main:
import os
from pyramid.config import Configurator

def main(global_config, **settings):
    settings['sqlalchemy.url'] = os.environ['DBURL']
    config = Configurator(settings=settings)
    ...
2) Define your ini file as a jinja2 template and have a command to render it out to ini format, and just run that as part of your deploy process.
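For example, a tiny render step for option 2 (file names here are hypothetical) could look like:
import os
from jinja2 import Template

# production.ini.j2 would contain e.g.: sqlalchemy.url = {{ dburl }}
with open("production.ini.j2") as f:
    rendered = Template(f.read()).render(dburl=os.environ["DBURL"])

with open("production.ini", "w") as f:
    f.write(rendered)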

Terraform - Pass in Variable to "Source" Parameter

I'm using Terraform in a modular fashion in order to build out my infrastructure: a configuration file calls in the different modules. I want to pass an infrastructure variable that picks out which tagged version of the GitHub repository the application should build from. Most importantly, I'm trying to figure out how to concatenate a string in the "source" parameter of the configuration file.
module "athenaelb" {
source = "${concat("git::https://github.com/ORG/REPONAME.git?ref=",var.infra_version)}"
aws_access_key = "${var.aws_access_key}"
aws_secret_key = "${var.aws_secret_key}"
aws_region = "${var.aws_region}"
availability_zones = "${var.availability_zones}"
subnet_id = "${var.subnet_id}"
security_group = "${var.athenaelb_security_group}"
branch_name = "${var.branch_name}"
env = "${var.env}"
sns_topic = "${var.sns_topic}"
s3_bucket = "${var.elb_s3_bucket}"
athena_elb_sns_topic = "${var.athena_elb_sns_topic}"
infra_version = "${var.infra_version}"
}
I want it to compile and for the source to look like this (for example): git::https://github.com/ORG/REPONAME.git?ref=v1
Anyone have any thoughts on how to make this work?
Thanks,
Keren
This is not possible currently in Terraform itself.
The only way to achieve something like this is to use a separate script to interact with the git repository that Terraform clones into a subdirectory of the .terraform/modules directory, switching it to a different tag depending on which version you need. This is non-ideal, since Terraform organizes these into directories based on a hash of the module path, but if you can identify the module in question, it is safe to run git checkout within these repositories as long as you do not run terraform get again afterwards.
For more details and discussion on this issue, see issue #1439 in Terraform's issue tracker, where this feature was requested.
You could use envsubst or Python Jinja, with wrapper scripts in your pipeline's deploy script, to build the Terraform files from .envsubst and .jinja templates before your terraform plan/apply:
https://github.com/uvoo/process-templates/tree/main/scripts
I wish Terraform would support this, but my guess is they never will, so just add some simple functions/files to your deploy scripts, which is usually the best way to deploy.
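As an illustration, an envsubst-style render step written in Python (the file names and the INFRA_VERSION variable are assumptions):
import os

# source.tf.envsubst would contain e.g.:
#   source = "git::https://github.com/ORG/REPONAME.git?ref=${INFRA_VERSION}"
with open("source.tf.envsubst") as f:
    rendered = os.path.expandvars(f.read())

with open("main.tf", "w") as f:
    f.write(rendered)  # then run terraform plan/apply as usual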

Erlang: How to access CLI flags (arguments) as application environment variables?

How does one access command-line flags (arguments) as environment variables in Erlang (as flags, not ARGV)? For example, the RabbitMQ CLI looks something like:
erl \
    ...
    -sasl errlog_type error \
    -sasl sasl_error_logger '{file,"'${RABBITMQ_SASL_LOGS}'"}' \
    ... # more stuff here
If one looks at sasl.erl you see the line:
get_sasl_error_logger() ->
    case application:get_env(sasl, sasl_error_logger) of
        % ... etc
By some unknown magic the sasl_error_logger variable becomes an Erlang tuple! I've tried replicating this in my own Erlang application, but I seem to be able to access these values only via init:get_argument, which returns the value as a string.
How does one pass in values via the command line and access them easily as Erlang terms?
UPDATE: Also, for anyone looking, to use environment variables the 'regular' way, use os:getenv("THE_VAR").
Make sure you set up an application configuration file:
{application, fred,
 [{description, "Your application"},
  {vsn, "1.0"},
  {modules, []},
  {registered, []},
  {applications, [kernel, stdlib]},
  {env, [
    {param, 'fred'}
  ]}
  ...
and then you can set your command line up like this:
-fred param 'billy'
I think you need to have the parameter in your application configuration to do this - I've never done it any other way...
Some more info (easier than putting it in a comment). Given this:
{emxconfig, {ets, [{keypos, 2}]}},
I can certainly do this:
{ok, {StorageType, Config}} = application:get_env(emxconfig),
but (and this may be important) my application is started at this time (it may actually only need to be loaded rather than started, from looking at the application_controller code).