Convert CloudFormation template (YAML) to Troposphere code

I have a large CloudFormation template written in YAML, and I want to start using Troposphere instead. Is there an easy way to convert the CF template to Troposphere code?
I have noticed this script: https://github.com/cloudtools/troposphere/blob/master/troposphere/template_generator.py
It creates a Troposphere Python object, but I am not sure if it's possible to output it as Troposphere code.

You can do it by converting the CF YAML to JSON and running https://github.com/cloudtools/troposphere/blob/master/scripts/cfn2py, passing the JSON file in as an argument.

Adding to the good tip from @OllieB:
Install dependencies using pip or poetry:
https://python-poetry.org/docs/#installation
https://github.com/cloudtools/troposphere
https://github.com/awslabs/aws-cfn-template-flip
pip install 'troposphere[policy]'
pip install cfn-flip
poetry add -D 'troposphere[policy]'
poetry add -D cfn-flip
The command line conversion is something like:
cfn-flip -c -j template-vpc.yaml template-vpc.json
cfn2py template-vpc.json > template_vpc.py
WARNING: it appears that the cfn2py script may not be fully unit tested, because it can generate code that does not pass Troposphere validations. I recommend adding a simple round-trip test to the end of the generated Python script, e.g.:
if __name__ == "__main__":
    import json

    # `t` is the Template object built by the generated script
    template_py = json.loads(t.to_json())
    with open("template-vpc.json", "r") as json_fd:
        template_cfn = json.load(json_fd)
    assert template_py == template_cfn
See also https://github.com/cloudtools/troposphere/issues/1879 for an example of auto-generation of pydantic models from CFN json schemas.

from troposphere.template_generator import TemplateGenerator
import yaml

with open("aws/cloudformation/template.yaml") as f:
    source_content = yaml.load(f, Loader=yaml.BaseLoader)

template = TemplateGenerator(source_content)
This snippet gives you a Template object from the Troposphere library; you can then make modifications using the Troposphere API, as sketched below.
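For example, a minimal sketch (assuming the `template` object from the snippet above; `to_yaml` requires cfn-flip to be installed):
# List the resources the generator produced.
for logical_id, resource in template.resources.items():
    print(logical_id, resource.resource_type)

# Render the (possibly modified) template back to CloudFormation YAML.
print(template.to_yaml())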

Related

Add bash script as an entrypoint to Python package with Poetry

Is it possible to add a bash script as an entrypoint (console script) to a Python package via Poetry? It looks like it only accepts Python files (see code here).
I want entry.sh to be the entry script:
#!/usr/bin/env bash
set -e
echo "Running entrypoint"
via setup.py:
entry_points={
    "console_scripts": [
        "entry=entry.sh",
    ],
},
On the other hand, setuptools seems to support shell scripts (see code here).
Is it possible to include a shell script in a package and add it to the entry points when working with Poetry?
UPD: setuptools does not support that either (it generates the code below):
def importlib_load_entry_point(spec, group, name):
    dist_name, _, _ = spec.partition('==')
    matches = (
        entry_point
        for entry_point in distribution(dist_name).entry_points
        if entry_point.group == group and entry_point.name == name
    )
    return next(matches).load()

globals().setdefault('load_entry_point', importlib_load_entry_point)
Is this a design decision? It seems to me that packaging should provide such a feature to deliver complex applications as a single bundle.
So I ended up using this workaround: I keep my script in place, add it to the bundle via package_data, and call it from within Python code, which I expose as the entrypoint.
import subprocess

def _run(bash_script):
    return subprocess.call(bash_script, shell=True)

def entrypoint():
    return _run("./scripts/my_entrypoint.sh")

def another_entrypoint_if_needed():
    return _run("./scripts/some_other_script.sh")
and pyproject.toml
[tool.poetry.scripts]
entrypoint = 'bash_runner:entrypoint'
another = 'bash_runner:another_entrypoint_if_needed'
The same works for console_scripts in a setup.py file.
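Note that the relative ./scripts/ paths above only resolve when the console script is run from the project root. A sketch of a more robust variant (the scripts/ directory inside the package is an assumption, and the script must be executable) resolves the path relative to the installed module instead:
import subprocess
from pathlib import Path

# Directory of the installed module, not the caller's working directory.
SCRIPTS_DIR = Path(__file__).resolve().parent / "scripts"

def entrypoint():
    # Assumes the script's executable bit is preserved when packaged.
    return subprocess.call([str(SCRIPTS_DIR / "my_entrypoint.sh")])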

How can I run pytesseract / tesseract in Foundry Code Repositories?

I am trying to use the function image_to_string from the library pytesseract in a repository to perform OCR of PDFs. However, I am getting the following error:
From the checks I would assume the library was loaded correctly:
Does anyone have an idea how to troubleshoot here?
It seems like Foundry is not respecting / running the environment activation script
https://github.com/conda-forge/tesseract-feedstock/blob/main/recipe/activate.sh
that sets the TESSDATA_PREFIX environment variable automatically. However, we can infer the value manually and provide it to the pytesseract API calls.
Define the following helper function:
def _get_tessdata_directory_path():
    import sys
    from pathlib import Path

    env_root = Path(sys.executable).parent.parent
    share_dir = env_root / 'share' / 'tessdata'
    assert share_dir.exists(), 'tessdata directory does not exist in <envroot>/share/tessdata'
    return str(share_dir)
and use it as shown in the following snippet:
tessdata_dir_config = f'--tessdata-dir "{_get_tessdata_directory_path()}"'
pytesseract.image_to_string(image, ..., config=tessdata_dir_config)
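A minimal end-to-end sketch (the image file name and language are assumptions; _get_tessdata_directory_path is the helper defined above):
from PIL import Image
import pytesseract

tessdata_dir_config = f'--tessdata-dir "{_get_tessdata_directory_path()}"'

image = Image.open("page.png")  # e.g. a page rendered from the PDF
text = pytesseract.image_to_string(image, lang="eng", config=tessdata_dir_config)
print(text)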

How to use plugin commands in bcftools?

My goal is to use bcftools to check that the reference alleles in my dataset (vcf file) match with a reference genome (fasta file) using the fixref plugin.
Working on command line, I first set the following environment:
export BCFTOOLS_PLUGINS=/path/to/bcftools/plugins
The following code is recommended for test datasets with mismatches:
bcftools +fixref test.bcf -Ob -o output.bcf -- -f ref.fa -m top
When I run this code using my own files (please note that my data is .vcf, not .bcf) I get the following error:
[main] Unrecognized command
If I simply enter:
bcftools
I get a list of only 5 commands (view, index, cat, ld, ldpair) that I can use. So although I've set the environment variable, does it somehow need to be activated? Do I need to run my command through a bash script?
It turned out that bcftools was pointing to a deprecated version of bcftools (0.1.19) in ../bin/, while
BCFTOOLS_PLUGINS=/path/to/bcftools/plugins
was pointing to the plugins for bcftools version 1.10.2 outside /bin/.
Replacing ../bin/bcftools (0.1.19) with version 1.10.2 was the fix.
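If you hit the same symptom, a quick way to confirm which binary is actually being executed (paths will differ on your system) is:
which bcftools
bcftools --version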

Possible to change the package name when generating client code

I am generating the client Scala code for an API using the Swagger Editor. I pasted the JSON, then did Generate Client/Scala. It gives me a default root package of
io.swagger.client
I can't see any obvious way of specifying something different. Can this be done?
Step (1): Create a file config.json and add the following lines to define the package names:
{
    "modelPackage": "com.xyz.model",
    "apiPackage": "com.xyz.api"
}
Step (2): Now pass the above file to the codegen command with the -c option:
$ java -jar swagger-codegen-cli.jar generate -i path/swagger.json -l java -o Code -c path/config.json
Now it will generate your Java packages like com.xyz… instead of the default io.swagger.client…
Run the following command to get information about the supported configuration options:
java -jar swagger-codegen-cli.jar config-help -l scala
This will list the options supported by this generator (Scala in this example):
CONFIG OPTIONS
    sortParamsByRequiredFlag
        Sort method arguments to place required parameters before optional parameters. (Default: true)
    ensureUniqueParams
        Whether to ensure parameter names are unique in an operation (rename parameters that are not). (Default: true)
    modelPackage
        package for generated models
    apiPackage
        package for generated api classes
Next, define a config.json file with the above parameters:
{
    "modelPackage": "your package name",
    "apiPackage": "your package name"
}
And supply config.json as input to swagger-codegen using the -c flag.
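For the Scala client from the original question, the full invocation would then look something like this (paths are placeholders):
java -jar swagger-codegen-cli.jar generate -i path/swagger.json -l scala -o Code -c path/config.json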

Why use custom environment variables when hosting a website?

I am new to the hosting world (cloudcontrol), and I have some problems with application credentials, like database administration (MongoHQ) or Google authentication.
So, should I put those variables in the code with some kind of syntax (something like $variable), and then supply the variable-value pairs on the command line?
If you are using Tornado, it makes this even simpler: use tornado.options and pass the environment variables when running the code.
Use the following in your Tornado code:
from tornado.options import define, options

define("mysql_host", default="127.0.0.1:3306", help="Main user DB")
define("google_oauth_key", help="Client key for Google Oauth")
Then you can access these values in the rest of your code as:
options.mysql_host
options.google_oauth_key
When you are running your Tornado script, pass the environment variables:
python main.py --mysql_host=$MYSQL_HOST --google_oauth_key=$OAUTH_KEY
assuming both $MYSQL_HOST and $OAUTH_KEY are environment variables. Let me know if you need a full working example or any further help.
Example:
First, set an environment variable:
$ export mongo_uri_env=mongodb://alien:12345@kahana.mongohq.com:10067/essog
and make these changes in your Tornado code:
define("mongo_uri", default="127.0.0.1:28017", help="MongoDB URI")
...
...
uri = options.mongo_uri
and you would run your code as
python main.py --mongo_uri=$mongo_uri_env
If you don't want to pass it when running the script, you have to read the environment variable directly in your code. For that:
import os
...
...
uri = os.environ['mongo_uri_env']
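Putting the pieces together, here is a minimal runnable sketch (the option and variable names follow the example above; the environment-variable fallback is my addition) that lets an explicit command-line flag override the environment:
import os

from tornado.options import define, options, parse_command_line

# Default comes from the environment when set, else a local fallback.
define("mongo_uri",
       default=os.environ.get("mongo_uri_env", "127.0.0.1:28017"),
       help="MongoDB URI")

if __name__ == "__main__":
    parse_command_line()  # an explicit --mongo_uri=... flag still wins
    print("Connecting to", options.mongo_uri)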