Is there a way to convert Juniper "json" or "xml" config to "set" or "show" config?

We use Juniper hardware running Junos version 15. In this version we can export our config as "json" or "xml", which we want to edit with our automation tooling.
Importing, however, is only possible in "set" or "show" format.
Is there a tool to convert the "json" or "xml" format to "set" or "show" format?
I can only find converters between "show" and "set".
We can't upgrade to version 16, where importing "json" would be possible.

Here's a script I made at work; throw it in your bin and you can run it by providing a filename or piping config output into it. This assumes Linux or macOS so the os.isatty check works, but the logic should work anywhere:
usage demo:
person#laptop ~ > head router.cfg
## Last commit: 2021-04-20 21:21:39 UTC by vit
version 15.1X12.2;
groups {
    BACKBONE-PORT {
        interfaces {
            <*> {
                mtu 9216;
                unit <*> {
                    family inet {
                        mtu 9150;
person#laptop ~ > convert.py router.cfg | head
set groups BACKBONE-PORT interfaces <*> mtu 9216
set groups BACKBONE-PORT interfaces <*> unit <*> family inet mtu 9150
set groups BACKBONE-PORT interfaces <*> unit <*> family inet6 mtu 9150
set groups BACKBONE-PORT interfaces <*> unit <*> family mpls maximum-labels 5
<... output removed... >
convert.py:
#!/usr/bin/env python3
# Script that attempts to parse out Juniper JSON into set format
# I think it works? still testing
#
# TODO:
#   accumulate annotations and provide them as commands at the end. Will be
#   weird as annotations have to be done after an edit command
from argparse import ArgumentParser, RawTextHelpFormatter
import sys, os, re


class TokenStack():
    def __init__(self):
        self._tokens = []

    def push(self, token):
        self._tokens.append(token)

    def pop(self):
        if not self._tokens:
            return None
        item = self._tokens[-1]
        self._tokens = self._tokens[:-1]
        return item

    def peek(self):
        if not self._tokens:
            return None
        return self._tokens[-1]

    def __str__(self):
        return " ".join(self._tokens)

    def __repr__(self):
        return " ".join(self._tokens)


def main():
    # get file
    a = ArgumentParser(prog="convert_jpr_json",
                       description="This program takes in Juniper style JSON (blah { format) and prints it in a copy pastable display set format",
                       epilog=f"Either supply with a filename or pipe config contents into this program and it'll print out the display set view.\nEx:\n{B}convert_jpr_json <FILENAME>\ncat <FILENAME> | convert_jpr_json{WHITE}",
                       formatter_class=RawTextHelpFormatter)
    a.add_argument('file', help="juniper config in JSON format", nargs="?")
    args = a.parse_args()
    if not args.file and os.isatty(0):
        a.print_help()
        die("Please supply filename or provide piped input")
    file_contents = None
    if args.file:
        try:
            file_contents = open(args.file, "r").readlines()
        except IOError as e:
            die(f"Issue opening file {args.file}: {e}")
    else:
        file_contents = sys.stdin.readlines()

    tokens = TokenStack()
    in_comment = False
    new_config = []
    for line_num, line in enumerate(file_contents):
        if line.startswith("version ") or len(line) == 0:
            continue
        token = re.sub(r"^(.+?)#+[^\"]*$", r"\1", line.strip())
        token = token.strip()
        if any(token.startswith(_) for _ in ["!", "#"]):
            # annotations currently not supported
            continue
        if token.startswith("/*"):
            # we're in a comment now until the next token (this will break if a
            # multiline comment with # style { happens, but hopefully no-one is
            # that dumb)
            in_comment = True
            continue
        if "inactive: " in token:
            token = token.split("inactive: ")[1]
            new_config.append(f"deactivate {tokens} {token}")
        if token[-1] == "{":
            in_comment = False
            tokens.push(token.strip("{ "))
        elif token[-1] == "}":
            if not tokens.pop():
                die("Invalid json supplied: unmatched closing } encountered on line " + f"{line_num}")
        elif token[-1] == ";":
            new_config.append(f"set {tokens} {token[:-1]}")

    if tokens.peek():
        print(tokens)
        die("Unbalanced JSON: expected closing }, but encountered EOF")
    print("\n".join(new_config))


def die(msg): print(f"\n{B}{RED}FATAL ERROR{WHITE}: {msg}"); exit(1)


RED = "\033[31m"; GREEN = "\033[32m"; YELLOW = "\033[33m"; B = "\033[1m"; WHITE = "\033[0m"

if __name__ == "__main__": main()

You can load XML configuration using the edit-config RPC or the load-configuration RPC. For more details, see:
https://www.juniper.net/documentation/en_US/junos/topics/reference/tag-summary/netconf-edit-config.html
https://www.juniper.net/documentation/en_US/junos/topics/reference/tag-summary/junos-xml-protocol-load-configuration.html
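If you are driving this from Python, the same RPCs can be reached through Junos PyEZ (junos-eznc). The snippet below is only a minimal sketch, assuming NETCONF over SSH is enabled on the device; the host, credentials and router.xml file name are placeholders:
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

# Open a NETCONF session and load an XML configuration file into the
# candidate configuration (roughly the load-configuration RPC with
# format="xml"), then show the diff and commit.
with Device(host="192.0.2.1", user="automation", password="secret") as dev:
    cu = Config(dev)
    cu.load(path="router.xml", format="xml", merge=True)
    cu.pdiff()
    cu.commit()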

XML content can be loaded via an "op" script by placing the content inside a call to the jcs:load-configuration() template defined in "junos.xsl". Something like the following:
version 1.1;

ns jcs = "http://xml.juniper.net/junos/commit-scripts/1.0";

import "../import/junos.xsl";

var $arguments = {
    <argument> {
        <name> "file";
        <description> "Filename of XML content to load";
    }
    <argument> {
        <name> "action";
        <description> "Mode for the load (override, replace, merge)";
    }
}

param $file;
param $action = "replace";

match / {
    <op-script-results> {
        var $configuration = slax:document($file);
        var $connection = jcs:open();
        call jcs:load-configuration($connection, $configuration, $action);
    }
}
Thanks,
Phil

py.test capture unhandled exception

We are using py.test 2.8.7, and I have the method below which creates a separate log file for every test case. However, this does not handle unhandled exceptions: if a code snippet throws an exception instead of failing with an assert, the exception's stack trace is not logged into the separate file. How can I capture these exceptions?
import logging
import os

import pytest


def remove_special_chars(input):
    """
    Replaces all special characters which ideally should not be included in the name of a file
    Such characters will be replaced with a dot so we know there was something useful there
    """
    for special_ch in ["/", "\\", "<", ">", "|", "&", ":", "*", "?", "\"", "'"]:
        input = input.replace(special_ch, ".")
    return input


def assemble_test_fqn(node):
    """
    Assembles a fully-qualified name for our test-case which will be used as its test log file name
    """
    current_node = node
    result = ""
    while current_node is not None:
        if current_node.name == "()":
            current_node = current_node.parent
            continue
        if result != "":
            result = "." + result
        result = current_node.name + result
        current_node = current_node.parent
    return remove_special_chars(result)


# This fixture creates a logger per test-case
@pytest.yield_fixture(scope="function", autouse=True)
def set_log_file_per_method(request):
    """
    Creates a separate file logging handler for each test method
    """
    # Assembling the location of the log folder
    test_log_dir = "%s/all_test_logs" % (request.config.getoption("--output-dir"))
    # Creating the log folder if it does not exist
    if not os.path.exists(test_log_dir):
        os.makedirs(test_log_dir)
    # Adding a file handler
    test_log_file = "%s/%s.log" % (test_log_dir, assemble_test_fqn(request.node))
    file_handler = logging.FileHandler(filename=test_log_file, mode="w")
    file_handler.setLevel("INFO")
    log_format = request.config.getoption("--log-format")
    log_formatter = logging.Formatter(log_format)
    file_handler.setFormatter(log_formatter)
    logging.getLogger('').addHandler(file_handler)
    yield
    # After the test finished, we remove the file handler
    file_handler.close()
    logging.getLogger('').removeHandler(file_handler)
I ended up with a custom plugin:
import io
import os

import pytest


def remove_special_chars(text):
    """
    Replaces all special characters which ideally should not be included in the name of a file
    Such characters will be replaced with a dot so we know there was something useful there
    """
    for special_ch in ["/", "\\", "<", ">", "|", "&", ":", "*", "?", "\"", "'"]:
        text = text.replace(special_ch, ".")
    return text


def assemble_test_fqn(node):
    """
    Assembles a fully-qualified name for our test-case which will be used as its test log file name
    The result will also include the potential path of the log file as the parents are appended to the fqn with a /
    """
    current_node = node
    result = ""
    while current_node is not None:
        if current_node.name == "()":
            current_node = current_node.parent
            continue
        if result != "":
            result = "/" + result
        result = remove_special_chars(current_node.name) + result
        current_node = current_node.parent
    return result


def as_unicode(text):
    """
    Encodes a text into unicode
    If it's already unicode, we do not touch it
    """
    if isinstance(text, unicode):
        return text
    else:
        return unicode(str(text))


class TestReport:
    """
    Holds a test-report
    """

    def __init__(self, fqn):
        self._fqn = fqn
        self._errors = []
        self._sections = []

    def add_error(self, error):
        """
        Adds an error (either an Exception or an assertion error) to the list of errors
        """
        self._errors.append(error)

    def add_sections(self, sections):
        """
        Adds captured sections to our internal list of sections
        Since tests can have multiple phases (setup, call, teardown) this will be invoked for all phases
        If for a newer phase we already captured a section, we override it in our already existing internal list
        """
        interim = []
        for current_section in self._sections:
            section_to_add = current_section
            # If the current section we already have is also present in the input parameter,
            # we override our existing section with the one from the input as that's newer
            for index, input_section in enumerate(sections):
                if current_section[0] == input_section[0]:
                    section_to_add = input_section
                    sections.pop(index)
                    break
            interim.append(section_to_add)
        # Adding the new sections from the input parameter to our internal list
        for input_section in sections:
            interim.append(input_section)
        # And finally overriding our internal list of sections
        self._sections = interim

    def save_to_file(self, log_folder):
        """
        Saves the current report to a log file
        """
        # Adding a file handler
        test_log_file = "%s/%s.log" % (log_folder, self._fqn)
        # Creating the log folder if it does not exist
        if not os.path.exists(os.path.dirname(test_log_file)):
            os.makedirs(os.path.dirname(test_log_file))
        # Saving the report to the given log file
        with io.open(test_log_file, 'w', encoding='UTF-8') as f:
            for error in self._errors:
                f.write(as_unicode(error))
                f.write(u"\n\n")
            for index, section in enumerate(self._sections):
                f.write(as_unicode(section[0]))
                f.write(u":\n")
                f.write((u"=" * (len(section[0]) + 1)) + u"\n")
                f.write(as_unicode(section[1]))
                if index < len(self._sections) - 1:
                    f.write(u"\n")


class ReportGenerator:
    """
    A py.test plugin which collects the test-reports and saves them to a separate file per test
    """

    def __init__(self, output_dir):
        self._reports = {}
        self._output_dir = output_dir

    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(self, item, call):
        outcome = yield
        # Generating the fully-qualified name of the underlying test
        fqn = assemble_test_fqn(item)
        # Getting the already existing report for the given test from our internal dict or creating a new one if it's not already present
        # We need to do this as this method will be invoked for each phase (setup, call, teardown)
        if fqn not in self._reports:
            report = TestReport(fqn)
            self._reports.update({fqn: report})
        else:
            report = self._reports[fqn]
        result = outcome.result
        # Appending the sections for the current phase to the test-report
        report.add_sections(result.sections)
        # If we have an error, we add that as well to the test-report
        if hasattr(result, "longrepr") and result.longrepr is not None:
            error = result.longrepr
            error_text = ""
            if isinstance(error, str) or isinstance(error, unicode):
                error_text = as_unicode(error)
            elif isinstance(error, tuple):
                error_text = u"\n".join([as_unicode(e) for e in error])
            elif hasattr(error, "reprcrash") and hasattr(error, "reprtraceback"):
                if error.reprcrash is not None:
                    error_text += str(error.reprcrash)
                if error.reprtraceback is not None:
                    if error_text != "":
                        error_text += "\n\n"
                    error_text += str(error.reprtraceback)
            else:
                error_text = as_unicode(error)
            report.add_error(error_text)
        # Finally saving the report
        # We need to do this for all phases as we don't know if and when a test would fail
        # This will essentially override the previous log file for a test if we are in a newer phase
        report.save_to_file("%s/all_test_logs" % self._output_dir)


def pytest_configure(config):
    config._report_generator = ReportGenerator("result")
    config.pluginmanager.register(config._report_generator)
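As a quick way to see it working (illustrative only, assuming the plugin above is registered from a conftest.py as shown), a test that raises an unhandled exception rather than failing an assert should now get its traceback written to its own file under result/all_test_logs:
# test_example.py -- illustrative only
def test_unhandled_exception():
    # No assert here: the traceback is picked up from result.longrepr in
    # pytest_runtest_makereport and written out by report.save_to_file().
    raise RuntimeError("boom")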

Within a gimp python-fu plug-in can one create/invoke a modal dialog (and/or register a procedure that is ONLY to be added as a temp procedure?)

I am trying to add a procedure to pop-up a modal dialog inside a plug-in.
Its purpose is to query a response at designated steps within the control-flow of the plug-in (not just acquire parameters at its start).
I have tried using gtk: I get a dialog, but it is asynchronous and the plug-in continues execution. It needs to operate as a synchronous function.
I have tried registering a plugin in order to take advantage of the gimpfu start-up dialogue for the same purpose. By itself, it works; it shows up in the procedural db when queried. But I never seem to be able to actually invoke it from within another plug-in; it's either an execution error or the wrong number of arguments, no matter how many permutations I try.
[Reason behind all of this nonsense: I have written a lot of extension Python scripts for PaintShop Pro. I have written an App package (with App.Do, App.Constants, Environment and the like) that lets me begin to port those scripts to GIMP -- yes, it is perverse, and yes, sometimes the code just has to be rewritten, but for a lot of what I actually use in the PSP API it is sufficient.
However, debugging and writing the module rhymes with witch. So I am trying to add emulation of psp's "SetExecutionMode" (i.e. interactive). If
set, the intended behavior is that the App.Do() method will "pause" after/before it runs the applicable psp emulation code by popping up a simple message dialog.]
A simple modal dialogue within a gimp python-fu plug-in can be implemented via gtk's Dialog interface, specifically gtk.MessageDialog.
A generic dialog can be created via
queryDialogue = gtk.MessageDialog(None, gtk.DIALOG_DESTROY_WITH_PARENT, \
                                  gtk.MESSAGE_QUESTION, \
                                  gtk.BUTTONS_OK_CANCEL, "")
Once the dialog has been shown, a synchronous response may be obtained from it:
queryDialogue.show()
response = queryDialogue.run()
queryDialogue.hide()
The above assumes that the dialog is not created and thence destroyed after each use.
In the use case (mentioned in the question) of a modal dialog to manage single stepping through a pspScript in gimp via an App emulator package, the dialogue message contents need to be customized for each use. [Hence, the "" for the message argument in the Constructor. [more below]]
In addition, the emulator must be able to accept a [cancel] response to 'get out of Dodge' - ie quit the entire plug-in (gracefully). I could not find a gimpfu interface for the latter, (and do not want to kill the app entirely via gimp.exit()). Hence, this is accomplished by raising a custom Exception class [appTerminate] within the App pkg and catching the exception in the outer-most scope of the plugin. When caught, then, the plug-in returns (exits).[App.Do() can not return a value to indicate continue/exit/etc, because the pspScripts are to be included verbatim.]
The following is an abbreviated skeleton of the solution -
a plug-in incorporating (in part) a pspScript
the App.py pkg supplying the environment and App.Do() to support the pspScript
a Map.py pkg supporting how pspScripts use dot-notation for parameters
App.py demonstrates creation, customization and use of a modal dialog - App.doContinue() displays the dialogue illustrating how it can be customized on each use.
App._parse() parses the pspScript (excerpt showing how it determines to start/stop single-step via the dialogue)
App._exec() implements the pspScript commands (excerpt showing how it creates the dialogue, identifies the message widget for later customization, and starts/stops its use)
# App.py (abbreviated)
#
import sys

import gimp
import gtk
import Map  # see https://stackoverflow.com/questions/2352181/how-to-use-a-dot-to-access-members-of-dictionary
from Map import *

pdb = gimp.pdb

isDialogueAvailable = False
queryDialogue = None
queryMessage = None

Environment = Map({'executionMode' : 1 })
_AutoActionMode = Map({'Match' : 0})
_ExecutionMode = Map({'Default' : 0}, Silent=1, Interactive=2)
Constants = Map({'AutoActionMode' : _AutoActionMode}, ExecutionMode=_ExecutionMode )  # etc...

class appTerminate(Exception): pass

def Do(eNvironment, procedureName, options = {}):
    global appTerminate
    img = gimp.image_list()[0]
    lyr = pdb.gimp_image_get_active_layer(img)
    parsed = _parse(img, lyr, procedureName, options)
    if eNvironment.executionMode == Constants.ExecutionMode.Interactive:
        resp = doContinue(procedureName, parsed.detail)
        if resp == -5:  # OK
            print procedureName  # log to stdout
            if parsed.valid:
                if parsed.isvalid:
                    _exec(img, lyr, procedureName, options, parsed, eNvironment)
                else:
                    print "invalid args"
            else:
                print "invalid procedure"
        elif resp == -6:  # CANCEL
            raise appTerminate, "script cancelled"
            pass  # terminate plugin
        else:
            print procedureName + " skipped"
            pass  # skip execution, continue
    else:
        _exec(img, lyr, procedureName, options, parsed, eNvironment)
    return

def doContinue(procedureName, details):
    global queryMessage, querySkip, queryDialogue
    # - customize the dialog -
    if details == "":
        msg = "About to execute procedure \n "+procedureName+ "\n\nContinue?"
    else:
        msg = "About to execute procedure \n "+procedureName+ "\n\nDetails - \n" + details +"\n\nContinue?"
    queryMessage.set_text(msg)
    queryDialogue.show()
    resp = queryDialogue.run()  # get modal response
    queryDialogue.hide()
    return resp

def _parse(img, lyr, procedureName, options):
    # validate and interpret App.Do options' semantics vz gimp
    if procedureName == "Selection":
        isValid=True
        # ...
        # parsed = Map({'valid' : True}, isvalid=True, start=Start, width=Width, height=Height, channelOP=ChannelOP ...
        # /Selection
    # ...
    elif procedureName == "SetExecutionMode":
        generalOptions = options['GeneralSettings']
        newMode = generalOptions['ExecutionMode']
        if newMode == Constants.ExecutionMode.Interactive:
            msg = "set mode interactive/single-step"
        else:
            msg = "set mode silent/run"
        parsed = Map({'valid' : True}, isvalid=True, detail=msg, mode=newMode)
        # /SetExecutionMode
    else:
        parsed = Map({'valid' : False})
    return parsed

def _exec(img, lyr, procedureName, options, o, eNvironment):
    global isDialogueAvailable, queryMessage, queryDialogue
    #
    try:
        # -------------------------------------------------------------------------------------------------------------------
        if procedureName == "Selection":
            # pdb.gimp_rect_select(img, o.start[0], o.start[1], o.width, o.height, o.channelOP, ...
            # /Selection
            pass  # abbreviated
        # ...
        elif procedureName == "SetExecutionMode":
            generalOptions = options['GeneralSettings']
            eNvironment.executionMode = generalOptions['ExecutionMode']
            if eNvironment.executionMode == Constants.ExecutionMode.Interactive:
                if isDialogueAvailable:
                    queryDialogue.destroy()  # then clean-up and refresh
                isDialogueAvailable = True
                queryDialogue = gtk.MessageDialog(None, gtk.DIALOG_DESTROY_WITH_PARENT, gtk.MESSAGE_QUESTION, gtk.BUTTONS_OK_CANCEL, "")
                queryDialogue.set_title("psp/APP.Do Emulator")
                queryDialogue.set_size_request(450, 180)
                aqdContent = queryDialogue.children()[0]
                aqdHeader = aqdContent.children()[0]
                aqdMsgBox = aqdHeader.children()[1]
                aqdMessage = aqdMsgBox.children()[0]
                queryMessage = aqdMessage
            else:
                if isDialogueAvailable:
                    queryDialogue.destroy()
                isDialogueAvailable = False
            # /SetExecutionMode
        else:  # should not get here (should have been screened by parse)
            raise AssertionError, "unimplemented PSP procedure: " + procedureName
    except:
        raise AssertionError, "App.Do("+procedureName+") generated an exception:\n" + str(sys.exc_info())
    return
A skeleton of the plug-in itself. This illustrates incorporating a pspScript which includes a request for single-step/interactive execution mode, and thus the dialogues. It catches the terminate exception raised via the dialogue, and then terminates.
def generateWebImageSet(dasImage, dasLayer, title, mode):
    try:
        img = dasImage.duplicate()
        # ...
        bkg = img.layers[-1]
        frameWidth = 52
        start = bkg.offsets
        end = (start[0]+bkg.width, start[1]+frameWidth)
        # pspScript: (snippet included verbatim)
        # SetExecutionMode / begin interactive single-step through pspScript
        App.Do( Environment, 'SetExecutionMode', {
            'GeneralSettings': {
                'ExecutionMode': App.Constants.ExecutionMode.Interactive
                }
            })
        # Selection
        App.Do( Environment, 'Selection', {
            'General' : {
                'Mode' : 'Replace',
                'Antialias' : False,
                'Feather' : 0
                },
            'Start': start,
            'End': end
            })
        # Promote
        App.Do( Environment, 'SelectPromote' )
        # und_so_weiter ...
    except App.appTerminate:
        raise AssertionError, "script cancelled"
# /generateWebImageSet

# _generateFloatingCanvasSetWeb.register -----------------------------------------
#
def generateFloatingCanvasSetWeb(dasImage, dasLayer, title):
    mode="FCSW"
    generateWebImageSet(dasImage, dasLayer, title, mode)

register(
    "generateFloatingCanvasSetWeb",
    "Generate Floating- Frame GW Canvas Image Set for Web Page",
    "Generate Floating- Frame GW Canvas Image Set for Web Page",
    "C G",
    "C G",
    "2019",
    "<Image>/Image/Generate Web Imagesets/Floating-Frame Gallery-Wrapped Canvas Imageset...",
    "*",
    [
        ( PF_STRING, "title", "title", "")
    ],
    [],
    generateFloatingCanvasSetWeb)

main()
I realize that this may seem like a lot of work just to be able to include some pspScripts in a gimp plug-in, and to be able to single-step through the emulation. But we are talking about maybe 10K lines of scripts (and multiple scripts).
However, if any of this helps anyone else with dialogues inside plug-ins, etc., so much the better.

Using input function with remote files in snakemake

I want to use a function to read input file paths from a dataframe and send them to my snakemake rule. I also have a helper function to select the remote from which to pull the files.
from snakemake.remote.GS import RemoteProvider as GSRemoteProvider
from snakemake.remote.SFTP import RemoteProvider as SFTPRemoteProvider
from os.path import join
import pandas as pd

configfile: "config.yaml"

units = pd.read_csv(config["units"]).set_index(["library", "unit"], drop=False)
TMP = join('data', 'tmp')

def access_remote(local_path):
    """ Connects to remote as defined in config file"""
    provider = config['provider']
    if provider == 'GS':
        GS = GSRemoteProvider()
        remote_path = GS.remote(join("gs://" + config['bucket'], local_path))
    elif provider == 'SFTP':
        SFTP = SFTPRemoteProvider(
            username=config['user'],
            private_key=config['ssh_key']
        )
        remote_path = SFTP.remote(
            config['host'] + ":22" + join(base_path, local_path)
        )
    else:
        remote_path = local_path
    return remote_path

def get_fastqs(wc):
    """
    Get fastq files (units) of a particular library - sample
    combination from the unit sheet.
    """
    fqs = units.loc[
        (units.library == wc.library) &
        (units.libtype == wc.libtype),
        "fq1"
    ]
    return {
        "r1": list(map(access_remote, fqs.fq1.values)),
    }

# Combine all fastq files from the same sample / library type combination
rule combine_units:
    input: unpack(get_fastqs)
    output:
        r1 = join(TMP, "reads", "{library}_{libtype}.end1.fq.gz")
    threads: 12
    run:
        shell("cat {i1} > {o1}".format(i1=input['r1'], o1=output['r1']))
My config file contains the bucket name and provider, which are passed to the function. This works as expected when simply running snakemake.
However, I would like to use the kubernetes integration, which requires passing the provider and bucket name on the command line. But when I run:
snakemake -n --kubernetes --default-remote-provider GS --default-remote-prefix bucket-name
I get this error:
ERROR :: MissingInputException in line 19 of Snakefile:
Missing input files for rule combine_units:
bucket-name/['bucket-name/lib1-unit1.end1.fastq.gz', 'bucket-name/lib1-unit2.end1.fastq.gz', 'bucket-name/lib1-unit3.end1.fastq.gz']
The bucket prefix is applied twice: once mapped correctly onto each element, and once prepended to the whole list (which gets converted to a string). Did I miss something? Is there a good way to work around this?

Container keeps on crashing while creating a deployment from a docker image in minikube

I have a Docker image containing Python files which should download satellite imagery from the SciHub website. The Docker image is working fine. Now, when I want to create the deployment through kubectl so that I can expose it as a service, its container keeps crashing. That's what the pod description says when seen through kubectl describe pod.
This is how I am trying to deploy it: sudo kubectl run back --image=back:latest --port=8080 --image-pull-policy Never. I also tried changing the port, but it did not work. Here are the files within the Docker image.
Dockerfile
FROM python:3.7-stretch
COPY . /code
WORKDIR /code
RUN pip install -r requirements.txt
ENTRYPOINT ["python", "ingestion.py"]
ingestion.py
import os
import shutil
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
logger = logging.getLogger("ingestion")

import requests

import datahub

scihub_username = os.environ["scihub_username"]
scihub_password = os.environ["scihub_password"]
result_url = "http://" + os.environ["CDINRW_BASE_URL"] + "/jobs/" + os.environ["CDINRW_JOB_ID"] + "/results"

logger.info("Searching the Copernicus Open Access Hub")
scenes = datahub.search(username=scihub_username,
                        password=scihub_password,
                        producttype=os.getenv("producttype"),
                        platformname=os.getenv("platformname"),
                        days_back=os.getenv("days_back", 2),
                        footprint=os.getenv("footprint"),
                        max_cloud_cover_percentage=os.getenv("max_cloud_cover_percentage"),
                        start_date=os.getenv("start_date"),
                        end_date=os.getenv("end_date"))

logger.info("Found {} relevant scenes".format(len(scenes)))

job_results = []
for scene in scenes:
    # do not download a scene that has already been ingested
    if os.path.exists(os.path.join("/out_data", scene["title"]+".SAFE")):
        logger.info("The scene {} already exists in /out_data and will not be downloaded again.".format(scene["title"]))
        filename = scene["title"]+".SAFE"
    else:
        logger.info("Starting the download of scene {}".format(scene["title"]))
        filename = datahub.download(scene, "/tmp", scihub_username, scihub_password, unpack=True)
        logger.info("The download was successful.")
        shutil.move(filename, "/out_data")
    result_message = {"description": "test",
                      "type": "Raster",
                      "format": "SAFE",
                      "filename": os.path.basename(filename)}
    job_results.append(result_message)

res = requests.put(result_url, json=job_results, timeout=60)
res.raise_for_status()
datahub.py
import logging
import os
import urllib.parse
import zipfile

import requests

# constructing URLs for querying the data hub
_BASE_URL = "https://scihub.copernicus.eu/dhus/"
SITE = {}
SITE["SEARCH"] = _BASE_URL + "search?format=xml&sortedby=beginposition&order=desc&rows=100&start={offset}&q="
_PRODUCT_URL = _BASE_URL + "odata/v1/Products('{uuid}')/"
SITE["CHECKSUM"] = _PRODUCT_URL + "Checksum/Value/$value"
SITE["SAFEZIP"] = _PRODUCT_URL + "$value"

logger = logging.getLogger(__name__)


def _build_search_url(producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    search_terms = []
    if producttype:
        search_terms.append("producttype:{}".format(producttype))
    if platformname:
        search_terms.append("platformname:{}".format(platformname))
    if start_date and end_date:
        search_terms.append(
            "beginPosition:[{}+TO+{}]".format(start_date, end_date))
    elif days_back:
        search_terms.append(
            "beginPosition:[NOW-{}DAYS+TO+NOW]".format(days_back))
    if footprint:
        search_terms.append("footprint:%22Intersects({})%22".format(
            footprint.replace(" ", "+")))
    if max_cloud_cover_percentage:
        search_terms.append("cloudcoverpercentage:[0+TO+{}]".format(max_cloud_cover_percentage))
    url = SITE["SEARCH"] + "+AND+".join(search_terms)
    return url


def _unpack(zip_file, directory, remove_after=False):
    with zipfile.ZipFile(zip_file) as zf:
        # This assumes that the zipfile only contains the .SAFE directory at root level
        safe_path = zf.namelist()[0]
        zf.extractall(path=directory)
    if remove_after:
        os.remove(zip_file)
    return os.path.normpath(os.path.join(directory, safe_path))


def search(username, password, producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    """ Search the Copernicus SciHub

    Parameters
    ----------
    username : str
        user name for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    producttype : str, optional
        product type to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    platformname : str, optional
        platform name to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    days_back : int, optional
        number of days before today that will be searched. Default are the last 2 days. If start and end date are set the days_back parameter is ignored
    footprint : str, optional
        well-known-text representation of the footprint
    max_cloud_cover_percentage: str, optional
        percentage of cloud cover per scene. Can only be used in combination with Sentinel-2 imagery.
        (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    start_date: str, optional
        start point of the search extent, has to be used in combination with end_date
    end_date: str, optional
        end point of the search extent, has to be used in combination with start_date

    Returns
    -------
    list
        a list of scenes that match the search parameters
    """
    import xml.etree.cElementTree as ET

    scenes = []
    search_url = _build_search_url(producttype, platformname, days_back, footprint, max_cloud_cover_percentage, start_date, end_date)
    logger.info("Search URL: {}".format(search_url))
    offset = 0
    rowsBreak = 5000
    name_space = {"atom": "http://www.w3.org/2005/Atom",
                  "opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
    while offset < rowsBreak:  # Next pagination page:
        response = requests.get(search_url.format(offset=offset), auth=(username, password))
        root = ET.fromstring(response.content)
        if offset == 0:
            rowsBreak = int(
                root.find("opensearch:totalResults", name_space).text)
        for e in root.iterfind("atom:entry", name_space):
            uuid = e.find("atom:id", name_space).text
            title = e.find("atom:title", name_space).text
            begin_position = e.find(
                "atom:date[@name='beginposition']", name_space).text
            end_position = e.find(
                "atom:date[@name='endposition']", name_space).text
            footprint = e.find("atom:str[@name='footprint']", name_space).text
            scenes.append({
                "id": uuid,
                "title": title,
                "begin_position": begin_position,
                "end_position": end_position,
                "footprint": footprint})
        # Ultimate DHuS pagination page size limit (rows per page).
        offset += 100
    return scenes


def download(scene, directory, username, password, unpack=True):
    """ Download a Sentinel scene based on its uuid

    Parameters
    ----------
    scene : dict
        the scene to be downloaded
    path : str
        the path where the file will be downloaded to
    username : str
        username for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    unpack: boolean, optional
        flag that defines whether the downloaded product should be unpacked after download. defaults to true

    Raises
    ------
    ValueError
        if the size of the downloaded file does not match the Content-Length header
    ValueError
        if the checksum of the downloaded file does not match the checksum provided by the Copernicus SciHub

    Returns
    -------
    str
        path to the downloaded file
    """
    import hashlib

    md5hash = hashlib.md5()
    md5sum = requests.get(SITE["CHECKSUM"].format(
        uuid=scene["id"]), auth=(username, password)).text
    download_path = os.path.join(directory, scene["title"] + ".zip")
    # overwrite if path already exists
    if os.path.exists(download_path):
        os.remove(download_path)
    url = SITE["SAFEZIP"].format(uuid=scene["id"])
    rsp = requests.get(url, auth=(username, password), stream=True)
    cl = rsp.headers.get("Content-Length")
    size = int(cl) if cl else -1
    # Actually fetch now:
    with open(download_path, "wb") as f:  # Do not read as a whole into memory:
        written = 0
        for block in rsp.iter_content(8192):
            f.write(block)
            written += len(block)
            md5hash.update(block)
    written = os.path.getsize(download_path)
    if size > -1 and written != size:
        raise ValueError("{}: size mismatch, {} bytes written but expected {} bytes to write!".format(
            download_path, written, size))
    elif md5sum:
        calculated = md5hash.hexdigest()
        expected = md5sum.lower()
POD events
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m39s (x18636 over 2d19h) kubelet, minikube Back-off restarting failed container
The system which wants to use this service already has another main front-end service running (which just runs the application) on 8081, so maybe I need to expose this on the same port. How can I get the deployment running?

Examples of using SCons with knitr

Are there minimal, or even larger, working examples of using SCons and knitr to generate reports from .Rmd files?
Knitting a cleaning_session.Rmd file from the command line (bash shell) to derive an .html file may be done via:
Rscript -e "library(knitr); knit('cleaning_session.Rmd')".
In this example, Rscript and instructions are fed to a Makefile:
RMDFILE=test

html :
	Rscript -e "require(knitr); require(markdown); knit('$(RMDFILE).rmd', '$(RMDFILE).md'); markdownToHTML('$(RMDFILE).md', '$(RMDFILE).html', options=c('use_xhtml', 'base64_images')); browseURL(paste('file://', file.path(getwd(),'$(RMDFILE).html'), sep=''))"
In this answer https://stackoverflow.com/a/10945832/1172302, there is reportedly a solution using SCons. Yet, I did not test enough to make it work for me. Essentially, it would be awesome to have something like the example presented at https://tex.stackexchange.com/a/26573/8272.
[Updated] One working example is an SConstruct file:
import os

environment = Environment(ENV=os.environ)

# define a `knitr` builder
builder = Builder(action = '/usr/local/bin/knit $SOURCE -o $TARGET',
                  src_suffix='Rmd')

# add builders as "Knit", "RMD"
environment.Append( BUILDERS = {'Knit' : builder} )

# define an `rmarkdown::render()` builder
builder = Builder(action = '/usr/bin/Rscript -e "rmarkdown::render(input=\'$SOURCE\', output_file=\'$TARGET\')"',
                  src_suffix='Rmd')
environment.Append( BUILDERS = {'RMD' : builder} )

# define source (and target files -- currently useless, since not defined above!)

# main cleaning session code
environment.RMD(source='cleaning_session.Rmd', target='cleaning_session.html')

# documentation of the Cleaning Process
environment.Knit(source='Cleaning_Process.Rmd', target='Cleaning_Process.html')

# documentation of data
environment.Knit(source='Code_Book.Rmd', target='Code_Book.html')
The first builder calls the custom script called knit, which, in turn, takes care of the target file/extension, here being cleaning_session.html. Likely the suffix parameter is not needed at all in this very example.
The second builder added runs Rscript -e "rmarkdown::render(input='$SOURCE', output_file='$TARGET')".
The existence of $TARGETs (as in the example at Command wrapper) ensures SCons won't repeat work if a target file already exists.
The custom script (whose source I can't retrieve currently) is:
#!/usr/bin/env Rscript
local({
    p = commandArgs(TRUE)
    if (length(p) == 0L || any(c('-h', '--help') %in% p)) {
        message('usage: knit input [input2 input3] [-n] [-o output output2 output3]
-h, --help to print help messages
-n, --no-convert do not convert tex to pdf, markdown to html, etc
-o output filename(s) for knit()')
        q('no')
    }
    library(knitr)
    o = match('-o', p)
    if (is.na(o)) output = NA else {
        output = tail(p, length(p) - o)
        p = head(p, o - 1L)
    }
    nc = c('-n', '--no-convert')
    knit_fun = if (any(nc %in% p)) {
        p = setdiff(p, nc)
        knit
    } else {
        if (length(p) == 0L) stop('no input file provided')
        if (grepl('\\.(R|S)(nw|tex)$', p[1])) {
            function(x, ...) knit2pdf(x, ..., clean = TRUE)
        } else {
            if (grepl('\\.R(md|markdown)$', p[1])) knit2html else knit
        }
    }
    mapply(knit_fun, p, output = output, MoreArgs = list(envir = globalenv()))
})
The only thing now necessary is to run scons.