How to use the Kubernetes Python SDK to redeploy a deployment

Version info:
python 3.7
kubernetes==8.0.0
Docs: https://github.com/kubernetes-client/python/tree/release-8.0/kubernetes
I only found the update APIs, not a redeploy API.
Thanks

If you want to partially update an existing deployment, use the PATCH method. An example below:
# create an instance of the API class
api_instance = kubernetes.client.AppsV1Api(kubernetes.client.ApiClient(configuration))
name = 'name_example' # str | name of the Deployment
namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects
body = {} # object | the patch body, e.g. a dict with only the fields to change
pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional)

try:
    api_response = api_instance.patch_namespaced_deployment(name, namespace, body, pretty=pretty, dry_run=dry_run)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling AppsV1Api->patch_namespaced_deployment: %s\n" % e)
If you want to replace the existing deployment with a new one, use the PUT method. An example below:
# create an instance of the API class
api_instance = kubernetes.client.AppsV1Api(kubernetes.client.ApiClient(configuration))
name = 'name_example' # str | name of the Deployment
namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects
body = kubernetes.client.V1Deployment() # V1Deployment | the complete new object
pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional)

try:
    api_response = api_instance.replace_namespaced_deployment(name, namespace, body, pretty=pretty, dry_run=dry_run)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling AppsV1Api->replace_namespaced_deployment: %s\n" % e)

Authorization for API Gateway

I used this tutorial and created a "put" endpoint successfully:
https://sanderknape.com/2017/10/creating-a-serverless-api-using-aws-api-gateway-and-dynamodb/
When I follow this advice, I get an authorization required error:

Using your favorite REST client, try to PUT an item into DynamoDB using your API Gateway URL.

Python is my favorite client:
import requests

api_url = "https://0pg2858koj.execute-api.us-east-1.amazonaws.com/tds"
PARAMS = {"name": "test", "favorite_movie": "asdsf"}
r = requests.put(url=api_url, params=PARAMS)
The response is 403.
My test from the console is successful, but I am not able to put a record from Python.
The first step you can take to resolve the problem is to investigate the information returned by AWS in the 403 response. It will include an x-amzn-ErrorType header and an error message with information about the concrete error. You can test it with curl in verbose mode (-v) or with your Python code. Please review the relevant documentation to obtain a detailed enumeration of all the possible error reasons.
In any case, looking at your code, it is very likely that you did not provide the necessary authentication or authorization information to AWS.
The kind of information that you must provide depends on which mechanism you configured to access your REST API in API Gateway.
If, for instance, you configured IAM based authentication, you need to set up your Python code to generate an Authorization header with an AWS Signature derived from your user access key ID and associated secret key. The AWS documentation provides an example of use with Postman.
The AWS documentation also provides several examples of how to use python and requests to perform this kind of authorization.
Consider, for instance, this example for posting information to DynamoDB:
# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of the
# License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
# OF ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
# AWS Version 4 signing example
# DynamoDB API (CreateTable)
# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a POST request and passes request parameters
# in the body (payload) of the request. Auth information is passed in
# an Authorization header.
import sys, os, base64, datetime, hashlib, hmac
import requests # pip install requests
# ************* REQUEST VALUES *************
method = 'POST'
service = 'dynamodb'
host = 'dynamodb.us-west-2.amazonaws.com'
region = 'us-west-2'
endpoint = 'https://dynamodb.us-west-2.amazonaws.com/'
# POST requests use a content type header. For DynamoDB,
# the content is JSON.
content_type = 'application/x-amz-json-1.0'
# DynamoDB requires an x-amz-target header that has this format:
# DynamoDB_<API version>.<operationName>
amz_target = 'DynamoDB_20120810.CreateTable'
# Request parameters for CreateTable--passed in a JSON block.
request_parameters = '{'
request_parameters += '"KeySchema": [{"KeyType": "HASH","AttributeName": "Id"}],'
request_parameters += '"TableName": "TestTable","AttributeDefinitions": [{"AttributeName": "Id","AttributeType": "S"}],'
request_parameters += '"ProvisionedThroughput": {"WriteCapacityUnits": 5,"ReadCapacityUnits": 5}'
request_parameters += '}'
# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def getSignatureKey(key, date_stamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), date_stamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning
# Read AWS access key from env. variables or configuration file. Best practice is NOT
# to embed credentials in code.
access_key = os.environ.get('AWS_ACCESS_KEY_ID')
secret_key = os.environ.get('AWS_SECRET_ACCESS_KEY')
if access_key is None or secret_key is None:
    print('No access key is available.')
    sys.exit()
# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amz_date = t.strftime('%Y%m%dT%H%M%SZ')
date_stamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope
# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
# Step 1 is to define the verb (GET, POST, etc.)--already done.
# Step 2: Create canonical URI--the part of the URI from domain to query
# string (use '/' if no path)
canonical_uri = '/'
## Step 3: Create the canonical query string. In this example, request
# parameters are passed in the body of the request and the query string
# is blank.
canonical_querystring = ''
# Step 4: Create the canonical headers. Header names must be trimmed
# and lowercase, and sorted in code point order from low to high.
# Note that there is a trailing \n.
canonical_headers = 'content-type:' + content_type + '\n' + 'host:' + host + '\n' + 'x-amz-date:' + amz_date + '\n' + 'x-amz-target:' + amz_target + '\n'
# Step 5: Create the list of signed headers. This lists the headers
# in the canonical_headers list, delimited with ";" and in alpha order.
# Note: The request can include any headers; canonical_headers and
# signed_headers include those that you want to be included in the
# hash of the request. "Host" and "x-amz-date" are always required.
# For DynamoDB, content-type and x-amz-target are also required.
signed_headers = 'content-type;host;x-amz-date;x-amz-target'
# Step 6: Create payload hash. In this example, the payload (body of
# the request) contains the request parameters.
payload_hash = hashlib.sha256(request_parameters.encode('utf-8')).hexdigest()
# Step 7: Combine elements to create canonical request
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash
# ************* TASK 2: CREATE THE STRING TO SIGN*************
# Match the algorithm to the hashing algorithm you use, either SHA-1 or
# SHA-256 (recommended)
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = date_stamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amz_date + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()
# ************* TASK 3: CALCULATE THE SIGNATURE *************
# Create the signing key using the function defined above.
signing_key = getSignatureKey(secret_key, date_stamp, region, service)
# Sign the string_to_sign using the signing_key
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()
# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
# Put the signature information in a header named Authorization.
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature
# For DynamoDB, the request can include any headers, but MUST include "host", "x-amz-date",
# "x-amz-target", "content-type", and "Authorization". Except for the authorization
# header, the headers must be included in the canonical_headers and signed_headers values, as
# noted earlier. Order here is not significant.
# # Python note: The 'host' header is added automatically by the Python 'requests' library.
headers = {'Content-Type': content_type,
           'X-Amz-Date': amz_date,
           'X-Amz-Target': amz_target,
           'Authorization': authorization_header}
# ************* SEND THE REQUEST *************
print('\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++')
print('Request URL = ' + endpoint)
r = requests.post(endpoint, data=request_parameters, headers=headers)
print('\nRESPONSE++++++++++++++++++++++++++++++++++++')
print('Response code: %d\n' % r.status_code)
print(r.text)
I think it could be easily adapted to your needs.
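If pulling in a small dependency is acceptable, a third-party helper such as requests-aws4auth can build the Signature Version 4 Authorization header for you. A minimal sketch, assuming the API Gateway method uses IAM (AWS_IAM) authorization and that the package is installed (pip install requests-aws4auth); the URL and payload are the ones from the question:
import os
import requests
from requests_aws4auth import AWS4Auth

# credentials come from the environment rather than being embedded in code
auth = AWS4Auth(
    os.environ["AWS_ACCESS_KEY_ID"],
    os.environ["AWS_SECRET_ACCESS_KEY"],
    "us-east-1",     # region of the API Gateway endpoint
    "execute-api",   # service name used when signing API Gateway requests
)

api_url = "https://0pg2858koj.execute-api.us-east-1.amazonaws.com/tds"
PARAMS = {"name": "test", "favorite_movie": "asdsf"}
r = requests.put(url=api_url, params=PARAMS, auth=auth)
print(r.status_code, r.text)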
In the console, everything works fine because when you invoke your REST endpoints in API Gateway, you are connected as a user who is already authenticated and authorized to access those REST endpoints.

py.test capture unhandled exception

We are using py.test 2.8.7, and I have the method below which creates a separate log file for every test case. However, this does not handle unhandled exceptions: if a code snippet throws an exception instead of failing with an assert, the stack trace of the exception is not logged into the separate file. Can someone please help me with how I could capture these exceptions?
def remove_special_chars(input):
    """
    Replaces all special characters which ideally should not be included in the name of a file
    Such characters will be replaced with a dot so we know there was something useful there
    """
    for special_ch in ["/", "\\", "<", ">", "|", "&", ":", "*", "?", "\"", "'"]:
        input = input.replace(special_ch, ".")
    return input

def assemble_test_fqn(node):
    """
    Assembles a fully-qualified name for our test-case which will be used as its test log file name
    """
    current_node = node
    result = ""
    while current_node is not None:
        if current_node.name == "()":
            current_node = current_node.parent
            continue
        if result != "":
            result = "." + result
        result = current_node.name + result
        current_node = current_node.parent
    return remove_special_chars(result)

# This fixture creates a logger per test-case
@pytest.yield_fixture(scope="function", autouse=True)
def set_log_file_per_method(request):
    """
    Creates a separate file logging handler for each test method
    """
    # Assembling the location of the log folder
    test_log_dir = "%s/all_test_logs" % (request.config.getoption("--output-dir"))
    # Creating the log folder if it does not exist
    if not os.path.exists(test_log_dir):
        os.makedirs(test_log_dir)
    # Adding a file handler
    test_log_file = "%s/%s.log" % (test_log_dir, assemble_test_fqn(request.node))
    file_handler = logging.FileHandler(filename=test_log_file, mode="w")
    file_handler.setLevel("INFO")
    log_format = request.config.getoption("--log-format")
    log_formatter = logging.Formatter(log_format)
    file_handler.setFormatter(log_formatter)
    logging.getLogger('').addHandler(file_handler)
    yield
    # After the test finished, we remove the file handler
    file_handler.close()
    logging.getLogger('').removeHandler(file_handler)
I ended up with a custom plugin:
import io
import os
import pytest

def remove_special_chars(text):
    """
    Replaces all special characters which ideally should not be included in the name of a file
    Such characters will be replaced with a dot so we know there was something useful there
    """
    for special_ch in ["/", "\\", "<", ">", "|", "&", ":", "*", "?", "\"", "'"]:
        text = text.replace(special_ch, ".")
    return text

def assemble_test_fqn(node):
    """
    Assembles a fully-qualified name for our test-case which will be used as its test log file name
    The result will also include the potential path of the log file as the parents are appended to the fqn with a /
    """
    current_node = node
    result = ""
    while current_node is not None:
        if current_node.name == "()":
            current_node = current_node.parent
            continue
        if result != "":
            result = "/" + result
        result = remove_special_chars(current_node.name) + result
        current_node = current_node.parent
    return result

def as_unicode(text):
    """
    Encodes a text into unicode
    If it's already unicode, we do not touch it
    """
    if isinstance(text, unicode):
        return text
    else:
        return unicode(str(text))

class TestReport:
    """
    Holds a test-report
    """
    def __init__(self, fqn):
        self._fqn = fqn
        self._errors = []
        self._sections = []

    def add_error(self, error):
        """
        Adds an error (either an Exception or an assertion error) to the list of errors
        """
        self._errors.append(error)

    def add_sections(self, sections):
        """
        Adds captured sections to our internal list of sections
        Since tests can have multiple phases (setup, call, teardown) this will be invoked for all phases
        If for a newer phase we already captured a section, we override it in our already existing internal list
        """
        interim = []
        for current_section in self._sections:
            section_to_add = current_section
            # If the current section we already have is also present in the input parameter,
            # we override our existing section with the one from the input as that's newer
            for index, input_section in enumerate(sections):
                if current_section[0] == input_section[0]:
                    section_to_add = input_section
                    sections.pop(index)
                    break
            interim.append(section_to_add)
        # Adding the new sections from the input parameter to our internal list
        for input_section in sections:
            interim.append(input_section)
        # And finally overriding our internal list of sections
        self._sections = interim

    def save_to_file(self, log_folder):
        """
        Saves the current report to a log file
        """
        # Adding a file handler
        test_log_file = "%s/%s.log" % (log_folder, self._fqn)
        # Creating the log folder if it does not exist
        if not os.path.exists(os.path.dirname(test_log_file)):
            os.makedirs(os.path.dirname(test_log_file))
        # Saving the report to the given log file
        with io.open(test_log_file, 'w', encoding='UTF-8') as f:
            for error in self._errors:
                f.write(as_unicode(error))
                f.write(u"\n\n")
            for index, section in enumerate(self._sections):
                f.write(as_unicode(section[0]))
                f.write(u":\n")
                f.write((u"=" * (len(section[0]) + 1)) + u"\n")
                f.write(as_unicode(section[1]))
                if index < len(self._sections) - 1:
                    f.write(u"\n")

class ReportGenerator:
    """
    A py.test plugin which collects the test-reports and saves them to a separate file per test
    """
    def __init__(self, output_dir):
        self._reports = {}
        self._output_dir = output_dir

    @pytest.hookimpl(tryfirst=True, hookwrapper=True)
    def pytest_runtest_makereport(self, item, call):
        outcome = yield
        # Generating the fully-qualified name of the underlying test
        fqn = assemble_test_fqn(item)
        # Getting the already existing report for the given test from our internal dict or creating a new one if it's not already present
        # We need to do this as this method will be invoked for each phase (setup, call, teardown)
        if fqn not in self._reports:
            report = TestReport(fqn)
            self._reports.update({fqn: report})
        else:
            report = self._reports[fqn]
        result = outcome.result
        # Appending the sections for the current phase to the test-report
        report.add_sections(result.sections)
        # If we have an error, we add that as well to the test-report
        if hasattr(result, "longrepr") and result.longrepr is not None:
            error = result.longrepr
            error_text = ""
            if isinstance(error, str) or isinstance(error, unicode):
                error_text = as_unicode(error)
            elif isinstance(error, tuple):
                error_text = u"\n".join([as_unicode(e) for e in error])
            elif hasattr(error, "reprcrash") and hasattr(error, "reprtraceback"):
                if error.reprcrash is not None:
                    error_text += str(error.reprcrash)
                if error.reprtraceback is not None:
                    if error_text != "":
                        error_text += "\n\n"
                    error_text += str(error.reprtraceback)
            else:
                error_text = as_unicode(error)
            report.add_error(error_text)
        # Finally saving the report
        # We need to do this for all phases as we don't know if and when a test would fail
        # This will essentially override the previous log file for a test if we are in a newer phase
        report.save_to_file("%s/all_test_logs" % self._output_dir)

def pytest_configure(config):
    config._report_generator = ReportGenerator("result")
    config.pluginmanager.register(config._report_generator)
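For the hooks to run, pytest has to load this module as a plugin. One way to wire it up is a minimal conftest.py; this sketch assumes the code above is saved as report_plugin.py next to your tests (file and directory names are only examples). Alternatively, pasting the module straight into conftest.py also works, since pytest_configure is already defined at module level.
# conftest.py (hypothetical layout: report_plugin.py sits next to this file)
from report_plugin import ReportGenerator

def pytest_configure(config):
    # "result" is the output directory; per-test logs end up in result/all_test_logs/
    config._report_generator = ReportGenerator("result")
    config.pluginmanager.register(config._report_generator)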

Yocto variable not defined but set with _ operator

I'm struggling with something I'm not sure how to address correctly.
In a Yocto environment (for STM32MP1, by the way) I have to configure a new target.
Hence I added to meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc this section, which looks like the others already available:
EXTLINUX_BOOTDEVICE_EMMC = "mmc1"
EXTLINUX_BOOTDEVICE_SDCARD = "mmc0"
EXTLINUX_ROOT_EMMC = "${#bb.utils.contains('ST_VENDORFS','1','root=/dev/mmcblk1p4','root=/dev/mmcblk1p3',d)}"
EXTLINUX_ROOT_NAND = "ubi.mtd=UBI rootfstype=ubifs root=ubi0:rootfs"
# Define available targets to use
UBOOT_EXTLINUX_CONFIGURED_TARGETS += "mp151a_sdcard"
UBOOT_EXTLINUX_CONFIGURED_TARGETS += "mp151a_emmc"
# Define bootprefix for each target
UBOOT_EXTLINUX_BOOTPREFIXES_mp151a_sdcard = "${EXTLINUX_BOOTDEVICE_SDCARD}_stm32mp151a_"
UBOOT_EXTLINUX_BOOTPREFIXES_mp151a_emcc = "${EXTLINUX_BOOTDEVICE_EMCC}_stm32mp151a_"
# Define labels for each target
UBOOT_EXTLINUX_LABELS_mp151a_sdcard = "stm32mp151a-sdcard"
UBOOT_EXTLINUX_LABELS_mp151a_emcc = "stm32mp151a-emcc"
# Define default boot config for each target
UBOOT_EXTLINUX_DEFAULT_LABEL_mp151a_sdcard ?= "stm32mp151a-sdcard"
UBOOT_EXTLINUX_DEFAULT_LABEL_mp151a_emcc ?= "stm32mp151a-emcc"
# Define FDT overrides for all labels
UBOOT_EXTLINUX_FDT_stm32mp151a-sdcard = "/stm32mp151a.dtb"
UBOOT_EXTLINUX_FDT_stm32mp151a-emcc = "/stm32mp151a.dtb"
# Define ROOT overrides for all labels
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard = "${EXTLINUX_ROOT_SDCARD}"
UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc = "${EXTLINUX_ROOT_EMCC}"
But when I bitbake <image> (that includes the file above) I get this output:
DEBUG: Executing python function update_extlinuxconf_targets
NOTE: UBOOT_EXTLINUX_CONFIGURED_TARGETS: mp157a-dk1_sdcard mp157a-dk1_sdcard-optee mp157c-dk2_sdcard mp157c-dk2_sdcard-optee mp157c-ed1_emmc mp157c-ed1_emmc-optee mp157c-ed1_sdcard mp157c-ed1_sdcard-optee mp157c-ev1_emmc mp157c-ev1_emmc-optee mp157c-ev1_nand mp157c-ev1_nor-sdcard mp157c-ev1_nor-emmc mp157c-ev1_sdcard mp157c-ev1_sdcard-optee mp151a_sdcard mp151a_emmc
NOTE: UBOOT_EXTLINUX_CONFIG_FLAGS: emmc sdcard
NOTE: *** Loop for config_label: emmc
NOTE: *** Loop for devicetree: stm32mp151a
NOTE: >>> New target label: mp151a_emmc
NOTE: >>> Append mp151a_emmc to UBOOT_EXTLINUX_TARGETS
NOTE: *** Loop for config_label: sdcard
NOTE: *** Loop for devicetree: stm32mp151a
NOTE: >>> New target label: mp151a_sdcard
NOTE: >>> Append mp151a_sdcard to UBOOT_EXTLINUX_TARGETS
NOTE: >>> UBOOT_EXTLINUX_TARGETS (updated): mp151a_emmc mp151a_sdcard
DEBUG: Python function update_extlinuxconf_targets finished
DEBUG: Executing python function do_create_multiextlinux_config
ERROR: UBOOT_EXTLINUX_ROOT not defined
DEBUG: Python function do_create_multiextlinux_config finished
ERROR: Function failed: do_create_multiextlinux_config
As you can see, the file is actually processed, because it added the targets I defined.
But it doesn't find UBOOT_EXTLINUX_ROOT even though it's "set" with the _ operator:
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard = "${EXTLINUX_ROOT_SDCARD}"
UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc = "${EXTLINUX_ROOT_EMCC}"
I also tried to set the main variable to something like:
UBOOT_EXTLINUX_ROOT = ""
or
UBOOT_EXTLINUX_ROOT = "root=/dev/mmcblk1p4"
to see if that was the problem, but it doesn't change anything.
Is this something related to Yocto itself (I mean, something wrong in my syntax), or is it very specific to the SDK (meta-st)?
The error above should be raised by this code:
root = localdata.getVar('UBOOT_EXTLINUX_ROOT')
if not root:
    bb.fatal('UBOOT_EXTLINUX_ROOT not defined')
UPDATE
I checked the (huge) output of bitbake -e and among other targets I see:
# $UBOOT_EXTLINUX_ROOT [41 operations]
[...]
# "${EXTLINUX_ROOT_NOREMMC}"
# override[stm32mp157c-ev1-m4-examples-sdcard]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:274
# "${EXTLINUX_ROOT_SDCARD}"
# override[stm32mp157c-ev1-m4-examples-sdcard-optee]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:275
# "${EXTLINUX_ROOT_SDCARD_OPTEE}"
# override[stm32mp151a-sdcard]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:296
# "${EXTLINUX_ROOT_SDCARD}"
# override[stm32mp151a-emcc]:set /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/meta-st/meta-st-stm32mp/conf/machine/include/st-machine-extlinux-config-stm32mp.inc:297
[...]
# pre-expansion value:
# ""
UBOOT_EXTLINUX_ROOT=""
# $UBOOT_EXTLINUX_ROOT_cubemx-nor-sdcard
UBOOT_EXTLINUX_ROOT_cubemx-nor-sdcard="root=/dev/mmcblk0p3"
# $UBOOT_EXTLINUX_ROOT_cubemx-sdcard
UBOOT_EXTLINUX_ROOT_cubemx-sdcard="root=/dev/mmcblk0p6"
# $UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc
UBOOT_EXTLINUX_ROOT_stm32mp151a-emcc="\${EXTLINUX_ROOT_EMCC}"
# $UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard="root=/dev/mmcblk0p6"
So far, if I understand correctly, the override values are correctly assigned (except ${EXTLINUX_ROOT_EMCC}; I don't understand where the \ comes from), but the main variable is still empty.
Adding UBOOT_EXTLINUX_ROOT = "root=/dev/mmcblk1p4" at the beginning of the above file seems to do the trick (even if earlier I wrote the opposite; perhaps I forgot to clear the cache?), but I don't think it's the right way to do it.
You should specify the wanted machine name when building the image, i.e.:
MACHINE=stm32mp151a-sdcard bitbake <image>
This way, UBOOT_EXTLINUX_ROOT gets the non-empty value "root=/dev/mmcblk0p6" (from the UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard variant of the variable).
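For context on why this helps: an assignment with the old-style underscore override suffix (VAR_something) only takes effect when that suffix is active in BitBake's OVERRIDES, and the machine name is automatically part of OVERRIDES. A minimal illustration of the mechanism (variable values here are only examples, not part of the ST layer):
# illustrative .conf snippet
UBOOT_EXTLINUX_ROOT = ""
UBOOT_EXTLINUX_ROOT_stm32mp151a-sdcard = "root=/dev/mmcblk0p6"

# With MACHINE = "stm32mp151a-sdcard", the string "stm32mp151a-sdcard" appears in
# OVERRIDES, so the suffixed assignment replaces the empty default value.
# With any other MACHINE (and no other matching override), UBOOT_EXTLINUX_ROOT stays "".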

Container keeps on crashing while creating a deployment from a docker image in minikube

I have a Docker image containing Python files which should download satellite imagery from the SciHub website. The Docker image is working fine. Now, when I want to create the deployment through kubectl so that I can expose it as a service, its container keeps on crashing. That's what the pod description says when seen through kubectl describe pod.
This is how I am trying to deploy it: sudo kubectl run back --image=back:latest --port=8080 --image-pull-policy Never. I also tried changing the port, but it did not work. Here are the files within the Docker image.
Dockerfile
FROM python:3.7-stretch
COPY . /code
WORKDIR /code
RUN pip install -r requirements.txt
ENTRYPOINT ["python", "ingestion.py"]
ingestion.py
import os
import shutil
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
logger = logging.getLogger("ingestion")

import requests
import datahub

scihub_username = os.environ["scihub_username"]
scihub_password = os.environ["scihub_password"]
result_url = "http://" + os.environ["CDINRW_BASE_URL"] + "/jobs/" + os.environ["CDINRW_JOB_ID"] + "/results"

logger.info("Searching the Copernicus Open Access Hub")
scenes = datahub.search(username=scihub_username,
                        password=scihub_password,
                        producttype=os.getenv("producttype"),
                        platformname=os.getenv("platformname"),
                        days_back=os.getenv("days_back", 2),
                        footprint=os.getenv("footprint"),
                        max_cloud_cover_percentage=os.getenv("max_cloud_cover_percentage"),
                        start_date=os.getenv("start_date"),
                        end_date=os.getenv("end_date"))
logger.info("Found {} relevant scenes".format(len(scenes)))

job_results = []
for scene in scenes:
    # do not download a scene that has already been ingested
    if os.path.exists(os.path.join("/out_data", scene["title"]+".SAFE")):
        logger.info("The scene {} already exists in /out_data and will not be downloaded again.".format(scene["title"]))
        filename = scene["title"]+".SAFE"
    else:
        logger.info("Starting the download of scene {}".format(scene["title"]))
        filename = datahub.download(scene, "/tmp", scihub_username, scihub_password, unpack=True)
        logger.info("The download was successful.")
        shutil.move(filename, "/out_data")
    result_message = {"description": "test",
                      "type": "Raster",
                      "format": "SAFE",
                      "filename": os.path.basename(filename)}
    job_results.append(result_message)

res = requests.put(result_url, json=job_results, timeout=60)
res.raise_for_status()
datahub.py
import logging
import os
import urllib.parse
import zipfile

import requests

# constructing URLs for querying the data hub
_BASE_URL = "https://scihub.copernicus.eu/dhus/"
SITE = {}
SITE["SEARCH"] = _BASE_URL + "search?format=xml&sortedby=beginposition&order=desc&rows=100&start={offset}&q="
_PRODUCT_URL = _BASE_URL + "odata/v1/Products('{uuid}')/"
SITE["CHECKSUM"] = _PRODUCT_URL + "Checksum/Value/$value"
SITE["SAFEZIP"] = _PRODUCT_URL + "$value"

logger = logging.getLogger(__name__)

def _build_search_url(producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    search_terms = []
    if producttype:
        search_terms.append("producttype:{}".format(producttype))
    if platformname:
        search_terms.append("platformname:{}".format(platformname))
    if start_date and end_date:
        search_terms.append(
            "beginPosition:[{}+TO+{}]".format(start_date, end_date))
    elif days_back:
        search_terms.append(
            "beginPosition:[NOW-{}DAYS+TO+NOW]".format(days_back))
    if footprint:
        search_terms.append("footprint:%22Intersects({})%22".format(
            footprint.replace(" ", "+")))
    if max_cloud_cover_percentage:
        search_terms.append("cloudcoverpercentage:[0+TO+{}]".format(max_cloud_cover_percentage))
    url = SITE["SEARCH"] + "+AND+".join(search_terms)
    return url

def _unpack(zip_file, directory, remove_after=False):
    with zipfile.ZipFile(zip_file) as zf:
        # This assumes that the zipfile only contains the .SAFE directory at root level
        safe_path = zf.namelist()[0]
        zf.extractall(path=directory)
    if remove_after:
        os.remove(zip_file)
    return os.path.normpath(os.path.join(directory, safe_path))

def search(username, password, producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    """ Search the Copernicus SciHub

    Parameters
    ----------
    username : str
        user name for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    producttype : str, optional
        product type to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    platformname : str, optional
        platform name to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    days_back : int, optional
        number of days before today that will be searched. Default are the last 2 days. If start and end date are set the days_back parameter is ignored
    footprint : str, optional
        well-known-text representation of the footprint
    max_cloud_cover_percentage: str, optional
        percentage of cloud cover per scene. Can only be used in combination with Sentinel-2 imagery.
        (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    start_date: str, optional
        start point of the search extent, has to be used in combination with end_date
    end_date: str, optional
        end point of the search extent, has to be used in combination with start_date

    Returns
    -------
    list
        a list of scenes that match the search parameters
    """
    import xml.etree.cElementTree as ET

    scenes = []
    search_url = _build_search_url(producttype, platformname, days_back, footprint, max_cloud_cover_percentage, start_date, end_date)
    logger.info("Search URL: {}".format(search_url))
    offset = 0
    rowsBreak = 5000
    name_space = {"atom": "http://www.w3.org/2005/Atom",
                  "opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
    while offset < rowsBreak:  # Next pagination page:
        response = requests.get(search_url.format(offset=offset), auth=(username, password))
        root = ET.fromstring(response.content)
        if offset == 0:
            rowsBreak = int(
                root.find("opensearch:totalResults", name_space).text)
        for e in root.iterfind("atom:entry", name_space):
            uuid = e.find("atom:id", name_space).text
            title = e.find("atom:title", name_space).text
            begin_position = e.find(
                "atom:date[@name='beginposition']", name_space).text
            end_position = e.find(
                "atom:date[@name='endposition']", name_space).text
            footprint = e.find("atom:str[@name='footprint']", name_space).text
            scenes.append({
                "id": uuid,
                "title": title,
                "begin_position": begin_position,
                "end_position": end_position,
                "footprint": footprint})
        # Ultimate DHuS pagination page size limit (rows per page).
        offset += 100
    return scenes

def download(scene, directory, username, password, unpack=True):
    """ Download a Sentinel scene based on its uuid

    Parameters
    ----------
    scene : dict
        the scene to be downloaded
    path : str
        the path where the file will be downloaded to
    username : str
        username for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    unpack: boolean, optional
        flag that defines whether the downloaded product should be unpacked after download. defaults to true

    Raises
    ------
    ValueError
        if the size of the downloaded file does not match the Content-Length header
    ValueError
        if the checksum of the downloaded file does not match the checksum provided by the Copernicus SciHub

    Returns
    -------
    str
        path to the downloaded file
    """
    import hashlib

    md5hash = hashlib.md5()
    md5sum = requests.get(SITE["CHECKSUM"].format(
        uuid=scene["id"]), auth=(username, password)).text
    download_path = os.path.join(directory, scene["title"] + ".zip")
    # overwrite if path already exists
    if os.path.exists(download_path):
        os.remove(download_path)
    url = SITE["SAFEZIP"].format(uuid=scene["id"])
    rsp = requests.get(url, auth=(username, password), stream=True)
    cl = rsp.headers.get("Content-Length")
    size = int(cl) if cl else -1
    # Actually fetch now:
    with open(download_path, "wb") as f:  # Do not read as a whole into memory:
        written = 0
        for block in rsp.iter_content(8192):
            f.write(block)
            written += len(block)
            md5hash.update(block)
    written = os.path.getsize(download_path)
    if size > -1 and written != size:
        raise ValueError("{}: size mismatch, {} bytes written but expected {} bytes to write!".format(
            download_path, written, size))
    elif md5sum:
        calculated = md5hash.hexdigest()
        expected = md5sum.lower()
Pod events:
Events:
  Type     Reason   Age                        From               Message
  ----     ------   ----                       ----               -------
  Warning  BackOff  2m39s (x18636 over 2d19h)  kubelet, minikube  Back-off restarting failed container
The system which wants to use this service already has another main front-end service (which just runs the application) running on 8081, so maybe I need to expose this on the same port. How can I make the deployment run?

Kubernetes API call equivalent to 'kubectl apply'

I am trying to use the master API to update resources.
In 1.2, to update a deployment resource I'm doing kubectl apply -f new updateddeployment.yaml.
How do I do the same action with the API?
I checked the code in pkg/kubectl/cmd/apply.go and I think the following lines of code show what happens behind the scenes when you run kubectl apply -f:
// Compute a three way strategic merge patch to send to server.
patch, err := strategicpatch.CreateThreeWayMergePatch(original, modified, current,
    versionedObject, true)

helper := resource.NewHelper(info.Client, info.Mapping)
_, err = helper.Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch)
And here is the code of helper.Patch:
func (m *Helper) Patch(namespace, name string, pt api.PatchType, data []byte) (runtime.Object, error) {
    return m.RESTClient.Patch(pt).
        NamespaceIfScoped(namespace, m.NamespaceScoped).
        Resource(m.Resource).
        Name(name).
        Body(data).
        Do().
        Get()
}
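If you only need this for a known resource type, you can get reasonably close from Python by sending a merge-style patch with the official client (this is not the full three-way apply that kubectl computes). A minimal sketch, assuming a Deployment manifest on disk; the file name is only an example:
import yaml
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# load the updated manifest (hypothetical path)
with open("updateddeployment.yaml") as f:
    manifest = yaml.safe_load(f)

# With a dict body the client sends a merge-type patch (strategic merge for
# built-in kinds, depending on client version), so the fields present here
# are merged into the live Deployment object.
apps.patch_namespaced_deployment(
    name=manifest["metadata"]["name"],
    namespace=manifest["metadata"].get("namespace", "default"),
    body=manifest,
)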
This API is not really convincingly designed, since it forces us to reimplement such basic stuff at the client side...
Anyway, here is my attempt to reinvent the hexagonal wheel in Python...
Python module kube_apply
Usage is like kube_apply.fromYaml(myStuff)
can read strings or opened file streams (via lib Yaml)
handles yaml files with several concatenated objects
implementation is rather braindead and first attempts to insert the resource. If this fails, it tries a patch, and if this also fails, it deletes the resource and inserts it anew.
File: kube_apply.py
#!/usr/bin/python3
# coding: utf-8
# _________________________________________________________________ #
# kube_apply - apply Yaml similar to kubectl apply -f file.yaml      #
#                                                                     #
# (C) 2019 Hermann Vosseler <Ichthyostega@web.de>                     #
# This is OpenSource software; licensed under Apache License v2+     #
# ################################################################### #
'''
Utility for the official Kubernetes python client: apply Yaml data.
While still limited to some degree, this utility attempts to provide
functionality similar to `kubectl apply -f`
- load and parse Yaml
- try to figure out the object type and API to use
- figure out if the resource already exists, in which case
  it needs to be patched or replaced altogether.
- otherwise just create a new resource.
Based on inspiration from `kubernetes/utils/create_from_yaml.py`

@since: 2/2019
@author: Ichthyostega
'''
import re
import yaml
import logging
import kubernetes.client


def runUsageExample():
    ''' demonstrate usage by creating a simple Pod through default client
    '''
    logging.basicConfig(level=logging.DEBUG)
    #
    # KUBECONFIG = '/path/to/special/kubecfg.yaml'
    # import kubernetes.config
    # client = kubernetes.config.new_client_from_config(config_file=KUBECONFIG)
    # # --or alternatively--
    # kubernetes.config.load_kube_config(config_file=KUBECONFIG)

    fromYaml('''
kind: Pod
apiVersion: v1
metadata:
  name: dummy-pod
  labels:
    blow: job
spec:
  containers:
  - name: sleepr
    image: busybox
    command:
    - /bin/sh
    - -c
    - sleep 24000
''')


def fromYaml(rawData, client=None, **kwargs):
    ''' invoke the K8s API to create or replace an object given as YAML spec.
        @param rawData: either a string or an opened input stream with a
               YAML formatted spec, as you'd use for `kubectl apply -f`
        @param client: (optional) preconfigured client environment to use for invocation
        @param kwargs: (optional) further arguments to pass to the create/replace call
        @return: response object from Kubernetes API call
    '''
    for obj in yaml.load_all(rawData):
        createOrUpdateOrReplace(obj, client, **kwargs)


def createOrUpdateOrReplace(obj, client=None, **kwargs):
    ''' invoke the K8s API to create or replace a kubernetes object.
        The first attempt is to create(insert) this object; when this is rejected because
        of an existing object with same name, we attempt to patch this existing object.
        As a last resort, if even the patch is rejected, we *delete* the existing object
        and recreate from scratch.
        @param obj: complete object specification, including API version and metadata.
        @param client: (optional) preconfigured client environment to use for invocation
        @param kwargs: (optional) further arguments to pass to the create/replace call
        @return: response object from Kubernetes API call
    '''
    k8sApi = findK8sApi(obj, client)
    try:
        res = invokeApi(k8sApi, 'create', obj, **kwargs)
        logging.debug('K8s: %s created -> uid=%s', describe(obj), res.metadata.uid)
    except kubernetes.client.rest.ApiException as apiEx:
        if apiEx.reason != 'Conflict': raise
        try:
            # asking for forgiveness...
            res = invokeApi(k8sApi, 'patch', obj, **kwargs)
            logging.debug('K8s: %s PATCHED -> uid=%s', describe(obj), res.metadata.uid)
        except kubernetes.client.rest.ApiException as apiEx:
            if apiEx.reason != 'Unprocessable Entity': raise
            try:
                # second attempt... delete the existing object and re-insert
                logging.debug('K8s: replacing %s FAILED. Attempting deletion and recreation...', describe(obj))
                res = invokeApi(k8sApi, 'delete', obj, **kwargs)
                logging.debug('K8s: %s DELETED...', describe(obj))
                res = invokeApi(k8sApi, 'create', obj, **kwargs)
                logging.debug('K8s: %s CREATED -> uid=%s', describe(obj), res.metadata.uid)
            except Exception as ex:
                message = 'K8s: FAILURE updating %s. Exception: %s' % (describe(obj), ex)
                logging.error(message)
                raise RuntimeError(message)
    return res


def patchObject(obj, client=None, **kwargs):
    k8sApi = findK8sApi(obj, client)
    try:
        res = invokeApi(k8sApi, 'patch', obj, **kwargs)
        logging.debug('K8s: %s PATCHED -> uid=%s', describe(obj), res.metadata.uid)
        return res
    except kubernetes.client.rest.ApiException as apiEx:
        if apiEx.reason == 'Unprocessable Entity':
            message = 'K8s: patch for %s rejected. Exception: %s' % (describe(obj), apiEx)
            logging.error(message)
            raise RuntimeError(message)
        else:
            raise


def deleteObject(obj, client=None, **kwargs):
    k8sApi = findK8sApi(obj, client)
    try:
        res = invokeApi(k8sApi, 'delete', obj, **kwargs)
        logging.debug('K8s: %s DELETED. uid was: %s', describe(obj), res.details and res.details.uid or '?')
        return True
    except kubernetes.client.rest.ApiException as apiEx:
        if apiEx.reason == 'Not Found':
            logging.warning('K8s: %s does not exist (anymore).', describe(obj))
            return False
        else:
            message = 'K8s: deleting %s FAILED. Exception: %s' % (describe(obj), apiEx)
            logging.error(message)
            raise RuntimeError(message)


def findK8sApi(obj, client=None):
    ''' Investigate the object spec and lookup the corresponding API object
        @param client: (optional) preconfigured client environment to use for invocation
        @return: a client instance wired to the appropriate API
    '''
    grp, _, ver = obj['apiVersion'].partition('/')
    if ver == '':
        ver = grp
        grp = 'core'
    # Strip 'k8s.io', camel-case-join dot separated parts. rbac.authorization.k8s.io -> RbacAuthorization
    grp = ''.join(part.capitalize() for part in grp.rsplit('.k8s.io', 1)[0].split('.'))
    ver = ver.capitalize()

    k8sApi = '%s%sApi' % (grp, ver)
    return getattr(kubernetes.client, k8sApi)(client)


def invokeApi(k8sApi, action, obj, **args):
    ''' find a suitable function and perform the actual API invocation.
        @param k8sApi: client object for the invocation, wired to correct API version
        @param action: either 'create' (to inject a new object) or 'replace','patch','delete'
        @param obj: the full object spec to be passed into the API invocation
        @param args: (optional) extraneous arguments to pass
        @return: response object from Kubernetes API call
    '''
    # transform ActionType from Yaml into action_type for swagger API
    kind = camel2snake(obj['kind'])
    # determine namespace to place the object in, supply default
    try: namespace = obj['metadata']['namespace']
    except: namespace = 'default'

    functionName = '%s_%s' % (action, kind)
    if hasattr(k8sApi, functionName):
        # namespace agnostic API
        function = getattr(k8sApi, functionName)
    else:
        functionName = '%s_namespaced_%s' % (action, kind)
        function = getattr(k8sApi, functionName)
        args['namespace'] = namespace
    if not 'create' in functionName:
        args['name'] = obj['metadata']['name']
    if 'delete' in functionName:
        from kubernetes.client.models.v1_delete_options import V1DeleteOptions
        obj = V1DeleteOptions()

    return function(body=obj, **args)


def describe(obj):
    return "%s '%s'" % (obj['kind'], obj['metadata']['name'])


def camel2snake(string):
    string = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', string)
    string = re.sub('([a-z0-9])([A-Z])', r'\1_\2', string).lower()
    return string


if __name__ == '__main__':
    runUsageExample()
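A quick usage sketch for the module above (the manifest file name is only an example); per the docstring, fromYaml accepts either a string or an opened file stream:
import kube_apply

with open("my-deployment.yaml") as f:   # hypothetical manifest file
    kube_apply.fromYaml(f)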
You could install the kubectl binary and invoke it from within your Python program, for example via subprocess:
import subprocess

# pipe the manifests into kubectl on stdin; add --prune (plus a label selector)
# if you also want objects removed from the manifests to be deleted
subprocess.run(["kubectl", "apply", "-f", "-"],
               input=yaml_manifests, text=True, check=True)
Once server-side apply is ready, this problem will get a bit easier, as there will effectively be a k8s API endpoint you can hit (though it still doesn't sound like it will be resource-agnostic, i.e. you will still have to PATCH /api/v1/some-k8s-resource specifically, whereas with kubectl apply you can input a heterogeneous list of resources).