Update Kubernetes Docker image with Renovate after a new image push with the same or similar tag

I am using Renovate to update my Kubernetes manifest files. The process should be as follows:
build and push a Docker image with the tag registery.com/nginx:qa-main
run Renovate
Renovate should update the image tag inside the Helm values.yaml
I have tried the configuration below, and it sometimes works and sometimes does not:
# renovate.json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base"
  ],
  "regexManagers": [
    {
      "fileMatch": ["clusters\/dev\/charts\/renovate\/values\.yaml"],
      "description": "Update docker image references",
      "matchStrings": ["image: (?<depName>.*?):(?<currentValue>.*?)#(?<currentDigest>sha256:[a-f0-9]+)s"],
      "datasourceTemplate": "docker"
    }
  ],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "matchUpdateTypes": [
        "major"
      ],
      "enabled": false
    }
  ]
}
# renovate-config.js
module.exports = {
  token: 'mytoken',
  platform: 'github',
  logLevel: 'debug',
  labels: ['renovate', 'dependencies', 'automated'],
  onboarding: true,
  onboardingConfig: {
    extends: ['config:base', '":dependencyDashboard"'],
  },
  baseBranches: [],
  repositories: ['mycompany/myrepo'],
  renovateFork: true,
  gitAuthor: "mark <a.mark#mycompany.com>",
  username: "Fettah",
  onboarding: false,
  printConfig: true,
  requireConfig: false,
  logFile: "renovate.log.json",
};
# values.yaml
image: fettah/nginx:qa-main#sha256:bbd1281091831284e9488b713ee2a29b6a54494cf3519e352faa589c221c0898
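For reference, here is a minimal sketch of a regex manager whose matchStrings pattern actually matches the tag#digest line in the values.yaml above: it drops the stray trailing s, tightens the capture groups so the tag and digest are matched cleanly, and keeps the digest in currentDigest so Renovate can offer digest updates when a new image is pushed under the same qa-main tag. The fileMatch pattern and the character classes are assumptions to adapt to your repository layout:
# renovate.json (sketch)
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"],
  "regexManagers": [
    {
      "fileMatch": ["clusters/dev/charts/renovate/values\\.yaml$"],
      "description": "Update image tag#digest references",
      "matchStrings": [
        "image:\\s*(?<depName>[^:\\s]+):(?<currentValue>[^#\\s]+)#(?<currentDigest>sha256:[a-f0-9]+)"
      ],
      "datasourceTemplate": "docker"
    }
  ],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "matchUpdateTypes": ["major"],
      "enabled": false
    }
  ]
}
With a mutable tag like qa-main, the change Renovate writes is a digest update, so it also needs credentials for the registry (hostRules) to look up the new digest, and digest updates must not be disabled by any other packageRules.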

Related

After each package.json update the app cannot load the newly hashed index.js (Vite PWA)

I have just installed the vite-plugin-pwa and followed the documentation and have the following in my config
VitePWA({
  injectRegister: 'auto',
  registerType: 'autoUpdate',
  devOptions: {
    enabled: true
  },
  strategies: 'generateSW',
  workbox: {
    globPatterns: ['**/*.{js,css,html,ico,png,svg,mp3}'],
    sourcemap: true
  },
  includeAssets: ['favicon.ico', 'apple-touch-icon.png', 'masked-icon.svg'],
  manifest: {
    name: 'Litreach',
    short_name: 'Cluiche Litrithe',
    start_url: "/",
    display: "standalone",
    lang: "ga",
    description: 'Cluiche litrithe ina mbíonn ar an imreoir 5 fhocal a litriú gach lá. Cluintear na focail a rá sna 3 canúintí agus bíonn 5 iarracht agat an focal a litriú i gceart.',
    theme_color: '#ffffff',
    icons: [
      {
        src: 'pwa-192x192.png',
        sizes: '192x192',
        type: 'image/png',
        "purpose": "maskable"
      },
      {
        src: 'pwa-192x192.png',
        sizes: '192x192',
        type: 'image/png',
        "purpose": "any"
      },
      {
        src: 'pwa-512x512.png',
        sizes: '512x512',
        type: 'image/png'
      }
    ],
    dir: "ltr",
    orientation: "portrait",
    display_override: [
      "standalone"
    ],
    categories: [
      "education",
      "games"
    ]
  }
})
However, each time I bump the package.json version number, the next time I load the app it falls over. When I inspect the Network tab I can see that the app is trying to load index.js?oldHashNumber, and the only way I can get the app to load is to manually press the refresh button in the browser.
I believe there is some problem with the Service Worker or my PWA configuration.
Should I try to destroy the Service Worker and all of its caches and start again?
If so how should I do this?
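If you do want to wipe the old service worker and its caches as a one-off recovery step, a minimal sketch using only standard browser APIs could look like this (run it once from the affected page, e.g. behind a temporary button; resetServiceWorkers is just an illustrative name):
// one-off cleanup: unregister any existing service workers, delete their caches,
// then reload so the page is fetched from the network again
async function resetServiceWorkers() {
  if ('serviceWorker' in navigator) {
    const registrations = await navigator.serviceWorker.getRegistrations();
    await Promise.all(registrations.map((registration) => registration.unregister()));
  }
  if ('caches' in window) {
    const cacheNames = await caches.keys();
    await Promise.all(cacheNames.map((name) => caches.delete(name)));
  }
  window.location.reload();
}
With registerType: 'autoUpdate' you normally should not need this permanently; it only helps recover clients that are stuck on a stale precache.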

Nuxt.js static app is loading indefinitely

I just finished a Nuxt.js project, and I want to deploy it on a web server. So I executed the command nuxt generate to get a static app. Before this, everything was working perfectly, but now nothing works: the page loads indefinitely, with a rotating black and gray spinner in the center of the page.
Here is a picture
EDIT:
I am hosting my app on OVHcloud, and here is a public repo of my app: https://github.com/maximehamou/public.mh-info.fr.
Here is my nuxt.config.js
export default {
  // Disable server-side rendering: https://go.nuxtjs.dev/ssr-mode
  ssr: false,
  target: "static",
  // Global page headers: https://go.nuxtjs.dev/config-head
  head: {
    title: "Accueil | MH info",
    htmlAttrs: {
      lang: "fr",
    },
    meta: [
      { charset: "utf-8" },
      { name: "viewport", content: "width=device-width, initial-scale=1" },
      { hid: "description", name: "description", content: "" },
      { name: "format-detection", content: "telephone=no" },
    ],
    link: [{ rel: "icon", type: "image/x-icon", href: "/favicon.ico" }],
    script: [{ src: "https://kit.fontawesome.com/048c7a73f1.js/" }],
  },
  // Global CSS: https://go.nuxtjs.dev/config-css
  css: ["./css/general.css"],
  server: {
    port: 4000,
  },
};
Here is my package.json
{
  "name": "mh-info.fr",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "dev": "nuxt",
    "build": "nuxt build",
    "start": "nuxt start",
    "generate": "nuxt generate",
    "sass": "sass -w scss:css"
  },
  "dependencies": {
    "buttercms": "^1.2.9",
    "core-js": "^3.19.3",
    "nuxt": "^2.15.8",
    "sass": "^1.54.9",
    "vue": "^2.6.14",
    "vue-server-renderer": "^2.6.14",
    "vue-template-compiler": "^2.6.14",
    "webpack": "^4.46.0"
  }
}
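One way to narrow this down before touching the server is to check whether the generated output behaves the same locally; assuming the scripts in the package.json above (Nuxt 2 with target: "static"), a minimal check is:
npm run generate   # pre-renders the site into dist/
npm run start      # with target: "static", nuxt start serves the generated dist/ locally
If the spinner shows up locally as well, the problem is in the generated bundle rather than in the OVHcloud hosting.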
Update regarding my latest changes on a cloned version of your project.
I managed to get something properly working here: https://kissu-makes-great-sites.netlify.app/fr/tous-les-articles
The main conclusion is that there are a LOT of things to fix/improve.
You're not writing your app as you should with Vue (even less in a Nuxt way).
There is too much to cover in a single response, so I recommend that you ping me on Twitter, Discord or by email if you want a more in-depth explanation/mentoring on how to fix all of this.
PS: I speak French, lived there for 20 years. 🇫🇷

Great Expectations v3 API in AWS Glue 3.0

I'm trying to run a validation in the pipeline using Great Expectations on AWS Glue 3.0.
Here's my initial attempt to create the data context at runtime, based on their docs:
def create_context():
    logger.info("Create DataContext Config.")
    data_context_config = DataContextConfig(
        config_version=2,
        plugins_directory=None,
        config_variables_file_path=None,
        # concurrency={"enabled": "true"},
        datasources={
            "my_spark_datasource": DatasourceConfig(
                class_name="Datasource",
                execution_engine={
                    "class_name": "SparkDFExecutionEngine",
                    "module_name": "great_expectations.execution_engine",
                },
                data_connectors={
                    "my_spark_dataconnector": {
                        "module_name": "great_expectations.datasource.data_connector",
                        "class_name": "RuntimeDataConnector",
                        "batch_identifiers": [""],
                    }
                },
            )
        },
        stores={
            "expectations_S3_store": {
                "class_name": "ExpectationsStore",
                "store_backend": {
                    "class_name": "TupleS3StoreBackend",
                    "bucket": data_profile_s3_store_bucket,
                    "prefix": "expectations/",
                    "s3_put_options": {"ACL": "bucket-owner-full-control"},
                },
            },
            "validations_S3_store": {
                "class_name": "ValidationsStore",
                "store_backend": {
                    "class_name": "TupleS3StoreBackend",
                    "bucket": data_profile_s3_store_bucket,
                    "prefix": "validations/",
                    "s3_put_options": {"ACL": "bucket-owner-full-control"},
                },
            },
            "evaluation_parameter_store": {"class_name": "EvaluationParameterStore"},
            "checkpoint_S3_store": {
                "class_name": "CheckpointStore",
                "store_backend": {
                    "class_name": "TupleS3StoreBackend",
                    "suppress_store_backend_id": "true",
                    "bucket": data_profile_s3_store_bucket,
                    "prefix": "checkpoints/",
                    "s3_put_options": {"ACL": "bucket-owner-full-control"},
                },
            },
        },
        expectations_store_name="expectations_S3_store",
        validations_store_name="validations_S3_store",
        evaluation_parameter_store_name="evaluation_parameter_store",
        checkpoint_store_name="checkpoint_S3_store",
        data_docs_sites={
            "s3_site": {
                "class_name": "SiteBuilder",
                "store_backend": {
                    "class_name": "TupleS3StoreBackend",
                    "bucket": data_profile_s3_store_bucket,
                    "prefix": "data_docs/",
                    "s3_put_options": {"ACL": "bucket-owner-full-control"},
                },
                "site_index_builder": {
                    "class_name": "DefaultSiteIndexBuilder",
                    "show_cta_footer": True,
                },
            }
        },
        anonymous_usage_statistics={"enabled": True},
    )

    # Pass the DataContextConfig as a project_config to BaseDataContext
    context = BaseDataContext(project_config=data_context_config)

    logger.info("Create Checkpoint Config.")
    checkpoint_config = {
        "name": "my_checkpoint",
        "config_version": 1,
        "class_name": "Checkpoint",
        "run_name_template": "ingest_date=%YYYY-%MM-%DD",
        "expectation_suite_name": data_profile_expectation_suite_name,
        "runtime_configuration": {
            "result_format": {
                "result_format": "COMPLETE",
                "include_unexpected_rows": True,
            }
        },
        "evaluation_parameters": {},
    }
    context.add_checkpoint(**checkpoint_config)
    # logger.info(f'GE Data Context Config: "{data_context_config}"')
    return context
Using this, I get an error saying "attempting to run operations on stopped spark context".
Is there a better way to use the Spark datasource in Glue 3.0?
I want to stay on Glue 3.0 as much as possible, to avoid having to maintain two versions of Glue jobs.
You can fix this by setting force_reuse_spark_context to true; here is a quick example (YAML):
config_version: 3.0
datasources:
  my_spark_datasource:
    class_name: Datasource
    module_name: great_expectations.datasource
    data_connectors:
      my_spark_dataconnector:
        class_name: RuntimeDataConnector
        module_name: great_expectations.datasource.data_connector
        batch_identifiers: {}
    execution_engine:
      class_name: SparkDFExecutionEngine
      force_reuse_spark_context: true
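If you prefer to keep the context defined in code as in the question, the same flag can be passed in the execution_engine block of the DatasourceConfig. A minimal sketch of just the datasource part, assuming the same Great Expectations version and imports as the question (stores, store names, and data docs config stay as they were):
from great_expectations.data_context.types.base import DataContextConfig, DatasourceConfig

# Only the datasource block changes compared to the question's create_context().
data_context_config = DataContextConfig(
    config_version=2,
    datasources={
        "my_spark_datasource": DatasourceConfig(
            class_name="Datasource",
            execution_engine={
                "class_name": "SparkDFExecutionEngine",
                "module_name": "great_expectations.execution_engine",
                # reuse the SparkSession that Glue already started instead of
                # letting Great Expectations spin up (and later stop) its own
                "force_reuse_spark_context": True,
            },
            data_connectors={
                "my_spark_dataconnector": {
                    "module_name": "great_expectations.datasource.data_connector",
                    "class_name": "RuntimeDataConnector",
                    "batch_identifiers": [""],
                }
            },
        )
    },
    # ... expectations/validations/checkpoint stores and data_docs_sites as in the question ...
)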
Another thing I would like to add: you can define the context in a YAML file and upload it to S3. Then you can parse this file in the Glue job with the function below:
def parse_data_context_from_S3(bucket: str, prefix: str = ""):
    object_key = os.path.join(prefix, "great_expectations.yml")
    print(f"Parsing s3://{bucket}/{object_key}")

    s3 = boto3.session.Session().client("s3")
    s3_object = s3.get_object(Bucket=bucket, Key=object_key)["Body"]
    datacontext_config = yaml.safe_load(s3_object.read())

    project_config = DataContextConfig(**datacontext_config)
    context = BaseDataContext(project_config=project_config)
    return context
Your CI/CD pipeline can easily replace the store backends in the YML file while deploying it to your environments (dev, hom, prod).
If you are using the RuntimeDataConnector, you should have no problem using Glue 3.0. The same does not apply if you are using the InferredAssetS3DataConnector and your datasets are encrypted using KMS. In this case, I was only able to use Glue 2.0.

Error when setting up glusterfs on Kubernetes: volume create: heketidbstorage: failed: Host not connected

I'm following these instructions to set up GlusterFS on my Kubernetes cluster. At the heketi-client/bin/heketi-cli setup-openshift-heketi-storage step, heketi-cli tells me:
Error: volume create: heketidbstorage: failed: Host 192.168.99.25 not connected
or sometimes:
Error: volume create: heketidbstorage: failed: Staging failed on 192.168.99.26. Error: Host 192.168.99.25 not connected
heketi.json is
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "7319"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "7319"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "kubeexec": {
      "rebalance_on_expansion": true
    },
    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },
  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}
topology-sample.json is
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "redis-test25"
              ],
              "storage": [
                "192.168.99.25"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda7",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "redis-test26"
              ],
              "storage": [
                "192.168.99.26"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda7",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "redis-test01"
              ],
              "storage": [
                "192.168.99.113"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sda7",
              "destroydata": true
            }
          ]
        }
      ]
    }
  ]
}
The heketi-cli is v8.0.0 and Kubernetes is v1.12.3.
How do I fix this problem?
Update: I just found that I missed the iptables part, but now the message becomes:
Error: volume create: heketidbstorage: failed: Host 192.168.99.25 is not in 'Peer in Cluster' state
It seems that one of the glusterfs pods cannot connect to the others. I tried kubectl exec -i glusterfs-59ftx -- gluster peer status:
Number of Peers: 2
Hostname: 192.168.99.26
Uuid: 6950db9a-3d60-4625-b642-da5882396bee
State: Peer Rejected (Disconnected)
Hostname: 192.168.99.113
Uuid: 78983466-4499-48d2-8411-2c3e8c70f89f
State: Peer Rejected (Disconnected)
while the other one said:
Number of Peers: 1
Hostname: 192.168.99.26
Uuid: 23a0114d-65b8-42d6-8067-7efa014af68d
State: Peer in Cluster (Connected)
I solved these problems by myself.
For the first part, the reason was that I didn't set up iptables on every node according to the Infrastructure Requirements.
For the second part, following this article: delete all files in /var/lib/glusterd except glusterd.info, then start over from the Kubernetes deploy step.
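For reference, a rough sketch of that cleanup, using the pod name from the question (adapt it to your own pods, and re-run the deployment afterwards as the article describes):
# run for each glusterfs pod whose peer state is "Peer Rejected"
kubectl exec -it glusterfs-59ftx -- sh -c \
  'cd /var/lib/glusterd && find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +'
# then restart glusterd in that pod (or delete the pod so it gets recreated)
# and start over from the Kubernetes deploy step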

docker-compose volumes not working this way

In my docker-compose.yml, I am using the registry:2 image.
As I need to set up my own configuration (for using S3 storage), I tried to mount my config directory in place of the default one:
/usr/share/docker-registry/config/config.yml   # my own registry config on the local host
/go/src/github.com/docker/distribution/cmd/registry/config.yml   # the default in the container
In my docker-compose.yml, I wrote:
backend:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  links:
    - cache
  volumes:
    - /usr/share/docker-registry/config:/go/src/github.com/docker/distribution/cmd/registry
  ..
but when I compose it, my config settings are never taken into account... it's always using the default settings from cmd/registry/config.yml in the container.
What could be wrong? Thanks for any enlightenment...
If I inspect the running registry:2 container, the config is weird (the S3 info is there, but no volumes, and the CMD is executing the standard config.yml file...):
"Config": {
"Hostname": "5337012111a5",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"PortSpecs": null,
"ExposedPorts": {
"5000/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"SETTINGS_FLAVOR=local",
"REGISTRY_STORAGE_S3_SECURE=True",
"REGISTRY_STORAGE_S3_ENCRYPT=True",
"REGISTRY_STORAGE_S3_ROOTDIRECTORY=/s3/object/name/prefix",
"CACHE_REDIS_PORT=6379",
"REGISTRY_STORAGE_S3_V4AUTH=True",
"REGISTRY_STORAGE_S3_CHUNKSIZE=5242880",
"REGISTRY_STORAGE_S3_SECRETKEY=yyyyyyyyyyyyyyyyyyyyyyyy”,
"CACHE_LRU_REDIS_PORT=6379",
"SEARCH_BACKEND=sqlalchemy",
"CACHE_REDIS_HOST=cache",
"REGISTRY_STORAGE_S3_ACCESSKEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx”,
"CACHE_LRU_REDIS_HOST=cache",
"REGISTRY_STORAGE_S3_REGION=eu-central-1",
"REGISTRY_STORAGE_S3_BUCKET=harbor.dufour16.net",
"PATH=/go/bin:/usr/src/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOLANG_VERSION=1.4.2",
"GOPATH=/go/src/github.com/docker/distribution/Godeps/_workspace:/go",
"DISTRIBUTION_DIR=/go/src/github.com/docker/distribution"
],
"Cmd": [
"cmd/registry/config.yml"
],
"Image": "registry:2",
"Volumes": null,
"VolumeDriver": "",
"WorkingDir": "/go/src/github.com/docker/distribution",
"Entrypoint": [
"registry"
],
I need to override the settings with environment variables instead of using external volumes.
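For completeness, a sketch of that approach: the registry:2 image maps REGISTRY_-prefixed environment variables onto its configuration keys, so the S3 settings can go straight into the compose file with no volume mount (values shortened here; reuse the ones from the inspect output above):
backend:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  links:
    - cache
  environment:
    REGISTRY_STORAGE: s3
    REGISTRY_STORAGE_S3_REGION: eu-central-1
    REGISTRY_STORAGE_S3_BUCKET: harbor.dufour16.net
    REGISTRY_STORAGE_S3_ACCESSKEY: xxxxxxxxxxxxxxxxxxxx
    REGISTRY_STORAGE_S3_SECRETKEY: yyyyyyyyyyyyyyyyyyyy
    REGISTRY_STORAGE_S3_ROOTDIRECTORY: /s3/object/name/prefix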