GitHub Actions source file - kubernetes

How do I source this file, which is in another repo? Here is the GitHub Actions workflow example that is used in another project. I used the same code, but it's complaining that the "file is not found".
run: |
  # Setting up cluster configurations, config files are in the kubectl image
  # https://github.com/kube-apps/tree/master/kubectl
  source /gke/gke-clusters.config
  source /aks/aks-clusters.config

Just add the full path to the source lines.
run: |
  # Setting up cluster configurations, config files are in the kubectl image
  # https://github.com/kube-apps/tree/master/kubectl
  source https://github.com/kube-apps/tree/master/kubectl/gke-clusters.config
  source https://github.com/kube-apps/tree/master/kubectl/aks-clusters.config
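Note that bash's source builtin only reads local files (it cannot fetch over HTTPS), and a github.com/.../tree/... URL returns an HTML page rather than the raw file. A hedged sketch of the same idea that works around both issues, assuming the repo is public; the <org> placeholder, branch, and raw URLs are illustrative, not taken from the question:
run: |
  # Download the raw config files first (hypothetical org/branch/paths -- adjust to your repo).
  curl -fsSL -o gke-clusters.config https://raw.githubusercontent.com/<org>/kube-apps/master/kubectl/gke-clusters.config
  curl -fsSL -o aks-clusters.config https://raw.githubusercontent.com/<org>/kube-apps/master/kubectl/aks-clusters.config
  # Now the files exist locally, so source finds them.
  source ./gke-clusters.config
  source ./aks-clusters.config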

Related

How to display a new Yocto image option after sourcing poky/oe-init-build-env

Let's say I have a new Yocto image called stargazer-cmd. What file should I edit so that every time I source poky/oe-init-build-env it is displayed as a build option to the user?
kj@kj-Aspire-V3-471G:~/stm32Yoctominimal$ source poky/oe-init-build-env build-mp1/
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
    core-image-minimal
    core-image-sato
    meta-toolchain
    meta-ide-support
I wish to add stargazer-cmd on top of core-image-minimal, but I am not sure what to google or which file I need to change.
Let me explain how to add a custom configuration to the OpenEmbedded build process.
First of all, here is what happens when you run:
source poky/oe-init-build-env
The oe-init-build-env script initializes the OEROOT variable to point to the location of the script itself.
It then sources another file, $OEROOT/scripts/oe-buildenv-internal, which will:
- Check that OEROOT is set
- Set BUILDDIR to your custom build directory name $1, or just build if you do not provide one
- Set BBPATH to the poky/bitbake folder
- Add $BBPATH/bin and $OEROOT/scripts to PATH (this enables commands like bitbake and bitbake-layers)
- Export BUILDDIR and PATH for the next script
The oe-init-build-env script continues by running the final build script with:
TEMPLATECONF="$TEMPLATECONF" $OEROOT/scripts/oe-setup-builddir
The oe-setup-builddir script will:
- Check that BUILDDIR is set
- Create the conf directory under $BUILDDIR
- Source a template file that checks whether the TEMPLATECONF variable is set:
. $OEROOT/.templateconf
That file contains:
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}
This means that if the TEMPLATECONF variable is not set, it defaults to meta-poky/conf, which is where the default local.conf and bblayers.conf come from.
- Copy $TEMPLATECONF to $BUILDDIR/conf/templateconf.cfg
- Set some variables pointing to the custom local.conf and bblayers.conf:
OECORELAYERCONF="$TEMPLATECONF/bblayers.conf.sample"
OECORELOCALCONF="$TEMPLATECONF/local.conf.sample"
OECORENOTESCONF="$TEMPLATECONF/conf-notes.txt"
In oe-setup-builddir there is a comment noting that TEMPLATECONF can point to a directory:
#
# $TEMPLATECONF can point to a directory for the template local.conf & bblayers.conf
#
- Copy local.conf.sample and bblayers.conf.sample from the TEMPLATECONF directory into $BUILDDIR/conf:
cp -f $OECORELOCALCONF "$BUILDDIR/conf/local.conf"
sed -e "s|##OEROOT##|$OEROOT|g" \
-e "s|##COREBASE##|$OEROOT|g" \
$OECORELAYERCONF > "$BUILDDIR/conf/bblayers.conf"
- Finally, print the contents of OECORENOTESCONF, which points to $TEMPLATECONF/conf-notes.txt:
[ ! -r "$OECORENOTESCONF" ] || cat $OECORENOTESCONF
By default that file is meta-poky/conf/conf-notes.txt:
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
    core-image-minimal
    core-image-sato
    meta-toolchain
    meta-ide-support
You can also run generated qemu images with a command like 'runqemu qemux86'
Other commonly useful commands are:
- 'devtool' and 'recipetool' handle common recipe tasks
- 'bitbake-layers' handles common layer tasks
- 'oe-pkgdata-util' handles common target package tasks
So, now, after understanding all of that, here is what you can do:
Create a custom template directory for your project containing:
- local.conf.sample
- bblayers.conf.sample
- conf-notes.txt
Do not forget to set the path to poky in bblayers.conf.sample to ##OEROOT##, as it will be substituted automatically by the build script. Set your custom message in conf-notes.txt.
Before any new build, just set TEMPLATECONF:
TEMPLATECONF="<path/to/template-directory>" source poky/oe-init-build-env <build_name>
Then you will find a build directory with your custom local.conf and bblayers.conf, plus an additional file conf/templateconf.cfg containing the path of TEMPLATECONF. You can also keep conf-notes.txt in your layer; OECORENOTESCONF should point to that file. A minimal end-to-end sketch follows.
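As a concrete illustration, here is a minimal sketch of setting up such a template directory. The layer name meta-stargazer and the sample-file paths are assumptions (they match the meta-poky layout of poky releases of this era), not part of the original setup:
# Hypothetical template directory inside your own layer.
mkdir -p meta-stargazer/conf
# Start from the poky defaults and adjust as needed.
cp poky/meta-poky/conf/local.conf.sample meta-stargazer/conf/
cp poky/meta-poky/conf/bblayers.conf.sample meta-stargazer/conf/
# Custom message printed after sourcing the environment script.
cat > meta-stargazer/conf/conf-notes.txt <<'EOF'
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
    core-image-minimal
    stargazer-cmd
EOF
# Use the template when creating a fresh build directory.
TEMPLATECONF="meta-stargazer/conf" source poky/oe-init-build-env build-mp1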

How to regenerate meson for newly added YAML files

I have added YAML files to add new D-Bus objects, and I added PHOSPHOR_MAPPER_SERVICE_append = " com/newCoName"
(newCoName is the name of my company.)
But when I run bitbake, do_configure for phosphor_mapper bails when it passes the option -Ddata_com_newCoName to meson. The following README says I need to run ./regenerate_meson from the gen directory when I add new YAML files. But how do I do that from a recipe file?
https://github.com/openbmc/phosphor-dbus-interfaces
One option is to generate these files outside of the Yocto environment (i.e. without involving bitbake):
- clone that git repo
- place your YAML file where you cloned the repo
- do what the README tells you, i.e. go to the gen directory and execute the meson-regenerate script
- collect the changes made by the script and create a patch
- add the patch to your layer and reference it in a .bbappend file (meta-/recipes-phosphor/dbus/phosphor-dbus-interfaces_git.bbappend)
Another option would be to add an additional task to the .bbappend file that runs before do_configure, and call that script from there:
do_configure_prepend() {
    cd ${S}/gen && ./meson-regenerate
}
Alongside this .bbappend you should ship your YAML so that it lands inside the gen folder, either via a patch or directly from your layer (check FILESEXTRAPATHS).
In both cases you'll need to patch meson_options.txt to add an option:
option('data_com_newCoName', type: 'boolean', value: true)
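For the second option, a hedged sketch of what the .bbappend might look like; the file names and the YAML path are illustrative, not taken from the question:
# phosphor-dbus-interfaces_%.bbappend (hypothetical)
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

# Ship the company YAML with the recipe; file:// subpaths are preserved under ${WORKDIR}.
SRC_URI += "file://com/newCoName/Example.interface.yaml"

do_configure_prepend() {
    # Copy the YAML into the source tree, then regenerate the meson files.
    cp -r ${WORKDIR}/com ${S}/
    cd ${S}/gen && ./meson-regenerate
}
You would still carry the meson_options.txt change (and, per the answer above, make sure the YAML ends up where the regenerate script expects it) as a patch in the same .bbappend.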

Docker COPY error when copying files from host to container

In the following Dockerfile I'm trying to copy a jar file from a location on the host into the container, but it seems Docker does not like it, so I guess I'm missing something. Here is my Dockerfile:
FROM anapsix/alpine-java:jdk8
MAINTAINER joesan
ENV SBT_VERSION 0.13.15
ENV CHECKSUM 18b106d09b2874f2a538c6e1f6b20c565885b2a8051428bd6d630fb92c1c0f96
ENV APP_NAME my-app
ENV PROJECT_HOME /opt/apps
RUN mkdir -p $PROJECT_HOME/$APP_NAME
# Copy the jar file
COPY ./target/scala-*/my-app-*.jar $PROJECT_HOME/$APP_NAME
# Copy the database file
COPY .my-db.mv.db $PROJECT_HOME/$APP_NAME
# Run the application
CMD ["$PROJECT_HOME/$APP_NAME java -Denv=dev -jar my-app-*.jar"]
In my build pipeline, I could see the following error message:
Step 8/10 : COPY ./target/scala-*/my-app-*.jar $PROJECT_HOME/$APP_NAME
COPY failed: no source files were specified
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 4a240742a379 Less than a second ago 171MB
anapsix/alpine-java jdk8 ed55c27d366d 3 years ago 171MB
Error response from daemon: No such image: [secure]
Pushing image [secure] to repository hub.docker.com
The push refers to repository [docker.io/[secure]/my-app]
An image does not exist locally with the tag: [secure]/my-app
What is it that I'm missing, and how could I debug this? I mean, I could add some echo statements to print out the path, but I'm not sure why I'm facing this error.
This is probably because the target folder is not under "./": either it's ignored by a .dockerignore file, or the build context is not pointing to the parent folder of the target folder.
In case you are not familiar with the build context, it's explained here.
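A quick way to check both hypotheses from a shell, as a sketch (run from the directory you pass to docker build as the context; paths are illustrative):
# 1. Does the jar actually exist relative to the build context?
ls target/scala-*/my-app-*.jar
# 2. Is target/ excluded from the context by .dockerignore?
grep -n "target" .dockerignore
# 3. Build with the context set to the folder that contains target/.
docker build -f Dockerfile -t my-app .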

Linking to multiple subdirectories using :repo_tree

My repository is set up similar to the following:
repo_base
- artwork
- app
- designsystem
- api
Since each of the other folders in the repo (e.g. app, api, designsystem) depend on artwork, I have symlinks in place when running locally. This is working fine, as the path for images in the designsystem subdirectory is something like ../../artwork. When you check out the repository, the entire tree is checked out, so the symlinks are pointing to the correct directory.
However, when I deploy with capistrano, I use :repo_tree to only deploy a portion of the overall monorepo. For example, the deploy.rb script for the designsystem folder looks like:
# config valid for current version and patch releases of Capistrano
lock "~> 3.11.0"
set :application, "designsystem"
set :repo_url, "git@gitlab.com:myuser/mymonorepo"
set :deploy_to, "/var/www/someplace.net/designsystem.someplace.net"
set :deploy_via, "remote_cache_with_project_root"
set :repo_tree, 'designsystem'
set :log_level, :error
before 'deploy:set_current_revision', 'deploy:buildMonolith'
The problem, of course, is that this only ends up deploying the designsystem subdirectory. Thus, the symlinks aren't valid, and are actually skipped in the building (buildMonolith step).
I'm wondering how I might go about having capistrano check out another subdirectory, artwork, and placing it somewhere in the repository source tree.
I was able to solve this by adding a Capistrano task file called assets.rb:
require 'pathname'
##
# Import assets from a top level monorepo directory into the current working
# directory.
#
# When you use :repo_tree to deploy a specific directory of a monorepo, but your
# asset repository is in a different directory, you need to check out this
# top-level directory and add it to the deployment path. For example, if your
# monorepo directory structure looks something like:
#
# - /app
#   - src/
#     - assets -> symlink to ../../assets
# - /assets
# - /api
#
# And you want to deploy /app, the symlink to the upper directory won't exist if
# capistrano is configured to use :repo_tree "app". In order to overcome this,
# this task checks out a specified piece of the larger monorepo (in this case,
# the assets directory), and places it in the deployment directory at a
# specified location.
#
# Configuration:
#   In your deploy/<stage>.rb file, you will need to specify two variables:
#   - :asset_path   - The location within the deployment directory where the
#                     assets should be placed. Relative to the deployment
#                     working directory.
#   - :asset_source - The location of the top-level asset folder in the
#                     monorepo. Relative to the top level of the monorepo
#                     (i.e. the directory that would be used as a deployment
#                     if :repo_tree was not specified).
#
# In the above example, you would specify:
#
#   set :asset_path, "src/assets"
#   set :asset_source, "assets"
#
namespace :deploy do
  desc "Import assets from a top-level monorepo directory"
  task :import_assets do
    on roles(:all) do |host|
      within repo_path do
        final_asset_location = "#{release_path}/#{fetch(:asset_path)}"
        asset_stat_result = capture "stat", "-t", "#{final_asset_location}"
        asset_stat_result = asset_stat_result.split(" ")
        if asset_stat_result[0] == "#{final_asset_location}"
          info "Removing existing asset directory #{final_asset_location}..."
          execute "rm", "-rf", "#{final_asset_location}"
        end
        source_dir = Pathname.new(final_asset_location).parent.to_s
        info "Importing assets to #{source_dir}/#{fetch(:asset_source)}"
        execute "GIT_WORK_TREE=#{source_dir}", :git, "checkout", "#{fetch(:branch)}", "--", "#{fetch(:asset_source)}"
        info "Moving asset directory #{source_dir}/#{fetch(:asset_source)} to #{final_asset_location}..."
        execute :mv, "#{source_dir}/#{fetch(:asset_source)}", "#{final_asset_location}"
      end
    end
  end
end
It would be nice if I could somehow link into the git scm plugin, rather than calling git from the command line directly.
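For completeness, a hedged sketch of wiring the task into a stage file; the hook point and the values are assumptions based on the deploy.rb shown above, not part of the original answer:
# config/deploy/<stage>.rb (illustrative)
set :asset_path, "src/assets"
set :asset_source, "artwork"

# Run the import once the new release has been checked out.
after "deploy:updating", "deploy:import_assets"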

Openshift/Kubernetes: disoriented by the various configuration files

I'm learning OpenShift Origin. In the master container I found a number of config files:
[root@openshift] cd /var/lib/origin
[root@openshift origin]# find . -name *kubeconfig
./openshift.local.config/node-localhost/node.kubeconfig
./openshift.local.config/master/admin.kubeconfig
./openshift.local.config/master/openshift-master.kubeconfig
[root@openshift origin]# find . -name *config.yaml
./openshift.local.config/node-localhost/node-config.yaml
./openshift.local.config/master/master-config.yaml
I also found these files by inspecting the origin container:
$ docker inspect 671fb8df3752 | grep config
"--master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml",
"--node-config=/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml"
"/var/lib/origin/openshift.local.config:/var/lib/origin/openshift.local.config:z",
"Source": "/var/lib/origin/openshift.local.config",
"Destination": "/var/lib/origin/openshift.local.config",
"KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig",
"--master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml",
"--node-config=/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml"
Could you help me schematize/summarize the role and use of each of these files?
Specifically, when executing commands of this type:
oadm policy add-scc-to-group anyuid system:authenticated --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
must they be directed at each of the configurations I have found, or only at a specific one?