Deployment fails using GitHub Actions

This is my first CI/CD attempt using GitHub Actions, and for some reason my deployment keeps failing. My GitHub repo contains the following files:
-rw-rw-r-- 1 ubuntu ubuntu 1.1K Sep 7 21:40 Dockerfile
-rw-rw-r-- 1 ubuntu ubuntu 1.1K Sep 7 18:06 README.md
-rwxrwxr-x 1 ubuntu ubuntu 132 Sep 8 18:09 deploy_to_aws.sh
-rw-rw-r-- 1 ubuntu ubuntu 275 Sep 8 18:05 docker-compose.yml
drwxrwxr-x 6 ubuntu ubuntu 4.0K Sep 7 23:27 flexdashboard
-rwxrwxr-x 1 ubuntu ubuntu 359 Sep 7 19:30 shiny-server.sh
Now I am trying to build, deploy, and run the Shiny application on a cloud instance (the application works when I run it manually on the instance). So I set up this workflow in GitHub Actions:
name: Deploy EC2
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - name: Run a one-line script
        run: echo Hello, world!
      - name: Install SSH key
        uses: shimataro/ssh-key-action@v2
        with:
          key: ${{ secrets.SSH_KEY }}
          name: id_rsa # optional
          known_hosts: ${{ secrets.KNOWN_HOSTS }}
      - name: rsync over ssh
        run: ./deploy_to_aws.sh
Here are the contents of the deploy_to_aws.sh deployment script:
#!/bin/bash
echo 'Starting to Deploy...'
cd Illumina-mRNA-dashboard
docker-compose up -d
echo 'Deployment completed successfully'
I am now getting this error.
Run ./deploy_to_aws.sh
  ./deploy_to_aws.sh
  shell: /bin/bash -e {0}
/home/runner/work/_temp/48d3ea81-a97b-45d1-894f-177e77cb8ae5.sh: line 1: ./deploy_to_aws.sh: No such file or directory
##[error]Process completed with exit code 127.
I don't understand why it keeps telling me that deploy_to_aws.sh doesn't exist.
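A quick way to narrow this down (an illustrative check on my part, not something from the original post) is to list the workspace and inspect the script in the same step before calling it; exit code 127 with "No such file or directory" for a script that is present in the repo often points at Windows (CRLF) line endings in the shebang rather than a genuinely missing file:
      - name: rsync over ssh
        run: |
          # Show what actions/checkout actually placed in the runner workspace
          ls -la
          # "with CRLF line terminators" in this output would explain the error
          file deploy_to_aws.sh
          # Strip carriage returns before executing (only needed if CRLF is the culprit)
          sed -i 's/\r$//' deploy_to_aws.sh
          ./deploy_to_aws.sh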

Related

Java Springboot Deployment using GitHub

I am trying to deploy my Spring Boot app to my Linux VM using GitHub. The deployment itself works, but the GitHub Action never finishes, because the last command it executes keeps running (and is not supposed to be stopped). How can I solve this?
name: Backend Deployment to Linux VM
on:
  push:
    branches:
      - main
jobs:
  build-and-deploy:
    name: Backend Deployment to Linux VM
    runs-on: ubuntu-latest
    steps:
      - name: update and start project
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          script: |
            kill -9 $(lsof -t -i:8080)
            cd /home/github_deploy_backend
            cd backend-P2cinema
            git pull
            mvn clean package
            nohup java -jar target/*.jar &
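The step hangs because the backgrounded java process keeps the SSH session's stdout/stderr open, so the action keeps waiting for them to close. A common way to let the session exit while the app keeps running (a sketch, not something confirmed in the post) is to detach the process from the session's stdio:
          script: |
            kill -9 $(lsof -t -i:8080) || true
            cd /home/github_deploy_backend/backend-P2cinema
            git pull
            mvn clean package
            # Redirecting stdin/stdout/stderr detaches the app from the SSH session,
            # so the GitHub Action can finish while the jar keeps running
            nohup java -jar target/*.jar > app.log 2>&1 < /dev/null &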

Docker-compose with jupyter data-science notebook fails to create a new file

System: Ubuntu 18.04.5 LTS
Docker image: jupyter/datascience-notebook:latest
Docker version:
Client: Docker Engine - Community
  Version: 20.10.2
  API version: 1.40
  Go version: go1.13.15
  Git commit: 2291f61
  Built: Mon Dec 28 16:17:32 2020
  OS/Arch: linux/amd64
  Context: default
  Experimental: true
Server: Docker Engine - Community
  Engine:
    Version: 19.03.13
    API version: 1.40 (minimum version 1.12)
    Go version: go1.13.15
    Git commit: 4484c46d9d
    Built: Wed Sep 16 17:01:06 2020
    OS/Arch: linux/amd64
    Experimental: false
  containerd:
    Version: 1.4.3
    GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
  runc:
    Version: 1.0.0-rc92
    GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
  docker-init:
    Version: 0.18.0
    GitCommit: fec3683
Hi, I am practicing setting up a Jupyter environment on a remote Ubuntu server via docker-compose; below is my config:
Dockerfile-v1
FROM jupyter/datascience-notebook
ARG PYTORCH_VER
RUN ${PYTORCH_VER}
docker-compose-v1.yml
version: "3"
services:
ychuang-pytorch:
env_file: pytorch.env
build:
context: .
dockerfile: Dockerfile-${TAG}
args:
PYTORCH_VER: ${PYTORCH_VERSION}
restart: always
command: jupyter notebook --NotebookApp.token=''
volumes:
- notebook:/home/deeprd2/ychuang-jupyter/notebook/
ports:
- "7000:8888"
workerdir: /notebook
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [gpu]
volumes:
notebook:
pytorch.env
TAG=v1
PYTORCH_VERSION=pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
and the file structure with permissions:
$ tree
├── docker-compose-v1.yml
├── Dockerfile-v1
├── notebook
│   └── test.ipynb
└── pytorch.env
$ ls -l
-rw-rw-r-- 1 deeprd2 deeprd2 984 Jun 22 15:36 docker-compose-v1.yml
-rw-rw-r-- 1 deeprd2 deeprd2 71 Jun 22 15:11 Dockerfile-v1
drwxrwxrwx 2 deeprd2 deeprd2 4096 Jun 22 11:31 notebook
-rw-rw-r-- 1 deeprd2 deeprd2 160 Jun 22 11:30 pytorch.env
After executing docker-compose -f docker-compose-v1.yml --env-file pytorch.env up, the environment came up, but it failed when I tried to open a new notebook, with this error message:
ychuang-pytorch_1 | [I 07:58:22.535 NotebookApp] Creating new notebook in
ychuang-pytorch_1 | [W 07:58:22.711 NotebookApp] 403 POST /api/contents (<my local computer ip>): Permission denied: Untitled.ipynb
ychuang-pytorch_1 | [W 07:58:22.711 NotebookApp] Permission denied: Untitled.ipynb
I am wondering if this is a mounting issue. Any help is appreciated.
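One thing that stands out (an observation and sketch on my part, not a confirmed diagnosis) is that the compose file mounts a named volume at a path the notebook user cannot write to: Docker creates the volume's target directory as root, while the jupyter/datascience-notebook image runs as the unprivileged jovyan user. A common workaround with the jupyter/docker-stacks images is to bind-mount the local notebook folder into jovyan's home and let the start script fix ownership via the documented NB_UID/CHOWN_HOME options, which require starting the container as root:
services:
  ychuang-pytorch:
    # ...
    user: root                          # needed so the start script can chown
    environment:
      NB_UID: "1000"                    # match the host user that owns ./notebook
      CHOWN_HOME: "yes"
    volumes:
      - ./notebook:/home/jovyan/work    # bind mount instead of a named volume
The target path here is jovyan's default work directory in the image, not the custom path from the original file; adjust as needed.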

How to access GitHub variables from a custom action's dockerfile?

I have this workflow yaml file:
name: PHP code review
on: push
jobs:
  phpunit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: PHPUnit
        uses: ./.github/actions/phpunit
This is the .github/actions/phpunit/action.yml file:
name: PHPUnit in Docker
description: Run PHPUnit in a custom configured Docker container
runs:
  using: 'docker'
  image: phpunit.dockerfile
And this is the Dockerfile:
FROM ubuntu
# Update aptitude with new repo
RUN apt-get update
# Install software
RUN apt-get install -y git
# Clone the conf files into the docker container
RUN git clone https://${{ secrets.PHPUNIT_ACCESS_TOKEN }}@github.com/${{ GITHUB_REPOSITORY }}
But in the Dockerfile this is treated as a literal string, not as a variable.
How can I access GitHub variables in my Dockerfile?
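${{ ... }} expressions are only evaluated in workflow and action metadata YAML, not in a Dockerfile, and the image for a Docker container action is built before the job's secrets and context are available. One common pattern (an assumption on my part, not something the post confirms) is to move the clone into an entrypoint script that runs when the action executes, where GITHUB_REPOSITORY is set by the runner and any declared input is exposed as an INPUT_* environment variable:
#!/bin/sh -l
# .github/actions/phpunit/entrypoint.sh (hypothetical helper; COPY it into the
# image and set it as ENTRYPOINT in the Dockerfile instead of cloning at build time)
# INPUT_ACCESS_TOKEN comes from an `access_token` input passed via `with:` in the workflow;
# GITHUB_REPOSITORY is provided by the runner when the action executes.
git clone "https://${INPUT_ACCESS_TOKEN}@github.com/${GITHUB_REPOSITORY}.git" /repo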

CircleCI message "error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

I am facing an error while deploying a deployment in CircleCI. Please find the configuration file below.
When running the kubectl CLI, we get an error between kubectl and the EKS tooling of the aws-cli.
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.3.0
  docker: circleci/docker@0.5.18
  rollbar: rollbar/deploy@1.0.1
  kubernetes: circleci/kubernetes@1.3.0
  deploy:
    version: 2.1
    orbs:
      aws-eks: circleci/aws-eks@1.0.0
      kubernetes: circleci/kubernetes@1.3.0
    executors:
      default:
        description: |
          The version of the circleci/buildpack-deps Docker container to use
          when running commands.
        parameters:
          buildpack-tag:
            type: string
            default: buster
        docker:
          - image: circleci/buildpack-deps:<<parameters.buildpack-tag>>
    description: |
      A collection of tools to deploy changes to AWS EKS in a declarative
      manner where all changes to templates are checked into version control
      before applying them to an EKS cluster.
    commands:
      setup:
        description: |
          Install the gettext-base package into the executor to be able to run
          envsubst for replacing values in template files.
          This command is a prerequisite for all other commands and should not
          have to be run manually.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          git-user-email:
            default: "deploy@mail.com"
            description: Email of the git user to use for making commits
            type: string
          git-user-name:
            default: "CircleCI Deploy Orb"
            description: Name of the git user to use for making commits
            type: string
        steps:
          - run:
              name: install gettext-base
              command: |
                if which envsubst > /dev/null; then
                  echo "envsubst is already installed"
                  exit 0
                fi
                sudo apt-get update
                sudo apt-get install -y gettext-base
          - run:
              name: Setup GitHub access
              command: |
                mkdir -p ~/.ssh
                echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts
                git config --global user.email "<< parameters.git-user-email >>"
                git config --global user.name "<< parameters.git-user-name >>"
          - aws-eks/update-kubeconfig-with-authenticator:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              install-kubectl: true
              authenticator-release-tag: v0.5.1
      update-image:
        description: |
          Generates template files with the specified version tag for the image
          to be updated and subsequently applies that template after checking it
          back into version control.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          image-tag:
            default: ''
            description: |
              The tag of the image, defaults to the value of `CIRCLE_SHA1`
              if not provided.
            type: string
          replicas:
            default: 3
            description: |
              The replica count for the deployment.
            type: integer
          environment:
            default: 'production'
            description: |
              The environment/stage where the template will be applied. Defaults
              to `production`.
            type: string
          template-file-path:
            default: ''
            description: |
              The path to the source template which contains the placeholders
              for the image-tag.
            type: string
          resource-name:
            default: ''
            description: |
              Resource name in the format TYPE/NAME e.g. deployment/nginx.
            type: string
          template-repository:
            default: ''
            description: |
              The fullpath to the repository where templates reside. Write
              access is required to commit generated templates.
            type: string
          template-folder:
            default: 'templates'
            description: |
              The name of the folder where the template-repository is cloned to.
            type: string
          placeholder-name:
            default: IMAGE_TAG
            description: |
              The name of the placeholder environment variable that is to be
              substituted with the image-tag parameter.
            type: string
          cluster-namespace:
            default: sayway
            description: |
              Namespace within the EKS Cluster.
            type: string
        steps:
          - setup:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              git-user-email: dev@sayway.com
              git-user-name: deploy
          - run:
              name: pull template repository
              command: |
                [ "$(ls -A << parameters.template-folder >>)" ] && \
                cd << parameters.template-folder >> && git pull --force && cd ..
                [ "$(ls -A << parameters.template-folder >>)" ] || \
                git clone << parameters.template-repository >> << parameters.template-folder >>
          - run:
              name: generate and commit template files
              command: |
                cd << parameters.template-folder >>
                IMAGE_TAG="<< parameters.image-tag >>"
                ./bin/generate.sh --file << parameters.template-file-path >> \
                  --stage << parameters.environment >> \
                  --commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  << parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \
                  REPLICAS=<< parameters.replicas >>
          - kubernetes/create-or-update-resource:
              get-rollout-status: true
              namespace: << parameters.cluster-namespace >>
              resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >>
              resource-name: << parameters.resource-name >>
jobs:
  test:
    working_directory: ~/say-way/core
    parallelism: 1
    shell: /bin/bash --login
    environment:
      CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
      CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
      KONFIG_CITUS__HOST: localhost
      KONFIG_CITUS__USER: postgres
      KONFIG_CITUS__DATABASE: sayway_test
      KONFIG_CITUS__PASSWORD: ""
      KONFIG_SPEC_REPORTER: true
    docker:
      - image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest
        aws_auth:
          aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING
          aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING
      - image: circleci/redis
      - image: rabbitmq:3.7.7
      - image: circleci/mongo:4.2
      - image: circleci/postgres:10.5-alpine
    steps:
      - checkout
      - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
      # This is based on your 1.0 configuration file or project settings
      - restore_cache:
          keys:
            - v1-dep-{{ checksum "Gemfile.lock" }}-
            # any recent Gemfile.lock
            - v1-dep-
      - run:
          name: install correct bundler version
          command: |
            export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')"
            echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV
            gem install bundler --version $BUNDLER_VERSION
      - run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3'
      - run:
          name: copy test.yml.sample to test.yml
          command: cp config/test.yml.sample config/test.yml
      - run:
          name: Precompile and clean assets
          command: bundle exec rake assets:precompile assets:clean
      # Save dependency cache
      - save_cache:
          key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }}
          paths:
            - vendor/bundle
            - public/assets
      - run:
          name: Audit bundle for known security vulnerabilities
          command: bundle exec bundle-audit check --update
      - run:
          name: Setup Database
          command: bundle exec ruby ~/sayway/setup_test_db.rb
      - run:
          name: Migrate Database
          command: bundle exec rake db:citus:migrate
      - run:
          name: Run tests
          command: bundle exec rails test -f
      # By default, running "rails test" won't run system tests.
      - run:
          name: Run system tests
          command: bundle exec rails test:system
      # Save test results
      - store_test_results:
          path: /tmp/circleci-test-results
      # Save artifacts
      - store_artifacts:
          path: /tmp/circleci-artifacts
      - store_artifacts:
          path: /tmp/circleci-test-results
  build-and-push-image:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: aws-ecr/default
    steps:
      - checkout
      - run:
          name: Pull latest core images for cache
          command: |
            $(aws ecr get-login --no-include-email --region $AWS_REGION)
            docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - docker/build:
          image: core
          registry: "${AWS_ECR_ACCOUNT_URL}"
          tag: "latest,${CIRCLE_SHA1}"
          cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - aws-ecr/push-image:
          repo: core
          tag: "latest,${CIRCLE_SHA1}"
  deploy-production:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: report
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 3
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 4
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
  deploy-demo:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: demo
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 2
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
workflows:
  version: 2.1
  build-n-test:
    jobs:
      - test:
          filters:
            branches:
              ignore: master
  build-approve-deploy:
    jobs:
      - build-and-push-image:
          context: Core
          filters:
            branches:
              only: master
      - approve-report-deploy:
          type: approval
          requires:
            - build-and-push-image
      - approve-demo-deploy:
          type: approval
          requires:
            - build-and-push-image
      - deploy-production:
          context: Core
          requires:
            - approve-report-deploy
      - deploy-demo:
          context: Core
          requires:
            - approve-demo-deploy
There is a known issue in aws-cli; it has already been fixed upstream.
Option 1:
In my case, updating aws-cli + updating the ~/.kube/config helped.
Update aws-cli (following the documentation)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
Update the kube configuration
mv ~/.kube/config ~/.kube/config.bk
aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER_NAME}
Option 2:
Change v1alpha1 to v1beta1:
diff ~/.kube/config ~/.kube/config-backup
691c691
< apiVersion: client.authentication.k8s.io/v1beta1
---
> apiVersion: client.authentication.k8s.io/v1alpha1
There is a fix here: https://github.com/aws/aws-cli/issues/6920#issuecomment-1119926885
Update the aws-cli (aws cli v1) to the version with the fix:
pip3 install awscli --upgrade --user
For aws cli v2 see this.
After that, don't forget to rewrite the kube-config with:
aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
This command should update the kube apiVersion to v1beta1
In my case, changing apiVersion to v1beta1 in the kube configuration file helped:
apiVersion: client.authentication.k8s.io/v1beta1
There is a glitch with the very latest version of kubectl.
For now, you can follow these steps to get rid of the issue:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo kubectl version
There is a problem with the latest kubectl and the aws-cli:
https://github.com/aws/aws-cli/issues/6920
An alternative is to update the AWS cli. It worked for me.
The rest of the instructions are from the answer provided by bigLucas.
Update the aws-cli (aws cli v2) to the latest version:
winget install Amazon.AWSCLI
After that, don't forget to rewrite the kube-config with:
aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
This command should update the kube apiVersion to v1beta1.
I changed the v1alpha1 value to v1beta1 in the configuration file, and it is working for me.
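For reference, the section of ~/.kube/config this refers to looks roughly like the following (the cluster ARN, region, and name here are placeholders):
users:
- name: arn:aws:eks:eu-central-1:123456789012:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # was v1alpha1
      command: aws
      args:
        - --region
        - eu-central-1
        - eks
        - get-token
        - --cluster-name
        - my-cluster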
The simplest solution (it appears in other answers here, but in more complicated words):
Open your kube config file and replace all alpha instances with beta.
(Editors with find & replace are recommended: Atom, Sublime, etc.)
Example with Nano:
nano ~/.kube/config
Or with Atom:
atom ~/.kube/config
Then search for the alpha instances, replace them with beta, and save the file.
Open ~/.kube/config, search for the user of the cluster you have a problem with, and replace client.authentication.k8s.io/v1alpha1 with client.authentication.k8s.io/v1beta1.
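If you prefer not to edit the file by hand, the same replacement can be done in one line (GNU sed; back up the file first, and adjust the path if your kubeconfig lives elsewhere):
cp ~/.kube/config ~/.kube/config.bak
sed -i 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#g' ~/.kube/config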
I was facing the same issue. To solve it, please follow the steps below:
Take a backup of the existing config file: mv ~/.kube/config ~/.kube/config.bk
Run the command below:
aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}
Then open the ~/.kube/config file in any text editor, update v1alpha1 to v1beta1, and try again.
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file containing:
kubectl 1.21.9
I was able to fix this on a MacBook Pro (M1 chip) by running (Homebrew):
brew upgrade awscli
Try upgrading the AWS Command Line Interface:
Steps
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg ./AWSCLIV2.pkg -target /
You can use other ways from the AWS documentation: Installing or updating the latest version of the AWS CLI
Try updating your awscli (AWS Command Line Interface) version.
For Mac, it's brew upgrade awscli (Homebrew).
I got the same problem:
EKS version 1.22
kubectl works, and its version is v1.22.15-eks-fb459a0
helm version is 3.9+; when I execute helm ls -n $namespace I get the error
Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
From here: it is a helm version issue. So I used the command
curl -L https://git.io/get_helm.sh | bash -s -- --version v3.8.2
to downgrade the helm version, and helm works.
Fixed for me with only a change in the kubeconfig: v1alpha1 to v1beta1.
In the case of Windows, first delete the configuration file in the $HOME/.kube folder.
Then run the aws eks update-kubeconfig --name command as suggested by bigLucas.
I just simplified the workaround by updating awscli to awscli-v2, but that also requires Python and pip to be upgraded; it requires at minimum Python 3.6 and pip3.
apt install python3-pip -y && pip3 install awscli --upgrade --user
And then update the cluster configuration with awscli
aws eks update-kubeconfig --region <regionname> --name <ClusterName>
Output:
Added new context arn:aws:eks:us-east-1:XXXXXXXXXXX:cluster/mycluster to /home/dev/.kube/config
Then check connectivity with the cluster:
dev@ip-10-100-100-6:~$ kubectl get node
NAME                          STATUS   ROLES    AGE    VERSION
ip-X-XX-XX-XXX.ec2.internal   Ready    <none>   148m   v1.21.5-eks-9017834
You can run the command below on the host machine where kubectl and the aws-cli exist:
export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'
If you are using sudo to run kubectl commands, export this as the root user.
apt install python3-pip -y
pip3 install awscli --upgrade --user
Try a different version of kubectl. If the Kubernetes version is 1.23, then you can use a nearby kubectl version: 1.22, 1.23, or 1.24.
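A minimal sketch of that approach, with the version numbers purely as examples:
# Check the client and server versions to decide which kubectl to install
kubectl version --short
# Install a specific kubectl release (v1.23.6 here) from the official download URL
curl -LO "https://dl.k8s.io/release/v1.23.6/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl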

How do I set the container user in an Azure DevOps pipeline YAML?

I have this YAML file that I am using with Azure DevOps to build a project on GitHub:
resources:
  containers:
    - container: fedora-29
      image: fedora:29
      options: --user 0
jobs:
  - job: RunInContainer
    pool:
      vmImage: 'Ubuntu-16.04'
    strategy:
      matrix:
        fedora-29:
          containerResource: fedora-29
    container: $[ variables['containerResource'] ]
    steps:
      - bash: |
          dnf install 'dnf-command(copr)' -y
        timeoutInMinutes: 30
This YAML file uses a fedora:29 container. Fedora 29 doesn't include sudo out of the box and uses a non-root user by default.
So the dnf command fails:
Error: This command has to be run under the root user.
If I add a sudo before dnf, the error is:
line 1: sudo: command not found
Is there a way to do the equivalent of the USER instruction in a Dockerfile via the YAML file? Or can I pass the --user 0 flag to docker run somehow?
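One way to sidestep the question of how the agent maps users inside the container (a fallback sketch on my part, not something from the post, using a hypothetical image name) is to bake the prerequisites into a derived image while the build still runs as root, and point the container resource at that image:
# Dockerfile for a hypothetical myorg/fedora29-build image
FROM fedora:29
# Installed at build time as root, so the pipeline steps can use sudo later
RUN dnf install -y sudo 'dnf-command(copr)'
Then reference image: myorg/fedora29-build in the resources.containers entry instead of fedora:29.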