So I replicated this project, which uses Swift as a custom Lambda runtime with a makefile as the build method.
Now I have created an AWS CodePipeline that packages my project in CodeBuild using sam package and finally deploys it via CloudFormation.
The CodeUri of my Lambda is set to the root folder, as you can see in the repo I linked above. I think that is how it should be, since I saw the same setup in the SAM documentation under the custom runtime section. The problem is that sam package then packages my entire project, and Lambda complains at deploy time that the zip is too large.
How would I set up the makefile and the template.yml so that sam package only packages my Lambdas?
So I got it to work with a slightly different strategy. This is for anyone who finds themselves in the same situation.
1. Don't use sam to build your lambda functions.
I am running a set of shell scripts in the /scripts folder to initiate the Swift build.
.
├── Package.resolved
├── Package.swift
├── README.md
├── Sources
│   └── YourFirstLambda
│       ├── main.swift
│       └── requirements.txt
├── buildspec.yml
├── samconfig.toml
├── scripts
│   ├── build-and-package-all.sh
│   ├── build-and-package.sh
│   └── package.sh
└── template.yml
build-and-package-all.sh
Start this shell script from inside the scripts folder. You can change this behavior by adjusting the directory paths.
It initiates the build-and-package.sh script for each function defined in the lambdas array.
#!/bin/bash
declare -a lambdas=("YourFirstLambda" "YourSecondLambda")
workspace="$(pwd)/.."

## now loop through the above array
if [ -f /.dockerenv ]; then
    # This branch runs if we are already inside a Docker container
    echo "I'm inside the matrix ;("
    for lambda in "${lambdas[@]}"
    do
        # Second parameter tells build-and-package.sh whether we are on bare metal (TRUE) or not (FALSE)
        ./build-and-package.sh "$lambda" "FALSE"
    done
else
    echo "I'm living in the real world!"
    for lambda in "${lambdas[@]}"
    do
        # Second parameter tells build-and-package.sh whether we are on bare metal (TRUE) or not (FALSE)
        ./build-and-package.sh "$lambda" "TRUE"
    done
fi
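A typical invocation, given the folder layout above, looks like this:
cd scripts
./build-and-package-all.sh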
build-and-package.sh
This script runs swift build and package.sh inside a Docker container if build-and-package-all.sh is executed on a bare-metal machine. This is useful because you can run it on a machine that does not have Swift installed.
On the other hand, we run swift build directly on the machine if we are already inside a Docker container. This can be the case, as it was for me, when you build your functions with AWS CodeBuild: CodeBuild already runs in a Docker container, so there is no need to start a Docker container inside a Docker container.
#!/bin/bash
set -eu

executable=$1
isBareMetal=$2
workspace="$(pwd)/.."

if [ "$isBareMetal" = "TRUE" ]; then
    echo "-------------------------------------------------------------------------"
    echo "building \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    # build inside the codebuild-swift container so the host does not need Swift
    docker run --rm -v "$workspace":/workspace -w /workspace/ codebuild-swift \
        bash -cl "swift build --product $executable -c release"
    echo "done"
    echo "-------------------------------------------------------------------------"
    echo "packaging \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    docker run --rm -v "$workspace":/workspace -w /workspace/ codebuild-swift \
        bash -cl "sh scripts/package.sh $executable"
    echo "done"
else
    echo "-------------------------------------------------------------------------"
    echo "building \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    cd "$workspace"
    swift build --product "$executable" -c release
    echo "done"
    echo "-------------------------------------------------------------------------"
    echo "packaging \"$executable\" lambda"
    echo "-------------------------------------------------------------------------"
    sh "$workspace/scripts/package.sh" "$executable"
    echo "done"
fi
Finally, we package the Swift Lambda into a .zip.
package.sh
set -eu

executable=$1
target=".build/lambda/$executable"

rm -rf "$target"
mkdir -p "$target"
# copy the compiled binary
cp ".build/release/$executable" "$target/"
# add the Swift shared libraries the binary depends on, based on ldd
ldd ".build/release/$executable" | grep swift | cut -d' ' -f3 | xargs cp -Lv -t "$target"
cd "$target"
# the custom runtime expects an executable named "bootstrap"
ln -s "$executable" "bootstrap"
zip --symlinks lambda.zip *
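To sanity-check the archive, you can list its contents; it should contain the executable, the Swift shared libraries picked up by ldd, and the bootstrap symlink:
unzip -l .build/lambda/YourFirstLambda/lambda.zip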
2. Tell sam where to find the zipped lambda
In the template.yml you should have a section that describes your lambda like so:
...
  YourLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 5
      Handler: Provided
      Runtime: provided
      MemorySize: 128
      Description: Test Lambda
      Role: !GetAtt Role.Arn
      CodeUri: .build/lambda/YourLambdaFunction/lambda.zip
...
You can now use sam build, sam deploy, or sam package. SAM will only upload the zipped Lambda, which should be in the 30 MB range, and probably less for you if you do not have many dependencies.
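For reference, a minimal buildspec.yml sketch for CodeBuild could look like the following. This assumes CodeBuild runs the codebuild-swift image from the side note below (so Swift is available and the scripts skip the nested docker run) and that the sam CLI is installed in that image; the bucket name is a placeholder and my real file may differ:
version: 0.2
phases:
  build:
    commands:
      # build and zip all lambdas, then let sam upload only the zips referenced by CodeUri
      - cd scripts && ./build-and-package-all.sh && cd ..
      - sam package --template-file template.yml --s3-bucket <YOUR_ARTIFACT_BUCKET> --output-template-file packaged.yml
artifacts:
  files:
    - packaged.yml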
Side note.
You will need a Docker image that has Swift installed. My image is tagged codebuild-swift and uses the following Dockerfile. If you name your image differently, you then have to update build-and-package.sh accordingly:
FROM swift:5.2-amazonlinux2

RUN yum -y install \
    git \
    libuuid-devel \
    libicu-devel \
    libedit-devel \
    libxml2-devel \
    sqlite-devel \
    python-devel \
    ncurses-devel \
    curl-devel \
    openssl-devel \
    tzdata \
    libtool \
    gcc-c++ \
    jq \
    tar \
    zip \
    glibc-static
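Build and tag the image once before running the scripts (assuming the Dockerfile above sits in the current directory; the tag must match the one used in build-and-package.sh):
docker build -t codebuild-swift .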
The shell scripts above are all based on this site:
Getting started with the Swift AWS Lambda runtime
Related
I have the following Dockerfile for an Elixir + Phoenix app:
FROM elixir:latest as build_base
RUN apt-get -y update
RUN apt-get -y install inotify-tools curl
ARG TARGETARCH
RUN if [ ${TARGETARCH} = arm64 ]; then \
curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-${TARGETARCH}.tar.gz \
;else \
curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-x64.tar.gz \
;fi
RUN tar -xvf /tmp/dart-sass.tar.gz -C /tmp
RUN mv /tmp/dart-sass/sass /usr/local/bin/sass
RUN mkdir -p /app
WORKDIR /app
COPY mix.* ./
RUN mix local.hex --force
RUN mix archive.install hex phx_new --force
RUN mix local.rebar --force
RUN mix deps.clean --all
RUN mix deps.get
RUN mix --version
RUN mix deps.compile
COPY assets assets
COPY vendor vendor
COPY lib lib
COPY config config
COPY priv priv
COPY test test
RUN mix compile
The docker-compose file looks like the following:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
    ports:
      - "80:80"
    command: mix phx.server
I'm trying to run docker-compose as part of the build step in Buildkite. This is an extract of the step in Buildkite:
- label: "run web"
  key: "web"
  commands:
    - mix phx.server
  plugins:
    - docker-compose#v4.9.0:
        run: web
        config: docker-compose.yml
However, when running web I see everything happening properly, including the package installation; yet when the application starts I see the following error:
web_1 | Unchecked dependencies for environment dev:
web_1 | * telemetry_metrics (Hex package)
web_1 | the dependency is not available, run "mix deps.get"
and the list goes on and on. This works fine on my local machine; it only happens when running on Buildkite. Does anyone have any idea how to fix this?
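A possible direction (an assumption on my part, not something confirmed in this thread): the bind mount ./:/app replaces the /app directory that was built into the image, so the deps and _build directories compiled during docker build are hidden at runtime; locally they usually exist in the working copy anyway, which would explain why it only fails on Buildkite. A sketch of the usual workaround, keeping the image's copies visible via anonymous volumes:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
      - /app/deps     # keep the deps compiled into the image
      - /app/_build   # keep the build artifacts compiled into the image
    ports:
      - "80:80"
    command: mix phx.server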
I'm using GitHub Actions to implement a CI pipeline in my project. Currently, I'm trying to use actions/cache@v2 to cache the yarn cache dir to improve the pipeline time. Unfortunately, every time actions/cache@v2 runs I get an error in the post-job step saying: /bin/tar: unrecognized option: posix. The complete log is:
Post job cleanup.
/usr/bin/docker exec 4decc52e7744d9ab2e81bb24c99a830acc848912515ef1e86fbb9b8d5049c9cf sh -c "cat /etc/*release | grep ^ID"
/bin/tar --posix -z -cf cache.tgz -P -C /__w/open-tuna-api/open-tuna-api --files-from manifest.txt
/bin/tar: unrecognized option: posix
BusyBox v1.31.1 () multi-call binary.
Usage: tar c|x|t [-ZzJjahmvokO] [-f TARFILE] [-C DIR] [-T FILE] [-X FILE] [--exclude PATTERN]... [FILE]...
Create, extract, or list files from a tar file
c Create
x Extract
t List
-f FILE Name of TARFILE ('-' for stdin/out)
-C DIR Change to DIR before operation
-v Verbose
-O Extract to stdout
-m Don't restore mtime
-o Don't restore user:group
-k Don't replace existing files
-Z (De)compress using compress
-z (De)compress using gzip
-J (De)compress using xz
-j (De)compress using bzip2
-a (De)compress using lzma
-h Follow symlinks
-T FILE File with names to include
-X FILE File with glob patterns to exclude
--exclude PATTERN Glob pattern to exclude
Warning: Tar failed with error: The process '/bin/tar' failed with exit code 1
I'm following the example from the official cache action repository. Here is a snippet of my CI.yml:
# Configure cache
- name: Get yarn cache directory path
  id: yarn-cache-dir-path
  run: echo "::set-output name=dir::$(yarn cache dir)"
- uses: actions/cache@v2
  id: yarn-cache # use this to check for `cache-hit` (`steps.yarn-cache.outputs.cache-hit != 'true'`)
  with:
    path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
    key: ${{ runner.os }}-yarn-${{ hashFiles('yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-yarn-
Because of the above error, the cache is not created and the pipeline time is not improved. I've tried changing the hashFiles expression and the entire key, but no success.
My question is: am I making some mistake in the use of the cache action? Can anyone help me with this issue? Thanks.
Your problem is that you're running inside an Alpine Linux-based container. Alpine Linux is designed for small size, and as a result it replaces many of the standard GNU utilities with those from busybox, a multi-call binary. Your version of tar is one of those.
The actions/cache@v2 action uses tar --posix, which tells tar to create a standard pax-format archive. pax archives are a form of tar archive that can handle arbitrarily long filenames, huge file sizes, and other types of metadata that plain tar archives cannot. This format is specified by POSIX and is a better choice than GNU tar-style archives because it works across a variety of systems and is defined by a standard rather than by what one implementation happens to do, in addition to being more featureful.
However, the version of tar shipped as part of busybox doesn't support the --posix option, and as a result this command fails. If you want to use the actions/cache@v2 GitHub Action, then you need to provide a version of GNU or BSD (libarchive) tar earlier in your PATH before running it, so that it is used instead of busybox's.
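For example, if the job runs in an Alpine-based container, one common approach (a sketch, assuming apk's tar package provides GNU tar and that /usr/bin precedes /bin in PATH, which is the default) is to install GNU tar in a step before the cache action runs:
- name: Install GNU tar
  run: apk add --no-cache tar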
I am trying to get the Scaleway CLI installed as part of a Docker image I'm building to run Azure Pipelines.
My Dockerfile looks like this:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        jq \
        git \
        iputils-ping \
        libcurl4 \
        libicu60 \
        libunwind8 \
        netcat \
        docker.io \
        s3cmd
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
# Add private key for SSH connections
COPY ./config/id_rsa $HOME/.ssh/id_rsa
# Config s3cmd
COPY ./config/.s3cfg $HOME/.s3cfg
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
The key section being:
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
The config.yaml file referenced above looks like the following (minus the real values of course):
access_key: <key>
secret_key: <secret>
default_organization_id: <orgId>
default_project_id: <projectId>
default_region: nl-ams
default_zone: nl-ams-1
However, when it executes RUN scw init, the output is Invalid email or secret-key: ''
I have tried without running scw init at all, but then calls to scw fail, saying
Access key is required
Details: Access_key can be initialised using the command "scw init".
After initialisation, there are three ways to provide access_key:
with the Scaleway config file, in the access_key key: /root/.config/scw/config.yaml;
with the SCW_ACCESS_KEY environement variable;
Note that the last method has the highest priority.
More info:
https://github.com/scaleway/scaleway-sdk-go/tree/master/scw#scaleway-config
Hint: You can get your credentials here:
https://console.scaleway.com/account/credentials
Which admittedly is one of the better error messages I've seen, but nonetheless has not helped me. I am going to try the Environment Variable approach, which I suspect may do the trick, but I'd still like to know what I'm doing wrong with this config.yaml file.
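For reference, the environment-variable route I mentioned would look roughly like this in the Dockerfile. This is a rough sketch: the SCW_* variable names come from the scaleway-sdk-go configuration docs linked in the error output, the values are placeholders, and baking real secrets into an image is something I would rather avoid in the final setup:
# Placeholder Scaleway credentials passed via environment variables
# (hypothetical sketch; prefer injecting these at build/run time)
ENV SCW_ACCESS_KEY=<key> \
    SCW_SECRET_KEY=<secret> \
    SCW_DEFAULT_ORGANIZATION_ID=<orgId> \
    SCW_DEFAULT_PROJECT_ID=<projectId> \
    SCW_DEFAULT_REGION=nl-ams \
    SCW_DEFAULT_ZONE=nl-ams-1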
Lastly... someone with more rep than me needs to create the tag "scaleway". Hard to reference the actual technology in question when the tag doesn't exist.
I have a GitHub project with a Hugo-based web site in it. Whenever someone pushes something to the prod branch, I want to build the Hugo page (transform Markdown files to HTML) and upload it to my hosting provider. I have problems building the page.
I have this script in GitHub Actions:
name: Publish prod branch
on:
  push:
    branches:
      - prod
jobs:
  build:
    name: Greeting
    runs-on: ubuntu-latest
    steps:
      - name: Hello world
        uses: actions/hello-world-javascript-action@v1
        with:
          who-to-greet: Dmitrii
        id: hello
      - name: Echo the greeting's time
        run: echo 'The time was ${{ steps.hello.outputs.time }}.'
      - name: Build Hugo
        uses: srt32/hugo-action@master
It fails because it does not find the configuration file config.toml, even though it is there:
/usr/bin/docker run --name e87b520e21a5125f094485b4e030650bd57153_f8bc76 --label e87b52 --workdir /github/workspace --rm -e HOME -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/hugo-wp-site/hugo-wp-site":"/github/workspace" e87b52:0e21a5125f094485b4e030650bd57153
Error: Unable to locate config file or config directory. Perhaps you need to create a new site.
#################################################
Run `hugo help new` for details.
Starting the Hugo Action
Total in 0 ms
How can I fix it, i.e., make Hugo see my config.toml file?
Update 1: I tried to find out the version of Hugo being used by modifying the script as follows:
name: Publish prod branch
on:
  push:
    branches:
      - prod
jobs:
  build:
    name: Build and publish web site to hosting provider
    runs-on: ubuntu-latest
    steps:
      - name: Hello world
        uses: actions/hello-world-javascript-action@v1
        with:
          who-to-greet: Dmitrii
        id: hello
      - name: Echo the greeting's time
        run: echo 'The time was ${{ steps.hello.outputs.time }}.'
      - name: Output the version of Hugo
        run: hugo version
      - name: Build Hugo
        uses: srt32/hugo-action@master
But when I run it, I get the following error:
hugo version
shell: /bin/bash -e {0}
/home/runner/work/_temp/9e57960c-2f2c-4f2a-870c-c1cbc41d820f.sh: line 1: hugo: command not found
##[error]Process completed with exit code 127.
Update 2: Found out the version of Hugo in the output:
(7/7) Installing hugo (0.61.0-r0)
Update 3: The earliest Hugo version that may have issue 6794 fixed is v0.64.0, because the fix was merged on January 31st and v0.64.0 is the first version that came out after that day.
Update 4: It seems that in order to fix this error, I need to make sure that the Hugo action uses a more recent version of Hugo. To achieve this, I changed the Dockerfile so that version 0.65.3-r0 is installed (according to this answer):
RUN apk add --no-cache hugo=0.65.3-r0 bash
But when I run the script, Alpine Linux fails to install Hugo:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
hugo-0.61.0-r0:
breaks: world[hugo=0.65.3-r0]
The command '/bin/sh -c apk add --no-cache hugo=0.65.3-r0 bash' returned a non-zero code: 1
##[warning]Docker build failed with exit code 1, back off 9.558 seconds before retry.
/usr/bin/docker build -t e87b52:dfe904e1240c4dbea120e452e5568b51 "/home/runner/work/_actions/dpisarenko/hugo-action/master"
Sending build context to Docker daemon 7.168kB
Any help on how to fix this is highly appreciated.
Update 5: After changing the section for installation of Hugo to
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk update
RUN apk add --no-cache hugo=0.65.3-r0 bash
the action installs a more recent version of Hugo:
Step 10/13 : RUN apk add --no-cache hugo=0.65.3-r0 bash
---> Running in 633b06ba9a65
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
(1/7) Installing ncurses-terminfo-base (6.1_p20200118-r2)
(2/7) Installing ncurses-libs (6.1_p20200118-r2)
(3/7) Installing readline (8.0.1-r0)
(4/7) Installing bash (5.0.11-r1)
Executing bash-5.0.11-r1.post-install
(5/7) Installing libgcc (9.2.0-r3)
(6/7) Installing libstdc++ (9.2.0-r3)
(7/7) Installing hugo (0.65.3-r0)
But I still get the same error:
Run dpisarenko/hugo-action@master
/usr/bin/docker run --name e87b52fba2a6bbd65d4e86b03264ae4ae92e94_cbeaf6 --label e87b52 --workdir /github/workspace --rm -e HOME -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/hugo-wp-site/hugo-wp-site":"/github/workspace" e87b52:fba2a6bbd65d4e86b03264ae4ae92e94
#################################################
Starting the Hugo Action
Error: Unable to locate config file or config directory. Perhaps you need to create a new site.
Run `hugo help new` for details.
Update 6: I added the commands pwd and ls -al into the file entrypoint.sh in which hugo is being called:
echo "pwd:"
pwd
echo "ls -al:"
ls -al
hugo "$@"
Here is its output:
Starting the Hugo Action
pwd:
/github/workspace
ls -al:
total 8
drwxr-xr-x 2 1001 115 4096 Mar 15 17:39 .
drwxr-xr-x 5 root root 4096 Mar 15 17:39 ..
Error: Unable to locate config file or config directory. Perhaps you need to create a new site.
Run `hugo help new` for details.
It seems that the action tries to run hugo inside the directory /github/workspace which is empty.
My next step is to find out in which directory the contents of my git branch are located.
Update 7: I tried to output the contents of the directories
/home/runner/work/_temp/_github_home,
/github/home,
/home/runner/work/_temp/_github_workflow,
/github/workflow,
/home/runner/work/hugo-wp-site/hugo-wp-site, and
/github/workspace
in entrypoint.sh, but none of them contains my Hugo code.
Update 8: I added the following line to entrypoint.sh to find the directory with Hugo sources:
find / -name "*archetypes*"
All Hugo projects contain that directory.
But find did not find anything. It looks like Docker of the GitHub action is running in the wrong directory.
This is based on srt32/hugo-action, which possibly uses an older version of Hugo.
First check the Hugo version to see if issue 6794 applies (it was fixed in January 2020 with PR 6834).
It seems that the Hugo code was not checked out at all. Therefore the solution is to modify the GitHub action so that
git is installed in the Dockerized Linux and
the Hugo source code is checked out.
To do the former, the Dockerfile needs to be modified as shown below (see RUN apk add --no-cache git):
FROM alpine:latest
LABEL "com.github.actions.name"="Hugo Actions"
LABEL "com.github.actions.description"="Commands to help with building Hugo based static sites"
LABEL "com.github.actions.icon"="mic"
LABEL "com.github.actions.color"="yellow"
LABEL "repository"="http://github.com/dpisarenko/hugo-action"
LABEL "homepage"="http://github.com/dpisarenko/hugo-action"
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN apk update
RUN apk add --no-cache hugo=0.65.3-r0 bash
RUN apk add --no-cache bash
RUN apk add --no-cache git
ADD entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then we need to call git clone in entrypoint.sh:
#!/bin/bash
set -e
echo "#################################################"
echo "Starting the Hugo Action"
git clone --branch prod https://github.com/dpisarenko/hugo-wp-site.git /hugo
cd /hugo
hugo "$@"
echo "#################################################"
echo "Completed the Hugo Action"
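As an alternative I did not use here (an assumption on my part, not part of the original action): the stock way to populate /github/workspace is to add an actions/checkout step before the Hugo step, which avoids cloning inside entrypoint.sh:
steps:
  - uses: actions/checkout@v2
  - name: Build Hugo
    uses: dpisarenko/hugo-action@master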
Is there a way to specify a script as a kernel parameter during PXE boot? I want to run a bunch of computers as workers. I want them to use PXE to boot Alpine Linux and then run a bash script that will load my app and join my cluster.
Change dir:
cd /tmp
Create directory structure:
.
└── etc
    ├── init.d
    │   └── local.stop
    └── runlevels
        └── default
            └── local.stop -> /etc/init.d/local.stop
mkdir -p ./etc/{init.d,runlevels/default}/
Create file ./etc/init.d/local.stop:
#!/sbin/openrc-run
start () {
    wget http://172.16.11.8/share/video.mp4 -O /root/video.mp4
}
chmod +x ./etc/init.d/local.stop
cd /tmp/etc/runlevels/default
Make symlink:
ln -s /etc/init.d/local.stop local.stop
Go back:
cd /tmp
Create archive:
tar -czvf alpine-test-01.tar.gz ./etc/
Make a pxelinux menu entry (on your TFTP server):
label install-alpine
  menu label Install Alpine Linux [test]
  kernel alpine-installer/boot/vmlinuz-lts
  initrd alpine-installer/boot/initramfs-lts
  append ip=dhcp alpine_repo=https://dl-cdn.alpinelinux.org/alpine/latest-stable/main modloop=https://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/netboot/modloop-lts modules=loop,squashfs,sd-mod,usb-storage apkovl=http://{YOUR_WEBSERVER}/{YOUR_DIR}/alpine-test-01.tar.gz
And boot a machine over PXE. My webserver log shows the local.stop service downloading the file:
10.10.15.43 172.16.11.8 - [27/Aug/2021:01:15:22 +0300] "GET /share/video.mp4 HTTP/1.1" 200 5853379 "-" "Wget"
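To adapt this to the original question (load your app and join your cluster instead of downloading a demo video), the start() function in local.stop could fetch and run your own bootstrap script; a hypothetical sketch, with the URL and script name as placeholders:
#!/sbin/openrc-run
start () {
    # fetch the worker bootstrap script from your web server and execute it
    wget http://{YOUR_WEBSERVER}/{YOUR_DIR}/bootstrap-worker.sh -O /root/bootstrap-worker.sh
    sh /root/bootstrap-worker.sh
}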