gcloud install kubectl fails on MacBook M1 - kubernetes

I modified the following Dockerfile to use ARM binaries so that it works on my M1 MacBook Pro; the original works fine on a MacBook Pro i5.
FROM --platform=linux/arm64/v8 alpine:latest

RUN apk --no-cache add \
    ack~3 \
    bash~5 \
    curl~7 \
    htop~3 \
    jq~1.6 \
    make~4.3 \
    nano~5 \
    python3~3 \
    tree~1.8 \
    util-linux~2

ARG CLOUD_SDK_VERSION=367.0.0
ENV PATH /google-cloud-sdk/bin:$PATH

RUN curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${CLOUD_SDK_VERSION}-darwin-arm.tar.gz && \
    tar xzf google-cloud-sdk-${CLOUD_SDK_VERSION}-darwin-arm.tar.gz && \
    rm google-cloud-sdk-${CLOUD_SDK_VERSION}-darwin-arm.tar.gz && \
    gcloud components list && \
    gcloud components install kubectl
The last step gcloud components install kubectl fails with the following error.
WARNING: The platform specific binary does not exist for components
[kubectl].
ERROR: (gcloud.components.install) The following components
are unknown [kubectl].

I changed the platform to amd64 and it worked!
FROM --platform=linux/amd64 alpine:latest
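If you prefer to keep the FROM line generic, the target platform can also be forced at build time instead (a small sketch, assuming BuildKit/buildx is available; gcloud-sdk is just an example tag):
# Build the image as amd64 even on an Apple Silicon host; it runs under emulation.
docker build --platform linux/amd64 -t gcloud-sdk .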

Related

New Relic PHP Agent Kubernetes (GKE)

Could you please advise on how to set up permissions in the Dockerfile so that the www-data user can start the PHP agent inside the Docker container running on GKE?
FROM php:7.4-fpm as test
RUN \
curl -L https://download.newrelic.com/php_agent/release/newrelic-php5-10.1.0.313-linux.tar.gz | tar -C /tmp -zx && \
export NR_INSTALL_USE_CP_NOT_LN=1 && \
export NR_INSTALL_SILENT=1 && \
/tmp/newrelic-php5-*/newrelic-install install && \
rm -rf /tmp/newrelic-php5-* /tmp/nrinstall* && \
sed -i \
-e 's/"REPLACE_WITH_REAL_KEY"/"My-Key"/' \
-e 's/newrelic.appname = "PHP Application"/newrelic.appname = "test"/' \
-e 's/;newrelic.daemon.app_connect_timeout =.*/newrelic.daemon.app_connect_timeout=15s/' \
-e 's/;newrelic.daemon.start_timeout =.*/newrelic.daemon.start_timeout=5s/' \
/usr/local/etc/php/conf.d/newrelic.ini
USER www
# php app related build, etc. ...
Thank you very much.
In your Dockerfile you are switching the user with USER www, and that is why the agent is not running.
As the error suggests, it is expected to run as the root user, so remove the USER www line, rebuild the image with --no-cache, and it will start working as root.
Official reference: https://docs.newrelic.com/docs/apm/agents/php-agent/advanced-installation/docker-other-container-environments-install-php-agent/
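After removing the USER www line, a rebuild that bypasses the cache would look roughly like this (a sketch only; my-php-app is just an example tag):
# Rebuild from scratch so the cached layer that set USER www is not reused.
docker build --no-cache -t my-php-app .
# Optional sanity check that the container now runs as root.
docker run --rm my-php-app whoami   # expected output: root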

AWS Lambda - Swift Operation not permitted

I am trying to compile Swift code via AWS Lambda.
To do so, I am using an Ubuntu 18.04 image as the base.
The Swift version is 5.0.1.
When the image is executed locally, it works fine.
When I try to execute it in AWS Lambda, I get the following error:
/usr/bin/ld.gold: fatal error: /tmp/project/src/a.out: Operation not permitted
clang-7: error: linker command failed with exit code 1 (use -v to see invocation)
I think the problem is caused by the read-only AWS Lambda container, which only allows writing to the /tmp/ folder.
Do you know how to fix this error? It seems that Swift needs write access to folders it doesn't have permission for.
Dockerfile
FROM ubuntu:18.04
# install clang
RUN apt-get update
RUN apt-get install -y clang
# install wget
RUN apt-get install -y wget
# install swift dependencies
RUN apt-get install -y libcurl3 libpython2.7 libpython2.7-dev
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get install -y --no-install-recommends \
binutils \
git \
libc6-dev \
libcurl4 \
libedit2 \
libgcc-5-dev \
libpython2.7 \
libsqlite3-0 \
libstdc++-5-dev \
libxml2 \
pkg-config \
tzdata \
zlib1g-dev \
libbsd-dev
RUN apt-get install -y libicu-dev
# install swift 5.0.1
RUN wget https://swift.org/builds/swift-5.0.1-release/ubuntu1804/swift-5.0.1-RELEASE/swift-5.0.1-RELEASE-ubuntu18.04.tar.gz
RUN tar xzf swift-5.0.1-RELEASE-ubuntu18.04.tar.gz
RUN mv swift-5.0.1-RELEASE-ubuntu18.04 /usr/lib/swift
RUN echo "export PATH=/usr/lib/swift/usr/bin:$PATH" >> ~/.bashrc
RUN . ~/.bashrc
RUN chmod -R o+r /usr/lib/swift
This is the command executed in the AWS-Lambda handler function:
swiftc hello_world.swift -o a.out
hello_world.swift
print("Hello World!")
Your output must be written to the /tmp folder:
swiftc hello_world.swift -o /tmp/a.out
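For example, the handler could stage everything under /tmp before compiling, since that is the only writable path in a Lambda container (a minimal sketch; the paths are assumptions based on the question):
# Copy the source into the writable /tmp area, then compile and run it there.
cp hello_world.swift /tmp/
cd /tmp
swiftc hello_world.swift -o /tmp/a.out
/tmp/a.out   # prints "Hello World!"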

Prepare coursier artifact for offline use inside container

I have an sbt project producing my artifact xyz.
I would like to put it, along with all its dependencies, into the Docker container so it can be launched with
coursier launch --mode offline xyz
The important part is that the preparation should make use of the local coursier cache from the host.
I tried:
executing sbt publishLocal,
then resolving my artifact's dependencies (coursier resolve xyz),
then preparing two directories - local & cache - by copying the resolved artifacts into them,
then copying those directories into the Docker container (as the coursier cache and the Ivy local repository, respectively).
This didn't work because coursier doesn't list .pom and .xml files in its output. I tried copying whole directories (abc/1.0.0 instead of abc/1.0.0/some.jar), but AFAIK there is no reliable way to know how many folders up one has to go, because Maven and Ivy have different directory structures.
While my use case is not quite identical to yours, I figured I'd write up my findings; maybe my solution works for you as well!
Here's my sample Dockerfile; I used this to install scalafmt in an offline-compatible way.
FROM ubuntu:jammy
RUN : \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* # */ stackoverflow highlighting bug
ARG CS=v2.1.0-RC4
ARG CS_SHA256=176e92e08ab292531aa0c4993dbc9f2c99dec79578752f3b9285f54f306db572
ARG JDK_SHA256=aef49cc7aa606de2044302e757fa94c8e144818e93487081c4fd319ca858134b
ENV PATH=/opt/coursier/bin:$PATH
RUN : \
&& curl --location --silent --output /tmp/cs.gz "https://github.com/coursier/coursier/releases/download/${CS}/cs-x86_64-pc-linux.gz" \
&& echo "${CS_SHA256} /tmp/cs.gz" | sha256sum --check \
&& curl --location --silent --output /tmp/jdk.tgz "https://download.java.net/openjdk/jdk17/ri/openjdk-17+35_linux-x64_bin.tar.gz" \
&& echo "${JDK_SHA256} /tmp/jdk.tgz" | sha256sum --check \
&& mkdir -p /opt/coursier \
&& tar --strip-components=1 -C /opt/coursier -xf /tmp/jdk.tgz \
&& gunzip /tmp/cs.gz \
&& mv /tmp/cs /opt/coursier/bin \
&& chmod +x /opt/coursier/bin/cs \
&& rm /tmp/jdk.tgz
ENV COURSIER_CACHE=/opt/.cs-cache
RUN : \
&& cs fetch scalafmt:3.6.1 \
&& cs install scalafmt:3.6.1 --dir /opt/wd/bin
The key to offline execution for me was to use cs fetch and to set COURSIER_CACHE.
Here's the offline execution succeeding:
$ docker run --net=none --rm -ti cs /opt/wd/bin/scalafmt --version
scalafmt 3.6.1
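Applying the same pattern to the artifact from the question would presumably look like this (xyz stands in for your own coordinates, as in your question; an untested sketch):
# In the Dockerfile, while the network is still available, warm the cache
# under COURSIER_CACHE with the artifact and all of its dependencies.
RUN cs fetch xyz
# At runtime the cache is already populated, so the offline launch
# should not need the network at all:
#   docker run --net=none --rm -ti <image> cs launch --mode offline xyz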

tzdata freezes Docker build of Swift image

While running docker build to build a Swift image, tzdata stops the process. It prompts me to choose a geographic area, but there is no reaction after I enter the number.
Configuring tzdata
------------------
Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.
1. Africa 4. Australia 7. Atlantic 10. Pacific 13. Etc
2. America 5. Arctic 8. Europe 11. SystemV
3. Antarctica 6. Asia 9. Indian 12. US
Geographic area:
My Dockerfile is:
FROM ubuntu:18.04
LABEL maintainer="Swift Infrastructure <swift-infrastructure#swift.org>"
LABEL Description="Docker Container for the Swift programming language"
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y \
git \
curl \
cmake \
wget \
ninja-build \
clang \
python \
uuid-dev \
libicu-dev \
icu-devtools \
libbsd-dev \
libedit-dev \
libxml2-dev \
libsqlite3-dev \
swig \
libpython-dev \
libncurses5-dev \
pkg-config \
libblocksruntime-dev \
libcurl4-openssl-dev \
systemtap-sdt-dev \
tzdata \
rsync && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Vapor setup
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
# Install vapor and clean
RUN apt-get install swift vapor -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN vapor --help
It worked normally before I reset Docker. Where is my mistake?
Prevent tzdata from asking by adding ARG DEBIAN_FRONTEND=noninteractive to the Dockerfile.
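A minimal sketch of where the line would go, assuming the rest of the Dockerfile stays the same (the package list is abbreviated here):
FROM ubuntu:18.04
LABEL maintainer="Swift Infrastructure <swift-infrastructure@swift.org>"
LABEL Description="Docker Container for the Swift programming language"
# Prevent tzdata (and any other package) from prompting during the build.
# ARG only applies at build time, so the running container is unaffected.
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y tzdata rsync && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*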

How to run a SQL script before the application launches in Docker

I'm deploying the project with ASP.NET Core, PostgreSQL and Docker on Windows 10 (no PostgreSQL installed locally). So I have to run a SQL script that updates data before the application launches (it is needed to register a singleton dependency injection).
The content of my Dockerfile is as follows:
# TODO use official docker image
FROM microsoft/dotnet:1.1.0-sdk-projectjson
# Install .NET CLI dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
bzip2 \
file \
g++ \
gcc \
imagemagick \
libbz2-dev \
libc6-dev \
libcurl4-openssl-dev \
libdb-dev \
libevent-dev \
libffi-dev \
libgdbm-dev \
libgeoip-dev \
libglib2.0-dev \
libjpeg-dev \
libkrb5-dev \
liblzma-dev \
libmagickcore-dev \
libmagickwand-dev \
libmysqlclient-dev \
libncurses-dev \
libpng-dev \
libpq-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
libtool \
libwebp-dev \
libxml2-dev \
libxslt-dev \
libyaml-dev \
make \
patch \
xz-utils \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
# Set environment variables
ENV ASPNETCORE_URLS="http://*:5000"
ENV ASPNETCORE_ENVIRONMENT="Development"
# Copy files to app directory
COPY . /app
# Set working directory
WORKDIR /app
# Restore NuGet packages
RUN ["dotnet", "restore"]
# Build app
RUN ["dotnet", "build"]
#dotnet ef migrations add InitialCreate
RUN ["dotnet", "ef", "migrations", "add", "InitialCreate"]
# Open up port
EXPOSE 5000
CMD chmod +x ./docker-start.sh
CMD bash ./docker-start.sh
And here is the content of docker-start.sh:
#!/bin/bash
set -e
# How to apply migrations
dotnet ef database update
# I would like to run the SQL file here
psql -h postgres --username postgres -d POSTGRES_USER -a -f /app/static.sql
# Start web app
echo "Starting web app"
dotnet run
How can I do that? Thanks in advance.
I have just found a solution for this. I was missing postgresql-client; it needs to be installed because psql is used to run the SQL script started from the Dockerfile.
So the Dockerfile should be changed as follows:
# TODO use official docker image
FROM microsoft/dotnet:1.1.0-sdk-projectjson
# Install .NET CLI dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
bzip2 \
file \
g++ \
gcc \
imagemagick \
libbz2-dev \
libc6-dev \
libcurl4-openssl-dev \
libdb-dev \
libevent-dev \
libffi-dev \
libgdbm-dev \
libgeoip-dev \
libglib2.0-dev \
libjpeg-dev \
libkrb5-dev \
liblzma-dev \
libmagickcore-dev \
libmagickwand-dev \
libmysqlclient-dev \
libncurses-dev \
libpng-dev \
libpq-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
libtool \
libwebp-dev \
libxml2-dev \
libxslt-dev \
libyaml-dev \
make \
patch \
xz-utils \
zlib1g-dev \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Install netcat so that we can ping the database server until it is ready
RUN apt-get update -qq \
&& apt-get install -y netcat \
&& rm -rf /var/lib/apt/lists/*
# Set environment variables
ENV ASPNETCORE_URLS="http://*:5000"
ENV ASPNETCORE_ENVIRONMENT="Development"
ENV DB_HOSTNAME="posgres"
# Copy files to app directory
COPY . /app
# Set working directory
WORKDIR /app
# Restore NuGet packages
RUN ["dotnet", "restore"]
# Build app
RUN ["dotnet", "build"]
#dotnet ef migrations add InitialCreate
RUN ["dotnet", "ef", "migrations", "add", "InitialCreate"]
# Open up port
EXPOSE 5000
# Only the last CMD takes effect, so make the script executable at build time
RUN chmod +x ./docker-start.sh
CMD bash ./docker-start.sh
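Since netcat is installed in the image, docker-start.sh can also wait for PostgreSQL to accept connections before running the script. This is a sketch based on the Dockerfile above; using DB_HOSTNAME and the default PostgreSQL port 5432 is my assumption:
#!/bin/bash
set -e
# Wait until PostgreSQL accepts TCP connections (netcat was installed for this).
until nc -z "$DB_HOSTNAME" 5432; do
  echo "Waiting for PostgreSQL at $DB_HOSTNAME:5432..."
  sleep 1
done
# Apply migrations, load the static data, then start the app.
dotnet ef database update
psql -h "$DB_HOSTNAME" --username postgres -d POSTGRES_USER -a -f /app/static.sql
echo "Starting web app"
dotnet run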
Thanks.