I generated a Dart project with dart create -t server-shelf . --force.
In the top-level folder I created a JSON file (my_data.json) with some mock data.
In my code I read the data from the JSON file like this:
final _data = json.decode(File('my_data.json').readAsStringSync()) as List<dynamic>;
But if I try to start my server with docker run -it -p 8080:8080 myserver I am getting:
FileSystemException: Cannot open file, path = 'my_data.json' (OS
Error: No such file or directory, errno = 2)
My Dockerfile:
# Use latest stable channel SDK.
FROM dart:stable AS build
# Resolve app dependencies.
WORKDIR /app
COPY pubspec.* ./
RUN dart pub get
# Copy app source code (except anything in .dockerignore) and AOT compile app.
COPY . .
RUN dart compile exe bin/server.dart -o bin/server
# Build minimal serving image from AOT-compiled `/server`
# and the pre-built AOT-runtime in the `/runtime/` directory of the base image.
FROM scratch
COPY --from=build /runtime/ /
COPY --from=build /app/bin/server /app/bin/
COPY my_data.json /app/my_data.json
# Start server.
EXPOSE 8080
CMD ["/app/bin/server"]
I think this is because you didn't set the WORKDIR for the new image that you build FROM scratch, so the server looks for my_data.json relative to the default working directory, /. You can fix this simply by adding WORKDIR /app again to the specification of the new image, the one used to run your application. It will look like this:
...
# Start server.
WORKDIR /app
EXPOSE 8080
CMD ["/app/bin/server"]
Replace
COPY my_data.json /app/my_data.json
with
COPY --from=build /app/my_data.json /app/
so the file is copied from the build stage, where COPY . . already placed it.
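Putting both fixes together, the final stage would look something like this (a sketch assembled from the snippets above):
FROM scratch
COPY --from=build /runtime/ /
COPY --from=build /app/bin/server /app/bin/
COPY --from=build /app/my_data.json /app/
# Start server.
WORKDIR /app
EXPOSE 8080
CMD ["/app/bin/server"]
With WORKDIR /app set, the relative path in File('my_data.json') resolves to /app/my_data.json, which is exactly where the file was copied.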
Related
For test purposes, I create and run an Azurite Docker image in a test pipeline.
I would like to have the blob container automatically created though after Azurite is started, as it would simplify things.
Is there any good way to achieve this?
For the Postgres image we use, we can specify an init.sql which is run on startup. If something similar is available for Azurite, that would be awesome.
You can use the following Dockerfile to install the azure-storage-blob Python package on the Alpine-based azurite image. The resulting image size is ~400MB, compared to ~1.2GB for the azure-cli image.
ARG AZURITE_VERSION="3.17.0"

FROM mcr.microsoft.com/azure-storage/azurite:${AZURITE_VERSION}

# Install azure-storage-blob python package
RUN apk update && \
    apk --no-cache add py3-pip && \
    apk add --virtual=build gcc libffi-dev musl-dev python3-dev && \
    pip3 install --upgrade pip && \
    pip3 install azure-storage-blob==12.12.0

# Copy init_azurite.py script
COPY ./init_azurite.py init_azurite.py

# Copy local blobs to azurite
COPY ./init_containers init_containers

# Run the blob emulator and initialize the blob containers
CMD python3 init_azurite.py --directory=init_containers & \
    azurite-blob --blobHost 0.0.0.0 --blobPort 10000
The init_azurite.py script is a local Python script that uses the azure-storage-blob package to batch upload files and directories to the azurite blob storage emulator.
import argparse
import os
from time import sleep

from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient, ContainerClient


def upload_file(container_client: ContainerClient, source: str, dest: str) -> None:
    """
    Upload a single file to a path inside the container.
    """
    print(f"Uploading {source} to {dest}")
    with open(source, "rb") as data:
        try:
            container_client.upload_blob(name=dest, data=data)
        except ResourceExistsError:
            pass


def upload_dir(container_client: ContainerClient, source: str, dest: str) -> None:
    """
    Upload a directory to a path inside the container.
    """
    prefix = "" if dest == "" else dest + "/"
    prefix += os.path.basename(source) + "/"
    for root, dirs, files in os.walk(source):
        for name in files:
            dir_part = os.path.relpath(root, source)
            dir_part = "" if dir_part == "." else dir_part + "/"
            file_path = os.path.join(root, name)
            blob_path = prefix + dir_part + name
            upload_file(container_client, file_path, blob_path)


def init_containers(
    service_client: BlobServiceClient, containers_directory: str
) -> None:
    """
    Iterate on the containers directory and do the following:
    1- create the container.
    2- upload all folders and files to the container.
    """
    for container_name in os.listdir(containers_directory):
        container_path = os.path.join(containers_directory, container_name)
        if os.path.isdir(container_path):
            container_client = service_client.get_container_client(container_name)
            try:
                container_client.create_container()
            except ResourceExistsError:
                pass
            for blob in os.listdir(container_path):
                blob_path = os.path.join(container_path, blob)
                if os.path.isdir(blob_path):
                    upload_dir(container_client, blob_path, "")
                else:
                    upload_file(container_client, blob_path, blob)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Initialize azurite emulator containers."
    )
    parser.add_argument(
        "--directory",
        required=True,
        help="""
        Directory that contains subdirectories named after the
        containers that we should create. Each subdirectory will contain the files
        and directories of its container.
        """
    )
    args = parser.parse_args()

    # Connect to the localhost emulator (after 5 secs to make sure it's up).
    sleep(5)
    blob_service_client = BlobServiceClient(
        account_url="http://localhost:10000/devstoreaccount1",
        credential={
            "account_name": "devstoreaccount1",
            "account_key": (
                "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq"
                "/K1SZFPTOtr/KBHBeksoGMGw=="
            )
        }
    )

    # Only initialize if not already initialized.
    if next(blob_service_client.list_containers(), None):
        print("Emulator already has containers, will skip initialization.")
    else:
        init_containers(blob_service_client, args.directory)
This script will be copied to the azurite container and will populate the initial blob containers every time the azurite container is started unless some containers were already persisted using docker volumes. In that case, nothing will happen.
Following is an example docker-compose.yml file:
azurite:
  build:
    context: ./
    dockerfile: Dockerfile
    args:
      AZURITE_VERSION: 3.17.0
  restart: on-failure
  ports:
    - 10000:10000
  volumes:
    - azurite-data:/opt/azurite

volumes:
  azurite-data:
Using such volumes will persist the emulator data until you destroy them (e.g. by using docker-compose down -v).
Finally, init_containers is a local directory that contains the containers and their folders/files. It will be copied to the azurite container when the image is built.
For example:
init_containers:
    container-name-1:
        dir-1:
            file.txt
            img.png
        dir-2:
            file.txt
    container-name-2:
        dir-1:
            file.txt
            img.png
I've solved the issue by creating a custom docker image and executing azure-cli tools from a health check. There could certainly be better solutions, and I will update the accepted answer if someone posts a better solution.
In more detail
A solution to create the required data on startup is to run my own script. I chose to trigger the script from a health check I defined in docker-compose. What it does is use azure cli tools to create a container and then verify that it exists.
The script:
AZURE_STORAGE_CONNECTION_STRING="UseDevelopmentStorage=true"
export AZURE_STORAGE_CONNECTION_STRING
az storage container create -n images
az storage container show -n images
exit $?
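The compose wiring itself isn't shown above, so here is a sketch of how such a script could be attached as a docker-compose health check (the service name, script path, and timings are assumptions, not from the answer):
azurite:
  build: .
  ports:
    - 10000:10000
  healthcheck:
    # Runs inside the container; creates the container on first success.
    test: ["CMD", "sh", "/init_storage.sh"]
    interval: 10s
    retries: 10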
However, the azurite image is based on Alpine, which doesn't have apt, so installing azure-cli was a bit tricky. So I did it the other way around and based my image on mcr.microsoft.com/azure-cli:latest. With that done, I installed Azurite like this:
RUN apk add npm
RUN npm install -g azurite --silent
All that's left is to actually run azurite, see the official azurite dockerfile for details.
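Assembled, the image definition might look roughly like this (a sketch; take the exact startup command and flags from the official azurite Dockerfile, and the script name is an assumption):
FROM mcr.microsoft.com/azure-cli:latest
RUN apk add npm && \
    npm install -g azurite --silent
COPY init_storage.sh /init_storage.sh
EXPOSE 10000
# azurite-blob is one of the binaries the azurite npm package installs
CMD ["azurite-blob", "--blobHost", "0.0.0.0", "--blobPort", "10000"]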
It is possible to do this without azure-cli and use curl instead (and, with that, avoid the azure-cli Docker image). However, it was a bit complicated to get the authentication header working properly, so using azure-cli was easier.
I am running a Celery Executor and I'm trying to run some python script in the KubernetesPodOperator. Below are examples of what I have tried that didn't work. What am I doing wrong?
Running script
org_node = KubernetesPodOperator(
    namespace='default',
    image="python",
    cmds=["python", "somescript.py" "-c"],
    arguments=["print('HELLO')"],
    labels={"foo": "bar"},
    image_pull_policy="Always",
    name=task,
    task_id=task,
    is_delete_operator_pod=False,
    get_logs=True,
    dag=dag
)
Running function load_users_into_table()
def load_users_into_table(postgres_hook, schema, path):
    gdf = read_csv(path)
    gdf.to_sql('users', con=postgres_hook.get_sqlalchemy_engine(), schema=schema)

org_node = KubernetesPodOperator(
    namespace='default',
    image="python",
    cmds=["python", "somescript.py" "-c"],
    arguments=[load_users_into_table],
    labels={"foo": "bar"},
    image_pull_policy="Always",
    name=task,
    task_id=task,
    is_delete_operator_pod=False,
    get_logs=True,
    dag=dag
)
The script somescript.py must be inside the Docker image.
Step-1: Let's create an image https://docs.docker.com/develop/develop-images/dockerfile_best-practices/.
FROM python:3.8
# copy requirements.txt from local to container
COPY requirements.txt requirements.txt
# install dependencies into container (geopandas, sqlalchemy)
RUN pip install -r requirements.txt
# copy the python script from local to container
COPY somescript.py somescript.py
ENTRYPOINT [ "python", "somescript.py"]
Step-2: Build and push the image into public Docker repository https://hub.docker.com.
NB: kubernetes_pod_operator looks for the image in a public Docker repo by default.
# build image
docker build -t my-python-img:latest .
# test if your image works perfectly
docker run my-python-img:latest
# push image.
docker tag my-python-img username/my-python-img
docker push username/my-python-img
# (optional) verify that the pushed image can be pulled
docker pull username/my-python-img
Step-3: Let's create the k8s task.
p = KubernetesPodOperator(
    namespace='default',
    image='username/my-python-img:latest',
    labels={'dag-id': dag.dag_id},
    name='airflow-my-image-pod',
    task_id='load-users',
    in_cluster=False,  # False: local, True: cluster
    cluster_context='microk8s',
    config_file='/usr/local/airflow/include/.kube/config',
    is_delete_operator_pod=True,
    get_logs=True,
    dag=dag
)
If you don't understand where the configuration file comes from, look here: https://www.astronomer.io/docs/cloud/stable/develop/kubepodoperator-local.
Finally, I want to mention something important when working with databases (credentials): Kubernetes offers Secrets to secure sensitive information. https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html
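As a hedged sketch of that last point (the Secret name, key, and env var below are made-up placeholders, and the import path can vary by Airflow version):

from airflow.kubernetes.secret import Secret

# Expose key 'password' of the K8s Secret 'my-db-credentials'
# as the env var DB_PASSWORD inside the pod.
db_password = Secret(
    deploy_type='env',
    deploy_target='DB_PASSWORD',
    secret='my-db-credentials',
    key='password',
)

p = KubernetesPodOperator(
    namespace='default',
    image='username/my-python-img:latest',
    name='airflow-my-image-pod',
    task_id='load-users',
    secrets=[db_password],
    get_logs=True,
    dag=dag
)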
KubernetesPodOperator launches a Kubernetes pod that runs a container as specified in the operator's arguments.
First Example
In the first example, the following happens:
KubernetesPodOperator instructs K8s to launch a pod and prepare to run a container in it using the python image (the image parameter) from hub.docker.com (the default image registry)
ENTRYPOINT of the python image is replaced by ["python", "somescript.py" "-c"] (the cmd parameter)
CMD of the python image is replaced by ["print('HELLO')"] (the arguments parameter)
...
The container is run
So, the complete command that is run in the container is
python somescript.py -c print('HELLO')
Obviously, the official Python image from Docker Hub does not have somescript.py in its working directory. Even if it did, it probably would not have been the one that you wrote. That is why the command fails with something like:
python: can't open file 'somescript.py': [Errno 2] No such file or directory
Second Example
In the second example, pretty much the same happens as in the first example, but the command that is run in the container (again based on the cmd and arguments parameters) is
python somescript.py -c None
(None is the string representation of the return value of load_users_into_table())
This command fails, because of the same reasons as in the first example.
How It Could be Done (a Sketch)
You could build a Docker image with somescript.py and all its dependencies. Push the image to an image registry. Specify the image, ENTRYPOINT, and CMD in the corresponding parameters of KubernetesPodOperator.
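A hedged sketch of that approach (the image name, script arguments, and task names are assumptions):

p = KubernetesPodOperator(
    namespace='default',
    image='username/my-python-img:latest',  # image built with somescript.py inside
    cmds=['python', 'somescript.py'],       # replaces the image's ENTRYPOINT
    arguments=['--schema', 'public'],       # replaces the image's CMD
    name='run-somescript-pod',
    task_id='run-somescript',
    get_logs=True,
    dag=dag
)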
In the following Dockerfile I'm trying to copy a jar file from a location on the host into the container, but it seems Docker does not like it, so I guess I'm missing something. Here is my Dockerfile:
FROM anapsix/alpine-java:jdk8
MAINTAINER joesan
ENV SBT_VERSION 0.13.15
ENV CHECKSUM 18b106d09b2874f2a538c6e1f6b20c565885b2a8051428bd6d630fb92c1c0f96
ENV APP_NAME my-app
ENV PROJECT_HOME /opt/apps
RUN mkdir -p $PROJECT_HOME/$APP_NAME
# Copy the jar file
COPY ./target/scala-*/my-app-*.jar $PROJECT_HOME/$APP_NAME
# Copy the database file
COPY .my-db.mv.db $PROJECT_HOME/$APP_NAME
# Run the application
CMD ["$PROJECT_HOME/$APP_NAME java -Denv=dev -jar my-app-*.jar"]
In my build pipeline, I could see the following error message:
Step 8/10 : COPY ./target/scala-*/my-app-*.jar $PROJECT_HOME/$APP_NAME
COPY failed: no source files were specified
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 4a240742a379 Less than a second ago 171MB
anapsix/alpine-java jdk8 ed55c27d366d 3 years ago 171MB
Error response from daemon: No such image: [secure]
Pushing image [secure] to repository hub.docker.com
The push refers to repository [docker.io/[secure]/my-app]
An image does not exist locally with the tag: [secure]/my-app
What is that I'm missing and how could I debug this? I mean I could add some echo statements to print out the path, but I'm not sure why I face this error!
This is probably because the target folder is not inside the build context ("./"). That can happen because it is ignored by a .dockerignore file, or because the build context does not point at the parent folder of the target folder.
In case you are not familiar with the build context, it's explained here
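For example, if the Dockerfile lives in a docker/ subfolder while the jar is built into ./target, running the build from the project root keeps target/ inside the context (a sketch; the layout and image name are hypothetical):
# run from the project root; the trailing '.' is the build context
docker build -f docker/Dockerfile -t my-app .
It's also worth checking that .dockerignore doesn't contain an entry like target/.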
I'm trying to run cypress tests inside a docker container. I've simplified my setup so I can just try to get a simple container instance running and a few tests executed.
I'm using docker-compose
version: '2.1'
services:
  e2e:
    image: test/e2e:v1
    command: ["./node_modules/.bin/cypress", "run", "--spec", "integration/mobile/all.js"]
    build:
      context: .
      dockerfile: Dockerfile-cypress
    container_name: cypress
    network_mode: host
and my Dockerfile-cypress
FROM cypress/browsers:chrome69
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN npm install cypress@3.1.0
COPY cypress /usr/src/app/cypress
COPY cypress.json /usr/src/app/cypress
RUN ./node_modules/.bin/cypress verify
When I run docker-compose up after building my image, I see:
cypress | name_to_handle_at on /dev: Operation not permitted
cypress | Can't run because no spec files were found.
cypress |
cypress | We searched for any files matching this glob pattern:
cypress |
cypress | integration/mobile/all-control.js
cypress exited with code 1
I've verified that my Cypress files and folders were copied over and that my test files exist. I've been stuck on this for a while and I'm unsure what to do besides giving up.
Any guidance appreciated.
Turns out Cypress automatically looks for the cypress/integration folder. Moving all my Cypress files inside this folder got it working.
The problem: no Cypress spec files were found in your automation suite.
Solution: Cypress test files are located in cypress/integration by default, but this can be configured to another directory.
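For Cypress versions before 10, that directory can be changed in cypress.json via the integrationFolder option; a minimal sketch, shown here with the default value:
{
  "integrationFolder": "cypress/integration"
}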
In this folder insert your suites per section:
for example:
- cypress/integration/billing-scenarios-suite
- cypress/integration/user-management-suite
- cypress/integration/proccesses-and-handlers-suite
I assume that these suite directories contain subdirectories (which represent some micro logic), so you need to run it recursively to gather all the files:
cypress run --spec \
cypress/integration/<your-suite>/**/* , \
cypress/integration/<your-suite>/**/**/*
If you run Cypress in Docker, verify that the volume containing the tests is mapped and mounted in the container (in the volumes section of the Dockerfile / docker-compose.yml file) and run it properly:
docker exec -it <container-id> cypress run --spec \
cypress/integration/<your-suite>/**/* , \
cypress/integration/<your-suite>/**/**/*
I noticed that if you use the click-and-drag method to get a file path in VS Code, it generates the path with a lowercase c drive letter, and this causes the error: "Can't run because no spec files were found. + We searched for specs matching this glob pattern:"
e.g. by click and drag I get:
cypress run --spec c:\Users\dmitr\Desktop\cno-dma-replica-for-cy-test\cypress\integration\dma-playground.spec.js
Notice the lowercase c above.
But if I use right-click 'get path', I get an uppercase C, and it works for some reason:
cypress run --spec C:\Users\dmitr\Desktop\cno-dma-replica-for-cy-test\cypress\integration\dma-playground.spec.js
It's strange, I know, but there you go.
I need to run a script on a target OS built by Yocto.
This script needs to run as part of the install and thus must run only once (either after the entire OS install or on the first boot). It cannot run on the host system, as it depends on hardware IO that exists only on the target.
An additional, minor constraint is that the rootfs is mounted read-only, but I guess that can be worked around by having the script remount it as rw and remount it as ro again after execution, or something along those lines.
Any help is appreciated.
I ended up doing what shibley had written. Here's a detailed howto:
Create a new layer
Put the new layer wherever your other layers are. Mine are in the stuff directory, next to the build directory.
Make the following files/directories:
meta_mylayer
├── conf
│ └── layer.conf
└── recipes-core
└── mylayer-initscript
├── initscript.bb
└── files
├── initscript.service
└── initscript.sh
meta_mylayer is the name of your new layer.
Let's define the layer in conf/layer.conf and tell it where to search for the recipes:
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "meta-mylayer"
BBFILE_PATTERN_meta-mylayer := "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-mylayer = "99"
The recipes are defined by the name of the .bb file. This layer only has one recipe, named initscript.
initscript.bb contains the recipe information. The following recipe will add our initscript service and put the actual install script, initscript.sh, into /usr/sbin/
SUMMARY = "Initial boot script"
DESCRIPTION = "Script to do any first boot init, started as a systemd service which removes itself once finished"
LICENSE = "CLOSED"
PR = "r3"
SRC_URI = " \
file://initscript.sh \
file://initscript.service \
"
do_compile () {
}

do_install () {
    install -d ${D}/${sbindir}
    install -m 0755 ${WORKDIR}/initscript.sh ${D}/${sbindir}
    install -d ${D}${systemd_unitdir}/system/
    install -m 0644 ${WORKDIR}/initscript.service ${D}${systemd_unitdir}/system
}
NATIVE_SYSTEMD_SUPPORT = "1"
SYSTEMD_PACKAGES = "${PN}"
SYSTEMD_SERVICE_${PN} = "initscript.service"
inherit allarch systemd
install -d creates any directories needed for the specified path, while install -m 0644 copies the specified file with 644 permissions. ${D} is the destination directory; by default it's ${WORKDIR}/image.
Create the systemd service definition
I won't go into much detail about how systemd works, but will rather paste the service definition:
[Unit]
Description=start initscript upon first boot
[Service]
Type=simple
ExecStart=/bin/sh -c 'sleep 5 ; /usr/sbin/initscript.sh'
Do note the script location at /usr/sbin/ - that's where it will be copied by the last line of our do_install function above.
Lastly, our initscript.sh script itself:
#!/bin/sh
logger "starting initscript"
# do some work here. Mount rootfs as rw if needed.
logger "initscript work done"
#job done, remove it from systemd services
systemctl disable initscript.service
logger "initscript disabled"
Register the layer
We need to register our new layer, so that bitbake knows it's there.
Edit the build/conf/bblayers.conf file and add the following line to the BASELAYERS variable:
${TOPDIR}/../stuff/meta-mylayer \
Now that the bitbake recognizes our layer, we need to add our recipe to the image.
Edit build/conf/local.conf and add the initscript recipe to the IMAGE_INSTALL_append variable. Here's how it looks when added next to python:
IMAGE_INSTALL_append = " python initscript"
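If you want to double-check that bitbake picked everything up, the standard layer tooling can help (a quick sketch; availability depends on your build environment):
# list the layers bitbake knows about
bitbake-layers show-layers
# show where the initscript recipe comes from
bitbake-layers show-recipes initscript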
Run the build
Run the build like you usually do. For example:
bitbake angstrom-lxde-image
After you install the build and boot for the first time, your initscript.sh will be executed.
The basic approach is to write a systemd service. The service can be enabled by default as defined in the Yocto recipe's systemd configuration. The script or application invoked by the service will disable the service when the script/application completes, i.e. systemctl disable foo. Therefore, the service will not run on future boots.
As you mentioned, the rootfs will require mounting as rw for this to work.
Thanks, this helped out. I needed to add
[Install]
WantedBy=multi-user.target
to the initscript.service to get it working
A simple solution is to use a package post-install script that stops itself when run at rootfs-creation time (exit 1 if $D is set). This will result in it running at first boot. Yes, the script will need to remount the root fs.
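A minimal sketch of that pattern (the function goes into one of your recipes; the actual work is a placeholder):
pkg_postinst_${PN}() {
    if [ -n "$D" ]; then
        # Still building the rootfs on the host: fail so the script
        # is deferred to first boot on the target.
        exit 1
    fi
    # Running on the target at first boot: do the real work here.
}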
I don't know how to address the issue of the rootfs being mounted read-only, but for running a script once on first boot you can use pkg_postinst_ontarget_${PN}.
Add this to one of your recipes:
pkg_postinst_ontarget_${PN}() {
    # shell code you want to run on first boot
    echo "Post Install Script Test" > /dev/ttyS1
}
${PN} will be replaced with the name of the package the recipe corresponds to.
The script will be run only once, on the first boot on the target machine as a post-install script of the package.