My team runs into a recurring issue: if we misspell a parameter when submitting our Argo workflows, that parameter is silently ignored. For example, say I run the following submission command, where the true (optional) parameter is validation_data_config:
argo submit --from workflowtemplate/training \
-p output=$(artifacts resolve-url $BUCKET $PROJECT $TAG) \
-p tag=${TAG} \
-p training_config="$( yq . training.yaml )" \
-p training_data_config="$( yq . train-data.yaml )" \
-p validation-data-config="$( yq . validation-data.yaml )" \
-p wandb-project="cyclegan_c48_to_c384" \
-p cpu=4 \
-p memory="15Gi" \
--name "${NAME}" \
--labels "project=${PROJECT},experiment=${EXPERIMENT},trial=${TRIAL}"
The validation configuration is ignored and the job runs without validation metrics, because I used hyphens instead of underscores.
I understand the parameters should use consistent hyphen/underscore naming, but we've also had this happen with parameters like "memory", where naming style isn't the issue.
Is there any way to detect this automatically, to make the submission fail if a parameter is unused, or at least to get a list of parameters for a given workflow template so I can write such detection myself?
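What I have in mind is something like the sketch below: dump the template's declared parameter names and compare them against the `-p` names before submitting. `check_params` is my own hypothetical helper, and the jq path assumes the declarations live under `.spec.arguments.parameters`, which may differ for your templates.

```shell
# Sketch of the detection I have in mind (check_params is a hypothetical helper).
# Assumes the declared names can be dumped as JSON first, e.g.:
#   declared=$(argo template get training -o json \
#              | jq -r '.spec.arguments.parameters[].name')
check_params() {  # usage: check_params "declared names" "submitted names"
  local declared=" $1 " rc=0 p
  for p in $2; do
    case "$declared" in
      *" $p "*) ;;                               # declared: OK
      *) echo "unknown parameter: $p"; rc=1 ;;   # likely typo: fail
    esac
  done
  return $rc
}
```

A wrapper script would then refuse to run argo submit when this returns non-zero.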
I am using Docker to generate the Go API code, but how do I pass the -Dnoservice option through to the Docker container?
docker run --rm -v "${PWD}:/local" openapitools/openapi-generator-cli generate \
-i https://raw.githubusercontent.com/openapitools/openapi-generator/master/modules/openapi-generator/src/test/resources/3_0/petstore.yaml \
-g go-server \
-o /local/out/go
referring to https://openapi-generator.tech/docs/generators/go-server
where it says
Generates a Go server library using OpenAPI-Generator. By default, it
will also generate service classes -- which you can disable with the
-Dnoservice environment variable.
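One approach I'm considering, on the unverified assumption that the image's entrypoint runs `java ${JAVA_OPTS} -jar ...` and therefore passes Java system properties through the JAVA_OPTS environment variable:

```shell
# Unverified assumption: the official image's entrypoint expands ${JAVA_OPTS}
# before -jar, so -Dnoservice can be injected as an environment variable.
args=(run --rm
  -e JAVA_OPTS=-Dnoservice   # Java system property, not a generator CLI flag
  -v "${PWD}:/local"
  openapitools/openapi-generator-cli generate
  -i https://raw.githubusercontent.com/openapitools/openapi-generator/master/modules/openapi-generator/src/test/resources/3_0/petstore.yaml
  -g go-server
  -o /local/out/go)
# Guarded so the snippet can be inspected even where Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker "${args[@]}" || true   # tolerate a missing daemon in sandboxes
fi
```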
I've been trying to train Tesseract on a new Arabic font. I was able to train it at first with the default training_text file available once you install Tesseract, but I wanted to train it using my own generated data.
So I proceeded as follows:
First, I changed the ara.training_text file and inserted some of the data that I want to train my model on.
Then I generated the .tif files using this command:
!/content/tesstutorial/tesseract/src/training/tesstrain.sh --fonts_dir /content/fonts \
--fontlist 'Traditional Arabic' \
--lang ara \
--linedata_only \
--langdata_dir /content/tesstutorial/langdata \
--tessdata_dir /content/tesstutorial/tesseract/tessdata \
--save_box_tiff \
--maxpages 100 \
--output_dir /content/train
Then I extracted ara.lstm from the best-trained (train_best) Arabic traineddata:
!combine_tessdata -e /content/tesstutorial/tesseract/tessdata/best/ara.traineddata ara.lstm
All good so far, but when I proceed to call lstmtraining, I get a "Compute CTC targets failed" error whenever I run training:
!OMP_THREAD_LIMIT=8 lstmtraining \
--continue_from /content/ara.lstm \
--model_output /content/output/araNewModel \
--old_traineddata /content/tesstutorial/tesseract/tessdata/best/ara.traineddata \
--traineddata /content/train/ara/ara.traineddata \
--train_listfile /content/train/ara.training_files.txt \
--max_iterations 200 \
--debug_level -1
I realized that this was only happening when I added Arabic numerals to my training text. When I pass in a training_text file with no Arabic numerals, it works fine.
Can someone tell me what this error is about and how to solve it?
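My current suspicion is that some ground-truth characters (the Arabic numerals) have no entry in the model's unicharset. A quick diagnostic I sketched, where `missing_chars` is my own helper and the combine_tessdata component name is a guess:

```shell
# Guess: first extract the recognizer's unicharset, e.g.
#   combine_tessdata -e ara.traineddata ara.lstm-unicharset
# then list training-text characters that have no entry in it.
missing_chars() {  # usage: missing_chars <unicharset_file> <training_text_file>
  local ch
  grep -o . "$2" | sort -u | while IFS= read -r ch; do
    [ "$ch" = " " ] && continue                       # skip plain spaces
    cut -d' ' -f1 "$1" | grep -qxF -- "$ch" || printf '%s\n' "$ch"
  done
}
```

If this printed the numerals, it would explain why removing them from training_text makes the error go away.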
Given an STL file, how can you convert it to an animated gif using the command line (bash)?
I've found a few articles that vaguely describe how to do this through the GUI. I've been able to produce the following, however the animation is very rough and the shadows jump around.
for ((angle=0; angle <=360; angle+=5)); do
openscad /dev/null -o dump$angle.png -D "cube([2,3,4]);" --imgsize=250,250 --camera=0,0,0,45,0,$angle,25
done
# https://unix.stackexchange.com/a/489210/39263
ffmpeg \
-framerate 24 \
-pattern_type glob \
-i 'dump*.png' \
-r 8 \
-vf scale=512:-1 \
out.gif \
;
OpenSCAD has a built-in --animate X parameter; however, using that likely won't work when passing in the camera angle as a parameter.
Resources
https://github.com/openscad/openscad/issues/1632#issuecomment-219203658
https://blog.prusaprinters.org/how-to-animate-models-in-openscad_29523/
https://github.com/openscad/openscad/issues/1573
https://github.com/openscad/openscad/pull/1808
https://forum.openscad.org/Product-Video-produced-with-OpenSCAD-td15783.html
Bash + Docker
Converting an STL to a GIF requires several steps:
1. Center the STL at the origin
2. Convert the STL into a collection of .PNG files from different angles
3. Combine those PNG files into a .gif file
Assuming you have Docker installed, you can run the following to convert an STL into an animated GIF.
(Note: a more up-to-date version of this script is available at spuder/CAD-scripts/stl2gif.)
This depends on 3 Docker containers:
spuder/stl2origin
openscad/openscad:2021.01
linuxserver/ffmpeg:version-4.4-cli
# 1. Use spuder/stl2origin:latest docker container to center the file at origin
# A file with the offsets will be saved to `${MYTMPDIR}/foo.sh`
file=/tmp/foo.stl
MYTMPDIR="$(mktemp -d)"
trap 'rm -rf -- "$MYTMPDIR"' EXIT
docker run \
-e OUTPUT_BASH_FILE=/output/foo.sh \
-v $(dirname "$file"):/input \
-v $MYTMPDIR:/output \
--rm spuder/stl2origin:latest \
"/input/$(basename "$file")"
cp "${file}" "$MYTMPDIR/foo.stl"
# 2. Read ${MYTMPDIR}/foo.sh and load the offset variables ($XTRANS, $XMID,$YTRANS,$YMID,$ZTRANS,$ZMID)
# Save the new centered STL to `$MYTMPDIR/foo-centered.stl`
source $MYTMPDIR/foo.sh
docker run \
-v "$MYTMPDIR:/input" \
-v "$MYTMPDIR:/output" \
openscad/openscad:2021.01 openscad /dev/null -D "translate([$XTRANS-$XMID,$YTRANS-$YMID,$ZTRANS-$ZMID])import(\"/input/foo.stl\");" -o "/output/foo-centered.stl"
# 3. Convert the STL into 60 .PNG images with the camera rotating around the object. Note `$t` is a built in openscad variable that is automatically set based on time when --animate option is used
# OSX users will need to replace `openscad` with `/Applications/OpenSCAD.app/Contents/MacOS/OpenSCAD`
# Save all images to $MYTMPDIR/foo{0..60}.png
# This is not yet running in a docker container due to a bug: https://github.com/openscad/openscad/issues/4028
openscad /dev/null \
-D '$vpr = [60, 0, 360 * $t];' \
-o "${MYTMPDIR}/foo.png" \
-D "import(\"${MYTMPDIR}/foo-centered.stl\");" \
--imgsize=600,600 \
--animate 60 \
--colorscheme "Tomorrow Night" \
--viewall --autocenter
# 4. Use ffmpeg to combine all images into a .GIF file
# Tune framerate (15) and -r (60) to produce a faster/slower/smoother image
yes | ffmpeg \
-framerate 15 \
-pattern_type glob \
-i "$MYTMPDIR/*.png" \
-r 60 \
-vf scale=512:-1 \
"${file}.gif" \
;
rm -rf -- "$MYTMPDIR"
STL File
GIF without centering
GIF with centering
I'm executing gcloud composer commands:
gcloud composer environments run airflow-composer \
--location europe-west1 --user-output-enabled=true \
backfill -- -s 20171201 -e 20171208 dags.my_dag_name

which prints:

kubeconfig entry generated for europe-west1-airflow-compos-007-gke.
It's a regular Airflow backfill. The command above only prints the results at the end of the whole backfill range. Is there any way to get the output in a streaming manner, so that each DAG run is printed to standard output as it is backfilled, like with the regular Airflow CLI?
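What I'm hoping for is something like the sketch below: since the command already generates a kubeconfig entry, maybe I can exec the Airflow CLI in the scheduler pod myself and get line-by-line output. The pod label and container name here are guesses; adjust to whatever `kubectl get pods -A` actually shows.

```shell
# Guesses: the Composer scheduler pod carries a `run=airflow-scheduler` label
# and an `airflow-scheduler` container. kubectl exec streams stdout as the
# backfill progresses, unlike `gcloud composer environments run`.
backfill=(airflow backfill -s 20171201 -e 20171208 dags.my_dag_name)
if command -v kubectl >/dev/null 2>&1; then
  pod=$(kubectl get pods -A -l run=airflow-scheduler \
        -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)
  ns=$(kubectl get pods -A -l run=airflow-scheduler \
       -o jsonpath='{.items[0].metadata.namespace}' 2>/dev/null || true)
  if [ -n "$pod" ]; then
    kubectl exec -n "$ns" "$pod" -c airflow-scheduler -- "${backfill[@]}" || true
  fi
fi
```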