Can I use multiple --substitutions flags in gcloud builds submit command? - sh

This is what I'm doing right now:
gcloud builds submit ./dist \
--config=./cloudbuild.yaml \
--substitutions=_SUB_1=$VALUE_1,_SUB_2=$VALUE_2,_SUB_3=$VALUE_3 \
--project=$PROJECT_ID
This is what I'd like to do:
--substitutions=_SUB_1=$VALUE_1 \
--substitutions=_SUB_2=$VALUE_2 \
--substitutions=_SUB_3=$VALUE_3 \
Is this allowed?

Have you tried repeating the flag to prove to yourself whether that works?
I think it doesn't and you must use your first syntax.
When parsing command-line arguments, there's a distinction between a flag --xxx that takes a repeating value and a repeated flag --xxx=A --xxx=B where each occurrence takes a single value, so in general the two aren't interchangeable (though it's logical to want them to be equivalent).
Do you want this because you're trying to script the command and encountering problems?
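If scripting is the motivation, one workaround (a sketch; the variable names are taken from your question) is to build the comma-separated value across several lines of the script, then pass it as the single --substitutions flag gcloud expects:
SUBS="_SUB_1=${VALUE_1}"
SUBS="${SUBS},_SUB_2=${VALUE_2}"
SUBS="${SUBS},_SUB_3=${VALUE_3}"
gcloud builds submit ./dist \
--config=./cloudbuild.yaml \
--substitutions="${SUBS}" \
--project="${PROJECT_ID}"
This keeps each substitution on its own line in the script while still handing gcloud the single-flag form.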

Related

Detect if argo workflow is given unused parameters

My team runs into a recurring issue: if we misspell a parameter for our Argo workflows, that parameter gets ignored without any error. For example, say I run the following submission command, where the true (optional) parameter is validation_data_config:
argo submit --from workflowtemplate/training \
-p output=$( artifacts resolve-url $BUCKET $PROJECT $TAG) \
-p tag=${TAG} \
-p training_config="$( yq . training.yaml )" \
-p training_data_config="$( yq . train-data.yaml )" \
-p validation-data-config="$( yq . validation-data.yaml )" \
-p wandb-project="cyclegan_c48_to_c384" \
-p cpu=4 \
-p memory="15Gi" \
--name "${NAME}" \
--labels "project=${PROJECT},experiment=${EXPERIMENT},trial=${TRIAL}"
The validation configuration is ignored and the job is run without validation metrics because I used hyphens instead of underscores.
I understand the parameters should use consistent hyphen/underscore naming, but we've also had this happen with, for example, the memory parameter.
Is there any way to detect this automatically, to have the submission error if a parameter is unused, or even to get a list of parameters for a given workflow template so I can write such detection myself?
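One possible pre-submit check (a sketch, not a built-in Argo feature; it assumes the WorkflowTemplate declares its parameters under spec.arguments.parameters, and the file names here are illustrative) is to pull the declared parameter names with kubectl and diff them against the names you are about to submit:
# parameters the template declares
kubectl get workflowtemplate training \
  -o jsonpath='{.spec.arguments.parameters[*].name}' | tr ' ' '\n' | sort > declared.txt
# parameters you intend to pass, maintained alongside your submit script
printf '%s\n' output tag training_config training_data_config \
  validation-data-config wandb-project cpu memory | sort > submitted.txt
# comm -13 prints only lines unique to submitted.txt,
# i.e. parameters the template does not declare
comm -13 declared.txt submitted.txt
Failing the submission whenever comm produces output would turn the silent mismatch (validation-data-config here) into an error.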

How to print debugging information on one/specific OpenAPI model?

According to the OpenAPI docs, here is how one can print the generator's model data:
$ java -jar openapi-generator-cli.jar generate \
-g typescript-fetch \
-o out \
-i api.yaml \
-DdebugModels
which outputs 39000 lines, making it difficult to find the model of interest.
How to output debug information on just one model?
Unfortunately, there's no way to generate the debug log for just one model or operation.
As a workaround, you can draft a new spec that contains the model you want to debug.
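For example (a sketch; the Pet schema and file names are illustrative, not from the original spec), copy just the schema you care about into a throwaway spec and run the generator against that:
cat > debug-one-model.yaml <<'EOF'
openapi: 3.0.0
info:
  title: debug one model
  version: 1.0.0
paths: {}
components:
  schemas:
    Pet:                # copy the schema you care about from api.yaml
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
EOF
java -jar openapi-generator-cli.jar generate \
-g typescript-fetch \
-o out \
-i debug-one-model.yaml \
-DdebugModels
The -DdebugModels output is then limited to that one model (plus whatever it references), which is far easier to scan than 39000 lines; if the validator complains about the empty paths object, --skip-validate-spec gets past it.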

Makefile target to add k8s cluster config

I want a single command, taking arguments, that configures a kubeconfig able to connect to a k8s cluster.
I tried the following, which does not work:
cfg:
mkdir ~/.kube
kube: cfg
touch config $(ARGS)
In the args, the user should pass the cluster's config file (kubeconfig).
If there is a shorter way please let me know.
Update
I've used the following (from the answer), which partially solves the issue.
kube: cfg
case "$(ARGS)" in \
("") printf "Please provide ARGS=/some/path"; exit 1;; \
(*) cp "$(ARGS)" /some/where/else;; \
esac
The problem is the cfg prerequisite: it creates the directory even when the user doesn't provide ARGS, and on the second run, when the path is provided, the directory already exists and mkdir fails. Is there a way to avoid this, e.g. to skip cfg when ARGS is not provided?
I assume the user input is the pathname of a file. The make utility can take variable assignments as arguments, in the form of make NAME=VALUE. You refer to these in your Makefile as usual, with $(NAME). So something like
kube: cfg
case "$(ARGS)" in \
("") printf "Please provide ARGS=/some/path"; exit 1;; \
(*) cp "$(ARGS)" /some/where/else;; \
esac
called with
make ARGS=/some/path/file kube
would then execute cp /some/path/file /some/where/else. If that is not what you were asking, please rephrase the question, providing exact details of what you want to do.
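As for the update: mkdir fails on the second run because the directory already exists, and mkdir -p succeeds either way, so there is no harm in cfg running even when ARGS is missing. A minimal sketch (recipe lines must be indented with tabs; the target path ~/.kube/config is an assumption about what you want installed):
cfg:
	mkdir -p ~/.kube    # -p: succeed even if the directory already exists
kube: cfg
	@test -n "$(ARGS)" || { printf "Please provide ARGS=/some/path\n"; exit 1; }
	cp "$(ARGS)" ~/.kube/config
Called as before with make ARGS=/some/path/file kube.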

Filtering on labels in Docker API not working (possible bug?)

I'm using the Docker API to get info on containers in JSON format. Basically, I want to do a filter based on label values, but it is not working (just returns all containers). This filter query DOES work if you just use the command line docker, i.e.:
docker ps -a -f label=owner=fred -f label=speccont=true
However, if I try to do the equivalent filter query using the API, it just returns ALL containers (no filtering done), i.e.:
curl -s --unix-socket /var/run/docker.sock http:/containers/json?all=true&filters={"label":["speccont=true","owner=fred"]}
Note that I do uri escape the filters param when I execute it, but am just showing it here unescaped for readability.
Am I doing something wrong here? Or does this seem to be a bug in the Docker API? Thanks for any help you can give!
The correct syntax for filtering containers by label as of Docker API v1.41 is:
curl -s -G --unix-socket /var/run/docker.sock http://localhost/containers/json \
--data 'all=true' \
--data-urlencode 'filters={"label":["speccont=true","owner=fred"]}'
Note the automatic URL encoding as mentioned in this stackexchange post.
I thought there was a bug in the API too, but it turns out there is none. I am on API version 1.30.
I get the desired results with this call:
curl -sS localhost:4243/containers/json?filters=%7B%22ancestor%22%3A%20%5B%222bab985010c3%22%5D%7D
I generated the URL-escaped string used above with:
python3 -c 'import urllib.parse; print(urllib.parse.quote("""{"ancestor": ["2bab985010c3"]}"""))'

merge chromosomes in Plink

I have downloaded the 1000G dataset in VCF format and converted it into binary format using Plink 2.0.
Now I need to merge chromosomes 1-22.
I am using this script:
${BIN}plink2 \
--bfile /mnt/jw01-aruk-home01/projects/jia_mtx_gwas_2016/common_files/data/clean/thousand_genomes/from_1000G_web/chr1_1000Gv3 \
--make-bed \
--merge-list /mnt/jw01-aruk-home01/projects/jia_mtx_gwas_2016/common_files/data/clean/thousand_genomes/from_1000G_web/chromosomes_1000Gv3.txt \
--out /mnt/jw01-aruk-home01/projects/jia_mtx_gwas_2016/common_files/data/clean/thousand_genomes/from_1000G_web/all_chrs_1000G_v3 \
--noweb
But I get this error:
Error: --merge-list only accepts 1 parameter.
The chromosomes_1000Gv3.txt has files related to chromosomes 2-22 in this format:
chr2_1000Gv3.bed chr2_1000Gv3.bim chr2_1000Gv3.fam
chr3_1000Gv3.bed chr3_1000Gv3.bim chr3_1000Gv3.fam
....
Any suggestions what might be the issue?
Thanks
--merge-list cannot be used in combination with --bfile. A single plink command can take either --bfile/--bmerge or --merge-list, but not both.
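A sketch of the corrected command (assuming plink 1.9 semantics, where --merge-list can stand on its own, and that the chr1 fileset is added as the first line of chromosomes_1000Gv3.txt; paths are shortened here for readability, and --noweb is dropped since plink 1.9 treats it as a no-op):
${BIN}plink \
--merge-list chromosomes_1000Gv3.txt \
--make-bed \
--out all_chrs_1000G_v3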