How to make PowerShell pass a list as an argument to gcloud - powershell

I want to submit my neural network model to Google Cloud via the following command, as shown in the tutorial:
gcloud ai-platform jobs submit training ${JOB_NAME} \
--region=us-central1 \
--master-image-uri=gcr.io/cloud-ml-public/training/pytorch-gpu.1-10 \
--scale-tier=CUSTOM \
--master-machine-type=n1-standard-8 \
--master-accelerator=type=nvidia-tesla-p100,count=1 \
--job-dir=${JOB_DIR} \
--package-path=./trainer \
--module-name=trainer.task \
-- \
--train-files=gs://cloud-samples-data/ai-platform/chicago_taxi/training/small/taxi_trips_train.csv \
--eval-files=gs://cloud-samples-data/ai-platform/chicago_taxi/training/small/taxi_trips_eval.csv \
--num-epochs=10 \
--batch-size=100 \
--learning-rate=0.001
I am working in PowerShell and I have a problem with the --master-accelerator argument, which should be a dictionary. I don't know how to pass such a value to gcloud. I have tried #{count=1; type=...}, but received a Bad syntax for dict arg: [#] error.
How can I pass a list of parameters in PowerShell such that the gcloud submit command accepts it?
I thank you in advance for your help.
EDIT
I have tried using delimiters such as ^^^^:^^^^, following this, but it still does not work (Invalid delimiter).
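
For reference, the value gcloud expects is the plain key=value,key=value string, not a PowerShell hashtable. A common workaround (a sketch, not verified against this exact tutorial; the other flags from the command above stay the same) is to quote the value so PowerShell does not split it at the comma:

# Quote the whole key=value list so PowerShell passes it through as one argument:
gcloud ai-platform jobs submit training $JOB_NAME `
    --region=us-central1 `
    --scale-tier=CUSTOM `
    --master-machine-type=n1-standard-8 `
    --master-accelerator="type=nvidia-tesla-p100,count=1" `
    --job-dir=$JOB_DIR `
    --package-path=./trainer `
    --module-name=trainer.task

If that still fails, PowerShell's stop-parsing token --% passes everything after it to the native command verbatim, at the cost of losing variable expansion such as ${JOB_NAME}.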

Related

Detect if argo workflow is given unused parameters

My team runs into a recurring issue where, if we misspell a parameter for our Argo workflows, that parameter gets ignored without error. For example, say I run the following submission command, where the true (optional) parameter is validation_data_config:
argo submit --from workflowtemplate/training \
-p output=$( artifacts resolve-url $BUCKET $PROJECT $TAG) \
-p tag=${TAG} \
-p training_config="$( yq . training.yaml )" \
-p training_data_config="$( yq . train-data.yaml )" \
-p validation-data-config="$( yq . validation-data.yaml )" \
-p wandb-project="cyclegan_c48_to_c384" \
-p cpu=4 \
-p memory="15Gi" \
--name "${NAME}" \
--labels "project=${PROJECT},experiment=${EXPERIMENT},trial=${TRIAL}"
The validation configuration is ignored and the job is run without validation metrics because I used hyphens instead of underscores.
I also understand the parameters should use consistent hyphen/underscore naming, but we've also had this happen with e.g. the "memory" parameter.
Is there any way to detect this automatically, to have the submission error if a parameter is unused, or even to get a list of parameters for a given workflow template so I can write such detection myself?
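
One possible starting point (a sketch; it assumes the argo CLI and jq are available and that the template is named training): pull the template's declared parameters as JSON and compare them against what the submit command passes.

# List the parameters the workflow template actually declares:
argo template get training -o json \
    | jq -r '.spec.arguments.parameters[].name'

# Anything passed with -p that is not in this list is a candidate misspelling.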

ERROR: (gcloud.sql.instances.create) Projects instance [my-project] not found: The requested flag is either misspelled or unsupported by Cloud SQL

When I try to create a Cloud SQL instance using gcloud, I get this error. Any thoughts, folks?
--database-version=$DB_VERSION \
--cpu=$NUMBER_CPUS \
--memory=$MEMORY_SIZE \
--storage-type=$STORAGE_TYPE \
--storage-size=$STORAGE_SIZE \
--storage-auto-increase \
--database-flags=$DATABASE_FLAGS \
--region=$REGION \
--authorized-networks=$NETWORKS \
--assign-ip \
--project=$PROJECT_ID
It doesn't matter whether I include the project ID in this command or not.
Thanks!
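
Not a full answer, but two checks that often narrow this class of error down (a sketch; the variable names come from the snippet above, and the instance-name argument is a hypothetical placeholder since it is not shown):

# 1) Print the fully expanded command first; an empty or unquoted variable can make
#    gcloud treat the next token as the instance name or as an unknown flag:
echo gcloud sql instances create "$INSTANCE_NAME" \
    --database-version="$DB_VERSION" \
    --database-flags="$DATABASE_FLAGS" \
    --authorized-networks="$NETWORKS" \
    --project="$PROJECT_ID"

# 2) Confirm the flags exist in the installed gcloud release:
gcloud sql instances create --help | grep -E -- '--(database-flags|authorized-networks|assign-ip)'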

In `aws cloudformation deploy --parameter-overrides`, how to pass multiple values to `List<AWS::EC2::Subnet::ID>` parameter?

I am using this CloudFormation template
The List parameter I'm trying to pass values to is:
"Subnets" : {
"Type" : "List<AWS::EC2::Subnet::Id>",
"Description" : "The list of SubnetIds in your Virtual Private Cloud (VPC)",
"ConstraintDescription" : "must be a list of at least two existing subnets associated with at least two different availability zones. They should be residing in the selected Virtual Private Cloud."
},
I've written a utility script that looks like this:
#!/bin/bash
SUBNET1=subnet-abcdef
SUBNET2=subnet-ghijlm
echo -e "\n==Deploying stack.cf.yaml===\n"
aws cloudformation deploy \
--region $REGION \
--profile $CLI_PROFILE \
--stack-name $STACK_NAME \
--template-file stack.cf.json \
--no-fail-on-empty-changeset \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides \
VpcId=$VPC_ID \
Subnets="$SUBNET1 $SUBNET2" \ #<---------------this fails
InstanceType=$EC2_INSTANCE_TYPE \
OperatorEMail=$OPERATOR_EMAIL \
KeyName=$KEY_NAME \
If I deploy this, after a while my stack fails to deploy saying that a Subnet with the value "subnet-abcdef subnet-ghijlmn" does not exist.
The correct way to pass values to a List parameter is to comma-separate them.
So:
#!/bin/bash
SUBNET1=subnet-abcdef
SUBNET2=subnet-ghijlm
aws cloudformation deploy --parameter-overrides Subnets="$SUBNET1,$SUBNET2"
will work
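
Spelled out against the original script, that looks something like this (a sketch; the values are the placeholders from the question):

#!/bin/bash
SUBNET1=subnet-abcdef
SUBNET2=subnet-ghijlm

# Comma-separate the list values and quote the whole Key=Value pair so the
# shell hands it to the CLI as a single argument:
aws cloudformation deploy \
    --region "$REGION" \
    --profile "$CLI_PROFILE" \
    --stack-name "$STACK_NAME" \
    --template-file stack.cf.json \
    --no-fail-on-empty-changeset \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides \
        VpcId="$VPC_ID" \
        "Subnets=$SUBNET1,$SUBNET2" \
        InstanceType="$EC2_INSTANCE_TYPE" \
        OperatorEMail="$OPERATOR_EMAIL" \
        KeyName="$KEY_NAME"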
Tried every possible solution found online, none worked.
According to the documentation below, you should escape the comma with double backslashes. Tried that, didn't work either.
https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack.html
What worked FOR ME (apparently this is very environment-dependent) was the command below, escaping the comma with just one backslash.
aws cloudformation create-stack --stack-name teste-memdb --template-body file://memorydb.yml --parameters ParameterKey=VpcId,ParameterValue=vpc-xxxx ParameterKey=SubnetIDs,ParameterValue=subnet-xxxxx\,subnet-yyyy --profile whatever
From the documentation here,
a List/Array can be passed just like a Python list:
'["value1", "value2", "value3"]'
Also note that CloudFormation internally uses Python.

Setting Dynamic Properties in Dataproc Job

Here's what I am trying to accomplish. I want to create a workflow template so that I can spin up a cluster, run a job, and delete the cluster. Within the job, I want to pass in properties that can be set dynamically. For example, set a property to the current date.
Below is a simple example. I use the date function correctly, but it is evaluated at creation time, so it looks like it will always be 12/31/2020 if I set up the workflow today. I know I can delete the job and add it back to the template for each run, but I was hoping for a simpler way.
gcloud dataproc workflow-templates create workflow-mk-test --region us-east1 --project data-engineering-doz4
gcloud dataproc workflow-templates set-managed-cluster workflow-mk-test \
--cluster-name=cluster-mk-test \
--project data-engineering-doz4 \
--image-version=1.3-ubuntu18 \
--bucket data-engineering-dev \
--region us-east1 \
--subnet ml-data-engineering-east1 \
--no-address \
--zone us-east1-b \
--master-machine-type n1-standard-4 \
--master-boot-disk-size 15 \
--num-workers 2 \
--worker-machine-type n1-standard-4 \
--worker-boot-disk-size 15
gcloud dataproc workflow-templates add-job pyspark gs://data-engineering-dev/jobs/millard-test.py \
--workflow-template=workflow-mk-test \
--step-id=test-job \
--region=us-east1 \
--project=data-engineering-doz4 \
-- date `date -v -1d '+%Y/%m/%d'` \
--output-location s3n://missionlane-data-engineering-dev-us-east-1/delete-me/`date -v -1d '+%Y/%m/%d'`
Dynamic properties generated by running shell commands are not a supported feature of Dataproc jobs. In this case, you might want to consider making the logic part of your job, i.e., getting the current date dynamically in millard-test.py.
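
A minimal sketch of that suggestion (the file name and output prefix are taken from the question; how the job actually consumes its arguments is an assumption):

# millard-test.py (sketch): compute the run date when the job executes,
# instead of baking it into the workflow template at creation time.
from datetime import date, timedelta

run_date = (date.today() - timedelta(days=1)).strftime("%Y/%m/%d")
output_location = (
    "s3n://missionlane-data-engineering-dev-us-east-1/delete-me/" + run_date
)
print("Run date:", run_date)
print("Writing output to:", output_location)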

Magento 2 web configuration stuck

I have a problem installing Magento 2. I set the web configuration and click the Next button, but it does not go on to the next step. I tried to set the admin URL.
You can try installing it via the command line.
bin/magento setup:install --backend-frontname="adminlogin" \
--key="YOUR MAGENTO2 REPO KEY" \
--db-host="localhost" \
--db-name="DB_NAME" \
--db-user="MYSQL USERNAME" \
--db-password="PASSWORD FOR MYSQL USER" \
--language="en_US" \
--currency="USD" \
--timezone="America/New_York" \
--use-rewrites=1 \
--use-secure=0 \
--base-url="http://YOUR.DOMAIN" \
--base-url-secure="https://YOUR.DOMAIN" \
--admin-user=adminuser \
--admin-password=admin123# \
--admin-email=admin@newmagento.com \
--admin-firstname=admin \
--admin-lastname=user \
--cleanup-database
Try it, and if it doesn't work for you, let me see the error log.
You may have trouble with your server or permissions configuration.
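
If the command-line install also fails, the standard Magento 2 log files are the quickest way to see the real error (a sketch; paths assume a default Magento 2 root):

# Check the application logs for the underlying error:
tail -n 50 var/log/system.log var/log/exception.log

# And confirm the web server user can write to the usual directories:
ls -ld var generated pub/static pub/media app/etc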