I'm trying to add a rule to enable requests from a DevOps pipeline to an Azure Function App.
I run the following Azure CLI command (either from the pipeline or from a VS Code terminal window):
az functionapp config access-restriction add -g rg-name-here -n func-name-here --rule-name devopsRB --action Allow --ip-address "51.142.236.175/27" --priority 146
This gives back the following error:
51.142.236.175/27 has host bits set
If I add the same rule via the Azure portal it works ok.
Anyone see what I'm doing wrong?
The error 51.142.236.175/27 has host bits set also occurred for me. The CLI rejects a CIDR whose address is not the network base address (the Azure portal silently normalizes the address for you, which is why the same rule succeeds there). For a /27 the last five bits must be zero, so the base address that contains 51.142.236.175 is 51.142.236.160/27. I ran an Azure CLI task with the command
az functionapp config access-restriction add -g GROUP -n NAME --rule-name RULEname --action Allow --ip-address 51.142.0.0/27 --priority 300
successfully. Consider using a base address such as 51.142.0.0/27.
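If you want to compute the acceptable base address for any IP and prefix length yourself, you can mask off the host bits. This is a quick bash sketch, not part of the original answer:

```shell
# Compute the network base address the CLI will accept for a given
# IP and prefix length, by zeroing the host bits.
ip="51.142.236.175"; prefix=27
IFS=. read -r a b c d <<< "$ip"
num=$(( (a << 24) | (b << 16) | (c << 8) | d ))
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( num & mask ))
echo "$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))/$prefix"
# → 51.142.236.160/27
```

Feeding the printed address to --ip-address avoids the "has host bits set" error.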
I'm trying to move to Windows PowerShell instead of cmd.
One of the commands I run often connects to GCP Compute Engine VMs over SSH and binds the machine's ports to my local machine.
I use the following template (taken from GCP's docs):
gcloud compute ssh VM_NAME --project PROJECT_ID --zone ZONE -- -L LOCAL_PORT:localhost:REMOTE_PORT -- -L LOCAL_PORT:localhost:REMOTE_PORT
This works great in cmd, but when I try to run it in PowerShell I get the following error:
(gcloud.compute.ssh) unrecognized arguments:
-L
8010:localhost:8888
What am I missing?
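The thread doesn't include an answer, but a likely cause (an assumption, not confirmed here) is that PowerShell itself consumes a bare -- as its end-of-parameters token, so gcloud never receives it and treats the -L arguments as unrecognized. Quoting the token makes PowerShell pass it through literally:

```shell
# Quote the "--" so PowerShell hands it to gcloud instead of swallowing it.
gcloud compute ssh VM_NAME --project PROJECT_ID --zone ZONE "--" -L 8010:localhost:8888
```

In cmd the bare -- has no special meaning, which would explain why the unquoted form works there.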
I am facing the error below when I try to enable the Application Gateway addon for AKS:
az aks enable-addons -n pocakscluster -g POC-RG -a ingress-appgw --appgw-id $appgwId
UnrecognizedArgumentError: unrecognized arguments: --appgw-id /subscriptions/#####&&&&/resourceGroups/POC-RG/providers/Microsoft.Network/applicationGateways/pocappgw
AKS is running on version 1.17.X
AppGW has WAF_v2 SKU
The same command was working fine earlier
Are you executing these CLI commands from your Mac? If so, it looks like there is an issue with the macOS version of the CLI. Please continue using Cloud Shell for now, and let us know whether you can execute the command from Cloud Shell.
I am trying to set up Datalab from my Chromebook using this tutorial: https://cloud.google.com/dataproc/docs/tutorials/dataproc-datalab. However, when I try to create an SSH tunnel following https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces#create_an_ssh_tunnel, I keep receiving the following error:
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- Project 57800607318 is not found and cannot be used for API calls. If it is recently created, enable Compute Engine API by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=57800607318 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The error message would lead me to believe my "Compute Engine API" is not enabled. However, I have double checked and "Compute Engine API" is enabled.
Here is what I am entering into Cloud Shell:
gcloud compute ssh ${test-cluster-m} \
--project=${datalab-test-229519} --zone=${us-west1-b} -- \
-4 -N -L ${8080}:${test-cluster-m}:${8080}
The ${} syntax expands a shell variable with that name; in your command you have wrapped the literal values themselves. The variables are set in the previous step with:
export PROJECT=project; export HOSTNAME=hostname; export ZONE=zone; export PORT=number
In this case that would be:
export PROJECT=datalab-test-229519; export HOSTNAME=test-cluster-m; export ZONE=us-west1-b; export PORT=8080
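Incidentally, wrapping the literal values in ${} does not fail loudly, because the shell parses ${test-cluster-m} with its default-value expansion, "the value of $test, or the literal cluster-m if test is unset":

```shell
# With no variable named "test" set, ${test-cluster-m} silently expands to
# the fallback text "cluster-m" instead of raising an error.
unset test
echo "${test-cluster-m}"
# → cluster-m
```

That is why the mistaken command runs at all: gcloud is invoked with silently mangled values rather than a shell error.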
Either try this:
gcloud compute ssh test-cluster-m \
--project datalab-test-229519 --zone us-west1-b -- \
-D 8080 -N
Or access the environment variables with:
gcloud compute ssh ${HOSTNAME} \
--project=${PROJECT} --zone=${ZONE} -- \
-D ${PORT} -N
Also check that the VM you are trying to reach is running.
I have installed the gcloud/bq/gsutil command-line tools on a Linux server, and several accounts are configured on it.
gcloud config configurations list
NAME  IS_ACTIVE  ACCOUNT    PROJECT  DEFAULT_ZONE  DEFAULT_REGION
gaa   True       a#xxx.com  a
gab   False      b#xxx.com  b
Now I have a problem running gaa and gab on this server at the same time, because they have different access controls on BigQuery and Cloud Storage. I use the following bq and gsutil commands.
Set up the account:
gcloud config set account a#xxx.com
Copy data from BigQuery to Cloud Storage:
bq extract --compression=GZIP --destination_format=NEWLINE_DELIMITED_JSON 'nl:82421.ga_sessions_20161219' gs://ga-data-export/82421/82421_ga_sessions_20161219_*.json.gz
Download the data from Cloud Storage to the local system:
gsutil -m cp gs://ga-data-export/82421/82421_ga_sessions_20161219*gz .
With a single account this works fine, but several accounts need to run on the same server at the same time, and I don't know how to handle that.
Per the gcloud documentation on configurations, you can switch your active configuration via the --configuration flag for any gcloud command. However, gsutil does not have such a flag; you must set the environment variable CLOUDSDK_ACTIVE_CONFIG_NAME:
$ # Shell 1
$ export CLOUDSDK_ACTIVE_CONFIG_NAME=gaa
$ gcloud # ...
$ # Shell 2
$ export CLOUDSDK_ACTIVE_CONFIG_NAME=gab
$ gsutil # ...
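Because the override is just an environment variable, you can also scope it to a single command with a prefix assignment instead of exporting it for the whole shell. This is standard POSIX behavior, demonstrated here with a stand-in command; put gsutil or bq in place of the sh -c call:

```shell
# The prefix assignment applies only to this one invocation; the
# surrounding shell's environment is untouched afterwards.
CLOUDSDK_ACTIVE_CONFIG_NAME=gab sh -c 'echo "config in use: $CLOUDSDK_ACTIVE_CONFIG_NAME"'
# → config in use: gab
```

That lets a single script interleave commands for the gaa and gab configurations without two shells.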
Is it possible to do a silent deployment when using gcloud app deploy?
When I run the command gcloud app deploy ./deployment/app.yaml --version v1 it always prompts:
Do you want to continue (Y/n)? Y
How can I automate this? Is there a flag that suppresses the prompt?
You're looking for the --quiet flag, available across all gcloud commands:
$ gcloud --help
--quiet, -q
Disable all interactive prompts when running gcloud commands. If input
is required, defaults will be used, or an error will be raised.
$ gcloud app deploy --quiet
Or also:
$ gcloud app deploy -q