I have been trying to apply my startup scripts to new Windows instances on Google Compute Engine as described here; however, when I check the instances there is no trace of the scripts ever being executed. Here is the gcloud command I am running:
gcloud compute instances create "my-instance"
--project "my-project"
--zone "us-central1-a"
--machine-type "g1-small"
--network "default"
--metadata "gce-initial-windows-user=my-user" "gce-initial-windows-password=my-pass"
--maintenance-policy "MIGRATE"
--scopes "storage-ro"
--tags "http-server" "https-server"
--image "https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/windows-server-2008-r2-dc-v20150110"
--boot-disk-type "pd-standard"
--boot-disk-device-name "my-instance"
--metadata-from-file sysprep-oobe-script-ps1=D:\Path\To\startup.ps1
I tried all three startup script types (sysprep-specialize-script-ps1, sysprep-oobe-script-ps1, windows-startup-script-ps1), but none worked. I can't see any indication of them in the Task Scheduler or Event Viewer either. The file exists on my system and works when I run it manually. How can I get this working?
A good way to debug PowerShell scripts is to have them write to the serial console (COM1). You'll be able to see the script's output in GCE's serial port output:
gcloud compute instances get-serial-port-output my-instance --zone us-central1-a
If there's no script you'll see something like:
Calling oobe-script from metadata.
attributes/sysprep-oobe-script-bat value is not set or metadata server is not reachable.
attributes/sysprep-oobe-script-cmd value is not set or metadata server is not reachable.
attributes/sysprep-oobe-script-ps1 value is not set or metadata server is not reachable.
Running schtasks with arguments /run /tn GCEStartup
--> SUCCESS: Attempted to run the scheduled task "GCEStartup".
-------------------------------------------------------------
Instance setup finished. windows is ready to use.
-------------------------------------------------------------
Booting on date 01/25/2015 06:26:26
attributes/windows-startup-script-bat value is not set or metadata server is not reachable.
attributes/windows-startup-script-cmd value is not set or metadata server is not reachable.
attributes/windows-startup-script-ps1 value is not set or metadata server is not reachable.
Make sure that the contents of the ps1 file are actually attached to the instance:
gcloud compute instances describe my-instance --zone us-central1-a --format json
The JSON dump should contain the PowerShell script within it.
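To narrow that output down to just the instance metadata, gcloud's format projections can help (a minimal sketch; the exact projection syntax may vary between SDK versions):
gcloud compute instances describe my-instance --zone us-central1-a --format="json(metadata.items)"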
Lastly, a great way to debug PowerShell startup scripts is to write their output to the serial console.
You can print log messages and see them in the Google Developer Console > Compute > Compute Engine > VM Instances > (Instance Name). Then scroll to the bottom and click the expand option for "Serial console".
Function Write-SerialPort ([string] $message) {
    # COM1 is the first serial port; anything written to it shows up in the instance's serial port output.
    $port = New-Object System.IO.Ports.SerialPort COM1,9600,None,8,One
    $port.Open()
    $port.WriteLine($message)
    $port.Close()
}
Write-SerialPort ("Testing GCE Startup Script")
This command worked for me; I had to make sure that the script was saved as ASCII, since PowerShell ISE saves files with a different encoding that breaks gcloud compute (see the re-encoding sketch after the command below).
gcloud compute instances create testwin2 --zone us-central1-a
--metadata-from-file sysprep-oobe-script-ps1=testconsole.ps1 --image windows-2008-r2
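If the script was authored in PowerShell ISE, one way to re-save it as ASCII before uploading is the following sketch (the output file name is arbitrary):
Get-Content .\testconsole.ps1 | Set-Content -Encoding Ascii .\testconsole-ascii.ps1
Then point --metadata-from-file at the re-encoded copy.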
Related
I'm trying to move to Windows PowerShell instead of cmd.
One of the commands I run often connects to GCP Compute Engine instances over SSH and binds the machine's ports to my local machine.
I use the following template (taken from GCP's docs):
gcloud compute ssh VM_NAME --project PROJECT_ID --zone ZONE -- -L LOCAL_PORT:localhost:REMOTE_PORT -- -L LOCAL_PORT:localhost:REMOTE_PORT
This works great in cmd, but when I try to run it in PowerShell I get the following error:
(gcloud.compute.ssh) unrecognized arguments:
-L
8010:localhost:8888
What am I missing?
Is it possible to run docker without elevated privileges, e.g. docker version?
I'm trying to run a command on another machine (a Windows server with Docker running as a service) with PowerShell Invoke-Command, but it seems that as long as Docker insists on elevated privileges I cannot.
So if I can get "docker version" to work I'm all set.
The error I get is:
docker.exe: error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.39/containers/create: open //./pipe/docker_engine: Access is denied. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
See 'C:\Program Files\Docker\docker.exe run --help'.
It works with an elevated PowerShell.
Any ideas?
This is normal - by default, a local named pipe is used for the Docker CLI to communicate with the service (aka daemon).
For development use, you can configure the host machine's Docker service ("daemon") for TCP access, but this is the least secure option. Just put this text in the daemon.json file:
{
"hosts": ["tcp://0.0.0.0:2375"]
}
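After editing daemon.json (on Windows Server the default location is C:\ProgramData\docker\config\daemon.json, though your install may differ), restart the Docker service so the change takes effect:
Restart-Service docker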
Once this is done you can connect with e.g.
docker --host tcp://1.2.3.4:2375 version
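Equivalently, the Docker CLI honors the DOCKER_HOST environment variable, so you can set it once per PowerShell session instead of repeating --host (the address is the same placeholder as above):
$env:DOCKER_HOST = "tcp://1.2.3.4:2375"
docker version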
If this is for production use, you probably need to look at a container orchestration system.
A middle ground would be to use Attach-PSSession to attach to an admin PowerShell session on the remote machine. This still requires a privileged user but does work remotely.
I have installed the gcloud/bq/gsutil command line tools on one Linux server.
We have several accounts configured on this server.
gcloud config configurations list
NAME IS_ACTIVE ACCOUNT PROJECT DEFAULT_ZONE DEFAULT_REGION
gaa True a#xxx.com a
gab False b#xxx.com b
Now I have a problem running both gaa and gab on this server at the same time, because they have different access controls on BigQuery and Cloud Storage.
I will use the commands below (bq and gsutil):
Set up the account:
gcloud config set account a#xxx.com
Copy data from BigQuery to Cloud Storage:
bq extract --compression=GZIP --destination_format=NEWLINE_DELIMITED_JSON 'nl:82421.ga_sessions_20161219' gs://ga-data-export/82421/82421_ga_sessions_20161219_*.json.gz
Download data from Cloud Storage to the local system:
gsutil -m cp gs://ga-data-export/82421/82421_ga_sessions_20161219*gz .
If I only run one account, it is not a problem.
But with several accounts needing to run on one server at the same time, I have no idea how to deal with this case.
Per the gcloud documentation on configurations, you can switch your active configuration via the --configuration flag for any gcloud command. However, gsutil does not have such a flag; you must set the environment variable CLOUDSDK_ACTIVE_CONFIG_NAME:
$ # Shell 1
$ export CLOUDSDK_ACTIVE_CONFIG_NAME=gaa
$ gcloud # ...
$ # Shell 2
$ export CLOUDSDK_ACTIVE_CONFIG_NAME=gab
$ gsutil # ...
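For one-off gcloud commands, you can also pass the configuration inline with the global --configuration flag mentioned above (configuration names taken from the question):
$ gcloud --configuration=gab compute instances list
Note that this flag only affects gcloud itself; for bq and gsutil you still need the environment variable.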
I am trying to set up a cluster with an initialization script, but I get the following error:
[BAD JSON: JSON Parse error: Unexpected identifier "Google"]
In the log folder the init script output log is absent.
This seems rather strange, as it seemed to work last week, and the error message does not seem related to the init script but rather to the input arguments for the cluster creation. I used the following command:
gcloud beta dataproc clusters create <clustername> --bucket <bucket> --zone <zone> --master-machine-type n1-standard-1 --master-boot-disk-size 10 --num-workers 2 --worker-machine-type n1-standard-1 --worker-boot-disk-size 10 --project <projectname> --initialization-actions <gcs-uri of script>
Apparently changing
#!/bin/sh
to
#!/bin/bash
and removing all "sudo" occurrences did the trick.
This particular error occurs most often when the initialization script is in a Cloud Storage (GCS) bucket to which the project running the cluster does not have access.
I would recommend double-checking that the project being used for the cluster has read access to the bucket.
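A quick way to test that is to try listing the script with the credentials/project used by the cluster; the bucket and script names below are placeholders:
gsutil ls gs://<bucket>/<init-script>.sh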
I utilise a number of 'throwaway' servers in AWS and we're looking at trying to keep the cost of these down.
Initially, we're looking for a fairly basic 'awsec2 stop all' command to be run on a scheduled basis from a server we do know will be running 24/7.
Upon checking what AWS has documented, it appears that we need to pull in all the currently running instances, grab their IDs, and then pass them into the command, rather than simply stating that I want all instances turned off.
Is there a better method of collecting these IDs, or simply a way to issue a 'stop all'?
Appreciate the help.
The AWS CLI provides built-in JSON parsing with the --query option. Additionally, you can use the --filter option to execute stop commands on running instances only.
aws ec2 describe-instances \
--filter Name=instance-state-name,Values=running \
--query 'Reservations[].Instances[].InstanceId' \
--output text | xargs aws ec2 stop-instances --instance-ids
This is untested, but should do the trick with the AWS Tools for PowerShell:
(Get-EC2Instance) | % {$_.RunningInstance} | % {Stop-EC2Instance $_.InstanceId}
In plain English, the line above gets a collection of EC2 instance objects (Amazon.EC2.Model.Reservation), grabs the RunningInstance property of each (a collection of various properties relating to the instance), and uses that to grab the InstanceId of each and stop the instance.
These functions are mapped as follows:
Get-EC2Instance -> ec2-describe-instances
Stop-EC2Instance -> ec2-stop-instances
Be sure to check out the help for Stop-EC2Instance; it has some useful parameters like -Terminate and -Force that you may be interested in.
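For example, the standard PowerShell help cmdlet shows those parameters:
Get-Help Stop-EC2Instance -Detailed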
This one-liner will stop all the instances:
for i in $(aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId'); do aws ec2 stop-instances --instance-ids $i; done
Provided:
You have the AWS CLI installed (http://aws.amazon.com/cli/)
You have the jq JSON parser installed (http://stedolan.github.io/jq/)
And yes, the syntax above is specific to the Linux Bash shell. You can mimic the same on PowerShell for Windows and figure out a PowerShell way of parsing the JSON (a rough sketch follows below).
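A rough, untested PowerShell sketch of the same loop, using ConvertFrom-Json to parse the CLI's output (the filter and query are borrowed from the earlier answer):
# Collect the IDs of running instances, then stop them in one call
$response = aws ec2 describe-instances --filter Name=instance-state-name,Values=running --output json | Out-String | ConvertFrom-Json
$ids = $response.Reservations.Instances.InstanceId
if ($ids) { aws ec2 stop-instances --instance-ids $ids }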
If anyone ever wants to do what Peter Moon described via AWS Data Pipeline:
aws ec2 describe-instances --region eu-west-1 --filter Name=instance-state-name,Values=running --query 'Reservations[].Instances[].InstanceId' --output text | xargs aws ec2 stop-instances --region eu-west-1 --instance-ids
It's basically the same command, but you have to add --region after describe-instances and after stop-instances to make it work. Watch out for the a/b/c suffix that's usually part of the zone name (e.g. eu-west-1a); including it in the region here does seem to cause errors.