I'm using Powershell to write a folder synchronization tool to copy files from a local folder up to AWS S3 with the AWS CLI.
The script works, as I can see files show up in S3, but the output of the aws sync command does not appear on screen (normally when aws s3 sync is run from the command line it shows each file as it uploads, the current status of all files/count, etc.).
How do I get that to happen inside a Powershell script?
Here are various things I've tried, none of which worked:
aws s3 sync $local_folder $aws_bucket
$awsio = aws s3 sync $local_folder $aws_bucket
#Out-Host -InputObject $awsio
Write-Output $awsio
Turns out the answer was the first thing I tried which was just the normal command on its own line:
aws s3 sync $local_folder $aws_bucket
I think what happened is that when I first tried it, the command was doing something in the background before actually starting to run. So if I had waited longer, I would have seen output appear on screen as I expected...
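If you also want to capture that output in a variable while still seeing it on screen, a minimal sketch using Tee-Object (reusing the $awsio name from the attempts above; note the CLI's in-place progress lines may render a little differently when piped):

aws s3 sync $local_folder $aws_bucket | Tee-Object -Variable awsio
# $awsio now holds the same output lines that were streamed to the console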
Hopefully this is a quick fix (most likely user error). I am using PowerShell to upload to AWS S3. I'm attempting to copy a number of .mp4s from a folder to an S3 location, and I'm able to copy individual files successfully using the command below:
aws s3 cp .\video1.mp4 s3://bucketname/root/source/
But when I try to copy all the files within that directory I get an error:
aws s3 cp F:\folder1\folder2\folder3\folder4\* s3://bucketname/root/source/
The user-provided path F:\folder1\folder2\folder3\folder4\* does not exist.
I've tried multiple variations on the above: no path, just *, *.mp4, .*.mp4, and (coming from a Linux background) quotation marks, etc., but I can't seem to get it working.
I was initially using this documentation: https://www.tutorialspoint.com/how-to-copy-folder-contents-in-powershell-with-recurse-parameter. I feel the answer is probably very simple, but I couldn't see what I was doing wrong.
Any help would be appreciated.
Thanks.
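In case it helps, the AWS CLI does not expand * wildcards itself, which is why it reports the literal path as not existing. A hedged sketch using --recursive with --exclude/--include filters (the paths and bucket name are the ones from the question):

# Copy only the .mp4 files from the folder (the CLI filters instead of shell globbing)
aws s3 cp F:\folder1\folder2\folder3\folder4\ s3://bucketname/root/source/ --recursive --exclude "*" --include "*.mp4"

aws s3 sync with the same --exclude/--include filters should behave similarly, and it only uploads files that have changed.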
I have a notebook instance with a notebook file. I use the instance's startup script to run this notebook file using papermill.
I want the notebook file to be run only when I remotely start the instance, and not from the Google Cloud Console.
I'd like to know if one of these is possible, or if there's another solution:
1 - The script will detect that the instance was started from the dashboard.
2 - I will remove the startup script and use another script that can be run by a remote command.
3 - The shutdown script will remove the startup script.
The script definition is kept under /var/run/google.startup.script. For the scenarios mentioned:
1 - It seems complicated to detect whether the instance was started from the dashboard.
2 - It is feasible to delete the startup script mentioned earlier and run your own script remotely.
3 - Delete the script.
Before deleting the script I suggest you make a backup of it in case any issue arises from deleting it. Also keep in mind that AI Platform Notebooks is a managed service; any admin configuration could cause potential issues in your instance, so be careful when deleting or modifying the startup script.
My advice would actually be to run the notebook via a scheduler system; I posted the multiple options here: GCP run a prediction of a model every day
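For option 2, a minimal sketch of triggering the notebook remotely instead of from a startup script, assuming gcloud access to the instance and that papermill is installed there (the instance name, zone, and notebook paths are placeholders):

# Run papermill on the instance over SSH; all names here are hypothetical
gcloud compute ssh my-notebook-instance --zone us-central1-a --command "papermill /home/jupyter/input.ipynb /home/jupyter/output.ipynb"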
I'm reading through the CodeDeploy reference docs here and can't find the equivalent of the aws deploy push command to send up a new version of my application to S3, ready for deployment.
Do I need to just zip these files myself and send them to S3 with the other PowerShell tools instead?
Since push is not a single API call, but rather a multistep operation, the simplest way to automate it in a PowerShell script is to literally put the command in the script:
aws deploy push
You may need to make sure the aws executable is on your path.
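For reference, a hedged sketch of what that line usually looks like once the required arguments are filled in (the application name, bucket, and key are placeholders, not from the question):

# Bundle the current directory and upload it to S3 as a new application revision
aws deploy push --application-name MyApp --s3-location s3://my-deploy-bucket/MyApp.zip --source . --description "Pushed from PowerShell"

The push output typically includes a ready-made aws deploy create-deployment command you can run as the next step.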
I have to think this is a solved issue, but I am just not getting it to work. So I have come to you, Stack Overflow, with this issue:
I have a Windows Server 2016 machine running in Amazon EC2. I have a machine.ps1 script in a config directory.
I create an image of the box. (I have tried with checking noreboot and unchecking it)
When I create a new instance of the image I want it to run machine.ps1 at launch to set the computer name and then set routes and some config settings for the box. The goal is to do this without logging into the box.
I have read and tried:
Running Powershell scripts at Start up
and used this to ensure user data was getting passed in:
EC2 Powershell Launch Tools
I have tried setting up a scheduled task that runs the machine.ps1 on start up (It just hangs)
I see the initializeInstance.ps1 start-up task and have even tried to co-opt that, replacing the line that runs user data with the line that runs my script. Nothing.
If I log into the box and run machine.ps1, it will restart the computer and set the computer name and then I need to run it once more to set routes. This works manually. I just need to find a way to do it automagically.
I want to launch these instances from PowerShell, not with launch configurations and auto scaling.
You can use User data
Whenever you deploy a new server, workstation or virtual machine there is nearly always a requirement to make final changes to the system before it's ready for use. Typically this is done with a post-deployment script that might be triggered manually on start-up, or it might be a final step in a Configuration Manager task sequence, or if you're using Azure you may use the Custom Script Extension. So how do you achieve similar functionality using EC2 instances in Amazon Web Services (AWS)? If you've created your own Amazon Machine Image (AMI) you can set the script to run from the RunOnce registry key, but that can be a cumbersome approach, particularly if you want to make changes to the script and it's been embedded into the image. AWS offers a much more dynamic method of injecting a script to run upon start-up through a feature called user data.
Please refer to the following link for the same:
PowerShell User data
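As a rough sketch, Windows user data is just the script wrapped in <powershell> tags; on Server 2016 with EC2Launch the optional <persist> tag keeps it running on every boot rather than only the first launch (the path to machine.ps1 is an assumption based on the question's config directory):

<powershell>
# Path is assumed; point this at wherever machine.ps1 lives in the image
& C:\config\machine.ps1
</powershell>
<persist>true</persist>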
Windows typically won't let a PowerShell script call another PowerShell script unless it is being run as Administrator. It is a weird 'safety' feature. But it is perfectly okay to load the .ps1 files and use any functions inside them.
The UserData script is typically run as "system". You would THINK that would pass muster. But it fails...
The SOLUTION: Make ALL of your scripts into powershell functions instead.
In your machine.ps1 - wrap the contents with function syntax
function MyDescriptiveName { <original script contents> }
Then in UserData - use the functions like this
# To use a relative path
Set-Location -Path <my location>
# Load script file into process memory
. <full-or-relpath>/machine.ps1
# Call function
MyDescriptiveName <params-if-applicable>
If the function needs to call other functions (aka scripts), you'll need to make those scripts into functions and load the script file into process memory in UserData also.
I've got a powershell script to archive log files. The script is intended to be run daily from a scheduled task as a specified user 'LogArchiver'.
The script uses the aws-cli to copy the file to S3 and needs sufficient credentials to access the bucket which are stored in the user directory C:\Users\LogArchiver\.aws as recommended in the aws docs.
When I run the script from a PowerShell terminal running as the user, it recognises the credentials and successfully copies files to S3. But when it is run from the scheduled task, it doesn't recognise the AWS credentials, and the transcript written to file shows the message:
Unable to locate credentials. You can configure credentials by running "aws configure".
Does anyone know why this is and any fixes for it? I've read in another post about scheduled tasks doing funny things to environment variables, but I'm not sure if that would cause the problems I'm having.
Turns out that it was a bug in Server 2012 and is fixed by this patch:
https://support.microsoft.com/en-gb/kb/3133689
The 'fix' for me was to change the USERPROFILE environment variable at the top of the script with
$env:USERPROFILE = "C:\Users\LogArchiver"
Not elegant, but it works.
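An alternative that avoids overriding USERPROFILE (not from the original post, just a hedged option) is to point the CLI directly at the credential files using the environment variables the AWS CLI honours:

# Point the AWS CLI at the LogArchiver user's credential/config files explicitly
$env:AWS_SHARED_CREDENTIALS_FILE = "C:\Users\LogArchiver\.aws\credentials"
$env:AWS_CONFIG_FILE = "C:\Users\LogArchiver\.aws\config"
aws s3 cp $logFile s3://my-log-bucket/   # $logFile and the bucket name are placeholders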