How do we copy a file from S3 to the /usr/local/tomcat/webapps/ROOT/js directory? - amazon-ecs

I am looking to copy a file from S3 into a container created in my AWS ECS cluster. How do we copy the file from S3 to the /usr/local/tomcat/webapps/ROOT/js directory? I am passing it in CMD in the task definition, but it is not working.
I tried it via the command and it is throwing "executable file not found in $PATH: unknown".
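A common cause of "executable file not found in $PATH" is that the command is given as a single string (or names a binary the image does not contain), so the container runtime tries to execute the whole string as one executable. A minimal sketch of a workaround, assuming the image has a shell and the AWS CLI installed (bucket, key and file names are hypothetical), wraps the copy in a shell and then starts Tomcat:
# hedged sketch; bucket and file names are hypothetical
/bin/sh -c "aws s3 cp s3://my-bucket/app.js /usr/local/tomcat/webapps/ROOT/js/app.js && catalina.sh run"
In the task definition this corresponds to a command array like ["/bin/sh", "-c", "aws s3 cp ... && catalina.sh run"], and the task role needs s3:GetObject permission on the object.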

Related

How to copy or move files from one directory to another directory inside a Kubernetes pod (Azure DevOps release pipeline)

I would like to move files that are available in the system working directory of the Azure pipeline into a Kubernetes pod.
Method one (kubectl cp command)
kubectl cp D:\a\r1\a\test-files\westus\test.txt /test-745f6564dd:/inetpub/wwwroot/
D:\a\r1\a\test-files\westus\test.txt -- my system working directory file location
(name-space)/test-745f6564dd:/inetpub/wwwroot/ -- kubernetes pod location
I have tried to use the kubectl cp command but am facing an error.
error: one of src or dest must be a local file specification
Method two (command line tool in Azure DevOps)
I also tried using the command line to copy files from one directory to another.
cd C:\inetpub\wwwroot
copy C:\inetpub\wwwroot\west\test.txt C:\inetpub\wwwroot\
Once this task is executed in the Azure pipeline, it throws an error:
The syntax of the command is incorrect.
Method three (Azure CLI)
I have tried using the Azure CLI to log in to Kubernetes and run the commands below. It does not throw any errors, but the file is not copied either.
az aks get-credentials --resource-group test --name test-dev
cd C:\inetpub\wwwroot
dir
copy C:\inetpub\wwwroot\west\test.txt C:\inetpub\wwwroot\
Is there any way to do this operation?
For the first error:
error: one of src or dest must be a local file specification
Try running the kubectl cp command from the same directory where your file is, and instead of giving the whole path, try it like below:
kubectl cp test.txt /test-745f6564dd:/inetpub/wwwroot/test.txt
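For reference, a hedged variant of the same idea that runs from the source directory (so the Windows drive-letter colon never appears in the source path) and qualifies the pod with its namespace; "my-namespace" is a hypothetical name:
cd D:\a\r1\a\test-files\westus
kubectl cp test.txt my-namespace/test-745f6564dd:/inetpub/wwwroot/test.txt
kubectl cp treats any argument containing a colon as a pod path, which is why a source like D:\a\r1\... triggers the "one of src or dest must be a local file specification" error.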

COPY failed: stat /var/lib/docker/tmp/docker-builderXXXXXXXX/java-common/ug-common/src: no such file or directory

I am getting the following error when copying the file. Why is that?
docker-compose up
COPY failed: stat /var/lib/docker/tmp/docker-builderXXXXXXX/java-common/ug-common/src: no such file or directory
Whatever file or directory you're trying to copy into your image doesn't exist in your build context. Set context to a directory that contains everything you need, and dockerfile to the path of your Dockerfile within it.
https://docs.docker.com/compose/compose-file#build
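As an illustration under an assumed layout (the java-common/ug-common path comes from the error message; the rest is hypothetical): if java-common/ug-common/src sits outside the folder that holds the Dockerfile, build with a parent directory as the context and point at the Dockerfile explicitly:
# hedged sketch: repo-root/ is assumed to contain both java-common/ and service/Dockerfile
cd repo-root
docker build -f service/Dockerfile .   # "." makes repo-root the build context, so COPY java-common/ug-common/src ... can be resolved
In docker-compose terms this corresponds to setting build.context to that parent directory and build.dockerfile to the Dockerfile's path relative to it, as described in the linked compose-file reference.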

Cannot restore files in GCS with versioning, when deleting folder

For backups, I have enabled versioning in GCS.
Then I created a folder and put a file in it. After that, I deleted the folder.
Then I used the gsutil ls -alr command, but I cannot find the file in the bucket.
I found the folder, but I cannot restore the file inside it.
When I delete a folder, why can't I restore a file in that folder, even though versioning is enabled in GCS?
Files in the Google Cloud Storage bucket that were already archived (NOT live) when the folder was deleted remain in the archived list and can be retrieved.
For example you can:
Create a folder in the bucket using the Google Cloud Console: gs://[BUCKET_NAME]/example
Put a file into the folder using the Google Cloud Console: gs://[BUCKET_NAME]/example/file_1.txt
Put another file into the folder using the Google Cloud Console: gs://[BUCKET_NAME]/example/file_2.txt
Using the Google Cloud Console, delete file_1.txt
Using the Google Cloud Console, delete the folder example
Run the command gsutil ls -alr gs://[BUCKET_NAME]/example
You will see a result as follows:
$ gsutil ls -alr gs://[BUCKET_NAME]/example
gs://[BUCKET_NAME]/example/:
11 2019-02-27T11:48:54Z gs://[BUCKET_NAME]/example/#1551268... metageneration=1
14 2019-02-27T11:49:49Z gs://[BUCKET_NAME]/example/file_1.txt#1551268189... metageneration=1
TOTAL: 2 objects, 25 bytes (25 B)
You will notice that only file_1.txt is available for retrieval, since it is the one that was archived and NOT LIVE when the folder was deleted.
Also, to list all the archived objects of the bucket you can run gsutil ls -alr gs://[BUCKET_NAME]/**.
So if your files were archived and deleted before the folder was deleted, you can list them using gsutil ls -alr gs://[BUCKET_NAME]/** and retrieve them with another command; for more information, see the Using Object Versioning > Copying archived object versions documentation.
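As a hedged example of that retrieval step, with <GENERATION> standing in for the full generation number shown (truncated) in the listing above:
gsutil ls -a gs://[BUCKET_NAME]/example/file_1.txt
# copy the archived generation back as the live object
gsutil cp gs://[BUCKET_NAME]/example/file_1.txt#<GENERATION> gs://[BUCKET_NAME]/example/file_1.txt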

cloudformation package uploading hash instead of zip

I have a serverless API I'm trying to upload to CloudFormation and am having some issues. According to the docs here,
For example, if your AWS Lambda function source code is in the /home/user/code/lambdafunction/ folder, specify CodeUri: /home/user/code/lambdafunction for the AWS::Serverless::Function resource. The command returns a template and replaces the local path with the S3 location: CodeUri: s3://mybucket/lambdafunction.zip.
I'm using a relative path (I've tried an absolute path as well), so I have CodeUri: ./ instead of /user/libs/code/functionDirectory/. When I package the files, it looks like a hash is being uploaded to S3, but it's not a zip (when I try to download it, my computer doesn't recognize the file type).
Is this expected? I was expecting a .zip file to be uploaded. Am I completely missing something here?
Thanks for any help.
Walker
Yes, it is expected. When you use CodeUri, the files are zipped and stored in S3 under a hash-named key; the object can be extracted with the unzip command or any other utility.
> file 009aebc05d33e5dddf9b9570e7ee45af
009aebc05d33e5dddf9b9570e7ee45af: Zip archive data, at least v2.0 to extract
> unzip 009aebc05d33e5dddf9b9570e7ee45af
Archive: 009aebc05d33e5dddf9b9570e7ee45af
replace AWSSDK.SQS.dll? [y]es, [n]o, [A]ll, [N]one, [r]ename:
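To check it yourself, a hedged sequence along these lines (the bucket name and hash key are the illustrative ones from above) downloads the artifact and lists its contents:
aws s3 cp s3://mybucket/009aebc05d33e5dddf9b9570e7ee45af ./lambdafunction.zip
unzip -l lambdafunction.zip   # list the archive contents without extracting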

AWS S3, Deleting files from local directory after upload

I have backup files in different directories on one drive. Files in those directories can be quite big, up to 800 GB or so. So I have a batch file with a set of scripts which upload/sync files to S3.
See example below:
aws s3 sync R:\DB_Backups3\System s3://usa-daily/System/ --exclude "*" --include "*/*/Diff/*"
The upload time can vary but so far so good.
My question is, how do I edit the script, or create a new one, so that it checks in the S3 bucket that the files have been uploaded, and ONLY if they have been uploaded deletes them from the local drive; if not, it should leave them on the drive?
(Ideally it would check each file.)
I'm not familiar with an aws s3 or aws cli command that can do that. Please let me know if I made myself clear or if you need more details.
Any help will be very appreciated.
Best would be to use aws s3 mv with the --recursive parameter for multiple files.
When passed with the parameter --recursive, the following mv command recursively moves all files under a specified directory to a specified bucket and prefix while excluding some files by using an --exclude parameter. In this example, the directory myDir has the files test1.txt and test2.jpg:
aws s3 mv myDir s3://mybucket/ --recursive --exclude "*.jpg"
Output:
move: myDir/test1.txt to s3://mybucket/test1.txt
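Adapted to the sync command from the question, that would look something like the sketch below (untested; aws s3 mv deletes each local file only after its upload succeeds):
aws s3 mv R:\DB_Backups3\System s3://usa-daily/System/ --recursive --exclude "*" --include "*/*/Diff/*"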
Hope this helps.
As the answer by @ketan shows, the Amazon aws client cannot do a batch move.
You can use WinSCP put -delete command instead:
winscp.com /log=S3.log /ini=nul /command ^
"open s3://S3KEY:S3SECRET#s3.amazonaws.com/" ^
"put -delete C:\local\path\* /bucket/" ^
"exit"
You need to URL-encode special characters in the credentials. WinSCP GUI can generate an S3 script template, like the one above, for you.
Alternatively, since WinSCP 5.19, you can use -username and -password switches, which do not need any encoding:
"open s3://s3.amazonaws.com/ -username=S3KEY -password=S3SECRET" ^
(I'm the author of WinSCP)