How to copy or move files from one directory to another inside a Kubernetes pod (Azure DevOps release pipeline) - kubernetes

I would like to move files that are available in the system working directory of the Azure pipeline into a Kubernetes pod.
Method one (Kubectl cp command)
kubectl cp D:\a\r1\a\test-files\westus\test.txt /test-745f6564dd:/inetpub/wwwroot/
D:\a\r1\a\test-files\westus\test.txt -- my system working directory file location
(name-space)/test-745f6564dd:/inetpub/wwwroot/ -- kubernetes pod location
I have tried to use the kubectl cp command but am facing an error.
error: one of src or dest must be a local file specification
Method two (command line tool in Azure DevOps)
I also tried to use the command line to copy files from one directory to another.
cd C:\inetpub\wwwroot>
copy C:\inetpub\wwwroot\west\test.txt C:\inetpub\wwwroot\
Once this task is executed in the Azure pipeline, it throws an error.
The syntax of the command is incorrect.
Method three (Azure CLI)
I have tried to use the Azure CLI to log in to the Kubernetes cluster and run the commands below. It does not throw any errors, but the file is not copied either.
az aks get-credentials --resource-group test --name test-dev
cd C:\inetpub\wwwroot
dir
copy C:\inetpub\wwwroot\west\test.txt C:\inetpub\wwwroot\
Is there any way to do this operation?

For the first error:
error: one of src or dest must be a local file specification
Try running the kubectl cp command from the same directory where your file is, and instead of giving the whole path, try it like below:
kubectl cp test.txt /test-745f6564dd:/inetpub/wwwroot/test.txt
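The relative source path matters because kubectl cp treats an argument containing a colon as a pod path, so a Windows drive letter such as D:\ makes both arguments look remote and produces the "one of src or dest must be a local file specification" error. In an Azure DevOps release pipeline step this could look roughly like the sketch below; it assumes the file sits under $(System.DefaultWorkingDirectory)\test-files\westus as in the question and that the pod runs in a namespace called test, neither of which is confirmed by the original post:
cd $(System.DefaultWorkingDirectory)\test-files\westus
kubectl cp test.txt test/test-745f6564dd:/inetpub/wwwroot/test.txt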

Related

How do we copy the file in S3 to a /usr/local/tomcat/webapps/ROOT/js directory?

I am looking to copy a file from S3 to a container created in AWS ECS. How do we copy the file from S3 to the /usr/local/tomcat/webapps/ROOT/js directory? I am passing it in the CMD of the task definition, but it is not working.
I tried it via the command and it is throwing "executable file not found in $PATH: unknown".
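For context, the copy command the container would need to run is presumably something like the line below (the bucket and object names are illustrative, not taken from the question); the "executable file not found in $PATH" error generally means the first executable named in the command, here the aws CLI, is not present in the image being used:
aws s3 cp s3://my-bucket/app.js /usr/local/tomcat/webapps/ROOT/js/app.js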

File path from within Azure CLI task

I have an Azure CLI task which references a PowerShell script (via build artifact) running az commands. Most of these commands work successfully, but when attempting to execute the following command:
az appconfig kv import --name $resourceName -s file --path appconfig.json --format json
I've noticed that the information was not present on the Azure resource and the log file shows "File is not available".
I must be referencing the file incorrectly from the build artifact but if anyone could provide some clarity around this that would be great.
I must be referencing the file incorrectly from the build artifact
You can try to add $(System.ArtifactsDirectory) to the json file path. For example: --path $(System.ArtifactsDirectory)/appconfig.json.
System.ArtifactsDirectory: The directory to which artifacts are downloaded during deployment of a release. Example: C:\agent\_work\r1\a
For details, please refer to the predefined variables documentation.
This can be a little tricky to figure out.
System.ArtifactsDirectory is the default variable that indicates the directory to which artifacts are downloaded during deployment of a release.
However, to use a default variable in your script, you must first replace the . in the default variable names with _. For example, to print the value of artifact variable System.ArtifactsDirectory in a PowerShell script, you would have to use $env:SYSTEM_ARTIFACTSDIRECTORY.
I have a similar setup and do it this way within my PowerShell script:
# Define the path to the file
$appSettingsFile="$env:SYSTEM_ARTIFACTSDIRECTORY\<rest_of_the_path>\appconfig.json"
# Pass it to the Azure CLI command
az appconfig kv import -n $appConfigName -s file --path $appSettingsFile --format json --separator . --yes
It is also helpful to view the current values of all variables to see what they contain before using them.
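One simple way to do that from the PowerShell script itself (a generic snippet, not tied to any particular task) is to dump the environment the script actually sees, since pipeline variables are exposed there:
# List every environment variable visible to the script, sorted by name
Get-ChildItem Env: | Sort-Object Name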
References:
Default variables - System
Using default variables

"No such file or directory" when importing local file into docker container using docker exec

When running a shell command in docker exec with a local file as an argument, it fails with
-bash:  docker/mongo.archive: No such file or directory
$ docker exec -i 4cb4a63af40c sh -c 'mongorestore --archive' < 'docker/mongo.archive'
-bash:  docker/mongo.archive: No such file or directory
However, the file clearly exists at the given location:
$ ls docker/mongo.archive
docker/mongo.archive
I remember using the exact same command and it worked. Also, I tried calling the command from within its directory (./docker) as well as from outside, using relative paths. Using the absolute path fails as well. Any ideas?
Remark: 4cb4a63af40c is a mongodb container.
Adjust the quotes
docker exec -i 4cb4a63af40c sh -c 'mongorestore --archive < docker/mongo.archive'

Unable to copy hidden files using gcloud scp in cloud build - remote builder

I'm running Cloud Build using a remote builder and am able to copy all the files in the workspace to my own VM, but I am unable to copy hidden files.
Command used to copy the files:
gcloud compute scp --compress --recurse '/workspace/*' [username]#[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
So this copies only non-hidden files.
I also used the dot operator to copy hidden files:
gcloud compute scp --compress --recurse '/workspace/.' [username]#[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
I am still not able to copy them and got the error below:
scp: error: unexpected filename: .
Can anyone suggest how to copy hidden files to the VM using gcloud scp?
Thanks in advance
If you remove the trailing character after the slash, it may work. For example, this worked for me:
gcloud compute scp --compress --recurse 'test/' [username]#[instance_name]:/home/myfolder
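Applied to the paths from the question, that would be something along the lines of the command below (untested here; the key file and zone flags are kept exactly as in the question):
gcloud compute scp --compress --recurse '/workspace/' [username]#[instance_name]:/home/myfolder --ssh-key-file=my-key --zone=us-central1-a
Note that copying the directory itself this way places its contents, hidden files included, under /home/myfolder/workspace rather than directly in /home/myfolder.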

gsutil AccessDeniedException: 401 Login Required

So I run the following:
gsutil -m cp -R file.png gs://bucket/file.png
And I get the following error message:
Copying file://file.png [Content-Type=application/pdf]...
Uploading file.png: 42.59 KiB/42.59 KiB
AccessDeniedException: 401 Login Required
CommandException: 1 files/objects could not be transferred.
I'm not sure what the problem is since I ran config and I can see all my buckets. Does anyone know what I need to do?
Note: I do not have gcloud, I just installed gsutil and ran the config.
Logging in to Google Cloud is needed for accessing any Cloud service. You need to use the command below, which will guide you through the login steps, such as typing the verification code you generate by opening the browser link given in the console.
gcloud auth login
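Since the question mentions that only standalone gsutil is installed (no gcloud), the equivalent step there would presumably be to re-run the standalone configuration flow, which walks through a similar browser-based authorization:
gsutil config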
I was getting a similar response, and was able to solve this problem by looking at the read permissions on the .boto file. In my case, I was using a service account and the .boto file that was created by
gsutil config -e
only had read permissions set for the user. Since it was being read by a service running as a different user, it wasn't able to read the file, which yielded a 401 Login Required error. I fixed it by adding read permission for the service's group.
In the least sophisticated case, you could fix it by giving any user read permission with
chmod a+r .boto
A more detailed explanation for troubleshooting
To get more information, run the same command with a -D flag, like:
gsutil -m -D cp ....
In the debug output, look at:
Command being run: /path/to/gsutil
config_file_list: /path/to/boto/config
Create your login credentials using the executable at /path/to/gsutil (not gcloud auth or any other gsutil executable on the machine):
/path/to/gsutil config
For a service account, use:
/path/to/gsutil config -e
These should create a .boto config file in your home directory, $HOME/.boto. If you are running the gsutil command this file should be referenced in the config_file_list variable in the debug output. If not, see below to change it.
Running gsutil under a service account or as another user
If you are running as another user, and need to reference a newly-created config file, set the environment variable BOTO_CONFIG (don't forget to export it):
BOTO_CONFIG=/path/to/$HOME/.boto
export BOTO_CONFIG
By setting this variable, when you execute gsutil, it will reference the config file you have placed in BOTO_CONFIG. You can confirm that you are referencing the correct config file by looking at the config_file_list variable in the gsutil -D command's output.
Make sure the referenced .boto file is readable by the user who is executing the gsutil command.
Running the /path/to/gsutil with the BOTO_CONFIG variable set allowed me to execute gsutil as another user, referencing an arbitrary BOTO_CONFIG file that was set up with a service-account's credentials.
To set up the service account:
https://console.cloud.google.com/permissions/serviceaccounts
The key file from the service account set-up process needs to be downloaded, and the path to it is requested during the gsutil config -e step.
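Putting the steps above together, the whole flow might look roughly like this (the paths are placeholders rather than values from the original posts):
# configure gsutil with the downloaded service-account key (it prompts for the key's path)
/path/to/gsutil config -e
# point gsutil at the generated config and let the service's group read it
BOTO_CONFIG=$HOME/.boto
export BOTO_CONFIG
chmod g+r $HOME/.boto
# verify: config_file_list in the debug output should point at the same .boto file
/path/to/gsutil -m -D cp file.png gs://bucket/file.png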
This may be an issue with how gsutil/boto handles the OS path separators on Windows, as referenced here. This should eventually get merged into the sdk tools, but until then the following should work:
Go to
google-cloud-sdk\platform\gsutil\third_party\boto\boto\pyami\config.py
and replace the line:
for path in os.environ['BOTO_PATH'].split(':'):
with:
for path in os.environ['BOTO_PATH'].split(os.path.pathsep):
Next, go to
google-cloud-sdk\bin\bootstrapping\gsutil.py
and replace the lines that use ':'
if boto_config:
  boto_path = ':'.join([boto_config, gsutil_path])
elif boto_path:
  # this is ':' for windows as well, hardcoded into the boto source.
  boto_path = ':'.join([boto_path, gsutil_path])
else:
  path_parts = ['/etc/boto.cfg',
                os.path.expanduser(os.path.join('~', '.boto')),
                gsutil_path]
  boto_path = ':'.join(path_parts)
with
if boto_config:
  boto_path = os.path.pathsep.join([boto_config, gsutil_path])
elif boto_path:
  # this is ':' for windows as well, hardcoded into the boto source.
  boto_path = os.path.pathsep.join([boto_path, gsutil_path])
else:
  path_parts = ['/etc/boto.cfg',
                os.path.expanduser(os.path.join('~', '.boto')),
                gsutil_path]
  boto_path = os.path.pathsep.join(path_parts)
Restart cmd and now the error should go away.