Hopefully this is a quick fix (most likely user error). I am using PowerShell to upload to AWS S3: I'm attempting to copy a number of .mp4 files from a folder to an S3 location. I'm able to copy individual files successfully using the command below:
aws s3 cp .\video1.mp4 s3://bucketname/root/source/
But when I try to copy all the files within that directory I get an error:
aws s3 cp F:\folder1\folder2\folder3\folder4\* s3://bucketname/root/source/
The user-provided path F:\folder1\folder2\folder3\folder4\* does not exist.
I've tried multiple variations on the above (no path, just *, *.mp4, .*.mp4, with and without quotation marks, since I'm coming from a Linux background), but I can't seem to get it working.
I was initially using this documentation: https://www.tutorialspoint.com/how-to-copy-folder-contents-in-powershell-with-recurse-parameter. I feel the answer is probably very simple, but I couldn't see what I was doing wrong.
Any help would be appreciated.
Thanks.
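For reference, aws s3 cp does not expand wildcards itself (and PowerShell does not glob arguments passed to external commands either); the pattern the AWS CLI documents for this is to copy the directory with --recursive and filter with --exclude/--include, something like the following (untested against this particular bucket):
aws s3 cp F:\folder1\folder2\folder3\folder4\ s3://bucketname/root/source/ --recursive --exclude "*" --include "*.mp4"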
I'm trying to transfer files to my virtual machine instance on GCP with the gcloud client, using gcloud compute scp --recurse ./local-repo foo:~/. It looks like the files transfer, but once I ssh into foo, I don't see anything. Any help would be much appreciated. Thank you!
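If it helps narrow this down, one commonly reported cause of this symptom is that gcloud compute scp copies into the home directory of the account gcloud connects as, which may not be the same account you use when you ssh in afterwards. A quick check under the same identity gcloud uses (instance name foo as above) would be:
gcloud compute ssh foo --command 'ls -la ~'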
I have a Talend job that searches a directory and then uploads it to our database.
It's something like this: dbconnection>twaitforfile>tfilelist>fileschema>tmap>db
I have an OnSubjobOk trigger that then commits the data into the table, iterates through the directory, and moves the files to another folder.
Recently I was instructed to change the directory to a shared network path, using the same components as before (I originally thought of changing components to tFTPFileList, etc.).
My question is how to point the job at the shared network path. I was able to get it to go through using double backslashes (\\), but it won't read any of the new files arriving.
Thanks!
I suppose that if you use tWaitForFile on the local filesystem, Talend/Java hooks into the folder somehow and gets a notification when a new file is put into it.
Now that you are on a network drive, this is, first of all, out of reach of the component; second, the OS behind the network drive could be different.
I understand your job is running all the time, listening. You could change the behaviour by putting a tLoop first, which would check the file system for new files and then proceed; there has to be some delta check in how new files get recognized.
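Outside of Talend, the polling-plus-delta-check idea boils down to something like this rough shell sketch (the paths and interval are made up, and this is only an illustration of the concept, not Talend code):
# Remember which files have been handled and only act on new arrivals.
SEEN=/tmp/seen-files.txt
touch "$SEEN"
while true; do
  for f in /mnt/network-share/incoming/*; do
    [ -e "$f" ] || continue                # glob matched nothing
    if ! grep -qxF "$f" "$SEEN"; then      # not processed before
      echo "new file: $f"                  # here the subjob would load and then move the file
      echo "$f" >> "$SEEN"
    fi
  done
  sleep 30
done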
We are dockerizing an application (written in Node.js) that will need to access some sensitive data at run-time (API tokens for different services), and I can't find any recommended approach for dealing with that.
Some information:
The sensitive information is not in our codebase, but is kept in another repository in encrypted format.
On our current deployment, without Docker, we update the codebase with git, and then we manually copy the sensitive information via SSH.
The Docker images will be stored in a private, self-hosted registry.
I can think of some different approaches, but all of them have some drawbacks:
1. Include the sensitive information in the Docker images at build time. This is certainly the easiest one; however, it makes them available to anyone with access to the image (I don't know if we should trust the registry that much).
2. Like 1, but having the credentials in a data-only image.
3. Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now. This is very convenient too, but then we can't spin up new servers easily (maybe we could use something like etcd to synchronize them?)
4. Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
PS: I've done some research but couldn't find anything similar to my problem. Other questions (like this one) were about sensitive information needed at build-time; in our case, we need the information at run-time.
I've used your options 3 and 4 to solve this in the past. To rephrase/elaborate:
Option 3: Create a volume in the image that links to a directory in the host system, and manually copy the credentials over SSH like we're doing right now.
I use config management (Chef or Ansible) to set up the credentials on the host. If the app takes a config file that needs API tokens or database credentials, I use config management to create that file from a template. Chef can read the credentials from an encrypted data bag or from attributes, set up the files on the host, then start the container with a volume just like you describe.
Note that in the container you may need a wrapper to run the app. The wrapper copies the config file from whatever the volume is mounted to wherever the application expects it, then starts the app.
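A minimal sketch of such a wrapper, assuming hypothetical paths (/config as the mount point of the volume, /app/config.json as where the app reads its config, and server.js as the Node entry point):
#!/bin/sh
# Copy the config from the mounted volume to the path the application expects, then hand off to the app.
cp /config/config.json /app/config.json
exec node /app/server.js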
Option 4: Pass the information as environment variables. However, we have 5 different pairs of API credentials right now, which makes this a bit inconvenient. Most importantly, however, we would need to keep another copy of the sensitive information in the configuration scripts (the commands that will be executed to run Docker images), and this can easily create problems (e.g. credentials accidentally included in git, etc).
Yes, it's cumbersome to pass a bunch of env variables using -e key=value syntax, but this is how I prefer to do it. Remember the variables are still exposed to anyone with access to the Docker daemon. If your docker run command is composed programmatically it's easier.
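For example, with made-up variable names and image name:
$ docker run -e FOO_API_KEY=abc123 -e FOO_API_SECRET=s3cr3t -e BAR_API_KEY=xyz789 your-image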
If it isn't, use the --env-file flag as discussed here in the Docker docs. You create a file with key=value pairs, then run a container using that file.
$ cat >> myenv << END
FOO=BAR
BAR=BAZ
END
$ docker run --env-file myenv your-image
That myenv file can be created using chef/config management as described above.
If you're hosting on AWS you can leverage KMS here. Keep either the env file or the config file (the one passed to the container in a volume) encrypted via KMS. In the container, use a wrapper script to call out to KMS, decrypt the file, move it into place, and start the app. This way the config data is not exposed on disk.
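A rough sketch of that wrapper, assuming the KMS-encrypted config is mounted at /secrets/config.json.enc, was produced with aws kms encrypt, and the instance role is allowed to decrypt it (the paths and the Node entry point are hypothetical):
#!/bin/sh
# Decrypt the KMS-encrypted config into the location the app expects, then start the app.
aws kms decrypt \
    --ciphertext-blob fileb:///secrets/config.json.enc \
    --query Plaintext --output text | base64 -d > /app/config.json
exec node /app/server.js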
I want to run the sdbinst command on a .sdb database file as well as open it in the Compatibility Administrator. I have no problem doing this locally when the .sdb is stored on the machine I'm using, but I'd like to be able to open it and run sdbinst on it when the file is stored in a network storage location.
Is this possible?
Yes, according to the MS Help files within the MS Compatibility Toolkit.
See: "Mitigating Issues by using Compatibility Fixes". There is an example of a network deployment workflow: "Deploying the Contoso.sdb Database to your environment".
The basic pattern is to place the .sdb on a network share, create a one-line deployment script that references a path to that share (sdbinst "\\SomePath\Ex.sdb" -q), and then either push that deployment script to each target computer in your environment or execute it on them.
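For example, the deployment script could be a small batch file along these lines (the share path below is just a placeholder):
REM deploy-shim.cmd: install the compatibility database from the network share, silently.
sdbinst "\\fileserver\AppCompat\Contoso.sdb" -q
if errorlevel 1 echo sdbinst failed with exit code %errorlevel%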