Writing a file's contents to a Vault mount from the CLI is easy. I am doing this:
vault write kv key=#C:\Users\abc\Desktop\abc.txt value=C:\Users\abc\Desktop\abc.txt
Reading this back gives key as the contents of abc.txt and value as the full path of abc.txt.
However, I cannot get the same result through the HTTP API.
The documentation says to send a POST request with the values to write as the JSON body.
I have tried the payload below, but it simply stores the literal '#'-prefixed path in Vault instead of the file's contents.
{
"key": "#C:\\Users\\abc\\Desktop\\abc.txt",
"value": "C:\\Users\\abc\\Desktop\\abc.txt"
}
How can I write the contents of a file to a path on a Vault mount via the HTTP API?
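The file-reading shortcut happens client-side in the vault CLI; the HTTP API applies no such substitution and stores the JSON body verbatim, so you have to read the file yourself when building the request. A minimal sketch with curl and jq (the mount path kv/mysecret is illustrative, the token is assumed to be in $VAULT_TOKEN, and jq's --rawfile needs jq 1.6+):

# Embed the file's contents in the JSON body client-side, then POST it
curl \
  --header "X-Vault-Token: $VAULT_TOKEN" \
  --request POST \
  --data "$(jq -n --rawfile content 'C:/Users/abc/Desktop/abc.txt' \
      '{key: $content, value: "C:\\Users\\abc\\Desktop\\abc.txt"}')" \
  "$VAULT_ADDR/v1/kv/mysecret"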
I have a pipeline in Azure Data Factory that uses a Web activity to rename a file on a file share in one of our Azure storage accounts via the REST API.
The process almost works: it creates a copy of the file with the new name, but the new file is empty. I've tried this with both an .xlsx and a plain .txt file. These are the headers I'm using:
x-ms-date: <generating in ADF>
x-ms-version: 2021-08-06
x-ms-rename-source: <path to original file>
x-ms-type: file
x-ms-content-length: <?>
I put <?> for x-ms-content-length because I think this is the issue, and I'm not sure what value to use here. I tried omitting x-ms-content-length to preserve the file attributes, but I get an error saying the header is required. Any thoughts on why the file is empty/being resized?
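For reference, here is roughly what the assembled request looks like as a standalone sketch (hypothetical account, share, and file names; the authorization header is omitted, and x-ms-content-length is left as the open question):

curl -X PUT \
  -H "x-ms-date: $(date -u '+%a, %d %b %Y %H:%M:%S GMT')" \
  -H "x-ms-version: 2021-08-06" \
  -H "x-ms-rename-source: https://myaccount.file.core.windows.net/myshare/original.txt" \
  -H "x-ms-type: file" \
  -H "x-ms-content-length: <?>" \
  "https://myaccount.file.core.windows.net/myshare/renamed.txt"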
I created a snapshot of my Vault server with the following command:
vault operator raft snapshot save snapshot.save
I now have a snapshot file, and I am able to use it to restore the server. I am trying to decrypt and read the snapshot file programmatically so that I can search for a value inside it. Is there a way to decrypt Vault snapshots into plaintext?
There isn't a way to just decrypt it into a MySQL-dump-like output file, no.
You can start the server in recovery mode, then iterate through the storage tree, grepping for what you're looking for.
You can find docs on that here:
https://learn.hashicorp.com/tutorials/vault/inspecting-data-integrated-storage?in=vault/monitoring#secret-engine-data-example
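A rough sketch of that workflow, assuming integrated (Raft) storage (placeholder paths in angle brackets; the full unseal-key and recovery-token steps are in the tutorial):

# Restore the snapshot into a scratch server's data directory, then start it in recovery mode
vault server -recovery -config=vault.hcl

# Obtain a recovery token via the generate-root ceremony (steps abbreviated; see the tutorial)
export VAULT_TOKEN=<recovery token>

# Recovery mode exposes the raw storage tree under sys/raw
vault list sys/raw/logical
vault read sys/raw/logical/<mount uuid>/<key>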
I am trying to build a pipeline in StreamSets where, when a file arrives in a directory, I want to invoke a REST API with just the file name; I don't want StreamSets to read the file or do any processing on it.
But whatever I try, it attempts to send the whole file to the destination.
The file is in the SEGD format, which is essentially binary.
StreamSets tries to read the file and fails.
My requirement is to invoke a REST API as soon as a file arrives in a folder.
As you've discovered, by default, StreamSets Data Collector's Directory origin will parse the contents of the file as JSON, delimited data etc. If you use the Whole File format, though, the origin will instead read only the file metadata, and pass a special record along the pipeline, with the following fields: /fileRef (a reference to the file contents, consumed only by whole-file-aware destinations) and /fileInfo (a map of metadata such as the file's name, path and size).
You can then use the HTTP Client processor or destination, referencing the filename with the expression ${record:value('/fileInfo/filename')}.
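For example, a minimal HTTP Client configuration might look like this (hypothetical endpoint; only the file name is passed, never the file contents):

Resource URL: http://api.example.com/notify?file=${record:value('/fileInfo/filename')}
HTTP Method: POST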
I am following the steps to set up Django on Google App Engine, and since Gunicorn does not serve static files, I have to store my static files in Google Cloud Storage.
I am at the line "Create a Cloud Storage bucket and make it publicly readable." on https://cloud.google.com/python/django/flexible-environment#run_the_app_on_your_local_computer. I ran the following commands as suggested:
$ gsutil mb gs://your-gcs-bucket
$ gsutil defacl set public-read gs://your-gcs-bucket
The first command is supposed to create a new storage bucket, and the second sets its default ACL. When I run them, the second command returns an error:
Setting default object ACL on gs://your-gcs-bucket/...
AccessDeniedException: 403 Forbidden
I also tried other commands for setting or getting ACLs, but they all return the same error, with no additional information.
I am a newbie with Google Cloud services; could anyone point out what the problem is?
I figured it out myself, and it is kind of silly. I didn't check whether the first command had succeeded, and apparently it had not.
For a newbie like me, it is important to note that things like bucket names are globally unique across all users, not just within your own project. What happened was that the name I used for the new bucket was already taken by someone else, so naturally I had no permission to access that bucket.
A better way to handle this is to choose bucket names wisely, for example by prefixing them with the project and application names.
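For example (hypothetical names):
$ gsutil mb gs://myproject-myapp-static
$ gsutil defacl set public-read gs://myproject-myapp-static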
I'm trying to use command-line curl to test an API. The call takes in some parameters and an image file. Is there a way for me to specify the parameters in a JSON file and make the request via curl so that both the image file and the JSON file get uploaded to the server?
Check the curl documentation.
In the POST (HTTP) part you'll find the answer to your question.
You need to use the -F parameter.
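A minimal sketch (hypothetical endpoint and form-field names): -F builds a multipart/form-data request, and the @ prefix tells curl to read each part from a file on disk.

curl -F "params=@params.json;type=application/json" \
     -F "image=@photo.jpg;type=image/jpeg" \
     https://api.example.com/upload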