s3fs-fuse encrypt/securely store passwords - s3fs

We are following the instructions below to mount an S3 bucket on a machine:
https://docs.jdcloud.com/en/object-storage-service/s3fs
Question:
We are storing plain-text secrets/keys in the file required for mounting. Is there any other way to avoid exposing them in plain text, for example by encrypting them or storing them somewhere else? Right now we mount with "-o password=..", and with that the credentials can be found by inspecting the process with tools like lsof. We need a fix for this security issue.
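One common way to keep the keys out of the mount command and the process list is to put them in a credentials file that only the mounting user can read, or to avoid stored keys entirely via an instance role. A minimal sketch follows; the bucket name, mount point, and endpoint URL are placeholders, and exact option support depends on your s3fs-fuse version:

# Option 1: credentials file instead of command-line secrets
# (s3fs refuses password files whose permissions are wider than 600)
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
s3fs mybucket /mnt/mybucket -o passwd_file=/etc/passwd-s3fs -o url=https://s3.example.com

# Option 2: on an AWS EC2 instance, use the instance profile and store no keys at all
s3fs mybucket /mnt/mybucket -o iam_role=auto -o url=https://s3.example.com

Either way the secret itself never appears on the s3fs command line, so process-inspection tools only see a file path or a role name.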

Related

unexpected "ed25519-nkey" algorithm error when using NAS and NSC of NATS.io

A team I'm working with has created a NAS Docker container. The Dockerfile uses FROM synadia/nats-account-server:0.8.4 and installs NSC using curl -L https://raw.githubusercontent.com/nats-io/nsc/master/install.py | python. When NAS is run in the Docker container, it is given a path to a server.conf file that contains operatorjwtpath: "/nsc/accounts/nats/OperatorName/OperatorName.jwt".
The problem is that when I generate the operator on my PC using nsc add operator -i, run the Docker container on AWS Fargate, and mount the JWT file into the appropriate folder using an AWS EFS filesystem, the container crashes and shows the error unexpected "ed25519-nkey" algorithm.
According to the NATS basics page, the algorithm that should be used is "alg": "ed25519". But when I generated the JWT and decoded it on this site, I see that what's being used is "alg": "ed25519-nkey".
So what is going on here? I can't find any specific info about an algorithm that has the "nkey" appended to its name. This is the default JWT that's generated. Why is it different from what the NAS algorithm expects? How do I solve this error?
Extra info: According to this site, it's supposed to be due to a version conflict, but even upgrading to FROM synadia/nats-account-server:1.0.0 didn't solve it.
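As a side note, you can inspect the JWT header locally instead of pasting the token into a website. A hedged one-liner sketch, assuming the .jwt file contains the bare token (the path is the one from the question and may differ in your setup):

# Decode the first dot-separated segment (the base64url-encoded header) and print it
python -c "import base64, json, sys; h = open(sys.argv[1]).read().strip().split('.')[0]; print(json.loads(base64.urlsafe_b64decode(h + '=' * (-len(h) % 4))))" /nsc/accounts/nats/OperatorName/OperatorName.jwt

This should show the "alg" field the server is complaining about.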

GCS encryption always fails on big files

I'm trying to encrypt a file on GCS with my own key using the gsutil rewrite command (following https://cloud.google.com/storage/docs/using-encryption-keys).
As instructed, I'm using a boto file that includes:
[GSUtil]
encryption_key = p9syBNA0ycKxGotK3XinNZC6aCpdn3ZQ7WWOhKNgBaY=
It is working without a problem on small files but fails constantly on big ones.
I'm running the command:
gsutil rewrite -k -O gs://ywz-tmp/bigfile.txt
Is that a known issue?
Any workaround?
Feel free to use the file and key (both were generated for this post)
The fix for this issue should be in production now.
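For reference, one way to check whether the rewrite actually applied your customer-supplied key is gsutil stat; a small sketch using the object from the question:

# For CSEK-encrypted objects, the metadata listing should include an encryption key hash field
gsutil stat gs://ywz-tmp/bigfile.txt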

dsx writing to blue-mix object storage

Will Bluemix Object Storage ever have folder capability inside a container, like Amazon S3? I am not sure about other folks, but writing from DSX it pretty soon becomes such a mess inside a container. It's like a computer with no ability to create folders under the C:\ drive. It's a complete mess.
Since it's DSX's primary storage, is DSX pushing for this capability?
Here's an S3 container, and how beautifully you can organize everything in it.
I believe what you are looking for is something like subcontainers to organize your files.
I think the Object Storage service is based on OpenStack Object Storage, and according to the OpenStack documentation it is not possible to create nested directories.
https://docs.openstack.org/user-guide/cli-swift-pseudo-hierarchical-folders-directories.html
You can use the path in the file name to simulate subdirectories by separating with /. When writing/reading a file you can use something like 'swift://containername.' + name + '/foldername/filename.csv'.
So anything you write with /foldername/filename.csv will be organized under foldername.
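For example, a hedged sketch using the OpenStack Swift CLI (container and file names are placeholders, and your Bluemix setup may use different tooling):

# The "/" in the object name acts as a pseudo-folder, so the object
# shows up under foldername/ even though no real directory exists
swift upload containername localfile.csv --object-name foldername/filename.csv
swift list containername --prefix foldername/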
Thanks,
Charles.

How to compress a list of files into a single gzip file using elasticluster, grid-engine-tools, and google cloud

I want to start by thanking you all for your help ahead of time, as this will help clear up a detail left out of the readthedocs.io guide. What I need is to compress several files into a single gzip; however, the guide only shows how to compress a list of files as individual gzipped files. Again, I appreciate any help, as there are very few resources and little documentation for this setup. (If there is some extra info, please include links to sources.)
After I had set up the grid engine, I ran through the samples in the guide.
Am I right in assuming there is not a script for combining multiple files into one gzip using grid-computing-tools?
Are there any solutions on the Elasticluster Grid Engine setup to compress multiple files into 1 gzip?
What changes can be made to the grid-engine-tools to make it work?
EDIT
The reason we are considering a cluster is that we expect multiple operations to occur simultaneously (files zipped up per order), happening systematically, so that a vendor can download a single compressed file per order.
May I state the definition of the problem, and you can let me know if I understood it correctly, as both Matt and I provided the exact same solution and somehow it doesn't seem sufficient.
Problem Definition
You have an Order defining the start of a task to process some data.
The processing of data would be split among several compute nodes, each producing a resulting file stored on GS directories.
The goal is:
Collect the files from GS bucket (that were produced by each of the nodes),
Archive the collection of files as one file,
Then compress that archive, and
Push it back to a different GS location.
Let me know if I summarized it properly,
Thanks,
Paul
Are the files in question in Cloud Storage?
Are the files in question on a local or network drive?
In your description, you indicate "What I need is to compress several files into a single gzip". It isn't clear to me that a cluster of computers is needed for this. It sounds more like you just want to use tar along with gzip.
The tar utility will create an archive file and can compress it as well. For example:
$ # Create a directory with a few input files
$ mkdir myfiles
$ echo "This is file1" > myfiles/file1.txt
$ echo "This is file2" > myfiles/file2.txt
$ # (C)reate a compressed archive
$ tar cvfz archive.tgz myfiles/*
a myfiles/file1.txt
a myfiles/file2.txt
$ # (V)erify the archive
$ tar tvfz archive.tgz
-rw-r--r-- 0 myuser mygroup 14 Jul 20 15:19 myfiles/file1.txt
-rw-r--r-- 0 myuser mygroup 14 Jul 20 15:19 myfiles/file2.txt
To extract the contents use:
$ # E(x)tract the archive contents
$ tar xvfz archive.tgz
x myfiles/file1.txt
x myfiles/file2.txt
UPDATE:
In your updated problem description, you have indicated that you may have multiple orders processed simultaneously. If the frequency with which results need to be tarred is low, and providing the tarred results is not extremely time-sensitive, then you could likely do this with a single node.
However, as the scale of the problem ramps up, you might take a look at using the Pipelines API.
Rather than keeping a fixed cluster running, you could initiate a "pipeline" (in this case a single task) when a customer's order completes.
A call to the Pipelines API would start a VM whose sole purpose is to download the customer's files, tar them up, and push the resulting tar file into Cloud Storage. The Pipelines API infrastructure does the copying from and to Cloud Storage for you. You would effectively just need to supply the tar command line.
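For illustration, the command such a pipeline runs could be as simple as the following (a hypothetical sketch; the /mnt/data paths stand in for wherever the Pipelines API stages inputs and collects outputs):

# Archive and compress everything staged as input into one .tar.gz,
# which the pipeline then copies back to Cloud Storage
tar czf /mnt/data/output/order.tar.gz -C /mnt/data/input .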
There is an example that does something similar here:
https://github.com/googlegenomics/pipelines-api-examples/tree/master/compress
This example will download a list of files and compress each of them independently. It could be easily modified to tar the list of input files.
Take a look at the https://github.com/googlegenomics/pipelines-api-examples github repository for more information and examples.
-Matt
So there are many ways to do it, but the thing is that you cannot directly compress a collection of files (or a directory) into one file on Google Storage; you would need to perform the tar/gzip combination locally before transferring it.
If you want, you can have the data compressed automatically via:
gsutil cp -Z
Which is detailed at the following link:
https://cloud.google.com/storage/docs/gsutil/commands/cp#changing-temp-directories
And the nice thing is that you retrieve uncompressed results from compressed data on Google Storage, because it has the ability to perform Decompressive Transcoding:
https://cloud.google.com/storage/docs/transcoding#decompressive_transcoding
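For example (a small sketch; the bucket and file names are placeholders):

# Upload files gzip-compressed; they are stored with Content-Encoding: gzip
# and can be served decompressed on download via decompressive transcoding
gsutil cp -Z myfiles/*.txt gs://my-bucket/compressed/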
You will notice on the last line in the following script:
https://github.com/googlegenomics/grid-computing-tools/blob/master/src/compress/do_compress.sh
The following line will basically copy the current compressed file to Google Cloud Storage:
gcs_util::upload "${WS_OUT_DIR}/*" "${OUTPUT_PATH}/"
What you will need is to first perform the tar/zip on the files in the local scratch directory, and then gsutil copy the compressed file over to Google Storage, but make sure that all the files that need to be compressed are in the scratch directory before starting to compress them. Most likely you would need to SSH copy (scp) them to one of the nodes (i.e. master), and then have the master tar/gzip the whole directory before sending it over to Google Storage. I am assuming each GCE instance has its own scratch disk, but the "gsutil cp" transfer is very fast when working on GCE.
Since Google Storage is fast at data transfers with Google Compute instances, the easiest second option to pursue is to comment out lines 66-69 in the do_compress.sh file:
https://github.com/googlegenomics/grid-computing-tools/blob/master/src/compress/do_compress.sh
This way no compression happens, but the copy still happens on the last line via gcs_util::upload, so that all the uncompressed files end up in the same Google Storage bucket. Then, using "gsutil cp" from the master node, you would copy them back locally, compress them locally via tar/gz, and copy the compressed archive back to the bucket using "gsutil cp".
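Put together, the manual flow could look roughly like this (a hedged sketch; bucket names and scratch paths are placeholders):

# 1. Pull the per-node result files from the bucket onto the master's scratch disk
mkdir -p /scratch/results
gsutil -m cp "gs://my-bucket/results/*" /scratch/results/
# 2. Tar/gzip the whole directory locally
tar czf /scratch/results.tar.gz -C /scratch results
# 3. Push the single compressed archive back to a different GS location
gsutil cp /scratch/results.tar.gz gs://my-bucket/archives/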
Hope it helps but it's tricky,
Paul

gsutil acl set command AccessDeniedException: 403 Forbidden

I am following the steps for setting up Django on Google App Engine, and since Gunicorn does not serve static files, I have to store my static files in Google Cloud Storage.
I am at the line with "Create a Cloud Storage bucket and make it publically readable." on https://cloud.google.com/python/django/flexible-environment#run_the_app_on_your_local_computer. I ran the following commands as suggested:
$ gsutil mb gs://your-gcs-bucket
$ gsutil defacl set public-read gs://your-gcs-bucket
The first command is supposed to create a new storage bucket, and the second line sets its default ACL. When I type in the command, the second line returns an error.
Setting default object ACL on gs://your-gcs-bucket/...
AccessDeniedException: 403 Forbidden
I also tried other commands to set or get the ACL, but they all return the same error, with no additional information.
I am a newbie with Google Cloud services; could anyone point out what the problem is?
I figured it out myself, and it is kind of silly. I didn't notice whether the first command succeeded, and apparently it did not.
For a newbie like me, it is important to note that things like bucket names are globally unique, not just unique within your project. What happened was that the name I used for the new bucket was already taken by someone else, so it is no wonder that I did not have permission to access that bucket.
A better way to handle this is to choose the bucket name wisely, for example by prefixing it with your project name and application name.
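For example (a hedged sketch; the bucket name below just illustrates the prefixing scheme):

# Bucket names live in a single global namespace, so make yours unique
gsutil mb gs://myproject-myapp-static
gsutil defacl set public-read gs://myproject-myapp-static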