Where is a file's timestamp stored? - metadata

Where is the information about a file's creation, modification, and last-open times, and its permissions, stored? How can I get this information?

It is stored in the file's inode and can be retrieved with the stat system call.
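For example, a minimal sketch in Dart, whose dart:io FileStat calls stat under the hood on POSIX systems (the path is a placeholder):

import 'dart:io';

void main() {
  // stat() metadata lives in the file's inode, not in the file's data.
  final stat = FileStat.statSync('/tmp/example.txt');
  print('modified: ${stat.modified}');     // mtime: last write to the contents
  print('accessed: ${stat.accessed}');     // atime: last read/open
  print('changed:  ${stat.changed}');      // ctime: last inode/metadata change
  print('mode:     ${stat.modeString()}'); // permissions, e.g. rw-r--r--
  // Note: classic Unix inodes keep change time (ctime), not a true creation time.
}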

How to cache Firestore documents in Flutter?

I want to store documents that I download from Firestore. Each document has a unique ID, so I was thinking of storing the document in JSON format or in a database table (so I can access its fields); if that ID is not present locally, I simply download the document from Firestore and save it. I also want these documents to be deleted if they are not used/called for a long time. I found this cache manager but was unable to understand how to use it.
You can download the content using any Flutter download manager, keep the path of the downloaded content in SQLite, and load the data from local storage rather than from the URL if the data is already saved.
The following pseudo-code may help you:
if (isDownloaded(id)) {
  // show from local storage
} else {
  // show from remote, then save it locally for next time
}
You can check whether a resource is still in use by keeping a timestamp for it and updating that timestamp only when the resource is shown. You can then run a service that looks for unused resources and deletes them from storage and from the database too.
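A minimal sketch of that idea in Dart, assuming the sqflite package; the table name, column names, helper names, and the seven-day cutoff are illustrative choices, not part of the original answer:

import 'package:sqflite/sqflite.dart';

// One table: id = Firestore document id, json = the document's fields as a
// JSON string, lastUsed = epoch milliseconds of the last time it was shown.
Future<Database> openCacheDb() async {
  final path = '${await getDatabasesPath()}/doc_cache.db';
  return openDatabase(path, version: 1,
      onCreate: (db, v) => db.execute(
          'CREATE TABLE docs(id TEXT PRIMARY KEY, json TEXT, lastUsed INTEGER)'));
}

// Return the cached JSON if present (and touch its timestamp), otherwise null.
Future<String?> getCached(Database db, String id) async {
  final rows = await db.query('docs', where: 'id = ?', whereArgs: [id]);
  if (rows.isEmpty) return null;
  await db.update('docs', {'lastUsed': DateTime.now().millisecondsSinceEpoch},
      where: 'id = ?', whereArgs: [id]);
  return rows.first['json'] as String;
}

// Save (or refresh) a document that was just downloaded from Firestore.
Future<void> putCached(Database db, String id, String json) => db.insert(
    'docs',
    {'id': id, 'json': json, 'lastUsed': DateTime.now().millisecondsSinceEpoch},
    conflictAlgorithm: ConflictAlgorithm.replace);

// The cleanup "service": delete anything not shown for seven days.
Future<int> evictStale(Database db) {
  final cutoff =
      DateTime.now().subtract(const Duration(days: 7)).millisecondsSinceEpoch;
  return db.delete('docs', where: 'lastUsed < ?', whereArgs: [cutoff]);
}

getCached and putCached implement the check in the pseudo-code above, and evictStale can be run periodically (for example from a background task or on app start) to clean up unused documents.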

Is there a way to have Harmon.ie keep original modified date?

When uploading a file from a drive/fileshare through Harmon.ie, is there a way to have SharePoint keep the original file modified date? Looks like the default is to update Modified with the current date and time.
Thanks!
Troy
The modified date you'd like to keep is an attribute of the file system, not of the document. The Modified field in SharePoint is a system field that is set by the SharePoint server (not by harmon.ie) at the time the document is uploaded and each time it gets modified. As a result, the modified date you see on your file won't be reflected in SharePoint. This is why you get the same behavior when uploading documents through the SharePoint web interface in IE.
----- Jean

How do I do bulk file storage with IBM Object Storage?

I'm using IBM Object Storage to store huge numbers of very small files,
say more than 1500 small files in one hour (the total size of the 1500 files is about 5 MB).
I'm using the object storage API to post the files, one file at a time.
The problem is that storing 1500 small files takes about 15 minutes in total, including setting up and closing the connection to the object store.
Is there a way to do a sort of bulk post, to send more than one file in one post?
Regards,
Look at the archive-auto-extract feature available within OpenStack Swift (Bluemix Object Storage). I assume that you are familiar with obtaining the X-Auth-Token and Storage_URL from Bluemix Object Storage. If not, my post about large file manifests explains the process. From the docs, the constraints include:
You must use the tar utility to create the tar archive file.
You can upload regular files but you cannot upload other items (for example, empty directories or symbolic links).
You must UTF-8-encode the member names.
Basic steps would be:
Confirm that IBM Bluemix supports this feature by viewing the info details for the service at https://dal.objectstorage.open.softlayer.com/info. You'll see a JSON section within the response similar to:
"bulk_upload": {
"max_failed_extractions": 1000,
"max_containers_per_extraction": 10000
}
Create a tar archive of your desired file set. A gzipped tar (tar.gz) is most common.
Upload this tar archive to object storage with a special query parameter that tells Swift to auto-extract the contents into the container for you (see the sketch after these steps):
PUT /v1/AUTH_myaccount/my_backups/?extract-archive=tar.gz
From the docs: To upload an archive file, make a PUT request. Add the extract-archive=format query parameter to indicate that you are uploading a tar archive file instead of normal content. Include within the request body the contents of the local file backup.tar.gz.
The extracted contents end up as objects named something like:
AUTH_myaccount/my_backups/etc/config1.conf
AUTH_myaccount/my_backups/etc/cool.jpg
AUTH_myaccount/my_backups/home/john/bluemix.conf
...
Inspect the results. Any top-level directory in the archive should create a new container in your Swift object-storage account.
Voila! Bulk upload. Hope this helps.
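A minimal sketch of the upload request in Dart with the http package, assuming you already have the X-Auth-Token and Storage_URL; the container name my_backups and the archive filename are just the examples from the steps above:

import 'dart:io';
import 'package:http/http.dart' as http;

Future<void> bulkUpload(String storageUrl, String authToken) async {
  // storageUrl is the Storage_URL returned alongside the X-Auth-Token,
  // e.g. https://<endpoint>/v1/AUTH_myaccount (placeholder).
  final archiveBytes = File('backup.tar.gz').readAsBytesSync();

  // The extract-archive=tar.gz query parameter tells Swift to unpack the
  // archive into individual objects instead of storing it as one object.
  final response = await http.put(
    Uri.parse('$storageUrl/my_backups?extract-archive=tar.gz'),
    headers: {'X-Auth-Token': authToken},
    body: archiveBytes,
  );

  // Swift returns a summary of how many files were created and any errors.
  print('${response.statusCode}: ${response.body}');
}

Because Swift extracts the archive server-side, the 1500 small files arrive in a single HTTP request instead of 1500 separate ones.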

Update merge path in Word doc

I have 2000 documents that were created by an incorrectly configured application and now use an invalid merge data path.
e.g. c:\APPNAME\WP\MERGE.TXT
The merge data file now lives at H:\MERGE.TXT so all users can access it.
Is there a way to update this path without opening each file in MS Word and reselecting the data source?
Looking forward to your replies.

Google Cloud Storage: Can we get a file or search for a file based on metadata?

While uploading an object, I assigned metadata using x-goog-meta-<keyname>.
Currently, to get a file, we have to use Get Object with the key/filename.
I want to know whether it is possible to do something like Get Object using metadata.
Is there any way we can directly get or search for a file by passing metadata?
Thanks!
No, you cannot.
You can retrieve the metadata via the object name, but you cannot retrieve the object name via the metadata.
If you really needed to, you could create a second bucket containing objects named after the metadata values, whose data or metadata refers to the original object names in the first bucket.
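A minimal sketch of that index-bucket idea in Dart against the JSON API, assuming you already have an OAuth access token; the bucket names, metadata value, and helper names are hypothetical, not part of the original answer:

import 'package:http/http.dart' as http;

// Hypothetical buckets: 'my-data' holds the real objects, 'my-data-index'
// holds tiny index objects whose NAME is the metadata value and whose
// BODY is the real object's name.
const indexBucket = 'my-data-index';

// When uploading a real object, also write an index entry for its metadata value.
Future<void> putIndexEntry(String token, String metaValue, String objectName) async {
  final uri = Uri.parse(
      'https://storage.googleapis.com/upload/storage/v1/b/$indexBucket/o'
      '?uploadType=media&name=${Uri.encodeComponent(metaValue)}');
  await http.post(uri,
      headers: {'Authorization': 'Bearer $token'}, body: objectName);
}

// "Search by metadata": read the index object named after the metadata value.
Future<String> lookupByMetadata(String token, String metaValue) async {
  final uri = Uri.parse(
      'https://storage.googleapis.com/storage/v1/b/$indexBucket/o/'
      '${Uri.encodeComponent(metaValue)}?alt=media');
  final resp = await http.get(uri, headers: {'Authorization': 'Bearer $token'});
  return resp.body; // the original object's name in the data bucket
}

Each upload then costs one extra small write to the index bucket, and a lookup by metadata value becomes a single GET instead of a scan of the whole data bucket.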