Atomic upload of many files to Dropbox?

This was asked in an interview. I would like to know the possible answers to this question.
"You have a shared folder, which everybody can see. You want to upload 100 files. This upload of 100 files should atomic i.e either all files are available to download to any user or no file is available to download.
One can argue that he will delete the uploaded files if operation fails in between but that is not an option because once a file is uploaded, it becomes visible to other users.
What can be the possible solutions?
My solution: upload the files to a private folder first, and then share that folder inside the main shared folder.

It will be almost impossible to achieve this kind of isolation if you are using these cloud services as they are. You can do it if you have your own server; distributed systems is the field that deals with this kind of problem.
You can put a lock on the folder, upload all the files, and then release (or change) the lock on that folder.
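As an illustration of the staging-folder idea from the question, here is a minimal sketch using the Dropbox Python SDK (the access token, folder names and file list are placeholders, and error handling is omitted; Dropbox does not document the final move as strictly atomic, but it narrows the exposure to a single operation):

import dropbox

dbx = dropbox.Dropbox("ACCESS_TOKEN")

STAGING = "/staging-batch"    # private folder no other user can see
SHARED = "/shared/new-batch"  # final destination inside the shared folder

# 1. Upload every file into the private staging folder first.
for name in ["file001.bin", "file002.bin"]:  # ... all 100 files
    with open(name, "rb") as fh:
        dbx.files_upload(fh.read(), f"{STAGING}/{name}")

# 2. Only after all uploads succeed, move the whole folder into the
#    shared area, so other users never see a partially uploaded batch.
dbx.files_move_v2(STAGING, SHARED)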

Related

How to assign custom names to release artifacts on GitHub

I'm evaluating GitHub as a way to keep some Excel files, containing basically financial models, under version control and to share them.
The issue I'm facing is this: I need to share the Release artifacts (a bunch of xlsx files) with people outside GitHub, so I'd like to include the version number in the filename, to be sure that even when the files are shared further by business people through email or other non-GitHub means, that information won't be lost.
Is there a way to rename the artifacts automatically? GitHub Actions seemed like the right way to address this, but unfortunately it's still unavailable on the Enterprise Server my company is using (v2.19.13; I don't have any administrative access to it, by the way), and adding a CI toolchain just to rename some files is probably too much.
Thank you in advance for any response!
Michele
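For illustration only (this is not from the thread): one low-tech option is a small script that stamps the release tag into each filename before the files are attached to the release. The directory names and tag below are made-up examples.

import shutil
from pathlib import Path

def tag_artifacts(src_dir: str, tag: str, out_dir: str) -> None:
    # Copy every .xlsx in src_dir to out_dir with the tag embedded in the name,
    # e.g. model.xlsx -> model-v1.2.0.xlsx
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for f in Path(src_dir).glob("*.xlsx"):
        shutil.copy(f, out / f"{f.stem}-{tag}{f.suffix}")

tag_artifacts("models", "v1.2.0", "release-artifacts")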

Create a zip file inside Dropbox

I have a system in Dropbox where I have folders for multiple users, with contents inside that I have them download every now and then. Anyway, I want to zip the contents of each person's folder for faster downloading, since downloading folders over Dropbox is very slow (at least to my knowledge, unless there is a faster way to download besides sharing a URL). How could I go about doing that?
No, unfortunately the Dropbox API doesn't currently offer a way to create/download a zip of a folder, or otherwise download folders in bulk, but we'll consider it a feature request.
Edit:
The Dropbox API now offers the ability to download folders as zips:
https://www.dropbox.com/developers/documentation/http/documentation#files-download_zip
If you're using an official SDK, there will also be a corresponding method for this endpoint.
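For example, with the official Python SDK the call might look like this (the access token, folder path and output filename are placeholders):

import dropbox

dbx = dropbox.Dropbox("ACCESS_TOKEN")

# Download the whole folder as a single zip archive.
metadata, response = dbx.files_download_zip("/users/alice")
with open("alice.zip", "wb") as fh:
    fh.write(response.content)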

How can I do a stream copy/merge of only a subdirectory in Perforce?

I'm using P4V. I work in a subdirectory (e.g. code/jorge) and other people work in another subdirectory (e.g. art/) that I never deal with. Additionally, I have a stream where I do my personal work. Every so often I need to merge changes from the mainline to my stream, and copy them back up. However, the files in art/ are large binaries, and Perforce spends a long time thinking about them even though I've not touched them. Is there any way to have Perforce merge/copy my directory (code/jorge) without it spending time trying to merge art/? Can I tell P4V to merge/copy only the code directory?
Related but not identical question: Perforce streams, exclude files from merge/copy
If you don't touch those files, it might be easier to not include them in your stream at all rather than manually exclude them every time you do a merge.
I.e. if your stream Paths currently says:
share ...
maybe it should instead be:
share code/jorge/...
or, if you need the art for builds but never need to modify it, you might consider doing something like:
import art/...
share code/...
I am not sure this is the recommended option, but you can actually merge without using the "Stream to Stream" option, using the standard "Specify source and target file" option instead, even if you are in a stream depot.
So you can select any subdirectory as your source, like 'dev/code/jorge', and the same subdirectory as the destination, like 'main/code/jorge', and it will only consider that directory. We do it routinely in my team because we have a big monorepo and have not taken the time to set up multiple depots since we migrated to Perforce.

Sharing of sub-folders in ownCloud

I have a complicated system of folders and I need to share 2nd and 3rd level folders with certain groups of users while maintaining the full path to the folder.
Is this possible? I tried, but without success: if I share a folder, e.g. Project 1 -> Administration, with the "Group Administration", on the client I only see the Administration folder, whereas I need to replicate the entire structure.
Thanks for the support
With the current ownCloud sharing implementation this is simply not possible. Every shared item appears directly in the "Shared" folder of the user the file/folder is shared with.
Update: at the moment ownCloud (and I guess also Nextcloud) allows users to move around and rename files/folders shared with them. So even if you could enforce a certain structure on your users, they could always change it afterwards.
You could always report a feature request for it (or maybe there even already is one) here: https://github.com/owncloud/core/issues/ .

Does Perforce support file streams on Windows?

Does Perforce support file streams on Windows, i.e. on NTFS?
Sorry to resurrect such an old thread, but I found a workaround that will allow Perforce clients (P4/P4V) to create ADS data.
Chapter 2 of the Perforce Users Guide has a section titled "Mapping files to different locations in the workspace". This section covers how to remap the depot to the workspace and vice-versa.
Let's assume that you want to store some asset metadata with your files in Perforce. You create a tool that generates an ADS called asset.meta such that your filenames are of the form file.ext:asset.meta.
If you modify your Perforce Workspace to include the following:
//depot/....asset.meta //CLIENT/...:asset.meta
then this mapping will take asset.meta ADS streams and create regular files for them in Perforce.
So foo.txt with an asset.meta ADS gets stored as two files in the depot: foo.txt and foo.txt.asset.meta. When you sync them down, they end up joined correctly again.
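As an illustration (not part of the original answer), a tool that generates the asset.meta stream needs nothing more than the file.ext:asset.meta path syntax; the following sketch assumes Windows and an NTFS volume:

from pathlib import Path

BASE = "foo.txt"
Path(BASE).write_text("regular file contents")

# "foo.txt:asset.meta" addresses the alternate data stream attached to foo.txt
with open(f"{BASE}:asset.meta", "w") as fh:
    fh.write("source=exporter;version=3")

with open(f"{BASE}:asset.meta") as fh:
    print(fh.read())  # -> source=exporter;version=3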
Now there are two caveats to be aware of.
1.) P4V will not see the ADSs. You have to add them manually through P4, the P4API or some other explicit mechanism.
2.) If the base file (foo.txt from our example) is not marked writable, you will not be able to sync the ADS.
You will have to deal with #1 in whatever way you want. #2 is trickier IMO. You can +w the main files so they are always writable on the client (if your workflows can accommodate that), or you can write a custom sync routine that handles making files read-only or read-write as necessary.
I may respond to this if I hear any good ideas from Perforce other than the ones mentioned above, but considering how high this page shows up in Google when searching for "Perforce Alternate Data Stream", I thought this might help someone.
I just got a response from Perforce:
Perforce does not have any special support for NTFS Alternate Data Streams.
This means that you will lose any additional data streams when you submit a file into Perforce.