Get names of files contained in a Dropbox directory - MATLAB

I need to read a large number of files from a permanent Dropbox webpage.
The reading part is fine, but I'm having trouble finding a way to list all the files contained within it.
To be more precise, I would need something like
files = dir([url_to_Dropbox_directory,'*.file_extension']);
returning the names of all the files.
I've found an example in PHP, but nothing for MATLAB. Using dir was just an example; I'm looking for any solution to this problem.
How can I get the file list from a permanent Dropbox webpage?

You should use the Dropbox API, where you can access that data with an HTTP request. files/list_folder is the specific endpoint that you are looking for.
Here is the documentation for it:
https://www.dropbox.com/developers/documentation/http/documentation#files-list_folder
In addition, you could use the SDK for PHP (or other programming languages). I have used the SDK for JavaScript, and it's easy to use and works well.
PHP SDK:
https://dropbox.github.io/dropbox-sdk-php/api-docs/v1.1.x/
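For illustration, here is a minimal sketch of that HTTP request in Python using the requests library (the access token and folder path are placeholders you'd substitute with your own):

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder: generate a token in the Dropbox App Console

# POST to the files/list_folder endpoint to enumerate a folder's entries
resp = requests.post(
    "https://api.dropboxapi.com/2/files/list_folder",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"path": "/my_folder"},  # placeholder folder path
)
resp.raise_for_status()

# Keep only the names with the extension you care about
names = [e["name"] for e in resp.json()["entries"]
         if e["name"].endswith(".file_extension")]
print(names)

For large folders, check the has_more flag in the response and page through files/list_folder/continue. The same POST should be reproducible in MATLAB with webwrite and weboptions, supplying the Authorization header there instead.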

Related

Microsoft Graph API - File Download - How to use library and subsite paths instead of ids?

Main Problem
I am using Microsoft Graph to access SharePoint resources and I'm having trouble understanding why my syntax isn't working when I try to access a specific file.
My SharePoint is structured like this:
[domain]
- site
  - subsite
    - library
      - file1.pdf
I am able to access a file using the following: (this works)
sites/{SITE_HOSTNAME},{SITE_ID},{SUBSITE_ID}/drives/{DRIVE_ID}/root:/{FILENAME}:/content
However, I would like to access the file using only site, library, and file names (or paths), instead of using ids. Ideally something like this would work: (this does not work)
sites/{SITE_HOSTNAME}:/sites/{SITE_NAME}/{SUBSITE_NAME}/{LIBRARY_NAME}/{FILENAME}:/content
Is anything like this possible?
More details...
I seem to be running into at least two problems:
1. I am unable to access the library as a drive using site paths the way I can using site ids. E.g.:
# these work as expected
sites/{SITE_HOSTNAME},{SITE_ID},{SUBSITE_ID}/drives/{DRIVE_ID}
sites/{SITE_HOSTNAME}:/sites/{SITE_NAME}/{SUBSITE_NAME}:/drives
# this does not work
sites/{SITE_HOSTNAME}:/sites/{SITE_NAME}/{SUBSITE_NAME}:/drives/{DRIVE_ID}
Why am I unable to access a specific DRIVE_ID in this way?
2. As a workaround, I can access the library as a list instead of a drive, but then I am unable to get the contents of the list items. For example:
# this works
sites/{S_HOSTNAME}:/sites/{S_NAME}/{SS_NAME}:/lists/{LIB_NAME}/items/{ITEM_ID}
# this does not work (`/content` is not available)
sites/{S_HOSTNAME}:/sites/{S_NAME}/{SS_NAME}:/lists/{LIB_NAME}/items/{ITEM_ID}/content
Is there a way to get content from a list item, the way it can be retrieved from a drive item?
Maybe I'm missing something about how path-based and id-based endpoints are supposed to work together? None of the examples in the documentation are much help.
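For reference, the id-based route described above as working can be combined with path-based addressing in two steps: resolve the subsite by path to get its id, then switch to ids for the drive. A rough Python sketch (the token, hostname, and names are placeholders standing in for the ones above):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

# Step 1: resolve the subsite by path; the response carries its composite id
site = requests.get(
    f"{GRAPH}/sites/contoso.sharepoint.com:/sites/site/subsite",  # placeholder path
    headers=HEADERS,
).json()

# Step 2: list the subsite's drives and pick the library by display name
drives = requests.get(f"{GRAPH}/sites/{site['id']}/drives", headers=HEADERS).json()
drive_id = next(d["id"] for d in drives["value"] if d["name"] == "library")

# Step 3: fetch the file's content by path within that drive
content = requests.get(
    f"{GRAPH}/drives/{drive_id}/root:/file1.pdf:/content",
    headers=HEADERS,
).content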

How to upload files and attachments to the sobject record using REST API?

Salesforce has two different UIs, and accordingly it can store attached files in different ways.
Two files were uploaded via the classic UI and they are marked as 'attachments'. Other files were uploaded through the new UI and they are marked as 'files'.
I want to upload all of these files using REST API. I cannot find the proper documentation. Can somebody help me with this?
That's not 100% true: in the SF Classic UI you were able to upload Files too. It's "just" a matter of knowing the right API name of the table, and you'll find lots of examples online.
Attachment and Document objects have exactly the same API names; you can view their definitions in the SOAP API definition or in the REST API explorer (there was a tool for this, which you can still see in the screenshot here, but it seems to be down now; maybe they're moving it to another area of the documentation...).
The Files (incl. "Chatter Files") are stored in the ContentDocument and ContentVersion objects. The name is unexpected because, a long time ago, SF purchased another company's product called "Salesforce Content". In the beginning it was a bit of a mess; it's now better integrated into the whole platform, but some quirks still lurk: file folders are sometimes called Libraries in the documentation, but the actual API name is ContentWorkspace. The entity relationship diagram can help a bit: https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_erd_content.htm
ContentDocument is a header to which many places in SF link (imagine a file taking up disk space only once but being cross-linked from multiple records). It always has at least one version, and if you need to update the document you upload a new version; all links in the org keep pointing to the header, so nothing else has to change.
So, how to use it?
REST API guide: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_insert_update_blob.htm
or maybe the Chatter API guide (you tagged the question with chatter, so chances are you already use it): https://developer.salesforce.com/docs/atlas.en-us.chatterapi.meta/chatterapi/connect_resources_files.htm
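A minimal sketch of the blob-insert flow from that guide, in Python (the instance URL, token, API version, and file name are placeholders):

import base64
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder org URL
HEADERS = {
    "Authorization": "Bearer YOUR_ACCESS_TOKEN",  # placeholder token
    "Content-Type": "application/json",
}

with open("report.pdf", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

# Upload a File: inserting a ContentVersion makes SF create the ContentDocument header
resp = requests.post(
    f"{INSTANCE}/services/data/v52.0/sobjects/ContentVersion",
    headers=HEADERS,
    json={"Title": "report", "PathOnClient": "report.pdf", "VersionData": encoded},
)
print(resp.json())  # id of the new ContentVersion

# A classic Attachment is inserted the same way on the Attachment object, e.g.
# json={"ParentId": "001...", "Name": "report.pdf", "Body": encoded}

To make the uploaded File show up on a record, you would additionally insert a ContentDocumentLink pointing at the new document; an Attachment doesn't need that, since ParentId already does the linking.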
Some of my answers here might help (shameless plug). They're about uploading and reading data too, and one is even about Data Loader... but you might experiment with exporting files first to get familiar with the structure before you load:
https://stackoverflow.com/a/48668673/313628
https://stackoverflow.com/a/56268939/313628
https://stackoverflow.com/a/60284736/313628

Slim API - Offer files to download

I am using the Slim framework for my project. I want to offer files for download (mostly PDF files). I found several ways of sending out a public link to the file, which I don't want. I also found a middleware for version 2.4 of Slim, but I am using 3.x.
I just want to access a route, e.g. /downloads/version/2183,
and a download of the file with that ID should start. I have a path to the file on the server available in a variable.
The basic idea behind this is to apply different restrictions on which users may download the file - I can handle that myself. Where I'm stuck is how to deliver the download over the route to the client's browser.
Does anyone know how to achieve this?
Cheers,
Niklas
This is actually very easy:
1. Set the proper headers for the file on the response object.
2. Read the contents of the file into the body of the response object.
$app->get('/my/file', function ($req, $res, $args) {
    // Mark the response as a binary download and suggest a filename
    return $res->withHeader('Content-Type', 'application/octet-stream')
               ->withHeader('Content-Disposition', 'attachment; filename="file.txt"')
               ->write(file_get_contents('file.txt'));
});
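A quick way to try the route from the command line (assuming the app is served locally on port 8080; -J makes curl honor the Content-Disposition filename):

curl -OJ http://localhost:8080/my/file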

Download / upload file using the Add-On SDK

I am currently trying to download a small binary file from the web, in order to upload that to another website, both using the API.
Previous versions seemed to have the "file" API module for such purposes, but I can't see anything similar as of the latest (1.14).
The file to be downloaded would be saved in some form of cache (browser cache, preferably), its path stored somewhere, to be then uploaded to another URL via POST.
How would I go about it, when the process should happen completely in the background?
I checked out the "how to download a file" page, but can't figure out where to download to.
Is there a variable URI for the "Downloads" directory, and does a regular add-on have write privileges in it?
This is important, because the add-on must be able to function properly on various platforms.
You can use the pref browser.download.lastDir, which should work for Windows/Mac as it is saved in the OS format. However, the pref may not be set if the person has never downloaded anything before; in that case you'll have to build the directory path yourself.
var dir = require("sdk/preferences/service").get('browser.download.lastDir');
To build the directory yourself you're going to have to go a little deeper. Check this article on MDN about File I/O, which has examples. The DfltDwnld key should give you the directory you want.
Your add-on will have write permissions to everything Firefox has write permission to.

How can I get all HTML pages from a website subfolder with Perl?

Can you give me an idea of how to get all the HTML files in a subfolder of a website, and in all the folders inside it?
For example:
www.K.com/goo
I want all the HTML files that are in: www.K.com/goo/1.html, ......n.html
Also, if there are subfolders, I want to get them too: www.K.com/goo/foo/1.html...n.html
Assuming you don't have access to the server's filesystem, then unless each directory has an index of the files it contains, you can't be guaranteed to achieve this.
The normal way would be to use a web crawler, and hope that all the files you want are linked to from pages you find.
Look at lwp-mirror and follow its lead.
I would suggest using the wget program to download the website rather than Perl; Perl is not that well suited to the problem.
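For example, a recursive fetch limited to HTML files could look like this (a sketch; -r recurses, -np never ascends to the parent directory, and -A accepts only matching names):

wget -r -np -A html http://www.K.com/goo/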
There are also a number of useful modules on CPAN which will be named things like "Spider" or "Crawler". But ishnid is right. They will only find files which are linked from somewhere on the site. They won't find every file that's on the file system.
You can also use curl to get all the files from a website folder.
Look at this man page and go to the section on -o/--output, which gives you a good idea about that.
I have used this a couple of times.
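For sequentially numbered files like the ones in the question, curl's URL globbing pairs well with -o (a sketch assuming the files really are named 1.html through, say, 100.html; "#1" in the output name expands to the current number):

curl -o "#1.html" "http://www.K.com/goo/[1-100].html"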
Read perldoc File::Find, then use File::Find (this assumes you have access to the server's filesystem, as the first answer notes).