SharePoint::SOAPHandler file size limit - perl

I am having the same issue as described in "SharePoint::SOAPHandler perl script works for display not copy".
In my case I have been able to narrow the issue down to file size. I can successfully copy a 47332-byte file to SharePoint, but I cannot copy a 53689-byte file.
Note that using the web UI I can successfully upload much larger files.
Any thoughts on why uploading using SharePoint::SOAPHandler is failing in this manner?

Related

Does IPFS provide a block-level file copying feature?

Update 11-14-2019: please see the following discussion for a feature request to IPFS: "git-diff feature: Improve efficiency of IPFS for sharing updated file. Decrease file/block duplication".
There is a .tar.gz file, file.tar.gz (~1 GB), containing a data.txt file, stored in my IPFS repo; it was pulled from another node, node-a.
I open the data.txt file, add a single character at a random location in the file (the beginning, the middle, or the end), compress it again as file.tar.gz, and store it in my IPFS repo.
When node-a wants to fetch the updated tar.gz file again, a re-sync will take place.
I want to know whether IPFS will sync the entire 1 GB file again, or whether there is a way for only the changed parts of the file (called the delta) to be synced.
A similar question has been asked about Google Drive: Does Google Drive sync entire file again on making small changes?
The feature you are asking about is called "block-level file copying". With this feature, when you make a change to a file, rather than copying the entire file from your hard drive to the cloud server again, only the parts of the file that changed (called the delta) get sent.
As far as I know, Dropbox, pCloud, and OneDrive offer block-level sync, although OneDrive only supports it for Microsoft Office documents.

copy (multiple) files only if filesize is smaller

I'm trying to make my image reference library take up less space. I know how to make Photoshop batch-save directories of images with a particular amount of compression. BUT some of my images were originally saved with more compression than I would have used.
So I wind up with two directories of images: some of the newer files have a larger file size, some smaller, and some the same. I want to copy the new images over into the old directory, excluding any files that have a larger file size (or the same, though these probably aren't numerous enough for me to care about the extra time to process them).
I obviously don't want to sit there and compare each file by hand, but other than that I'm not picky about how it gets tackled.
I'm running Windows 10, by the way.
We have similar situations. Instead of Photoshop, I use FFmpeg (with its qscale option) to batch re-encode multiple images into a subfolder, then use XXCOPY to overwrite only the larger original source images. In fact I ended up creating my own BATCH file which lets FFmpeg do the batch re-encoding (using its "best" qscale setting), then lets ExifTool batch-copy the metadata to the newly encoded images, then lets XXCOPY copy only the smaller newly created images (a rough sketch of this is included below). It is all automated, and the "new" folder, with its leftover newly created but larger-sized images, is deleted too. This way I save considerable disk space, as I have many images categorized/grouped in many different folders. But you should make a test run first or back up your images. I hope this works for you.
Here is my XXCOPY command line:
xxcopy "C:\SOURCE" "C:\DESTINATION" /s /bzs /y
The original forum post where I learned this from is "overwrite only files wich are smaller":
https://groups.google.com/forum/#!topic/alt.msdos.batch.nt/Agooyf23kFw
Just to add: XXCOPY can also do the opposite, copying only when the larger file size is wanted instead, which I think is the /BZL switch. I think it's also mentioned in that original thread.
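For reference, here is a rough sketch of that workflow as a single batch file. The folder names, the *.jpg pattern, and the qscale value are only assumptions for illustration; adjust them for your own setup and test on a copy of your images first.
mkdir "C:\SOURCE\new"
rem re-encode every image into the "new" subfolder (for JPEG output, -qscale:v 2 is roughly the best-quality setting)
for %%F in ("C:\SOURCE\*.jpg") do ffmpeg -i "%%F" -qscale:v 2 "C:\SOURCE\new\%%~nxF"
rem copy the metadata from each original onto its re-encoded counterpart
exiftool -tagsfromfile "C:\SOURCE\%%f.%%e" -all:all -overwrite_original "C:\SOURCE\new"
rem overwrite the originals only where the re-encoded file is smaller, then remove the temporary folder
xxcopy "C:\SOURCE\new" "C:\SOURCE" /s /bzs /y
rmdir /s /q "C:\SOURCE\new"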

How can I read lines from a file in TYPO3?

I have to read a big file in TYPO3 (version 6.2.10) in a plugin we wrote. The file is uploaded via the backend, and whenever it changes it is uploaded again.
Currently I use:
$file->getOriginalResource()->getContents();
$file is a \TYPO3\CMS\Extbase\Domain\Model\FileReference.
That works fine as long as the file in question is small enough. The problem is that the content of the file is read into memory completely. With bigger files I reach the point at which this fails. So my question is: how can I read the contents of the file line by line?
You can copy it to a temporary local path with
$path = $file->getOriginalResource()->getForLocalProcessing(false);
Then you can use fgets as usual to loop through the file line by line.
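A minimal sketch of that approach (the loop body is just a placeholder):
$path = $file->getOriginalResource()->getForLocalProcessing(false);
$handle = fopen($path, 'r');
if ($handle !== false) {
    while (($line = fgets($handle)) !== false) {
        // process one line at a time instead of holding the whole file in memory
    }
    fclose($handle);
}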

Talend tWaitForFile insufficiency

We have a producer process that writes files into a specific folder and runs continuously. We have to read the files one by one using Talend, and there are two issues:
The first: tWaitForFile only sees files which exist before it starts, so files created after the component has started are not visible to it.
The second: there is no way to know whether a file has been released by the producer process, so it may be read while it is not yet completely written; the _wait_release_ parameter of tWaitForFile does not work on Linux systems!
So how can I make Talend read completely written files from a directory whose number of files keeps increasing?
I'm not sure what you mean by your first issue. tWaitForFile has options to trigger when files are created, modified or deleted in a folder.
As for the second issue, your best bet here is for the file producer to create an OK or control file, which is a 0-byte touch file, when it has finished writing the file you want.
In this case you simply look for the appearance of the OK file and then pick up the relevant completed file. If you name the two files the same but with different file extensions (the OK file is typically called ".OK"), this should be easy enough to look for. So you would set your tWaitForFile to look for "*.OK" files, connect it with an Iterate link to a tFileInputDelimited (in the case that you want to pick up a delimited text file), and then declare the file name as
((String)globalMap.get("tWaitForFile_1_CREATED_FILE")).substring(0, ((String)globalMap.get("tWaitForFile_1_CREATED_FILE")).length() - 3) + ".txt"

Extracting file names from an online data server in Matlab

I am trying to write a script that will allow me to download numerous (thousands of) data files from a data server (e.g., http://hydro1.sci.gsfc.nasa.gov/thredds/catalog/GLDAS_NOAH10SUBP_3H/2011/345/). Unfortunately, the names of the files in each directory are not formatted in a consistent way (the time at which they were created is appended to the end of the file name). I need to be able to specify the file name to subset the data (I have a special tool for these data types) and download it. I cannot find a function in MATLAB that will extract the file names.
I have looked at URLREAD, but it downloads everything, including the HTML code.
Thanks for your help!
You can easily parse the linked page.
x=urlread(url)
links=regexp(x,'<a href=''([^>]+)''>','tokens')
This reads every link; you then have to filter out the ones you don't want.
For example this gets all grb files:
a=regexp(x,'<a href=''([^>]+.grb)''>','tokens')
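As a rough sketch (assuming the matched hrefs are plain relative file names on the same server, which you should verify for this catalog), you could then download each match with urlwrite:
% url is the catalog page address used for urlread above
for k = 1:numel(a)
    fname = a{k}{1};                  % file name captured by the regexp
    urlwrite([url '/' fname], fname); % save it to the current folder under the same name
end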