O365 REST API Error Response for Versions method

Large files in Microsoft Office 365 SharePoint do not support the Versions request.
I've isolated the problem to the file size. This is a SharePoint deployment on Microsoft O365.
The issue seems to be with Microsoft's REST API, and it appears to be related to the size of the file. This makes no sense to me, but my test data reflects these facts.
When I request Versions I receive this response, but only for files that are greater than 1 GB in size:
Operation is not valid due to the current state of the object.
I've tried Microsoft support and they have provided no help in regards to the error message or how to change the state of the object.
Any insights from the community would be greatly appreciated as this error has stopped my migration between sites dead in its tracks.
My tests
The file is unlocked. It is visible to users and checked in. I have administrative privileges. A very simple Versions GET results in an error.
Test 1: Does the format of the file make any difference?
Step 1: I zipped the ISO file and stored it in the same location.
Step 2: When I used the REST API to explore the versions, I was presented with the same error:
Operation is not valid due to the current state of the object.
Test 2: Is the file corrupt in some way?
Step 1: Upload the .ISO file to the same location with a new name.
Step 2: When I used the REST API to explore the versions, I was presented with the same error:
Operation is not valid due to the current state of the object.
Test 3: Is the error related to size?
Step 1: I created a small program to generate a text file that repeats a short test string over and over until the file reaches the requested size in gigabytes (a sketch of this generator follows these steps).
Step 2: Create a 1 GB file, test.txt
Step 3: create a 4 GB file, test4.txt
Step 4: Transfer files to the folder.
Step 5: Use the REST API to retrieve the test.txt versions; it works:
<?xml version="1.0" encoding="utf-8"?><feed ...
Step 6: Use the REST API to retrieve the test4.txt versions; it fails:
<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <m:code>-1, System.InvalidOperationException</m:code>
  <m:message xml:lang="en-US">Operation is not valid due to the current state of the object.</m:message>
</m:error>
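For reference, here is a minimal sketch of the kind of generator described in Step 1 (Python; the repeated string, block size, and file names are illustrative, not the original program):

# Repeat a short test string until the file reaches the requested size in gigabytes.
def generate_test_file(path, size_gb, line="This is a test string.\n"):
    target = size_gb * 1024 ** 3            # requested size in bytes
    block = (line * 4096).encode("utf-8")   # write in larger blocks for speed
    written = 0
    with open(path, "wb") as f:
        while written < target:
            f.write(block)
            written += len(block)

generate_test_file("test.txt", 1)    # 1 GB file: Versions request works
generate_test_file("test4.txt", 4)   # 4 GB file: Versions request fails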
This leads me to believe that the O365 implementation of SharePoint has a size-related issue. In the past, there was a problem with files over 2 GB in size, but today Microsoft claims that up to 30 GB can be stored in a single file:
https://support.office.com/en-us/article/File-size-limits-for-workbooks-in-SharePoint-Online-9E5BC6F8-018F-415A-B890-5452687B325E
Given that all of my files are within the guidance provided by Microsoft, all of the files should behave in the same manner.
From the Microsoft site
What are the current file size limits for workbooks?
Your file size limits are determined by your particular subscription to Office 365.
If your Office 365 subscription includes… | And the workbook is stored here… | Then these file size limits apply to workbooks in a browser window
SharePoint Online | A library in a site such as a team site | 0–30 MB
Outlook Web App | Attached to an email message | 0–10 MB
If you're trying to open a workbook that is attached to an email message in Outlook Web App, a smaller file size limit applies. In this case, the workbook must be smaller than 10 MB to open in a browser window.
If you are using Excel Online and Power BI, different file size limits apply. For more information, see Data storage in Power BI and Reduce the size of a workbook for Power BI.
Here's the REST API GET; it's very simple:
https://oceusnetworks.sharepoint.com/opp/_api/web/GetFileByServerRelativeUrl('/opp/Opportunities%202018/RPW%20Spectrum%20Prototype%20Technologies/03-Delivery/Deliverables/Software%20and%20License%20Keys/Windows/WIN2016-SESS_X64FRE_EN-US_DV9.ISO')/Versions
and the response:
<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<m:code>-1, System.InvalidOperationException</m:code>
<m:message xml:lang="en-US">
Operation is not valid due to the current state of the object.
</m:message>
</m:error>
Another file in the same directory:
https://oceusnetworks.sharepoint.com/opp/_api/web/GetFileByServerRelativeUrl('/opp/Opportunities%202018/RPW%20Spectrum%20Prototype%20Technologies/03-Delivery/Deliverables/Software%20and%20License%20Keys/Windows/WIN2016-SESS_X64FRE_EN-US_DV9.MDS')/Versions
with the expected response (XML response truncated for brevity):
<?xml version="1.0" encoding="utf-8"?><feed ... ></feed>
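For completeness, a minimal sketch of issuing the same Versions GET from a script (Python with the requests library; the bearer token and the shortened file path are placeholders, and the real request must carry whatever authentication the tenant uses):

import requests

SITE = "https://oceusnetworks.sharepoint.com/opp"
# Server-relative path of the file; shortened here, use the full path shown above.
FILE_PATH = "/opp/Opportunities 2018/.../WIN2016-SESS_X64FRE_EN-US_DV9.ISO"

resp = requests.get(
    SITE + "/_api/web/GetFileByServerRelativeUrl('" + FILE_PATH + "')/Versions",
    headers={
        "Authorization": "Bearer <access-token>",   # placeholder credential
        "Accept": "application/atom+xml",
    },
)
print(resp.status_code)
print(resp.text)   # small file: <feed ...>; file over ~1 GB: InvalidOperationException error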

Related

SharePoint REST API returns incomplete content of file during downloading

I work on an application for fetching and downloading SharePoint data. For every folder in SharePoint I can get the list of all files inside the given folder by using the following SharePoint REST API endpoint:
/_api/web/GetFolderById('<folder_guid>')/Files
The expected size and GUID are provided for every file, so I can use them when I want to download the file. Then I use the following SharePoint REST API endpoint to actually get the file content:
/_api/web/GetFileById('<file_guid>')/$value
From time to time when I download a file I get less data than expected: the size of the downloaded data is simply different from the value I obtained when listing the file properties. However, when I try to get the content again, either it downloads successfully (the size of the downloaded data equals the expected value) or I get incomplete data once more.
I verified that the first endpoint (one used to get properties of all files in the folder) returns the correct file size. The problem is in the call of the second one.
I see that there is a "Transfer-Encoding: chunked" header in the response. My HTTP client therefore performs a chunked download, and when a zero-length chunk is received the end of the body has, by definition, been reached. So it looks like in some cases SharePoint either returns incomplete data or sends the terminating zero chunk when it should not.
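A minimal sketch of the compare-and-retry workaround described above (Python with requests; the site URL, bearer token, and folder GUID are placeholders):

import requests

BASE = "https://contoso.sharepoint.com/sites/mysite"   # placeholder site URL
HEADERS = {
    "Authorization": "Bearer <access-token>",          # placeholder credential
    "Accept": "application/json;odata=nometadata",
}

def list_files(folder_guid):
    # Same listing endpoint as above; each item carries UniqueId, Name and Length.
    r = requests.get(BASE + "/_api/web/GetFolderById('" + folder_guid + "')/Files", headers=HEADERS)
    r.raise_for_status()
    return r.json()["value"]

def download_with_retry(file_guid, expected, attempts=3):
    # Re-request the content when the body comes back shorter than the listed Length.
    for _ in range(attempts):
        r = requests.get(BASE + "/_api/web/GetFileById('" + file_guid + "')/$value", headers=HEADERS)
        if r.ok and len(r.content) == expected:
            return r.content
    raise IOError("still incomplete after %d attempts (expected %d bytes)" % (attempts, expected))

for f in list_files("<folder_guid>"):
    data = download_with_retry(f["UniqueId"], int(f["Length"]))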
What can be the reason for such strange behavior? Is it a known issue?
We actually see this too, strange behaviour. Many of the files are just small .aspx files, about 3–4 KB, and they consistently come back 15% or more smaller than what appears in the file properties. We're also using the REST API and this is really frustrating. All these strange bugs in SharePoint Online are very annoying.
This is an interesting topic... are those files large, like over 1 GB? It would seem that chunked file download is not a supported approach in SharePoint Online. A better option is to use RPC. Please see these links for examples:
https://sharepoint.stackexchange.com/questions/184789/download-large-files-from-sharepoint-online
https://social.msdn.microsoft.com/Forums/office/en-US/03e55d41-1daf-46a5-b61d-2d80139123f4/download-large-files-using-rest?forum=sharepointdevelopment
https://piyushksingh.com/2016/08/15/download-large-files-from-sharepoint-online/
You could also check whether the MS Graph API works better for this case (a minimal sketch follows the link):
https://learn.microsoft.com/en-us/graph/api/driveitem-get-content?view=graph-rest-1.0&tabs=http
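For example, a rough sketch of downloading file content through the Graph endpoint linked above (Python with requests; the site ID, item ID, and token are placeholders — per the linked docs, /content redirects to a pre-authenticated download URL, which requests follows automatically):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
resp = requests.get(
    GRAPH + "/sites/<site-id>/drive/items/<item-id>/content",   # placeholder IDs
    headers={"Authorization": "Bearer <access-token>"},          # placeholder credential
    stream=True,
)
resp.raise_for_status()
with open("download.bin", "wb") as out:
    for chunk in resp.iter_content(chunk_size=1024 * 1024):
        out.write(chunk)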
... I hope this will be of some help.

How do I do bulk file storage with IBM Object Storage?

I'm using IBM Object Storage to store huge numbers of very small files,
say more than 1,500 small files in one hour (the total size of the 1,500 files is about 5 MB).
I'm using the Object Storage API to post the files, one file at a time.
The problem is that storing 1,500 small files takes about 15 minutes in total, including setting up and closing the connection to the object store.
Is there a way to do a sort of bulk post, to send more than one file in one post?
Regards,
Look at the archive-auto-extract feature available within OpenStack Swift (Bluemix Object Storage). I assume that you are familiar with obtaining the X-Auth-Token and Storage URL from Bluemix Object Storage; if not, my post about large file manifests explains the process. From the docs, the constraints include:
You must use the tar utility to create the tar archive file.
You can upload regular files but you cannot upload other items (for example, empty directories or symbolic links).
You must UTF-8-encode the member names.
Basic steps would be:
Confirm that IBM Bluemix supports this feature by viewing the info details for the service at https://dal.objectstorage.open.softlayer.com/info . You'll see a JSON section within the response similar to:
"bulk_upload": {
"max_failed_extractions": 1000,
"max_containers_per_extraction": 10000
}
Create a tar archive of your desired file set. A gzipped tar is most common.
Upload this tar archive to object storage with a special query parameter that tells Swift to auto-extract the contents into the container for you (a minimal sketch follows these steps):
PUT /v1/AUTH_myaccount/my_backups/?extract-archive=tar.gz
From the docs: To upload an archive file, make a PUT request. Add the extract-archive=format query parameter to indicate that you are uploading a tar archive file instead of normal content. Include within the request body the contents of the local file backup.tar.gz.
Something like:
AUTH_myaccount/my_backups/etc/config1.conf
AUTH_myaccount/my_backups/etc/cool.jpg
AUTH_myaccount/my_backups/home/john/bluemix.conf
...
Inspect the results. Any top-level directory in the archive should create a new container in your Swift object-storage account.
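A minimal sketch of the whole flow in Python (the storage URL, token, container name, and local directory are placeholders; the token and URL come from your Object Storage credentials/auth call):

import os
import tarfile
import requests

STORAGE_URL = "https://dal.objectstorage.open.softlayer.com/v1/AUTH_myaccount"  # placeholder
TOKEN = "<X-Auth-Token>"                                                         # placeholder

# 1. Pack the small files into a single gzipped tar archive.
with tarfile.open("backup.tar.gz", "w:gz") as tar:
    for name in os.listdir("local_files"):                 # placeholder local directory
        tar.add(os.path.join("local_files", name), arcname=name)

# 2. PUT the archive with extract-archive=tar.gz so Swift unpacks it server side.
with open("backup.tar.gz", "rb") as body:
    r = requests.put(
        STORAGE_URL + "/my_backups/?extract-archive=tar.gz",
        headers={"X-Auth-Token": TOKEN},
        data=body,
    )
print(r.status_code, r.text)   # the response summarizes created objects and any per-file errors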
Voila! Bulk upload. Hope this helps.

Watson Visual Recognition Create Classifier 413 Request Entity Too Large

I am trying to create a Watson Visual Recognition classifier using v3 of the REST API, following the documentation at https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/visual-recognition/customizing.shtml#goodclassifying which states:
There are size limitations for training calls and data:
The service accepts a maximum of 10,000 images or 100 MB per .zip file
The service requires a minimum of 10 images per .zip file.
The service accepts a maximum of 256 MB per training call.
However, I am using a "positive" zip file of 48 MB containing 594 images (the largest image is 144 KB) and a "negative" zip file of 16 MB containing 218 images (the largest image is 114 KB), and I keep getting the error:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx</center>
</body>
</html>
In response to:
curl -X POST -F "good_positive_examples=@positive.zip" \
     -F "negative_examples=@negative.zip" \
     -F "name=myclassifier" \
     -H "X-Watson-Learning-Opt-Out: true" \
     "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key=<mykey>&version=2016-05-20"
I've kept reducing the file size by deleting images within the zips and retrying, but I'm well below the stated limits and still get the error.
Anyone got any idea?
Thanks
This (413 Request Entity Too Large) error is intermittent when submitting jobs for training classifiers. I have written a script to process a directory structure of images as classes for training, including both a training (51%) and a test (49%) set. As the API restricts payload sizes to 100 MB per ZIP file, I zipsplit(1) the class ZIP files into batches. When submitting those batches I receive this error, but I discard and retry; invariably the API call succeeds after 2-3 attempts.
I would guess that your in-bound connection manager is counting bytes including re-transmissions over the socket and not reporting actual payload size.
I recommend splitting ZIPs into sizes of less than 95 MB to avoid this complication when submitting images to the training API.
The code is in the age-at-home project under dcmartin on github.com; the training script is in bin/train_vr and testing script is in bin/test_vr. Your mileage may vary.
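A rough sketch of that retry-on-413 loop (Python with requests; the file names, classifier name, and API key are placeholders, and this is not the actual age-at-home script):

import time
import requests

URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers"
PARAMS = {"api_key": "<mykey>", "version": "2016-05-20"}   # placeholder key

def create_classifier(attempts=3):
    for i in range(attempts):
        # Re-open the (already split, <95 MB) ZIPs on every attempt.
        with open("positive.zip", "rb") as pos, open("negative.zip", "rb") as neg:
            r = requests.post(
                URL,
                params=PARAMS,
                headers={"X-Watson-Learning-Opt-Out": "true"},
                files={
                    "good_positive_examples": pos,
                    "negative_examples": neg,
                    "name": (None, "myclassifier"),
                },
            )
        if r.status_code != 413:        # only the intermittent 413 is worth retrying
            return r
        time.sleep(2 ** i)              # brief backoff before the next attempt
    return r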
I just tried with 2 zip files (~45 MB each) and it works.
I think it was a temporary problem in the nginx server. The requests to Visual Recognition go to nginx before going to the actual service.

Azure REST service to create an OS disk

I'm trying to add a disk to a Subscription using the Add Disk REST service ( http://msdn.microsoft.com/en-us/library/windowsazure/jj157178.aspx )
I tried pretty much every combination explained but no matter what I do, the disk is listed as a Data Disk.
Trying to use Fiddler to inspect what the Azure PowerShell cmdlets (https://www.windowsazure.com/en-us/manage/downloads/ ) send just results in an error.
According to MS, you should specify HasOperatingSystem, yet it is not supplied when using Microsoft's PowerShell cmdlets. If you do a List Disks ( http://msdn.microsoft.com/en-us/library/windowsazure/jj157176 ) it should return this too, but the only way to distinguish data disks from OS disks is whether "OS" is null or contains "Windows"/"Linux". Given that information, I tried creating the disk with and without OS and/or HasOperatingSystem in all combinations, and no matter what, it always ends up as a data disk.
The Microsoft PowerShell cmdlets allow using both HTTP and HTTPS in the URI, so I tried both of those too.
Does anyone have a WORKING example of the xml to send, to create an OS disk?
<Disk xmlns="http://schemas.microsoft.com/windowsazure">
<HasOperatingSystem>true</HasOperatingSystem>
<Label>d2luZ3VuYXY3MDEtbmF2NzAxLTAtMjAxMjA4MjcxNTA5NTU=</Label>
<MediaLink>http://winguvhd.blob.core.windows.net/nav701/nav701-0-20120827150955_osdisk.vhd</MediaLink>
<Name>wingunav701-nav701-0-20120827150955</Name>
<OS>Windows</OS>
</Disk>
As I mentioned in my comment, there is indeed an issue with the documentation.
Try this as your request payload. This was provided to me by the person from Microsoft who wrote Windows Azure PowerShell Cmdlets:
<Disk xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure">
<OS>Windows</OS>
<Label>mydisk.vhd</Label>
<MediaLink>https://vmdemostorage.blob.core.windows.net/uploads/mydisk.vhd</MediaLink>
<Name>mydisk.vhd</Name>
</Disk>
I just tried using the XML above, and I can see an OS Disk in my subscription.
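For reference, a rough sketch of posting that payload to the Add Disk operation from the msdn link in the question (Python with requests; the subscription ID, management certificate files, and x-ms-version value are placeholders and may need adjusting for your setup):

import requests

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
URL = "https://management.core.windows.net/" + SUBSCRIPTION_ID + "/services/disks"

payload = """<Disk xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure">
  <OS>Windows</OS>
  <Label>mydisk.vhd</Label>
  <MediaLink>https://vmdemostorage.blob.core.windows.net/uploads/mydisk.vhd</MediaLink>
  <Name>mydisk.vhd</Name>
</Disk>"""

r = requests.post(
    URL,
    data=payload.encode("utf-8"),
    headers={"x-ms-version": "2012-03-01", "Content-Type": "application/xml"},  # version header expected by the Service Management API
    cert=("management_cert.pem", "management_key.pem"),   # client-certificate auth, placeholder files
)
print(r.status_code, r.headers.get("x-ms-request-id"))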

JasperServer using REST to run a report with data source specified at run time

I have no problem executing a report on JasperServer using the RESTful API when the report unit has a predefined data source.
What I need to do, though, is allow my customers to select which database they want to run the report against when they are getting ready to execute a report. I assumed that when I make the PUT request to run the report I could simply include the data source resource descriptor in the ReportUnit resource descriptor passed in the PUT, but it doesn't seem to work.
I even went as far as pulling the resource descriptor for the ReportUnit when it had the data source predefined and testing that passing that resource descriptor in the PUT worked. Then I removed the predefined data source and tried executing the report again using the exact resource descriptor I had pulled previously, and it would not work.
Is this possible?
I may be wrong, and I haven't read much on this, but I think you can create a data source and domain via the resource services.
To update the report file using the resource service, you may have to change the domainQuery node.
I pulled out the JRXML for my JSON-based report file and it looked something like this:
<resourceDescriptor name="domainQuery.xml" wsType="xml" uriString="/adhoc/topics/myjsonposts_files/domainQuery.xml" isNew="false">
Hope this will help you find your solution.