Find paths that exceed SharePoint's 400 character limit - PowerShell

We recently migrated to SharePoint Online and have found that a few of our paths are producing corrupt files because they exceed the 400 character limit set by SharePoint. I am only an admin of our specific site, not a global admin of our SharePoint tenant, so trying to use SharePoint's PowerShell integration does not work. I also tried opening the library in Explorer view and running PowerShell from there to find anything -gt 400, but Windows has a limitation of only resolving paths up to 248 characters before throwing an error. Has anyone run into this issue, or does anyone know of any workarounds?
I have tried using the SharePointPnPPowerShellOnline module with PowerShell, but I get a Forbidden error because I am not a global admin. I also tried looking recursively in Windows Explorer view but get an error.
Here is the error when trying it in Windows Explorer view:
Get-ChildItem : The specified path, file name, or both are too long.
The fully qualified file name must be less than 260 characters, and
the directory name must be less than 248 characters

Did you consider using SharePoint Online CSOM? You could use this NuGet package -> SharePointPnP
With it you could implement any kind of application (like a console app), get all the folders, files, and files in all folders, read their name, path, or whatever is needed, and check whether it is longer than 400 characters. (Please remember that additional columns like FileServerRelativeUrl need to be loaded in the context.)
try
{
    string siteUrl = "SiteURL";
    AuthenticationManager authManager = new AuthenticationManager();
    using (ClientContext context = authManager.GetWebLoginClientContext(siteUrl))
    {
        List list = context.Web.Lists.GetByTitle("LibName");
        context.Load(list);
        context.Load(list.RootFolder);
        context.Load(list.RootFolder.Folders);
        context.Load(list.RootFolder.Files);
        context.ExecuteQuery();

        FolderCollection folderCol = list.RootFolder.Folders;
        foreach (Folder f in folderCol)
        {
            context.Load(f.Files);
            context.ExecuteQuery();
            FileCollection innerFileCol = f.Files;
            foreach (File file in innerFileCol)
            {
                //var x = file.Name;
                // ToDo other logic here
            }
        }

        FileCollection fileCol = list.RootFolder.Files;
        foreach (File file in fileCol)
        {
            //var x = file.Name;
            // ToDo other logic here
        }
    }
}
catch (Exception ex)
{
    // log error
    throw;
}
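Note that the snippet above only descends one level (the root folder and its direct subfolders). If the library nests deeper, you would need to recurse. Below is a minimal sketch of that idea, with a hypothetical CheckFolder helper that tests each file's server-relative URL against the 400 character limit (one round trip per folder, so it can be slow on large libraries):

void CheckFolder(ClientContext context, Folder folder)
{
    // Load the files (including their server-relative URLs) and the subfolders
    context.Load(folder.Files, files => files.Include(f => f.ServerRelativeUrl));
    context.Load(folder.Folders);
    context.ExecuteQuery();

    foreach (File file in folder.Files)
    {
        if (file.ServerRelativeUrl.Length > 400)
        {
            Console.WriteLine(file.ServerRelativeUrl); // path over the limit
        }
    }

    foreach (Folder subFolder in folder.Folders)
    {
        CheckFolder(context, subFolder); // recurse into each subfolder
    }
}

You would call it as CheckFolder(context, list.RootFolder) inside the using block above.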
I hope it will be of some help :)

PowerShell: FTP download not working despite having permissions [duplicate]

What is the best way to download all files in a remote directory using C# and FTP and save them to a local directory?
Thanks.
Downloading all files in a specific folder seems to be an easy task. However, there are some issues which have to be solved. To name a few:
How to get the list of files (System.Net.FtpWebRequest gives you an unparsed list, and the directory list format is not standardized in any RFC)
What if the remote directory has both files and subdirectories? Do we have to dive into the subdirs and download their content?
What if some of the remote files already exist on the local computer? Should they be overwritten? Skipped? Should we overwrite older files only?
What if a local file is not writable? Should the whole transfer fail? Should we skip the file and continue to the next?
How to handle files on the remote disk which are unreadable because we don't have sufficient access rights?
How are symlinks, hard links and junction points handled? Links can easily be used to create an infinite recursive directory tree structure. Consider folder A with subfolder B which is in fact not a real folder but a *nix hard link pointing back to folder A. A naive approach will end in an application which never ends (at least until somebody manages to pull the plug).
A decent third-party FTP component should have a method for handling those issues. The following code uses our Rebex FTP for .NET.
using (Ftp client = new Ftp())
{
    // connect and login to the FTP site
    client.Connect("mirror.aarnet.edu.au");
    client.Login("anonymous", "my#password");

    // download all files
    client.GetFiles(
        "/pub/fedora/linux/development/i386/os/EFI/*",
        "c:\\temp\\download",
        FtpBatchTransferOptions.Recursive,
        FtpActionOnExistingFiles.OverwriteAll
    );

    client.Disconnect();
}
The code is taken from my blog post available at blog.rebex.net. The blog post also references a sample which shows how to ask the user how to handle each problem (e.g. Overwrite/Overwrite older/Skip/Skip all).
Using C#'s FtpWebRequest and FtpWebResponse, you can use the following recursion (make sure the folder strings terminate in '\'):
public void GetAllDirectoriesAndFiles(string getFolder, string putFolder)
{
    List<string> dirItems = DirectoryListing(getFolder);
    foreach (var item in dirItems)
    {
        if (item.Contains('.'))
        {
            GetFile(getFolder + item, putFolder + item);
        }
        else
        {
            var subDirPut = new DirectoryInfo(putFolder + "\\" + item);
            subDirPut.Create();
            GetAllDirectoriesAndFiles(getFolder + item + "\\", subDirPut.FullName + "\\");
        }
    }
}
The "item.Contains('.')" is a bit primitive, but has worked for my purposes. Post a comment if you need an example of the methods:
GetFile(string getFileAndPath, string putFileAndPath)
or
DirectoryListing(getFolder)
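For reference, here is one possible sketch of those two helpers built on FtpWebRequest; the credentials and the NLST-style listing are assumptions, not part of the original answer:

private List<string> DirectoryListing(string getFolder)
{
    // Plain name listing (NLST) of the given folder URL, e.g. "ftp://example.com/path/"
    var request = (FtpWebRequest)WebRequest.Create(getFolder);
    request.Method = WebRequestMethods.Ftp.ListDirectory;
    request.Credentials = new NetworkCredential("user", "password"); // placeholder
    var items = new List<string>();
    using (var response = (FtpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            items.Add(line);
        }
    }
    return items;
}

private void GetFile(string getFileAndPath, string putFileAndPath)
{
    // Download one file in binary mode and write it to the local path
    var request = (FtpWebRequest)WebRequest.Create(getFileAndPath);
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.UseBinary = true;
    request.Credentials = new NetworkCredential("user", "password"); // placeholder
    using (var response = (FtpWebResponse)request.GetResponse())
    using (var ftpStream = response.GetResponseStream())
    using (var fileStream = File.Create(putFileAndPath))
    {
        ftpStream.CopyTo(fileStream);
    }
}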
For the FTP protocol, you can use the FtpWebRequest class from the .NET framework, though it does not have any explicit support for recursive file operations (including downloads). You have to implement the recursion yourself:
List the remote directory
Iterate the entries, downloading files and recursing into subdirectories (listing them again, etc.)
The tricky part is to identify files from subdirectories. There's no way to do that portably with FtpWebRequest. FtpWebRequest unfortunately does not support the MLSD command, which is the only portable way to retrieve a directory listing with file attributes in the FTP protocol. See also Checking if object on FTP server is file or directory.
Your options are:
Do an operation on a file name that is certain to fail for files and succeed for directories (or vice versa). I.e. you can try to download the "name". If that succeeds, it's a file; if that fails, it's a directory. But that can become a performance problem when you have a large number of entries.
You may be lucky and, in your specific case, be able to tell a file from a directory by its name (i.e. all your files have an extension, while subdirectories do not).
You can use a long directory listing (LIST command = ListDirectoryDetails method) and try to parse a server-specific listing. Many FTP servers use a *nix-style listing, where you identify a directory by the d at the very beginning of the entry. But many servers use a different format. The following example uses this approach (assuming the *nix format):
void DownloadFtpDirectory(
    string url, NetworkCredential credentials, string localPath)
{
    FtpWebRequest listRequest = (FtpWebRequest)WebRequest.Create(url);
    listRequest.UsePassive = true;
    listRequest.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
    listRequest.Credentials = credentials;

    List<string> lines = new List<string>();

    using (WebResponse listResponse = listRequest.GetResponse())
    using (Stream listStream = listResponse.GetResponseStream())
    using (StreamReader listReader = new StreamReader(listStream))
    {
        while (!listReader.EndOfStream)
        {
            lines.Add(listReader.ReadLine());
        }
    }

    foreach (string line in lines)
    {
        string[] tokens =
            line.Split(new[] { ' ' }, 9, StringSplitOptions.RemoveEmptyEntries);
        string name = tokens[8];
        string permissions = tokens[0];

        string localFilePath = Path.Combine(localPath, name);
        string fileUrl = url + name;

        if (permissions[0] == 'd')
        {
            Directory.CreateDirectory(localFilePath);
            DownloadFtpDirectory(fileUrl + "/", credentials, localFilePath);
        }
        else
        {
            var downloadRequest = (FtpWebRequest)WebRequest.Create(fileUrl);
            downloadRequest.UsePassive = true;
            downloadRequest.UseBinary = true;
            downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;
            downloadRequest.Credentials = credentials;

            using (WebResponse response = downloadRequest.GetResponse())
            using (Stream ftpStream = response.GetResponseStream())
            using (Stream fileStream = File.Create(localFilePath))
            {
                ftpStream.CopyTo(fileStream);
            }
        }
    }
}
The url must be like:
ftp://example.com/ or
ftp://example.com/path/
Or use a 3rd party library that supports recursive downloads.
For example, with the WinSCP .NET assembly you can download a whole directory with a single call to Session.GetFiles:
// Set up session options
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Ftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
};

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    // Download files
    session.GetFiles("/home/user/*", @"d:\download\").Check();
}
Internally, WinSCP uses the MLSD command, if supported by the server. If not, it uses the LIST command and supports dozens of different listing formats.
(I'm the author of WinSCP)
You could use System.Net.WebClient.DownloadFile(), which supports FTP. MSDN Details here
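Keep in mind that DownloadFile() fetches a single file per call, so downloading a whole directory still needs the listing/recursion step shown above. A minimal sketch, with a placeholder host, credentials, and paths:

using (var client = new System.Net.WebClient())
{
    client.Credentials = new System.Net.NetworkCredential("user", "password"); // placeholder
    // Downloads exactly one remote file to a local path
    client.DownloadFile("ftp://example.com/pub/readme.txt", @"c:\temp\readme.txt");
}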
You can use FTPClient from laedit.net. It's under the Apache license and easy to use.
It uses FtpWebRequest:
first you need to use WebRequestMethods.Ftp.ListDirectoryDetails to get the detailed listing of the folder
then for each file you need to use WebRequestMethods.Ftp.DownloadFile to download it to a local folder

WordprocessingDocument.CreateFromTemplate method creates corrupted MS Word files

I have a proper .dotm template.
When I create a new file based on the template by double-clicking it in Explorer, it creates the correct file (based on this template). The created file's size after saving is 16 KB (without any content).
But if I use the .CreateFromTemplate method in my code, I cannot open the newly created .docx file in MS Word.
The new file's size is 207 KB (just like the .dotm file). MS Word displays "run-time error 5398" and does not open the file.
I'm using the NuGet package DocumentFormat.OpenXml 2.19.0, Word 365 version 16.0.14931.20648 - 32-bit, and code like this:
using (WordprocessingDocument doc = WordprocessingDocument.CreateFromTemplate(templatePath))
{
    doc.SaveAs(newFileName);
}
Google is silent about this error; ChatGPT says:
The "Run-time Error 5398" error means that the file you are trying to open is corrupted or not a valid docx file. Possible reasons for this error may be the following:
The file was not saved correctly after making changes. Verify that the Save() method was called after making changes to the file.
The file was saved with the wrong extension, e.g. as DOTM instead of DOCX.
The file was saved in an invalid format.
There may have been some unhandled exceptions in your code.
When I manually change the extension of the new file from docx to dotm, there is no error on opening, but the file still does not open.
What am I doing wrong with the CreateFromTemplate method?
I tried to reproduce the behavior you described, using the following unit tests:
public sealed class CreateFromTemplateTests
{
    private readonly ITestOutputHelper _output;

    public CreateFromTemplateTests(ITestOutputHelper output)
    {
        _output = output;
    }

    [Theory]
    [InlineData("c:\\temp\\MacroEnabledTemplate.dotm", "c:\\temp\\MacroEnabledDocument.docm")]
    [InlineData("c:\\temp\\Template.dotx", "c:\\temp\\Document.docx")]
    public void CanCreateDocmFromDotm(string templatePath, string documentPath)
    {
        // Let's not attach the template, which is done by default. If a template is attached, the validator complains as follows:
        // The element has unexpected child element 'http://schemas.openxmlformats.org/wordprocessingml/2006/main:attachedTemplate'.
        using (var wordDocument = WordprocessingDocument.CreateFromTemplate(templatePath, false))
        {
            // Validate the document as created with CreateFromTemplate.
            ValidateOpenXmlPackage(wordDocument);

            // Save that document to disk so we can open it with Word, for example.
            wordDocument.SaveAs(documentPath).Dispose();
        }

        using (WordprocessingDocument wordDocument = WordprocessingDocument.Open(documentPath, true))
        {
            // Validate the document that was opened from disk, just to see what Word would open.
            ValidateOpenXmlPackage(wordDocument);
        }
    }

    private void ValidateOpenXmlPackage(OpenXmlPackage openXmlPackage)
    {
        OpenXmlValidator validator = new(FileFormatVersions.Office2019);
        List<ValidationErrorInfo> validationErrors = validator.Validate(openXmlPackage).ToList();

        foreach (ValidationErrorInfo validationError in validationErrors)
        {
            _output.WriteLine(validationError.Description);
        }

        if (validationErrors.Any())
        {
            // Note that Word will most often be able to open the document even if there are validation errors.
            throw new Exception("The validator found validation errors.");
        }
    }
}
In both tests, the documents are created without an issue. Looking at the Open XML markup, both documents look fine. However, while I don't get any runtime error, Word also does not open the macro-enabled document.
I am not sure why that happens. It might be related to your security settings.
Depending on whether or not you really need to use CreateFromTemplate(), you could create a .docm (rather than a .dotm) and create new macro-enabled documents by copying that .docm.
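If that route works for you, creating a new document is then just a file copy; a minimal sketch with hypothetical paths:

// Hypothetical paths; the .docm acts as the "template" and is simply copied
string sourceDocm = "c:\\temp\\TemplateLike.docm";
string newDocm = "c:\\temp\\NewDocument.docm";

System.IO.File.Copy(sourceDocm, newDocm, overwrite: false);

// Open the copy and edit it with the Open XML SDK as usual
using (WordprocessingDocument doc = WordprocessingDocument.Open(newDocm, true))
{
    // make changes here
}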
I opened an issue in the Open XML SDK project on GitHub.

Dynamically Fill Jenkins Job Parameter with GitHub Directory Files

I'm aware of the Dynamic Parameter and Dynamic Extended Choice Parameter plugins. I'm looking for a way to list the text files in a GitHub directory as a drop-down list parameter. Is there a Groovy script or similar that can populate the dropdown with the file names?
You can use the GitHub API to fetch a list of files for a given path in a given repo.
So, for example, to look inside the ratpack-groovy/src/main/java/ratpack/groovy folder of the ratpack project on GitHub, you can do:
import groovy.json.*

def contents = new JsonSlurper().parse('https://api.github.com/repos/ratpack/ratpack/contents/ratpack-groovy/src/main/java/ratpack/groovy'.toURL())
                                .sort { it.type }                                    // Directories first
                                .collect { it.name + (it.type == 'dir' ? '/' : '') } // Put a slash on the end of directories

assert contents == ['handling/',
                    'internal/',
                    'render/',
                    'script/',
                    'server/',
                    'sql/',
                    'template/',
                    'Groovy.java',
                    'GroovyRatpackMain.java',
                    'package-info.java']
Obviously, unauthenticated requests to the GitHub API are limited to 60 per hour.

Unity Patcher: "Name has invalid chars" Error

I have written a patcher for my game, but I am stuck at the part that actually saves the files. I keep getting the following error from Unity:
System.ArgumentException: Name has invalid chars
at System.IO.FileStream..ctor....
Here is the code that is in charge of saving my files:
function downloadFile(file:String){
    var download:WWW = WWW(rawDataFolder+""+file); // download file from platform's raw folder
    yield download; // wait for download to finish

    // var saveLoc = Application.persistentDataPath; // Location where the files will go
    var saveLoc = "C:\\games";
    try{
        Debug.Log(saveLoc+"\\"+file);
        File.WriteAllBytes (saveLoc+"\\"+file+".FILE", download.bytes); // <---- PROBLEM HERE.
    }
    catch(error){
        updateMsg = "Update Failed with error message:\n\n"+error.ToString();
        errorOccured = true;
        Debug.Log(error);
    }
}
I am trying to download a file called "level0". It doesn't have a file extension... in Windows Explorer its type shows simply as 'FILE'. So I was thinking it was a binary file. Am I wrong? What might be causing my null character problem? The missing extension? Any help on this would be amazing.
I found out that my problem originated in the text file that I was reading. The text file must have had spaces in it. Using the ".Trim()" method I was able to get rid of the invalid char error. Once that was removed, it worked perfectly reading files without extensions (binary files).
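In C# terms, the fix amounts to trimming each name read from the list file before building the path; a small sketch, with a hypothetical list file and placeholder bytes:

// Hypothetical list file: one asset name per line, possibly with trailing spaces or '\r'
foreach (string rawName in System.IO.File.ReadAllLines("C:\\games\\filelist.txt"))
{
    string name = rawName.Trim(); // strips the whitespace that triggers "Name has invalid chars"
    string path = "C:\\games\\" + name + ".FILE";
    System.IO.File.WriteAllBytes(path, new byte[0]); // placeholder for the downloaded bytes
}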

HTML5 File API in Firefox Addon SDK

Is there a way to access the HTML5 File API in the Firefox Add-on SDK in a content script?
This is needed in order to store user-added words and their meanings. The data can grow large, so local storage isn't an option.
window.requestFileSystem3 = window.requestFileSystem || window.webkitRequestFileSystem;
gives me the error TypeError: window.requestFileSystem3 is not a function.
I am asking this because I am porting this code from a Google Chrome extension, which allows accessing the File API in a content script.
Additional questions:
1) If the HTML5 File API is not allowed, should I use the file module?
2) Does the file module allow access to any file on the file system, as opposed to the HTML5 File API, which only gives sandboxed access to the file system?
3) Assuming I have to use the file module, what would be the best location to store my files (like the user profile directory or the extension directory), and how would I get this path in code?
I apologize for so many sub-questions inside this question. Google wasn't very helpful regarding this topic.
Any sample code would be very helpful.
Firefox doesn't support writing files via the File API yet, and even when this is added it will probably be accessible to web pages only, not extensions. In other words: yes, if you absolutely need to write to files, then you should use the low-level APIs. You want to store your data in the user profile directory (there is no extension directory; your extension is usually installed as a single packed file). Something like this should work to write a file:
var file = require("sdk/io/file");
var profilePath = require("sdk/system").pathFor("ProfD");
var filePath = file.join(profilePath, "foo.txt");

var writer = file.open(filePath, "w");
writer.writeAsync("foo!", function(error)
{
    if (error)
        console.log("Error: " + error);
    else
        console.log("Success!");
});
For reference: sdk/io/file, sdk/system
You could use TextReader.read() or file.read() to read the file. Unfortunately, the Add-on SDK doesn't seem to support asynchronous file reading, so the read will block the Firefox UI. The only alternative is importing NetUtil and FileUtils via the chrome authority, something like this:
var {components, Cu} = require("chrome");
var {NetUtil} = Cu.import("resource://gre/modules/NetUtil.jsm", null);
var {FileUtils} = Cu.import("resource://gre/modules/FileUtils.jsm", null);

NetUtil.asyncFetch(new FileUtils.File(filePath), function(stream, result)
{
    if (components.isSuccessCode(result))
    {
        var data = NetUtil.readInputStreamToString(stream, stream.available());
        console.log("Success: " + data);
    }
    else
        console.log("Error: " + result);
});