Dynamically Fill Jenkins Job Parameter with GitHub Directory Files

I'm aware of the Dynamic Parameter and Dynamic Extended Choice Parameter plugins. I'm looking for a way to list the text files in a GitHub directory as a drop-down list parameter. Is there a Groovy script or similar that can populate the dropdown with the file names?

You can use the GitHub API to fetch the list of files for a given path in a given repo.
So for example, to look inside the ratpack-groovy/src/main/java/ratpack/groovy folder of the ratpack project on GitHub, you can do:
import groovy.json.*

def contents = new JsonSlurper().parse('https://api.github.com/repos/ratpack/ratpack/contents/ratpack-groovy/src/main/java/ratpack/groovy'.toURL())
    .sort { it.type }                                    // directories first ('dir' sorts before 'file')
    .collect { it.name + (it.type == 'dir' ? '/' : '') } // put a slash on the end of directory names

assert contents == ['handling/',
                    'internal/',
                    'render/',
                    'script/',
                    'server/',
                    'sql/',
                    'template/',
                    'Groovy.java',
                    'GroovyRatpackMain.java',
                    'package-info.java']
Obviously, you can only make 60 unauthenticated requests per hour; authenticating (for example with a personal access token) raises the limit to 5,000 requests per hour.

Download all files in a remote directory using C# and FTP

What is the best way to download all files in a remote directory using C# and FTP and save them to a local directory?
Downloading all files in a specific folder seems like an easy task. However, there are several issues that have to be solved. To name a few:
How to get the list of files (System.Net.FtpWebRequest gives you an unparsed list, and the directory listing format is not standardized in any RFC)
What if the remote directory has both files and subdirectories? Do we have to dive into the subdirectories and download their content?
What if some of the remote files already exist on the local computer? Should they be overwritten? Skipped? Should we overwrite older files only?
What if a local file is not writable? Should the whole transfer fail? Should we skip the file and continue to the next one?
How to handle files on the remote side that are unreadable because we don't have sufficient access rights?
How are symlinks, hard links, and junction points handled? Links can easily be used to create an infinitely recursive directory structure. Consider folder A with subfolder B, which is in fact not a real folder but a *nix link pointing back to folder A. A naive approach would produce an application that never ends (at least until somebody manages to pull the plug).
A decent third-party FTP component should have a method for handling these issues. The following code uses our Rebex FTP for .NET.
using (Ftp client = new Ftp())
{
    // connect and log in to the FTP site
    client.Connect("mirror.aarnet.edu.au");
    client.Login("anonymous", "my#password");

    // download all files
    client.GetFiles(
        "/pub/fedora/linux/development/i386/os/EFI/*",
        "c:\\temp\\download",
        FtpBatchTransferOptions.Recursive,
        FtpActionOnExistingFiles.OverwriteAll);

    client.Disconnect();
}
The code is taken from my blog post, available at blog.rebex.net. The blog post also references a sample which shows how to ask the user how to handle each problem (e.g. Overwrite/Overwrite older/Skip/Skip all).
Using C#'s FtpWebRequest and FtpWebResponse, you can use the following recursion (make sure the folder strings terminate in '\'):
public void GetAllDirectoriesAndFiles(string getFolder, string putFolder)
{
    List<string> dirItems = DirectoryListing(getFolder);
    foreach (var item in dirItems)
    {
        if (item.Contains('.'))
        {
            // Entries with a dot are treated as files
            GetFile(getFolder + item, putFolder + item);
        }
        else
        {
            // Everything else is treated as a subdirectory: create it locally and recurse
            var subDirPut = new DirectoryInfo(putFolder + "\\" + item);
            subDirPut.Create();
            GetAllDirectoriesAndFiles(getFolder + item + "\\", subDirPut.FullName + "\\");
        }
    }
}
The "item.Contains('.')" is a bit primitive, but has worked for my purposes. Post a comment if you need an example of the methods:
GetFile(string getFileAndPath, string putFileAndPath)
or
DirectoryListing(getFolder)
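As a starting point, here is a minimal sketch of what those two helpers could look like with FtpWebRequest, assuming the folder and file arguments are full ftp:// URLs and that simple username/password credentials suffice (both are assumptions, not the answerer's original code):

public void GetFile(string getFileAndPath, string putFileAndPath)
{
    // Download a single remote file to a local path
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create(getFileAndPath);
    request.Method = WebRequestMethods.Ftp.DownloadFile;
    request.Credentials = new NetworkCredential("user", "password"); // assumed credentials
    using (var response = request.GetResponse())
    using (Stream ftpStream = response.GetResponseStream())
    using (Stream fileStream = File.Create(putFileAndPath))
    {
        ftpStream.CopyTo(fileStream);
    }
}

public List<string> DirectoryListing(string getFolder)
{
    // List the bare entry names in a remote folder (NLST)
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create(getFolder);
    request.Method = WebRequestMethods.Ftp.ListDirectory;
    request.Credentials = new NetworkCredential("user", "password"); // assumed credentials
    var items = new List<string>();
    using (var response = request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        while (!reader.EndOfStream)
        {
            items.Add(reader.ReadLine());
        }
    }
    return items;
}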
For the FTP protocol, you can use the FtpWebRequest class from the .NET Framework, though it does not have any explicit support for recursive file operations (including downloads). You have to implement the recursion yourself:
List the remote directory
Iterate the entries, downloading files and recursing into subdirectories (listing them again, etc.)
The tricky part is to distinguish files from subdirectories. There's no portable way to do that with FtpWebRequest: it unfortunately does not support the MLSD command, which is the only portable way to retrieve a directory listing with file attributes in the FTP protocol. See also Checking if object on FTP server is file or directory.
Your options are:
Do an operation on a file name that is certain to fail for files and succeed for directories (or vice versa). I.e. you can try to download the "name": if that succeeds, it's a file; if it fails, it's a directory. But this can become a performance problem when you have a large number of entries.
You may be lucky and, in your specific case, be able to tell a file from a directory by its name (i.e. all your files have an extension, while subdirectories do not)
You can use a long directory listing (LIST command = ListDirectoryDetails method) and try to parse a server-specific listing. Many FTP servers use a *nix-style listing, where you identify a directory by the d at the very beginning of the entry. But many servers use a different format. The following example uses this approach (assuming the *nix format):
void DownloadFtpDirectory(
    string url, NetworkCredential credentials, string localPath)
{
    FtpWebRequest listRequest = (FtpWebRequest)WebRequest.Create(url);
    listRequest.UsePassive = true;
    listRequest.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
    listRequest.Credentials = credentials;

    List<string> lines = new List<string>();

    using (WebResponse listResponse = listRequest.GetResponse())
    using (Stream listStream = listResponse.GetResponseStream())
    using (StreamReader listReader = new StreamReader(listStream))
    {
        while (!listReader.EndOfStream)
        {
            lines.Add(listReader.ReadLine());
        }
    }

    foreach (string line in lines)
    {
        // *nix-style listing: permissions, links, owner, group, size, date (3 tokens), name
        string[] tokens =
            line.Split(new[] { ' ' }, 9, StringSplitOptions.RemoveEmptyEntries);
        string name = tokens[8];
        string permissions = tokens[0];

        string localFilePath = Path.Combine(localPath, name);
        string fileUrl = url + name;

        if (permissions[0] == 'd')
        {
            // Directory: create the local counterpart and recurse into it
            Directory.CreateDirectory(localFilePath);
            DownloadFtpDirectory(fileUrl + "/", credentials, localFilePath);
        }
        else
        {
            // File: download it
            var downloadRequest = (FtpWebRequest)WebRequest.Create(fileUrl);
            downloadRequest.UsePassive = true;
            downloadRequest.UseBinary = true;
            downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;
            downloadRequest.Credentials = credentials;

            using (var downloadResponse = downloadRequest.GetResponse())
            using (Stream ftpStream = downloadResponse.GetResponseStream())
            using (Stream fileStream = File.Create(localFilePath))
            {
                ftpStream.CopyTo(fileStream);
            }
        }
    }
}
The url must be like:
ftp://example.com/ or
ftp://example.com/path/
Or use a 3rd-party library that supports recursive downloads.
For example, with the WinSCP .NET assembly you can download a whole directory with a single call to Session.GetFiles:
// Set up session options
SessionOptions sessionOptions = new SessionOptions
{
    Protocol = Protocol.Ftp,
    HostName = "example.com",
    UserName = "user",
    Password = "mypassword",
};

using (Session session = new Session())
{
    // Connect
    session.Open(sessionOptions);

    // Download files
    session.GetFiles("/home/user/*", @"d:\download\").Check();
}
Internally, WinSCP uses the MLSD command, if supported by the server. If not, it uses the LIST command and supports dozens of different listing formats.
(I'm the author of WinSCP)
You could use System.Net.WebClient.DownloadFile(), which supports FTP; see the WebClient documentation on MSDN for details.
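For a single file, a minimal sketch might look like this (the host, path, and credentials are placeholders). Note that WebClient only downloads one file per call, so the directory recursion from the other answers is still up to you:

using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential("anonymous", "guest");
    client.DownloadFile(
        "ftp://example.com/pub/somefile.txt", // remote file URL
        @"c:\temp\somefile.txt");             // local destination
}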
You can use FTPClient from laedit.net. It's under the Apache license and easy to use.
It uses FtpWebRequest:
first you need to use WebRequestMethods.Ftp.ListDirectoryDetails to get the detailed listing of the folder
then, for each file, you need to use WebRequestMethods.Ftp.DownloadFile to download it to a local folder

Get file content on GitHub with GraphQL API and Python

Given a list of repositories on GitHub with 'repoName' and 'userName', I need to find all the '.java' files and get the content of those Java files. I tried the REST API first but ran into the rate limit very easily; now I'm switching to the GraphQL API, but I don't know how to achieve this.
Here is how I would do it:
Algo Identify_java_files
Entry: path of the folder
Out: java files in the folder

Identify all files in the folder of the repository
For each file
    if the type of the file is "blob":
        if the name of the file ends with ".java":
            get its content
    else if the type of the file is "tree":
        Identify_java_files(path of the tree)
You can easily implement this pseudocode using Python. My pseudocode assumes recursion, but it can be done otherwise; it's just for the example. You will need the requests and json libraries.
Here are the queries you might need, and you can use the explorer to test them.
{
  repository(name: "checkout", owner: "actions") {
    defaultBranchRef {
      name
    }
  }
}
This query allows you to check the name of the default branch of the repository. You will need it for the following queries; alternatively, you can use a specific branch, but you will have to know its name. I don't know whether you can get all the branch names of a repository (and I don't believe you can).
{
  repository(name: "checkout", owner: "actions") {
    object(expression: "main:") {
      ... on Tree {
        entries {
          path
          type
        }
      }
    }
  }
}
This query gets the content of the root folder for a specific repository. The expression "main:" refers to the branch of the repository along with the path. Here the branch is main and the path is empty (it comes after the ":"), meaning we are looking at the root folder. Some repositories use "master" as the default branch, so be sure which branch to use.
To check the content of a file, you can use this accepted answer.
I updated the example in order for you to be able to try it.
{
  repository(name: "checkout", owner: "actions") {
    object(expression: "main:.github/workflows/test.yml") {
      ... on Blob {
        text
      }
    }
  }
}
You send your requests to the API using requests or a similar library, and store the responses as JSON or similar for processing.
As a side note, I do not think it is possible to achieve this without issuing multiple queries. I recently had to do something similar, and this is my first SO answer, so let me know if anything is unclear.
Edit:
You can use this answer to list all files in a repository.

How to get the content of all files in a given directory?

Octokit allows me to easily retrieve the list of files in a given folder like so:
var directoryContent = await _githubClient.Repository.Content.GetAllContents("MyUsername", "MyProject", "TheFolderName").ConfigureAwait(false);
Unfortunately, the Content property for each file is null, which means that I have to loop through the list of files and retrieve each one separately, like so:
foreach (var file in directoryContent)
{
    var fileWithContent = await _githubClient.Repository.Content.GetAllContents("MyUsername", "MyProject", file.Path).ConfigureAwait(false);
    // ... do something with the content ...
}
Is there a more efficient way to get the content of all files in a given folder?
Just for reference: this is on purpose. If you read a whole directory and loaded all the content in one step, it would be a huge load. So the first call skips the content (the Content property is null); the second call requests just one file, and then the content is present.
The only thing I'm still missing is a method that returns just one result instead of a list when I pull a single file.
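An alternative worth sketching is the Git Database API, which Octokit also exposes: fetch the whole tree recursively in a single call, then fetch each blob by its SHA. This still issues one request per file, but it replaces the per-directory listings with one tree call. The repository, branch, and folder names below are placeholders, and this is an unverified sketch rather than a confirmed recipe:

// Get the full tree of the branch in one request
var tree = await _githubClient.Git.Tree.GetRecursive("MyUsername", "MyProject", "heads/master").ConfigureAwait(false);

foreach (var item in tree.Tree)
{
    // Keep only files (blobs) under the folder we care about
    if (item.Type == TreeType.Blob && item.Path.StartsWith("TheFolderName/"))
    {
        var blob = await _githubClient.Git.Blob.Get("MyUsername", "MyProject", item.Sha).ConfigureAwait(false);

        // Blob content is Base64-encoded
        var content = Encoding.UTF8.GetString(Convert.FromBase64String(blob.Content));
        // ... do something with the content ...
    }
}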

TypeLite generate external modules?

I am trying to generate external modules rather than a type definition file. I believe I need to do the following:
Change the extension of the file to .ts instead of .d.ts.
Generate one file per module.
Add the keyword "export" in front of each interface and enum.
I was easily able to change the extension of the file by changing the "output extension" setting in the tt file.
I cannot figure out how to split the modules into separate files.
I cannot figure out how to add the export keyword to each interface.
TypeLITE doesn't support generating multiple files. This feature has been requested by several users, but I am not aware of a simple way to generate multiple files from a single tt file.
The export keyword can't be added without changing the source code of the library (TsGenerator.cs). This is a very specific requirement, so I probably won't add it to the library.
TypeLite is a good project, but lacking in documentation and examples; it's open source, so anyone can contribute and make it better.
As for creating a file per class, I solved it using the code below.
private static void GenerateTypeScriptContracts(string assemblyFile, string outputPath)
{
    // Clean the TS output folder
    System.IO.DirectoryInfo di = new DirectoryInfo(outputPath);
    foreach (FileInfo file in di.GetFiles())
    {
        file.Delete();
    }

    var assembly = Assembly.LoadFrom(assemblyFile);
    // If you want a subset of classes from this assembly, filter them here
    var models = assembly.GetTypes();

    foreach (var model in models)
    {
        var generator = new TypeScriptFluent()
            .WithConvertor<Guid>(c => "string")
            .WithMemberFormatter((identifier) => Char.ToLower(identifier.Name[0]) + identifier.Name.Substring(1));
        generator.ModelBuilder.Add(model);

        // Generate TS interface definitions, one file per class
        var tsClassDefinitions = generator.Generate(TsGeneratorOutput.Properties | TsGeneratorOutput.Fields);
        File.WriteAllText(Path.Combine(outputPath, "I" + model.FullName.Replace("ProjectName.DtoModels.", "") + ".ts"), tsClassDefinitions);
    }
}
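For the export keyword, since TypeLITE won't emit it out of the box (as noted above), one blunt workaround is to post-process the generated string before writing it. This is a naive text rewrite that assumes TypeLite's default "interface Foo {" / "enum Bar {" output shape, not a library feature:

private static string MakeExportable(string tsDefinitions)
{
    // Naive string rewrite: prefix each declaration with 'export'.
    // Assumes the default TypeLite output; adjust if your template differs.
    return tsDefinitions
        .Replace("interface ", "export interface ")
        .Replace("enum ", "export enum ");
}

Then pass the result to File.WriteAllText in the loop above, i.e. File.WriteAllText(..., MakeExportable(tsClassDefinitions)).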

Mirth: How to get source file directory from file reader channel

I have a file reader channel picking up an XML document. By default, a file reader channel populates the 'originalFilename' in the channel map, which only gives me the name of the file, not the full path. Is there any way to get the full path, without having to hard-code something?
You can get any of the Source reader properties like this:
var sourceFolder = Packages.com.mirth.connect.server.controllers.ChannelController.getInstance().getDeployedChannelById(channelId).getSourceConnector().getProperties().getProperty('host');
I posted it in the Mirth forums along with a list of the other properties you can access:
http://www.mirthcorp.com/community/forums/showthread.php?t=2210
You could put the directory in a channel deploy script:
globalChannelMap.put("pickupDirectory", "/Mirth/inbox");
then use that map in both your source connector:
${pickupDirectory}
and in another channel script:
function getFileLastModified(fileName) {
    var directory = globalChannelMap.get("pickupDirectory").toString();
    var fullPath = directory + "/" + fileName;
    var file = new Packages.java.io.File(fullPath); // 'new' is required to construct the Java object
    var formatter = new Packages.java.text.SimpleDateFormat("yyyyMMddhhmmss");
    formatter.setTimeZone(Packages.java.util.TimeZone.getTimeZone("UTC"));
    return formatter.format(file.lastModified());
}
Unfortunately, there is no variable or method for retrieving the file's full path. Of course, you probably already know the path, since you would have had to provide it in the Directory field. I experimented with using the preprocessor to store the path in a channel variable, but the Directory field is unable to reference variables. Thus, you're stuck having to hard code the full path everywhere you need it.