I'm collecting GitHub issue statistics over time on our project: the total number of issues, the number of issues with a particular label, and the number of issues in a given state (open/closed). Right now, I have a Python script that parses the project webpage for the desired label/state, e.g., http://github.com/<projectname>/issues?label=<label_of_interest>&state=<state_of_interest>
However, parsing the HTML is fragile: if GitHub changes the page layout, more often than not my code breaks.
Can someone describe how to use the GitHub API (or, barring that, some other way, preferably in Python) to collect these statistics without relying on the underlying HTML?
May I be so forward as to suggest that you use my wrapper around the GitHub API for this? With github3.py, you can do the following:
import github3

# Log in with your GitHub credentials
github = github3.login("braymp", "braymp's super secret password")
repo = github.repository("owner", "reponame")

# iter_issues() yields open issues by default; pass state='closed' for the rest
open_issues = [i for i in repo.iter_issues()]
closed_issues = [i for i in repo.iter_issues(state='closed')]
A call to refresh may be necessary, because I honestly don't recall whether GitHub sends all of the issue information with an iteration like that (e.g., make the body of the list comprehensions above i.refresh() for i in <generator>).
With those, you can iterate over the two lists and use the labels attribute to figure out which labels each issue has. If you decide to merge the two lists, you can always check an issue's state with the is_closed method.
I suspect the actual statistics you can do yourself. :)
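For instance, a rough sketch of such a tally might look like this (assuming, as described above, that each issue has a labels attribute whose entries have a name, plus the is_closed method):

from collections import Counter

all_issues = open_issues + closed_issues
total_issues = len(all_issues)

# Issues per state, via is_closed()
state_counts = Counter('closed' if issue.is_closed() else 'open' for issue in all_issues)

# Issues per label, via the labels attribute on each issue
label_counts = Counter(label.name for issue in all_issues for label in issue.labels)

print(total_issues, state_counts, label_counts)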
The documentation for github3.py can be found on ReadTheDocs and you'll be particularly interested in Issue and Repository objects.
You can also ask further questions about github3.py by adding the tag for it in your StackOverflow question.
Cheers!
I'd take a look at Octokit, which doesn't support Python currently but does provide a supported interface to the GitHub API for Ruby.
https://github.com/blog/1517-introducing-octokit
Although this doesn't fully meet your specifications (the "preferably Python" part), Octokit is a fantastic (and official - it's developed by GitHub) way of interacting with the GitHub API. You wrote you'd like to get Issues data. It's as easy as installing, requiring the library, and getting the data (no need for authentication if the project is public).
Install:
gem install octokit
Add this to your Ruby file to require the Octokit library:
require 'octokit'
Although there are a lot of things you can get from Octokit::Client::Issues, you may want to get a paginated list of all the issues in a repository:
Octokit.list_issues('octokit/octokit.rb')
# => [Array<Sawyer::Resource>] A list of issues for a repository.
If you're really keen on using Python, you might want to have a look at the GitHub API docs for Issues. Really, it's as easy as fetching a URL like https://api.github.com/repos/octokit/octokit.rb/issues and parsing the JSON data (although I'm not familiar with Python, I'm sure there's a JSON parsing library); no need for authentication for public repos.
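To make that concrete, here is a minimal Python sketch using only the standard library (the label value is just an example, and note that the API paginates results, roughly 30 issues per page by default, so for complete counts you would follow the pagination links):

import json
import urllib.request

# Open issues with a given label on a public repository (no authentication needed)
url = "https://api.github.com/repos/octokit/octokit.rb/issues?state=open&labels=bug"
with urllib.request.urlopen(url) as response:
    issues = json.load(response)

print(len(issues))  # number of issues on this page
for issue in issues:
    print(issue["number"], issue["state"], [label["name"] for label in issue["labels"]])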
I've checked out this question here on Stack Overflow, but following what's outlined there, I'm still getting "not found" on pages that I know have releases.
For example, if I want to download the latest release from something like https://github.com/storybookjs/storybook/releases, for a number of repositories dynamically, why do I get a 404 when I use https://github.com/storybookjs/storybook/releases/latest/download/package.zip?
What am I missing here?
Thanks in advance!
It might be easier to use GitHub's API for that, as documented for releases. The URL format is
https://api.github.com/repos/{owner}/{repo}/releases
For example, https://api.github.com/repos/symfony/symfony/releases lists all releases for a PHP package. The JSON node zipball_url contains a link to a ZIP package for each release.
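For instance, a rough Python sketch of pulling that JSON and reading the download URLs (the repository is just the example from above; double-check the field handling against the current API docs):

import json
import urllib.request

url = "https://api.github.com/repos/symfony/symfony/releases"
with urllib.request.urlopen(url) as response:
    releases = json.load(response)

for release in releases:
    # zipball_url points at a ZIP of the source for that release
    print(release["tag_name"], release["zipball_url"])
    # Uploaded release assets (e.g. a package.zip) have their own download URLs
    for asset in release.get("assets", []):
        print("  ", asset["name"], asset["browser_download_url"])

There is also a dedicated https://api.github.com/repos/{owner}/{repo}/releases/latest endpoint if you only care about the most recent release.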
I want to be able to talk with Google Assistant, but connect the Actions project directly to an NLP service I already have running on my server. In other words, NOT use dialogflow.
All the following examples show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use the actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the talker.
This is what I need. I don’t want to create a load of intents, with utterance phrases, inside the Action because I just want all the phrases spoken by the talker to be passed to my server, and for my NLP service to deal with them.
So I set up a new Actions project, installed the Actions CLI, and then spent three days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3 and Google has now moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so have the file formats and structure.
It appears there is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here
Publishing Actions on google without Dialogflow
is pre version update and follows the same pattern.
Can anyone point to an up-to-date, v3.1.0, discussion, tutorial or example about how to send all talker phrases through to an NLP that isn’t dialogflow, or has Google closed that avenue?
Is it possible to somehow go back and use the 2.1 CLI, either with the new Console or by reverting the console? (I have both CLI versions; I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2. You probably also don't want to do so - newer features aren't available with v2 and are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
In the console, it looks something like this:
Create a Custom Intent that has a single parameter of this Any Type and at least one phrase that captures everything for this parameter. (So you should add one training phrase, highlight the entire phrase, and set it for the parameter. Sometimes I also add additional phrases that includes words that I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
In the console, it could be something like this:
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will be called with the "any" parameter set to the user's utterance. (Note that the webhook JSON format has also changed.)
Again, the console might have it looking something like this:
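As a rough illustration, a Python/Flask webhook could look something like the sketch below. The route name and my_nlp_service are placeholders, and the JSON field names are from memory of the Actions Builder webhook format, so verify them against the current reference:

from flask import Flask, jsonify, request

app = Flask(__name__)

def my_nlp_service(text):
    # Placeholder for the call to your own NLP backend
    return "You said: " + text

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    body = request.get_json()
    # The raw utterance arrives as the intent parameter ("any" here), matching
    # the parameter name defined on the custom intent
    utterance = body["intent"]["params"]["any"]["resolved"]
    reply = my_nlp_service(utterance)
    # Minimal conversational response
    return jsonify({
        "session": {"id": body["session"]["id"]},
        "prompt": {"firstSimple": {"speech": reply, "text": reply}},
    })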
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure they're configured as you expect. You can shift back and forth between them as appropriate.)
Google also has a github repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
I was trying to use the REST API of TeamCity, but I can't find a list of all supported requests and the names of their parameters. I wanted to look it up in the official documentation (https://confluence.jetbrains.com/display/TCD10/REST+API),
where a link to exactly this list is provided
(http://teamcity:8111/app/rest/application.wadl),
but I just can't connect to it. It seems like the page is down.
I have googled all kinds of things in the hope of finding this list somewhere else, but I couldn't find anything useful.
Does anyone know where to find such a list, or can provide one? That would be fantastic.
Thanks
You can find the available API on the public TeamCity server:
https://teamcity.jetbrains.com/app/rest/swagger.json
You can navigate through the paths in the response and look at the supported methods and required parameters for each endpoint.
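For example, a quick Python sketch that lists the endpoints and their HTTP methods from that swagger.json (standard library only; treat it as illustrative):

import json
import urllib.request

url = "https://teamcity.jetbrains.com/app/rest/swagger.json"
with urllib.request.urlopen(url) as response:
    spec = json.load(response)

# Each key under "paths" is an endpoint; its children include the HTTP methods
for path, item in spec["paths"].items():
    methods = [m for m in ("get", "post", "put", "delete", "patch") if m in item]
    print(path, "->", ", ".join(m.upper() for m in methods))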
You can also access the .wadl on the public TeamCity server:
https://teamcity.jetbrains.com/app/rest/application.wadl
In the TeamCity documentation, teamcity:8111 stands in for your own TeamCity instance, installed on your server:
The URL examples on this page assume that your TeamCity server web UI is accessible via the http://teamcity:8111 URL.
If you download a copy of TeamCity, start the server, and then connect to /app/rest/application.wadl, you will get your own copy of that WADL description.
Summary: My simple website now successfully communicates with Google Spreadsheets, but the inconvenience of adding this Google Spreadsheets API is that deployments of my website (via deployhq.com) now take 50 minutes when they used to take 30 seconds!
Details:
I created a simple webpage using PHP that accepts parameters and then appends a new row of data to a Google Spreadsheet. Getting it working felt like a miracle because Google's documentation was so sparse and often outdated.
Following the example there and on Google's Github page, my composer.json file is:
{
    "require": {
        "google/apiclient": "^2.0"
    }
}
Can I somehow avoid requiring all of those Google dependencies for all of their PHP APIs?
I'd love not to download all of the irrelevant Google API code that has nothing to do with Google Spreadsheets.
I think the massive amount of files is what is causing my deployments to take 50 minutes instead of 30 seconds.
My super basic webpage pretty much just uses the Google_Service_Sheets class and related classes. I don't want anything extraneous.
If you download a release of the client library, it will include the core library and all of its dependencies without the auto-generated classes. You can then download the Sheets API generated classes separately and add them to your project. Using Composer is the preferred method, as it makes it easy to get updates later.
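If you do stay with Composer, newer versions of google/apiclient also document a cleanup task that strips out the generated service classes you don't need. A sketch of what that composer.json might look like (verify the task name and supported versions against the current google/apiclient README; "Sheets" here is the only service kept):

{
    "require": {
        "google/apiclient": "^2.0"
    },
    "scripts": {
        "pre-autoload-dump": "Google\\Task\\Composer::cleanup"
    },
    "extra": {
        "google/apiclient-services": [
            "Sheets"
        ]
    }
}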
P.S. - There are ~4200 generated files downloaded with the library. That's not trivial, but any process that takes 50 minutes to copy those over likely has room for improvement.
I need help on how to successfully create a RepositoryMergeRequestCheck.
As part of our merge workflow, we need to enforce some policies on certain files. The policies include, among others:
File naming conventions for individual files
File naming conventions across multiple files (for example, correlatively named files)
Inspection of files to enforce or disallow the usage of certain statements or functions
I want to be able to check these policies in a repository merge request check, so I'm building a plugin for Atlassian Stash.
I have tried the following approaches:
Using the RepositoryMergeRequestCheckContext parameter of RepositoryMergeRequestCheck.check()
Since the method signature is:
@Override
public void check(RepositoryMergeRequestCheckContext context)
The first thing I tried using was the context parameter. I can say context.getMergeRequest().getPullRequest().getFromRef().getRepository()
Now I get a Repository instance and I’m not sure how to extract commit info from it.
Calling Git directly: Since this check was originally developed as a git hook script, calling git from the SDK made sense to me. It led me to this situation:
String result = gitScm.getCommandBuilderFactory().builder().lsTree().build(…).call();
Where gitScm gets dependency-injected in the plugin's constructor.
Notice the build parameter? It expects a CommandOutputHandler<T>; in this case, T is String. But that's an interface, and I'm not sure how to get an instance that implements it or how to create one.
REST API
The REST API looks like the easiest of them, but it still doesn't help with the third requirement of inspecting a file's source code. Also, spawning web requests from the merge request check, which is itself a web request to Stash, doesn't seem like a good idea performance-wise.
What path should I follow or how can I do it?
I started writing you a response, and then realised that I'd already answered this on Atlassian Answers (which was what I was going to suggest as well).
https://answers.atlassian.com/questions/182943/enforcing-policies-from-repository-merge-request-check-plugin
Cheers,
Charles