Is there a REST API for Jenkins plugins?

I'm trying to write a script that quickly checks whether our Jenkins plugins are up to date. I know that this is a built-in feature in Jenkins, but for security reasons our Jenkins instance doesn't have internet access.
I know that I can get a lot of information about a plugin, including version, from:
https://plugins.jenkins.io/<name-of-plugin>
However, I can't get it to return anything other than HTML. I could scrape the HTML for the version number, but if there is a stable API that returns JSON or similar, that would be preferred. I'm pretty sure Jenkins isn't scraping HTML to check for updates, so the API must exist. Does anyone know where it is?

There seem to be two solutions available. I ended up scraping:
https://updates.jenkins.io/download/plugins/<name-of-plugin>
The latest version is always in the second column of the second row, so scraping is trivial. It works well most of the time, but sometimes the connection is refused, which I assume is due to the volume of requests the script sends.
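For what it's worth, a minimal Python sketch of that scraping approach (standard library only); the second-row/second-column layout is an assumption based on the page as it renders today, and "git" is just an example plugin name:

import re
import urllib.request

def latest_version(plugin):
    # Scrape the plugin's download page for the newest version.
    url = f"https://updates.jenkins.io/download/plugins/{plugin}"
    html = urllib.request.urlopen(url).read().decode("utf-8")
    rows = re.findall(r"<tr.*?</tr>", html, re.DOTALL)
    cells = re.findall(r"<td[^>]*>(.*?)</td>", rows[1], re.DOTALL)
    # Second row, second column; strip any markup around the number.
    return re.sub(r"<[^>]+>", "", cells[1]).strip()

print(latest_version("git"))

Pausing between requests (time.sleep) may also help with the refused connections.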
Another option that I found is to download the following JSON file:
https://updates.jenkins.io/current/update-center.actual.json
It is currently 1.7 MB and contains information about the latest version of every Jenkins plugin. It also contains metadata such as dependencies, which lets a script validate that all dependencies are satisfied.
Unfortunately I haven't found a way to download JSON for individual plugins, so you either have to scrape HTML for individual plugins or download a massive JSON for all plugins.
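That said, the big file makes the whole check a single request. A sketch, assuming the file keeps its current shape (a top-level "plugins" object mapping each plugin name to an entry with "version" and "dependencies"); the installed dict is placeholder data:

import json
import urllib.request

URL = "https://updates.jenkins.io/current/update-center.actual.json"
catalog = json.load(urllib.request.urlopen(URL))["plugins"]

# Placeholder data: substitute the plugins installed on your instance.
installed = {"git": "3.9.1", "credentials": "2.1.18"}

for name, version in installed.items():
    entry = catalog.get(name)
    if entry is None:
        print(f"{name}: not found in the update center")
        continue
    if entry["version"] != version:
        print(f"{name}: {version} -> {entry['version']}")
    # Flag mandatory dependencies that are not installed.
    for dep in entry.get("dependencies", []):
        if not dep.get("optional") and dep["name"] not in installed:
            print(f"{name}: missing dependency {dep['name']}")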
Update: I found the API:
https://plugins.jenkins.io/api/plugin/<name-of-plugin>
And I also found the source code and the documentation:
https://github.com/jenkins-infra/plugin-site-api
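With that endpoint the per-plugin check becomes one JSON request. A short sketch; the "version" field name is an assumption about the payload, so verify it against the documentation above:

import json
import urllib.request

def plugin_info(name):
    url = f"https://plugins.jenkins.io/api/plugin/{name}"
    return json.load(urllib.request.urlopen(url))

print(plugin_info("git")["version"])  # "git" is just an example plugin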

Related

Netlify.toml vs netlify.yaml

I am a little confused by the new configuration file netlify.yaml.
I imagined it would be a drop-in replacement for netlify.toml, but without the toml file I get the following error:
No netlify.toml found. This is needed to configure the function settings. For more info: https://github.com/netlify/netlify-lambda#installation
When both of them are present I have
failed during stage 'Reading and parsing configuration files': Multiple potential Netlify configuration files in "/opt/build/repo": netlify.toml, netlify.yaml
I would like to access the “plugins” functionality and I am not certain if it exists on the toml version of the configuration as this doesn’t seem to trigger anything:
[[plugins]]
type = "./.netlify/plugins/xxx"
What would you recommend as the best course of action?
Here is the response I have received on the community forum (https://community.netlify.com/t/netlify-toml-vs-netlify-yaml/6482/2):
We haven’t quite finished implementing the json and yml support, but these are the docs: https://docs.netlify.com/configure-builds/file-based-configuration/#json-and-yaml-configuration-files .
It is definitely not implemented in the private beta for build plugins yet, so you’ll need to stick with toml as the docs advise.

How to make Buildbot nine host an HTML resource?

I used to use Buildbot eight, where I was able to access HTML artifacts generated by tests just by using the URL:
<server>:<port>/path_to_resource,
where the path to the resource was under <prefix>/master/public_html.
I can't access it in Buildbot 0.9.10, as I get "resource not found".
Is there an option that would allow me to access my HTML files from a browser?
This feature was removed in 0.9.0.
An open issue asks for a plugin implementing this feature; it currently has no assignee.
I have not found any other solution. I would be very interested if one were found.

Accessing AEM 6.2 error logs over HTTP

In previous versions of AEM, certainly in CQ 5.6 and AEM 6.0, it was possible to tail the error logs over HTTP, without connecting to the server over SSH.
For example, I could get the last 1000 lines from the error log of my AEM author instance by calling:
http://localhost:4502/bin/crxde/logs?tail=1000
This no longer seems to be possible in AEM 6.2; the path does not resolve to anything.
Is there another way I could still tail the log over HTTP?
A colleague answered this question for me on a chat so I'm putting it here to make it easier to find in the future.
There's now a neat utility in the OSGi console that allows one to view the logs as well as configure the various loggers. You can find it at http://localhost:4502/system/console/slinglog
The Appender tab provides links to the various log files that can be used to load logs over HTTP.
Here's an example request it makes:
http://localhost:4502/system/console/slinglog/tailer.txt?tail=1000&name=%2Flogs%2Ferror.log
As you can see, both the log file name and the tail parameter can be specified. You can also use grep with both simple phrases and regular expressions.
This is a built-in feature of Apache Sling.
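If you would rather poll the tailer from a script than from curl, here is a small Python sketch of the same request; admin:admin and port 4502 are the stock author-instance defaults used in the examples above:

import base64
import urllib.parse
import urllib.request

base = "http://localhost:4502/system/console/slinglog/tailer.txt"
params = urllib.parse.urlencode({"tail": "1000", "name": "/logs/error.log"})
req = urllib.request.Request(f"{base}?{params}")
# Send credentials preemptively, like curl -u admin:admin does.
token = base64.b64encode(b"admin:admin").decode("ascii")
req.add_header("Authorization", f"Basic {token}")
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))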
In addition, FYI, there is also status-slinglogs, where you can download the log files as a zip and the logger configuration as a txt, at /system/console/status-slinglogs:
http://localhost:4502/system/console/status-slinglogs
The direct URLs for downloading these zip files are:
http://localhost:4502/system/console/status-slinglogs.zip
http://localhost:4502/system/console/status-slinglogs/configuration-status-20170126-183246.zip (where 20170126-183246 is a timestamp)
You should not be looking at log files via CRXDE Lite. Log files in 6.2 are project specific; it is better to open them in a text editor.
Hope this helps!
Regards,
Prince
You can curl the log with e.g.:
curl -u admin:admin 'http://localhost:4502/system/console/slinglog/tailer.txt?tail=4000&name=%2Flogs%2Ferror.log'
where 4000 is the number of lines you want to get.
I recently wrote a tool named "Log Tailer Plus" to solve exactly this problem. It's entirely free and open source. Take a look at a post describing its usage here: https://blogs.perficientdigital.com/2019/05/14/introducing-aem-logtailerplus/
TL;DR: You can grab an AEM package from https://github.com/prftryan/LogTailerPlus, install it on your machine, and access it via http://localhost:4502/log-tailer-plus (if local) or http://server:port/log-tailer-plus.
This tool allows you to follow any number of logs at once by leveraging the out-of-the-box logging endpoint (/system/console/tailer) and by dynamically checking the active OSGi Logging Logger configurations. Highlighting is currently supported, but only for relatively standard logging patterns (it's done via regex).
This is a new release and works on AEM 6.2+. Enjoy!

How can I overwrite an unmanaged solution when deploying a CRM package?

Using a PowerShell script to deploy a CRM package works well, but I am running into some unexpected behavior.
The package has one unmanaged solution that it uploads. It works perfectly if the solution does not exist in the target CRM organization. However, if the solution already exists in the organization and I try to deploy it again with some changes, it does not work: the changes are not uploaded and I do not get any errors.
If I change the version number in the solution (from 0.0.1 to 0.0.2, for example) then uploading it works as expected.
I would rather not change the version every time though, and since manually uploading an unmanaged solution with the same version number works perfectly I would expect the script to be able to do it as well.
I tried using the CRM Package Deployer method of importing a package to see if it would work as I expect or if it would show any error messages.
Its messages show:
Skipping solution MySolution. Version 0.0.2 of the solution is already loaded.
So it appears that if a solution with the same name and version number exists in the organization then it will be skipped entirely. This is sort of unfortunate.
It seems I'll have to implement a workaround. I see two options (a sketch of the second follows below):
Have the DeployPackage script delete the solution in the target CRM organization (if it exists) before attempting the upload.
Have my ExportSolution script change the version number every time it runs.
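For the second option, here is a hedged Python sketch that bumps the version inside an exported solution zip. It assumes the export stores the version in a <Version> element in solution.xml; verify that against your own export before relying on it:

import re
import shutil
import zipfile

def bump_solution_version(zip_path):
    # Rewrite the zip, incrementing the last component of <Version>.
    tmp = zip_path + ".tmp"
    with zipfile.ZipFile(zip_path) as src, \
         zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.namelist():
            data = src.read(item)
            if item == "solution.xml":
                text = data.decode("utf-8")
                match = re.search(r"<Version>([\d.]+)</Version>", text)
                parts = match.group(1).split(".")
                parts[-1] = str(int(parts[-1]) + 1)  # e.g. 0.0.1 -> 0.0.2
                new = f"<Version>{'.'.join(parts)}</Version>"
                data = text.replace(match.group(0), new).encode("utf-8")
            dst.writestr(item, data)
    shutil.move(tmp, zip_path)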

Packages not appearing in ProGet feed

I am using ProGet to upload packages, manually uploading them from disk, but when I check whether the package exists in the feed, it isn't there. When I log on to the server hosting ProGet and go to the PackagesRootPath, I can see the package is indeed on the server!
Any ideas why it's not showing up in the feed?
P.S. I have restarted the website/application pool and the ProGet service, and it still doesn't work.
If you're not seeing any packages in the web application (and you've verified that they are, in fact, in the right place on disk), this means that the packages aren't getting indexed by the ProGet Service.
Since you've already restarted the ProGet web service, it's likely a problem with the individual package.
Check to see if there are "indexing errors" in the admin section; this will give some insight into what the problem might be. Oftentimes the file name does not match the package name/version; this is a requirement. If your package is named MyFoo and is version 3.0.1, it must be MyFoo.3.0.1.nupkg and have an appropriately named MyFoo.nuspec within it.
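Since that naming rule is easy to get wrong, here is a small Python sketch that derives the expected file name from the nuspec inside a package and compares the two; the namespace-agnostic tag matching is a deliberate simplification:

import os
import xml.etree.ElementTree as ET
import zipfile

def expected_name(nupkg_path):
    # Read id/version from the embedded nuspec and rebuild the file name.
    with zipfile.ZipFile(nupkg_path) as zf:
        nuspec = next(n for n in zf.namelist() if n.endswith(".nuspec"))
        root = ET.fromstring(zf.read(nuspec))
    values = {}
    for el in root.iter():
        tag = el.tag.rsplit("}", 1)[-1]  # ignore the XML namespace
        if tag in ("id", "version") and el.text:
            values[tag] = el.text.strip()
    return f"{values['id']}.{values['version']}.nupkg"

path = "MyFoo.3.0.1.nupkg"  # example from the answer above
if os.path.basename(path) != expected_name(path):
    print("file name does not match the nuspec id/version")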
If there are no errors logged, then you can try to run the service interactively. Simply stop the Windows service, then run the .exe file and select the appropriate option to run.
Another option to verify that the indexing is working OK is to pull a package from a remote connector (like jQuery or something), then drop that package in another feed (one that doesn't use a connector).