Bintray support for Eclipse p2 repositories - eclipse

This is probably a recurring issue, but I could not find a reliable way to publish Eclipse p2 repositories on Bintray.
Manually creating a repo/product/version and populating it with files was partly OK but, for production environments, a reliable, scriptable solution is needed.
Purpose
Deploy Eclipse p2 repositories to Bintray.
What are Eclipse p2 repositories?
(Sorry, Eclipse folks, but for the Bintray support people we had better define what we are talking about.)
An Eclipse p2 repository is a folder which must be published at a single URL that is stable and does not change, even if multiple versions are published over time.
An Eclipse p2 repository folder, generated with the latest versions of the Tycho Maven plugins, has the following structure:
5 files in root (p2.index, artifacts.jar, artifacts.xml.xz, content.jar, content.xml.xz)
2 sub-folders, plugins and features, each with multiple .jar files, with version specific names, like ilg.gnuarmeclipse.core_3.3.1.201702251311.jar
For example:
artifacts.jar
artifacts.xml.xz
content.jar
content.xml.xz
features
  ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar
  ...
p2.index
plugins
  ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar
  ...
The exact p2 repository folders that I want to deploy are:
https://sourceforge.net/projects/gnuarmeclipse/files/Eclipse/updates/
https://sourceforge.net/projects/gnuarmeclipse/files/Eclipse/updates-test/
(both part of the GNU ARM Eclipse project).
The actual URLs that must be configured in Eclipse to access these two p2 repositories are:
http://gnuarmeclipse.sourceforge.net/updates
http://gnuarmeclipse.sourceforge.net/updates-test
Access to these p2 repositories is actually a sequence of accesses to files directly beneath these URLs, like:
$ curl -L http://gnuarmeclipse.sourceforge.net/updates/p2.index
#Sat Feb 25 15:11:37 EET 2017 version=1
metadata.repository.factory.order=content.xml.xz,content.xml,\!
artifact.repository.factory.order=artifacts.xml.xz,artifacts.xml,\!
$
Use of version specific sub-folders
Eclipse p2 repositories do not have version-specific sub-folders; the two sub-folders used (plugins and features) have the same names in every version, and it is not possible to access version-specific sub-folders.
As a consequence, deploying multiple versions should not create version-specific sub-folders, since their content would be ignored.
Use of version specific URLs
Eclipse plug-ins have a single URL configured inside them that is used to automatically get new updates. This is the URL of a p2 repository, and it cannot be changed to point to version-specific URLs, so, for updates to work, the p2 repository must keep a unique, stable URL.
Eclipse p2 repository lifecycle
The lifecycle of an Eclipse p2 repository should allow a new version to completely replace the previous one, i.e. the top 5 files and the two sub-folders should all be part of a single version; if, for any reason, publishing fails, the previous version should remain visible, unchanged
once a version is released, the files associated with it never change, so it is not necessary to allow a given file to be replaced by a file with the same name but different content
however, the top files and folders have the same names in all releases, and the server should allow uploading them without complaining that the name was already uploaded by a previous version
the moments in time when new versions are released are not known in advance; there might be releases every month, but there might also be releases more than 180 days apart
Publish to the product/version URL
The first attempt was to upload all files to the product/version URL, using the following curl command (part of a bash function):
curl \
--request PUT \
--upload-file "${file_absolute_path}" \
--user ${BINTRAY_USER}:${BINTRAY_API_KEY} \
"${API}/content/${BINTRAY_OWNER}/${repo}/${package}/${version}/${file_relative_path}?publish=1?override=1?explode=0"
The upload was successful:
Processing artifacts.jar file...
{"message":"success"}
Processing artifacts.xml.xz file...
{"message":"success"}
Processing content.jar file...
{"message":"success"}
Processing content.xml.xz file...
{"message":"success"}
Processing p2.index file...
{"message":"success"}
Processing feature: features/ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar file...
{"message":"success"}
Processing plugin: plugins/ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar file...
{"message":"success"}
but, although all files were uploaded identically, some of them were stored in the repo folder instead of the expected product/version folder:
artifacts.xml.xz
content.xml.xz
features
  ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar
pack3
  3.2.1-201701141320
    artifacts.jar
    content.jar
    p2.index
plugins
  ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar
Please note that, although I did not explicitly set the list_in_downloads property on any of the files, some of the files uploaded to product/version were moved to the parent repo folder.
As can be seen, the *.xz files and the features and plugins folders were promoted to the repo folder, while the *.jar files and the p2.index file were not promoted and remained under the package/version path.
A repository created with this procedure is:
https://dl.bintray.com/ilg-ul/repo3/
Publish to the product/version URL with different parameter-passing methods
As documented, there are three methods of passing the upload parameters (query parameters, HTTP headers and matrix parameters). The previous test used the first; in two more tests, I tried the other two, with the following upload code:
curl \
--request PUT \
--upload-file "${file_absolute_path}" \
--user ${BINTRAY_USER}:${BINTRAY_API_KEY} \
--header "X-Bintray-Package: ${package}" \
--header "X-Bintray-Version: ${version}" \
--header "X-Bintray-Publish: 1" \
--header "X-Bintray-Override: 1" \
--header "X-Bintray-Explode: 0" \
"${API}/content/${BINTRAY_OWNER}/${repo}/${file_relative_path}"
and separately with
curl \
--request PUT \
--upload-file "${file_absolute_path}" \
--user ${BINTRAY_USER}:${BINTRAY_API_KEY} \
"${API}/content/${BINTRAY_OWNER}/${repo}/${file_relative_path};bt_package=${package};bt_version=${version};publish=1;override=1;explode=0"
Both behaved better than the previous test: the upload of the first version was successful and the folder structure was preserved:
artifacts.jar
artifacts.xml.xz
content.jar
content.xml.xz
features
  ilg.gnuarmeclipse.codered_1.1.1.201701141320.jar
p2.index
plugins
  ilg.gnuarmeclipse.codered_1.1.1.201701141320.jar
but, when uploading the second version, most files were OK, except that uploading artifacts.xml.xz and content.xml.xz failed:
Upload 'artifacts.jar' to '/repo6/pack6/3.3.1-201702251311/'
{"message":"success"}
Upload 'artifacts.xml.xz' to '/repo6/pack6/3.3.1-201702251311/'
{"message":"Unable to upload files: An artifact with the path 'artifacts.xml.xz' already exists under another version"}
Upload 'content.jar' to '/repo6/pack6/3.3.1-201702251311/'
{"message":"success"}
Upload 'content.xml.xz' to '/repo6/pack6/3.3.1-201702251311/'
{"message":"Unable to upload files: An artifact with the path 'content.xml.xz' already exists under another version"}
Upload 'p2.index' to '/repo6/pack6/3.3.1-201702251311/'
{"message":"success"}
...
Please note that, as far as I can tell, there is nothing special about these files.
A repository created using this procedure is:
https://dl.bintray.com/ilg-ul/repo6/
Although it looks like a valid p2 repository, it isn't, since most files are from the second version, but artifacts.xml.xz and content.xml.xz are from the first version, so the repository is not consistent.
Publish to the repo URL
Although not officially mentioned in the Bintray documentation, some people suggested trying to upload to a shorter path, corresponding to the root, or repo, URL.
I did, using the following code:
curl \
--request PUT \
--upload-file "${file_absolute_path}" \
--user ${BINTRAY_USER}:${BINTRAY_API_KEY} \
"${API}/content/${BINTRAY_OWNER}/${repo}/${file_relative_path}?publish=1?override=1"
but in this case I got errors for most of the files:
Processing artifacts.jar file...
{"message":"success"}
Processing artifacts.xml.xz file...
{"message":"Invalid file path and name"}
Processing content.jar file...
{"message":"success"}
Processing content.xml.xz file...
{"message":"Invalid file path and name"}
Processing p2.index file...
{"message":"success"}
Processing feature: features/ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar file...
{"message":"Invalid file path and name"}
Processing plugin: plugins/ilg.gnuarmeclipse.codered_1.1.1.201702231729.jar file...
{"message":"Invalid file path and name"}
It looks like the upload mechanism is picky: it accepts uploading some files (like artifacts.jar, content.jar, and p2.index) to the repo URL, but fails for all the others.
A repository created with this procedure is:
https://dl.bintray.com/ilg-ul/repo1/
Publish both to repo and product/version URLs
I also tried to selectively upload some files to the repo URL and the rest (artifacts.xml.xz, content.xml.xz and the features/plugins folders) to the product/version URL; this created a correct p2 repository, but when I tried to repeat the process for another version, I got errors:
Processing artifacts.jar file...
{"message":"success"}
Processing artifacts.xml.xz file...
{"message":"Unable to upload files: An artifact with the path 'artifacts.xml.xz' already exists under another version"}
Processing content.jar file...
{"message":"success"}
Processing content.xml.xz file...
{"message":"Unable to upload files: An artifact with the path 'content.xml.xz' already exists under another version"}
Processing p2.index file...
{"message":"success"}
The override flag
Please note that the override flag was set in all tests.
The publish flag
Please note that the publish flag was also set in all tests, although this is not the desired behaviour.
To keep repositories consistent, the desired behaviour is to upload all files without the publish flag and to publish only at the end, if all files were uploaded correctly; if an error occurs and the publish command is never issued, the files published for the previous version are expected to remain accessible.
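A sketch of that approach, reusing the matrix-parameter upload form from above, could look as follows; ${repo_folder} is a hypothetical variable pointing to the local p2 folder, and the final call is the 'publish version' request documented in the Bintray REST API:
# upload every file without publishing it
ok=1
for f in $(cd "${repo_folder}" && find . -type f | sed 's|^\./||'); do
  curl --fail \
  --request PUT \
  --upload-file "${repo_folder}/${f}" \
  --user ${BINTRAY_USER}:${BINTRAY_API_KEY} \
  "${API}/content/${BINTRAY_OWNER}/${repo}/${f};bt_package=${package};bt_version=${version};publish=0;override=1" \
  || ok=0
done
# publish the version only if all uploads succeeded; if anything failed,
# the previously published version remains untouched
if [ "${ok}" = "1" ]; then
  curl --request POST \
  --user ${BINTRAY_USER}:${BINTRAY_API_KEY} \
  "${API}/content/${BINTRAY_OWNER}/${repo}/${package}/${version}/publish"
fi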
The complete test script
The complete bash script used for these tests (and a few more) is available from GitHub gists:
https://gist.github.com/ilg-ul/568a6806d5e97fcc1384d7acda4ffe36
To download this script, use the following commands:
mkdir -p "${HOME}/Downloads"
curl -L https://gist.github.com/ilg-ul/568a6806d5e97fcc1384d7acda4ffe36/raw/2df98f4899862f1d7e65f1601ccdbd320dce9021/bintray-test.sh -o "${HOME}/Downloads/bintray-test.sh"
This script expects the following variables to be set in the environment:
export BINTRAY_USER=<user>
export BINTRAY_API_KEY=<auth>
export BINTRAY_OWNER=${BINTRAY_USER}
To run the script, enter:
bash "${HOME}/Downloads/bintray-test.sh"
Problems identified
Refusal of the server to upload artifacts.xml.xz and content.xml.xz
Considering that publishing to the product/version URL with different parameter-passing methods (repo6) was the most advanced test, the only problem identified was the refusal of the server to upload artifacts.xml.xz and content.xml.xz.
Creating intermediate folders and storing content
Passing the package and version as part of the URL (repo3) produced the most bizarre results, with additional folders:
pack3
  3.2.1-201701141320
    artifacts.jar
    content.jar
    p2.index
All other files were uploaded correctly, but these three files were processed in a special (and I would say erroneous) manner.
Attempts to publish to the repo URL fail for most files
If this is not a legal way of publishing to Bintray, please ignore this section, but attempts to publish to the repo URL were successful only for the following 3 files (artifacts.jar, content.jar, and p2.index) and failed for all others.
Conclusion
As a conclusion, based on the existing documentation, I could not find a reliable method to publish ordinary Eclipse p2 repositories to Bintray.
I saw several proposals with curious solutions for posting composite p2 repositories, but this is not my case: I have two common repositories, which do not need any versioning (http://gnuarmeclipse.sourceforge.net/updates and http://gnuarmeclipse.sourceforge.net/updates-test), and I would like to publish them on Bintray.
Suggestions for Bintray
Remove special processing for some files in Generic repositories
As proved by these tests, Bintray generic repositories are not that generic, since they do not process all files equally, as one would expect; it looks like attempts were made to support Eclipse p2 repositories, and the server upload code was patched to process some Eclipse files differently, but the result is not fully functional and is very confusing.
Add explicit support for Eclipse p2 repositories
Instead of making unfortunate patches to the Generic repository type, it would be great if Bintray supported a new repository type, "Eclipse p2", with no products or versions, where each publish would be allowed to remove all existing files and add the new ones.
This is equivalent to being allowed to publish to the repo folder, and to remove and re-upload files later, at any time.
If getting rid of the versioning mechanism is not possible, it would be acceptable to publish to the version folder and have the files from the latest version automatically made visible in the product folder, as in repo6, provided that all files are accepted, including artifacts.xml.xz and content.xml.xz.
2017-03-31 Update
After countless messages exchanged with Bintray support, they finally understood the problem and provided a fix.
Running the script is now functional for tests 4, 5 and 6, which are basically identical, except for small variations in how the information is passed to Bintray.
The results of the tests are:
Upload to the repo root is not functional
Direct upload, specifying package and version as a path prefix to the file's target path, is not functional
Uploading while specifying package and version using HTTP headers is functional
Uploading while specifying package and version using HTTP matrix parameters is functional
In conclusion, do not try to upload to the root URL; use HTTP headers or HTTP matrix parameters instead.
2017-07-31 Update
I have been hosting the update site on Bintray for a few months already, and things seem OK: https://bintray.com/gnu-mcu-eclipse.
The actual script used to publish is: https://github.com/gnu-mcu-eclipse/eclipse-plugins/blob/develop/scripts/publish-updates.sh
The public URLs used for the update site look like: https://dl.bintray.com/gnu-mcu-eclipse/updates.
Actually there are multiple Bintray repositories, for the different 'stages' of the project (https://bintray.com/gnu-mcu-eclipse); below each of them there is a single Bintray package (I called it p2), and below this there are multiple Bintray versions (https://bintray.com/gnu-mcu-eclipse/v4-neon-updates-experimental/p2).

Last year I was struggling with exactly the same problem
when trying to upload a simple Eclipse p2 repository to Bintray via curl.
Inspired by an article by Lorenzo Bettini,
I found a solution.
The key is to use path and matrix parameters in the URL, for example like this:
curl -X PUT -T $F -u $BINTRAY_USER:$BINTRAY_API_KEY "https://api.bintray.com/content/$BT_OWNER/$BT_REPO/$BT_PACKAGE/$BT_VERSION/$F;bt_package=$BT_PACKAGE;bt_version=$BT_VERSION;publish=1"
Feel free to look at my shell script deployToBintray.sh.

I am hosting my Eclipse project on Bintray and also had some problems, but was able to fix them:
Preparation
created a generic repository - e.g. "myproject"
created one package inside - e.g. "update-site"
created one version - e.g. "current"
For my project I wanted to use only one URL for the update site, which will not change, and simply put all new content into the same Bintray version ("current"); site.xml, content.jar etc. will be overridden.
How to upload with override
The "new" REST API has the ability to override files:
PUT /content/:subject/:repo/:file_path;bt_package=:package;bt_version=:version[;publish=0/1][;override=0/1][;explode=0/1]
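For example, with curl this could look like the following (the owner and credential variables are placeholders; the repository, package and version names are the ones created above):
curl -X PUT -T content.jar \
-u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
"https://api.bintray.com/content/${BINTRAY_OWNER}/myproject/content.jar;bt_package=update-site;bt_version=current;publish=1;override=1"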
I am using an adapted version of https://github.com/vogellacompany/bintray-publish-p2-updatesite/blob/master/pushToBintray.sh to upload my files.
The 180-day override problem and how to fix it
When you look at https://bintray.com/docs/api/index.html#_content_uploading_publishing
there is the following passage:
Optionally publishing the uploaded artifact(s) as part of uploading
(off by default). Additional content can be uploaded to a published
version within 180 days from its publish date.
And I did run into this problem: after 180 days I was not able to upload any longer...
But there is an easy way to solve it:
Edit your version
https://bintray.com/$userName/myProject/update-site/current/edit?tab=general
and just click on the "Update Version" button.
After doing this I was able to upload again, so it seems this resets the 180 days. I have been hosting my update site for nearly one year now and have no problems with updating etc.

Related

GitHub releases with generated files

I have a GitHub repo and would like to use GitHub Actions to create a release with a generated file included:
push a commit with a tag
the GitHub Action starts
it runs yarn run build (generates dist/index.js)
release is created that includes the dist folder
So far, I have not been able to do this. I've been able to use the "marvinpinto/action-automatic-releases#latest" action to package the dist folder as an additional asset, but that's not what I need.
I want the Source code (tar.gz) in the GitHub release to contain the dist folder.
What I'm trying to do is use this generated asset as a yarn dependency, which works if I use the Source Code (tar.gz) but not if I use the additional generated asset.
The entries labeled “Source code (tar.gz)” and “Source code (zip)” in your screenshots are autogenerated from the contents of the repository and contain only and exactly what's in the repository. They can't be modified in any way because they aren't persisted: they may be regenerated in the future. That's also why they may change (so, for example, the hash of the contents need not be stable).
If you want to include additional dependencies in your tarball, simply upload your own source release that contains the generated files. Many projects do this for various reasons, and you can do it, too.
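One way to do that (just a sketch, using the GitHub CLI; the tag and archive names are only examples, and the same commands also work in a workflow step) is to build, pack the working tree including dist, and attach the resulting tarball to the release:
# build the generated files (dist/index.js)
yarn run build
# pack the sources together with the generated dist folder
tar --exclude-vcs -czf myproject-1.2.3.tar.gz .
# create the release for the pushed tag and attach the tarball
gh release create v1.2.3 myproject-1.2.3.tar.gz --title "v1.2.3" --notes "includes dist/"
The uploaded asset then has a stable download URL that yarn can be pointed at, although the layout inside the tarball (where package.json ends up) may need adjusting.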

GitHub - Download Did Not Fully Include All Folders/Files

I am new to GitHub. I downloaded PHPWord (link below) from GitHub, but somehow the folders samples, tests and docs are missing. It only has the folder "src/PhpWord".
https://github.com/PHPOffice/PHPWord
I also used composer require phpoffice/phpword and the result is the same (missing folders).
Am I doing something wrong, or is there another way to download which will include the other folders (samples, tests, docs)?
Thanks in advance,
The reason the other folders are missing may be that you downloaded the Zip file instead of cloning the repository to your PC, which downloads everything, including the source code.
The Zip download only contains one folder of the program. I suspect this is the finished product rather than the development tree you are seeing in the GitHub repository. Usually the author of the program mentions this in the instructions, but I didn't see it when reading them.
You may also want to re-read the instructions in the README.md document in the repository. There are some requirements that need to be fulfilled before using composer to install the dependencies. That may account for the "missing folders" issue.
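If you really need the samples, tests and docs locally, cloning the repository (or telling Composer to install from source) should give you the full tree; for example:
# the full repository, including samples, tests and docs
git clone https://github.com/PHPOffice/PHPWord.git
# or ask Composer to install the package from source (a git clone) instead of the dist zip
composer require phpoffice/phpword --prefer-source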

How do I download source code of all the versions of a repository on Github using some automated script?

I have details of few public repositories on GitHub. Is there a way to write a script which downloads the source code of all those repositories on to my local machine? While downloading the source code I want all the previous versions of project to be downloaded.
Ex: Project RxJava has about 124 releases as shown here. I want to know if there is a way to write a program which downloads source code of all these 124 releases on to my machine. I don't want to click on download source code button on each of these releases.
The easiest is to:
clone the repo (that will give you the sources matching each tag)
do a git tag and, for each tag, curl the release.
Actually, since the release is the source code, you don't have to curl anything.
To access the source code of a "release", simply checkout the tag matching the release.
cd /path/to/cloned/repo
git checkout 1.0.8
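To script this for all releases at once, one can loop over the tags and export an archive per tag, for example (the output file names are arbitrary):
cd /path/to/cloned/repo
for tag in $(git tag); do
  # export the sources of this tag as a compressed archive
  git archive --format=tar.gz --prefix="${tag}/" -o "../rxjava-${tag}.tar.gz" "${tag}"
done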
This is how I figured out the solution:
1. Using the Repository Search API, get the details of the required projects.
2. This gives you a JSON object which has the property below:
"releases_url": "https://api.github.com/repos/ReactiveX/RxJava/releases",
3. Use the above URL to get a JSON object which describes the release details of the project.
4. The JSON obtained in step 3 has a property, as given below, for each version of the project:
"zipball_url": "https://api.github.com/repos/ReactiveX/RxJava/zipball/v1.0.8",
5. Now copy the content from the above URL into an output stream to fetch the required source code.
Sample source code is available here
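For completeness, a minimal curl/jq sketch of these steps (jq is assumed to be installed, and only the first page of releases is fetched, since the GitHub API paginates the results) could look like:
for url in $(curl -s "https://api.github.com/repos/ReactiveX/RxJava/releases?per_page=100" | jq -r '.[].zipball_url'); do
  version="${url##*/}"                     # e.g. v1.0.8
  curl -sL "${url}" -o "RxJava-${version}.zip"
done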
I know I am seven years late, yet I think this solution might help people with the same problem:
I developed a simple bash script, which you can find in this GitHub Gist, that allows you to download all versions of every file currently in the repository. The output data is placed in subfolders matching the names of the files, each containing all the versions of that file. The original directory tree of the repository is kept.
Hope someone finds this useful!

Files from Yeoman web-app that need to be committed in SCM/Git

When we do "yo webapp" (assuming the webapp generator is installed), it scaffolds a project which contains files relevant to bower and grunt, and then there is the app folder, which we all know what it is about.
My question is: out of this structure, which files need to be maintained in SCM? Should it be only the app directory, or the whole structure? (Assuming there are no additional grunt tasks or any build file changes from the earlier scaffolding.)
The Yeoman webapp generator will produce a .gitignore file which lists files that should not be committed to SCM. This file includes the following directories:
node_modules
dist
.tmp
.sass-cache
bower_components
test/bower_components
It is clear that .tmp and .sass-cache have no reason to be in the repo as they both are only temporary.
There is, however, a discussion about whether bower (and, more rarely, node) dependencies should be checked in. For most projects I recommend not to.
Please note that in either case one should never change packages directly in the bower_components or node_modules folder, as any change will be lost at the next bower install or npm install. A fork of the original project (either as an independent repo or as a folder in the project, e.g. lib) is a better idea; a follow-up pull request would then add a lot of karma :)
The dist folder with the build of the application may be committed, depending on your deployment method. There is a very good guide on deployment on the Yeoman site.
As a start, you should put everything into SCM with the exception of app/bower_components, test/bower_components and node_modules. All files under these directories come from public repositories, either the node or the bower repo.
In this setup, whenever another developer checks out from SCM, he needs to run 2 commands: npm install and bower install. What I typically do is create a file called install.sh (install.bat on Windows) and put these 2 commands inside this script file. In this way, when you find that you need to run more commands for initialization, you can easily add them to this script file, and new developers can just check out and run install.sh.
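For example, install.sh can be as small as the following sketch (only the two commands mentioned above):
#!/bin/sh
# one-time project setup after checking out from SCM
npm install
bower install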
In some cases, I found that I needed to perform a small modification to a public library. In this case, I check this library inside bower_components into SCM as well. This is not common, but it happens.

Should I include configure and makefile in a github repository?

We recently moved from subversion to git, and then to Github, for several open source projects. Github was nice in that it provided a lot of functionality. One of the things I particularly like is the ability to download tags as zip or .tar.gz files.
Unfortunately GitHub recently discontinued downloads. That shouldn't be a problem because of the ability to download tags. However, in the past we have not put a Makefile, configure script or any other autoconf-generated files into the repo, because they cause lots of conflicts when people merge.
What's the proper way to handle this?
Should I put autoconf and automake-generated files in the repo so people can download tags directly?
Or should there be a bootstrap.sh file and people are told to run that?
Or should I just do a make dist and put that into the repo?
Thanks
Publish the output of make dist via GitHub Releases
Your first option—putting the Autoconf- and Automake-generated files into the repository—is not a good idea. It's almost never beneficial to store generated files in source control. In this case, it's going to pollute your history with a lot of unnecessary and potentially conflicting commits, particularly if not all your contributors are using the same version of Autotools. Your third option—checking in the output of make dist—is a bad idea for exactly the same reasons as the first option.
Your second option—adding a "bootstrap" script that calls Autoconf and Automake to generate the configure scripts—is also a bad idea. This defeats the entire purpose of Autotools, which is to make your source portable across systems—including those for which Autotools is not available! (Consider what would happen if someone wanted to build and install your software on a machine on which they don't have root access, and where the GNU Build System is not installed. A bootstrap script is not going to help them because they'd first need to make a local installation of Autotools and possibly all its dependencies.)
The proper way of releasing code that uses Autotools is to produce a tarball with make dist (or better yet, make distcheck, since this will also run tests and do other sanity checks), and then publish this tarball somewhere other than the source repository.
Your original question, from April 2013, states that GitHub discontinued download pages. However, in July 2013, GitHub added a "Releases" feature that not only pre-packages your source tags, but also allows you to attach arbitrary files to each release. So on GitHub, the Releases page is where you should publish your make dist tarballs (and preferably also the detached GnuPG signatures of them).
Basic steps
When you are ready to make a release, tag it and push the tag to GitHub:
$ git tag 1.0 # Also use -s if desired
$ git push --tags
Use your Makefile to produce a tarball:
$ make dist # Alternatively, 'make distcheck'
Visit the GitHub page for your project and follow the "releases" link.
You will be taken to the Releases page for your project. The first time you visit, all you will see is a list of tags and automatically produced tarballs from the source tree.
Press the "Draft a new release" button.
You will then be presented with a form in which you should fill in the Git tag associated with the release and an optional title and description. Below this there is also a file selector labelled "Attach binaries by dropping them here or selecting them". Use this to upload the tarball you created in Step 2 (and maybe also a detached GnuPG signature of it).
When you're done, press the "Publish release" button.
Your project's Releases page will now display the release, including prominent download links for the attached files.
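If you also want to attach the detached GnuPG signature mentioned above, it can be produced before uploading with, for example (the file name is just an illustration):
$ gpg --armor --detach-sign foo-1.0.tar.gz   # produces foo-1.0.tar.gz.asc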
If you don't want to use GitHub Releases, then as pointed out in a previous answer, you should upload the tarballs somewhere else, such as your own website or FTP site. Add a link to this repository from your project's README.md so that users can find it.
The second is better: you want any user of your repo to be up and running as fast as possible, re-generating what he/she needs in order to build your program.
Since Git is very much a version control for text (as opposed to an artifact repo like Nexus), providing a way to generate the final binary is the way to go.
When you cut a release, upload the result of make distcheck to your project's download page: it's a makefile target that builds the tarball and verifies that it installs, uninstalls, passes tests and other sanity checks. GitHub being wrong-headed isn't an excuse; create a tree like this in your repo:
/
/source
/source/configure.ac
/source/Makefile.am
/source/...
/releases
/releases/foo-0.1.tar.gz
/releases/...
For developers, you should not have generated files in source control. Many modern autotooled projects bootstrap fine off an invocation of autoreconf -i.
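In other words, a developer working from a fresh clone would typically run something like the following (a sketch of the usual Autotools bootstrap, not a project-specific recipe):
$ autoreconf -i     # (re)generate configure, Makefile.in, etc.
$ ./configure
$ make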