Referencing an artifact built by GitHub Actions

The upload/download artifact documentation implies that one should be able to build into the dist folder. My interpretation of this is that we can then reference this content in, for example, a static site, so that a site auto-builds itself for GitHub Pages on master pushes. However, it seems that artifacts are only uploaded to a specific location (i.e. GET /repos/{owner}/{repo}/actions/artifacts) and can be downloaded only in zipped format, which defeats the purpose.
Is there a way to populate the dist folder of the repo, so that the file that was built becomes publicly and permanently accessible as part of the repo, and I can reference it without having to deploy it elsewhere, like S3, etc.?
Example
Here's a use case:
I have a dashboard which parses some data from several remote locations and shows it in charts. The page is deployed from /docs because it's a GitHub Pages hosted page.
the web page only reads static, cached data from /docs/cache/dump.json.
the dump.json file is generated via a scheduled GitHub Action which invokes a script that goes to the data sources and generates the dump.
This is how the web page can function quickly, without on-page lockups due to lengthy data processing, while the dump generation happens in the background. The web page periodically re-reads the /docs/cache/dump.json file to get new data, which should override old data on every scheduled trigger.
The idea is to have the action run and replace the dump.json file periodically, but all I can do is produce an artifact which I then have to manually fetch and unzip. Ideally, it would just replace the current dump.json file in place.

To persist changes made by a build process, it is necessary to add and commit them as after any other change to a repo. Several actions exist for this, such as EndBug/add-and-commit.
So you would add the following to the workflow:
- name: Commit changes
  uses: EndBug/add-and-commit@v7
  with:
    author_name: Commitobot
    author_email: my@mail.com
    message: "Updating build result!"
    add: "docs/cache/dump.json"

Related

How do I protect my GitHub website repository from being copied?

I built a website on GitHub and would like to protect it from someone copying my repository and running the same website (either online or offline for themselves).
The website is fairly basic and is built by a GitHub Action, which executes an RMarkdown file on a schedule to produce (update) the index.html file. I want to prevent people from copying and freely executing that RMarkdown file. I wonder if I could encrypt that specific file and simply use a secret key with GitHub Actions to decrypt it when updating the website. Is this possible, and would it be a good solution?
I also thought about having a private repository with my RMarkdown file and simply pushing the HTML file to the public repository via a GitHub Action; the problem is that the GitHub Action takes a while to execute, and I would quickly run out of the computation time (2000-3000 mins/month) offered by GitHub.
I also thought about having a private repository with my RMarkdown file and simply pushing the HTML file to the public repository via a GitHub Action
That would have been the first approach, but since the RMarkdown process consumes too many build minutes, it needs to be executed elsewhere.
Since other online free plans (like RStudio Cloud) are also limited in their project hours per month, another approach would be to call your own managed server (for instance, a Google Cloud compute engine or a DigitalOcean Droplet) where:
the RMarkdown file would reside (meaning it would not be in the GitHub repository at all: no need to obfuscate/encrypt anything)
the process can take place
the generated index.html can be uploaded back to your repository, and the rest of your GitHub action can publish the pages (see the sketch below).
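One possible shape for the GitHub side, assuming the server fires a repository_dispatch event when rendering finishes and exposes the rendered page at a hypothetical URL (both assumptions, not from the original answer):

name: Pull rendered site
on:
  repository_dispatch:
    types: [site-rendered]   # assumed event type sent by the server
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Hypothetical endpoint on the managed server that serves the rendered page.
      - run: curl -fsSL https://example.com/rendered/index.html -o index.html
      - name: Commit rendered page
        uses: EndBug/add-and-commit@v7
        with:
          message: "Update rendered index.html"
          add: "index.html"

Alternatively, the server could push index.html directly over git with a deploy key, skipping the workflow entirely.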

GitHub Actions not discovering artifacts uploaded between runs

I have a set of static binaries that I am currently re-downloading on every CI run. I need these binaries to test against. I would like to cache these OS-specific binaries on GitHub Actions so I don't need to re-download them every time.
A key consideration here is that the binaries do not change between jobs; they are third-party binaries that I do not want to re-download from the third-party site every time a PR is submitted to GitHub. These binaries are used to test against, and the third party publishes a release once every 6 months.
I have attempted to do this with the upload-artifact and download-artifact flow with GitHub Actions.
I first created an action to upload the artifacts. These are static binaries I would like to cache repository-wide and re-use every time a PR is opened.
Here is the commit that did that:
https://github.com/bitcoin-s/bitcoin-s/runs/2841148806
I pushed a subsequent commit and added logic to download-artifact on the same CI job. When it runs, it claims that there is no artifact with that name, despite the prior commit on the same job having uploaded it:
https://github.com/bitcoin-s/bitcoin-s/pull/3281/checks?check_run_id=2841381241#step:4:11
What am I doing wrong?
Artifacts and caching both store files on GitHub, but they serve different use cases; in particular, download-artifact only sees artifacts uploaded earlier in the same workflow run, which is why the run for your second commit cannot find the artifact uploaded by the first. From the GitHub docs:
Artifacts and caching are similar because they provide the ability to store files on GitHub, but each feature offers different use cases and cannot be used interchangeably.
Use caching when you want to reuse files that don't change often between jobs or workflow runs.
Use artifacts when you want to save files produced by a job to view after a workflow has ended.
In your case you could use caching and set up a cache action. You will need a key and a path, and it will look something like this:
- name: Cache dmg
  uses: actions/cache@v2
  with:
    key: "bitcoin-s-dmg-${{steps.previoustag.outputs.tag}}-${{github.sha}}"
    path: ${{ env.pkg-name }}-${{steps.previoustag.outputs.tag}}.dmg
When there's a cache hit (your key is found), the action restores the cached files to your specified path.
When there's a cache miss (your key is not found), a new cache is created.
By using contexts you can update your key and observe changes in files or directories. E.g. to update the cache whenever your package-lock.json file changes you can use ${{ hashFiles('**/package-lock.json') }}.
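In your case, the downloaded third-party binaries could be cached with a key based on their pinned version, so they are fetched again only when the version bumps. This is a sketch only; the BINARY_VERSION variable, the binaries/ path, and the download script are assumptions, not part of the original question:

- name: Cache third-party binaries
  id: binary-cache
  uses: actions/cache@v2
  with:
    key: third-party-binaries-${{ env.BINARY_VERSION }}
    path: binaries/
- name: Download binaries on cache miss
  if: steps.binary-cache.outputs.cache-hit != 'true'
  run: ./scripts/download-binaries.sh   # hypothetical download step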

How to create tag automatically upon accepted merge request in GitLab?

This is for a repository containing a library. The library version number is incremented (manually) each time a Merge Request to master is accepted.
However, if I want to access a file from version X.Y.Z, I have to look for the commit that incremented the version number to X.Y.Z, get its date, and then look in the history of the file for the version at that date.
I would like to create a tag per version, automatically when the Merge Request to master is created. Is this possible?
I hoped it would be possible with the new GitLab slash commands, but there is currently no support for tags.
Is there any other possibility than using web hooks?
While facing the same challenge, I stumbled upon this suggestion on GitLab's former issue tracker on GitHub [1]:
“You can write up a script to use GitLab API to accept a merge request, get the commit of the merge and then tag that commit.” --MadhavGitlab
(just to mention that, for me, that's not sufficient)
[1] EDIT:
It looks like all issues have been purged from the GitHub mirror, so this link no longer works, but luckily the relevant quote persists right here.
I first tried to do it the GitLab way, by creating a .gitlab-ci.yml file in the project's top-level directory. That file can contain the commands that create the version tag. The user executing the script has to have enough permission to push to the git project and must be configured with authoring information.
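A minimal sketch of such a job, assuming the version is kept in a VERSION file and a project access token with write_repository scope is stored in a CI/CD variable named PROJECT_TOKEN (both of these are assumptions, not from the original answer):

tag-release:
  stage: deploy
  only:
    - master
  script:
    # Read the version that the merge request bumped (hypothetical VERSION file).
    - VERSION=$(cat VERSION)
    # Create a lightweight tag on the merge commit and push it back
    # using the assumed project access token.
    - git tag "v${VERSION}" "${CI_COMMIT_SHA}"
    - git push "https://oauth2:${PROJECT_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "v${VERSION}"

You would likely also want to skip the tag step when the tag already exists, so re-running the pipeline does not fail.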
I finally did it on a Jenkins server, where I created a job that is invoked when commits are pushed to a specific branch. The tag can be created in the job's "execute shell" commands.

Build an open source project from GitHub (not mine) with a CI

There is an open source project (https://github.com/firebase/firebase-jobdispatcher-android) which I would like to get built using Travis/CircleCI or another cloud CI. However, those CIs don't let you build repos that are not yours.
I didn't try, but I have a hunch that I also won't be able to set up a webhook to get notified when that repo's master branch is updated.
Why not fork? Because then I would somehow need to update my forked repo manually or with a cron server, which loses the point of having open source repo builds...
Why do I want to build it continually? Because they do not upload their .aar output to Maven Central or JCenter, and I don't want to put the .aars in my project and keep updating them all the time, which bloats the repo...
In any case, I don't get it: there's an open source project, the repo exists and is open to everyone, and pulling the data and receiving webhooks doesn't compromise that repo in any way, so why isn't this possible?
If I'm mistaken and a webhook is possible, how can I set up a build that ends up uploading to Maven Central (probably via a Gradle plugin; I have an account and would be happy to have a public copy there)?
(I thought of some kind of free microservice plus a Docker-based CI that I can use to pull and build whatever; I don't mind if a build takes time.)

Azure Continuous Integration Overwriting App_Data even with WebDeploy file specified to "exclude app data"

I have a Windows Azure Website and I've set up Azure Continuous Integration with hosted Team Foundation Server. I make a change on my local copy, commit to TFS, and it gets published to Azure. This is great; the problem is that I have an Access database in the ~\App_Data\ folder, and when I check in, the copy on Azure gets overwritten.
I set up a Web Deploy publish profile to "Exclude App_Data" and configured the build task to use that profile, and now it DELETES my ~\App_Data\ folder.
Is there a way to configure Azure Continuous Integration to deploy everything and leave the App_Data alone?
I use the 'Publish Web' tool within Visual Studio, but I think the principles are the same:
if you modify a file locally and publish, it will overwrite whatever's on the web
if you have no file locally - but the file exists on the web - it will still exist on the web after publishing
The App_Data folder gets no special treatment in this behaviour by default. Which makes sense - if you modified an .aspx or .jpg file locally, you would want the latest version to go on the web, right?
I also use App_Data to store some files which I want the web server (ASP.NET code) to modify and have it stay current on the web.
The solution is to:
Allow the web publishing to upload App_Data, no exclusions.
Don't store files in App_Data (locally) that you want to modify on the web.
Let the web server be in charge of creating and modifying the files exclusively.
Ideally you would not have to change any code and the server can create a blank file if necessary to get started.
However if you must start off with some content, say, a new blank .mdf file, you could do the following:
Locally/in source repository, create App_Data/blank.mdf (this is going to be a starting point, not the working file).
In Global.asax, modify "Application_Start" to create the real working .mdf file from the blank starting file:
// Requires: using System.IO;
// If the real file doesn't exist yet (first run),
// then create it using a copy of the placeholder.
// If it exists then we re-use the existing file.
string real_file = HttpContext.Current.Server.MapPath("~/App_Data/working.mdf");
if (!File.Exists(real_file))
{
    File.Copy(HttpContext.Current.Server.MapPath("~/App_Data/blank.mdf"), real_file);
}