Set the name of a ZIP downloadable from GitHub, or other ways to enroll a Google Transit project on GitHub

I want to start a Google Transit project (a city transport feed for Google Maps), and for collaboration I want to use GitHub. One great thing is that GitHub offers a ZIP download that contains your whole repository, and Google wants a ZIP with the required data, but that file must be named google_transit.zip.
So my question is:
Can I somehow give Google a link that returns a file called google_transit.zip containing everything in the master branch? Maybe this can be done with the standard "Download ZIP" option, or with some hooks or something else…

GitHub lets you download a ZIP archive of the latest version of a branch using the following URL:
https://github.com/:user/:repository/zipball/:branch [GET]
The archive will be given a name derived from the output of the git describe command, so you can't control the filename.
However, there's one way to achieve what you're after by leveraging the GitHub Repo Downloads API.
Every time your master branch is ready to be published, you'd execute the following steps:
If the download resource google_transit.zip already exists, remove it
Create a new download resource and name it google_transit.zip
Upload the latest ZIP archive using the information returned by the previous request
There's even a Ruby library (ruby-net-github-upload) that may help you automate this task.
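The Repo Downloads API was later deprecated and removed by GitHub. As a rough modern equivalent, and purely as an assumption on my part rather than part of the original answer, a GitHub Actions workflow can rebuild google_transit.zip on every push to master and attach it to a fixed release, which yields a stable URL of the form https://github.com/OWNER/REPO/releases/download/latest-feed/google_transit.zip (the latest-feed tag is hypothetical and must already exist as a release). A minimal sketch:

name: publish-google-transit-zip
on:
  push:
    branches: [master]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: write   # required to upload release assets
    steps:
      - uses: actions/checkout@v4
      # Build the feed archive from the current branch contents
      - run: git archive --format=zip --output=google_transit.zip HEAD
      # Replace the asset on the fixed "latest-feed" release (hypothetical tag)
      - env:
          GH_TOKEN: ${{ github.token }}
        run: gh release upload latest-feed google_transit.zip --clobber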

Related

How do I protect my github website repository from being copied?

I built a website on GitHub and would like to protect it from someone copying my repository and running the same website (either online or offline for themselves).
The website is fairly basic and is built by a GitHub Action, which executes an R Markdown file on a schedule to produce (update) the index.html file. I want to avoid people being able to copy and freely execute that R Markdown file. I wonder if I could encrypt that specific file and simply use a secret key with GitHub Actions to decrypt it when updating the website. Is this possible, and would it be a good solution?
I also thought about having a private repository with my R Markdown file and simply pushing the HTML file to the public repository via a GitHub Action. The problem is that the GitHub Action takes a while to execute, and I would quickly run out of the computation time (2,000-3,000 minutes/month) offered by GitHub.
I also thought about having a private repository with my R Markdown file and simply pushing the HTML file to the public repository via a GitHub Action
That would have been the first approach, but since the R Markdown process consumes too much of the free compute allowance, it needs to be executed elsewhere.
Since other free online plans (like RStudio Cloud) are also limited in their project hours per month, another approach would be to use your own managed server (for instance, a Google Cloud Compute Engine instance or a DigitalOcean Droplet) where:
the R Markdown file would reside (meaning it would not be in the GitHub repository at all: no need to obfuscate/encrypt anything)
the process can take place
the generated index.html can be uploaded back to your repository, and the rest of your GitHub Action can publish the pages, as sketched below.
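A minimal sketch of that publishing side, assuming the server pushes the regenerated index.html to the default branch and that the repository's Pages source is set to GitHub Actions (workflow name and paths are illustrative):

name: publish-pages
on:
  push:
    paths: ["index.html"]
permissions:
  pages: write      # deploy to GitHub Pages
  id-token: write   # required by actions/deploy-pages
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      # Package the site (here, the repository root) as a Pages artifact
      - uses: actions/upload-pages-artifact@v3
        with:
          path: .
      - uses: actions/deploy-pages@v4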

GitHub Actions: auto-PR on some files update?

I'm very new to GitHub Actions/CI/CD, and I want to know whether it is possible to automate the following scenario:
I have a local script that uses some APIs to download some files onto my local machine. Currently, I have to run the script every day to check whether the content of these files has been updated. If some of those files were updated, I need to add those changes to a new branch and push it to the repository as a PR.
What I've tried: my idea is that it's possible to compare the hashes of the downloaded files to know whether any of them were updated. The next step would be to turn this into an event that triggers some action?
If it's possible, could you share some resources/tutorials about how to do it?
I tested something similar on GitHub to understand how CI/CD with GitHub Actions works.
The script is based on an SQLite database which is updated automatically on each run (automatic git push), and it uses GitHub Secrets to store encrypted tokens/passwords.
You can find my scheduler at the following link: https://github.com/noweh/project-marvel-memories/blob/master/.github/workflows/run-schedule.yml.
You can find more information directly in the GitHub documentation.
Here for GitHub Actions events that trigger workflows: https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows.
And here for GitHub encrypted secrets: https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-an-environment
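As a concrete starting point, here is a minimal sketch of the scenario in the question: a scheduled workflow that runs the download script and opens a PR only when the fetched files actually changed. The script path and branch name are placeholders, and it uses the third-party peter-evans/create-pull-request action:

name: check-for-updates
on:
  schedule:
    - cron: "0 6 * * *"   # every day at 06:00 UTC
  workflow_dispatch: {}    # also allow manual runs
jobs:
  update:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      # Run your existing download script (placeholder path)
      - run: ./scripts/download.sh
      # Opens a PR only if the working tree differs from the base branch
      - uses: peter-evans/create-pull-request@v6
        with:
          branch: auto/file-updates
          title: "Automated file update"
          commit-message: "Update downloaded files"

The hash check from the question isn't strictly needed here: the action compares the working tree against the base branch and simply does nothing when the downloaded files are unchanged.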

Referencing an artifact built by GitHub Actions

The upload/download artifact documentation implies that one should be able to build into the dist folder. My interpretation is that we can then reference this content in, for example, a static site, so that a site auto-builds itself for GitHub Pages on master pushes. However, it seems that artifacts are only uploaded to a specific location (i.e., retrievable via GET /repos/{owner}/{repo}/actions/artifacts) and can be downloaded only in zipped format, which defeats the purpose.
Is there a way to populate the dist folder of the repo, so that the file that was built becomes publicly and permanently accessible as part of the repo, and I can reference it without having to deploy it elsewhere like S3 etc?
Example
Here's a use case:
I have a dashboard which parses some data from several remote locations and shows it in charts. The page is deployed from /docs because it's a GitHub Pages hosted page.
the web page only reads static, cached data from /docs/cache/dump.json.
the dump.json file is generated via a scheduled GitHub Action which invokes a script that goes to the data sources and generates the dump.
This is how the web page can function quickly, without on-page lockups due to lengthy data processing, while the dump generation happens in the background. The web page periodically re-reads the /docs/cache/dump.json file to get new data, which should override old data on every scheduled trigger.
The idea is to have the action run and replace the dump.json file periodically, but all I can do is produce an artifact which I then have to manually fetch and unzip. Ideally, it would just replace the current dump.json file in place.
To persist changes made by a build process, it is necessary to add and commit them, just as with any other change to a repo. Several actions exist for this, like this one.
So you would add the following to the workflow:
- name: Commit changes
  uses: EndBug/add-and-commit@v7
  with:
    author_name: Commitobot
    author_email: my@mail.com
    message: "Updating build result!"
    add: "docs/cache/dump.json"

Could I see how many times my github project was downloaded without github API?

My teacher asked me to count how many times our GitHub project has been downloaded. I know the GitHub API can provide download counts for all releases, but in my case that doesn't work because we didn't upload archive files to our releases.
Is there another way to do this? Maybe some tool in the Marketplace?
Someone said it's impossible to count downloads of non-asset files (Source code (zip) and Source code (tar.gz)). That was in 2016; can I do this now?
Not to my knowledge: those specific data are not publicly exposed.
Maybe GitHub support has access to internal data about those.
This is different from download_count for a release artifact (which is not the same as the project source code archive, zip or tgz).
See "List releases for a repository", which does include a download_count field per asset.

Build an open source project from GitHub (not mine) with a CI

There is an open source project (https://github.com/firebase/firebase-jobdispatcher-android) which I would like to get built using Travis/CircleCI or another cloud CI. However, those CIs don't let you access repos that are not yours.
I didn't try, but I have a hunch that I also won't be able to set up a webhook to get notified when that repo's master branch is updated.
Why not fork? Because then I'd somehow need to update my fork manually or use a cron server to keep it in sync! That defeats the point of building an open source repo...
Why do I want to build it continually? Because they do not upload their .aar output to Maven Central or JCenter, and I don't want to put the .aars in my project and update them all the time; it bloats the repo...
In any case, I don't get it: there's an open source project, the repo exists and is open to everyone, and pulling the data and getting webhooks doesn't compromise that repo in any way. Why isn't this possible?
If I'm mistaken and a webhook is possible, how can I set up a build that ends with uploading to Maven Central (probably via a Gradle plugin; I have an account and would be happy to have a public copy there)?
(I also thought of some kind of free microservice plus a Docker-based CI which I could use to pull and build whatever; I don't mind if a build takes time.)
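For what it's worth, a hedged sketch of the cron-style idea mentioned above, using GitHub Actions in a small repo of your own (a service that didn't exist when this question was asked): a scheduled job that freshly clones the upstream project and builds it, with no fork to keep in sync. The Java version is a guess at the project's toolchain, and publishing to Maven Central (signing keys, Sonatype credentials) is omitted:

name: build-upstream
on:
  schedule:
    - cron: "0 3 * * *"   # daily; no fork or manual sync needed
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Fresh shallow clone of the upstream repo instead of a fork
      - run: git clone --depth 1 https://github.com/firebase/firebase-jobdispatcher-android.git upstream
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: "8"   # assumption: an older toolchain for this project
      - run: ./gradlew assemble
        working-directory: upstream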