Unable to reclaim storage for Actions and Packages after deleting all files - GitHub

When I try to run a GitHub Action (it builds an Android APK), it shows this error:
You've used 100% of included services for GitHub Storage (GitHub
Actions and Packages). GitHub Actions and Packages won’t work until a
monthly spending limit is set.
So I deleted all the artifact files, but after deleting each artifact the "Storage for Actions" figure does not decrease. For example, I deleted 20 artifact files of 20 MB each, which is 400 MB, yet when I check "Storage for Actions" it still shows the quota as exceeded. Why is this happening?

I encountered an identical problem. After looking at the docs, it seems it can take up to an hour for storage usage to update.
From the documentation:
Storage usage data synchronizes every hour.
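
If deleting artifacts one at a time in the web UI is tedious, here is a minimal sketch using the gh CLI and the REST API to clear them in bulk and then check the reported usage. It assumes gh is installed and authenticated; OWNER/REPO and USERNAME are placeholders for your own repository and account.

# Sketch: bulk-delete all Actions artifacts in a repository, then check the
# reported shared-storage usage. OWNER/REPO and USERNAME are placeholders.
for id in $(gh api repos/OWNER/REPO/actions/artifacts --paginate --jq '.artifacts[].id'); do
  gh api --method DELETE "repos/OWNER/REPO/actions/artifacts/$id"
done

# The billing figure should only reflect the deletions after the hourly
# synchronization mentioned above.
gh api users/USERNAME/settings/billing/shared-storage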

Related

Very Slow Upload Times to GCP Apt Artifact registry

I have a CI/CD system uploading numerous large .deb packages into a Google Cloud Artifact Registry repository for Apt packages. The normal upload time is roughly 10 seconds for the average package. Yesterday, all of the upload commands to this artifact registry started to hang until they were either terminated by an external trigger or timed out (after over 30 minutes).
Any attempt to delete packages from the registry times out without deleting the package.
The command I have been using to upload is:
gcloud artifacts apt upload ${ARTIFACT_REPOSITORY} --location=${ARTIFACT_LOCATION} --project ${ARTIFACT_PROJECT} --source=${debPackageName} --verbosity=debug
I started by updating all gcloud components to the latest version:
Google Cloud SDK 409.0.0
alpha 2022.11.04
beta 2022.11.04
bq 2.0.81
bundled-python3-unix 3.9.12
core 2022.11.04
gcloud-crc32c 1.0.0
gsutil 5.16
I tried deleting packages, thinking perhaps the Artifact Registry was getting bloated, using this command:
gcloud artifacts packages delete --location={LOCATION} --project {PROJECT} --repository={REPOSITORY} {PACKAGE} --verbosity=debug
But I consistently get:
"message": "Deadline expired before operation could complete."
The debug output from the original command and the delete command both spam this kind of message:
DEBUG: https://artifactregistry.googleapis.com:443 "GET /v1/projects/{PROJECT}/locations/{LOCATION}/operations/f9885192-e1aa-4273-9b61-7b0cacdd5023?alt=json HTTP/1.1" 200 None
When I created a new repository I was able to upload to it without the timeout issues.
I'm the lead for Artifact Registry. Firstly, apologies that you're seeing this kind of latency with update operations to Apt repositories. It is likely caused by regenerating the index for the repo; the bigger the repo gets, the longer this takes.
If you do a bunch of individual uploads/deletes, the index generation queues up and you get timeouts. We did change some of the locking behavior around this recently, so we may have inadvertently swapped one performance issue for another.
We are planning to stop doing the index generation in the same transaction as the file modification. Instead we'll generate it asynchronously, and will look at batching or de-duping so that less work is done for a large number of individual updates. It will mean that the index isn't up-to-date the moment the upload call finishes, but will be eventually consistent.
We're working on this now as a priority but you may not see changes in performance for a few weeks. The only real workaround is to do less frequent updates or to keep the repositories smaller.
Apologies again, we definitely want to get this working in a performant way.
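
Until then, one stopgap is to bound each upload with a hard timeout and retry a few times, so a stuck index regeneration fails the CI step instead of hanging for 30+ minutes. This is only a sketch built around the command from the question; the 600-second timeout and three attempts are arbitrary choices.

# Sketch: wrap the upload in a timeout with a few retries.
# The 600-second timeout and 3 attempts are arbitrary.
for attempt in 1 2 3; do
  if timeout 600 gcloud artifacts apt upload "${ARTIFACT_REPOSITORY}" \
      --location="${ARTIFACT_LOCATION}" \
      --project "${ARTIFACT_PROJECT}" \
      --source="${debPackageName}"; then
    break
  fi
  echo "Upload attempt ${attempt} failed or timed out; retrying..."
  sleep 60
done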

Can I delete the repository and create a new one with the same name to resolve the problem when the GitHub LFS quota is exceeded?

As the title describes, I did so, but the problem remains, with this log message:
batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
So is there a way that works? I don't need to retain the history of the repository or any of its contents, since it is only used to store and publish online-built binaries. Purchasing more data packs and bandwidth is not an option for me either.
GitHub's Git LFS quota is per-user account. It resets once a month, and then you get another free gigabyte of download. It doesn't matter how many repositories you have, and deleting them doesn't help.
In general, Git repositories, whether using Git LFS or not, are not a good fit for storing binaries. If you're using a GitHub repository, you can use release assets for binaries built from your repository, which are available without charge. If you're just trying to upload and distribute binaries, a different approach would be warranted, such as a VPS with a web server or a cloud bucket.
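
As a concrete example of the release-asset route, here is a minimal sketch with the gh CLI, assuming gh is installed and authenticated; the tag, title, and file path are placeholders.

# Sketch: publish a built binary as a release asset instead of an LFS object.
# The tag, title, and file path are placeholders.
gh release create v1.2.3 ./dist/myapp-linux-amd64.tar.gz \
  --title "v1.2.3" \
  --notes "Automated binary build"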

Make VSTS agent use the same "Working Directory" every time?

I have a VSTS local agent that runs a "Get Sources" build task, which causes the Git repository to be downloaded. This works fine.
Unfortunately, my Git repository is over 20 GB in size. I have the "Get Sources" task set not to do any cleanup, because I want subsequent Git downloads to avoid re-downloading the entire 20 GB repository every time.
Today, I noticed that the agent switched the working directory from
C:\agent_work\1
to
C:\agent_work\2
which caused the entire repository to be re-downloaded again when the "Get Sources" build task executed.
How does the build agent decide what the "working directory" resolves to, and is there a way to force the agent to use the same directory?
I really can't afford the time to download 20 GB every time I need to do a deployment.
I have no tagging or branching going on in the repository. It's fairly straightforward aside from the size.
Thanks in advance!
Each build definition goes into its own directory within the agent's working directory.
This is intentional, cannot be changed, and should not be changed. The reason it behaves this way is to support the ability to build concurrently -- multiple running builds sharing the same copy of the repository are guaranteed to step on each other sooner or later.
Synchronizing the repo will only happen once per build definition per agent.

VSTS Hosted Agent, not enough space in the disk

I cannot build in VSTS with the hosted agent (VS 2017); the build fails with this error:
System.IO.IOException: There is not enough space on the disk
I have tried setting the "Clean" option to true on the Build > Repository definition, without solving the issue. I didn't previously have this option set to true, which I imagine led to the current situation.
I also installed the VSTS extension "Clean Agent Directories" and added it as the last step of the build process, which didn't solve the issue either.
Is there an option that would allow me to solve this issue and continue using the hosted build agent?
Hosted agents offer 10 GB of space. You stated that your entire solution folder is 2.6 GB. Your build outputs will typically be somewhere in the range of 2x that size, if not larger, depending on various factors.
If you're a Git user, the entire repo that's being cloned may be significantly larger than 2.6 GB as well -- cloning the repo brings down not only the current working copy of the code, but also all of the history.
You can control the clone depth (e.g. how much history is pulled down) by enabling Shallow fetch under the Advanced options of your repo settings.
If you're a TFVC user, you can check your workspace mappings to ensure only relevant source code is being pulled down.
You may be in a situation where the 10 GB simply isn't sufficient for your purposes. If the 2.6 GB is purely code and contains no binary assets (images, PDFs, video files, etc), you may want to start modularizing your application so smaller subsections can be built and independently deployed. If the 2.6 GB contains a lot of binary assets, you'll likely want to separate static content (images, et al) from source code and devise a separate static content deployment process.
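
In plain git terms, the Shallow fetch option mentioned above amounts to limiting the clone depth; a rough command-line equivalent is sketched here, with the repository URL and depth value as placeholders.

# Sketch: a shallow clone downloads only the most recent commit,
# not the full history. The URL and --depth value are placeholders.
git clone --depth 1 https://dev.azure.com/ORG/PROJECT/_git/REPO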
According to Microsoft's documentation,
(Microsoft-hosted agents) Provide at least 10 GB of storage for your source and build outputs.
So, if you are getting a "not enough space on the disk" error, it may mean that the amount of disk space used by your source code (files, repos, branches, etc.), together with the amount of disk space taken by your build output (files generated as a result of the build process), is exceeding the 10 GB of storage provided by your DevOps plan.
When I got this error, I had to delete an old Git repo and an old Git branch, freeing 17 MB of space, which was enough for my build to proceed. Thus, in my case the space was being used up by source code. It could equally be too many or too-large files generated by the build. You just need to find out which of these two is causing your lack of disk space and work on freeing it.
There is a trick to free agent space by removing the cached Docker images (if you don't need them, of course). The Microsoft-hosted agent comes with a list of pre-provisioned Docker images. This SO answer describes where to find the docs on the different images / cached container images.
It's as simple as adding an extra command task to clean up the cached images. For Linux / Ubuntu:
steps:
- script: |
    df -h
- script: |
    docker rmi -f $(docker images -aq)
- script: |
    df -h
The df (disk free) command shows you how much is saved. This will probably free up another 5 GB.

Add binary distribution to github's download link

GitHub has this download link on repositories. How can I add binary distributions to this list?
I cannot find any info on help.github, so a link to some documentation would be helpful.
On December 11, 2012, the "Uploads" functionality (aka "Downloads") was deprecated:
https://github.com/blog/1302-goodbye-uploads
Update: on July 2, 2013, the GitHub team announced a new "Releases" feature as a replacement for "Downloads":
https://github.com/blog/1547-release-your-software
There is a new kid in town:
https://bintray.com/
* I am not affiliated
How to add files to a release
Simply follow the "releases" link within your GitHub project.
Given this example:
user: thoughtbot
repo: neat
Final link would be: https://github.com/thoughtbot/neat/releases
Then click "Add new release" or "Edit release" to get to the upload page, and at the bottom of that page you will see the legend:
Attach binaries for this release by dropping them here.
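
If you prefer the command line to the drag-and-drop page, a small sketch with the gh CLI can attach a file to an existing release. It assumes gh is installed and authenticated; the tag and file name are placeholders, and the repository is the example above.

# Sketch: attach a binary to an existing release from the command line.
# The tag and file name are placeholders.
gh release upload v1.0.0 ./neat.zip --repo thoughtbot/neat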
Some notes regarding size limits:
The GitHub Releases feature is awesome! Just consider that it is designed to host files under 50 MB without a warning, with a hard limit of 100 MB per file. Also, please keep it to no more than 1 GB per account!
For large binary files they recommend using a third-party service like Dropbox, but if you are open source or on a tight budget I recommend you use sourceforge.net.
SourceForge is for open source projects, is free, and holds large files (up to 5 GB per file) without complaint. I managed to share an entire 1.1 GB VirtualBox image! The number of files you can upload is not clearly limited, so assume it is unlimited.
Bintray is nice but has a 30 MB limit per file and 500 MB per account, so you may want to stick with GitHub if your files are under those limits.
Disclaimer: I'm not affiliated with, nor do I work for, any of the mentioned companies.
The download link is first and foremost meant for git archive output.
As Holger Just points out in his answer (upvoted), you can add "a new download".
See the blog post "Nodeload2: Downloads Reloaded" for an account of all the trouble they have had providing that one service:
Nodeload is what prepares git repository contents into zip and tarballs.
Essentially, we have too many requests flowing through the single nodeload server. These requests were spawning git archive processes, which spawn ssh processes to communicate with the file servers.
You can create releases and attach binary downloads to each release. This replaced a similar feature called the downloads page that was removed in late 2012.