How to deploy to a minikube cluster that runs in GitHub Codespaces?

Context
I've installed minikube in GitHub Codespaces, and it works great! With this setup I'm able to port-forward any application running in minikube and reach it via the URL generated by GitHub Codespaces.
Problem
I'd like to use GitHub Actions to deploy an app into the minikube cluster that runs in GitHub Codespaces.
Question
Is it possible, and if so, how do I do it?

It turned out that it is possible. There are two ways you could solve this problem.
Push based
Start a GitHub Codespace with minikube installed in it.
Install and configure GitHub's self-hosted runner in the Codespace.
Start the self-hosted runner - preferably as a service.
Run your GitHub Actions workflows on the self-hosted runner:
jobs:
  build:
    runs-on:
      labels:
        - self-hosted
        - self-hosted-runner-label
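A job on that runner can then deploy straight to minikube with kubectl, because the runner lives inside the Codespace next to the cluster. A minimal sketch, assuming a manifest at k8s/deployment.yaml and a deployment named my-app (both placeholders, not from the original post):

jobs:
  deploy:
    runs-on:
      labels:
        - self-hosted
        - self-hosted-runner-label
    steps:
      - uses: actions/checkout@v4
      # kubectl picks up the kubeconfig that minikube already wrote inside the Codespace
      - name: Deploy to minikube
        run: |
          kubectl apply -f k8s/deployment.yaml
          kubectl rollout status deployment/my-app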
Pull based
Start a GitHub Codespace with minikube installed in it.
Install ArgoCD in minikube.
Point ArgoCD towards your GitHub repository (a sketch of the Application manifest follows this list).
Use GitHub Actions to generate new k8s manifest files.
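A minimal sketch of the ArgoCD Application for the pull-based setup, assuming the generated manifests live in a k8s/ folder of the repository (the repo URL, path, and namespaces are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    # repository that GitHub Actions updates with freshly generated manifests
    repoURL: https://github.com/OrgName/RepName.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true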

Related

How can I use GitLab's Container Registry for Helm charts with ArgoCD's CI/CD mechanism?

My situation is as follows:
- have a Kubernetes cluster with a couple of nodes
- have argocd installed on the cluster and working great
- using GitLab for my repo and build pipelines
- have another repo for storing my Helm charts
- have Docker images being built in GitLab and pushed to my GitLab registry
- have argocd able to point to my Helm chart repo and sync the Helm chart with my k8s cluster
- have Helm chart archive files pushed to my GitLab repo
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart Git repo is that I can't (or don't know how to) differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, argocd syncs both environments with the Helm chart repo. I could get around this with separate charts for each environment, but that isn't an acceptable solution.
The second problem I faced, while trying to get around the above, is that I can't get argocd to pull Helm charts from a GitLab OCI registry. I made my build pipeline push the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is exactly what I want. The problem is that argocd, as far as I can tell, can't pull from GitLab's container registry.
How do I go about automating my pipeline with GitLab as my repo and build pipeline, Helm for packaging my application, and argocd for syncing my Helm application with my k8s cluster?
Regarding "I can't get argocd to pull helm charts from a gitlab oci registry":
You might be interested in GitLab 14.1 (July 2021):
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access token, deploy token, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and issue.
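As a rough sketch of what publishing a chart from GitLab CI to the project's Helm package registry could look like (the chart directory, chart file name, the stable channel, and the alpine/helm image are assumptions; the endpoint is the one described in the documentation linked above):

publish-chart:
  image:
    name: alpine/helm:latest
    entrypoint: [""]
  script:
    - helm package ./chart    # produces e.g. my-chart-0.1.0.tgz (placeholder name)
    - apk add --no-cache curl
    - >
      curl --request POST
      --user gitlab-ci-token:${CI_JOB_TOKEN}
      --form "chart=@my-chart-0.1.0.tgz"
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/api/stable/charts"

On the argocd side you would then add the same project as a Helm repository (.../packages/helm/stable) instead of pulling from the container registry.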

Kubernetes: how to make a Deployment update its image automatically (CI/CD)

I am using GCP and Kubernetes.
I have a GCP source repository and a container registry.
I have a trigger that builds the container after a push to the master branch.
I don't know how to set up an automatic trigger to deploy the new version of the container.
How can I automate the deployment process?
You need some extra pieces to do it; for example, if you use Helm to package your deployment, you can use Flux to trigger the automated deployment.
https://helm.sh/
https://fluxcd.github.io/flux/
There are two solutions here.
You can expand the build step: Cloud Build can also push changes to your GKE cluster. You can read more about this here; a minimal cloudbuild.yaml sketch follows after these two options.
Alternatively, what you currently have is a solid CI pipeline; for the CD part, you can use Spinnaker for GCP, which was released recently. It integrates well with GCE, GKE, and GAE and lets you automate the CD portion.
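A rough cloudbuild.yaml sketch of the Cloud Build option (the image name, deployment name, zone, and cluster name are placeholders):

steps:
  # build and push the image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
  # point the existing Deployment at the freshly built image
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-app', 'my-app=gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
images:
  - 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'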

AWS CodeBuild buildspec.yml example for GitHub

I am trying to use AWS CodeBuild to build my code from GitHub. These are the steps I followed so far:
1) Created a Windows Docker image with all the prerequisite software needed (git, npm, Node.js, etc.) and pushed it to Amazon ECR.
2) Created a project in AWS CodeBuild using
a) GitHub as the source (what to build)
b) the Docker image created in step 1 (how to build)
I set up buildspec.yml as below:
env:
  #variables:
  #parameter-store:
phases:
  #install:
  #pre_build:
  build:
    commands:
      - git clone https://github.com/OrgName/RepName.git "c:\www\localfolder"
  #post_build:
#artifacts:
  #files:
But this always fails during the DOWNLOAD_SOURCE step, saying "CodeBuild is experiencing Issues".
Please suggest how to set up buildspec.yml for GitHub clone/fetch/checkout purposes.
Thanks.
The issue you encountered may not be related to a git clone/fetch/checkout failure. The build can also fail at the "DOWNLOAD_SOURCE" step if CodeBuild fails or times out while pulling the Windows Docker image, especially when the image is large.
Workarounds you can try (a sketch of the first one follows below):
1) Use the Windows image provided by CodeBuild and install the prerequisite software during the install phase (you will need to update your buildspec.yml).
OR
2) Use a BUILD_GENERAL1_LARGE instance; you may also need to increase the timeout.
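A rough buildspec.yml sketch for the first workaround; the choco commands are an assumption about the image (use whichever installer the CodeBuild Windows image you pick actually provides):

version: 0.2
phases:
  install:
    commands:
      # assumption: Chocolatey is available in the CodeBuild Windows image
      - choco install -y git nodejs
  build:
    commands:
      - git clone https://github.com/OrgName/RepName.git "c:\www\localfolder"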

Mirror from GitHub to GitLab

In GitLab it is now possible to automatically mirror a remote Git repo:
http://docs.gitlab.com/ee/workflow/repository_mirroring.html
Synchronization is done either manually or via a GitLab cron script (running every hour).
I would like to sync my GitHub repo this way and run GitLab CI jobs using my own runners.
Is it possible to automate the sync task, e.g. via GitHub webhooks? Do you know if there is any other way to do it with GitLab infrastructure?
I would like to avoid hacks like:
- cloning the GitHub repo in a GitLab runner
- running my own cron jobs that sync more often
Mirroring does work, but it's slow. If your goal is to run GitLab CI for a GitHub repository, good news: GitLab has released a new version that lets you use a github.com repository with GitLab CI:
https://about.gitlab.com/2018/03/22/gitlab-10-6-released/
GitLab CI/CD for GitHub is now part of our GitLab.com Free tier
Instructions:
https://about.gitlab.com/2018/03/22/gitlab-10-6-released/#gitlab-cicd-for-external-repos
https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/

SSH continuous deployment with self-hosted Drone CI

I just need a simple guide on how to set up SSH continuous deployment with a self-hosted Drone CI. Is it possible to do that? I know that Drone.io offers continuous deployment in many ways (SSH, Heroku, App Engine, Amazon S3, etc.), but what about self-hosted Drone CI?
I found that self-hosted Drone is awesome enough to have a Go plugin that supports continuous deployment. It's really as simple as this:
deploy:
  bash:
    script:
      - bundle exec cap deploy:update
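For a more recent Drone setup, a pipeline step using the community SSH plugin appleboy/drone-ssh would look roughly like this; the host, username, secret name, and remote commands are placeholders, not from the original answer:

kind: pipeline
type: docker
name: deploy

steps:
  - name: deploy-over-ssh
    image: appleboy/drone-ssh
    settings:
      host: example.com
      username: deploy
      key:
        from_secret: ssh_private_key
      # commands executed on the remote host over SSH
      script:
        - cd /srv/my-app && git pull
        - sudo systemctl restart my-app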