Drools - Clone GitHub Repo

I have created a K8s deployment for KIE Workbench (KIE WB) and KIE Server.
For KIE WB I have created a Docker image that configures the post-commit hook so my repos are pushed to GitHub.
All of this works great.
My question revolves around restoring a repository upon pod creation: when my pod starts up, I would like to clone my GitHub repos so they're restored without having to import them manually.
I thought I could use the GitHub API to get a tarball and drop it into the .niogit/ path, but that's not working (I see the file structure in Git is different; it looks like just the Java source, not the files KIE WB needs to recognize that it's a repo).
I'm sure something like this has been done before, but I'm not finding anything to get me going. I don't want to reinvent the wheel either :)
Any ideas?

You need to use an initContainer with Kubernetes git-sync: https://github.com/kubernetes/git-sync/
Please see the answers to the question "How to clone a private git repository into a kubernetes pod using ssh keys in secrets?"
The Kubernetes git-sync initContainer will pull a git repository into a local directory that is shared with your Drools container via an emptyDir volume mount.
For more information on initContainers, please read https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
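A minimal sketch of that wiring (the repo URL, image tags, and mount paths are placeholders; you may still need to adapt the cloned layout to whatever KIE WB expects under .niogit):

apiVersion: v1
kind: Pod
metadata:
  name: kie-wb
spec:
  initContainers:
    - name: git-sync
      image: registry.k8s.io/git-sync/git-sync:v3.6.3  # pin to a version you have validated
      args:
        - --repo=https://github.com/your-org/your-kie-repo.git  # placeholder repo
        - --branch=master
        - --root=/git       # clone target inside the shared volume
        - --one-time        # clone once and exit so the main container can start
      volumeMounts:
        - name: repos
          mountPath: /git
  containers:
    - name: kie-wb
      image: your-kie-wb-image:latest  # your custom KIE WB image
      volumeMounts:
        - name: repos
          mountPath: /opt/jboss/wildfly/bin/.niogit  # wherever your image keeps .niogit
  volumes:
    - name: repos
      emptyDir: {}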

Related

How to use GitLab to share files and folders between projects

I have a question to ask, but I'll explain my plan/requirements first:
I have started at a new company.
I have been tasked with migrating a lot of microservices running on Swarm to Kubernetes.
There are about 50 microservices running now.
Right now we are using Consul as the key/value store for configuration files.
Due to a lot of mistakes in designing the infrastructure, our Swarm is not stable (failing overlays and so on).
Developers want sub-versioning on the configuration as well, but in a specific way:
one project for all config files;
they don't want to go through build stages.
There are some applications that read live configuration (changes occur regularly).
So I need to centralize the configuration and create a project for this task: I store Kubernetes manifests, GitLab CI files, and app configurations there.
When I include CI files in the target project, I can't access the config and Kube manifests (submodules are not acceptable to the developers).
I'm planning to use Helm instead of kubectl for deployment.
My biggest challenge is to provide the configuration live (as soon as a developer pushes a change, it is applied to the ConfigMap).
Am I on the right track?
Any suggestions on how to achieve my goal?
I expect to be able to deploy projects and use multiple files and folders from other projects.
Create a CI file like this in your DevOps repo; this job should commit the config file to your DevOps repo whenever the config changes:
commit-config-file-to-devops-repo:
  script: "command to commit config file to your devops repo"
  only:
    refs:
      - master
    changes:
      - path/some-config-file.json
      - configs/*
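The script above is just a placeholder. A minimal sketch of what it might contain (the repo URL, paths, and the GIT_PUSH_TOKEN CI variable are hypothetical):

commit-config-file-to-devops-repo:
  only:
    refs:
      - master
    changes:
      - configs/*
  script:
    # Clone the DevOps repo with a token that has write access (hypothetical variable)
    - git clone "https://oauth2:${GIT_PUSH_TOKEN}@gitlab.example.com/ops/devops-repo.git"
    - cp configs/*.json devops-repo/configs/
    - cd devops-repo
    - git config user.name "ci-bot"
    - git config user.email "ci-bot@example.com"
    - git add configs/
    # '|| true' keeps the job green when nothing actually changed
    - git commit -m "Update config from ${CI_PROJECT_NAME} @ ${CI_COMMIT_SHORT_SHA}" || true
    - git push origin master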
Change the default CI file location to point to the CI file in your DevOps repo:
https://192.168.64.188/help/ci/pipelines/settings#custom-cicd-configuration-path
e.g. my/path/.my-custom-file.yml#mygroup/another-project
Then set up the pipeline to apply the config to K8s when the file is committed.
Personally, I use Argo CD to sync Helm charts to K8s; you can do it your own way.
Reading live configuration is normally not recommended, because a config change can cause errors.
When using K8s, it is better to create a ConfigMap and inject the config into environment variables,
then use the rollout mechanism to restart the app.
However, if you are using a ConfigMap volume,
the config file will update automatically when you change the ConfigMap:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically
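A minimal sketch of the ConfigMap-as-volume approach (all names are placeholders; note that the kubelet refresh is periodic rather than instant, and files mounted via subPath are not updated):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.json: |
    { "featureFlag": true }
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: example/app:1.0      # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app     # the app reads /etc/app/app.json
  volumes:
    - name: config
      configMap:
        name: app-config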

What is Staging directory in Kubernetes for?

I'm still new to Kubernetes and I'm wondering why, and what code, goes into the Kubernetes staging directory. There is a description in the README file there, but I'm not sure what it means.
As the README states:
This directory is the staging area for packages that have been split to their own repository. The content here will be periodically published to respective top-level k8s.io repositories.
Kubernetes uses a mono-repo to store all the code for its components. Some of this code also exists in separate repositories, e.g. the kube-proxy configuration API. The code for these packages is maintained in the staging directory and periodically synced to the external repositories.

Github strategy for Anthos config sync

I am setting up Anthos config sync for my GitHub repositories for application deployment.
I am looking for a git strategy for the following problem:
I build the Docker image when a pull request is merged into the Git repository.
The image version then has to be updated in the policy folder for Anthos Config Sync, where I have either Helm or Kustomize.
Currently, the Helm and Kustomize files live in the same repository as the source code, so whenever I push code a Docker build happens. How do I then update the Docker image version in the Helm or Kustomize files?
One reservation: I don't want to use the latest tag.
So does that mean:
Do I have to separate the Helm/Kustomize deployment files into a Git repository of their own, apart from the source code?
And do I have to use the GitHub API to commit the updated file to that repository?
If I use a custom pre-commit hook, there is still no guarantee it fires to propagate the Git commit hash into the Docker image tag and the Kubernetes manifests.
Do you have any suggestions for a Git strategy to handle this situation?
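For reference, the pattern described above is commonly implemented as a CI job in the source repo that commits an image-tag bump to the config repo after the build. A hedged sketch as a GitHub Actions workflow (repo names, paths, the image name, and the DEPLOY_REPO_TOKEN secret are all hypothetical):

name: bump-image-tag
on:
  push:
    branches: [main]
jobs:
  bump:
    runs-on: ubuntu-latest
    steps:
      # Check out the separate Anthos config repo (hypothetical name/token)
      - uses: actions/checkout@v4
        with:
          repository: your-org/anthos-config-repo
          token: ${{ secrets.DEPLOY_REPO_TOKEN }}
      - name: Pin the new image tag in the kustomization
        run: |
          cd overlays/prod   # hypothetical overlay path
          kustomize edit set image app=registry.example.com/app:${GITHUB_SHA::7}
      - name: Commit and push the bump
        run: |
          git config user.name "ci-bot"
          git config user.email "ci-bot@example.com"
          git commit -am "Bump app image to ${GITHUB_SHA::7}"
          git push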

How to manage software updates on docker-compose with one machine per user architecture?

We are deploying a Java backend and React UI application using docker-compose. Our Docker containers are running Java, Caddy, and Postgres.
What's unusual about this architecture is that we are not running the application as a cluster. Each user gets their own server with their own subdomain. Everything is working nicely, but we need a strategy for managing/updating machines as the number of users grows.
We can accept some down time in the middle of the night, so we don't need to have high availability.
We're just not sure what would be the best way to update software on all machines. And we are pretty new to Docker and have no experience with Kubernetes or Ansible, Chef, Puppet, etc. But we are quick to pick things up.
We expect to have hundreds to thousands of users. Each machine runs the same code but has environment variables that are unique to the user. Our original provisioning takes care of that, so we do not anticipate having to change those with software updates. But a solution that can also provide that ability would not be a bad thing.
So, the question is, when we make code changes and want to deploy the updated Java jar or the React application, what would be the best way to get those out there in an automated fashion?
Some things we have considered:
Docker Hub (concerns about rate limiting)
Deploying our own Docker repo
Kubernetes
Ansible
https://containrrr.dev/watchtower/
Other things that we probably need include GitHub Actions to build and update the Docker images.
We are open to ideas that are not listed here, because there is a lot we don't know about managing many machines running docker-compose. So please feel free to offer suggestions. Many thanks!
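For reference, the Watchtower option mentioned in the list above would look roughly like this in each machine's docker-compose file (a sketch; service names, the image, and the polling interval are placeholders):

services:
  app:
    image: registry.example.com/our-app:stable   # Watchtower re-pulls this tag when its digest changes
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Watchtower inspect and restart containers
    command: --interval 3600   # check the registry every hour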
In your case I advise you to use Kubernetes in combination with CD tools. One of them is Buddy. I think it is the best way to make such updates in an automated fashion. Of course you can use just Kubernetes, but with Buddy or other CD tools you will make it faster and easier. In my answer I am describing Buddy, but there are a lot of popular CD tools for automating workflows in Kubernetes, for example GitLab or CodeFresh.io - you should pick whichever is actually best for you. Take a look: CD-automation-tools-Kubernetes.
With Buddy you can avoid most of these steps when automating updates (executing the kubectl apply and kubectl set image commands) by doing a simple push to Git.
Every time you update your application code or Kubernetes configuration, you have two possibilities to update your cluster: kubectl apply or kubectl set image.
Such a workflow most often looks like this:
1. Edit the application code or the configuration .yml file
2. Push the changes to your Git repository
3. Build a new Docker image
4. Push the Docker image
5. Log in to your K8s cluster
6. Run kubectl apply or kubectl set image to apply the changes to the K8s cluster
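Done by hand, steps 5-6 boil down to one of two kubectl invocations; a sketch wrapped in a CI job (the deployment, container, and image names are placeholders):

deploy:
  stage: deploy
  script:
    # Apply the full manifest...
    - kubectl apply -f k8s/deployment.yml
    # ...or only bump the image on an existing Deployment:
    - kubectl set image deployment/my-app my-app=registry.example.com/my-app:v1.2.3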
Buddy is a CD tool that you can use to automate your whole K8s release workflows like:
managing Dockerfile updates
building Docker images and pushing them to the Docker registry
applying new images on your K8s cluster
managing configuration changes of a K8s Deployment
etc.
With Buddy you will have to configure just one pipeline.
With every change in your app code or the YAML config file, this tool will apply the deployment and Kubernetes will start transforming the containers to the desired state.
Pipeline configuration for running Kubernetes pods or jobs
Assume that we have an application on a K8s cluster and that its repository contains:
source code of our application
a Dockerfile with instructions on creating an image of your app
DB migration scripts
a Dockerfile with instructions on creating an image that will run the migration during the deployment (db migration runner)
In this case, we can configure a pipeline that will:
1. Build the application and migration images
2. Push them to the Docker Hub
3. Trigger the DB migration using the previously built image. We can define the image, commands, and deployment in a YAML file.
4. Use either Apply K8s Deployment or Set K8s Image to update the image in your K8s application.
You can adjust the above workflow to the properties of your environment and applications.
Buddy supports GitLab as a Git provider. Integration of the two tools is easy and only requires authorizing GitLab in your profile. Thanks to this integration you can create pipelines that will build, test, and deploy your app code to the server. But of course, if you are using GitLab, there is no need to set up Buddy as an extra tool, because GitLab is itself also a CD tool for automating workflows in Kubernetes.
You can find more information here: buddy-workflow-kubernetes.
Read also: automating-workflows-kubernetes.
As it turns out, we found that a paid Docker Hub plan addressed all of our needs. I appreciate the excellent information from @Malgorzata.

Automatically mirroring a Gitlab repo onto Github on push

I'm looking for a way to automatically mirror my Gitlab repos to Github, on push. I use Gitlab repos as my main repos, and would rather have to push to only one remote. But, I want my code to be browsable on Github also.
I found similar questions on StackOverflow, such as this one.
But the answers are always the same: one should add a custom post-receive git hook to the GitLab repo. This requires shell access to the server running GitLab. As I'm hosting a Community Edition GitLab for many users, and not only for myself, they can't have easy access to a shell (and this isn't the most user-friendly way to do it anyway), so it does not fit my needs.
I thought about two ways to implement it:
Either a MirrorOnPush project service, implementing such a git hook in Ruby, as the EmailOnPush project service currently does.
Or use a custom server to clone and push the repo, using a webhook.
The first one seems the cleaner to me, but I can't find any docs about GitLab project services and code structure… On the other hand, the second is a bad and ugly hack, but is almost straightforward.
I'd rather implement a project service to handle it. Do you have any docs or leads on how to write a project service for GitLab (without having to read all the GitLab source code, as there seems to be no dev doc…)?
Thanks!
one should add a custom post-receive git hook to the gitlab repo.
Actually, that was the best solution up until GitLab 7.x, as I detailed in "Gitlab repository mirroring".
A true project service for repo mirroring has been requested, but not voted up enough: suggestion 4614663.
The main documentation remains:
the app models project services folder,
the spec models project services folder,
the doc/project_services,
the project services scenarios.
This isn't much, as the OP noted before.
That leaves you with the hack approach.
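For what it's worth, newer GitLab versions ship built-in push mirroring in the repository settings, which supersedes both approaches. Failing that, the hack approach can be expressed as a CI job that mirrors on push (a sketch; the GitHub path and the GITHUB_TOKEN variable are hypothetical):

mirror-to-github:
  only:
    - master
  script:
    - git clone --mirror "${CI_REPOSITORY_URL}" mirror.git   # bare clone with all refs
    - cd mirror.git
    # Push only branches and tags, to avoid sending GitLab-internal refs (e.g. refs/merge-requests):
    - git push --prune "https://${GITHUB_TOKEN}@github.com/your-user/your-repo.git" "+refs/heads/*:refs/heads/*" "+refs/tags/*:refs/tags/*"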