I'm working on a tinker Kubernetes-based CI system where each build gets launched as a Job. Much like Drone CI, each step in the build is a separate container; in my case, each step runs as a container within a single Job pod. Here's the behavior I'm after:
A build volume is created. All steps will mount this. A Job is fired off with all of the steps defined as separate containers, in order of desired execution.
The git step (container) runs, mounting the shared volume and cloning the sources.
The 'run tests' step mounts the shared volume to a container spawned from an image with all of the dependencies pre-installed.
If our tests pass, we proceed to the Slack announcement step, which is another container that announces our success.
I'm currently using a single Job pod with an emptyDir Volume for the shared build space. I did this so that we don't have to wait while a volume gets shuffled around between nodes/Pods. This also seemed like a nice way to ensure that things get cleaned up automatically at build exit.
The problem is that if I fire up a multi-container Job with all of the above steps, they all execute at the same time, meaning the 'run tests' step could fire before the 'git' step.
I've thought about coming up with some kind of logic in each of these containers to sleep until a certain unlock/"I'm done!" file appears in the shared volume, signifying the dependency step(s) are done, but this seems complicated enough to ask about alternatives before proceeding.
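For reference, this is roughly the sentinel-file gate I have in mind. It's only a sketch: it assumes each step image has Python available, that the shared volume is mounted at /build, and the file names are made up for illustration.

    import os
    import sys
    import time

    def wait_for_step(step_name, build_dir="/build", timeout=600, poll=2):
        """Block until the previous step drops its 'I'm done!' sentinel file."""
        sentinel = os.path.join(build_dir, ".done-%s" % step_name)
        deadline = time.time() + timeout
        while time.time() < deadline:
            if os.path.exists(sentinel):
                return
            time.sleep(poll)
        sys.exit("timed out waiting for step '%s'" % step_name)

    def mark_done(step_name, build_dir="/build"):
        """Called by a step when it finishes, so the next container may start."""
        open(os.path.join(build_dir, ".done-%s" % step_name), "w").close()

    # e.g. in the 'run tests' container:
    #   wait_for_step("git"); run_tests(); mark_done("tests")

Every step would need to be wrapped this way, plus failure handling, which is the complexity I'd like to avoid.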
I could see giving in and using multiple Jobs with a coordinating Job, but then I'm stuck getting into Volume Claim territory (which is a lot more complicated than emptyDir).
To sum up the question:
Is my current approach worth pursuing, and if so, how do I sequence the Job's containers? I'm hoping to come up with something that will work on AWS/GCE and bare metal deployments.
I'm hesitant to touch PVCs, since the management and cleanup bit is not something I want my system to be responsible for. I'm also not wanting to require networked storage when emptyDir could work so well.
Edit: Please don't suggest using another existing CI system, as this isn't helpful. I am doing this for my own gratification and experimentation. This tinker CI system is unlikely to ever be anything but my toy.
If you want all the build steps to run in containers, GitLab CI or Concourse CI would probably be a much better fit for you. I don't have experience with fabric8.io, but Frank.Germain suggests that it will also work.
Once you start getting complex enough that you need signaling between containers to order the build steps, it becomes much more productive to use something pre-rolled.
As an option you could use a static volume (i.e. a host path) as an artifact cache, and trigger the next container in sequence from the current container, mounting the same volume between the stages. You could then just add a step to the beginning or end of the build to 'clean up' after your pipeline has run.
To be clear: I don't think that rolling your own CI is the most effective way to handle this, as there are already systems in place that will do exactly what you are looking for.
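That said, if you keep the single-Job, emptyDir layout from the question, Kubernetes does have a built-in ordering primitive for the "clone before test" part: init containers run strictly one after another, and each must succeed before the next (and before the main containers) starts. Below is a rough sketch using the Python Kubernetes client; the image names, repo URL, commands and namespace are placeholders, not a tested pipeline.

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    shared = client.V1VolumeMount(name="build", mount_path="/build")

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="build-42"),
        spec=client.V1JobSpec(
            backoff_limit=0,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    volumes=[client.V1Volume(
                        name="build",
                        empty_dir=client.V1EmptyDirVolumeSource())],
                    # init containers run strictly in order, sharing the emptyDir
                    init_containers=[
                        client.V1Container(
                            name="git", image="alpine/git",
                            args=["clone", "https://example.com/repo.git", "/build"],
                            volume_mounts=[shared]),
                        client.V1Container(
                            name="tests", image="my-test-image",
                            command=["./run-tests.sh"],
                            volume_mounts=[shared]),
                    ],
                    # runs only if every init container succeeded
                    containers=[client.V1Container(
                        name="announce", image="my-slack-notifier",
                        volume_mounts=[shared])],
                ))))

    client.BatchV1Api().create_namespaced_job(namespace="ci", body=job)

One design caveat: because the Slack step only runs after all init containers succeed, a failing test simply fails the Job; if you also want to announce failures, that step has to live outside the Job (or you're back to signaling logic).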
I have a cluster in Google Kubernetes Engine, and in that cluster there is a workload that runs every 4 hours; it's a cron job that was set up by someone. I want to make it run whenever I need it. I am trying to achieve this by using the Kubernetes API, sending a request from my app whenever a button is clicked to run that cron job, but unfortunately the API has no apparent way to do that, or does not have a way at all. What would be some good advice to achieve my goal?
This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.
The CronJob resource in Kubernetes is not meant for one-off tasks that are run on demand; it is configured to run on a regular schedule.
Manuel Polacek has already mentioned that in his comment:
"For this scenario you don't need a cron job. A simple bare pod or a job would be enough, i would say. You can apply a resource on button push, for example with kubectl" – Manuel Polacek, Apr 24 at 19:25
So rather than trying to find a way to run your CronJob on demand, regardless of how it is originally scheduled (usually to repeat at regular intervals), you should copy the workload that CronJob runs and find a different way of running it. A Job fits this use case ideally, as it is designed to run one-off tasks.
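If the existing CronJob already does exactly what you want and you just need to fire it on demand, you can also clone its job template into a one-off Job. From the command line that is kubectl create job my-manual-run --from=cronjob/my-cronjob; from an app, a sketch with the Python Kubernetes client looks like the following (it assumes the CronJob is served under batch/v1, and the names are placeholders).

    import time
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    batch = client.BatchV1Api()

    # Read the existing CronJob and reuse its job template for a one-off run.
    cron = batch.read_namespaced_cron_job(name="my-cronjob", namespace="default")

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(
            name="my-cronjob-manual-%d" % int(time.time())),  # unique name per click
        spec=cron.spec.job_template.spec,
    )

    batch.create_namespaced_job(namespace="default", body=job)

Whatever identity your app uses needs RBAC permission to read CronJobs and create Jobs in that namespace.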
I'm thinking of a solution to do a rolling update on a schedule without really releasing anything. I was thinking of an ENV variable change through kubectl patch to kick off the update in GKE.
The context is that we have containers that don't do garbage collection, and the temporary fix and easiest path forward is cycling out pods frequently, which we could schedule nightly.
Our naive approach would be to schedule this through our build pipeline, but it seems like there are a lot of moving parts.
I haven't looked at Cloud Functions, but I'm sure there's an API that could do this and I'm leaning towards automating this with Cloud Functions.
Or is there already a GKE solution to do this?
Any other approaches to this problem?
I know AWS EC2 has this feature for ASGs; I was looking for the same thing so I don't have to do a hacky ENV var change on the manifest.
I can think of two possibilities:
CronJobs. You can use CronJobs to run tasks at a specific time or interval. Suggested for automated tasks such as backups, reporting, sending emails, or cleanup tasks. There's a sketch of this option after the list.
CI/CD with Cloud Build. When you push a change to your repository, Cloud Build automatically builds and deploys the container to your GKE cluster.
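To make the first option concrete: kubectl rollout restart deployment/<name> cycles all pods of a Deployment without any manifest change, and under the hood it just patches a pod-template annotation. A nightly CronJob in the cluster (or a Cloud Function triggered by Cloud Scheduler) can do the same thing with a small script. A sketch with the Python Kubernetes client follows; the deployment name and namespace are placeholders.

    from datetime import datetime, timezone
    from kubernetes import client, config

    config.load_incluster_config()  # e.g. when running as a CronJob in the cluster

    # Equivalent of `kubectl rollout restart`: bump a pod-template annotation so the
    # Deployment controller rolls out fresh pods running the same image.
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "kubectl.kubernetes.io/restartedAt":
                            datetime.now(timezone.utc).isoformat()
                    }
                }
            }
        }
    }

    client.AppsV1Api().patch_namespaced_deployment(
        name="my-app", namespace="default", body=patch)

The ServiceAccount running this needs RBAC permission to patch Deployments in that namespace.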
I am currently running a Flink session cluster (Kubernetes, 1 JobManager, 1 TaskManager, Zookeeper, S3) in which multiple jobs run.
As we are working on adding more jobs, we are looking to improve our deployment and cluster management strategies. We are considering migrating to job clusters; however, there are reservations about the number of containers that would be spawned. One container per job is not an issue, but two containers (1 JM and 1 TM) per job raise concerns about memory consumption. Several of the jobs need high availability and the ability to use checkpoints and restore from/take savepoints, as they aggregate events over a window.
From my reading of the documentation and spending time on Google, I haven't found anything that seems to state whether or not what is being considered is really possible.
Is it possible to do any of these three things:
run both the JobManager and TaskManager as separate processes in the same container and have that serve as the Flink cluster, or
run the JobManager and TaskManager as literally the same process, or
run the job as a standalone JAR with the ability to recover from/take checkpoints and the ability to take a savepoint and restore from that savepoint?
(If anyone has any better ideas, I'm all ears.)
One of the responsibilities of the job manager is to monitor the task manager(s), and initiate restarts when failures have occurred. That works nicely in containerized environments when the JM and TMs are in separate containers; otherwise it seems like you're asking for trouble. Keeping the TMs separate also makes sense if you are ever going to scale up, though that may be moot in your case.
What might be workable, though, would be to run the job using a LocalExecutionEnvironment (so that everything is in one process -- this is sometimes called a Flink minicluster). This path strikes me as feasible, if you're willing to work at it, but I can't recommend it. You'll have to somehow keep track of the checkpoints, and arrange for the container to be restarted from a checkpoint when things fail. And there are other things that may not work very well -- see this question for details. The LocalExecutionEnvironment wasn't designed with production deployments in mind.
What I'd suggest you explore instead is to see how far you can go toward making the standard, separate container solution affordable. For starters, you should be able to run the JM with minimal resources, since it doesn't have much to do.
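"Minimal resources" here is just small requests/limits on the JM container in whatever manifest you use. The numbers below are illustrative only, not a tuned recommendation; sketched with the Python Kubernetes client for consistency with the other examples.

    from kubernetes import client

    # Illustrative only: a JobManager container with small resource requests/limits.
    jobmanager = client.V1Container(
        name="jobmanager",
        image="flink:latest",  # pin a real version in practice
        args=["jobmanager"],
        resources=client.V1ResourceRequirements(
            requests={"cpu": "250m", "memory": "1Gi"},
            limits={"cpu": "500m", "memory": "1200Mi"},
        ),
    )

You'd still want to align these numbers with the JM's own memory configuration in flink-conf.yaml.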
Check this operator, which automates the lifecycle of deploying and managing Flink in Kubernetes. The project is in beta, but you can still get some idea of how to do it, or use the operator directly if it fits your requirements. Here the JobManager and TaskManager are separate Kubernetes deployments.
I have a python app that builds a dataset for a machine learning task on GCP.
Currently I have to start one of our VM instances, SSH in, and run the app, which completes in 2-24 hours depending on the size of the dataset requested.
Once the dataset is complete the VM needs to be shutdown so we don't incur additional charges.
I am looking to streamline this process as much as possible, so that we have a "1 click" or "1 command" solution, but I'm not sure the best way to go about it.
From what I've read about so far it seems like containers might be a good way to go, but I'm inexperienced with docker.
Can I set up a container that will pip install the latest app from our private GitHub and execute the dataset build before shutting down? How would I pass information to the container, such as where to get the config file etc.? It's conceivable that we will have multiple datasets being generated at the same time based on different config files.
Is there a better gcloud feature that suits our purpose more effectively than containers?
I'm struggling to get information regarding these basic questions, it seems like container tutorials are dominated by web apps.
It would be useful to have a batch-like container service that runs a container until its process completes. I'm unsure whether such a service exists. I'm most familiar with Google Cloud Platform and this provides a wealth of compute and container services. However -- to your point -- these predominantly scale by (HTTP) requests.
One possibility may be Cloud Run, triggering jobs using Cloud Pub/Sub. I see there are async capabilities too, which may be interesting (I've not explored them).
Another runtime for you to consider is Kubernetes itself. Kubernetes involves some overhead in having Google, AWS or Azure manage a cluster for you (I strongly recommend you don't run Kubernetes yourself) and some inertia in matching the capacity of the cluster's nodes to the needs of your jobs, but as you scale the number of jobs, these costs smooth out. A big advantage of Kubernetes is that it will scale (nodes|pods) as you need them. You tell Kubernetes to run X container jobs, and it does so (and cleans up) without much additional management on your part.
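To make that concrete for your dataset builds, each run could be a Kubernetes Job that pulls your builder image, gets its configuration from environment variables, and is garbage-collected after it finishes (ttlSecondsAfterFinished). A sketch follows; the image, bucket and names are placeholders.

    from kubernetes import client, config

    config.load_kube_config()

    job_manifest = {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": "dataset-build-001"},
        "spec": {
            # delete the Job (and its pod) automatically after it has finished
            "ttlSecondsAfterFinished": 3600,
            "backoffLimit": 0,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "builder",
                        "image": "gcr.io/my-project/dataset-builder:latest",
                        "env": [
                            {"name": "CONFIG_URI",
                             "value": "gs://my-bucket/configs/run-001.yaml"},
                        ],
                    }],
                }
            },
        },
    }

    client.BatchV1Api().create_namespaced_job(namespace="default", body=job_manifest)

Running several datasets at once is then just several Jobs with different names and CONFIG_URI values.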
I'm biased and approach the container-vs-image question mostly from a default of container-first. In this case, you'd get several benefits from containerizing your solution:
reproducible: the same image is more likely to produce the same results
deployable: you run a container instead of managing an OS and app stack and testing for consistency, etc.
maintainable: a smaller image representing just your app is less work to maintain
One (beneficial!?) workflow change, if you choose to use containers, is that you will need to build your images before using them. Something like Knative combines these steps, but I'd stick with doing this yourself initially. A common solution is to trigger builds (Docker, GitHub Actions, Cloud Build) from your source code repo. Commonly you would run tests against the images that are built, but you may also run your machine-learning tasks this way too.
Your containers would contain only your code. When you build your container images, you would pip install, perhaps pip install --requirement requirements.txt, to pull in the appropriate packages. Your data (models?) is better kept separate from your code where this makes sense. When your runtime platform runs containers for you, you provide configuration information (environment variables and|or flags) to the container.
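On the application side, that configuration hand-off can be as simple as reading a flag or an environment variable at start-up. A minimal sketch (the variable and flag names here are made up):

    import argparse
    import os

    def main():
        parser = argparse.ArgumentParser(description="dataset builder")
        # the flag wins if given, otherwise fall back to the env var set on the container
        parser.add_argument("--config-uri",
                            default=os.environ.get("CONFIG_URI"),
                            help="where to fetch the build config from")
        args = parser.parse_args()
        if not args.config_uri:
            raise SystemExit("no config given: pass --config-uri or set CONFIG_URI")
        print("building dataset from", args.config_uri)
        # ... fetch the config, build the dataset ...

    if __name__ == "__main__":
        main()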
A startup script seems to fit the bill better than containers here. The instance always executes startup scripts as root, so you can do anything you like in them.
A startup script will perform automated tasks every time your instance boots up. Startup scripts can perform many actions, such as installing software, performing updates, turning on services, and any other tasks defined in the script.
Keep in mind that a startup script cannot stop an instance but you can stop an instance through the guest operating system.
This would be the ideal solution for the question you posed. It would require a small change in your Python app so that it shuts down the operating system once the dataset is complete.
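That small change can literally be the last line of the build: once the dataset is written, power off the guest OS, which stops the Compute Engine instance and the compute billing. A sketch (it assumes the app runs with permission to shut the machine down, which is the case when it's launched from a startup script running as root):

    import subprocess

    def build_dataset():
        # ... your existing dataset build ...
        pass

    if __name__ == "__main__":
        try:
            build_dataset()
        finally:
            # Powering off the guest OS stops the Compute Engine instance,
            # even if the build raised an exception.
            subprocess.run(["shutdown", "-h", "now"], check=False)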
Q1) Can I set up a container that will pip install the latest app from our private GitHub and execute the dataset build before shutting down?
A1) Medium has a great article on installing a package from a private git repo inside a container. You can execute the dataset build before shutting down.
Q2) How would I pass information to the container such as where to get the config file etc?
A2) You can use ENV to set an environment variable. These will be available within the container.
You may consider looking into Docker for more information about containers.
I'm comparing salt-cloud and terraform as tools to manage our infrastructure at GCE. We use salt stack to manage VM configurations, so I would naturally prefer to use salt-cloud as an integral part of the stack and phase out terraform as a legacy thing.
However, my use case is critical on VM deployment time: we offer a PaaS solution with VMs deployed on customer request, so I need to deliver ready VMs at the click of a button within seconds.
And what puzzles me is why salt-cloud takes so long to deploy basic machines.
I created a simple head-to-head test deploying three VMs based on the default CentOS7 image using both terraform and salt-cloud (both in parallel mode). The time difference is stunning: where terraform needs around 30 seconds to deploy the requested machines (similar to the time needed to deploy through the GCE GUI), salt-cloud takes around 220 seconds to deploy exactly the same machines under the same account in the same zone. Especially strange is that for the first 130 seconds salt-cloud does not start deploying and does seemingly nothing at all; only after around 130 seconds does it report that it is deploying VMs, and those VMs appear in the GUI as being deployed.
Is there something obvious that I'm missing about salt-cloud that makes it so slow? Can it be sped up somehow?
I would prefer to use the full Salt stack, but with the current speed issues it has, I cannot really afford that.
Note that this answer is speculation based on what I understand about terraform and salt-cloud; I haven't verified it with an experiment!
I think the reason is that Terraform keeps state of the previous run (either locally or remotely), while salt-cloud doesn't keep state and so queries the cloud before actually provisioning anything.
One of these two approaches (keeping state, or querying before doing something) is needed, since both tools are idempotent (you can run them multiple times safely).
For example, I think that if you remove Terraform's state file and re-run it, it will assume there is nothing in the cloud and will actually instantiate a duplicate. This is not to imply that terraform does it wrong; it is to show that state is important, and the Terraform docs say clearly that when operating in a team the state should be saved remotely, exactly to avoid this kind of problem.
Following my line of thought, this should also mean that if you either run salt-cloud in verbose debug mode or look at the network traffic it generates, in the first 130 secs you mention (before it says "deploying VMs") you should see queries from salt-cloud to the cloud provider to dynamically construct the state.
One last point: the fact that salt-cloud doesn't store the state of a previous run doesn't mean that it is automatically safe to use in a team environment. It is safe to use as long as no two team members run it at the same time. Terraform with remote state on Consul, on the other hand, supports locking, so concurrent team usage will always be safe.