Launching multiple lxc-net instances using a single upstart script - upstart

I would like to create two or more lxc bridges (e.g. lxcbrA, lxcbrB) using the same lxc-net.conf, launched through start lxc-net.
I am aware of this solution, but I would rather use a single upstart script, so I need the lxc-net service to support multiple instances.
I have read Serge Hallyn's comment stating that lxc-net did not support instances a year ago.
Could someone tell me whether this is still the case?

Related

How to automate macOS user interaction tests?

At work I am setting up a product that, among other things, sets up and manages security policies on macOS systems. Unfortunately, the product's documentation does not say exactly which OS mechanism is used to apply and locally manage the policies, but I don't think that knowledge is essential for my question.
I am looking for a way to test the policy itself. Currently I have to log in to the test system manually and call various apps and services by hand to check whether the policy blocks or allows the correct actions. Are there any tools/libraries in the Mac world to automate this task?
For GUI testing, a quick Google search turned up this library: https://github.com/google/EarlGrey/tree/earlgrey2. But I don't know whether it is suitable for testing arbitrary apps/services in the sense of my use case. For example, would I have to find all the window IDs by hand before I can write the test? Can I use it in my scenario at all?
Are there any other Swift/Objective-C libraries for this kind of test? Or maybe even some in Ruby?
It would be ideal if this solution could also be integrated into a CI/CD pipeline.
Thanks a lot for your help!
You might be able to make your own set of test scripts based on some existing helper tools and scripts (potentially many different ones).
Some pointers are:
AppleScript - it allows automating GUI apps among other things
Automator and analogs
Alfred
For CI: if you can wrap your manual workflow in a shell script that produces a well-defined output (an expected screenshot or a text file), then that could be the base for your test suite. The test suite itself can be coded in any language that has access to the shell (Ruby, Python, etc., including bash/zsh itself).
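As a concrete starting point, here is a minimal sketch of such a wrapper in Python; the AppleScript file name and the expected-output path are hypothetical placeholders for your own workflow:

    # ci_policy_check.py -- a sketch of a CI wrapper for a manual macOS
    # workflow. "check_policy.applescript" and "expected/check_policy.txt"
    # are placeholder names for your own workflow script and golden output.
    import subprocess
    import sys
    from pathlib import Path

    result = subprocess.run(
        ["osascript", "check_policy.applescript"],  # osascript ships with macOS
        capture_output=True,
        text=True,
        timeout=120,  # fail the build instead of hanging on a stuck dialog
    )

    expected = Path("expected/check_policy.txt").read_text()
    if result.returncode != 0 or result.stdout != expected:
        print("Policy check FAILED:\n" + result.stdout + result.stderr, file=sys.stderr)
        sys.exit(1)
    print("Policy check passed")

A nonzero exit code is all most CI systems (Jenkins, GitHub Actions, etc.) need to mark the job red.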

How can I compactly store a shared configuration with Kubernetes Kustomize?

First, I'm not sure this question is specific enough for Stack Overflow. Happy to remove or revise if someone has any suggestions.
We use Kubernetes to orchestrate our server side code, and have recently begun using Kustomize to modularize the code.
Most of our backend services fit nicely into that data model. For our main transactional system we have a base configuration that we overlay with tweaks for our development, staging, and different production flavors. This works really well and has helped us clean things up a ton.
We also use TensorFlow Serving to deploy machine learning models, each of which is trained and, at this point, deployed separately for each of our many clients. The only ways these configurations differ are the name and metadata annotations (e.g., we might have one called classifier-acme and another called classifier-bigcorp) and the bundle of weights pulled from our blob storage (e.g., one would pull from storage://models/acme/classifier and another from storage://models/bigcorp/classifier). We also assign different namespaces to segregate development, production, etc.
From what I understand of the Kustomize system, we would need a different base and set of overlays for every one of our customers if we wanted to encode the entire state of our current cluster in Kustomize files. That seems like a huge number of directories, as we have many customers. If we have 100 customers and five different deployment environments, that's 500 directories with a kustomization.yaml file.
Is there a tool or technique to encode this repetition with Kustomize? Or is there another tool that would help us generate Kubernetes configurations in a more systematic and compact way?
You can have more complex overlay structures than just a straight matrix approach. For example, for one app you could have apps/foo-base, then apps/foo-dev and apps/foo-prod which both list ../foo-base in their bases, and those in turn are pulled in by overlays/us-prod, overlays/eu-prod, and so on.
But if every combo of customer and environment really does need its own setting then you might indeed end up with a lot of overlays.
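If you do end up there, the overlays are uniform enough to be generated rather than hand-written. Below is a minimal sketch of such a generator in Python; the directory layout, customer list, and kustomization fields are assumptions based on the question, not a known-good project structure:

    # generate_overlays.py -- a sketch that writes one kustomization.yaml per
    # customer/environment pair instead of maintaining 500 files by hand.
    # The directory layout and field values are illustrative assumptions.
    from pathlib import Path

    CUSTOMERS = ["acme", "bigcorp"]               # in practice, load the real list
    ENVIRONMENTS = ["development", "production"]  # ...and the real environments

    TEMPLATE = """\
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: {env}
    nameSuffix: -{customer}
    resources:
    - ../../../base
    commonAnnotations:
      model-path: storage://models/{customer}/classifier
    """

    for customer in CUSTOMERS:
        for env in ENVIRONMENTS:
            overlay = Path("overlays") / customer / env
            overlay.mkdir(parents=True, exist_ok=True)
            (overlay / "kustomization.yaml").write_text(
                TEMPLATE.format(customer=customer, env=env)
            )

Checking in the generator (or running it at build time) keeps the repetition in a single place, while Kustomize still sees plain, ordinary overlay directories.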

What is the scope of learning kubernetes? [closed]

I came across the word "Kubernetes" recently while searching for online courses. I understand that if I learn Kubernetes, I will learn about containers, container orchestration, and how easily microservices can be scaled. But I wanted to know: after learning Kubernetes, is there anything else I should learn to become an expert in that line?
My question is more about which stream I could pursue by learning this, in the same way that learning Python or R helps you become a data analyst or move into another data-related stream.
I am very new to this and would really appreciate your help in understanding it.
Thanks in advance.
The main prerequisite for Kubernetes is Docker. Once you learn Docker, you learn how to package environments into containers and deploy them. Once you've learnt how to build Docker images, you need to 'orchestrate' them. What does that mean?
It means that if you have a bunch of microservices (in the form of containers), you can spin up multiple machines and tell Kubernetes which image/container goes where. You orchestrate your app using Docker images (packaged environments), with Kubernetes as the underlying resource provider that runs the containers and controls when they are spun up or killed.
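To make "tell Kubernetes which image goes where" concrete, here is a minimal sketch using the official kubernetes Python client; the deployment name, image, and replica count are arbitrary examples, and a reachable cluster with a local kubeconfig is assumed:

    # Create a Deployment: 3 replicas of one container image, placed and
    # restarted by Kubernetes. Names and image are illustrative only.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # scaling is just changing this number
            selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)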
Assuming you don't have a massive cluster on-prem (or at home), Kubernetes on a single personal computer is rather useless. You would need to learn a cloud platform (or invest in a server) to utilise Kubernetes efficiently.
Once you learn this, you would possibly need to find a way for your containers to communicate with one another. In my opinion, the two most important things any amateur programmer needs to know are:
Message brokers
REST
Message brokers: Kafka, RabbitMQ (personal fave), Google Pub/Sub, etc.
REST: Basically sending/receiving data via HTTP requests.
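As a toy sketch of both styles in Python (the hostnames, queue name, and payload are made up; the requests and pika libraries are assumed):

    # REST: call another service synchronously over HTTP (URL is hypothetical).
    import requests

    resp = requests.post(
        "http://orders-service:8080/orders",
        json={"sku": "abc-123", "qty": 2},
    )
    resp.raise_for_status()
    print(resp.json())

    # Message broker: publish the same event asynchronously via RabbitMQ.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
    channel = conn.channel()
    channel.queue_declare(queue="orders")
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=b'{"sku": "abc-123", "qty": 2}',
    )
    conn.close()

The practical difference: the REST call blocks until the other service answers, while the broker decouples sender and receiver in time.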
Once all of this is done, you'll have learnt how to build images, orchestrate them, have them communicate with one another, and use resources from other machines (utilizing the cloud or on-prem servers).
There are many other uses for Kubernetes, but in my opinion, this should be enough to entice you to learn this key-skill.
Kubernetes and Docker are the future, because they remove the need to worry about environments. If you have a Docker image, you can run that image on Mac, Linux, Windows, or basically any machine with a hypervisor. This increases portability and decreases the overhead of setting up environments each time. It also allows you to spin up 1 or 100 or 1,000 or 10,000 containers (excellent for scalability!).
Yes, if you are looking to explore fully, the security aspect is also something you can learn, and it's in demand these days: various clients want security leaks checked at the level of containers, the container registry, and even at the level of Kubernetes itself.
You can move into DevSecOps with a couple of certifications.
As for your second question, I can't point to a single stream, because with Kubernetes you just deploy containers, and those containers can run almost anything; you could even deploy some Python code there that collects data from sensors and does some computations.
Please comment if you have a more specific question.

Should actors/services be split into multiple projects?

I'm testing out Azure Service Fabric and have started adding a lot of actors and services to the same project. Is this okay, or will I lose any Service Fabric features such as failover, scalability, etc.?
My preference here is clearly 1 actor/1 service = 1 project. The big win with a platform like this is that it allows you to write proper microservice-oriented applications at close to no cost, at least compared to the implementation overhead you have when doing similar implementations on other, somewhat similar platforms.
I think it defies the point of an architecture like this to build services or actors that span multiple concerns. It makes sense (to me at least) to use these imaginary constraints to force you to keep the area of responsibility of these services as small as possible - and rather depend on/call other services in order to provide functionality outside of the responsibility of the project you are currently implementing.
Regarding scaling, it seems you'll still be able to scale your services/actors independently even though they are part of the same project; at least, that's what the application manifest format implies. What you will not be able to do, though, is update services/actors within your project independently. For example, if your project has two different actors and you make a change to one of them, you will still need to deploy an update to both, since they are part of the same code package and share a version number.

VMware Converter Automation

I am looking for a way to automate the conversion of True Image backup (.tib) files to virtual machines. I have been looking around VMware's offerings for a way to implement this, but I am left really unsure of what I would need.
The reason is that I have a bunch of these files that I would like to check for validity. I would like an automated way to grab these files, convert them, start them up, and then shut them down. I currently have everything else set up; I am just at a loss for a way to actually convert the files.
The two closest solutions that I have found so far are these:
VMware vSphere Essentials Kit
From my understanding (please correct me if I am wrong), I would be able to use PowerCLI to automate converting these .tib files to VMs, but the catch is that I would need a VMware server or an ESXi host to do so. This kit seems to provide the ESXi host that would be needed.
VMware vCenter Converter
Again, from my understanding (please correct me if I am wrong), I would be able to use this to convert the files to VMs. I have downloaded it before just to use the GUI, and it seems to work up until I get to the point where it asks for the name of the server I would like to use. I see there is a "Buy Now" option on the webpage, but it just takes me to (what seems like) a products page; I figured that option would be the solution to my server issue. My plan would then be to use the API to automate what I need.
My official questions are:
I must not be the first person to want an automated way to convert these files over for testing. Does anyone have any ideas/past experiences to share?
Has anyone used either of these options before?
Am I correct in thinking that I need a VMware server or an ESXi host to convert these files?
Would I be able to use the API method that I mentioned above?
VMware vCenter Converter has its own SOAP/web-services-based API.
The SDK can be found at https://www.vmware.com/support/developer/converter-sdk/
So you can automate basically any task, using any programming language that can work with web services.
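For example, from Python you could drive it with a generic SOAP client such as zeep; the WSDL URL below is a placeholder (a real Converter installation exposes its own), and the actual operation names must come from the SDK documentation:

    # A sketch of pointing a generic SOAP client at the Converter SDK's WSDL.
    # The URL is a placeholder for wherever your Converter server exposes it.
    from zeep import Client

    client = Client("https://converter.example.com/sdk/converter.wsdl")
    client.wsdl.dump()  # print the services, operations, and types it defines

    # Each WSDL operation then becomes an ordinary Python call, e.g.
    # client.service.SomeOperation(...), with real names taken from the SDK docs.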