How to test autodiscovery with just one computer?

I'm trying to write a Kotlin server with autodiscovery, but I only have one computer to develop on. My server binds to a fixed port, and I can't figure out how to test the discovery behaviour on a single machine. Thanks for your help!

JVMs
You may be able to run several copies of the app in separate JVMs, but each copy would have to listen on a different port (see the sketch after the Docker option below).
VMs
Full virtual machines are slow and heavyweight, but they are an option.
Docker
Using Docker (and optionally Compose) you can run multiple copies of the app on the same port, with less overhead than full VMs, because each container gets its own network namespace.
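If the discovery protocol is UDP multicast, the per-copy port problem only affects the service port, not the discovery port: with SO_REUSEADDR, several JVMs on one machine can join the same group. A minimal Kotlin sketch (the group address, discovery port, and message format here are all made up):

import java.net.DatagramPacket
import java.net.InetAddress
import java.net.InetSocketAddress
import java.net.MulticastSocket

// Hypothetical responder: each copy answers discovery probes with the port
// its real service listens on, so several copies can share one machine.
fun main(args: Array<String>) {
    val servicePort = args.getOrNull(0)?.toInt() ?: 8080  // differs per copy
    val group = InetAddress.getByName("239.255.42.99")    // example multicast group
    val socket = MulticastSocket(null).apply {
        reuseAddress = true            // lets every JVM copy bind the same discovery port
        bind(InetSocketAddress(4445))  // shared, fixed discovery port
        joinGroup(InetSocketAddress(group, 4445), null)
    }
    val buf = ByteArray(256)
    while (true) {
        val probe = DatagramPacket(buf, buf.size)
        socket.receive(probe)          // block until a client probes the group
        val reply = "service-at:$servicePort".toByteArray()
        socket.send(DatagramPacket(reply, reply.size, probe.address, probe.port))
    }
}

Start two copies from separate terminals with different service ports (say 8080 and 8081) and your discovery client should see both answers.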

Related

Launching desktop applications in containers and making them accessible like services using Kubernetes and Xorg

In a school project, we've been asked to solve the following problem:
Our students use desktop applications for their studies, such as Packet Tracer, Eclipse IDE, QGIS, etc., and they run into performance problems on their PCs or version incompatibilities when sharing project files.
The idea is that students should not have to worry about installing these apps on their PCs. Instead, they connect to the school server and the app opens directly on their PC: the app actually runs inside a container on the server, but its UI is forwarded to the student's PC using X forwarding (Xorg).
The server will run Kubernetes to manage the containers hosting these desktop apps and expose them as services, much like RDP sessions, except that here a container runs the app.
Can we manage to do this? If so, what would the architecture be, and how do we keep the connection between a PC and the same container running the application?
To explain the challenge further: in a regular Kubernetes use case, like microservices, you expose containerized applications through Services.
It doesn't matter which container handles your request; the load balancer usually uses round-robin to choose the container it forwards the request to.
In our case, however, a user must stay connected to the same container, much like an RDP session, because that container is running a desktop app like Eclipse.
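For reference, stock Kubernetes can approximate this kind of stickiness with session affinity, which pins repeat requests from one client IP to one pod. A minimal sketch (the Service name, label, and port are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: desktop-app          # hypothetical name
spec:
  selector:
    app: desktop-app         # hypothetical pod label
  ports:
    - port: 6000             # example port for the forwarded session
  sessionAffinity: ClientIP  # keep each client IP on the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # affinity window (the default is 3 hours)

Note that client-IP affinity only approximates per-user stickiness (users behind one NAT share an IP), so a dedicated pod or Service per user may be a better fit for long-lived desktop sessions.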

Is it possible to deploy a cluster with one service running locally?

Say you have 3 or more services that communicate with each other constantly. If they are all deployed remotely to the same cluster, all is good, because they can see each other.
However, I was wondering how I could deploy one of them locally, using minikube for instance, in a way that they are still able to talk to each other.
I am aware that I can port-forward the other two so that the locally deployed one can call them, but I am not sure how to make the other two able to call the local one.
TL;DR Yes, it is possible but not recommended; it is difficult and comes with a security risk.
Charlie put it very well in the comments and is absolutely right:
Your local service will not be discoverable by a remote service unless you have a direct IP. One other way is to establish an RTC or WebSocket connection between your local and remote services using an external server.
As you can see, it is possible, but not recommended. Generally, both containerization and Kubernetes tend to isolate environments. If you nevertheless want services in completely different clusters on different machines to communicate, you have to configure the appropriate network connections over the public internet, which also carries a security risk.
If you want to set up the environment locally, it is a much better idea to run all 3 services together as one local deployment. Also take into account that Minikube is mainly designed for learning and testing certain solutions and is not entirely suitable for production use.
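To sketch what the plumbing would look like anyway (the service names, ports, and bastion host are hypothetical): outbound calls from the local service can go through kubectl port-forward, while inbound calls to it need a tunnel from a publicly reachable host.

# local service -> remote services: forward each remote Service to a local port
kubectl port-forward svc/service-b 8081:80 &
kubectl port-forward svc/service-c 8082:80 &
# the local service now calls http://localhost:8081 and http://localhost:8082

# remote services -> local service: reverse SSH tunnel through a public host
# (requires GatewayPorts yes in the bastion's sshd_config); remote services
# then call bastion.example.com:9000 to reach localhost:8080 here
ssh -N -R 0.0.0.0:9000:localhost:8080 user@bastion.example.com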

NixOS within NixOS?

I'm starting to play around with NixOS deployments. To that end, I have a repo with some packages defined, and a configuration.nix for the server.
It seems like I should then be able to test this configuration locally (I'm also running NixOS). I imagine it's a bad idea to change my global configuration.nix to point to the deployment server's configuration.nix (who knows what that will break); but is there a safe and convenient way to "try out" the server locally - i.e. build it and either boot into it or, better, start it as a separate process?
I can see docker being one way, of course; maybe there's nothing else. But I have this vague sense Nix could be capable of doing it alone.
There is a fairly standard way of doing this that is built into the default system.
Namely nixos-rebuild build-vm. This will take your current configuration file (by default /etc/nixos/configuration.nix), build it, and create a script that lets you boot the configuration as a virtual machine.
Once the script has finished, it will leave a symlink in the current directory. You can then boot the virtual machine by running ./result/bin/run-$HOSTNAME-vm and play around with it.
TLDR;
nixos-rebuild build-vm
./result/bin/run-$HOSTNAME-vm
nixos-rebuild build-vm is the easiest way to do this; however, you could also import the configuration into a NixOS container (see Chapter 47, Container Management, in the NixOS manual and the nixos-container command).
This would be done with something like:
containers.mydeploy = {
  privateNetwork = true;
  config = import ../mydeploy-configuration.nix;
};
Note that you would not want to specify the network configuration in mydeploy-configuration.nix if it's static as that could cause conflicts with the network subnet created for the container.
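After the next nixos-rebuild switch, the declaratively defined container can then be driven with the nixos-container tool, roughly like this:

sudo nixos-container start mydeploy
sudo nixos-container root-login mydeploy   # get a root shell inside the container
nixos-container show-ip mydeploy           # its address on the private network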
As you may already know, multiple system configurations can coexist without any problems in the Nix store. The problem here is running more than one system at once. For this, you need isolation or virtualization tools like Docker, VirtualBox, etc.
NixOS Containers
NixOS provides an efficient implementation of the container concept, backed by systemd-nspawn instead of an image-based container runtime.
These can be specified declaratively in configuration.nix or imperatively with the nixos-container command if you need more flexibility.
Docker
Docker was not designed to run an entire operating system inside a container, so it may not be the best fit for testing NixOS-based deployments, which expect and provide systemd and some services inside their units of deployment. While you won't get a good NixOS experience with Docker, Nix and Docker are a good fit.
UPDATE: Both 'raw' Nix packages and NixOS run in Docker. For example, Arion supports images from plain Nix, NixOS modules and 'normal' Docker images.
NixOps
To deploy NixOS inside NixOS, it is best to use a technology that is designed to run a full Linux system inside it.
It helps to have a program that manages the integration for you. In the Nix ecosystem, NixOps is the first candidate for this. You can use NixOps with its multiple backends, such as QEMU/KVM, VirtualBox, the (currently experimental) NixOS container backend, or you can use the none backend to deploy to machines that you have created using another tool.
Here's a complete example of using NixOps with QEMU/KVM.
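For flavour, here is a minimal sketch of a NixOps 1.x network file using the libvirtd (QEMU/KVM) backend; the machine and deployment names are made up:

# network.nix -- hypothetical minimal NixOps deployment
{
  network.description = "local test deployment";

  machine = { config, pkgs, ... }: {
    deployment.targetEnv = "libvirtd";   # QEMU/KVM via libvirt
    services.openssh.enable = true;
  };
}

Deploying is then roughly nixops create ./network.nix -d test followed by nixops deploy -d test.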
Tests
If your goal is to run automated integration tests, you can make use of the NixOS VM testing framework. This uses Linux KVM virtualization (exposing /dev/kvm in the sandbox) to run integration tests on networks of virtual machines, and it runs them as a derivation. It is quite efficient because it does not have to create virtual machine images; it mounts the Nix store in the VM instead. These tests are "built" like any other derivation, making them easy to run.
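A minimal sketch of such a test, assuming a reasonably recent nixpkgs with pkgs.nixosTest (the test name and service are illustrative); nix-build test.nix boots the VM, runs the script, and fails the build if anything breaks:

# test.nix -- hypothetical minimal NixOS VM test
{ pkgs ? import <nixpkgs> {} }:
pkgs.nixosTest {
  name = "nginx-comes-up";
  nodes.machine = { ... }: {
    services.nginx.enable = true;
  };
  # Python test script executed by the test driver on the host
  testScript = ''
    machine.wait_for_unit("nginx.service")
    machine.wait_for_open_port(80)
  '';
}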
Nix store optimization
A unique feature of Nix is that you can often reuse the host Nix store, so being able to mount a host filesystem in the container/VM is a nice feature to have in your solution. If you are creating your own solution, depending on your needs, you may want to postpone this optimization, because it becomes more involved if you want the container/VM to be able to modify the store. The NixOS tests solve this with an overlay file system in the VM. Another approach may be to bind mount the Nix store and forward the Nix daemon socket.

Jboss multiple instances in Standalone mode on same pc

We are using JBoss 7 App Server and we are trying to run multiple server nodes on a single box, and the same again on a second box (basically 2 boxes, each running 2 JBoss server nodes).
My question is about running multiple JBoss server nodes on a single box in standalone mode. Do I have to copy the server folder twice and use port offsets?
Or is it OK to start the servers just via port offsets, without copying the server folder?
What is the best practice to have multiple server nodes running on the same box? Any advice would be greatly appreciated.
Thank you.
Just create multiple copies of the standalone directory (for example standalone_PROD and standalone_SIT) so that each instance has its own log files and deployment directories, and use the options below when starting each server instance:
-Djboss.server.base.dir=/path/to/standalone_SIT <-- Location of standalone dir
-Djboss.socket.binding.port-offset=10 <-- PortOffset to avoid port conflict
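Putting the two flags together, starting the second instance would look something like this (the path is an example):

./bin/standalone.sh \
  -Djboss.server.base.dir=/path/to/standalone_SIT \
  -Djboss.socket.binding.port-offset=10

With an offset of 10, every default port shifts by 10, e.g. HTTP moves from 8080 to 8090 and the management port from 9990 to 10000.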
We have had two instances of JBoss on the same computer for several years. Both instances were in the same domain. Each instance had its own ports and, of course, its own path. Our experience has been good.
You can have as many standalone instances as you want on a machine, depending upon the resources available.
All you need to do is copy the same folder twice and change all the ports used in standalone mode. Also, if you are setting any JVM parameters, make sure they fit the memory available on the machine.

Managing resources (database, elasticsearch, redis, etc) for tests using Docker and Jenkins

We need to use Jenkins to test some web apps that each need:
a database (postgres in our case)
a search service (ElasticSearch in our case, but only sometimes)
a cache server, such as redis
So far, we've just had these services running on the Jenkins master, but this causes problems when we want to upgrade Postgres, ES or Redis versions. Not all apps can move in lock step, and we want to run the tests against the new versions before committing to moving an app in production.
What we'd like to do is have these services provided on a per-job-run basis, each one running in its own container.
What's the best way to orchestrate these containers?
How do you start up these ancillary containers and tear them down, regardless of whether the job succeeds or not?
How do you prevent port collisions between, say, the database in one web app's job run and the database in another web app's job run?
Check out docker-compose and write a docker-compose file for your tests.
The newer networking features of Docker (private networks) will help you isolate builds running in parallel.
However, start by learning docker-compose as if you only had one build running at a time. Once you are confident with that, look further into the advanced Docker documentation on networking.
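A minimal sketch of such a compose file (the image tags are examples; pin whichever versions the app under test needs):

# docker-compose.yml
version: "3"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: test
  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      discovery.type: single-node
  cache:
    image: redis:6

Giving each job run its own compose project name keeps container names and networks from colliding, and nothing needs a published host port because the tests talk to the services over the compose network:

docker-compose -p "test-${BUILD_TAG}" up -d    # BUILD_TAG is set by Jenkins per run
# ... run the tests against db, search and cache on the compose network ...
docker-compose -p "test-${BUILD_TAG}" down -v  # tear down even on failure, volumes included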