TestContainers: reuse the network from a DockerComposeContainer in other GenericContainers?

There is a method:
org.testcontainers.containers.GenericContainer#withNetwork
which I can use to spawn containers on the same network with the Testcontainers library.
But what about DockerComposeContainer? First I start it, then I want to get its network and reuse that same network when spawning subsequent GenericContainers.
Is this even possible at the moment?
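For context, a commonly suggested workaround (not an official Testcontainers API) relies on the fact that docker-compose names its default network "<project>_default", and attaches further containers to it via withNetworkMode. A minimal sketch, where the image, command, and project prefix are assumptions:

    import java.io.File;

    import org.testcontainers.containers.DockerComposeContainer;
    import org.testcontainers.containers.GenericContainer;

    public class ComposeNetworkSketch {
        public static void main(String[] args) {
            // start the compose stack; docker-compose puts its services
            // on a network named "<project>_default"
            DockerComposeContainer<?> compose =
                    new DockerComposeContainer<>(new File("docker-compose.yml"));
            compose.start();

            // attach a further container to that network by name; the
            // project prefix here is hypothetical and must be looked up
            // (e.g. with `docker network ls`)
            GenericContainer<?> extra = new GenericContainer<>("alpine:3.14")
                    .withNetworkMode("<compose-project>_default")
                    .withCommand("sleep", "infinity");
            extra.start();
        }
    }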

Related

Use docker-compose to execute a script right before the container is stopped

Let's say I want to execute a cleanup script whenever container termination is triggered. How do I go about this using docker-compose?
This could be handy for automatically backing up files, databases, etc. for the dev container.
Docker containers are meant to be ephemeral:
By "ephemeral", we mean that the container can be stopped and destroyed, then rebuilt and replaced with an absolute minimum set up and configuration.
Building upon this concept, Docker itself does not offer anything to hook into the shutdown process, and docker-compose, being built on top of Docker, does not add such functionality either.
Maybe you can rethink your problem the Docker way to better fit the intended use of Docker. Without further context it is hard to say what a good solution could be, but maybe one of the following approaches helps you out:
docker stop sends a SIGTERM signal to the main process in the container. You could use a custom entrypoint or supervisor process that triggers the appropriate actions on SIGTERM; this approach requires custom containers. With the stop_signal attribute in your docker-compose.yml you can also configure a custom signal to be sent (see the sketch after this list)
if you just want to persist data files from the containers, configuring the right volumes might be enough
you could use docker events to listen for and act upon events emitted by the Docker daemon
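To illustrate the SIGTERM approach, a minimal sketch (service name, image, and script path are assumptions):

    # docker-compose.yml (sketch)
    services:
      app:
        image: my-app:latest        # assumed image
        stop_grace_period: 30s      # time allowed for cleanup before SIGKILL
        entrypoint: ["/entrypoint.sh"]

    #!/bin/sh
    # /entrypoint.sh (sketch): run a cleanup script on SIGTERM
    cleanup() {
        echo "backing up before shutdown..."
        # e.g. dump databases, tar data directories
    }
    trap cleanup TERM
    my-app &    # assumed main command; run in background so the trap fires
    wait $!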

Selecting specific worker for connection

So, I'm using a hypnotoad server for my application and am trying to maintain state for connections. It turns out that for every connection a different worker is spawned/selected. Can I somehow make this selection explicit? Also, is there a way to know which worker was used for my last request and use it again for the corresponding requests?
You can run multiple masters (each with a single worker, or as single processes like morbo) and add a load balancer in front of them that is responsible for selecting a specific process per connection.
Typically I have used nginx with the upstream module's sticky setting.
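For illustration, such an nginx configuration could look like this (a sketch: ip_hash is built into nginx, while the sticky directive needs a third-party module or NGINX Plus; the ports are assumptions):

    upstream hypnotoad {
        ip_hash;                  # pin each client IP to one backend process
        server 127.0.0.1:3001;    # hypnotoad master #1
        server 127.0.0.1:3002;    # hypnotoad master #2
    }

    server {
        listen 80;
        location / {
            proxy_pass http://hypnotoad;
        }
    }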

Call one container from another in Kubernetes

Say I have a container image that contains a large command-line program that is executed from the shell. I have another container that contains a scheduler whose job it is to invoke the first container when it receives a certain signal. For various reasons I don't want to put them in the same container (mainly because the scheduler can invoke many different tools, and different versions of those tools, and I don't want to have to put all the tools and their versions in the same container image.)
I know how to put two containers in the same pod. However, the default behavior is to run both containers at startup. What I want to be able to do is to have the scheduler be able to decide when to invoke the other container, and to be able to specify the command-line arguments (and ideally, environment variables) for it. Also, I need to know the exit status. Extra credit for getting stdout/stderr, but I can hack around with volumes if I need to.
I also know how to do this if the second container were a server, but in this case it's a shell program.
A quick way to do this is:
Run kubectl proxy at container startup.
Then create a Kubernetes Job from the first pod.
This would be a lightweight solution in which the desired Job can be queried for its success state, which seems to fulfill your requirements.
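As a sketch of how that could work: with kubectl proxy listening on its default port 8001 inside the scheduler pod, a Job manifest can be POSTed to the API and polled for completion (name, namespace, image, and arguments are assumptions):

    # job.yaml (sketch): one-shot run of the CLI tool with explicit args/env
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: tool-run-1
    spec:
      template:
        spec:
          containers:
            - name: tool
              image: my-tool:1.0
              command: ["/usr/bin/tool", "--input", "/data/in"]
              env:
                - name: MODE
                  value: "batch"
          restartPolicy: Never
      backoffLimit: 0

    # create the Job through the local proxy, then poll its status
    curl -X POST -H "Content-Type: application/yaml" \
      --data-binary @job.yaml \
      http://localhost:8001/apis/batch/v1/namespaces/default/jobs
    curl http://localhost:8001/apis/batch/v1/namespaces/default/jobs/tool-run-1

The Job's status.succeeded / status.failed fields give you the exit outcome, and the Job's pod logs give you stdout/stderr.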

How to run two applications on a single JBoss without creating multiple instances

I have one application which listens on port 1099.
I also have another application which refers to the same port (1099).
I have deployed these two applications in a single JBoss.
When I run JBoss (jboss-6.1.0.Final) it throws an error.
Is there any way to do this without creating another instance of JBoss?
Two processes cannot listen on the same port. You should make the ports configurable in the applications, and then use a different port for each of them.
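For example, if the applications create the RMI registry themselves, the port can be read from a system property instead of being hard-coded. A minimal sketch (the property name is an assumption):

    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;

    public class RegistryBootstrap {
        public static void main(String[] args) throws Exception {
            // default to 1099, overridable per deployment with
            // -Dapp.registry.port=1100
            int port = Integer.getInteger("app.registry.port", 1099);
            Registry registry = LocateRegistry.createRegistry(port);
            System.out.println("RMI registry listening on port " + port);
            Thread.currentThread().join(); // keep the JVM alive (sketch only)
        }
    }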

Horizontal scaling of node.js server instances on a single machine

Running a web server on node.js is a simple thing to do (as its excellent examples and documentation show), but I wonder how you can fully use the CPU resources of a dedicated server?
Since node.js is single-threaded the only way to take advantage of multiple processors is via multiple processes. Of course, only one process can bind to a port so it seems there would have to be a master/worker pattern wherein the master forks children, binds to the incoming port, and delegates incoming connections (and the actual processing work) to the children. (Perhaps via a hungry-consumer pattern?)
Is this the best way to scale a web server running node.js? If so, are there libraries to simplify the master/worker pattern? If not, what patterns or deployment setups are recommended to best use the entire resources of a dedicated machine?
(Is this a better question for ServerFault?)
Multi-node is a library that provides the master/worker pattern.
If the server processes don't need to talk to each other and you aren't using Socket.IO, a simple option is to start one process per core, bind each to a local port, and use something like nginx or HAProxy to load-balance between them.
If you're using Express, I'd use TJ's Cluster: http://learnboost.github.com/cluster/
It provides 'transparent' CPU-based load balancing, which is nice because you can use your existing Express app and it scales across cores relatively painlessly.
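For reference, Node's built-in cluster module implements exactly the master/worker pattern described above; a minimal sketch:

    // master forks one worker per core; workers share the listening socket
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
        for (let i = 0; i < os.cpus().length; i++) {
            cluster.fork();
        }
        // restart workers that die so all cores stay busy
        cluster.on('exit', (worker) => {
            console.log('worker ' + worker.process.pid + ' died, restarting');
            cluster.fork();
        });
    } else {
        http.createServer((req, res) => {
            res.end('handled by pid ' + process.pid + '\n');
        }).listen(8000);
    }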