Run a docker command within a container managed by tilt - kubernetes

In my node project, I would like to define a script in my package.json file like:
{
  "scripts": {
    "runTestsWithTilt": "<some command to run tests within a container launched by tilt>"
  }
}
Is there a tilt cli command where I could run an arbitrary command through docker exec?
Something like: tilt docker exec <container-name> <command-to-run>

Tilt is still under active development and the documentation is minimal: both the official documentation and the official GitHub page mention a syntax for executing docker commands, but neither provides a complete manual anywhere. So for now, much as with KinD (Kubernetes in Docker), we need to fall back to plain docker commands for a few operations. Hopefully this will be addressed soon.
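In the meantime, a workaround sketch: resolve the container (or pod) yourself and exec into it. The names my-app and npm test below are placeholders for whatever your Tiltfile actually produces:
# Find the running container by name and exec the test command in it.
# "my-app" is a placeholder -- adjust the filter to match your container.
docker exec "$(docker ps -q --filter name=my-app)" npm test
# If Tilt deployed the workload to Kubernetes, exec into the pod instead
# (assumes the pods carry an app=my-app label):
kubectl exec "$(kubectl get pods -l app=my-app -o name | head -n 1)" -- npm test
Either line can then be wired into the runTestsWithTilt script in package.json.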

Related

Keep containers running after build

We are using the Docker Compose TeamCity build runner and would like the containers to continue running after the build.
I understand that the docker-compose build step itself follows a successful 'up' with a 'down', so I have attempted to bring them back up in a subsequent command line step with simply:
docker-compose up -d
I can see from the log that this is initially successful but when the build process exits, so do the containers. I have also tried:
nohup docker-compose up -d &
The outcome is the same.
How do we keep the containers running when the build has finished?
For info, the environment is both TeamCity and its BuildAgent running on the same Ubuntu box.
I have achieved this by NOT using the Docker Compose build runner. I now just have a single command line build step doing:
docker-compose down
docker-compose up -d
This works, and I feel rather silly ;-)
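For reference, that single command line step can look like the sketch below (--remove-orphans is an optional extra to clean up leftovers from earlier builds):
# One TeamCity "Command Line" build step, replacing the Docker Compose runner
docker-compose down --remove-orphans   # tear down anything left over from the last build
docker-compose up -d                   # detached: the containers outlive the build step
docker-compose ps                      # sanity check that the services are running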

In ansible can I run docker_compose with parameters?

Is there a way to run docker_compose with parameters?
Something like the following:
docker-compose run --rm app_service python init_script
Currently I use the shell module for this.
Can I use the docker_compose module instead?
The documentation for the docker_compose module suggests that it can only do the equivalents of docker-compose up, down, and build. None of the other Ansible Docker modules connect to Compose at all.
You could use docker_container as an equivalent to a separate docker run command, but this has the same drawbacks as trying to docker run a separate container in a mostly-Compose environment (you don't get networks or volumes or dependencies declared in the docker-compose.yml file).
Falling back to shell is probably your best option here.
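For reference, a sketch of that shell fallback as an Ansible task (the chdir path is an assumption; point it at the directory holding your docker-compose.yml):
- name: Run the one-off init script through docker-compose
  ansible.builtin.shell: docker-compose run --rm app_service python init_script
  args:
    chdir: /opt/myproject   # assumed project directory containing docker-compose.yml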

bash TAB completion does not work on centos 8

I run a CentOS 8 distro on Docker, and I would like to have bash TAB completion with the dnf package manager. Following other posts, I ran the following once my Docker container was started:
dnf clean all && rm -r /var/cache/dnf && dnf upgrade -y && dnf update -y
and then
dnf install bash-completion sqlite -y
After doing that I restart the container, but there is still no bash completion. I also tried to source the bash completion file directly with:
source /etc/profile.d/bash_completion.sh
but without any better effect.
Would you know what I am doing wrong?
You shouldn't need bash completion in a Docker container. The only time you should be manually connecting to a shell inside a Linux container is to troubleshoot why the process running in it is behaving abnormally. In fact, some container design advice goes as far as suggesting you not include a shell inside your base OS at all!
The reason this isn't working for you comes down to how Linux containers operate. A container is simply a namespaced process managed by the kernel of the host OS, and its filesystem is created fresh from the image every time a new container starts, so packages you install by hand disappear as soon as the container is recreated. On top of that, sourcing bash_completion.sh only affects the shell session you run it in; a later docker exec starts a fresh, non-login shell that never reads /etc/profile.d.
If you really want to do this, the best way is to build a new container image based on the original CentOS 8 base image: install the bash-completion package there, and append the source line to your user's .bashrc file.
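A minimal sketch of such an image, assuming the stock centos:8 base and the root user:
FROM centos:8
# Bake the packages into the image instead of installing them by hand
RUN dnf install -y bash-completion sqlite && dnf clean all
# docker exec starts a non-login shell that skips /etc/profile.d,
# so source the completion script from .bashrc explicitly
RUN echo 'source /etc/profile.d/bash_completion.sh' >> /root/.bashrc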
EDIT:
With regard to the additional question asked by the OP in the comments of this answer, I have added more information below.
Why should not I need bash completion in a container
The reason you do not need bash completion in a container is that containers are not meant to be attached to with a shell. A container is simply supposed to be a single instance of a process running under specific, configured criteria. Containers aren't meant to be dev environments for you to connect to; they're meant to run processes and applications in software infrastructure.
Manually updating & installing packages
You mention that one of the first things you do when you spin up a container is install packages. This is also alarming, because you are not supposed to interact with a container manually at all, and that includes package installation. Instead, you should generate a new container image from the older base image, adding RUN statements to the Dockerfile to update the system and install the desired packages.
Cannot believe it is not possible
It is possible, if you create a new Dockerfile that purposely installs it in a new layer on top of the base image and produces a new container image for you to use. BUT the point is that you shouldn't be connecting to Docker containers in the first place, so you should never even reach a point where you could need something like bash completion!
Here is a great summary of the difference between a container and a virtual machine that might help clarify some of this for you. In a nutshell, containers are supposed to run processes, and only run processes.

Can I run aws-xray on the same ECS container?

I don't want to have to deploy a whole other ECS service just to enable X-Ray. I'm hoping I can run X-Ray on the same Docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services that serve only analytical/logging functions. I already have a logstash container I'm not happy about; my feeling is that apps themselves should be able to do this sort of thing.
While we provide a Docker Hub image of the X-Ray daemon, you can absolutely run the daemon in the same Docker container as your application; that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is using supervisord (see the link for an example of that), but I ended up just making a very simple script:
# start.sh
/usr/bin/xray &                      # start the X-Ray daemon in the background
$CATALINA_HOME/bin/catalina.sh run   # run Tomcat in the foreground so the container stays alive
And then having a Dockerfile:
FROM tomcat:9-jdk11-openjdk
RUN apt-get update && apt-get install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.
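For reference, a hedged sketch of that supervisord variant (untested; the xray path matches the Dockerfile above, and /usr/local/tomcat is the CATALINA_HOME of the official Tomcat image):
; supervisord.conf -- supervise both processes in one container
[supervisord]
nodaemon=true                 ; stay in the foreground as the container's main process

[program:xray]
command=/usr/bin/xray         ; the X-Ray daemon installed by the Dockerfile

[program:tomcat]
command=/usr/local/tomcat/bin/catalina.sh run
The Dockerfile's CMD would then launch supervisord instead of start.sh.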

How to connect travis ci mongodb service from inside docker container?

See the logs of: https://travis-ci.com/Jeff-Tian/uni-sso/builds/147317611
I created a Travis CI project that uses the mongodb service. It then runs a Docker container that connects to that MongoDB from the inside. But as the log shows, it fails.
I tried these MONGO_URI values; none of them works:
mongodb://localhost:27017
mongodb://127.0.0.1:27017
mongodb://host.docker.internal:27017
Can anyone shed some light on this? I can't find a solution in either the Travis CI documentation or Google.
Thanks in advance!
More details
I can use mongodb://host.docker.internal:27017 in the Travis CI unit tests, but inside the Docker container it fails.
Probably already too late for you, but I managed to find a solution for the same problem here: https://docs.docker.com/network/network-tutorial-host/.
This approach binds the container directly to the Docker host's network.
My script to run tests in docker container:
script:
  - docker run --network host -e CI=true mydocker/api-test npm test
Then, from your tests, you can access MongoDB using this URL:
mongodb://127.0.0.1:27017/mongo_db_name
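Putting it together, a sketch of the relevant .travis.yml pieces (the docker build step and image tag are assumptions based on the question):
services:
  - mongodb                  # Travis starts mongod on the host at 127.0.0.1:27017

script:
  - docker build -t mydocker/api-test .
  # --network host lets the container reach the host's MongoDB on 127.0.0.1
  - docker run --network host -e CI=true mydocker/api-test npm test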