I'm trying to run multiple commands in docker-compose, but my container exits without any errors being logged. I've tried numerous variations of this in my docker-compose file:
command: echo "the_string_for_the_file" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend
The Dockerfile command is:
CMD ["datahub-frontend/bin/datahub-frontend"]
My Real Goal
Before the application starts, I need to create a file named user.props in the directory ./datahub-frontend/conf/ and add some text to it.
Annoying Constraints
I cannot edit the Dockerfile
I cannot use a volume + some init file to do my bidding
Why? DataHub is an open source project for documenting data. I'm trying to create a very easy way for non-developers to get an instance of DataHub hosted in the cloud. The hosting we're using (AWS Elastic Beanstalk) is cool in that it will accept a docker-compose file to create a web application, but it cannot take other files (e.g. an init script). Even if it could, I want to make it really simple for folks to spin up the container: just a single docker-compose file.
Reference:
The container image is located here:
https://registry.hub.docker.com/layers/datahub-frontend-react/linkedin/datahub-frontend-react/465e2c6/images/sha256-e043edfab9526b6e7c73628fb240ca13b170fabc85ad62d9d29d800400ba9fa5?context=explore
Thanks!
You can use bash -c if your Docker image has bash.
Something like this should work:
command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"
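In context, the relevant part of the docker-compose file would look something like this (a sketch; the service name is arbitrary and the image tag is taken from the link in the question):

services:
  datahub-frontend:
    image: linkedin/datahub-frontend-react:465e2c6
    command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"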
Related
I am using VS Code with remote containers extension
My container accepts json files to override specific configurations supported by it.
For example, if I'm running from a terminal, the command looks something like this:
$ docker run -it my-container --Flag1 /path/to/file.json --Flag2 /path/to/file2.json
However, I can't find a way to pass these flags using either:
devcontainer.json
or
devcontainer.env
I believe they should go in the devcontainer.env file - but I can't seem to find a way to specify it.
If I use in my devcontainer.env:
Flag1=/path/to/file.json
Flag2=/path/to/file2.json
The container will not start.
I have a few quick navigation plugins such as "block travel" I use all the time. Is there a way to use these in cloud shell?
I imagine there are some restrictions, but even some simple editor plugins can be huge timesavers.
While I'm at it - alt-D to duplicate a line, or transpose lines - some of those seem to be missing and hard to use key remapping to get working, at least within the shell. In general maybe keyboard shortcuts seem to get trapped by the browser or PWA wrapper. I'm using cloudshell as a webapp on a chromebook FWIW, for various secure projects.
I have come up with a solution that covers both aspects of your question
To get Unlimited Persistent Disk:
You can use Google Cloud Storage FUSE
Google Cloud Storage FUSE lets you mount a GCS bucket as a folder on your Linux instance. By doing that you get an "unlimited" persistent disk, and it is super simple to set up since gcsfuse is already installed in Cloud Shell.
1. Create a GCS bucket (you only need to run this once) -- replace BUCKET_NAME with any name:
gsutil mb "gs://BUCKET_NAME/"
2. Create a local directory for mounting -- replace FOLDER_NAME with the chosen directory name:
mkdir /home/[USER]/[FOLDER_NAME]
chmod 777 /home/[USER]/[FOLDER_NAME]
3. Mount the bucket onto the local filesystem (note: you need to re-run this every time Cloud Shell starts)
gcsfuse -o nonempty -file-mode=777 -dir-mode=777 --uid=1000 --debug_gcs [BUCKET_NAME] /home/[USER]/[FOLDER_NAME]
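Since the mount does not survive restarts, you could automate step 3 from the customization script described in the next section (a sketch, reusing the same [USER], [BUCKET_NAME], and [FOLDER_NAME] placeholders):

#!/bin/sh
# .customize_environment runs as root at boot, so remount as your user
sudo -u [USER] gcsfuse -o nonempty -file-mode=777 -dir-mode=777 --uid=1000 [BUCKET_NAME] /home/[USER]/[FOLDER_NAME]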
To use third party plugins in cloud shell:
You can use an environment customization script (.customize_environment), as mentioned in the public documentation. It allows you to install additional packages into your Cloud Shell environment each time it starts.
For reference, below are the steps to install VS Code plug-in.
Step 1:
To install the VS Code server, run the script named visual_studio_code.sh, shown below, from the root directory of your workspace in the Cloud Shell Editor.
visual_studio_code.sh file:
export VERSION=v3.10.2 # pinned so it matches the archive name below; bump both together to upgrade
wget https://github.com/cdr/code-server/releases/download/$VERSION/code-server-3.10.2-linux-amd64.tar.gz
tar -xvzf code-server-3.10.2-linux-amd64.tar.gz
cd code-server-3.10.2-linux-amd64
Run the script using the below command in the shell:
./visual_studio_code.sh
If you get a permission denied error, make the script executable and run it again:
chmod +x visual_studio_code.sh
./visual_studio_code.sh
Step 2:
Make a customization script in the root directory of your workspace in the Cloud Shell Editor to start the VS Code server on boot, with the below commands:
.customize_environment file:
#!/bin/sh
# .customize_environment runs in the background as root; wait for your user to initialize
sleep 20
sudo -u [USER] /home/[USER]/code-server-3.10.2-linux-amd64/code-server --auth none --port 9090
Step 3:
To view the Visual Studio Code server on port 9090:
Click on Web Preview > Change Port > 9090
If you get a 404 error, remove '?authuser=0' from the URL.
Visual Studio Code Server will now be running in the browser!
Block travel navigation plugin:
To get the block travel navigation plugin in Cloud Shell, follow the below commands and run them in the shell from the root directory:
wget https://github.com/efatsi/block-travel/archive/refs/tags/v1.0.0.tar.gz
tar xzvf v1.0.0.tar.gz
ls
#You will see block-travel-1.0.0
#Start code-server again so it picks up the plugin (its keymap lives in block-travel-1.0.0/keymaps/block-travel.cson)
/home/[USER]/code-server-3.10.2-linux-amd64/code-server --auth none --port 9090
#You might get Permission denied; if so, run the next two commands, else go to the web preview on port 9090
chmod +x /home/[USER]/code-server-3.10.2-linux-amd64/code-server
/home/[USER]/code-server-3.10.2-linux-amd64/code-server --auth none --port 9090
Open the web preview on port 9090, and you will be able to navigate through the VS Code files using:
Alt+up for block-travel.jumpUp
Alt+shift+up for block-travel.selectUp
Alt+down for block-travel.jumpDown
Alt+shift+down for block-travel.selectDown
WARNING: This should not be considered a long term solution, just a stop gap until this is supported in an easier fashion.
This might not be the greatest idea, but it does seem to work for the vim extension I tried in my environment. It's probably best to make a request through the in-product feedback to get it officially added, but until then you can follow these steps.
Upload the .vsix package to your $HOME directory.
Unzip the package into the /google/devshell/editor/theia/plugins directory. This action will not persist so you'll want to add the command to the .customize_environment script actions.
e.g.
sudo unzip vscodevim.vsix -d /google/devshell/editor/theia/plugins/vscode-vim
Now for the questionable part. You'll want to install the pslist package to make life easy so you have access to the rkill command. You probably also want to add this to the .customize_environment file as well since it also will not persist.
sudo apt install pslist
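Putting those pieces together, a minimal .customize_environment could look like this (a sketch; replace [USER] with your username, and it assumes the package was uploaded as vscodevim.vsix in your home directory):

#!/bin/sh
# .customize_environment runs as root each time the VM starts
apt-get install -y pslist
unzip -o /home/[USER]/vscodevim.vsix -d /google/devshell/editor/theia/plugins/vscode-vim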
Now we need to get the process ID of the editor. Currently it seems to be spawned by a supervisord command, which also spawns the tmux session, so we're going to grab the process ID of the runuser command it spawns (filtering for the theia one just in case).
ps ax | grep runuser | grep "theia start"
Then we can use rkill to kill the process and all of its children, which will cause supervisord to restart it for us, and the plugin should be available.
sudo rkill PID_OF_GREP_OUTPUT
I'm not sure the best way to script the rkill command yet, since I don't know the timing of when it's up vs the .customize_environment execution, so right now I run this each time I start up a new VM.
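If you do want to script it, something along these lines should work once the editor process is up (a sketch that just automates the lookup above):

EDITOR_PID=$(ps ax | grep runuser | grep "theia start" | grep -v grep | awk '{print $1}')
sudo rkill "$EDITOR_PID"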
If anything goes horribly wrong, you should be able to request a restart of the VM from the menu options and get a fresh one.
Cloud Shell offers a VS Code editor experience through Theia. Did you try the Cloud Code editor in Cloud Shell? It is exposed through the "Open Editor" button at the top right; this opens the Cloud Code editor, which gives you a VS Code experience. You have all the navigation keys that are available in the editor.
I don't want to have to deploy a whole other ECS service just to enable X-Ray. I'm hoping I can run X-Ray on the same Docker container as my app; I would have thought that was the preferred way of running it. I know there might be some data loss if my container dies, but I don't much care about that. I'm trying to stop this proliferation of extra services that serve only analytical/logging functions; I already have a logstash container I'm not happy about, and my feeling is that apps themselves should be able to do this sort of stuff.
While we have a Docker Hub image of the X-Ray daemon, you can absolutely run the daemon in the same Docker container as your application; that shouldn't be an issue.
Here's the typical setup with the daemon dockerfile and task definition instructions:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
I imagine you can simply omit the task definition attributes around the daemon, since it would be running locally beside your application - those wouldn't be used at all.
So I think the proper way to do this is using supervisord (see the link for an example of that), but I ended up just making a very simple script:
#!/bin/bash
# start.sh: run the X-Ray daemon in the background, then Tomcat in the foreground
/usr/bin/xray &
$CATALINA_HOME/bin/catalina.sh run
And then having a Dockerfile:
FROM tomcat:9-jdk11-openjdk
# unzip is needed to extract the daemon; refresh the package lists first
RUN apt-get update && apt-get install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
# COPY APPLICATION
# TODO
COPY start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh
CMD ["/bin/bash", "/usr/bin/start.sh"]
I think I will look at using supervisord next time.
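For reference, the supervisord variant would look roughly like this (an untested sketch; the config path and program names are my own, and CATALINA_HOME is hardcoded to the official tomcat image's /usr/local/tomcat):

# supervisord.conf
[supervisord]
nodaemon=true

[program:xray]
command=/usr/bin/xray

[program:tomcat]
command=/usr/local/tomcat/bin/catalina.sh run

The Dockerfile would then install supervisor (apt-get install -y supervisor) and end with CMD ["supervisord", "-c", "/etc/supervisord.conf"] instead of the start.sh script.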
I am using docker-compose for a development project. I have 6 services defined in my docker compose file. I have been using the below script to rebuild the images whenever I make a change.
#!/bin/bash
# file: rebuild.sh
docker-compose down
docker-compose build
docker-compose up
I am looking for a way to reduce the build time, as building and restarting all the services seems unnecessary when I am usually only changing one module. I see in the docker-compose docs that you can run commands for individual services by specifying the service name afterwards, e.g. docker-compose build myservice.
In another terminal window I tried docker-compose build myservice && docker-compose restart myservice while leaving the original ./rebuild.sh running. In the ./rebuild.sh terminal I see all the initialization messages being reprinted to stdout, so I know the service is restarting, but the code changes aren't there. What am I doing wrong? I just want to rebuild and restart a single service.
Try:
docker-compose up -d --force-recreate --build myservice
Note that:
-d is for detached mode,
--force-recreate recreates the containers even if your code did not change,
--build builds your images before starting the containers,
and myservice is the name of your service.
Take a look here.
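Building on that, the rebuild.sh script from the question could take an optional service name, falling back to the full rebuild (a sketch; myservice stands for any service defined in docker-compose.yml):

#!/bin/bash
# file: rebuild.sh -- rebuild everything, or just one named service
if [ $# -eq 0 ]; then
  docker-compose down
  docker-compose build
  docker-compose up
else
  docker-compose up -d --force-recreate --build "$1"
fi

Usage: ./rebuild.sh to rebuild everything, or ./rebuild.sh myservice for a single service.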
I need to do a dataimport from a PostgreSQL container running inside docker to a Solr server also running inside of Docker.
In my docker run command I specify the --link option which creates the environment variable $POSTGRESQL_PORT_5432_TCP_ADDR inside the solr docker container, and I need to pass this into Solr to use in my solrconfig.xml file.
I've heard that this is possible by passing JVM environment variables to the Solr startup command, but docker run starts Solr automatically. The only workaround I've found is doing something like:
docker run --name solr -d -p 8983:8983 --link postgresql --volumes-from solr_cores makuk66/docker-solr /bin/true
Starting the container with bin/true so it does nothing, and then
docker exec -it solr /bin/bash
to get into the container, finally running the solr startup command myself with the flag
-Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR
However this is an involved manual process, and I'm wondering if there's a better way.
Looking at the page Taking Solr to Production, you see:
The bin/solr script simply passes options starting with -D on to the JVM during startup. For running in production, we recommend setting these properties in the SOLR_OPTS variable defined in the include file. Keeping with our soft-commit example, in /var/solr/solr.in.sh, you would do:
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
So all you need to do is edit the SOLR_OPTS environment variable in solr.in.sh.
It's a bit different for Docker because you don't directly have access to solr.in.sh, but after some trial and error, it was as easy as adding this to my Dockerfile (note the single quotes, which keep the variables from being expanded at build time so they are resolved when solr.in.sh is sourced at startup):
RUN echo 'SOLR_OPTS="$SOLR_OPTS -Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR"' >> /opt/solr/bin/solr.in.sh
Then you can use it in the solrconfig.xml file as
${solr.database.ip}
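For example, solrconfig.xml can pass it through to the DataImportHandler as a default parameter (a sketch; the handler definition, database name, and db.url parameter name are illustrative, not from the original config):

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <str name="db.url">jdbc:postgresql://${solr.database.ip}:5432/mydb</str>
  </lst>
</requestHandler>

data-config.xml can then reference that parameter as ${dataimporter.request.db.url} in its dataSource.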
An important thing to note is that you can call the JVM environment variable whatever you want as long as you make sure not to overwrite anything important. I could have called it
-Dsolr.potato
if I wanted to.
For some reason the solr.in.cmd file looks exactly the same as solr.in.sh, which confused me about how to set variables there. In Windows containers, the command to accomplish the same from a Dockerfile would be:
RUN Add-Content C:\solr\bin\solr.in.cmd 'set SOLR_OPTS=%SOLR_OPTS% -Dsolr.database.ip=%POSTGRESQL_PORT_5432_TCP_ADDR%'