devcontainers without VS Code?

Let's say that for some reason I don't want to launch VSC to get a devcontainer shell running, but I still want all of that devcontainer goodness without rewriting all of the configuration files. There's a devcontainer CLI, but at the moment, the only options available are open (VSC, connected to the container) and build (which builds the image, for the use case where many people share the same devcontainer environment).
Ideally, there'd be a third option, devcontainer shell, which does all the build, spin-up and connection work that is done inside VSC, but then just execs into the running container.

The .devcontainer folder contains a devcontainer.json file. In it, if you're using docker-compose, there will be a dockerComposeFile key with an array of docker-compose files, loaded in order. You can do the same with a command such as docker-compose -f first-compose-file.yml -f second-compose-file.yml.
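For illustration, the relevant fragment of such a devcontainer.json (the file names here are hypothetical) might be:
"dockerComposeFile": ["docker-compose.yml", "docker-compose.extend.yml"],
"service": "app"
which corresponds to running:
docker-compose -f docker-compose.yml -f docker-compose.extend.yml up -d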
That same folder usually has its own docker-compose.yml file. You will notice it declares your main service and usually sets up a volume to share between the host and the container (useful to work inside the container).
There are other interesting keys in devcontainer.json such as forwardPorts, remoteUser or postCreateCommand. You should be able to set up most of them in your docker-compose file (dev stuff should go into the .devcontainer/ one). The post-create command can be run with docker compose exec SERVICENAME COMMAND.
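As a rough sketch (the service name, port, user and command below are placeholders, not taken from any real project), the dev-only compose file could carry those settings like this:
# .devcontainer/docker-compose.yml
services:
  app:
    ports:
      - "3000:3000"        # roughly what forwardPorts does
    user: vscode           # roughly what remoteUser does
    volumes:
      - ..:/workspace:cached
and the post-create step becomes something like:
docker compose exec app sh -c "npm install"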
I don't know if there's a command to detect .devcontainer files and pick up the right settings, but it should not be hard to write one.
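A minimal sketch of such a wrapper, assuming jq is installed and the devcontainer.json is strict JSON (real devcontainer.json files may contain comments, which jq rejects):
#!/bin/sh
# Sketch: read .devcontainer/devcontainer.json, start the compose project,
# then exec a shell in the main service.
cd .devcontainer || exit 1
service=$(jq -r '.service' devcontainer.json)
files=$(jq -r '.dockerComposeFile | if type == "array" then .[] else . end' devcontainer.json)
compose_args=""
for f in $files; do
  compose_args="$compose_args -f $f"
done
docker compose $compose_args up -d
exec docker compose $compose_args exec "$service" /bin/sh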

Related

docker-compose run multiple commands

As other questions here have noted, I'm trying to run multiple commands in docker-compose, but my container exits without any errors being logged. I've tried numerous variations of this in my docker-compose file:
command: echo "the_string_for_the_file" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend
The Dockerfile command is:
CMD ["datahub-frontend/bin/datahub-frontend"]
My Real Goal
Before the application starts, I need to create a file named user.props in a location ./datahub-frontend/conf/ and add some text to that file.
Annoying Constraints
I cannot edit the Dockerfile
I cannot use a volume + some init file to do my bidding
Why? DataHub is an open source project for documenting data. I'm trying to create a very easy way for non-developers to get an instance of DataHub hosted in the cloud. The hosting we're using (AWS Elastic Beanstalk) is cool in that it will accept a docker-compose file to create a web application, but it cannot take other files (e.g. an init script). Even if it could, I want to make it really simple for folks to spin up the container: just a single docker-compose file.
Reference:
The container image is located here:
https://registry.hub.docker.com/layers/datahub-frontend-react/linkedin/datahub-frontend-react/465e2c6/images/sha256-e043edfab9526b6e7c73628fb240ca13b170fabc85ad62d9d29d800400ba9fa5?context=explore
Thanks!
You can use bash -c if your Docker image has bash.
Something like this should work:
command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"
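In the compose file itself that could look roughly like this (the service name and image tag are assumed; the list form avoids a second layer of quote escaping):
services:
  datahub-frontend-react:
    image: linkedin/datahub-frontend-react:latest
    command:
      - bash
      - -c
      - echo "the_string_for_the_file" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend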

VS Code and Remote containers - how to pass a variable?

I am using VS Code with remote containers extension
My container accepts JSON files to override specific configurations it supports.
For example, when running from a terminal, the command looks something like this:
$ docker run -it my-container --Flag1 /path/to/file.json --Flag2 /path/to/file2.json
However, I can't find a way to pass these flags using either:
devcontainer.json
or
devcontainer.env
I believe they should go in the devcontainer.env file - but I can't seem to find a way to specify it.
If I put the following in my devcontainer.env:
Flag1=/path/to/file.json
Flag2=/path/to/file2.json
The container will not start.

VSCode: How to open repositories that have been cloned to a local volume

Using the Remote - Containers extension in VSCode I have opened a repository in a dev container. I am using VSCode/Docker on Windows/WSL2 to develop in Linux containers. In order to improve disk performance I chose the option to clone the repository into a Docker volume instead of bind-mounting it from Windows. This works great; I did some editing, but before my change was finished I closed my VSCode window to perform some other tasks. Now I have some uncommitted state in a git repository that is cloned into a Docker volume, and I can inspect the volume using the Docker extension.
My question is, how do I reconnect to that project?
One way is, if the container is still running, I can reconnect to it, then do File > Open Folder and navigate to the volume mounted inside the container. But what if the container itself has been deleted? If the files were on my Windows filesystem I could say "Open Folder" on the Windows directory and then run "Remote-Containers: Reopen in Container" or whatever, but I can't open a folder inside a volume, can I?
If I understood correctly, you cloned a repository directly into a container volume using the "Clone Repository in Container Volume..." option.
Assuming the local volume is still there, you should still be able to recover the uncommitted changes you saved inside it.
First make sure the volume is still there: unless you named it something in particular, it is usually named <your-repository-name>-<hex-unique-id>. Use this docker command to list the volumes and their labels:
docker volume ls --format "{{.Name}}:\t{{.Labels}}"
Notice I included the Labels property; this should help you locate the right volume, which should have a label that looks like vsch.local.repository=<your-repository-clone-url>. You can even use the filter mode of the previous command if you remember the exact URL used for cloning in the first place, like this:
docker volume ls --filter label=vsch.local.repository=<your-repository-clone-url>
If you still struggle to locate the exact volume, you can find more about the docker volume ls command in the Official docker documentation and also use docker volume inspect to obtain detailed information about volumes.
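For example, this prints where Docker keeps the volume's data on the host:
docker volume inspect <your-volume-name> --format '{{ .Mountpoint }}'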
Once you know the name of the volume, open an empty folder on your local machine and create the necessary devcontainer file .devcontainer/devcontainer.json. Choose the image most suitable to your development environment, but in order to recover your work by performing a simple git commit, any image with git installed should do (even those that are not specifically designed to be devcontainers; here I am using Alpine because it occupies relatively little space).
Then set the workspaceFolder and workspaceMount variables to mount your volume in your new devcontainer, like this:
{
  "image": "mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13",
  "workspaceFolder": "/workspaces/<your-repository-name>",
  "workspaceMount": "type=volume,source=<your-volume-name>,target=/workspaces"
}
If you need something more specific there you can find exhaustive documentation about the possible devcontainer configuration in the devcontainer.json reference page.
You can now use VSCode git tools and even continue the work from where you left the last time you "persisted" your file contents.
This is, by the way, the only way I know to work with VSCode devcontainers if you are using Docker through TCP or SSH with a non-local context (i.e. the Docker VM is not running on your local machine), since your local file system is not directly available to the Docker machine.
If you look at the container log produced when you ask VSCode to spin up a devcontainer for you, you will find the actual docker run command executed by the IDE to be something along these lines:
docker run ... type=bind,source=<your-local-folder>,target=/workspaces/<your-local-folder-or-workspace-basename>,consistency=cached ...
meaning that if you omit the workspaceMount variable in devcontainer.json, VSCode will fill it in for you as if you had written this:
// file: .devcontainer/devcontainer.json
{
  // ...
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
  "workspaceMount": "type=bind,source=${localWorkspaceFolder},target=/workspaces/${localWorkspaceFolderBasename},consistency=cached",
  // ...
}
Where ${localWorkspaceFolder} and ${localWorkspaceFolderBasename} are dynamic variables available in the VSCode devcontainer.json context.
Alternatively
If you just want to commit the changes and throw away the volume afterwards you can simply spin up a docker container with git installed (even the tiny Alpine linux one should do):
docker run --rm -it --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash
Then, if you are familiar with the git command line tool, you can git add and git commit all your modifications. Alternatively you can run the git commands directly instead of manually using a shell inside the container:
docker run --rm -t --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash -c "git add . && git commit -m 'Recovered from devcontainer'"
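Note that git commit may refuse to run in a fresh container until a committer identity is configured; one way around that (the identity values here are placeholders) is to pass it inline:
docker run --rm -t --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash -c "git add . && git -c user.name='Recovery' -c user.email='recovery@example.com' commit -m 'Recovered from devcontainer'"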
You can find a full list of devcontainers provided by MS in the VSCode devcontainers repository.
Devcontainers are an amazing tool to help you keep your environment clean and flexible. I hope this answer helped you solve your problem and expand your knowledge of the tool a bit.
Cheers
You can also use "Remote-Containers: Clone Repository in Container Volume..." again; the volume and your changes will still be there.

docker-compose - issue restarting single service

I am using docker-compose for a development project. I have 6 services defined in my docker compose file. I have been using the below script to rebuild the images whenever I make a change.
#!/bin/bash
# file: rebuild.sh
docker-compose down
docker-compose build
docker-compose up
I am looking for a way to reduce the build time, as building and restarting all the services seems unnecessary when I am usually only changing one module. I see in the docker-compose docs that you can run commands for individual services by specifying the service name afterwards, e.g. docker-compose build myservice.
In another terminal window I tried docker-compose build myservice && docker-compose restart myservice while leaving the ./rebuild.sh command running in the original terminal. In the ./rebuild.sh terminal window I see all the initialization messages being reprinted to stdout, so I know it is restarting that service, but the code changes aren't there. What am I doing wrong? I just want to rebuild and restart a single service.
Try:
docker-compose up -d --force-recreate --build myservice
Note that:
-d runs in detached mode,
--force-recreate recreates containers even if your code did not change,
--build builds your images before starting containers.
Last comes the name of your service.
Take a look at the docker-compose up documentation.
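If the goal is to leave the other five services completely untouched, the standard --no-deps flag also stops compose from recreating the services this one depends on:
docker-compose up -d --no-deps --build myservice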

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it locally creates so that we can start editing and pushing changes?
I tried gcloud init my-project, however all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-zone
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-zone' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance to a local folder called flow-bck at the defined local path. I renamed this folder to match its name on the remote, flow, and then did:
cd flow && npm install
This installed all the needed modules and stuff for MEAN.io. Now the important part: you have to kill your remote SSH connection so that you can start running the local version of the app, because the SSH tunnel will already be using that same port (3000), unless you changed it when you tunneled in.
Then in my local app directory flow I ran gulp to start the local version of the app on port 3000. So it loads up and runs just fine. I needed to create a new user as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
Now the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether this is all even necessary. It'd be great to find that this can all be done with a few simple commands.