.env variable in the mounts section of devcontainer.json

setting up a .devcontainer configuration ...
I have a mount that normally lives in one spot but, when I am working remotely, can live at a different path. I want to set up the devcontainer so that I can change an environment variable locally without committing a change to the repository. I can't change the target because it is a convention that spans many tools and systems.
"mounts": [
"type=bind,source=${localEnv:MY_DIR},target=/var/local/my_dir,readonly"
]
For example, in a .env file in the project root:
MY_DIR=~/workspace/my_local_dir
When I try this, no matter what combination I use, the environment variable always ends up blank, and then Docker complains when it gets a --mount configuration with an empty source:
[2022-02-12T19:11:17.633Z] Start: Run: docker run --sig-proxy=false -a STDOUT -a STDERR --mount type=bind,source=,target=/var/local/my_dir,readonly --entrypoint /bin/sh vsc-sdmx-9c04deed4ad9f1e53addf97c69727933 -c echo Container started
[2022-02-12T19:11:17.808Z] docker: Error response from daemon: invalid mount config for type "bind": field Source must not be empty.
See 'docker run --help'.
The reference shows that this type of syntax is allowed for a mount, and that a .env file will be picked up by VS Code. But I guess it only does that for the running container and not for VS Code itself.
How do I set up a developer-specific configuration change?
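For illustration, one possible workaround (an assumption, not part of the original question): since ${localEnv:...} is resolved from the environment of the VS Code process on the host, not from a project .env file, exporting the variable in the shell that launches VS Code makes it visible to the mount definition:
# Hypothetical workaround: set the variable in the host shell (e.g. in ~/.bashrc
# or ~/.zshrc) and launch VS Code from that shell so it inherits MY_DIR.
export MY_DIR=~/workspace/my_local_dir
code .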

Related

flyctl launch: Error name argument or flag must be specified when not running interactively

I am trying to deploy a Flask app on fly.io, but when I execute flyctl launch in the terminal I get an error:
Error name argument or flag must be specified when not running interactively.
I don't see any other way to make a deployment on fly.io other than the console. I tried with a Dockerfile, but flyctl launch keeps throwing the same error.
Apparently flyctl believes you're not running its command-line tool interactively. That may or may not be a bug in flyctl itself; you can ask about that in the fly.io community.
The solution to this problem is to supply the required information as parameters instead of being prompted for it. To my knowledge, you only need the name of the app you want to launch and the region code of the server location. The syntax for that can be found using the flyctl help launch command:
λ flyctl help launch
Create and configure a new app from source code or a Docker image.
Usage:
flyctl launch [flags]
Flags:
--auto-confirm Will automatically confirm changes when running non-interactively.
--build-arg strings Set of build time variables in the form of NAME=VALUE pairs. Can be specified multiple times.
--build-only Build but do not deploy
--build-secret strings Set of build secrets of NAME=VALUE pairs. Can be specified multiple times. See https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information
--build-target string Set the target build stage to build if the Dockerfile has more than one stage
--copy-config Use the configuration file if present without prompting
--detach Return immediately instead of monitoring deployment progress
--dockerfile string Path to a Dockerfile. Defaults to the Dockerfile in the working directory.
--dockerignore-from-gitignore If a .dockerignore does not exist, create one from .gitignore files
-e, --env strings Set of environment variables in the form of NAME=VALUE pairs. Can be specified multiple times.
--generate-name Always generate a name for the app, without prompting
-i, --image string The Docker image to deploy
--image-label string Image label to use when tagging and pushing to the fly registry. Defaults to "deployment-{timestamp}".
--local-only Only perform builds locally using the local docker daemon
--name string Name of the new app
--nixpacks Deploy using nixpacks to generate the image
--no-cache Do not use the build cache when building the image
--no-deploy Do not prompt for deployment
--now Deploy now without confirmation
-o, --org string The target Fly organization
--path string Path to the app source root, where fly.toml file will be saved (default ".")
--push Push image to registry after build is complete
-r, --region string The target region (see 'flyctl platform regions')
--remote-only Perform builds on a remote builder instance instead of using the local docker daemon
--strategy string The strategy for replacing running instances. Options are canary, rolling, bluegreen, or immediate. Default is canary, or rolling when max-per-region is set.
Global Flags:
-t, --access-token string Fly API Access Token
-j, --json json output
--verbose verbose output
In summary, the following command, executed in the directory of the app you want to launch on fly.io, should create an app called your-app-name in the Toronto, Canada location.
flyctl launch --name your-app-name --region yyz

Dockerfile instructions are ignored in Logstash configuration

Hello everyone, I am using the Elasticsearch, Kibana, Logstash stack from https://github.com/robcowart/elastiflow, running it with docker-compose. The problem is that the instructions described in the Dockerfile are not executed; I will explain with an example.
Logstash has settings stored in /etc/logstash/elastiflow/user_settings. When I edit one of the configuration files and then run docker-compose, Logstash comes up with the standard settings, ignoring the changes made in the /user_settings folder. The problem is partly solved by going into the running Logstash container, into the /user_settings folder, and editing the necessary files there:
docker exec -u 0 -it container_ID bash
cd user_settings
vi ifName.yml
vi sampling_interval.yml
but such changes only live as long as the container does; after docker-compose down, the settings I made return to the defaults. My question is: for what reason could the instruction specified in the Dockerfile be ignored by docker-compose?
Here is the default instruction:
WORKDIR /etc/logstash/elastiflow
COPY --chown=logstash:root ./logstash/elastiflow/user_settings ./
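For illustration, the most common cause of this symptom (an assumption here, since the compose file is not shown) is that docker-compose keeps reusing the image it built earlier, so the COPY in the Dockerfile never runs again with the edited files; forcing a rebuild would look like this:
# Assumed fix: rebuild the image so the Dockerfile COPY re-runs with the
# edited user_settings files, then recreate the containers.
docker-compose up --build -d
Another thing worth checking is whether the compose file mounts a host directory or named volume over /etc/logstash/elastiflow, which would shadow whatever the COPY placed there at build time.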

VSCode: How to open repositories that have been cloned to a local volume

Using the Remote - Containers extension in VSCode I have opened a repository in a dev container. I am using VSCode/Docker on Windows/WSL2 to develop in Linux containers. In order to improve disk performance I chose the option to clone the repository into a docker volume instead of bind mounting it from Windows. This works great; I did some editing, but before my change was finished I closed my VSCode window to perform some other tasks. Now I have some uncommitted state in a git repository that is cloned into a docker volume; I can inspect the volume using the Docker extension.
My question is, how do I reconnect to that project?
One way is that, if the container is still running, I can reconnect to it, then do File > Open Folder and navigate to the volume mounted inside the container. But what if the container itself has been deleted? If the files were on my Windows filesystem I could say "Open Folder" on the Windows directory and then run "Remote-Containers: Reopen in Container" or whatever, but I can't open a folder in a volume, can I?
If I understood correctly, you cloned a repository directly into a container volume using the "Clone Repository in Container Volume..." option.
Assuming the local volume is still there you should still be able to recover the uncommitted changes you saved inside it.
First make sure the volume is still there: unless you named it something in particular, it is usually named <your-repository-name>-<hex-unique-id>. Use this docker command to list the volumes and their labels:
docker volume ls --format "{{.Name}}:\t{{.Labels}}"
Notice I included the Labels property; this should help you locate the right volume, which should have a label that looks like vsch.local.repository=<your-repository-clone-url>. You can even use the filter mode of the previous command if you remember the exact URL used for cloning in the first place, like this:
docker volume ls --filter label=vsch.local.repository=<your-repository-clone-url>
If you still struggle to locate the exact volume, you can find more about the docker volume ls command in the Official docker documentation and also use docker volume inspect to obtain detailed information about volumes.
Once you know the name of the volume, open an empty folder on your local machine and create the necessary devcontainer file .devcontainer/devcontainer.json. Choose the image most suitable for your development environment, but in order to recover your work by performing a simple git commit, any image with git installed should do (even those that are not specifically designed to be devcontainers; here I am using Alpine because it occupies relatively little space).
Then set the workspaceFolder and workspaceMount variables to mount your volume in your new devcontainer, like this:
{
  "image": "mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13",
  "workspaceFolder": "/workspaces/<your-repository-name>",
  "workspaceMount": "type=volume,source=<your-volume-name>,target=/workspaces"
}
If you need something more specific, you can find exhaustive documentation about the possible devcontainer configuration on the devcontainer.json reference page.
You can now use the VSCode git tools and even continue the work from where you left off the last time you "persisted" your file contents.
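As a quick usage sketch (plain git commands, nothing project-specific assumed), committing the recovered work from the devcontainer terminal could look like this:
# Run inside the reopened devcontainer terminal to inspect and save the recovered work.
git status
git add .
git commit -m "Recover uncommitted changes from the container volume"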
This is, by the way, the only way I know of to work with VSCode devcontainers if you are using Docker through TCP or SSH with a non-local context (i.e. the docker VM is not running on your local machine), since your local file system is not directly available to the docker machine.
If you look at the container log produced when you ask VSCode to spin up a devcontainer for you, you will find the actual docker run command executed by the IDE to be something along these lines:
docker run ... type=bind,source=<your-local-folder>,target=/workspaces/<your-local-folder-or-workspace-basename>,consistency=cached ...
meaning that if you omit the workspaceMount variable in devcontainer.json, VSCode will actually do it for you, as if you had written this:
// file: .devcontainer/devcontainer.json
{
  // ...
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
  "workspaceMount": "type=bind,source=${localWorkspaceFolder},target=/workspaces/${localWorkspaceFolderBasename},consistency=cached",
  // ...
}
Where ${localWorkspaceFolder} and ${localWorkspaceFolderBasename} are dynamic variables available in the VSCode devcontainer.json context.
Alternatively
If you just want to commit the changes and throw away the volume afterwards you can simply spin up a docker container with git installed (even the tiny Alpine linux one should do):
docker run --rm -it --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash
Then, if you are familiar with the git command line tool, you can git add and git commit all your modifications. Alternatively you can run the git commands directly instead of manually using a shell inside the container:
docker run --rm -t --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash -c "git add . && git commit -m 'Recovered from devcontainer'"
You can find a full list of devcontainers provided by Microsoft in the VSCode devcontainers repository.
Devcontainers are an amazing tool for keeping your environment clean and flexible. I hope this answer helped you solve your problem and expand your knowledge of this tool a bit.
Cheers
You can also use "Remote-Containers: Clone Repository in Container Volume..." again; the volume and your changes will still be there.

How do I use lsquic (LiteSpeed QUIC and HTTP/3 library)?

https://github.com/litespeedtech/lsquic
I want to implement lsquic. After the setup in the README, what should I do to send data from client to server and track the network traffic? For setup, do I just follow the three steps: install BoringSSL, lsquic, and then Docker? Would just copying and pasting the commands into the terminal work?
Error message:
CMake Error: The current CMakeCache.txt directory /src/lsquic/CMakeCache.txt is different than the directory /Users/nini/Development/lsquic/boringssl/lsquic where CMakeCache.txt was created. This may result in binaries being created in the wrong place. If you are not sure, reedit the CMakeCache.txt
The command '/bin/sh -c cd /src/lsquic && cmake -DBORINGSSL_DIR=/src/boringssl . && make' returned a non-zero code: 1
(base) pc-68-32:lsquic nini$ sudo docker run -it --rm lsquic http_client -s www.google.com -p / -o version=Q046
Password:
Unable to find image 'lsquic:latest' locally
docker: Error response from daemon: pull access denied for lsquic, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
You can build lsquic with Docker and then run it (because of the "unable to find" error, I think you did not build the docker image). To do so, git clone (just) the lsquic repository and run the commands given in the section titled "Building with Docker". The docker build will, among other things, download BoringSSL and build it, so you don't have to do that yourself, and then it will build lsquic for you.
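A minimal sketch of that flow, assuming the standard docker build command from the lsquic README's "Building with Docker" section and reusing the client command quoted in the question:
# Clone only the lsquic repository and build the image; BoringSSL is fetched
# and built inside the image during the docker build step.
git clone https://github.com/litespeedtech/lsquic.git
cd lsquic
docker build -t lsquic .
# Then run the bundled HTTP/3 client against a public server:
docker run -it --rm lsquic http_client -s www.google.com -p / -o version=Q046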

keda func deploy from a dir which contains spaces is failing

I am using Visual Studio Code with Azure Functions Core Tools to deploy a container to a K8S cluster which has KEDA installed, but I am seeing the docker error below. The error is caused because the docker build is run without double quotes around the path.
$ func kubernetes deploy --name bollaservicebusfunc --registry sbolladockerhub --python
Running 'docker build -t sbolladockerhub/bollaservicebusfunc C:\Users\20835918\work\welcome to space'....done
Error running docker build -t sbolladockerhub/bollaservicebusfunc C:\Users\20835918\work\welcome to space.
output:
"docker build" requires exactly 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
(.venv)
20835918#CROC1LWPF1S99JJ MINGW64 ~/work/welcome to space (master)
I know there is a known bug: Spaces in directory.
But I am posting to see if there is a workaround; this is important as I have everything in "OneDrive - Company Name" and it has spaces in it.
Looking into the code for func, you could specify --image-name instead of --registry which seems to skip building the container.
You would have to build your docker container manually using the same command shown in the output, and then provide the value of the -t argument of that docker command as the --image-name of the func command.
Also, since this would not push your docker container either, make sure to push it before running the func command.
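Putting that together, a sketch of the workaround using the image name and path from the question's output (the exact func flags beyond --name and --image-name are not verified here):
# Build and push the image manually, quoting the path with spaces, then let
# func skip the build by passing --image-name instead of --registry.
docker build -t sbolladockerhub/bollaservicebusfunc "C:\Users\20835918\work\welcome to space"
docker push sbolladockerhub/bollaservicebusfunc
func kubernetes deploy --name bollaservicebusfunc --image-name sbolladockerhub/bollaservicebusfunc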