VSCode throws error when setting PATH environment variable in devcontainer.json

I have the following devcontainer.json file in a project.
When I try to open VSCode in a container, it crashes. The container builds successfully, but the following logs are emitted during startup. When I remove the environment variable configuration, the container starts up and stays running just fine.
I followed the example for configuring environment variables inside the dev container, according to the Visual Studio Code documentation for Advanced Container Configuration.
Question: How do I properly configure the PATH environment variable in my devcontainer.json file?
devcontainer.json
{
    "name": "Ubuntu 18.04 & Git",
    "dockerFile": "Dockerfile",
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },
    "containerEnv": {
        "PATH": "${containerEnv:PATH}:/root/.customfolder/bin/"
    }
}
Logs
[6499 ms] Successfully built 096d41dceada
[6503 ms] Successfully tagged vsc-asdf-73cee28d5205fdd4a6063fc596248885:latest
[6506 ms] Start: Run: git rev-parse --show-toplevel
[6533 ms] Start: Starting container
[6533 ms] Start: Run: docker run -a STDOUT -a STDERR --mount type=bind,source=/Users/username/git/asdf,target=/workspaces/asdf,consistency=cached --mount source=/Users/username/.aws/credentials,target=/root/.aws/credentials,type=bind -l vsch.quality=stable -l vsch.remote.devPort=0 -l vsch.local.folder=/Users/username/git/asdf -e PATH=${containerEnv:PATH}:/root/.customfolder/bin/ --entrypoint /bin/sh vsc-pulumi-73cee28d5205fdd4a6063fc596248885 -c echo Container started ; while sleep 1; do :; done
[6852 ms] /bin/sh: 1: sleep: not found
[6852 ms] Container started
[6873 ms] Start: Inspecting container
[6879 ms] Start: Run in container: uname -m
[7031 ms] Start: Run in container: cat /etc/passwd
[7035 ms] Shell server terminated (code: 1, signal: null)
Error response from daemon: Container 8e0f6eeb22c358b0dfd8f1c1410c10b382ea66aa432e7e400a4564671619046f is not running
An error occurred setting up the container
Environment
macOS Catalina
Docker Desktop 2.2.0.0
Microsoft Visual Studio Code 1.42.0
VSCode Remote-Containers extension 0.101.0

You should be able to change the property from containerEnv to remoteEnv to resolve the issue.
Only the remoteEnv property supports referencing existing container env vars. The containerEnv property is like -e for the Docker CLI and is therefore evaluated before the container is created; in your logs you can see the unexpanded -e PATH=${containerEnv:PATH}:... being passed to docker run, which clobbers PATH inside the container and is why sleep: not found kills the startup loop. containerEnv is mainly useful when your Dockerfile itself depends on certain env vars being set (though you can also modify the PATH inside your Dockerfile if you prefer; see the sketch after the example below).
For everything else, remoteEnv is the way to go, since VS Code and all of its sub-processes, such as terminals, use it. Because remoteEnv is evaluated after the container is created, you can extend the PATH as the example illustrates.
"remoteEnv": {
"PATH": "${containerEnv:PATH}:/some/other/path",
"MY_REMOTE_VARIABLE": "some-other-value-here",
"MY_REMOTE_VARIABLE2": "${localEnv:SOME_LOCAL_VAR}"
}
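If you would rather bake the extra directory into the image itself, the Dockerfile route mentioned above is a one-liner. A minimal sketch, using the custom folder path from the question:

# Prepend the custom bin folder to the image's PATH at build time
ENV PATH="/root/.customfolder/bin:${PATH}"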

"containerEnv": {
"PATH": "${localEnv:PATH}:/workspaces/v8/depot_tools"
}
I think that is what you need. Note that localEnv here resolves against your local host machine (the one running VS Code), not the container, because containerEnv is evaluated before the container exists.
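Putting it together, a corrected version of the devcontainer.json from the question, with the remoteEnv fix from the first answer applied, might look like this:

{
    "name": "Ubuntu 18.04 & Git",
    "dockerFile": "Dockerfile",
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },
    "remoteEnv": {
        "PATH": "${containerEnv:PATH}:/root/.customfolder/bin/"
    }
}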

Related

GitLab K8s Runner fails for get_sources

We are trying to move gitlab-runners from standard CentOS VMs to Kubernetes.
But after setup and registration, the pipeline fails with an unknown error:
Running with gitlab-runner 15.7.0 (259d2fd4)
on Kubernetes-local JXRw3mH1
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gitlab-test.domain:5005/image:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-jxrw3mh1-project-290-concurrent-0dpd88 to be running, status is Pending
Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
Getting source from Git repository
00:00
error: could not lock config file /root/.gitconfig: Read-only file system
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Inside the log of the job pod we found:
helper Running on runner-jxrw3mh1-project-290-concurrent-0dpd88 via gitlab-runner-d7df6c548-hsgxg...
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/prepare_script"}
helper error: could not lock config file /root/.gitconfig: Read-only file system
helper
helper {"command_exit_code": 1, "script": "/scripts-290-207166/get_sources"}
helper
helper {"command_exit_code": 0, "script": "/scripts-290-207166/cleanup_file_variables"}
Inside the log of the gitlab-runner pod we found:
Starting in container "helper" the command ["gitlab-runner-build" "<<<" "/scripts-290-207167/get_sources" "2>&1 | tee -a /logs-290-207167/output.log"] with script: #!/usr/bin/env bash
if set -o | grep pipefail > /dev/null; then set -o pipefail; fi; set -o errexit
set +o noclobber
: | eval $'export FF_CMD_DISABLE_DELAYED_ERROR_LEVEL_EXPANSION=$\'false\'\nexport FF_NETWORK_PER_BUILD=$\'false\'\nexport FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=$\'false\'\nexport FF_USE_DIRECT_DOWNLOAD
exit 0
job=207167 project=290 runner=JXRw3mH1
Remote process exited with the status: CommandExitCode: 1, Script: /scripts-290-207167/get_sources job=207167 project=290 runner=JXRw3mH1
Container "helper" exited with error: command terminated with exit code 1 job=207167 project=290 runner=JXRw3mH1
Notes:
the error "error: could not lock config file /root/.gitconfig: Read-only file system" occurs because the current user inside the container is not root
the file /logs-290-207167/output.log contains the log of the job pod
Inside the job pod shell we also tested some git commands, and fetch and clone succeeded using our personal credentials (the same user that runs the pipeline from the GitLab GUI).
We think the problem may be related to the gitlab-ci-token, but our investigation has stalled... :frowning:
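No definitive fix, but given the diagnosis above (git cannot write /root/.gitconfig because the job does not run as root and the filesystem is read-only), one hedged workaround is to point HOME at a writable path for the build pods via the runner's config.toml. The environment option below is standard runner configuration; whether /tmp is writable in your pods is an assumption, so treat this as a sketch rather than a verified fix:

[[runners]]
  name = "Kubernetes-local"
  executor = "kubernetes"
  # Assumption: /tmp is writable inside the build pod;
  # git will then write /tmp/.gitconfig instead of /root/.gitconfig
  environment = ["HOME=/tmp"]
  [runners.kubernetes]
    namespace = "gitlab-runner"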

How to setup tRPC project with pnpm

I am trying to learn from this repo.
Step 3 of the setup instructions says to run yarn dx.
The package.json for this repo defines that script as:
"dx": "run-p dx:* --print-label",
When I try to do this, I get an error message that says:
yarn dx
yarn run v1.22.19
$ run-p dx:* --print-label
[dx:next] $ run-s migrate-sqlite generate-sqlite db-seed && next dev
[dx:prisma-studio] $ pnpm prisma-studio-sqlite
[dx:prisma-studio] /bin/sh: pnpm: command not found
[dx:prisma-studio] error Command failed with exit code 127.
[dx:prisma-studio] info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: "dx:prisma-studio" exited with 127.
error Command failed with exit code 1.
I'm not sure what pnpm is, or why Prisma is trying to link to SQLite when the database it specifies is psql.
Can anyone point me in the direction of what's required to get this repo to start?
If you use yarn, change every occurrence of pnpm in the package.json to yarn and I think it should work :)
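Alternatively, since the repo's scripts assume pnpm, you could install pnpm itself rather than editing package.json. A minimal sketch (recent Node versions bundle corepack; otherwise the npm route works):

# Option 1: enable the pnpm shim bundled with Node's corepack
corepack enable
# Option 2: install pnpm globally via npm
npm install -g pnpm
# Then run the dev script with pnpm instead of yarn
pnpm dx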

INSTRUMENTATION_FAILED when running Android automation test from sh

I'm doing automation testing for an Android app. My test code is ready and runs fine both from Android Studio and as a manual command on the Android device. But when I run the same command from a sh script, it fails with INSTRUMENTATION_FAILED. Can anyone help me fix this? I just don't understand why it works when run directly from the terminal but fails when run from sh.
Manual input command, which works:
am instrument -w -r -e debug false -e class com.amap.auto.androidautomation.testcases.basemap.SmokeTest com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner
Result:
INSTRUMENTATION_STATUS: numtests=9
INSTRUMENTATION_STATUS: stream=
com.amap.auto.androidautomation.testcases.basemap.SmokeTest:
INSTRUMENTATION_STATUS: id=AndroidJUnitRunner
INSTRUMENTATION_STATUS: test=smoke01
INSTRUMENTATION_STATUS: class=com.amap.auto.androidautomation.testcases.
Run from sh (the command is the same as the manual one, just put in a sh file):
sh r.sh
Result:
INSTRUMENTATION_STATUS: id=ActivityManagerService
INSTRUMENTATION_STATUS: Error=Unable to find instrumentation info for: ComponentInfo{com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUn}tRunner
INSTRUMENTATION_STATUS_CODE: -1
android.util.AndroidException: INSTRUMENTATION_FAILED: com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner
at com.android.commands.am.Am.runInstrument(Am.java:1093)
at com.android.commands.am.Am.onRun(Am.java:371)
at com.android.internal.os.BaseCommand.run(BaseCommand.java:47)
at com.android.commands.am.Am.main(Am.java:100)
at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:251)
: not foundnload/r.sh[2]: exit
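Not an authoritative answer, but two details in this output are the classic signature of Windows-style CRLF line endings in r.sh: the "}" spliced into AndroidJUnitRunner in the error line (as if the component name ends in an invisible carriage return, making am look up a runner named "AndroidJUnitRunner\r"), and the scrambled final line of the log. Assuming that is the cause, stripping the carriage returns should make the script behave like the manually typed command:

# Strip Windows carriage returns from the script, then run it
tr -d '\r' < r.sh > r_unix.sh
sh r_unix.sh
# (dos2unix r.sh does the same in place, if that tool is installed)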

swift build failed due to "database is locked" in docker container?

Basically I am trying to learn Swift on Win7 using Docker, with the following setup and steps:
1) physical machine running Win7
2) Docker Toolbox 1.12.5 (Windows version) installed on Win7
3) open "Docker Quickstart Terminal", which is a MINGW64 console
4) in the MINGW64 console, ran "docker pull swift" to pull a Docker Swift image
5) created a container using "docker run -it --hostname=value --privileged=true --net=host -v //d/dev/tools/docker/swift://swift:z --name swiftfun 24cc712c0763 /bin/bash"; the volume mapping does not actually work, as I cannot create files in the folder on my Win7 host
The swift version is:
root@value:/swift/PerfectTemplate/.build# swift -version
Swift version 3.0.2 (swift-3.0.2-RELEASE)
Target: x86_64-unknown-linux-gnu
The Linux container is:
root@value:/swift/PerfectTemplate/.build# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
6) then ran this in the container: "mount -t cifs //10.x.x.xxx/D$/dev/tools/docker/swift /swift -o username=myusername,password=mypassword,noperm"; this time it works, and I can see the files in my Win7 folder and can write files to Win7
7) went to the folder "/swift" in the container and pulled code from git as shown in this link: http://perfect.org/docs/gettingStarted.html . I can see the files/folders created in the Win7 folder
8) in the container, went to the folder "PerfectTemplate" and ran swift build; it failed with the following message:
...
Cloning https://github.com/PerfectlySoft/Perfect-Thread.git
HEAD is now at aee3b32 Cleanup
Resolved version: 2.0.9
<unknown>:0: error: unable to attach DB: unable to initialize database (database is locked)
error: exit(1): /usr/bin/swift-build-tool -f /swift/PerfectTemplate/.build/debug.yaml
...
A file build.db is created in my Win7 folder at D:\dev\tools\docker\swift\PerfectTemplate\.build\build.db, and its size remains 0 bytes.
The following is the verbose output from the build:
/usr/bin/swiftc --driver-mode=swift -I /usr/lib/swift/pm -L /usr/lib/swift/pm -lPackageDescription /swift/PerfectTemplate/Packages/PerfectThread-2.0.9/Package.swift -fileno 4
/usr/bin/swift-build-tool -f /swift/PerfectTemplate/.build/debug.yaml -v
<unknown>:0: error: unable to attach DB: unable to initialize database (database is locked)
error: exit(1): /usr/bin/swift-build-tool -f /swift/PerfectTemplate/.build/debug.yaml -v
If I use a local Linux folder for building the code, everything works fine and the size of build.db changes. Does this have anything to do with the drive being mounted using //ip/drive?
How do I resolve this? Thanks.
OK, it seems that SQLite does not tolerate mapped Windows folders in containers. I tried mapping folders using both the //ip/folder approach and the VirtualBox shared folder approach; neither works. It seems to have something to do with winLockFile; please check the links below:
http://sqlite.1065341.n5.nabble.com/SQLite3-database-on-windows-network-drive-and-unreliable-connection-td75875.html
https://www.sqlite.org/whentouse.html
I also checked the source code of the Swift Package Manager; it seems there is no way of passing options to swift-build-tool as part of the "swift build" command.
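A hedged workaround, given that building in a native Linux folder works: keep the sources on the Windows mount but point the .build directory (where SQLite keeps build.db) at the container's own filesystem via a symlink. A sketch under that assumption (the /tmp path is just an example), not a verified fix:

# Inside the container, from the project root on the Windows mount
rm -rf .build
mkdir -p /tmp/swift-build        # native container filesystem
ln -s /tmp/swift-build .build    # build.db now lives outside the CIFS mount
swift build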

Creating linked_dirs in Capistrano 3 fails

I am attempting to set up Capistrano with a SilverStripe build and am running into some trouble setting up the shared directories.
I set the linked_dirs in deploy.rb with the following:
set :linked_dirs, %w{assets vendor}
Since adding this line I get the following error:
[617afa7f] Command: /usr/bin/env mkdir -p /var/www/website/releases/20160215083713 /var/www/website/releases/20160215083713
INFO [617afa7f] Finished in 0.250 seconds with exit status 0 (successful).
DEBUG [88c3de20] Running /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [88c3de20] Command: [ -L /var/www/website/releases/20160215083713/assets ]
DEBUG [88c3de20] Finished in 0.258 seconds with exit status 1 (failed).
DEBUG [3d61c1c4] Running /usr/bin/env [ -d /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [3d61c1c4] Command: [ -d /var/www/website/releases/20160215083713/assets ]
DEBUG [3d61c1c4] Finished in 0.254 seconds with exit status 1 (failed).
INFO [3016a8cd] Running /usr/bin/env ln -s /var/www/website/shared/assets /var/www/website/releases/20160215083713/assets as capistrano@128.199.231.152
I am a mega noob when it comes to Capistrano and a semi noob when it comes to server configuration and permissions, so any pointers would be appreciated.
It probably hasn't actually failed. One thing to know about Capistrano is that (success) and (failed) are actually returning the result of the exit status, (success) if 0 and (failed) if non-0.
If we look at the command in question, it says that /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] failed. This command says "return 0 if /var/www/website/releases/20160215083713/assets exists and is a symlink (-L)". It fails, but that just means it returns non-0, and thus the link needs to be created. Note that the next command also fails, asserting with -d that the path is a directory. And the last line in your output is actually creating the link in question.
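You can see this exit-status behavior in isolation (a minimal shell sketch; the paths are just examples):

# [ -L path ] exits 0 only when path is a symlink
[ -L /tmp/not-a-link ]; echo $?     # prints 1, which Capistrano logs as (failed)
ln -s /tmp /tmp/demo-link
[ -L /tmp/demo-link ]; echo $?      # prints 0, which Capistrano logs as (successful)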
You can see the test in the Capistrano codebase here: https://github.com/capistrano/capistrano/blob/master/lib/capistrano/tasks/deploy.rake#L128
You can clean up and simplify the output with https://github.com/mattbrictson/airbrussh. This is developed by one of the primary Capistrano devs.
As a side note: similarly, all the green text in your terminal is stdout and the red text is stderr, which can also be confusing.