I'm writing automated tests for an Android app. My test code is ready and runs fine from Android Studio, and also when I type the command manually on the Android device. But when I put the command into an sh script, it fails with INSTRUMENTATION_FAILED. Can anyone help me fix this? I just don't understand why it works when run directly from the terminal but fails when run through sh.
Manually typed command, which works:
am instrument -w -r -e debug false -e class com.amap.auto.androidautomation.testcases.basemap.SmokeTest com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner
Result:
INSTRUMENTATION_STATUS: numtests=9
INSTRUMENTATION_STATUS: stream=
com.amap.auto.androidautomation.testcases.basemap.SmokeTest:
INSTRUMENTATION_STATUS: id=AndroidJUnitRunner
INSTRUMENTATION_STATUS: test=smoke01
INSTRUMENTATION_STATUS: class=com.amap.auto.androidautomation.testcases.
Run from sh (the command is the same as the manual one, just placed in an sh file):
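Contents of r.sh (just the one manual command above):
am instrument -w -r -e debug false -e class com.amap.auto.androidautomation.testcases.basemap.SmokeTest com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUnitRunner
Invocation: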
sh r.sh
Result:
INSTRUMENTATION_STATUS: id=ActivityManagerService
INSTRUMENTATION_STATUS: Error=Unable to find instrumentation info for: Component
Info{com.amap.auto.androidautomation.test/android.support.test.runner.AndroidJUn
}tRunner
INSTRUMENTATION_STATUS_CODE: -1
android.util.AndroidException: INSTRUMENTATION_FAILED: com.amap.auto.androidauto
mation.test/android.support.test.runner.AndroidJUnitRunner
at com.android.commands.am.Am.runInstrument(Am.java:1093)
at com.android.commands.am.Am.onRun(Am.java:371)
at com.android.internal.os.BaseCommand.run(BaseCommand.java:47)
at com.android.commands.am.Am.main(Am.java:100)
at com.android.internal.os.RuntimeInit.nativeFinishInit(Native Method)
at com.android.internal.os.RuntimeInit.main(RuntimeInit.java:251)
nload/r.sh[2]: exit: not found
I am trying to learn from this repo.
Step 3 of the setup instructions says to run yarn dx.
The package.json for this repo defines that script as:
"dx": "run-p dx:* --print-label",
When I try to do this, I get an error message that says:
yarn dx
yarn run v1.22.19
$ run-p dx:* --print-label
[dx:next         ] $ run-s migrate-sqlite generate-sqlite db-seed && next dev
[dx:prisma-studio] $ pnpm prisma-studio-sqlite
[dx:prisma-studio] /bin/sh: pnpm: command not found
[dx:prisma-studio] error Command failed with exit code 127.
[dx:prisma-studio] info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: "dx:prisma-studio" exited with 127.
error Command failed with exit code 1.
I'm not sure what pnpm is, or why Prisma is trying to use SQLite when the database the repo specifies is Postgres.
Can anyone point me in the direction of what's required to get this repo to start?
If you use yarn, change every occurrence of pnpm in the package.json to yarn and I think it should work :)
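For example, based on the script name visible in your error output (the repo's exact scripts may differ), a line like
"dx:prisma-studio": "pnpm prisma-studio-sqlite",
would become
"dx:prisma-studio": "yarn prisma-studio-sqlite",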
Hi, I am trying to run my tests in parallel (pytest-xdist) on Azure Pipelines.
Until now the tests were running perfectly fine.
Suddenly pytest is throwing a weird error saying "unrecognized arguments".
The file name: integration_test.py
Command used: pytest -n 5 --tb=short integration_test.py -v -s (to run the tests across 5 parallel workers)
Total number of tests: 57
Versions:
pytest==6.2.5
pytest-xdist==2.3.0
I even tried the latest versions of these two modules.
Error :
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: -n integration_test.py
How can I overcome this error?
The error you encountered means pytest does not recognize the -n option, which is provided by the pytest-xdist plugin. As hoefling mentioned, the solution is to install pytest-xdist:
pip install pytest-xdist
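If several Python installations are present, it is safer to install it via the interpreter that will actually run your tests, for example:
python3 -m pip install pytest-xdist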
On macOS, a bare pytest might be run by a different Python version than you think.
$ pytest
============================================================================== test session starts ===============================================================================
platform darwin -- Python 3.9.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
rootdir: [REDACTED]
plugins: anyio-3.5.0, cov-3.0.0
While
$ python3 -m pytest
============================================================================== test session starts ===============================================================================
platform darwin -- Python 3.10.6, pytest-7.1.2, pluggy-1.0.0
rootdir: [REDACTED]
plugins: xdist-2.5.0, forked-1.4.0, pylama-8.4.1
Be careful, and launch it as a module :)
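A quick way to check which interpreter a bare pytest belongs to (output below is illustrative):
$ which pytest
/usr/local/bin/pytest
$ head -1 "$(which pytest)"   # the shebang reveals the owning Python
#!/usr/local/opt/python@3.9/bin/python3.9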
Windows 10, Eclipse (latest version, 2021-03), esp-idf.
With the command-line idf.py I can build and flash the esp-idf\examples\get-started\blink program, which runs on an ESP32.
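The command-line steps that work are essentially the standard ones (COM3 being my serial port):
idf.py build
idf.py -p COM3 flash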
In Eclipse, the build works, but the run command displays this in the console:
Usage: C:\Users\peter\esp-idf\tools\idf.py [OPTIONS] COMMAND1 [ARGS]...
[COMMAND2 [ARGS]...]...
ESP-IDF CLI build management tool. For commands that are not known to
idf.py an attempt to execute it as a build system target will be made.
.... bla bla ...
Can anybody tell me what is wrong?
Regards
There is a bug in the eclipse/esp-idf launch bar.
The ESP target for esp32 is defined with a Serial Port COM3.
But that info is not used.
If you define a new ESP Target with the same serial port under a different name, the run command will work!
See below for the details, for anyone interested:
cmd.exe /C "cd /D C:\Users\peter\esp-idf\components\esptool_py && C:\Users\peter\.espressif\tools\cmake\3.16.4\bin\cmake.exe -D IDF_PATH="C:/Users/peter/esp-idf" -D ESPTOOLPY="C:\Users\peter\.espressif\python_env\idf4.2_py3.8_env\Scripts\python.exe C:/Users/peter/esp-idf/components/esptool_py/esptool/esptool.py --chip esp32" -D ESPTOOL_ARGS="--before=default_reset --after=hard_reset write_flash @flash_args" -D WORKING_DIRECTORY="C:/Users/peter/eclipse-workspace/blink/build" -P C:/Users/peter/esp-idf/components/esptool_py/run_esptool.cmake"
esptool.py --chip esp32 -p COM3 -b 460800 --before=default_reset --after=hard_reset write_flash --flash_mode dio --flash_freq 40m --flash_size 2MB 0x8000 partition_table/partition-table.bin 0x1000 bootloader/bootloader.bin 0x10000 blink.bin
esptool.py v3.0
I have the following devcontainer.json file in a project.
When I try to open VSCode in a container, it crashes. The container builds successfully, but the following logs are emitted during startup. When I remove the environment variable configuration, the container starts up and stays running just fine.
I followed the example for configuring environment variables inside the dev container, according to the Visual Studio Code documentation for Advanced Container Configuration.
Question: How do I properly configure the PATH environment variable in my devcontainer.json file?
devcontainer.json
{
"name": "Ubuntu 18.04 & Git",
"dockerFile": "Dockerfile",
"settings": {
"terminal.integrated.shell.linux": "/bin/bash"
},
"containerEnv": {
"PATH": "${containerEnv:PATH}:/root/.customfolder/bin/"
}
}
Logs
[6499 ms] Successfully built 096d41dceada
[6503 ms] Successfully tagged vsc-asdf-73cee28d5205fdd4a6063fc596248885:latest
[6506 ms] Start: Run: git rev-parse --show-toplevel
[6533 ms] Start: Starting container
[6533 ms] Start: Run: docker run -a STDOUT -a STDERR --mount type=bind,source=/Users/username/git/asdf,target=/workspaces/asdf,consistency=cached --mount source=/Users/username/.aws/credentials,target=/root/.aws/credentials,type=bind -l vsch.quality=stable -l vsch.remote.devPort=0 -l vsch.local.folder=/Users/username/git/asdf -e PATH=${containerEnv:PATH}:/root/.customfolder/bin/ --entrypoint /bin/sh vsc-pulumi-73cee28d5205fdd4a6063fc596248885 -c echo Container started ; while sleep 1; do :; done
[6852 ms] /bin/sh: 1: sleep: not found
[6852 ms] Container started
[6873 ms] Start: Inspecting container
[6879 ms] Start: Run in container: uname -m
[7031 ms] Start: Run in container: cat /etc/passwd
[7035 ms] Shell server terminated (code: 1, signal: null)
Error response from daemon: Container 8e0f6eeb22c358b0dfd8f1c1410c10b382ea66aa432e7e400a4564671619046f is not running
An error occurred setting up the container
Environment
macOS Catalina
Docker Desktop 2.2.0.0
Microsoft Visual Studio Code 1.42.0
VSCode Remote-Containers extension 0.101.0
You should be able to change the property from containerEnv to remoteEnv to resolve the issue.
Only the remoteEnv property supports referencing existing container env vars. The containerEnv property is like -e for the Docker CLI and is therefore evaluated before the container is created. This is mainly useful when your Dockerfile itself depends on certain env vars being set (though you can modify the PATH inside your Dockerfile if you so desire).
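For example, a minimal Dockerfile sketch of that alternative (base image and folder taken from your devcontainer.json and question):
FROM ubuntu:18.04
# Bake the extra bin directory into the image's PATH at build time
ENV PATH="${PATH}:/root/.customfolder/bin/"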
For everything else, remoteEnv is the way to go, since VS Code and all sub-processes like terminals use it. Since this is evaluated after the container is created, you can update the PATH as the example illustrates.
"remoteEnv": {
"PATH": "${containerEnv:PATH}:/some/other/path",
"MY_REMOTE_VARIABLE": "some-other-value-here",
"MY_REMOTE_VARIABLE2": "${localEnv:SOME_LOCAL_VAR}"
}
"containerEnv": {
"PATH": "${localEnv:PATH}:/workspaces/v8/depot_tools"
}
I think that is what you need. Note that ${localEnv:PATH} resolves against your local machine (the host running VS Code), so the container's PATH is seeded from the host's PATH plus the extra directory.
I am attempting to set up Capistrano with a SilverStripe build and am running into a few problems setting up the shared directories.
I set the linked_dirs in deploy.rb with the following:
set :linked_dirs, %w{assets vendor}
Since adding this line I get the following error:
[617afa7f] Command: /usr/bin/env mkdir -p /var/www/website/releases/20160215083713 /var/www/website/releases/20160215083713
INFO [617afa7f] Finished in 0.250 seconds with exit status 0 (successful).
DEBUG [88c3de20] Running /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [88c3de20] Command: [ -L /var/www/website/releases/20160215083713/assets ]
DEBUG [88c3de20] Finished in 0.258 seconds with exit status 1 (failed).
DEBUG [3d61c1c4] Running /usr/bin/env [ -d /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [3d61c1c4] Command: [ -d /var/www/website/releases/20160215083713/assets ]
DEBUG [3d61c1c4] Finished in 0.254 seconds with exit status 1 (failed).
INFO [3016a8cd] Running /usr/bin/env ln -s /var/www/website/shared/assets /var/www/website/releases/20160215083713/assets as capistrano@128.199.231.152
I am a mega noob when it comes to Capistrano and a semi noob when it comes to server configuration and permissions, so any pointers would be appreciated.
It probably hasn't actually failed. One thing to know about Capistrano is that (successful) and (failed) simply report the command's exit status: (successful) if it is 0 and (failed) if it is non-0.
If we look at the command in question, it says that /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] failed. This command means "return 0 if /var/www/website/releases/20160215083713/assets exists and is a symlink (-L)". It fails, but that just means it returns non-0, so the link needs to be created. Note that the next command also "fails": the -d test checks whether the path is a directory. And the last line in your output is actually creating the link in question.
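You can run the same test by hand; the exit status is exactly what Capistrano reports (path copied from your output):
$ [ -L /var/www/website/releases/20160215083713/assets ]
$ echo $?   # 1 just means "not a symlink (yet)", not a real error
1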
You can see the test in the Capistrano codebase here: https://github.com/capistrano/capistrano/blob/master/lib/capistrano/tasks/deploy.rake#L128
You can clean up and simplify the output with https://github.com/mattbrictson/airbrussh. This is developed by one of the primary Capistrano devs.
As a side note, all the green text in your terminal is similarly stdout and the red text is stderr. This can also be confusing.