I created a sample project with Azure Functions & Dapr, running in Linux Docker. I use Docker Compose with a yml file to start the project, but there are a few issues. I'm using VS 2022 and VS Code.
First, in VS 2022 the Azure Function project will not start. It works for ASP.NET Web and API projects, but for the Azure Function project I get an error saying the project does not support running in Linux. So I opened it in VS Code and ran "docker compose up". The project runs fine, but I was not able to attach a debugger to it.
I first tried in VS 2022 and followed the instructions. I can see my container in the list as follows.
After attaching to the container, it keeps running but never stops at any breakpoint.
Then I tried in VS Code; in the container I have mounted vsdbg to /remote_debugger.
Here is my docker-compose.yml file:
daprfunctester:
  image: ${DOCKER_REGISTRY-}daprfunctester
  container_name: "daprfunctester"
  build:
    context: .
    dockerfile: DaprFuncTester/Dockerfile
  environment:
    - DAPR_HTTP_PORT=3500
    - AzureWebJobsStorage
    - DOTNET_USE_POLLING_FILE_WATCHER
    - NUGET_FALLBACK_PACKAGES
    - NUGET_PACKAGES
    - ASPNETCORE_CONTENTROOT=/azure-functions-host
    - FUNCTIONS_WORKER_RUNTIME=dotnet
    - DOTNET_RUNNING_IN_CONTAINER=true
    - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  volumes:
    - F:\Work\Moneycorp\CodeForFun\DurableForFun:/src
    - C:\Users\winso\vsdbg\vs2017u5:/remote_debugger:rw
    - C:\Users\winso\AppData\Local\AzureFunctionsTools\Containers_2167102\Releases\4.20.0\linuxCLI:/functions_debugging_cli:ro
    - ~/.nuget/packages:/root/.nuget/packages:ro
    - C:\Users\winso\.nuget\packages:/root/.nuget/fallbackpackages:ro
  networks:
    - hello-dapr
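(For context, the Dapr sidecar itself is not shown in this compose file; a typical way to run daprd next to the app service in Compose looks roughly like the sketch below. The app port and image tag are placeholders, not values taken from my project.)
daprfunctester-dapr:
  image: "daprio/daprd:latest"
  # -app-port is a placeholder and must match the port the Functions host
  # listens on inside the container
  command:
    [
      "./daprd",
      "-app-id", "daprfunctester",
      "-app-port", "80",
      "-dapr-http-port", "3500"
    ]
  # share the app container's network namespace so daprd can reach it on localhost
  network_mode: "service:daprfunctester"
  depends_on:
    - daprfunctester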
And my launch.json file:
{
  "name": "Docker .NET Core Attach (Preview)",
  "type": "docker",
  "request": "attach",
  "platform": "netCore",
  "netCore": {
    "debuggerPath": "/remote_debugger/vsdbg",
    //"appProject": "${workspaceFolder}/DaprFuncTester/DaprFuncTester.csproj",
    //"enableDebugging": true
  },
  "sourceFileMap": {
    "/home/site/wwwroot": "${workspaceFolder}/DaprFuncTester"
  },
  "processName": "Microsoft.Azure.WebJobs.Script.WebHost"
  //"processId":"${command:pickProcess}"
},
When I press F5 to run the debugger, it runs, and when it hits a breakpoint it does stop the thread, but it doesn't interact with the code: there is no indicator or pointer to the line of code. If I press F10 ("Step Over") it pops up the error "Failed to step". I'm not able to inspect any variable values. If I check the .NET ThreadPool Worker thread, it has the message "Error processing 'stackTrace' request".
My questions are: since it looks like the debugger in VS Code does stop at the breakpoint, why doesn't it interact with my code? Is something wrong with the settings in my launch.json or docker-compose files?
Related
I am using Google's Cloud Code extension with Visual Studio Code to use GCP's Cloud Build and deploy to a local kubernetes cluster (Docker Desktop). I have directed Cloud Build to run unit tests after installing modules.
When I build using the command line gcloud beta builds submit, Cloud Build does the module install and successfully fails the build because I intentionally wrote a failing unit test. So that's great.
However, when I try to build and deploy using the Cloud Code extension, it is not using my cloudbuild.yaml at all. I know this because:
1.) The build succeeds even with the failing unit test
2.) No logging from the unit test appears in GCP logging
3.) I completely deleted cloudbuild.yaml and the build / deploy still succeeded, which seems to imply Cloud Code is using the Dockerfile
What do I need to do to ensure Cloud Code uses cloudbuild.yaml for its build/deploy to a local instance of kubernetes?
Thanks!
cloudbuild.yaml
steps:
  - name: node
    entrypoint: npm
    args: ['install']
  - id: "test"
    name: node
    entrypoint: npm
    args: ['test']
options:
  logging: CLOUD_LOGGING_ONLY
skaffold.yaml
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: genesys-gencloud-dev
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild: {}
launch.json
{
  "configurations": [
    {
      "name": "Kubernetes: Run/Debug - cloudbuild",
      "type": "cloudcode.kubernetes",
      "request": "launch",
      "skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
      "profile": "cloudbuild",
      "watch": true,
      "cleanUp": false,
      "portForward": true,
      "internalConsoleOptions": "neverOpen",
      "imageRegistry": "gcr.io/my-gcp-project",
      "debug": [
        {
          "image": "my-image-dev",
          "containerName": "my-container-dev",
          "sourceFileMap": {
            "${workspaceFolder}": "/WORK_DIR"
          }
        }
      ]
    }
  ]
}
You will need to edit your skaffold.yaml file to use Cloud Build:
build:
  googleCloudBuild: {}
See https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build for more details.
EDIT: It looks like your skaffold.yaml enables Cloud Build for the cloudbuild profile, but the profile isn't active.
Some options:
Add "profile": "cloudbuild" to your launch.json for 'Run on Kubernetes'.
Move the googleCloudBuild: {} to the top-level build: section. (In other words, skip using the profile)
Activate the profile using one of the other methods from https://skaffold.dev/docs/environment/profiles/#activation
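For the last option, here is a minimal sketch of what activation-based selection could look like in your skaffold.yaml, assuming you want the profile to switch on automatically whenever skaffold dev runs (which is what Cloud Code's run/debug ends up invoking):
profiles:
  - name: cloudbuild
    activation:
      - command: dev        # auto-activate this profile for `skaffold dev`
    build:
      googleCloudBuild: {}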
UPDATE (from asker)
I needed to do the following:
Update skaffold.yaml as follows. In particular, note the image field under build > artifacts, and the projectId field under profiles > build.
apiVersion: skaffold/v2beta19
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
    - context: .
      image: gcr.io/my-project-id/my-image
deploy:
  kubectl:
    manifests:
      - kubernetes-manifests/**
profiles:
  - name: cloudbuild
    build:
      googleCloudBuild:
        projectId: my-project-id
Run this command to activate the profile: skaffold dev -p cloudbuild
Situation
I'm experimenting with writing a VSCode Language Server Protocol (LSP) Extension. I've got it running as follows:
An lsp-server process which is launched by running haskell-lsp-example-exe from terminal
An lsp-client written in Typescript which, for now, basically just launches lsp-server (it's based on the lsp-sample repo)
The lsp-server is launched as follows:
// extension.ts
let serverOptions: ServerOptions = {
  run: {
    command: "haskell-lsp-example-exe"
  },
}
The lsp-client is launched using code --extensionDevelopmentPath="path/to/extension"
I can see that it launches correctly, and I can find its pid through Activity Monitor (I'm on Mac).
Question
How can I see the logs of this process which is spawned by VSCode?
Context
I have tried the following:
In lsp-client/package.json I set the following, which gives me the messages going back and forth, but not the logs of lsp-server:
"languageServerExample.trace.server": {
"scope": "window",
"type": "string",
"enum": [
"off",
"messages",
"verbose"
],
"default": "verbose",
"description": "Traces the communication between VS Code and the language server."
}
I've also tried opening up the dev tools in the launched instance of VSCode, but that gives the logs of the lsp-client.
The log labelled Log (Extension Host) in the launched instance of VSCode also doesn't look too useful.
Thanks in advance for any help!
I have integrated my GitHub repo with Google Cloud Build to automatically build a Docker image after every commit in GitHub. This is working fine, but now I want to run SonarQube analysis on the code before the Docker image is built. For that I have added the SonarQube part to the cloudbuild.yaml file, but I'm not able to get it to run.
I followed the steps provided at this link: https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube
and pushed the sonar-scanner image to Google Container Registry.
My SonarQube server is running on a GCP instance. On every commit in GitHub, Cloud Build is triggered automatically and starts the tasks defined in the cloudbuild.yaml file.
Dockerfile:
FROM nginx
COPY ./ /usr/share/nginx/html
cloudbuild.yaml :
steps:
- name: 'gcr.io/PROJECT_ID/sonar-scanner:latest'
    args:
    - '-Dsonar.host.url=sonarqube_url'
    - '-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19'
    - '-Dsonar.projectKey=sample-project'
    - '-Dsonar.sources=.'
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/PROJECT_ID/html-css-website', '.' ]
images:
- 'gcr.io/PROJECT_ID/html-css-website'
Error:
Status: Build failed
Status detail: failed unmarshalling build config cloudbuild.yaml: yaml: line 3: did not find expected key
If the formatting you've pasted actually matches what you've got in your project then your issue is that the args property within the first steps block is indented too far: it should be aligned with the name property above it.
---
steps:
- name: "gcr.io/PROJECT_ID/sonar-scanner:latest"
  args:
    - "-Dsonar.host.url=sonarqube_url"
    - "-Dsonar.login=c2a7631a6e402c338739091ffbc30e5e3d66cf19"
    - "-Dsonar.projectKey=sample-project"
    - "-Dsonar.sources=."
- name: "gcr.io/cloud-builders/docker"
  args:
    - "build"
    - "-t"
    - "gcr.io/PROJECT_ID/html-css-website"
    - "."
images:
  - "gcr.io/PROJECT_ID/html-css-website"
I have a simple monolithic application generated using JHipster v4.10.1 with a front-end using Angular 4.x. To run the JavaScript unit tests, as suggested in the documentation, I ran:
./node_modules/karma/bin/karma start src/test/javascript/karma.conf.js --debug
The command runs the tests, reports the coverage summary and exits; whether the tests all pass or some fail does not matter. The test run output does show at one point that the debug server is loaded:
21 11 2017 13:41:20.616:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/
But because the command exits, the Karma debug server cannot be accessed. How can I run the tests so that the Karma console can be used in the browser for debugging?
Figured out that the magic flag is actually single-run, which seems to be true by default. So the main command to run for JS debugging is:
yarn test --single-run=false
which in turn runs
$ karma start src/test/javascript/karma.conf.js --single-run=false
With this, the command will only exit on an explicit kill, e.g. with Ctrl+C (or Ctrl+Z). The Karma debug console can then be accessed at http://localhost:9876/debug.html (assuming the default port is not already busy; if it is, the test output should tell you which port was chosen).
Additionally, you need to disable minimization (and also the Istanbul config, though I'm not sure why) so that you can set breakpoints and step through the .ts code in the debugger easily. I figured out that this is done by making the following changes in the webpack/webpack.test.js file:
Remove the following Istanbul config from the module.rules array:
{
    test: /src[/|\\]main[/|\\]webapp[/|\\].+\.ts$/,
    enforce: 'post',
    exclude: /(test|node_modules)/,
    loader: 'sourcemap-istanbul-instrumenter-loader?force-sourcemap=true'
}
Add minimize: false to the LoaderOptionsPlugin under the plugins array:
new LoaderOptionsPlugin({
    minimize: false,
    options: {
        tslint: {
            emitErrors: !WATCH,
            failOnHint: false
        }
    }
})
I'm trying to use Wercker, but I don't know why my tests can't connect to my MongoDB.
I'm using Sails + sails-mongo, and when I run npm test I always get an error that it can't connect to MongoDB. This is my wercker.yml:
box: nodesource/trusty:0.12.7
services:
  - id: mongo:2.6
# Build definition
build:
  # The steps that will be executed on build
  steps:
    - script:
        name: set NODE_ENV
        code: export NODE_ENV=development
    # A step that executes `npm install` command
    - npm-install
    # A step that executes `npm test` command
    - npm-test
    # A custom script step, name value is used in the UI
    # and the code value contains the command that gets executed
    - script:
        name: echo nodejs information
        code: |
          echo "node version $(node -v) running"
          echo "npm version $(npm -v) running"
This is my error message:
warn: `sails.config.express` is deprecated; use `sails.config.http` instead.
Express midleware for passport
error: A hook (`orm`) failed to load!
1) "before all" hook
2) "after all" hook
0 passing (2s)
2 failing
1) "before all" hook:
Uncaught Error: Failed to connect to MongoDB. Are you sure your configured Mongo instance is running?
Error details:
{ [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
at net.js:459:14
2) "after all" hook:
Uncaught Error: Failed to connect to MongoDB. Are you sure your configured Mongo instance is running?
Error details:
{ [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
at net.js:459:14
Out of the box, MongoDB has no authentication, so you just have to provide Sails with the right host and port.
Define a new connection in your sails app in config/connection.js:
mongodbTestingServer: {
  adapter: 'sails-mongo',
  host: process.env.MONGO_PORT_27017_TCP_ADDR,
  port: process.env.MONGO_PORT_27017_TCP_PORT
},
Concerning MONGO_PORT_27017_TCP_ADDR and MONGO_PORT_27017_TCP_PORT: these two environment variables are created by Wercker when you declare a mongo service. That way, you will be able to connect your application to your database with the right host and port.
Add a new environment to your Sails app in config/env/testing.js. It will be used by Wercker:
module.exports = {
  models: {
    connection: 'mongodbTestingServer'
  }
};
In your wercker.yml file, I recommend you use the Ewok stack (based on Docker); you can activate it in the settings of your application. Here is some useful information concerning migration to the Ewok stack. My example uses a box based on a Docker image.
# use the latest official stable node image hosted on DockerHub
box: node
# use the mongo (v2.6) image hosted on DockerHub
services:
  - id: mongo:2.6
# Build definition
build:
  steps:
    # Print node and npm version
    - script:
        name: echo nodejs information
        code: |
          echo "node version $(node -v) running"
          echo "npm version $(npm -v) running"
    - script:
        name: set NODE_ENV
        code: |
          export NODE_ENV=testing
    # install npm dependencies of your project
    - npm-install
    # run tests
    - npm-test
To see all environment variables in your Wercker build, add this step:
- script:
    name: show all environment variables
    code: |
      env
It should work.