How to configure the DAP debugger under Neovim for TypeScript?

I'm trying to configure the DAP debugger in Neovim for a TypeScript application.
I added the DAP plugin:
use "mfussenegger/nvim-dap"
I also have a config.lua file containing the adapter and configuration:
local status_ok, dap = pcall(require, "dap")
if not status_ok then
  return
end

dap.adapters.chrome = {
  type = "executable",
  command = "node",
  args = { os.getenv("HOME") .. "/dev/dap-debugger/vscode-js-debug/out/src/debugServerMain.js", "45635" }
}

dap.configurations.typescript = {
  {
    type = "chrome",
    request = "attach",
    program = "${file}",
    debugServer = 45635,
    cwd = vim.fn.getcwd(),
    sourceMaps = true,
    protocol = "inspector",
    port = 9222,
    webRoot = "${workspaceFolder}"
  }
}
When I try to start the debugger from Neovim in my TypeScript project with the :lua require'dap'.continue() command, I get the error:
Debug adapter didn't respond. Either the adapter is slow (then wait and ignore this) or there is a problem with your adapter or `chrome` configuration. Check the logs for errors (:help dap.set_log_level)
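For reference, the log level can be raised with the call that the message refers to:
:lua require('dap').set_log_level('DEBUG')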
But the ~/.cache/nvim/dap.log DAP log shows no error:
[ DEBUG ] 2022-04-12T08:49:37Z+0200 ] ...nvim/site/pack/packer/start/nvim-dap/lua/dap/session.lua:776 ] "Spawning debug adapter" {
  args = { "/home/stephane/dev/dap-debugger/vscode-js-debug/out/src/debugServerMain.js", "45635" },
  command = "node",
  type = "executable"
}
[ DEBUG ] 2022-04-12T08:49:37Z+0200 ] ...nvim/site/pack/packer/start/nvim-dap/lua/dap/session.lua:965 ] "request" {
  arguments = {
    adapterID = "nvim-dap",
    clientId = "neovim",
    clientname = "neovim",
    columnsStartAt1 = true,
    linesStartAt1 = true,
    locale = "en_US.UTF-8",
    pathFormat = "path",
    supportsRunInTerminalRequest = true,
    supportsVariableType = true
  },
  command = "initialize",
  seq = 0,
  type = "request"
}
I can set breakpoints with the command:
lua require'dap'.toggle_breakpoint()
I also installed the VS Code JS debugger with the following commands:
git clone https://github.com/microsoft/vscode-js-debug
cd vscode-js-debug/
npm i
gulp
I can see that my Chrome browser is listening on the 9222 port:
chrome 208069 stephane 118u IPv4 1193769 0t0 TCP 127.0.0.1:9222 (LISTEN)
If I run the debugger manually, I can see it starts on the given port number:
09:16 $ node ~/dev/dap-debugger/vscode-js-debug/out/src/debugServerMain.js 45635
Debug server listening at 45635
I'm on NVIM v0.7.0-dev
My Angular application is running and responds fine.
UPDATE: The debugger I was trying to use does not implement the DAP standard. I guess I need to find an alternative.

The VS Code Chrome debugger is deprecated and has been replaced by the VS Code JS debugger, which is compatible with all browsers. However, the VS Code JS debugger is not DAP compliant, so the VS Code Chrome debugger is still the one to use for now.
Installing the debugger:
git clone git@github.com:microsoft/vscode-chrome-debug.git
cd vscode-chrome-debug
npm install
npm run build
Configuring the debugger:
local function configureDebuggerAngular(dap)
  dap.adapters.chrome = {
    -- executable: launch the remote debug adapter - server: connect to an already running debug adapter
    type = "executable",
    -- command to launch the debug adapter - used only on executable type
    command = "node",
    args = { os.getenv("HOME") .. "/.local/share/nvim/lsp-debuggers/vscode-chrome-debug/out/src/chromeDebug.js" }
  }
  -- The configuration must be named: typescript
  dap.configurations.typescript = {
    {
      name = "Debug (Attach) - Remote",
      type = "chrome",
      request = "attach",
      -- program = "${file}",
      -- cwd = vim.fn.getcwd(),
      sourceMaps = true,
      -- reAttach = true,
      trace = true,
      -- protocol = "inspector",
      -- hostName = "127.0.0.1",
      port = 9222,
      webRoot = "${workspaceFolder}"
    }
  }
end

local function configureDap()
  local status_ok, dap = pcall(require, "dap")
  if not status_ok then
    print("The dap extension could not be loaded")
    return
  end

  dap.set_log_level("DEBUG")

  vim.highlight.create('DapBreakpoint', { ctermbg = 0, guifg = '#993939', guibg = '#31353f' }, false)
  vim.highlight.create('DapLogPoint', { ctermbg = 0, guifg = '#61afef', guibg = '#31353f' }, false)
  vim.highlight.create('DapStopped', { ctermbg = 0, guifg = '#98c379', guibg = '#31353f' }, false)

  vim.fn.sign_define('DapBreakpoint', { text = '', texthl = 'DapBreakpoint', linehl = 'DapBreakpoint', numhl = 'DapBreakpoint' })
  vim.fn.sign_define('DapBreakpointCondition', { text = 'ﳁ', texthl = 'DapBreakpoint', linehl = 'DapBreakpoint', numhl = 'DapBreakpoint' })
  vim.fn.sign_define('DapBreakpointRejected', { text = '', texthl = 'DapBreakpoint', linehl = 'DapBreakpoint', numhl = 'DapBreakpoint' })
  vim.fn.sign_define('DapLogPoint', { text = '', texthl = 'DapLogPoint', linehl = 'DapLogPoint', numhl = 'DapLogPoint' })
  vim.fn.sign_define('DapStopped', { text = '', texthl = 'DapStopped', linehl = 'DapStopped', numhl = 'DapStopped' })

  return dap
end

local function configure()
  local dap = configureDap()
  if nil == dap then
    print("The DAP core debugger could not be set")
    return
  end
  configureDebuggerAngular(dap)
end
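As a usage sketch (the key mappings here are arbitrary examples, not part of the original setup), the entry point above can be called once from the plugin configuration and a few standard nvim-dap commands bound to keys:
-- Run the configuration defined above, then bind the usual nvim-dap actions.
configure()

local dap = require("dap")
vim.keymap.set("n", "<F5>", dap.continue)
vim.keymap.set("n", "<F10>", dap.step_over)
vim.keymap.set("n", "<F11>", dap.step_into)
vim.keymap.set("n", "<Leader>b", dap.toggle_breakpoint)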

Related

Neovim starting multiple LSP clients for each buffer

I recently realized that my Neovim automatically spawns the same language server (in this case, tsserver and tailwindcss) every time I open a file.
Everything works fine when opening the first file.
However, once I open another file, it spawns another LSP client.
Here's my Neovim config for the LSP.
-- Mappings.
-- See `:help vim.diagnostic.*` for documentation on any of the below functions
local opts = { noremap = true, silent = true }
vim.api.nvim_set_keymap("n", "<space>e", "<cmd>lua vim.diagnostic.open_float()<CR>", opts)
vim.api.nvim_set_keymap("n", "[d", "<cmd>lua vim.diagnostic.goto_prev()<CR>", opts)
vim.api.nvim_set_keymap("n", "]d", "<cmd>lua vim.diagnostic.goto_next()<CR>", opts)
vim.api.nvim_set_keymap("n", "<space>q", "<cmd>lua vim.diagnostic.setloclist()<CR>", opts)

-- Use an on_attach function to only map the following keys
-- after the language server attaches to the current buffer
local on_attach = function(client, bufnr)
  vim.api.nvim_buf_set_option(bufnr, "omnifunc", "v:lua.vim.lsp.omnifunc")
  -- Mappings.
  -- See `:help vim.lsp.*` for documentation on any of the below functions
  vim.api.nvim_buf_set_keymap(bufnr, "n", "gd", "<cmd>lua vim.lsp.buf.definition()<CR>", opts)
end

local capabilities = require("cmp_nvim_lsp").default_capabilities(vim.lsp.protocol.make_client_capabilities())

local rawCapabilitiesWithoutFormatting = vim.lsp.protocol.make_client_capabilities()
rawCapabilitiesWithoutFormatting.textDocument.formatting = false
rawCapabilitiesWithoutFormatting.textDocument.rangeFormatting = false
local capabilitiesWithoutFormatting = require("cmp_nvim_lsp").default_capabilities(rawCapabilitiesWithoutFormatting)

-- Use a loop to conveniently call 'setup' on multiple servers and
-- map buffer local keybindings when the language server attaches, for
-- servers that don't need any special treatment
local servers = {
  "bashls",
  "clangd",
  "cssls",
  "eslint",
  "gopls",
  "html",
  "jsonls",
  "rust_analyzer",
  "svelte",
  "tailwindcss",
  "vimls",
  "volar",
  "prismals",
  "marksman",
}
for _, lsp in pairs(servers) do
  require("lspconfig")[lsp].setup({
    on_attach = on_attach,
    flags = {
      debounce_text_changes = 300,
    },
    capabilities = capabilities,
  })
end

-- setup tsserver manually like a pro
require("lspconfig").tsserver.setup({
  on_attach = function(client, bufnr)
    client.server_capabilities.document_formatting = false
    client.server_capabilities.document_range_formatting = false
    on_attach(client, bufnr)
  end,
  flags = {
    debounce_text_changes = 300,
  },
  capabilities = capabilitiesWithoutFormatting,
  settings = {
    documentFormatting = false,
  },
  root_dir = require("lspconfig.util").find_git_ancestor,
})

vim.lsp.handlers["textDocument/publishDiagnostics"] = vim.lsp.with(vim.lsp.diagnostic.on_publish_diagnostics, {
  underline = true,
  -- This sets the spacing and the prefix, obviously.
  virtual_text = {
    spacing = 4,
  },
  signs = true,
  update_in_insert = true,
})
It was an issue on nvim-lspconfig's end; however, it was fixed in a recent patch. Just pull in the latest version and it should work as expected.
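After pulling the latest version, one way to confirm that only a single client attaches per buffer is :LspInfo, or a quick one-off Lua call using the stock Neovim API (a sketch; adapt as needed):
:lua for _, client in pairs(vim.lsp.buf_get_clients(0)) do print(client.id, client.name) end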

Terraform import error retrieving Virtual Machine Scale Set created from an image

I'm trying to import a Linux VM Scale Set that was deployed in the Azure Portal from a custom shared image, also created in the portal. I'm using the following command:
terraform import module.vm_scaleset.azurerm_linux_virtual_machine_scale_set.vmscaleset /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1
Import fails with the following error:
Error: retrieving Virtual Machine Scale Set "vmss1" (Resource Group "myrg"): properties.virtualMachineProfile.osProfile was nil
Below is my VM Scale Set module code:
data "azurerm_lb" "loadbalancer" {
name = var.lbName
resource_group_name = var.rgName
}
data "azurerm_lb_backend_address_pool" "addresspool" {
loadbalancer_id = data.azurerm_lb.loadbalancer.id
name = var.lbAddressPool
}
data "azurerm_shared_image" "scaleset_image" {
provider = azurerm.ist
name = var.scaleset_image_name
gallery_name = var.scaleset_image_gallery
resource_group_name = var.scaleset_image_rgname
}
resource "azurerm_linux_virtual_machine_scale_set" "vmscaleset" {
name = var.vmssName
resource_group_name = var.rgName
location = var.location
sku = var.vms_sku
instances = var.vm_instances
admin_username = azurerm_key_vault_secret.vmssusername.value
admin_password = azurerm_key_vault_secret.vmsspassword.value
disable_password_authentication = false
zones = var.vmss_zones
source_image_id = data.azurerm_shared_image.scaleset_image.id
tags = module.vmss_tags.tags
os_disk {
storage_account_type = var.vmss_osdisk_storage
caching = "ReadWrite"
create_option = "FromImage"
}
data_disk {
storage_account_type = "StandardSSD_LRS"
caching = "None"
disk_size_gb = 1000
lun = 10
create_option = "FromImage"
}
network_interface {
name = format("nic-%s-001", var.vmssName)
primary = true
enable_accelerated_networking = true
ip_configuration {
name = "internal"
load_balancer_backend_address_pool_ids = [data.azurerm_lb_backend_address_pool.addresspool.id]
primary = true
subnet_id = var.subnet_id
}
}
lifecycle {
ignore_changes = [
tags
]
}
}
The source image was created from a Linux RHEL 8.6 VM that included a custom node.js script.
Examination of the Scale Set in the portal does indeed show that the virtualMachineProfile.osProfile is absent.
I haven't been able to find a solution on any forum. Is there any way to ignore the error and import the Scale Set anyway?

neovim's lsp does not show error messages

I have a Neovim config which contains an LSP configuration.
On my laptop I have Neovim 0.7.2, which works perfectly,
but on my desktop I have Neovim 0.8.0 and the LSP still works but does not show error messages.
This is my config:
vim.opt.tabstop = 2
vim.cmd([[colorscheme dedsec]])
vim.cmd([[set number]])
vim.cmd([[set noswapfile]])
vim.cmd([[set mouse=a]])
vim.opt.shiftwidth = 2
vim.opt.softtabstop = 2
vim.opt.expandtab = true
require("packer").startup(function(use)
-- use "wbthompson/packer.nvim"
use "vim-airline/vim-airline"
use 'vim-airline/vim-airline-themes'
use 'ryanoasis/vim-devicons'
use 'jpalardy/vim-slime'
use 'shime/vim-livedown'
use 'ap/vim-css-color'
use 'terryma/vim-multiple-cursors'
use 'mattn/emmet-vim'
use 'scrooloose/nerdtree'
use 'mxw/vim-jsx'
-- LSP
use {
'VonHeikemen/lsp-zero.nvim',
requires = {
-- LSP Support
{'neovim/nvim-lspconfig'},
{'williamboman/mason.nvim'},
{'williamboman/mason-lspconfig.nvim'},
-- Autocompletion
{'hrsh7th/nvim-cmp'},
{'hrsh7th/cmp-buffer'},
{'hrsh7th/cmp-path'},
{'saadparwaiz1/cmp_luasnip'},
{'hrsh7th/cmp-nvim-lsp'},
{'hrsh7th/cmp-nvim-lua'},
-- Snippets
{'L3MON4D3/LuaSnip'},
{'rafamadriz/friendly-snippets'},
}
}
end)
vim.cmd([[
  let g:airline_powerline_fonts = 1 "Enable Powerline font support
  let g:airline#extensions#keymap#enabled = 0 "Don't show the current keymap
  let g:airline_section_z = "\ue0a1:%l/%L Col:%c" "Custom cursor-position section
  let g:Powerline_symbols='unicode' "Unicode support
  let g:airline#extensions#xkblayout#enabled = 0 "I'll explain this one later
  set guioptions= "Disable scrollbars in the GUI
  set showtabline=1 "Disable the tab line (windows FTW)
  let g:slime_target = "tmux"
  let g:slime_target = "neovim"
  nnoremap <leader>n :NERDTreeFocus<CR>
  nnoremap <C-n> :NERDTree<CR>
  nnoremap <C-b> :NERDTreeToggle<CR>
  nnoremap <C-f> :NERDTreeFind<CR>
  let g:user_emmet_mode='a'
  set colorcolumn=109
]])
local lsp = require('lsp-zero')
lsp.preset('recommended')
lsp.nvim_workspace()
lsp.setup()
local servers = { 'pyright', 'tsserver', 'jdtls', 'rust-analyzer', 'clangd', 'sumneko_lua'}
for _, lsp in pairs(servers) do
  require('lspconfig')[lsp].setup {
    on_attach = on_attach,
    flags = {
      debounce_text_changes = 150,
    }
  }
end
local signs = { Error = " ", Warn = " ", Hint = " ", Info = " " }
for type, icon in pairs(signs) do
  local hl = "DiagnosticSign" .. type
  vim.fn.sign_define(hl, { text = icon, texthl = hl, numhl = hl })
end
This is how it works on my laptop.
This is how it works on my desktop.
I have tried installing Neovim 0.7.2, but it does not help.
I have also tried updating and reinstalling the plugins, but that does not help either.
Simply follow this documentation:
https://github.com/VonHeikemen/lsp-zero.nvim/blob/v1.x/doc/md/lsp.md#diagnostics
Change
virtual_text = false,
to
virtual_text = true,
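If you would rather set this from your own config, one way is to call vim.diagnostic.config() after lsp.setup(); a minimal sketch, assuming the lsp-zero setup shown in the question:
local lsp = require('lsp-zero')
lsp.preset('recommended')
lsp.nvim_workspace()
lsp.setup()

-- Re-enable inline diagnostic messages after lsp-zero applies its defaults.
vim.diagnostic.config({
  virtual_text = true,
})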
If you have the same situation, try installing williamboman/nvim-lsp-installer. It helped in my situation.

Tags format on Packer ec2-ami deployment

I'm trying to create an Amazon EC2 AMI for the first time using HashiCorp Packer, but I'm getting this failure on the tag creation. I have already retried a few formats by trial and error, but I'm still out of luck.
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$ packer init .
Error: Missing item separator
on variables.pkr.hcl line 28, in variable "tags":
27: default = [
28: "environment" : "testing"
Expected a comma to mark the beginning of the next item.
My code ec2.pkr.hcl looks like this:
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$ cat ec2.pkr.hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "ec2" {
  ami_name           = "${var.ami_prefix}-${local.timestamp}"
  instance_type      = "t2.micro"
  region             = "us-east-1"
  vpc_id             = "${var.vpc}"
  subnet_id          = "${var.subnet}"
  security_group_ids = ["${var.sg}"]
  ssh_username       = "ec2-boy-oh-boy"

  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-2.0*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["12345567896"]
  }

  launch_block_device_mappings = [
    {
      "device_name": "/dev/xvda",
      "delete_on_termination": true
      "volume_size": 10
      "volume_type": "gp2"
    }
  ]

  run_tags        = "${var.tags}"
  run_volume_tags = "${var.tags}"
}

build {
  sources = [
    "source.amazon-ebs.ec2"
  ]
}
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$
Then my code variables.pkr.hcl looks like this:
[ec2-boy-oh-boy@ip-172-168-99-23 pogi]$ cat variables.pkr.hcl
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

variable "ami_prefix" {
  type    = string
  default = "ec2-boy-oh-boy"
}

variable "vpc" {
  type    = string
  default = "vpc-asd957d"
}

variable "subnet" {
  type    = string
  default = "subnet-asd957d"
}

variable "sg" {
  type    = string
  default = "sg-asd957d"
}

variable "tags" {
  type    = map
  default = [
    environment = "testing"
    type        = "none"
    production  = "later"
  ]
}
Your default value for the tags variable is of type list(string). Both the run_tags and run_volume_tags directives expect type map[string]string.
I was able to make the following changes to your variables file and run packer init successfully:
variable "tags" {
default = {
environment = "testing"
type = "none"
production = "later"
}
type = map(string)
}

Buildbot configuration error

I have installed a Buildbot master and a slave, and I run the slave after starting the master. This is my master script for a build named simplebuild.
c = BuildmasterConfig = {}
c['status'] = []

from buildbot.status import html
from buildbot.status.web import authz, auth

authz_cfg = authz.Authz(
    auth = auth.BasicAuth([("slave1", "slave1")]),
    gracefulShutdown = False,
    forceBuild = 'auth',
    forceAllBuilds = False,
    pingBuilder = False,
    stopBuild = False,
    stopAllBuilds = False,
    cancelPendingBuild = False,
)
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))

from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand

qmake = ShellCommand(name = "qmake",
                     command = ["qmake"],
                     haltOnFailure = True,
                     description = "qmake")
makeclean = ShellCommand(name = "make clean",
                         command = ["make", "clean"],
                         haltOnFailure = True,
                         description = "make clean")
checkout = SVN(baseURL = "file:///home/aguerofire/buildbottestsetup/codeRepo/",
               mode = "update",
               username = "pawan",
               password = "pawan",
               haltOnFailure = True)
makeall = ShellCommand(name = "make all",
                       command = ["make", "all"],
                       haltOnFailure = True,
                       description = "make all")

f_simplebuild = BuildFactory()
f_simplebuild.addStep(checkout)
f_simplebuild.addStep(qmake)
f_simplebuild.addStep(makeclean)
f_simplebuild.addStep(makeall)

from buildbot.buildslave import BuildSlave
c['slaves'] = [
    BuildSlave('slave1', 'slave1'),
]
c['slavePortnum'] = 13333

from buildbot.config import BuilderConfig
c['builders'] = [
    BuilderConfig(name = "simplebuild", slavenames = ['slave1'], factory = f_simplebuild)
]

from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.changes import filter

trunkchanged = SingleBranchScheduler(name = "trunkchanged",
                                     change_filter = filter.ChangeFilter(branch = 'master'),
                                     treeStableTimer = 10,
                                     builderNames = ["simplebuild"])
c['schedulers'] = [ trunkchanged ]

from buildbot.changes.svnpoller import SVNPoller
svnpoller = SVNPoller(svnurl = "file:///home/aguerofire/buildbottestsetup/codeRepo/",
                      svnuser = "pawan",
                      svnpasswd = "pawan",
                      pollinterval = 20,
                      split_file = None)
c['change_source'] = svnpoller
After running this script, when I check the build status in the browser, I don't get any status for the build.
The detail inside the waterfall view is:
My first question is: where is the actual build performed, at the master's end or the slave's end?
What could be the problem in the Buildbot configuration? I made an error in a commit and was trying to find out whether it would show up in the waterfall display, but again there is no error, and the same screen appears in both the console view and the waterfall view.
Builds are run on a slave; the master just manages schedulers, builders and slaves.
It seems that builds are not being run. As for your second screenshot, it shows change info, but not build info. What does your "builders" tab show?