What's the best way to identify which Visual Studio Code setting is generating / allowing various suggestions to pop up (so it can be turned off)? In particular I'd like to eliminate these three from ever showing.
Those suggestions are types from the standard library. The TypeScript service that powers VS Code's JavaScript and TypeScript language features loads these types from .d.ts files in order to understand the signatures of standard JavaScript library functions such as parseInt or Promise.
To find out where a type is coming from, try using workspace symbol search (Cmd+T):
In this case, these types come from the standard lib.d.ts file that TypeScript loads automatically. TypeScript also automatically loads a .d.ts file for the DOM API.
To disable these suggestions, create a jsconfig.json at the root of your project with the contents:
{
    "compilerOptions": {
        "lib": []
    }
}
This tells TypeScript not to include any extra typings files for the core libraries. You can also select which typings you want to include:
{
    "compilerOptions": {
        "lib": [
            "es2015"
        ]
    }
}
See the documentation for a list of valid lib options.
If you notice any bugs with this behavior or have any suggestions on how this could be improved, please file an issue against VS Code.
Update
To discover where a type suggestion is coming from, you may also be able to write:
/**
 * @type {AsyncResultObjectCallback}
 */
var placeholder;
And then run Go to Type Definition on placeholder. Even with "lib": [], you may still see suggestions from @types packages or node packages that include .d.ts files.
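If the leftover suggestions come from @types packages, their automatic inclusion can be disabled as well. A minimal sketch ("types" is a standard TypeScript compiler option; whether it removes every remaining suggestion depends on which of your packages bundle their own .d.ts files):

```json
{
    "compilerOptions": {
        "lib": [],
        "types": []
    }
}
```

With "types": [], TypeScript stops picking up packages from node_modules/@types automatically; packages that ship their own .d.ts files next to their JavaScript are still loaded when you import them.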
Related
I'm using VS Code with Modern Fortran + fortls to write Fortran code with the PETSc library. Declarations using PETSc's keywords do not seem to be recognized at all. I tried including the paths to the PETSc library and header files in the .fortls file using include_dirs etc., but that did not resolve the issue. Can anyone give me a hint so that all the keywords and function declarations in PETSc can be handled by fortls? Thanks a lot.
I used the following in the .fortls file:
{
    "source_dirs": [
        "./**",
        "./src/**",
        "/Users/apple/local/petsc/osx-gfc/include/**",
        "/Users/apple/local/petsc/include/**",
        "/Users/apple/local/petsc/include/petsc/finclude",
        "/Users/apple/local/petsc/src/vec/f90-mod",
        "/Users/apple/local/petsc/src/ksp/f90-mod",
        "/Users/apple/local/petsc/src/vec/f90-mod",
        "/Users/apple/local/petsc/src/dm/f90-mod",
        "/Users/apple/local/petsc/src/mat/f90-mod"
    ],
    "include_dirs": [
        "./include/**",
        "/Users/apple/local/petsc/osx-gfc/include/**",
        "/Users/apple/local/petsc/include/**",
        "/Users/apple/local/petsc/include/petsc/finclude/",
        "/Users/apple/local/petsc/src/ksp/f90-mod",
        "/Users/apple/local/petsc/src/vec/f90-mod",
        "/Users/apple/local/petsc/src/snes/f90-mod"
    ],
    "ext_source_dirs": [
        "/Users/apple/local/petsc/src/snes/f90-mod/",
        "/Users/apple/local/petsc/src/sys/f90-mod/",
        "/Users/apple/local/petsc/osx-gfc/include/**",
        "/Users/apple/local/petsc/src/vec/f90-mod",
        "/Users/apple/local/petsc/src/dm/f90-mod",
        "/Users/apple/local/petsc/src/mat/f90-mod",
        "/Users/apple/local/petsc/src/ksp/f90-mod",
        "/Users/apple/local/petsc/src/vec/f90-mod"
    ],
    "hover_language": "fortran90",
    "hover_signature": true,
    "use_signature_help": true
}
But type keywords like SNES and KSP are not recognized by the IDE at all.
You need to tweak your language server settings slightly. You are attempting to use PETSc's preprocessor variables without "defining" them in the language server. Parsing files that contain preprocessor macros can sometimes be a bit flimsy in the language server.
Assuming you are using fortls, you can define preprocessor definitions as shown in the documentation; see: https://gnikit.github.io/fortls/options.html#pp-defs
You should also be able to do that via the VS Code settings.
Beware of this bug (https://github.com/gnikit/fortls/issues/72) where defining a preprocessor macro will override actual Fortran tokens like IF or ELSE.
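As a sketch, a .fortls along these lines should get the definitions in; pp_defs is the documented option (see the link above), but the macro name and value here are placeholders, so substitute the actual PETSc preprocessor symbols your sources rely on:

```json
{
    "pp_defs": {
        "SOME_PETSC_MACRO": "1"
    },
    "include_dirs": ["/Users/apple/local/petsc/include/petsc/finclude"]
}
```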
Full disclosure: I am the author of fortls and the Modern Fortran VS Code extension.
I'm trying to use Visual Studio Code to develop Fortran MPI programs. I can build and run them just fine, but it would be very helpful to have IntelliSense/autocompletion for MPI (as well as other external modules). I have /usr/lib/openmpi/ (which contains mpi_f08.mod) as part of fortran.includePaths in my settings.json. However, when I use mpi_f08, VS Code reports the problem Module "mpi_f08" not found in project. Here is a minimal CMake build example:
! hello.f90
program hello
    use mpi_f08
    implicit none
    integer :: ierror, nproc, my_rank

    call MPI_Init()
    call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierror)
    call MPI_Comm_rank(MPI_COMM_WORLD, my_rank, ierror)
    print *, "hello from rank ", my_rank
    call MPI_Finalize()
end program hello
# CMakeLists.txt
cmake_minimum_required(VERSION 3.12)
project(hello_mpi)
enable_language(Fortran)
find_package(MPI REQUIRED)
add_executable(hello_mpi hello.f90)
include_directories(${MPI_Fortran_INCLUDE_PATH})
target_link_libraries(hello_mpi PUBLIC ${MPI_Fortran_LIBRARIES})
I would like to be able to (i) get rid of the warning/message and, more importantly, (ii) enable suggestions from MPI when I press CTRL+space, as it would if I were calling an internal module.
I'll post a partial answer since it's better than nothing; hopefully this helps someone else and/or enables someone else to answer my question fully.
It seems the issue relates to the Fortran language server, which can be configured by adding a .fortls JSON file, as explained in its GitHub README: https://github.com/hansec/fortran-language-server
I added the following, which allowed it to find not only local modules but also MPI (and the external module json-fortran):
{
    "source_dirs": ["src", "."],
    "ext_source_dirs": [
        "/path/to/json-fortran/src",
        "/path/to/openmpi-4.1.2/ompi/mpi/fortran/use-mpi-f08"
    ]
}
This doesn't capture all functions in json-fortran, which I think is because of its .inc files, as it doesn't offer procedures like json_file::get in autocomplete.
As for MPI, this kind of works, as it gives me all the functions I can think of needing, but with _f08 appended to the end of it. I don't know the inner workings of OpenMPI but I guess e.g. MPI_Init wraps MPI_Init_f08 for reasons of backward compatibility. For now I can simply autocomplete to the _f08 version and remove that bit manually. (I also tried adding openmpi-4.1.2/ompi/mpi/fortran/use-mpi-tkr and openmpi-4.1.2/ompi/mpi/fortran/mpif.h but no luck).
Would be nice to get this detail sorted, though. It is also mildly annoying that I must now list the source dirs manually (removing them makes it not find local modules).
I am trying to set the TI C2000 C/C++ compiler (cl2000.exe) in "compilerPath", but VS Code says:
Unable to resolve configuration with compilerPath ".../bin/cl2000.exe". Using "cl.exe" instead.
Is there a way to make this work, e.g. to manually specify where to find include files, or do compilers other than the big ones (gcc, clang) just not work with VS Code no matter what I do?
I do not need features like a debugger or library documentation, but I do need IntelliSense (msvc-x64). I could just use cl.exe as the defined compiler, but it does not recognize some compiler-specific constructs like __attribute__((ramfunc)), it does not use the proper <stdint.h>, etc.
With the current C/C++ extension (v1.7.1) I don't think what you want is achievable, as it seems it needs to have some kind of integration with the compiler to extract a lot of "hidden" information from it, and currently only Microsoft's C compiler, Clang and GCC are supported (source).
In any case, you can get a lot of functionality working with the right c_cpp_properties.json.
Here's a copy of my current setup for a project on the F280049C. I have the motor control SDK added as a submodule on the project folder, something you might not need. Make sure that you add all the different SDKs and other bits and bobs that are relevant to your project.
For the defines property, I checked the project properties in Code Composer, and then under CCS Build / C2000 Compiler / Predefined Symbols there's a list of defines passed on the command line. There's a bunch of "hidden" defines that I don't know how to get; see my comment in the code below. Take the ones in the example with a pinch of salt, as your defines might be different.
The C and C++ standards can be checked in the flags passed on the command line (in my project they are C11 and C++03).
Compiler Path has to be left empty (""), as the extension can't use the C2000 compiler; that's why you get the Unable to resolve configuration with compilerPath ".../bin/cl2000.exe". Using "cl.exe" instead. error. I guess VS Code falls back to "cl.exe", a compiler the C/C++ extension can use (it's Microsoft's VS compiler; maybe the similarity of its name to TI's is misleading you there).
About the IntelliSense mode: nothing will match the devilish architecture of the C2000. I prefer to set IntelliSense to 32 bits, but that's just as wrong as 64 bits, so pick your poison there. Keep in mind that things like the values of sizeof or pointers will be wrong in VS Code (as you are telling it to treat the target as a 32-bit architecture).
Try this config. You will get lots of red squiggly lines under some includes, but most of them should be in your browse path and can be added automatically by clicking the yellow bulb icon.
{
    "env": {
        "SDKPath": "${workspaceFolder}/../lib/C2000Ware_MotorControl_SDK_3_01_00_00",
        "StdLibPath": "C:/ti/ccs1031/ccs/tools/compiler/ti-cgt-c2000_20.2.5.LTS",
        "XDCTools": "C:/ti/ccs1031/xdctools_3_62_00_08_core",
        "TIBIOS": "C:/ti/bios_6_83_00_18/packages/"
    },
    "configurations": [
        {
            "name": "TI",
            "includePath": [
                "${workspaceFolder}/**"
            ],
            "browse": {
                "path": [
                    "${StdLibPath}/**",
                    "${SDKPath}/**",
                    "${XDCTools}/**",
                    "${TIBIOS}/**"
                ]
            },
            "defines": [
                "_F28004x",
                "DEBUG",
                "_INLINE",
                "_FLASH",
                "DRV8353_SPI",
                // Hidden predefined symbols. They are set by the compiler by
                // default, but they are not output on the command line and I
                // haven't found a way to list them yet.
                // Needed for the STL (see s_c__system.h):
                "_STLP_COMPILER",
                "__TI_COMPILER_VERSION__"
            ],
            "compilerPath": "",
            "cStandard": "c11",
            "cppStandard": "c++03",
            "intelliSenseMode": "gcc-x86"
        }
    ],
    "version": 4
}
Keep in mind that when working with microcontrollers there's a lot of customization to do, but I think this will get you on the good track.
The Visual Studio Code Markdown preview uses markdown-it which supports a plugin ecosystem.
Extension of this rendering pipeline is documented, but the documentation does not consider the possibility of clients other than the Markdown preview pane.
How can I use the built-in Markdown rendering?
My use case, if it helps, is printing. I wrote a printing extension for Visual Studio Code and a natural enhancement of that was rendering Markdown when it prints.
At the moment I'm effectively recreating the rendering pipeline. Apart from the obvious redundancy, the reason I'd like to use the built-in rendering is to inherit any configured extensions, so that the print out matches the preview in capability.
It seems reasonable to expect the Markdown preview pane to be implemented as a virtual document which is a client of the rendering pipeline. What repository and file(s) contain the implementation of the Markdown preview pane?
It appears there is a MarkdownEngine class that manages the loading of plugins according to configuration, and while there doesn't seem to be a way to reference the MarkdownIt instance, there is a render method defined here
https://github.com/Microsoft/vscode/blob/fa5306d67bb934c42d206fb3c7e028dff00d530f/extensions/markdown-language-features/src/markdownEngine.ts#L96
So on the face of it, all you need to do is import MarkdownEngine and use this method. However, this is not currently supported. I have logged a feature request.
The authors do not want to expose MarkdownEngine, but they proposed providing a render method.
That's the ultimate answer, but it isn't of any help right now. In the interim, it is possible to obtain a reference to Visual Studio Code's markdownIt instance.
Change your extension to masquerade as a Markdown plugin. Notice that in the documentation for adding plugins it says:
Then, in the extension's main activation function, return an object with a function named extendMarkdownIt. This function takes the current MarkdownIt instance and must return a new MarkdownIt instance:
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    return {
        extendMarkdownIt(md: any) {
            return md.use(require('markdown-it-emoji'));
        }
    };
}
This is your chance to capture the Markdown renderer. Modify your extension to present as a Markdown plugin:
"contributes": {
"markdown.markdownItPlugins": true,
Give it a private property to hold a reference to the MarkdownIt instance. Don't bother to strong type it. That would require you to bundle the MarkdownIt library.
var md: any;
Then capture a reference by putting this at the end of your activate method.
return { extendMarkdownIt(mdparam: any) { return md = mdparam; } };
When the pipeline initialises, it will invoke the callback you have provided passing a reference to itself. The rest of your code can get it from the property.
This trick depends on the rendering pipeline loading early irrespective of whether you use the Markdown preview pane. Happily it does.
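The whole handshake can be sketched outside VS Code. In this sketch a stand-in object plays the role of the preview pipeline's MarkdownIt instance, and the hypothetical host call simulates what VS Code does on activation; only the capture mechanism is real, the rest is scaffolding:

```javascript
// Sketch of the capture trick. In a real extension, VS Code's preview
// pipeline calls extendMarkdownIt with its MarkdownIt instance; here a
// fake host object simulates that call.
let md = null;

// What our extension's activate() would return.
const exportsFromActivate = {
    extendMarkdownIt(mdparam) {
        md = mdparam;   // capture the renderer for later use
        return mdparam; // hand it back unchanged (no plugins added)
    }
};

// --- simulated host side (in VS Code this is the preview pipeline) ---
const fakeMarkdownIt = {
    render(src) { return `<p>${src}</p>`; } // stand-in for md.render()
};
exportsFromActivate.extendMarkdownIt(fakeMarkdownIt);

// The rest of the extension can now render through the captured instance.
console.log(md.render("hello")); // prints "<p>hello</p>"
```

In the real extension, md.render() gives you HTML produced by the same pipeline as the preview pane, including whatever plugins other extensions have contributed.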
I'm using rollup with the Babel and CommonJS plugins, like this:
import resolve from "rollup-plugin-node-resolve";
import babel from "rollup-plugin-babel";
import commonjs from "rollup-plugin-commonjs";

const inputOptions = {
    input: "...",
    plugins: [
        resolve(),
        babel({
            exclude: "node_modules/**",
            externalHelpers: true,
            include: "**/components/**/*.js",
        }),
        commonjs(),
    ],
};
But what happens is that modules referenced from components don't seem to be recognized by the CommonJS plugin, they end up as plain require(...) statements in the output (just like the source input) where of course they cannot be resolved. Modules imported (also through require() statements) by modules outside the components directory get picked up properly and included in the bundle.
I have tried moving the babel plugin up (before the resolve plugin), but this had no effect. I also tried moving it down, but then Rollup chokes on the JSX in the components.
I also tried removing the include option, so that all files go through Babel and then the result is that no modules get picked up besides the entry point, so it really seems like the Babel and CommonJS plugins aren't playing along nicely, though I can hardly imagine I'm the only one with a setup like this. Am I missing something?
Update: One other thing I notice is that for the files whose require()s aren't recognized, the exports aren't handled properly either. Instead, for each component that fails, I see this in the output bundle:
module.exports = ComponentName;
var componentName = /*#__PURE__*/Object.freeze({
});
The module.exports line comes from the source, but that Object.freeze() statement is something Rollup adds, maybe because it doesn't see any default export?
To add a bit of extra confusion: there's actually one component that gets transpiled by Babel and for which module resolution works and the require()s get replaced as you'd expect, but all the components included by that component in turn show the defective behavior described above.
Update 2: I have been able to reproduce the problem in a minimal example as well, and it allowed me to pinpoint why things worked for the one component, but not by the components it includes in turn. Apparently, functional React components work properly, but class components trigger the issue. So now my hypothesis is that the Babel transform for ES6 classes somehow confuses the CommonJS plugin.
Update 3: As I believe this is a bug, I have also created issues with the relevant projects: https://github.com/rollup/rollup-plugin-babel/issues/297 and https://github.com/rollup/rollup-plugin-commonjs/issues/369
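For what it's worth, the plugin order usually recommended for this combination places commonjs() before babel(), so that the require()/module.exports conversion happens before Babel rewrites the class syntax. A sketch under that assumption, not verified against this exact setup:

```javascript
// Hypothetical reordering: commonjs() runs before babel().
const inputOptions = {
    input: "...",
    plugins: [
        resolve(),
        commonjs(), // convert require()/module.exports to ES modules first
        babel({
            exclude: "node_modules/**",
            externalHelpers: true,
            include: "**/components/**/*.js",
        }),
    ],
};
```

If the class-component failure really is an interaction between the two plugins, changing the order is the cheapest experiment before waiting on the filed issues.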