Where are the `IPython` builtin magics and extensions defined?

I wanted to see how magics are defined by looking at some sample source, but did not find the information on the web.
Moreover, where should IPython extensions reside?

To understand how to deploy your custom magic you need to read two sections of IPython's documentation:
How to write your own magic
How to write an extension
Then you will see that, generally, a magic is simply a class that defines magic methods in a Python module.
Here is, for example, a self-contained IPython magic packaged as an extension; the code is not the best, but it shows that it can be small.
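For instance, a minimal sketch might look like the following (the file name mymagics.py, the class name, and the %shout magic are all invented for illustration, not taken from that example):

    # Save as e.g. mymagics.py somewhere importable, then run `%load_ext mymagics`.
    from IPython.core.magic import Magics, magics_class, line_magic

    @magics_class
    class ShoutMagics(Magics):
        """Example magics class; %shout just upper-cases its argument."""

        @line_magic
        def shout(self, line):
            return line.upper()

    def load_ipython_extension(ipython):
        # IPython calls this when the extension is loaded; register the magics here.
        ipython.register_magics(ShoutMagics)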
As to where IPython's builtin ones are defined: they all live under IPython/core/magics in the IPython source tree, though I would not recommend taking them as an example, as there is a lot of historical baggage and they can be quite complex.
A more recent, full example exists and will appear in the docs once the next version of IPython is released.


How can I have 2 versions of Gensim for summarization in one Jupyter notebook?

I want to have 2 versions of Gensim so that I can use the summarization and keyword functions from the old Gensim.
How can I set up this scenario?
In general, a single Jupyter notebook is backed by a single Python interpreter/environment, and popular packages at their 'official' installation paths can only be installed once.
There are a few hackish workarounds suggested in answers like:
Installing multiple versions of a package with pip
However, each workaround presents operational problems.
One approach is to install the older package to a non-standard path (directory) that's still found by Python's importing logic (controlled by PYTHONPATH). For example, put/move the older copy of Gensim to a gensim_old package directory. But: this is only likely to work well with very simple (single-.py-file) packages.
With any significant library (like Gensim) which cross-imports a lot of things from its own utility modules, using the standard paths, lots of things are likely to break unless you dig into all involved individual files to change their import paths. That's kind of kludgey & hard-to-maintain. (Though, to the extent you're just using one old version, say gensim-3.8.3 for the removed summarization feature, perhaps it'd be worth fighting through this process once, then keeping the changes around.)
Another approach is to create a totally-separate Python environment with the alternate version, and only use that other environment from the notebook by a system-call – via either something in Python-code like subprocess.call(), or the notebook-cell ! or !! magic-escapes to run a shell command. That is, you give up the ability to run individual interactive lines of Python in that alt environment - but could still send it batches of data, and either capture the console output or observe its output files to continue processing in your notebook.
I'd expect this to be a better option – cleaner & more-maintainable – provided that either the old-version-functionality (summarization) or new-version-functionality (whatever else) can be condensed into one (or a few) single-step scripts.
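As a rough sketch of that subprocess approach (the environment path and helper-script name below are placeholders, not a tested recipe):

    import subprocess

    def summarize_with_old_gensim(text):
        # Invoke the *other* environment's Python on a small helper script that
        # reads text on stdin and prints the summary. Both paths are placeholders.
        result = subprocess.run(
            ["/path/to/envs/gensim383/bin/python", "summarize_cli.py"],
            input=text, capture_output=True, text=True, check=True,
        )
        return result.stdout

    # summarize_cli.py, run under the old-Gensim environment, could be as small as:
    #
    #   import sys
    #   from gensim.summarization import summarize   # gensim 3.8.x API
    #   print(summarize(sys.stdin.read()))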
Another option would be to try to completely copy the gensim.summarization source code files to some new location inside your own project – performing whatever (few, minor) edits are necessary to ensure it works from the alternate location.
One of the reasons that functionality was removed was that its approach to things like tokenization was not consistent/integrated with other Gensim practices – which actually means it's likely to be a little easier to keep it working (given its use of its own idiosyncratic approaches) separately.
Personally, I'd rank these three options' desirability as:
(best) Section off the summarization tasks to be run via subprocess executions in a separate Python environment, which has only the older package installed.
(maybe ok) Copy the 10 .py files that implement gensim.summarization into your own local module. Edit lightly as necessary to ensure they still work. (That should mainly be updating import lines, but might require a few other adaptations to other Python 3.x/Gensim 4.x changes.)
(probably too messy) Install the whole old package to a non-standard directory, edit lots of files to ensure anything you're using still works.
Finally, note that the main reason the feature was removed is that it did not offer very impressive or adaptable results. While I've seen some people say it's worked OK for their applications, I've never seen even so much as a demo where its practices/algorithm – which can only extract some subset of important sentences, never paraphrase – gave impressive results.
So unless you already know that its approach works well for your needs, don't get your hopes up! Good luck.

How can I use Elisp to print the dependencies of some code?

I am trying to print the dependencies associated with some code, such as the definitions related to the functions or variables in a statement, using Emacs; however, I am not finding the functions necessary to do it. I have already been able to parse the code; now I just need the printing part, for which I have been looking into the srecode package without success.
This will be a necessary step in translating Java code into C or C++.
What "code"? In what programming language? There are packages for different programming languages that could help. You need to be more specific.
Using Emacs at this point was perhaps a bad idea. I searched for code slicing and found some tools (slicers). For the translation part I may use code from cogre-srecode.el in the cogre package of CEDET, and for that the srecode manual is the better reference.

Go to implementation instead of TypeScript declaration

When I click an imported variable while holding Cmd on MacOS in VSCode (or Ctrl on other platforms), I often end up looking at the TypeScript declaration of that variable.
Is there any way to have VSCode take me to the definition of it instead?
I don't use TypeScript myself, so the feature isn't helpful to me right now.
Try Go to source definition
This command will try to jump to the original JavaScript implementation of a function/symbol, even for code under node_modules.
JavaScript is a very dynamic language though, so we can't figure out the source location in every case. If you aren't getting results for a common library, please file an issue against TypeScript so we can investigate adding support.
For faster and more accurate results, libraries can bundle declaration maps that map from .d.ts files back to source .ts (or .js) files. However, many libraries currently do not include these.
I found a simple solution for this after a lot of searching.
You just need to add "typescript.disableAutomaticTypeAcquisition": true to your project's settings.json (or vscode's global settings).
This will disable the automatic generation of TypeScript definitions and restore the original "Jump to" behaviour of going to the implementation.
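In other words, assuming a workspace-level .vscode/settings.json (user settings work the same way):

    // .vscode/settings.json
    {
        "typescript.disableAutomaticTypeAcquisition": true
    }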
Source:
https://ianwalter.dev/jump-to-source-definition-instead-of-typescript-definition-in-vs-code (archive.org link)
The author provided the wrong instructions, though (false when it should have been true), so be careful when you read the post. Re-installing node modules was also not needed.
VSCode was updated to include a new option, Go to Source Definition. If the TS source is available, and TypeScript is upgraded to 4.7 or later and VSCode to 1.68 or later, it should work.
Unfortunately, many library authors do not include the TS source code. The package often consists only of the compiled *.js files and the *.d.ts definition files, which makes this new feature of VSCode useless for those packages.
This is the original issue:
https://github.com/microsoft/TypeScript/issues/6209
And this is an issue for feedback on the new feature.
https://github.com/microsoft/TypeScript/issues/49003
The implementation is bundled and transpiled to JavaScript, and VSCode is not able to take you there; instead it will take you to the interface. You can search for references in the JavaScript file, or you can clone or fork the repo to see the implementation in TypeScript.
As other answers have already stated,
Regardless of your tsconfig, and regardless of whether the package you are requiring/importing from provides type declaration files or whether you installed a Definitely Typed package for it, you can use the TypeScript: Go to Source Definition command to go to the symbol's definition in the JS file. This functionality is provided by TypeScript and the vscode.typescript-language-features extension (which is built into / ships out of the box with VS Code).
I thought I'd try to give some more interesting information that other answers haven't covered yet, for curiosity's sake (and also to explain why this "often" happens to you, but not always):
You can bind that command to a keybinding. Its keybinding command ID is typescript.goToSourceDefinition.
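For example, an entry like the following in keybindings.json would do it (the key combination here is arbitrary and may clash with an existing binding):

    // keybindings.json (Preferences: Open Keyboard Shortcuts (JSON))
    {
        "key": "ctrl+k f12",
        "command": "typescript.goToSourceDefinition",
        "when": "editorTextFocus"
    }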
If the package you require or import packages its own type declaration files, or you installed a community-maintained type declaration package from the Definitely Typed project, then Ctrl+clicking / Cmd+clicking the require/import argument, or putting the caret on it and invoking whatever the editor.action.revealDefinition or editor.action.goToTypeDefinition commands are bound to (F12 by default for editor.action.revealDefinition), will take you to the type declaration by default.
If the package you require or import doesn't package its own type declarations and you didn't install a types package from the Definitely Typed project, and you modify your tsconfig or jsconfig to set allowJs: true and maxNodeModuleJsDepth: <N>, then the same actions will take you to the symbol's definition in the JS file by default (unless you already performed this action at a point when a type declaration file for the symbol was available and have not since reloaded/restarted VS Code or edited your tsconfig/jsconfig file, because it will cache that association in memory (smells like a minor bug, but ¯\_( ツ )_/¯)).
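A minimal jsconfig.json illustrating those two options (the depth value is just an example):

    // jsconfig.json (or the compilerOptions of a tsconfig.json)
    {
        "compilerOptions": {
            "allowJs": true,
            "maxNodeModuleJsDepth": 2
        }
    }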
The editor.action.revealDeclaration keybinding seems to do nothing here (at the time of this writing). I guess that keybinding is more for languages like C and C++.
Some loosely related release notes sections and user docs (non-exhaustive list (I don't get paid to do this)):
https://code.visualstudio.com/docs/editor/editingevolved#_go-to-definition
https://code.visualstudio.com/updates/v1_13#_go-to-implementation-and-go-to-type-definition-added-to-the-go-menu
https://code.visualstudio.com/updates/v1_35#_go-to-definition-improvements
https://code.visualstudio.com/updates/v1_67#_typescript-47-support
In TypeScript's GitHub repo: Go To Source Definition feedback thread #49003
https://code.visualstudio.com/updates/v1_68#_go-to-source-definition
Quoting from that last one:
One of VS Code's longest standing and most upvoted feature requests is to make VS Code navigate to the JavaScript implementation of functions and symbols from external libraries. Currently, Go to Definition jumps to the type definition file (the .d.ts file) that defines the types for the target function or symbol. This is useful if you need to inspect the types or the documentation for these symbols but hides the actual implementation of the code. The current behavior also confuses many JavaScript users who may not understand the TypeScript type from the .d.ts.
While changing Go to Definition to navigate to the JavaScript implementation of a symbol may sound simple, there's a reason why this feature request has been open for so long. JavaScript (and especially the compiled JavaScript shipped by many libraries) is much more difficult to analyze than a .d.ts. Trying to analyze all the JavaScript code under node_modules would be both slow and would also dramatically increase memory usage. There are also many JavaScript patterns that the VS Code IntelliSense engine is not able to understand.
That's where the new Go to Source Definition command comes in. When you run this command from either the editor context menu or from the Command Palette, TypeScript will attempt to track down the JavaScript implementation of the symbol and navigate to it. This may take a few seconds and we may not always get the correct result, but it should be useful in many cases.
See also: https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-7.html#go-to-source-definition.

Measuring Documentation Coverage with Doxygen

I wanted to ask if there are any features (or add-ons) for Doxygen to measure the documentation coverage via command line. I already know that I can set up Doxygen to write undocumented elements as warnings into a log file, but to fully evaluate the documentation coverage from that, I'd need to write my own warning log parser. Was something like this done already or is there an even easier way I couldn't find? Is there any add-on I could check out for this?
Thank you.
I don't know of anything that can give documentation coverage for Doxygen, but a quick search gives two interesting results: https://github.com/alobbs/doxy-coverage (requires XML output from Doxygen) and http://jessevdk.github.io/cldoc/ (an alternative for C++ projects?).
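The underlying idea is simple enough that a rough do-it-yourself version fits in a few lines of Python. This sketch assumes GENERATE_XML = YES in the Doxyfile and reads Doxygen's default xml/ output directory; counting a member as documented whenever its brief or detailed description is non-empty is a simplification:

    import glob
    import xml.etree.ElementTree as ET

    documented = total = 0
    for path in glob.glob("xml/*.xml"):          # Doxygen's XML output directory
        root = ET.parse(path).getroot()
        for member in root.iter("memberdef"):    # functions, variables, enums, ...
            total += 1
            brief = member.find("briefdescription")
            detailed = member.find("detaileddescription")
            if (brief is not None and len(brief)) or (detailed is not None and len(detailed)):
                documented += 1

    if total:
        print(f"{documented}/{total} members documented ({100.0 * documented / total:.1f}%)")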
There is coverxygen, which uses the same idea as alobbs/doxy-coverage (it works on Doxygen's XML output) but provides more options (for example, filtering by access specifier).
Disclaimer: I am contributing to that project.

What is the purpose of the Emacs function (eval-and-compile...)?

I can read the documentation, so I'm not asking for a cut-and-paste of that.
I'm trying to understand the motivation for this function.
When would I want to use it?
The documentation in the Emacs lisp manual does have some example situations that seem to answer your question (as opposed to the doc string).
From looking at the Emacs source code, eval-and-compile is used to quiet the compiler, to make macros/functions available during compilation (and evaluation), or to make feature/version specific variants of macros/functions available during compilation.
One usage I found helpful to see was in ezimage.el. In there, an if statement was put inside the eval-and-compile to conditionally define macros depending on whether the package was compiled/eval'ed in Emacs or XEmacs, and additionally whether a particular feature was present. By wrapping that conditional inside the eval-and-compile you enable the appropriate macro usage during compilation. A similar situation can be found in mwheel.el.
Similarly, if you want to define a function via fset and have it available during compilation, you need to have the call to fset wrapped with eval-and-compile because otherwise the symbol -> function association isn't available until the file is evaluated (because compilation of a call to fset just optimizes the assignment, it doesn't actually do the assignment). Why would you want this assignment during compilation? To quiet the compiler. Note: this is just my re-wording of what is in the elisp documentation.
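As a small made-up illustration of that fset case (not taken from the Emacs sources): without the eval-and-compile wrapper, byte-compiling this file would warn that my-frobnicate is not known to be defined.

    ;; Evaluated both at compile time and at load time, so the byte-compiler
    ;; already knows about `my-frobnicate' when it compiles `my-caller'.
    (eval-and-compile
      (fset 'my-frobnicate (lambda (x) (* 2 x))))

    (defun my-caller (x)
      (my-frobnicate x))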
I did notice a lot of uses in Emacs code which just wrapped calls to require, which sounds redundant when you read the documentation. I'm at a loss as to how to explain those.