I've read lots of discussions on here about how to set up VSCode to allow linting of JavaScript files when // @flow is enabled at the top of the file.
I believe I've implemented things properly but am still getting mysterious errors that seem to indicate otherwise.
For example, I'm getting a Missing type annotation for destructuring.Flow(InferError) for classes here:
const AddCustomer = ({ classes }) => {
So I changed it to this:
const AddCustomer = ({ classes:any }) => {
but the error persisted.
Here is another observation of the way things are working in my VSCode at the moment:
export const useFetch = (initialUrl, initialData) => {
With this code I get warnings that the two parameters are not typed. So I changed the code to this:
export const useFetch = (initialUrl:string, initialData:any) => {
Nothing happened immediately, but when I saved the file the warnings went away. Not a biggie, but that's not how I expect linters to work.
Might anyone have any thoughts on why @flow linting is not quite working correctly in my VSCode?
So, for the first issue regarding Missing type annotation for destructuring.Flow(InferError), I believe the type annotation is in the wrong place. The syntax ({ classes: any }) is destructuring syntax that renames the classes property of the first parameter to a local variable called any; a more thorough explanation can be found on MDN's Destructuring assignment page. To correctly specify the type of the object parameter, place the type annotation after the destructuring pattern:
const addCustomer = ({ classes }: { classes: any }) => { ... };
This may feel verbose and repetitive, but the idea is to separate the code from the types. For example, the type definition of a customer may be imported from another file, so the code would look more like this:
// types.js
export type Customer = { classes: any };
// index.js
import type { Customer } from './types';
const addCustomer = ({ classes }: Customer) => { ... };
This keeps the parameter flexible: it can be written with or without destructuring, and the type can be defined wherever it makes sense, as shown in the sketch below.
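For completeness, here is the same function without destructuring, reusing the Customer type from the example above (just an illustrative sketch, not code from your project):

import type { Customer } from './types';

const addCustomer = (customer: Customer) => {
  const { classes } = customer; // destructure inside the body if and when you need it
  // ...
};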
For your second issue, check to make sure that the setting flow.runOnEdit is set to true. This will allow Flow to check your files without saving. If that does not seem to fix the problem, then I would raise an issue with the maintainers of the VSCode extension.
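For reference, a minimal workspace settings sketch (this assumes the flow-for-vscode extension, which is what defines the flow.* settings):

// .vscode/settings.json
{
  "flow.runOnEdit": true
}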
I am working on a VSCode extension in which I want to provide custom snippets for code completion.
I know about the option of using snippet JSON files directly; however, those have the limitation of not being able to use the CompletionItemKind property that determines the icon shown next to the completion suggestion in the pop-up.
My issue:
If I implement a simple CompletionItemProvider like this:
context.subscriptions.push(
  vscode.languages.registerCompletionItemProvider(
    { scheme: "file", language: "MyLang" },
    {
      provideCompletionItems(document: vscode.TextDocument, position: vscode.Position) {
        let item = new vscode.CompletionItem('test');
        item.documentation = 'my test function';
        item.kind = vscode.CompletionItemKind.Function;
        return [item];
      }
    }
  )
);
then the original VSCode IntelliSense text suggestions are not shown anymore, only my own. If I instead return a kind of empty response, like
provideCompletionItems(document: vscode.TextDocument, position: vscode.Position) {
  return []; // or null, or undefined
}
then the suggestions appear again as they should. It seems to me that instead of merging the results of the built-in IntelliSense and my own provider, the built-in ones simply get overridden.
Question:
How can I keep the built-in IntelliSense suggestions while applying my own CompletionItems?
VsCode Version: v1.68.1 Ubuntu
I seem to have found the answer to my problem, so I will answer my own question.
The documentation for registerCompletionItemProvider explains: "Multiple providers can be registered for a language. In that case providers are sorted by their {@link languages.match score} and groups of equal score are sequentially asked for completion items. The process stops when one or many providers of a group return a result."
My provider's results are apparently scored higher than those of the built-in IntelliSense. Since I didn't provide any trigger characters, my CompletionItems were competing directly with the words found by the built-in system on every single keystroke, and won. My solution is to simply parse the words in my TextDocument myself and extend my provider's results with them (see the sketch below). I could probably just as well create and register a separate CompletionItemProvider for them if I wanted to, but I decided on a different structure for my project.
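A minimal sketch of that idea; the word-extraction regex and the surrounding activate wrapper are mine, not the code of my actual extension:

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.languages.registerCompletionItemProvider(
      { scheme: 'file', language: 'MyLang' },
      {
        provideCompletionItems(document: vscode.TextDocument) {
          // My own completion item, as before.
          const item = new vscode.CompletionItem('test', vscode.CompletionItemKind.Function);
          item.documentation = 'my test function';

          // Collect the words already present in the document and offer them
          // as plain Text completions, so the word-based suggestions aren't lost.
          const words = new Set(document.getText().match(/[A-Za-z_]\w*/g) ?? []);
          const wordItems = [...words].map(
            (word) => new vscode.CompletionItem(word, vscode.CompletionItemKind.Text)
          );

          return [item, ...wordItems];
        },
      }
    )
  );
}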
Why does calling fake.Provide<T>() wipe out fakes already configured with A.CallTo()? Is this a bug?
I'm trying to understand a problem I've run into with Autofac.Extras.FakeItEasy (aka AutoFake). I have a partial solution, but I don't understand why my original code doesn't work. The original code is complicated, so I've spent some time simplifying it for the purposes of this question.
Why does this test fail? (working DotNetFiddle)
public interface IStringService { string GetString(); }
public static void ACallTo_before_Provide()
{
    using (var fake = new AutoFake())
    {
        A.CallTo(() => fake.Resolve<IStringService>().GetString())
            .Returns("Test string");

        fake.Provide(new StringBuilder());

        var stringService = fake.Resolve<IStringService>();
        string result = stringService.GetString();

        // FAILS. The result should be "Test string",
        // but instead it's an empty string.
        Console.WriteLine($"ACallTo_before_Provide(): result = \"{result}\"");
    }
}
If I swap the order of the calls to fake.Provide<T>() and A.CallTo(), it works:
public static void Provide_before_ACallTo()
{
    // Same code as above, but with the calls to
    // fake.Provide<T>() and A.CallTo() swapped
    using (var fake = new AutoFake())
    {
        fake.Provide(new StringBuilder());

        A.CallTo(() => fake.Resolve<IStringService>().GetString())
            .Returns("Test string");

        var stringService = fake.Resolve<IStringService>();
        string result = stringService.GetString();

        // SUCCESS. The result is "Test string" as expected
        Console.WriteLine($"Provide_before_ACallTo(): result = \"{result}\"");
    }
}
I know what is happening, sort of, but I'm not sure if it's intentional behavior or if it's a bug.
What is happening is, the call to fake.Provide<T>() is causing anything configured with A.CallTo() to be lost. As long as I always call A.CallTo() after fake.Provide<T>(), everything works fine.
But I don't understand why this should be.
I can't find anything in the documentation stating that A.CallTo() cannot be called before Provide<T>().
Likewise, I can't find anything suggesting Provide<T>() cannot be used with A.CallTo().
It seems the order in which you configure unrelated dependencies shouldn't matter.
Is this a bug? Or is this the expected behavior? If this is the expected behavior, can someone explain why it works like this?
It isn't that the Fake's configuration is being changed. In the first test, Resolve is returning different Fakes each time it's called. (Check them for reference equality; I did.)
Provide creates a new scope and pushes it on a stack. The topmost scope is used by Resolve when it finds an object to return. I think this is why you're getting different Fakes in ACallTo_before_Provide.
Is this a bug? Or is this the expected behavior? If this is the expected behavior, can someone explain why it works like this?
It's not clear to me. I'm not an Autofac user, and don't understand why an additional scope is introduced by Provide. The stacked scope behaviour was introduced in PR 18. Perhaps the author can explain why.
In the meantime, I'd Provide everything you need before Resolving, if you can manage it.
I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now, but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful (I couldn't find an element, here's the HTML where it isn't), but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
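If you go the global route, the wiring might look roughly like this (the file names here are placeholders, not something prescribed by Jest or testing-library):

// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};

// jest.setup.js: import { configure } from '@testing-library/dom' (or from your
// framework's wrapper, which re-exports it) and call it with the getElementError
// override shown above.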
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone works differently, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock the console.error.
it('disable error example', () => {
  const errorObject = console.error; // store the original implementation
  console.error = jest.fn(); // mock the object
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL; I am not sure, as I haven't tried it, but if you look at the docs I linked, it says
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM output. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above.
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit the number of printed lines to 0, which gives you just the error message and three dots (...) below it.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is pass an environment variable before executing your test suite; for example, with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's feature request on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it is used elsewhere). The getElementError config option needs to be changed; the default implementation is:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The call stack can also be removed; the answer below shows one way, by setting error.stack to null in the override.
You can change how the message is built by overriding the DOM Testing Library message-building function via configure. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.
I'm new to Flow and am having a bit of trouble assigning to DOM Element types properly.
Looking at the DOM declarations in the Flow repo, it feels like I'm missing something.
// Works
const otherMeta:?HTMLMetaElement = document.querySelector("meta");
// Doesn't work
const metaTag:?HTMLMetaElement = document.querySelector("meta[name='something']");
The second example results in the following error:
const metaTag:?HTMLMetaElement = document.querySelector("meta[name='something']");
^ HTMLElement. This type is incompatible with
const metaTag:?HTMLMetaElement = document.querySelector("meta[name='something']");
^ HTMLMetaElement
Have a look at the example in the Try Flow REPL tool.
Flow is not smart enough to know what the return type will be for an arbitrary querySelector query. Only the simple element-name queries have been hard-coded into the built-in type definitions; you can see them in Flow's GitHub repo.
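Those built-in declarations are roughly of this shape (paraphrased for illustration, not copied from Flow's actual dom.js):

querySelector(selector: 'meta'): HTMLMetaElement | null;
querySelector(selector: 'a'): HTMLAnchorElement | null;
// ...one overload per known tag name, then the general fallback:
querySelector(selector: string): HTMLElement | null;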
For Flow to know that the result is an HTMLMetaElement, you'll need to explicitly verify that with code like
const metaTag: ?HTMLElement = document.querySelector("meta[name='something']");
if (metaTag && !(metaTag instanceof HTMLMetaElement)) throw new Error("Expected a 'meta' element.");
// use metaTag here
so by explicitly checking instanceof, Flow will recognize that metaTag must now be an HTMLMetaElement. This kind of check is very common in Flow and is referred to as refining the type.
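If you need this in more than one place, a small helper keeps the refinement in one spot (a sketch; the queryMetaTag name is mine, not from the question):

function queryMetaTag(selector: string): ?HTMLMetaElement {
  const el = document.querySelector(selector);
  // instanceof refines the HTMLElement result down to HTMLMetaElement for Flow.
  return el instanceof HTMLMetaElement ? el : null;
}

const metaTag = queryMetaTag("meta[name='something']"); // inferred as ?HTMLMetaElement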
Forgive my ignorance, but I can't get ic-ajax working inside of certain functions.
Specifically, I'd like to get a test like this working, but for Ember CLI:
e.g. http://coderberry.herokuapp.com/testing-your-ember-application#30
I can call ajax inside Ember.Object.extend and outside of functions and object definitions, but not in modules, tests, or an Ember.Route's model function.
Am I misunderstanding something or is there a misconfiguration in my app?
I've figured out that within functions I can do:
ajax = require('ic-ajax')['default'];
defineFixture = require('ic-ajax')['defineFixture'];
but I'm pretty sure import at the top of the file is supposed to work.
I'm experiencing this on Ember 0.40.0 (both in my existing app and a fresh app). See below for more specifics on where I'm finding it undefined. Setting var ajax = icAjaxRaw outside of the functions does not work either. I'm at a bit of a loose end, so any help you could give in this regard would be great.
users-test.js:
import ajax from 'ic-ajax';
import { raw as icAjaxRaw } from 'ic-ajax';
import { defineFixture as icAjaxDefineFixture } from 'ic-ajax';

debugger;
// ---> icAjaxDefineFixture IS defined here

module('Users', {
  setup: function() {
    App = startApp();
    debugger;
    // icAjaxDefineFixture --> UNDEFINED
  },
  teardown: function() {
    Ember.run(App, App.destroy);
  }
});

test("Sign in", function() {
  // icAjaxDefineFixture --> UNDEFINED
  expect(1);
  visit('/users/sign-in').then(function() {
    equal(find('form').length, 1, "Sign in page contains a form");
  });
});
Brocfile.js (I don't think these are actually needed with the new ember-cli-ic-ajax addon):
app.import('vendor/ic-ajax/dist/named-amd/main.js', {
  exports: {
    'ic-ajax': [
      'default',
      'defineFixture',
      'lookupFixture',
      'raw',
      'request',
    ]
  }
});
Had the same problem. It turns out to be a Chrome debugger optimization issue; check out this blog post: http://johnkpaul.com/blog/2013/04/03/javascript-debugger-surprises/
While debugging, if you try to use a variable from a closure scope in the console, that wasn’t actually used in the source, you’ll be surprised by ReferenceErrors. This is because JavaScript debuggers optimize the hell out of your code and will remove variables from the Lexical Environment of a function if they are unused.
To play around in the debugger, I just typed ajax; inside the closure and the variable magically appeared.
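In other words, referencing the import somewhere in the source keeps the debugger from stripping it out of the closure. A sketch based on the question's test file (the bare expression statements are just one way to "use" the bindings):

import ajax from 'ic-ajax';
import { defineFixture as icAjaxDefineFixture } from 'ic-ajax';

module('Users', {
  setup: function() {
    App = startApp();
    // Mention the imports in source so the debugger keeps them in scope:
    ajax;
    icAjaxDefineFixture;
    debugger; // icAjaxDefineFixture is now visible here
  },
  teardown: function() {
    Ember.run(App, App.destroy);
  }
});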