I am using stylelint and there are some rules I want to disable:
In Less I have to write calc this way: top: calc(~'50% + 30px'); but "function-calc-no-invalid" prevents it:
https://stylelint.io/user-guide/rules/function-calc-no-invalid
I also want my Less code to apply CSS to a component directly, e.g.
my-componet { width: 100px }, so I need to disable "selector-type-no-unknown":
https://stylelint.io/user-guide/rules/selector-type-no-unknown
I tried creating a .stylelintrc file and adding the following:
"selector-type-no-unknown": "custom-elements",
"function-calc-no-invalid": "false",
and many variations, but I keep getting:
Invalid Option: Unexpected option value "false" for rule "function-calc-no-invalid"
Invalid Option: Unexpected option value "custom-elements" for rule "selector-type-no-unknown"
Your stylelint configuration object in your .stylelintrc file should be:
{
"rules": {
"function-calc-no-invalid": null,
"selector-type-no-unknown": [
true,
{
"ignore": [
"custom-elements"
]
}
]
}
}
You can learn more about how rules are configured in the stylelint user guide, e.g. how to turn off rules using null and how to configure optional secondary options like ignore: ["custom-elements"].
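With that configuration in place, the Less from the question should lint cleanly; for example (reusing the asker's selector and calc() escape):
my-componet {
  top: calc(~'50% + 30px');
  width: 100px;
}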
We have a runnable jar for the DAP (Debug Adapter Protocol) of the ABL language, and we are planning to add a VS Code extension for that ABL DAP.
Could you point us to some documentation or give us an idea of how to do it?
There are basically two approaches for making VS Code run your *.jar:
Declarative:
In the package.json of your extension, you can register your executable as part of the debuggers extension point. Here is the doc: https://code.visualstudio.com/api/extension-guides/debugger-extension#anatomy-of-the-package.json-of-a-debugger-extension.
And here is an example from the node.js debugger.
Basically the following attributes are available:
"runtime": "node",
"runtimeArgs": [ "--arguments_passed_to_runtime" ],
"program": "./dist/dap.js",
"args": [ "--arguments_passed_to_program" ],
The resulting command that VS Code will call is this:
node --arguments_passed_to_runtime ./dist/dap.js --arguments_passed_to_program
You do not have to make a distinction between runtime and program. You can use either "runtime" or "program" and leave out the other.
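Applied to a jar, the debuggers contribution in your extension's package.json could look roughly like this (the "abl" type, the label, and the jar path are placeholders, not values your project necessarily uses):
"contributes": {
    "debuggers": [
        {
            "type": "abl",
            "label": "ABL Debug",
            "runtime": "java",
            "runtimeArgs": [ "-jar" ],
            "program": "./bin/abl-dap.jar"
        }
    ]
}
With these values VS Code would launch the adapter as java -jar ./bin/abl-dap.jar.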
Non-declarative:
Alternatively you can use extension code to "calculate" the command that VS Code should execute when launching your Debug Adapter. See this API: https://github.com/microsoft/vscode/blob/4c3769071459718f89bd48fa3b6e806c83cf3336/src/vs/vscode.d.ts#L8797-L8816
Based on this API the following snippet shows how you could create a DebugAdapterDescriptor for your abl debugger:
import * as vscode from 'vscode';
import { join } from 'path';

// This method is called when your extension is activated.
// Your extension is activated the very first time the command is executed.
export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(
        vscode.debug.registerDebugAdapterDescriptorFactory('abl', {
            createDebugAdapterDescriptor(
                session: vscode.DebugSession,
                executable: vscode.DebugAdapterExecutable | undefined
            ): vscode.ProviderResult<vscode.DebugAdapterDescriptor> {
                // Launch the debug adapter by running the jar with the Java runtime
                // ("-jar" is needed so java executes the archive rather than
                // treating the path as a class name).
                return new vscode.DebugAdapterExecutable(
                    "/usr/bin/java",
                    [
                        "-jar",
                        join(context.extensionPath, "bin", "abl.jar")
                    ],
                    {
                        cwd: "some current working directory path",
                        env: {
                            "key": "value"
                        }
                    }
                );
            }
        })
    );
}
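Once the factory is registered for the 'abl' debug type (and that type is also declared via the debuggers contribution point), a launch configuration can refer to it. A hypothetical minimal entry for launch.json, showing only the generic attributes:
{
    "type": "abl",
    "request": "launch",
    "name": "Launch ABL program"
}
Any additional attributes (which program to debug, connection settings, and so on) are whatever your adapter declares in its configurationAttributes.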
Let's say that I have a few standalone routes in my Sails.js app (v1.0.2):
'user/login/',
'user/logout',
'user/reset-password'
...
Now, my current routes look like this:
'GET /api/user/login': {
action: 'user/login',
},
'GET /api/user/logout': {
action: 'user/logout',
},
'GET /api/user/reset-password': {
action: 'user/reset-password',
},
Is there a way to get the same results with less code? Something like:
'GET /api/user/*': {
action: 'user/*',
},
or:
'GET /api/user/:actionName': {
action: 'user/:actionName',
},
When using a route with a wildcard such as '/*', be aware that it will also match requests for static assets (e.g. /js/dependencies/sails.io.js) and override them.
https://sailsjs.com/documentation/concepts/routes/custom-routes#?wildcards-and-dynamic-parameters
Or you can try using pattern variables (dynamic parameters) to write less code, as in the sketch after this paragraph.
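A minimal sketch, assuming a single hypothetical user/dispatch action (Sails will not derive the action name from the URL for you, so one concrete action still has to receive the request and branch on the captured segment):
// config/routes.js
module.exports.routes = {
    // ':actionName' is available inside the action as req.param('actionName')
    'GET /api/user/:actionName': { action: 'user/dispatch' },
};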
Maybe you can activate
"Automatically expose implicit routes for every action in your app?"
in the config/blueprints.js file.
That way all your actions will have routes exposed automatically and you won't need to specify each route and action yourself.
But it is not a good solution for security reasons.
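For reference, a minimal sketch of that setting (this is the standard actions flag in config/blueprints.js):
// config/blueprints.js
module.exports.blueprints = {
    // Automatically expose an implicit route for every action in the app,
    // e.g. /user/login for api/controllers/user/login.js.
    actions: true,
};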
I have a karma.conf.js that defines browsers and a lot of custom launchers for testing on BrowserStack:
browsers: ["Chrome", "Firefox"],
customLaunchers: {
IE11: {
base: "BrowserStack",
browser: "IE",
browser_version: "11",
os: "Windows",
os_version: "10",
},
IE10: {
base: "BrowserStack",
browser: "IE",
browser_version: "10",
os: "Windows",
os_version: "8",
},
// ...
}
Most of the time I want to run only against Chrome and Firefox, but once in a while I want to have the tests run against all browsers known to the configuration.
I know I could put all keys of customLaunchers in browsers but that's not a great option because it would mean that most of the time I'd have to pass --browsers to limit the run to Chrome and Firefox.
I know I could list all the browsers on the command line but that's cumbersome because the list has 11 items in it and would grow even longer over time.
So is there a way to tell Karma "run against all the launchers known to you from karma.conf.js"? I've checked the documentation, searched issues and SO and found nothing.
There is no built-in option that I know of. However, karma.conf.js can be modified to accept a fake browser name such as all, which acts as a signal that custom code in karma.conf.js should set the configuration to use every browser known to the configuration file:
module.exports = function configure(config) {
"use strict";
var options = {
basePath: "",
// ...
browsers: ["Chrome", "Firefox"],
customLaunchers: {
IE11: { /* ... */ },
IE10: { /* ... */ },
// ...
}
};
// If the user passed `--browsers all`, then we grab the list
// from the `options` object and modify `config.browsers`.
var browsers = config.browsers;
if (browsers.length === 1 && browsers[0] === "all") {
var newList = options.browsers.concat(Object.keys(options.customLaunchers));
browsers.splice.apply(browsers, [0, browsers.length].concat(newList));
}
config.set(options);
}
The options object contains the configuration for your project. When the user passes --browsers all on the command line, the value of config.browsers is modified in place to list all browsers. Note that it has to be modified in place. Replacing the value with a new array won't work, probably because Karma stores a reference before karma.conf.js is run and uses that reference from then on. Also, modifying the value of options.browsers has no effect, because config.browsers has priority. (Otherwise, --browsers [...] would not override what is in the configuration passed through karma.conf.js.)
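With that in place, a plain karma start (or an explicit karma start --browsers Chrome,Firefox) should keep the default two-browser run, while karma start --browsers all expands the list to every launcher defined in karma.conf.js.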
I am developing a language support extension for VS Code by converting a Sublime tmBundle. I am using the bundle from siteleaf/liquid-syntax-mode. I have successfully included the following using yo code options 4 & 5 and combining the output:
Syntax File (.tmLanguage)
Snippets (.sublime-snippet)
What I would like to do is add autocomplete/Intellisense support by importing the .sublime-completions file, either directly or by rewriting it somehow.
Is it even possible to add items to the autocomplete/Intellisense in VS Code?
It looks like it is possible if I create a Language Server extension. From the site:
The first interesting feature a language server usually implements is validation of documents. In that sense, even a linter counts as a language server and in VS Code linters are usually implemented as language servers (see eslint and jshint for examples). But there is more to language servers. They can provide code complete, Find All References or Go To Definition. The example code below adds code completion to the server. It simply proposes the two words 'TypeScript' and 'JavaScript'.
And some sample code:
import {
    createConnection,
    ProposedFeatures,
    TextDocumentPositionParams,
    CompletionItem,
    CompletionItemKind
} from 'vscode-languageserver';

// Create a connection for the server.
const connection = createConnection(ProposedFeatures.all);

// This handler provides the initial list of the completion items.
connection.onCompletion((textDocumentPosition: TextDocumentPositionParams): CompletionItem[] => {
    // The passed parameter contains the position in the text document in
    // which code completion was requested. For the example we ignore this
    // info and always provide the same completion items.
    return [
        {
            label: 'TypeScript',
            kind: CompletionItemKind.Text,
            data: 1
        },
        {
            label: 'JavaScript',
            kind: CompletionItemKind.Text,
            data: 2
        }
    ];
});

// This handler resolves additional information for the item selected in
// the completion list.
connection.onCompletionResolve((item: CompletionItem): CompletionItem => {
    if (item.data === 1) {
        item.detail = 'TypeScript details';
        item.documentation = 'TypeScript documentation';
    } else if (item.data === 2) {
        item.detail = 'JavaScript details';
        item.documentation = 'JavaScript documentation';
    }
    return item;
});
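One detail the excerpt leaves out: these handlers are only called if the server declares completion support during initialization. A minimal sketch continuing the snippet above (the extra import comes from the same vscode-languageserver package):
import { InitializeResult } from 'vscode-languageserver';

// Announce completion support (including resolve) to the client.
connection.onInitialize((): InitializeResult => {
    return {
        capabilities: {
            completionProvider: {
                resolveProvider: true
            }
        }
    };
});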
I have an application set up in Filepicker. The application uploads directly to my S3 bucket. The initial pickAndStore() call works well, but the follow-up convert() call always fails with a 403 error: "The FPFile could not be converted with the requested parameters". I have the following code:
try {
filepicker.setKey(apiKey);
filepicker.pickAndStore(
{
extensions : [ '.jpg','.jpeg','.gif','.png' ],
container : 'modal',
services : [ 'COMPUTER', 'WEBCAM', 'PICASA', 'INSTAGRAM', 'FACEBOOK', 'DROPBOX' ],
policy : policy,
signature : signature,
},
{
location : 'S3',
multiple : false,
path : path,
},
function(InkBlobs){
filepicker.convert(
InkBlobs[0],
{
width : 150,
height : 150,
fit : 'max',
align : 'faces',
format : 'png',
policy : policy,
signature : signature,
},
{
location : 'S3',
path : response.path + fileName + '.png',
},
function(InkBlob) {
console.log(InkBlob);
},
function(FPError) {
console.log(FPError);
}
);
},
function(InkBlobs){
console.log(JSON.stringify(InkBlobs));
}
);
} catch (e) {
console.log(e.toString());
}
The error handler function is always called. The raw POST response is...
"Invalid response when trying to read from
http://res.cloudinary.com/filepicker-io/image/fetch/a_exif,c_limit,f_png,g_face,h_150,w_150/https://www.filepicker.io/api/file/"
...with the rest of my credentials appended. The debug handler returns the previously mentioned message with the moreInfo parameter pointing to a URL "https://developers.filepicker.io/answers/jsErrors/142" which has no content on it about the error.
I thought the problem might be that using S3 directly means the file is not present on the Filepicker system to convert. I tried using the standard pick() function without any S3 uploading and then converting the resulting InkBlob. That produced exactly the same error message.
Any help would be appreciated.
In this instance, the error is in the combination of align: 'faces' and fit: 'max'. When using faces, you can only set fit to 'crop'.
As written, the command above asks Filepicker to find the faces but also to fit the image to the maximum allowed size, and those two options cannot be combined.
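A minimal sketch of the corrected convert() call, reusing the variables from the question and changing only fit:
filepicker.convert(
    InkBlobs[0],
    {
        width     : 150,
        height    : 150,
        fit       : 'crop',   // 'max' cannot be combined with align: 'faces'
        align     : 'faces',
        format    : 'png',
        policy    : policy,
        signature : signature,
    },
    {
        location : 'S3',
        path     : response.path + fileName + '.png',
    },
    function(InkBlob) {
        console.log(InkBlob);
    },
    function(FPError) {
        console.log(FPError);
    }
);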
Try removing the "path" option in the policy.
Specifying the path in the policy works well for pickAndStore(), but if you specify a path in your policy for convert(), Filepicker will give you a 403 error referencing the conversion parameters. It seems the API cannot tell whether the path refers to the source or the destination.
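For illustration, a sketch of what the policy object might look like on the server, before base64-encoding and signing, with no "path" restriction (the expiry value is a placeholder):
// Policy JSON generated server-side, then base64-encoded and HMAC-signed
// to produce the `policy` and `signature` values used in the question.
var policyJson = {
    expiry: 1508141504,                   // placeholder unix timestamp
    call: ['pick', 'store', 'convert']    // allowed calls; note: no "path" key
};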