I'm getting the following warning on VS Code
"Also define the standard property 'grid-row' for compatibility"
For this code:
header {
    -ms-grid-row: 1;         /* warning here */
    -ms-grid-column: 1;      /* warning here */
    -ms-grid-column-span: 2;
    grid-area: header;
}
How can I fix it?
You can get rid of the warning, if you want, by setting the following setting to ignore:
CSS > Lint: Vendor Prefix
When using a vendor-specific prefix, also include the standard property.
You are getting the warnings because you use one or more of these prefixed properties
-ms-grid-row
-ms-grid-column
in your rules without also using the standard non-prefixed versions alongside them:
grid-row
grid-column
So in every rule where you have -ms-grid-row, also include grid-row after it in the same selector, and do the same with -ms-grid-column: put a grid-column after it. The warnings will go away, and this is good practice anyway. For example:
header {
    -ms-grid-row: 1;          /* warning has gone away */
    grid-row: 1;
    -ms-grid-column: 1;
    -ms-grid-column-span: 2;
    grid-column: 1 / span 2;  /* standard equivalent of column 1 spanning 2 tracks */
    grid-area: header;
}
Or set CSS > Lint: Vendor Prefix to ignore and you won't see the warnings, but I do not recommend doing that. You should be including the standard non-prefixed versions of those properties.
Pretend I have an Emacs window open and it is split horizontally into three buffers:
===== BUFFER1.cpp ======
#include "CommonTypeDefs.h"
namespace NS1 {
...
void someFunction( void ) {
MyClassHandle mh = getMyClassHandle(); // getMyClassHandle(): returns a MyClassHandle
mh-> // SEMANTIC does not parse
...
-
===== CommonTypeDefs.h ======
...
typedef class NS2::MyClass* MyClassHandle;
...
-
===== BUFFER2.h ======
namespace NS2 {
...
class MyClass {
// things in here
...
-
I do not see why this extra layer (in this case a common header file full of typedefs) stops Semantic from picking up the possible completions.
-
I have also noticed that when Semantic works I am shown the line of the class definition, e.g.:
-
===== BUFFER1.h ======
SomeDe[]finedClass* sdc = getSomeDefinedClassPtr();
-
If I have the cursor over the type of sdc (see the square brackets), the Emacs minibuffer will say something like:
-
SomeDefinedClass.h: class SomeDefinedClass {}
-
This means that I know Semantic will be able to parse the symbols in SomeDefinedClass. Everything is good when this happens because there is no typedef involved. But in the case where things don't work:
-
===== BUFFER1.h ======
SomeDe[]finedClassHandle sdch = getSomeDefinedClassHandle();
-
Again, I have the cursor where the square brackets are but this time the minibuffer says:
-
CommonTypeDefs.h: typedef* SomeDefinedClassHandle {}
-
I guess this makes sense, because this type is indeed a typedef, but it seems like Semantic stops there rather than saying: okay, I am a typedef, but now I need to look at the symbols of the type I point to. Hence, I do not get completions.
-
I have confirmed that everything has been parsed because I have semantic-decoration-mode on. Also, Semantic is working for other things; it just doesn't seem to be able to handle these handles.
There is no problem with missing includes either, as there are no compilation errors. In this project there are a number of subsystems, which is why I have added the different directories to the system-c-dependency-include* path, and these all work fine.
Why is this not working?
Emacs 24.3 / Windows / CEDET (around the 26th Sept 2014 bzr release)
Using Doxygen version 1.8.4:
When AUTOLINK_SUPPORT = NO in the configuration file, the HTML See Also section references generated by \sa (or @see) are not active links to the referenced method. When AUTOLINK_SUPPORT = YES the \sa references are active links as expected.
This seems to be a relatively recent change in doxygen behaviour: I've been using AUTOLINK_SUPPORT = NO for years to avoid having to mark, with a '%' character, all the words in the description text that would otherwise produce undesired automatic links, and the See Also references had remained active links.
Is there a known workaround that enables \sa references to remain active links while still having global AUTOLINK_SUPPORT disabled?
Here is a trivial test file:
/** The Fooy class.
*/
class Fooy
{
public:
    /** foo
        The alternative is {@link bar(int, int)const the bar method}
        @param value A value.
        @return The value incremented.
        @see bar(int, int)const
    */
    int foo (const int value) const;

    /** bar
        The alternative is {@link foo(const int)const the foo method}
        @param value A value.
        @param number A number.
        @return The value plus the number.
        @see foo(const int)const
    */
    int bar (int value, int number) const;
};
Using the auto-generated Doxyfile (doxygen -g), doxygen version 1.8.8 produces, without any warnings, the expected HTML results: the {@link bar(int, int)const the bar method} syntax results in a link named "the bar method" (and likewise for the other @link), and the @see references result in the expected links carrying the method signatures.
If the Doxyfile is changed so that AUTOLINK_SUPPORT = NO, doxygen now produces HTML in which the @see signatures are no longer links, but which is otherwise the same.
If " the bar method" is removed from the first #link command doxygen outputs -
Fooy.hh:9: warning: unable to resolve link to `bar(int, int)' for \link command
And the resulting HTML has the single word "const" as the link to the html/classFooy.html file, instead of the method signature and correct link. However, if the single space is removed after the comma in the argument list of the method signature, the warning disappears and the link is now correct with the full signature text. Note that the space character in the foo argument list of the second @link command must remain to have a correct signature, so removing the text " the foo method" from that command will always result in incorrect parsing by doxygen. Restoring AUTOLINK_SUPPORT = YES does not change this behaviour. This suggests a flaw in the doxygen parser.
Back to the initial issue of getting the @see references to be links in the HTML output when AUTOLINK_SUPPORT = NO in the Doxyfile. Putting the method signatures in the @see commands inside {@link ...} wrappers fails as described above (it works correctly only if the method signature has no space character). If the wrappers are replaced with the new @link ... @endlink syntax (and the new syntax is also used for the other {@link ...} commands), then the HTML output from doxygen has the expected results, with full method signatures and correct links.
So the answer to this problem is that doxygen is not backwards compatible with previous in-code command syntax. Unless some workaround is known, all existing code files must be edited to accommodate the changes to the doxygen parser.
Sigh.
I have a basic function for printing messages at different verbosity levels in a Perl package:
# 0 = no output, 1 = debug output
our $verbosity = 0;

sub msg {
    my ($verbLevel, $format, @addArgs) = @_;
    if ($verbLevel <= $verbosity) {
        printf($format, @addArgs);
    }
}
This is IMO an elegant solution inside the package, because to print a debug message I can simply do:
msg(1, "Some debug message");
However, in practice this package is being 'used' in a long chain of packages, each of which also uses a verbosity feature. Let's say the chain of usage is like this: entry.pl > package0.pm > package1.pm > package2.pm. Each file must set the verbosity flag of the next in order for each to work right.
I now think this is an inelegant solution because of the duplicate code and the requirement for each "parent file" to set each of its children's verbosity levels. What I would like is for each *.pm file to inherit the verbosity level and function from entry.pl.
Is there a design pattern I can follow to share a verbosity functionality across packages? Is there a module out there that can already do this?
Perhaps look at Log::Log4perl, either as a model to work from for your own implementation or as a potential replacement.
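For the "model to work from" route, here is a minimal sketch, assuming a hypothetical shared package (MyProject::Log) that owns the single verbosity flag and exports msg(), so none of the intermediate packages has to forward the flag:

package MyProject::Log;    # hypothetical name for the one shared logging package
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = qw(msg set_verbosity);

our $verbosity = 0;    # 0 = no output, 1 = debug output; one flag for the whole program

sub set_verbosity { $verbosity = shift }

sub msg {
    my ($verbLevel, $format, @addArgs) = @_;
    printf($format, @addArgs) if $verbLevel <= $verbosity;
}

1;

entry.pl calls MyProject::Log::set_verbosity(1) once, and package0.pm through package2.pm just do use MyProject::Log qw(msg); and call msg(1, "..."), with no chain of flag-setting. Log::Log4perl's :easy mode is the ready-made version of the same idea: initialize it once in entry.pl and call the exported logging functions from any package.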
I have an API whose behavior I would like to change. I originally made the method is_success mean a "green light", but a "red light" is not exceptional; it still means the light is working properly. I would now like is_success to be false only in the event of a failure, and I have added is_green and is_red (note: there are also statuses "yellow" and "purple") to my API to complement specific checks (currently yellow and purple throw exceptions, but they may get status checks later).
Is there any good way to warn from the code that the behavior is changing, or has changed, while allowing those warnings to be turned off if the user is aware of the change? (Note: I have already put a deprecation notice in the change log.)
You could use Perl's lexical warnings categories. There is a built-in deprecated category, or you can register the package/module as its own warnings category.
{
    package My::Foo;
    use warnings;

    sub method {
        (@_ <= 2) or warnings::warnif('deprecated', 'invoking ->method with ... ');
    }
}

{
    package My::Bar;
    use warnings;
    use warnings::register;

    sub method {
        (@_ <= 2) or warnings::warnif('invoking ->method with ... ');
    }
}

{
    use warnings;
    My::Foo->method(1);
    My::Foo->method(1, 2);
    My::Bar->method(1, 2);
}

{
    no warnings 'deprecated';
    My::Foo->method(1, 2);

    no warnings 'My::Bar';
    My::Bar->method(1, 2);
}
See the warnings and perllexwarn documentation.
Yes, you can use the warn function. It will display warnings, but they can also be trapped by installing an empty sub as the $SIG{__WARN__} handler, which will stop the messages from being displayed.
# Warnings are discarded when this BEGIN block is in your code.
BEGIN {
    $SIG{'__WARN__'} = sub { };
}

# Prints the warning to STDERR, if $SIG{'__WARN__'} is left at its default.
warn "uh oh, this is deprecated!";
See the perldocs for more info and additional examples: http://perldoc.perl.org/functions/warn.html and http://perldoc.perl.org/perllexwarn.html.
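If you want callers to be able to silence only the deprecation messages rather than every warning, here is a sketch of a filtering handler (matching on /deprecated/ is just an assumed convention for your own warning text):

$SIG{'__WARN__'} = sub {
    my ($msg) = @_;
    return if $msg =~ /deprecated/;   # silently drop deprecation notices
    warn $msg;                        # re-emit everything else to STDERR (no recursion into this handler)
};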
I've always found the method used by MIME::Head useful and amusing.
This method has been deprecated. See "decode_headers" in MIME::Parser for the full reasons. If you absolutely must use it and don't like the warning, then provide a FORCE:
"I_NEED_TO_FIX_THIS"
Just shut up and do it. Not recommended.
Provided only for those who need to keep old scripts functioning.
"I_KNOW_WHAT_I_AM_DOING"
Just shut up and do it. Not recommended.
Provided for those who REALLY know what they are doing.
The idea is that the deprecation warning can be suppressed only by providing a magic argument that documents why the warning is being suppressed.
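A minimal sketch of that pattern (hypothetical package, method, and warning text; this is not MIME::Head's actual code): the method warns unless the caller passes the magic string, so every silenced call site documents itself.

package My::Light;    # hypothetical package illustrating the FORCE-style escape hatch
use strict;
use warnings;

sub is_success {
    my ($self, %args) = @_;

    # Warn about the behavior change unless the caller explicitly acknowledges it.
    unless (defined $args{Force} && $args{Force} eq 'I_KNOW_WHAT_I_AM_DOING') {
        warn "is_success now means 'did not fail', not 'is green'; use is_green for the old check";
    }

    # ... new behavior goes here ...
}

1;

A caller that has been updated writes $light->is_success(Force => 'I_KNOW_WHAT_I_AM_DOING') and sees no warning; everyone else keeps seeing it.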
I am looking to utilize the ICU library for transliteration, but I would like to provide a custom transliteration file for a set of specific custom transliterations, to be incorporated into the ICU core at compile time for use in binary form elsewhere. I am working with the source of ICU 4.2 for compatibility reasons.
As I understand it from the ICU Data page of their website, one way of going about this is to create the file trnslocal.mk within ICUHOME/source/data/translit/, and within this file have the single line TRANSLIT_SOURCE_LOCAL=custom.txt.
For the custom.txt file itself, I used the following format, based on the master file root.txt:
custom{
    RuleBasedTransliteratorIDs {
        Kanji-Romaji {
            file {
                resource:process(transliterator){"custom/Kanji_Romaji.txt"}
                direction{"FORWARD"}
            }
        }
    }
    TransliteratorNamePattern {
        // Format for the display name of a Transliterator.
        // This is the language-neutral form of this resource.
        "{0,choice,0#|1#{1}|2#{1}-{2}}" // Display name
    }
    // Transliterator display names
    // This is the English form of this resource.
    "%Translit%Hex"         { "%Translit%Hex" }
    "%Translit%UnicodeName" { "%Translit%UnicodeName" }
    "%Translit%UnicodeChar" { "%Translit%UnicodeChar" }
    TransliterateLATIN{
        "",
        ""
    }
}
I then store the file Kanji_Romaji.txt, as found here, within the directory custom. Because it uses > instead of the → I have seen in other files, I converted each entry appropriately, so they now look like:
丁 → Tei ;
七 → Shichi ;
When I compile the ICU project, I am presented with no errors.
When I attempt to utilize this custom transliterator within a test file, however (a test file that works fine with the built-in transliterators), I am met with the error output error: 65569:U_INVALID_ID.
I am using the following code to construct the transliterator and output the error:
UErrorCode status = U_ZERO_ERROR;
Transliterator *K_R = Transliterator::createInstance("Kanji-Romaji", UTRANS_FORWARD, status);
if (U_FAILURE(status))
{
std::cout << "error: " << status << ":" << u_errorName(status) << std::endl;
return 0;
}
Additionally, a loop over Transliterator::countAvailableIDs() and Transliterator::getAvailableID(i) does not list my custom transliteration. I remember reading, with regard to custom converters, that they must be registered within /source/data/mappings/convrtrs.txt. Is there a similar file for transliterators?
It seems that my custom transliterator is either not being built into the appropriate packages (though there are no compile errors), is improperly formatted, or somehow not being registered for use. Incidentally, I am aware of the RuleBasedTransliterator route at runtime, but I would prefer to be able to compile the custom transliterations for use in any produced binary.
Let me know if any additional clarification is necessary. I know there is at least one ICU programmer on here, who has been quite helpful in other posts I have written and seen elsewhere as well. I would appreciate any help I can find. Thank you in advance!
Transliterators are sourced from CLDR: you could add your transliterator to CLDR (the crosswire directory contains it in XML format in the cldr/ directory) and rebuild the ICU data. ICU doesn't have a simple mechanism for adding transliterators the way you are trying to. What I would do is forget about trnslocal.mk or custom.txt, as you don't need to add any files, and simply modify root.txt. You might also file a bug if you have a suggested improvement.