I have an overlay defined in Nix, in my ~/.config/nixpkgs/overlays/tfmt.nix, that looks like this:
self: super: {
  tfmt = import ../dists/tfmt/default.nix {};
}
That works: I can install it on its own. It's a Haskell package, and I want to install it as part of my GHC install. So I have another overlay, in myHaskellEnv.nix, that looks like this:
self: super: {
  myHaskellEnv = super.haskellPackages.ghcWithHoogle
    (haskellPackages: with haskellPackages;
      [ tfmt ]);
}
Only, that complains that it can't see tfmt (error: undefined variable 'tfmt').
I can work around this by importing the package directly:
let tfmt = import ../dists/tfmt/default.nix {};
in self: super: {
  myHaskellEnv = super.haskellPackages.ghcWithHoogle
    (haskellPackages: with haskellPackages;
      [ tfmt ]);
}
but that defeats the reuse.
How can I use the one overlay from another? I tried referring to super.tfmt, but that shows the same issue.
An overlay is just a function from self and super to an attribute set (usually of packages). The same scoping rules apply as in any Nix function definition, so when you want to use something from a previous overlay it isn't magically in scope; you have to get it from self or super, where self represents the final package set and super represents the set as defined by the earlier overlays only.
By changing [ tfmt ] to [ self.tfmt ] you pick up the final definition of tfmt, including any overrides applied by subsequent overlays. Alternatively, you could take tfmt from super; that is less flexible and generally not recommended, but sometimes you need super to avoid cyclic definitions that result in infinite recursion during evaluation.
When writing overlays, you should avoid re-importing Nixpkgs, directly or indirectly. This matters for several reasons: <nixpkgs> may be the wrong version of Nixpkgs for someone else's use; even if you do get the version right, you will re-evaluate the Nixpkgs fixpoint, which takes time; and you will lose any configuration that was applied to the original Nixpkgs, like config, overlays, and cross-compilation arguments.
Instead, you should use the self and super attributes. In particular, super.callPackage comes in handy, but for Haskell packages you are best off overriding the Haskell package set, extending it with your own packages, for consistency.
Here's an example of this.
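(A sketch rather than a drop-in file: the tfmt name and the ../dists/tfmt/default.nix path come from the question, and it assumes that file is a cabal2nix-style expression whose arguments callPackage can fill in.)
self: super: {
  # extend the Haskell package set in place instead of re-importing anything
  haskellPackages = super.haskellPackages.override (old: {
    overrides = super.lib.composeExtensions (old.overrides or (_: _: {}))
      (hself: hsuper: {
        # callPackage resolves the Haskell dependencies from the same set
        tfmt = hself.callPackage ../dists/tfmt/default.nix {};
      });
  });

  # tfmt is now part of haskellPackages, so ghcWithHoogle can see it
  myHaskellEnv = self.haskellPackages.ghcWithHoogle
    (haskellPackages: with haskellPackages; [ tfmt ]);
}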
Also, I recommend keeping your number of overlays to the bare minimum. Only split them if it makes sense from a software distribution perspective - that's what they were intended for.
I would like to have a semantically named custom element that extends from button, like fab-button:
class FabButton extends HTMLButtonElement {
  constructor() {
    super();
    this.html = hyperHTML.bind(this);
  }
}
customElements.define("fab-button", FabButton);
Extending from HTMLButtonElement doesn't seem to work.
Is there a way to extend from something other than HTMLElement with hyperHTML's "document-register-element.js" polyfill?
Codepen example:
https://codepen.io/jovdb/pen/qoRare
It's difficult to answer this, because it's tough to understand where to start from.
The TL;DR solution, though, is here, but it needs a lot of explanation.
Extending built-ins is a ghost in the Web specs
It doesn't matter what WHATWG says: customized built-ins are a de-facto dead part of the specification, because WebKit strongly opposed them, and Firefox, as well as Edge (which never even shipped Custom Elements), didn't push to have them either.
Accordingly, as a starting point, extending built-ins is discouraged with the Custom Elements V1 specification.
You might have luck with V0, but that's a Chrome-only API and it's already one of those APIs created to die (R.I.P. WebSQL).
My polyfill follows specs by definition
The document-register-element polyfill, born with V0 but revamped for V1, follows the specification as closely as possible, which is why it makes extending built-ins possible: WHATWG still has that part in the spec.
That also means you need to understand how extending built-ins works.
Defining a simple class that extends HTMLButtonElement is not enough to get a button; you need to do at least three extra things:
// define via the whole signature
customElements.define(
  "fab-button",
  FabButton,
  {extends: 'button'}
);
... but also ... allow the polyfill to work
// constructor might receive an instance
// and such instance is the upgraded one
constructor(...args) {
  const self = super(...args);
  self.html = hyperHTML.bind(self);
  return self;
}
and, most important, its most basic representation on the page would look like this:
<button is="fab-button">+</button>
Bear in mind that with native ES6 classes super(...args) always returns the current context; the pattern above is there to cover polyfilled cases where the super constructor returns a different, already-upgraded instance.
The constructor is not your friend
As harsh as it sounds, Custom Element constructors are really only good for attaching Shadow DOM and adding event listeners - nothing else.
When an element is created, it is not necessarily upgraded yet; in fact, most likely it won't be upgraded at that point.
There are at least two bugs filed against every browser and three inconsistent behaviors around Custom Elements initialization, but your best bet is that once connectedCallback is invoked, you really do have the custom element's content.
connectedCallback() {
  this.slots = [...this.childNodes];
  this.render();
}
That way you are sure you render the component once it is live in the DOM, and not before there is even any content.
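Putting those pieces together, a minimal sketch of the whole component could look like the following (the render method and what it outputs are illustrative only, not part of any API, and it assumes hyperHTML and the document-register-element polyfill are already loaded):
class FabButton extends HTMLButtonElement {
  // the polyfill may hand the constructor an already-upgraded instance
  constructor(...args) {
    const self = super(...args);
    self.html = hyperHTML.bind(self);
    return self;
  }
  // children are reliably available only once the node is connected
  connectedCallback() {
    this.slots = [...this.childNodes];
    this.render();
  }
  // illustrative helper: re-project the original children through hyperHTML
  render() {
    return this.html`${this.slots}`;
  }
}
customElements.define("fab-button", FabButton, {extends: 'button'});
// in the markup: <button is="fab-button">+</button>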
I hope I've answered your question in a way that also warns you to stay away from customized built-ins if this is the beginning of a new project: they are unfortunately not a safe bet for the future of your application.
Best Regards.
The context
In a framework I'm currently building, I am using multiple structs (example) to store String constants. Let's say one looked like this:
public struct SpecificConstants {
    private init() {}
    public static let foo: String = "foo"
}
This is all nice and well. You can use the constant, it doesn't clutter global namespace, the struct name states the specific purpose of the constants which are defined in it.
Also, by making init() private, it is made clear inside the framework (it's open source) and outside of it that this struct should not be instantiated. It wouldn't hurt if you were to create an instance of it but it would also have no use at all. Also, the init would show up in autocomplete if it weren't private, which would annoy me :)
The problem
I'm proudly writing a lot of tests for the framework and I'm using Xcode's built-in coverage reporting (llvm-cov). Unfortunately, this coverage report shows the init as 'not covered'.
This is completely logical, since the init isn't being run by the tests, because it can't be.
To my distress, this prevents me from getting the good ol' 100% coverage.
Possible solutions
I could use lcov, which would enable me to use LCOV_EXCL_LINE or LCOV_EXCL_START and LCOV_EXCL_STOP to exclude the inits from the coverage.
Why not: I'd love not having to setup a different coverage tool when there's already a builtin tool in Xcode.
I could make the inits internally accessible, so I could gain access to them inside my unit tests by importing the module with @testable.
Why not: Though they would still be inaccessible from outside the framework, they would now be visible inside the framework, which I don't like. I'd like them to be darn private :D
I could live with my coverage never reaching 100%.
Why not: Because I just can't :).
The question
Is there any way (I could live with it being a bit, even quite hacky) to run this forsaken init in my unit tests while keeping it inaccessible from outside as well as inside the framework?
Move your String constants to a case-less enum; then you won't need a private init, because an enum without cases cannot be instantiated at all.
enum SpecificConstants {
    static let foo = "foo"
}
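For completeness, a quick sketch of how this behaves at the call site (the constant name is the one from the question):
let label = SpecificConstants.foo       // "foo", exactly as before
// let instance = SpecificConstants()   // compile-time error: no accessible initializers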
I'm looking for a way of condensing some of my AS3 code to avoid almost duplicate commands.
The issue is that I have multiple variables with almost the same name e.g. frenchLanguage, englishLanguage, germanLanguage, spanishLanguage
My Controller class contains public static variables (these are accessed across multiple classes) and I need a way to be able to call a few of these variables dynamically. If the variables are in the class you are calling them from, you can access them dynamically like this:
this["spanish"+"Language"]
In AS3 it's not possible to write something like:
Controller.this["spanish"+"Language"]
Is there any way to achieve this? Although everything is working I want to be able to keep my code as minimal as possible.
It is possible to access public static properties of a class this way (assuming the class name is Controller, as in your example):
Controller['propertyName']
I'm not sure how this helps to have "minimal code", but that would be a different topic/question, which might need some more details on what you want to achieve.
Having said that, I like the approach DodgerThud suggests in the comments of grouping similar values in a (dynamic) Object or Dictionary and giving it a proper name.
Keep in mind that if the string you pass in as the key to the class or dynamic object is created from (textual) user input, you should have some checks for the validity of that data, otherwise your program might crash or expose other fields to the user.
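For illustration, a sketch of that grouped approach (the field name languages and the Boolean values are made up; only the language names come from the question):
// in Controller: one static Object instead of four separate variables
public static const languages:Object = {
    french: false,
    english: true,
    german: false,
    spanish: false
};

// elsewhere: look the flag up by key, guarding against bad keys
var key:String = "spanish";
if (Controller.languages.hasOwnProperty(key)) {
    trace(Controller.languages[key]);   // false
} else {
    trace("unknown language: " + key);
}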
It would make sense to utilize a Dictionary object for a set of related variables like these: it provides solid logic and it happens to work...
I do not think this is what you are trying to accomplish. I may be wrong.
Classes in AS3 are always wrapped within a package - this is true whether you compile from Flash, Flex, AIR, or anything else...
Don't let Adobe confuse you: this was only done in AS3 to follow Java-based conventions. Regardless, a loosely typed language is often misunderstood, unfortunately. So:
this["SuperObject"]["SubObject"]["ObjectsMethod"]["ObjectsMethodsVariable"](args);
... is technically reliable because it sidesteps dot notation at compile time, but at runtime it will collect a lot of unnecessary data to maintain those kinds of calls.
If efficiency becomes an issue, use:
package packages {
    import flash.events.*;

    class This implements ISpecialInterface {
        // Data Objects and Function Model
        // for This Class
    }
}

package packages {
    import ...

    interface ISpecialInterface extends IEventDispatcher {
    }
}
I've heard of loads of different types of classes, but what does an Ambient class do and what exactly is it? How is it different from any other class?
I ask this after watching a few videos on TypeScript; they are always talking about ambient classes, but then they go on to define just regular old classes. To me, it seems no different from a normal class with variables and functions in it.
So, if someone can, please define what an ambient class is in a language-agnostic context, and what it means in TypeScript.
Ambient declarations are used to provide type information for some existing code.
For example, if you wrote the following in TypeScript:
module Example {
    export class Test {
        do() {
            return 'Go';
        }
    }
}
var test = new Example.
You would get auto-completion after the . to help you discover the Test class.
If you already had a whole load of JavaScript that you were using from some TypeScript code, you wouldn't get this auto-completion, and where you did get it, it would not be type-aware. Rather than re-writing the whole JavaScript file in TypeScript, you can write an ambient declaration for it instead.
For example, imagine that the following JavaScript file was much larger and would take a long time to re-write in TypeScript:
var Example;
(function (Example) {
    var Test = (function () {
        function Test() {
        }
        Test.prototype.do = function () {
            return 'Go';
        };
        return Test;
    })();
    Example.Test = Test;
})(Example || (Example = {}));
The ambient declaration contains the type information, but not the implementation:
declare module Example {
    export class Test {
        do(): string;
    }
}
This gives you full type-checking and auto-completion for your JavaScript without the need to re-write the whole thing in TypeScript.
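For example, hypothetical consuming code sitting next to that declaration could look like this (the file name example.d.ts is made up):
/// <reference path="example.d.ts" />
var test = new Example.Test();
var result: string = test.do();   // type-checked against the ambient declaration
// var wrong: number = test.do(); // would be a compile-time error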
When would you do this? Usually you write an ambient declaration when you are consuming a bunch of third-party JavaScript - you can't re-write it in TypeScript every time they update the library, so having an ambient declaration allows you to take the updates with minimal impact (you may have to add new features, but you never have to make changes due to implementation details). The ambient declaration acts as a contract that states what the third-party library does.
You can find out more by reading my guide to writing ambient declarations, and you can find a lot of existing ambient declarations for popular JavaScript libraries on Definitely Typed.
Say I have a class that looks like the following:
internal class SomeClass
{
    IDependency _someDependency;
    ...
    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }
}
And then I want to add functionality that is related but makes use of a different dependency to achieve its purpose. Perhaps something like the following:
internal class SomeClass
{
    IDependency _someDependency;
    IDependency2 _someDependency2;
    ...
    internal string SomeFunctionality_MakesUseofIDependency()
    {
        ...
    }
    internal string OtherFunctionality_MakesUseOfIDependency2()
    {
        ...
    }
}
When I write unit tests for this new functionality (or update the unit tests that I have for the existing functionality), I find myself creating a new instance of SomeClass (the SUT) whilst passing in null for the dependency that I don't need for the particular bit of functionality that I'm looking to test.
This seems like a bad smell to me, but the very reason I went down this path is that I found myself creating a new class for each piece of new functionality I introduced. That seemed like a bad thing as well, so I started trying to group similar functionality together.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
When every instance method touches every instance variable, the class is maximally cohesive. When no instance method shares an instance variable with any other, the class is minimally cohesive. While it is true that we like cohesion to be high, it's also true that the 80-20 rule applies: getting that last little increase in cohesion may require a mammoth effort.
In general if you have methods that don't use some variables, it is a smell. But a small odor is not sufficient to completely refactor the class. It's something to be concerned about, and to keep an eye on, but I don't recommend immediate action.
Does SomeClass maintain internal state, or is it just "assembling" various pieces of functionality? Could you rewrite it this way:
internal class SomeClass
{
    ...
    internal string SomeFunctionality(IDependency _someDependency)
    {
        ...
    }
    internal string OtherFunctionality(IDependency2 _someDependency2)
    {
        ...
    }
}
In this case, you may not be breaking SRP if SomeFunctionality and OtherFunctionality are somehow (functionally) related, which is not apparent from the placeholders.
And you get the added value of being able to select the dependency to use from the client, not at creation/DI time. Maybe some tests defining use cases for those methods would help clarify the situation: if you can write a meaningful test case where both methods are called on the same object, then you are not breaking SRP.
As for the Facade pattern, I have seen it go wild too many times to like it - you know, when you end up with a class of 50+ methods... The question is: why do you need it? For efficiency reasons, à la old-timer EJB?
I usually group methods into classes if they use a shared piece of state that can be encapsulated in the class. Having dependencies that aren't used by all methods in a class can be a code smell but not a very strong one. I usually only split up methods from classes when the class gets too big, the class has too many dependencies or the methods don't have shared state.
My question: should all dependencies of a class be consumed by all its functionality i.e. if different bits of functionality use different dependencies, it is a clue that these should probably live in separate classes?
It is a hint, indicating that your class may be a little incoherent ("doing more than just one thing"), but like you say, if you take this too far, you end up with a new class for every piece of new functionality. So you would want to introduce facade objects to pull them together again (it seems that a facade object is exactly the opposite of this particular design rule).
You have to find a good balance that works for you (and the rest of your team).
Looks like overloading to me.
You're trying to do one thing, and there are two ways to do it. At the SomeClass level, I'd have one dependency to do the work, then have that single dependent class support the two (or more) ways to do the same thing, most likely with mutually exclusive input parameters.
In other words, I'd have the same code you have for SomeClass, but define it as SomeWork instead, and not include any other unrelated code.
HTH
A Facade is used when you want to hide complexity (like an interface to a legacy system) or you want to consolidate functionality while being backwards compatible from an interface perspective.
The key in your case is why you have the two different methods in the same class. Is the intent to have a class which groups together similar types of behavior, even if it is implemented through unrelated code, as in aggregation? Or are you attempting to support the same behavior with alternative implementations depending on the specifics, which would hint at an inheritance/overloading type of solution?
The problem will be whether this class continues to grow, and in what direction. Two methods won't make a difference, but if this repeats with more than three, you will need to decide whether you want to declare it as a facade/adapter or whether you need to create child classes for the variations.
Your suspicions are correct, but the smell is just a wisp of smoke from a burning ember. You need to keep an eye on it in case it flares up, and then you need to decide how you want to quench the fire before it burns out of control.