In Go, when I import a package, its init() gets executed (before main(), I assume?), and it is possible that some error is generated in this function. How can I capture these errors and handle them in my own code?
Errors in Go are return values, as you may know. As init() does not return anything, the only alternative is to panic() in init if anything goes wrong.
A package that panics on init is arguably not very well designed, although there may be valid use cases for this.
Given the circumstances, recover() is not an option, because init is run before main. So if you can't edit the package in question, then you're out of luck.
This is one of the reasons why panic and recover should be used sparingly, only in situations where literally "panicking" makes sense.
@twotwotwo contributed the following quote from "Effective Go" that describes this (for the init case):
if the library truly cannot set itself up, it might be reasonable to panic, so to speak
So: if your init function needs to report an error, then ask yourself if that code really belongs in init or would be better kept somewhere else. If it really has to be init, consider setting an error flag inside of the package and document that any client must check that error.
Yes, package init() functions run before the main() function; see Package initialization in the language specification.
And no, you can't handle errors that occur in package init() functions. Even if you could, that would mean a package your program depends on failed to initialize, and you wouldn't know what to expect from it.
Package init() functions have no return values, and they can't panic in a way that is meant to be recovered from. If an init() function panics, the program terminates.
Since init() functions are not called by you (e.g. from the main() function), you can't recover from there. It is the responsibility of the package itself to handle errors during its initialization, not the users of the package.
One option to signal an error happening during init() is to store the error state in a variable (e.g. exported, or unexported but queryable via an exported function). But this should be used only if it is reasonable to continue, and it is the task/responsibility of the package itself (to store/report the error), not of the users of the package. You can't do this without the cooperation of the package (you can't "catch" unhandled/unreported errors and panics).
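To illustrate why recover() is not an option here, below is a minimal, self-contained sketch: the panic happens during package initialization, so main (and any recover deferred inside it) never runs.

package main

import "fmt"

func init() {
    // Panics during program initialization, before main is ever called.
    panic("init: cannot set up")
}

func main() {
    // Never reached: init panicked before main started, so this
    // deferred recover is never registered.
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }()
    fmt.Println("main running")
}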
Not directly, but you could use something like this:
package mypkg

var InitErr error
var Foo MyFoo

func init() {
    Foo, InitErr = makeInitialisation()
    // ...
}
And then in your main:
package main

import "foo/bar/mypkg"

func main() {
    if mypkg.InitErr != nil {
        panic(mypkg.InitErr)
    }
    // ...
}
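A variant of the same pattern, following the earlier suggestion of keeping the error unexported and exposing it through an exported function (a sketch; MyFoo and makeInitialisation are the same placeholder names as above):

package mypkg

var (
    foo     MyFoo
    initErr error
)

func init() {
    foo, initErr = makeInitialisation()
}

// InitErr reports any error that occurred during package initialization.
func InitErr() error { return initErr }

// Foo returns the initialized value; callers should check InitErr first.
func Foo() MyFoo { return foo }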
I am writing a program that uses aws-sdk-go-v2 and receives a string input from the user that determines what storage class to use when storing an object in S3. I have to validate that the input is an allowed value, and if it is not, I give a list of allowed values.
In v1 of aws-sdk-go, you could call s3.StorageClass_Values() to enumerate the allowed StorageClass values.
func StorageClass_Values() []string
Example:
// v1.go
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    fmt.Println(s3.StorageClass_Values())
}
$ go run v1.go
[STANDARD REDUCED_REDUNDANCY STANDARD_IA ONEZONE_IA INTELLIGENT_TIERING GLACIER DEEP_ARCHIVE OUTPOSTS]
But in aws-sdk-go-v2, a StorageClass type was introduced, and the function that enumerates the values must be called on a value of that type.
From the docs:
func (StorageClass) Values() []StorageClass
This seems to require an initialized variable to call? Why is this the case? What's the idiomatic way to call this function?
I've managed to get it to work in two different ways, and both seem wrong.
// v2.go
package main

import (
    "fmt"

    s3Types "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
    // Create uninitialized StorageClass variable and call .Values()
    var sc s3Types.StorageClass
    fmt.Println(sc.Values())

    // One-liner that uses one of the types directly:
    fmt.Println(s3Types.StorageClassStandard.Values())
}
$ go run v2.go
[STANDARD REDUCED_REDUNDANCY STANDARD_IA ONEZONE_IA INTELLIGENT_TIERING GLACIER DEEP_ARCHIVE OUTPOSTS]
[STANDARD REDUCED_REDUNDANCY STANDARD_IA ONEZONE_IA INTELLIGENT_TIERING GLACIER DEEP_ARCHIVE OUTPOSTS]
The one-liner is better because it is more concise, but I have to reference one of the storage classes, which doesn't have a particular meaning, so it feels wrong.
Which one should I use and why?
I wish they had simply kept the calling convention from v1. The Values() function in v2 doesn't even use the value it is called on.
I totally agree with you that this is an odd API design. The (StorageClass) Values() method does not use its receiver. This is the SDK code:
func (StorageClass) Values() []StorageClass {
    return []StorageClass{
        "STANDARD",
        "REDUCED_REDUNDANCY",
        "STANDARD_IA",
        "ONEZONE_IA",
        "INTELLIGENT_TIERING",
        "GLACIER",
        "DEEP_ARCHIVE",
        "OUTPOSTS",
    }
}
I assume this is because the code is generated from some common representation that is used to create SDKs for various languages.
In my opinion, the one-liner is the way to go, because it avoids introducing an unused variable:
s3Types.StorageClassStandard.Values()
Introducing a new variable would, however, highlight the fact that the value on which Values() is invoked carries no meaning.
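For completeness, here is a minimal sketch of the validation described in the question, using the one-liner form. The helper function and error handling are my own, not part of the SDK:

package main

import (
    "fmt"
    "os"

    s3Types "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// isValidStorageClass reports whether s matches one of the storage
// classes enumerated by the SDK.
func isValidStorageClass(s string) bool {
    for _, sc := range s3Types.StorageClassStandard.Values() {
        if s == string(sc) {
            return true
        }
    }
    return false
}

func main() {
    if len(os.Args) < 2 || !isValidStorageClass(os.Args[1]) {
        fmt.Println("allowed storage classes:", s3Types.StorageClassStandard.Values())
        os.Exit(1)
    }
    fmt.Println("using storage class:", s3Types.StorageClass(os.Args[1]))
}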
I have an overlay defined in nix, in my ~/.config/nixpkgs/overlays/tmft.nix, that looks like this:
self: super: {
  tfmt = import ../dists/tfmt/default.nix {};
}
That's fine, I can install it fine. It's a haskell package, and I want to install it as part of my ghc install. So I have another overlay, in myHaskellEnv.nix, that looks like this:
self: super: {
  myHaskellEnv = super.haskellPackages.ghcWithHoogle
    (haskellPackages: with haskellPackages;
      [ tfmt ]);
}
Only, that complains that it can't see tfmt (error: undefined variable 'tfmt').
I can work around this by importing the package directly:
let tfmt = import ../dists/tfmt/default.nix {};
in self: super: {
  myHaskellEnv = super.haskellPackages.ghcWithHoogle
    (haskellPackages: with haskellPackages;
      [ tfmt ]);
}
but that defeats the reuse.
How can I use the one overlay from another? I tried referring to super.tfmt, but that shows the same issue.
An overlay is just a function from self and super to an attribute set (usually of packages). The same scoping rules apply as in any Nix function definition. So when you want to use something from a previous overlay, it isn't magically in scope; you have to get it from self (the final package set) or super (the package set as defined by earlier overlays only).
By changing [ tfmt ] to [ self.tfmt ] you should be able to insert the final definition of tfmt - as it may be overridden in subsequent overlays. Alternatively, you could get tfmt from super, which is not recommended because it is less flexible, but sometimes you need super to avoid creating cyclic definitions that result in infinite recursions during evaluation.
When writing overlays, you should avoid re-importing Nixpkgs, directly or indirectly. This is important because <nixpkgs> may be the wrong version of Nixpkgs for someone's use, and even if you do get the version right, you will re-evaluate the Nixpkgs fixpoint, which takes time, and you will lose any configuration that was applied to the original Nixpkgs, such as config, overlays, and cross-compilation arguments.
Instead, you should use the self and super attributes. In particular, super.callPackage comes in handy, but for Haskell packages you'd best override a haskell package set, extending it with your own packages, for consistency.
Here's an example of this.
Also, I recommend keeping your number of overlays to the bare minimum. Only split them if it makes sense from a software distribution perspective - that's what they were intended for.
The context
In a framework I'm currently building, I am using multiple structs (example) to store String constants. Let's say one looked like this:
public struct SpecificConstants {
    private init() {}

    public static let foo: String = "foo"
}
This is all well and good. You can use the constant, it doesn't clutter the global namespace, and the struct name states the specific purpose of the constants defined in it.
Also, by making init() private, it is made clear inside the framework (it's open source) and outside of it that this struct should not be instantiated. It wouldn't hurt if you were to create an instance of it but it would also have no use at all. Also, the init would show up in autocomplete if it weren't private, which would annoy me :)
The problem
I'm proudly writing a lot of tests for the framework and I'm using Xcode's internal coverage reporting (llvm-cov). Unfortunately, this coverage reporting shows the init as 'not covered'.
This is completely logical, since the init isn't being run by the tests, because it can't be.
To my distress, this prevents me from getting the good ol' 100% coverage.
Possible solutions
I could use lcov, which would enable me to use LCOV_EXCL_LINE or LCOV_EXCL_START and LCOV_EXCL_STOP to exclude the inits from the coverage.
Why not: I'd love not having to setup a different coverage tool when there's already a builtin tool in Xcode.
I could make the inits internally accessible so I could gain access to them inside my unit tests by importing the module with @testable.
Why not: Though they would still be inaccessible from outside the framework, they would now be visible inside the framework, which I don't like. I'd like them to be darn private :D
I could live with my coverage never reaching 100%.
Why not: Because I just can't :).
The question
Is there any way (I could live with it being a bit, even quite hacky) to run this forsaken init in my unit tests while keeping it inaccessible from outside as well as inside the framework?
Move your String constants to an enum; then you won't need a private init.
enum SpecificConstants {
    static let foo = "foo"
}
I am developing an iOS application and am trying to integrate Typhoon into the testing. I am currently trying to mock out a dependency in a view controller that comes from the storyboard, so within my assembly:
public dynamic var systemComponents: SystemComponents!

public dynamic func storyboard() -> AnyObject {
    return TyphoonDefinition.withClass(TyphoonStoryboard.self) {
        (definition) in
        definition.useInitializer("storyboardWithName:factory:bundle:") {
            (initializer) in
            initializer.injectParameterWith("Main")
            initializer.injectParameterWith(self)
            initializer.injectParameterWith(NSBundle.mainBundle())
        }
    }
}
I want to create a CameraModeViewController (the class I am unit testing) with its dependency upon a system-camera-functions-providing protocol mocked out. The dependency is dynamic var cameraProvider: CameraAPIProvider?. I think I correctly created a replacement collaborating assembly to replace systemComponents; MockSystemComponents is a subclass of SystemComponents that overrides functions. This is where I inject the mock:
let assembly = ApplicationAssembly().activateWithCollaboratingAssemblies([
    MockSystemComponents(camera: true)
])
let storyboard = assembly.storyboard()
subject = storyboard.instantiateViewControllerWithIdentifier("Camera-Mode") as! CameraModeViewController
The next line of code in the tests is let _ = subject.view, which I learned is a trick to call viewDidLoad and get all the storyboard-linked IBOutlets, one of which is required for this test.
However, I am getting a very mysterious result: sometimes, but not always, all the tests fail because in viewDidLoad I make a call to the dependency (cameraProvider), and I get an "unrecognized message sent to class" error. The error seems to indicate that at the time the message is sent (which is a correct instance method in the protocol CameraAPIProvider) the field is currently a CLASS and not an instance: it interprets the message as +[MockSystemCamera cameraStreamLayer], as reported in the error message.
~~~BUT~~~
Here's the kicker: if I add a breakpoint between the calls to assembly.storyboard() and subject.view, the tests always pass. Everything is set up correctly, and the message is correctly sent to an instance without this "class method" bogus interpretation. Therefore, I have to wonder if Typhoon does some kind of asynchronous procedure in the injection that I have to wait for? Possibly only when dealing with storyboard-delivered view controllers? And if so, is there any way to make sure it blocks?
After digging around in Typhoon's source for a while, I get the impression that in the TyphoonDefinition(Instance Builder) initializeInstanceWithArgs:factory: method there is an __block id instance that is temporarily a Class type, and then is replaced with an instance of that type; and possibly this can be called asynchronously without blocking, so the injected member is left as a Class type?
UPDATE: Adding the code for MockSystemComponents(camera:). Note that SystemComponents inherits from TyphoonAssembly.
@objc
public class MockSystemComponents: SystemComponents {
    var cameraAvailable: NSNumber

    init(camera: NSNumber) {
        self.cameraAvailable = camera
        super.init()
    }

    public override func systemCameraProvider() -> AnyObject {
        return TyphoonDefinition.withClass(MockSystemCamera.self) {
            (definition) in
            definition.useInitializer("initWithAvailable:") {
                (initializer) in
                initializer.injectParameterWith(self.cameraAvailable)
            }
        }
    }
}
UPDATE #2: I tried replacing the constructor injection in the MockSystemComponents.systemCameraProvider() with a property injection. Different issue, but I suspect it's equivalent in cause: now, the property that is injected (declared optional) is still nil some of the time when I go to unwrap it (but not always -- probably about 4/5 of test runs fail, about the same as before).
UPDATE #3: I have tried using the following code block, using factory construction according to this answer (note that setting factory directly didn't work as that OP did, but I think I correctly used the feature added in response to Jasper's issue). The results are the same as when using property injection (like Update #2 above), so no dice there.
This issue was in fact arising even before the instantiation call. The problem is that assemblies aren't generally intended to be stateful. There are a few ways to get around this, but the one I used -- having a member variable and an initializer method -- is NOT recommended. The problem with doing this is that in the activateWithCollaboratingAssemblies method, all the instance methods of the assembly are enumerated for definitions, and initializers will actually get called on the collaborating assembly. Consequently, even if you create your assembly with an initializer, it may get called again with a bogus value.
Note that the reason there appeared to be async behavior is actually that definitions are assembled in a nondeterministic order (a consequence of storing them in an NSDictionary). This means that if activateWithCollaboratingAssemblies happens to enumerate methods which depend on state first, they'll work fine; but if the initializer is enumerated first, and the state is destroyed, definitions that are created afterwards will be borked.
I am following MyFancordionRunner example from Fancordion v1.0.4 official documentation to test a BedSheet application, but the suiteSetup method (see below) is not being called and the server remains null, causing the fixture tests to fail with a NullPointerException.
override Void suiteSetup() {
    super.suiteSetup
    server = BedServer(AppModule#.pod).addModule(WebTestModule#).startup
}
Looking at FancordionRunner source code, the runFixture(Obj fixtureInstance) method should be invoking suiteSetup() the first time a Fixture is run as per this piece of code...
FixtureResult runFixture(Obj fixtureInstance) {
    ...
    locals := Locals.instance
    firstFixture := (locals.originalRunner == null)
    if (firstFixture) {
        locals.originalRunner = this
        suiteSetup()
        ...
    }
But for some reason, in my case the condition (locals.originalRunner == null) must be returning false, causing the suiteSetup() invocation to be skipped. It seems that this piece of code uses Fantom Actors, which I'm not familiar with.
I am manually invoking the suiteSetup within MyFancordionRunner like this:
override Void fixtureSetup(Obj fixtureInstance) {
    if (server == null) suiteSetup
    ...
This workaround solves the NullPointerException issue and allows the fixtures to run successfully but I don't know if this workaround is defeating the purpose of the Actor logic, which I presume is meant to invoke suiteSetup only once.
Can anyone explain what could be going on here that is preventing the suiteSetup method from being called within runFixture(...), please?
I don't know what's going on here without seeing a lot more code.
The only part of Actor being used is Actor.locals(), which is really just a pot to hold thread-local variables, as it is assumed that all tests are run in the same thread.
As you've shown, the logic in runFixture() is pretty simple; are you sure it is being called?