GetExportedValues<MyType> returns nothing, I can see the parts - mef

I have a strange MEF problem. I tested this in a test project and it all seems to work pretty well, but for some reason it is not working in the real project.
This is the exporting code:
public void RegisterComponents()
{
    _registrationBuilder = new RegistrationBuilder();
    _registrationBuilder
        .ForTypesDerivedFrom(typeof(MyType))
        .SetCreationPolicy(CreationPolicy.NonShared)
        .Export();

    var catalog = new AggregateCatalog();
    catalog.Catalogs.Add(new AssemblyCatalog(typeof(MyType).Assembly, _registrationBuilder));
    var directoryCatalog = new DirectoryCatalog(PathToMyTypeDerived, _registrationBuilder);
    catalog.Catalogs.Add(directoryCatalog);

    _compositionContainer = new CompositionContainer(catalog);
    _compositionContainer.ComposeParts();

    var exports = _compositionContainer.GetExportedValues<MyType>();
    Console.WriteLine("{0} exports in AppDomain {1}", exports.Count(), AppDomain.CurrentDomain.FriendlyName);
}
The exports count is 0 :( Any ideas why?
In the log file I have many entries like this:
System.ComponentModel.Composition Information: 6 : The ComposablePartDefinition 'SomeOtherType' was ignored because it contains no exports.
Though I would think this is OK, because I wasn't interested in exporting 'SomeOtherType'.
UPDATE: I found this link, but after debugging over it I am none the wiser; maybe I'm not following it properly.
Thanks for any pointers
Cheers

I just had the same problem and this article helped me a lot.
It describes different reasons why a resolve can fail. One of the more important ones is that the dependency of a dependency of the type you want to resolve is not registered.
What helped me a lot was the trace output that gets written to the Output window when you debug your application. It describes exactly why a type couldn't be resolved.
Even with this output, you might need to dig a little, because I only got one level deep.
Example:
I wanted to resolve type A and I got a message like this:
System.ComponentModel.Composition Warning: 1 : The ComposablePartDefinition 'Namespace.A' has been rejected. The composition remains unchanged. The changes were rejected because of the following error(s): The composition produced multiple composition errors, with 1 root causes. The root causes are provided below. Review the CompositionException.Errors property for more detailed information.
1) No exports were found that match the constraint:
    ContractName    Namespace.IB
    RequiredTypeIdentity    Namespace.IB
Resulting in: Cannot set import 'Namespace.A..ctor (Parameter="b", ContractName="Namespace.IB")' on part 'Namespace.A'.
Element: Namespace.A..ctor (Parameter="b", ContractName="Namespace.IB") --> Namespace.A --> AssemblyCatalog (Assembly="assembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=...")
But I clearly saw a part for Namespace.IB. So, in the debugger, I tried to resolve that one, and I got another trace output. This time it told me that my implementation of Namespace.IB couldn't be resolved because one of its imports had no matching export; basically the same message as above, just with different types. And this time, I didn't find a part for that missing import. Now I knew which type was the real problem and could figure out why no registration had happened for it.
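As a minimal sketch of that kind of chained failure (the types A, B, IB and IC below are hypothetical, not from the question), asking the container for the intermediate contract is what surfaces the next level of the rejection:

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;

public interface IB { }
public interface IC { }                 // no implementation of IC is exported anywhere

[Export]
public class A
{
    [ImportingConstructor]
    public A(IB b) { }                  // A needs IB
}

[Export(typeof(IB))]
public class B : IB
{
    [ImportingConstructor]
    public B(IC c) { }                  // B needs IC -- the real root cause
}

class Demo
{
    static void Main()
    {
        var container = new CompositionContainer(new TypeCatalog(typeof(A), typeof(B)));

        // A is silently rejected because B (the only IB export) was rejected first.
        Console.WriteLine(container.GetExportedValues<A>().Count());   // prints 0

        try
        {
            // Resolving the intermediate contract directly makes MEF report the
            // rejection of B; the trace output then names the missing IC export.
            container.GetExportedValue<IB>();
        }
        catch (ImportCardinalityMismatchException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}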

Related

Virtualized AutoComplete Error - Invalid prop `children` supplied to `ForwardRef(ListboxComponent)`

The Autocomplete Virtualization example at https://mui.com/components/autocomplete/#main-content produces an error in the console advising:
Failed prop type: Invalid prop children supplied to ForwardRef(ListboxComponent), expected a ReactNode.
Does anyone know what the error is referring to? The example works but I'd like to know where the error is coming from to better understand the example 🤓
The error seems to be coming directly from lines 109-111, where the propTypes of children are specifically being defined as node (typechecking). Because children is not a node, the error is thrown.
ListboxComponent.propTypes = {
  children: PropTypes.node,
};
If these lines are commented out, the error goes away; but are the children supposed to be nodes? It would seem they should be, since these lines were added to ensure exactly that, so why does it work without them? Does anyone know of any documentation/articles explaining this example in a little more depth?

Stop huge error output from testing-library

I love testing-library, have used it a lot in a React project, and I'm trying to use it in an Angular project now - but I've always struggled with the enormous error output, including the HTML text of the render. Not only is this not usually helpful (I couldn't find an element, here's the HTML where it isn't); but it gets truncated, often before the interesting line if you're running in debug mode.
I simply added it as a library alongside the standard Angular Karma+Jasmine setup.
I'm sure you could say the components I'm testing are too large if the HTML output causes my console window to spool for ages, but I have a lot of integration tests in Protractor, and they are SO SLOW :(.
I would say the best solution would be to use the configure method and pass a custom function for getElementError which does what you want.
You can read about configuration here: https://testing-library.com/docs/dom-testing-library/api-configuration
An example of this might look like:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
You can then put this in any single test file or use Jest's setupFiles or setupFilesAfterEnv config options to have it run globally.
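For example, a minimal global wiring might look like the following sketch (the setup file name and the @testing-library/dom import path are assumptions about your setup):

// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
};

// jest.setup.js -- runs before each test file
const { configure } = require('@testing-library/dom');

configure({
  getElementError: (message, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});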
I am assuming you are running Jest with RTL in your project.
I personally wouldn't turn it off, as it's there to help us, but everyone has their own way, so if you have your reasons, fair enough.
1. If you want to disable errors for a specific test, you can mock console.error:
it('disable error example', () => {
  const errorObject = console.error; // store the state of the object
  console.error = jest.fn(); // mock the object
  // code
  // assertion (expect)
  console.error = errorObject; // assign it back so you can use it in the next test
});
2. If you want to silence it for all the tests, you could use the jest --silent CLI option. Check the docs.
The above might even disable the DOM printing that is done by RTL; I am not sure, as I haven't tried it, but if you look at the docs I linked, it says
"Prevent tests from printing messages through the console."
If the above doesn't work, you almost certainly have everything disabled except the DOM output. In that case you might look into react-testing-library's source code and find out what is used for those print statements. Is it a console.log? Is it a console.warn? Once you know, just mock it out like option 1 above.
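A sketch of that, using Jest spies (assuming the printing really does go through console.log / console.warn):

// silence console.log / console.warn for the tests in this file, restore afterwards
let logSpy;
let warnSpy;

beforeEach(() => {
  logSpy = jest.spyOn(console, 'log').mockImplementation(() => {});
  warnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
});

afterEach(() => {
  logSpy.mockRestore();
  warnSpy.mockRestore();
});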
UPDATE
After some digging, I found out that all testing-library DOM printing is built on prettyDOM();
While prettyDOM() can't be disabled, you can limit the number of lines it prints to 0, which would just give you the error message and three dots ... below it.
Here is an example printout I messed around with:
TestingLibraryElementError: Unable to find an element with the text: Hello ther. This could be because the text is broken up by multiple elements. In this case, you can provide a function for your text matcher to make your matcher more flexible.
...
All you need to do is to pass in an environment variable before executing your test suite, so for example with an npm script it would look like:
DEBUG_PRINT_LIMIT=0 npm run test
Here is the doc
UPDATE 2:
As per the OP's FR on GitHub, this can also be achieved without injecting a global variable to limit the prettyDOM line output (in case it's used elsewhere). The getElementError config option needs to be changed:
dom-testing-library/src/config.js
// called when getBy* queries fail. (message, container) => Error
getElementError(message, container) {
  const error = new Error(
    [message, prettyDOM(container)].filter(Boolean).join('\n\n'),
  )
  error.name = 'TestingLibraryElementError'
  return error
},
The call stack can also be removed.
You can change how the message is built by overriding the DOM Testing Library's message-building function through its configuration. In my Angular project I added this to test.js:
configure({
  getElementError: (message: string, container) => {
    const error = new Error(message);
    error.name = 'TestingLibraryElementError';
    error.stack = null;
    return error;
  },
});
This was answered here: https://github.com/testing-library/dom-testing-library/issues/773 by https://github.com/wyze.

How do I customize wintersmith paginator?

I've been setting up a site with Wintersmith and am loving it for the most part, but I cannot wrap my head around some of the under-the-hood mechanics. I started with the "blog" skeleton that adds the paginator.coffee plugin.
The question requires some details, so up top, what I'm trying to accomplish:
1. Any files (markdown, html, json metadata) will be picked up either in /contents/articles/<file> or /contents/articles/<subdir>/<file>.
2. Output files are at /articles/YYYY/MM/DD/title-slug/.
3. /blog.html lists all articles, paginated.
4. Files just under /contents (not in articles) are not treated as blog posts. Markdown and JSON metadata are still processed, but there are no permalinked URLs, they are not included in blog listings, and the file/directory structure is copied over more directly.
So, I solved #1 with this suggestion: How can I have articles in Wintersmith not in their own subdirectory? So far, great, and #3 is working -- the paginated listing includes all posts. #4 has not been an issue, it's the default behavior.
On #2 I found this solution: http://andrewphilipclark.com/2013/11/08/removing-the-boilerplate-from-wintersmith-blog-posts/ . As the author mentions, his solution was (sort of) subsequently incorporated into Wintersmith master, so I tried just setting the filenameTemplate accordingly. Unfortunately this applies to all content, not just that under /articles, so the rest of my site gets hosed (breaking #4). So then I tried the author's approach, adding a blogpost.coffee plugin using his code. This generates all the files out of /contents/articles into the correct permalink URLs; however, the paginator now for some reason no longer sees files directly under /articles (point #1).
I've tried a lot of permutations and hacking. I tried changing the order in which the plugins are loaded. I tried having PaginatorPage extend BlogpostPage instead of Page. I tried a lot of things. I finally realized, even after inspecting many of the core classes in the Wintersmith source, that I do not understand what is happening.
Specifically, I cannot figure out how contents['articles']._.pages and .directories are set, which seems relevant. Nor do I understand what that underscore is.
Ultimately, Jade/CoffeeScript/Markdown are a great combo for minimizing coding and enhancing clarity, except when you want to understand what's happening under the hood and you don't know these languages. It took me a while to pick up enough of the basics of Jade and CoffeeScript (Markdown is trivial, of course) to follow what's happening; when I've had to dig into the Wintersmith source, it gets deeper. I confess I'm also a Node.js newbie, but I think the big issue here is just a magic framework. It would be helpful, for instance, if some of the core "plugins" were included in the skeleton site as opposed to buried in node_modules, just so curious hackers could see more quickly how things interconnect. More verbose docs would of course be helpful too. It's one thing to understand conceptually content trees, generators, views, templates, etc., but understanding the code flow and relations at runtime? I'm lost.
Any help is appreciated. As I said, I'm loving Wintersmith, just wish I could dispel magic.
Because CoffeeScript is rubbish, this is extremely hard to do. However, if you want to, you can throw away paginator.coffee and replace it with a simple JavaScript script that does a similar thing:
module.exports = function (env, callback) {
  function Page() {
    var rtn = new env.plugins.Page();
    rtn.getFilename = function() {
      return 'index.html';
    };
    rtn.getView = function() {
      return function(env, locals, contents, templates, callback) {
        var error = null;
        var context = {};
        env.utils.extend(context, locals);
        var buffer = new Buffer(templates['index.jade'].fn(context));
        callback(error, buffer);
      };
    };
    return rtn;
  }

  /** Generates a custom index page */
  function gen(contents, callback) {
    var p = Page();
    var pages = {'index.page': p};
    var error = null;
    callback(error, pages);
  }

  env.registerGenerator('magic', gen);
  callback();
};
Notice that due to 'CoffeeScript magic', there are a number of hoops to jump through here, such as making sure you return a Buffer from getView(), and 'manually' overriding rather than using the obscure CoffeeScript extension semantics.
Wintersmith is extremely picky about how it handles these functions. If callbacks are not invoked, or the returned value is not a Stream or Buffer, generated files will appear in the content summary but not be rendered to disk during a build. Enable verbose logging and check for 'skipping foo' messages to detect this.

Entity Framework issue when creating unit tests

I am specifically getting the following error:
"The specified named connection is either not found in the configuration, not intended to be used with the EntityClient provider, or not valid."
[TestMethod()]
public void salesOrderFillListTest()
{
    SalesOrderController_Accessor target = new SalesOrderController_Accessor();
    string orderNumber = "1954120";
    SalesOrderData result;
    result = target.FillingOrder(orderNumber);
    Assert.AreEqual(null, result.ErrorMessage);
    Assert.AreEqual(32, result.LineItems.Count);
    Assert.AreEqual("WRA-24-NFL-CLEV", result.LineItems[7].ItemNumber);
    Assert.AreEqual(2, result.LineItems[7].OrderQuantity);
    Assert.AreEqual(1, result.LineItems[7].FillingFilledQty);
    Assert.AreEqual(1, result.LineItems[7].FillingRemainQty);
}
The error is coming up on the line:
result = target.FillingOrder(orderNumber);
I'm a junior developer and haven't had much experience with the many possible causes of this error. My App.config file contains the appropriate connection strings. Any ideas where to look for this one?
Thanks!
I was able to successfully run the tests in another test project in the same solution. After looking more carefully I found a few differences between the connection strings in the two projects. The one in the failing test project had become outdated. I updated the connection string and the error was resolved.
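For reference, a named Entity Framework connection string in the test project's App.config looks roughly like the sketch below (the entity container, model and database names are made up); the error in the question typically means this entry is missing, misnamed, or not using the System.Data.EntityClient provider:

<configuration>
  <connectionStrings>
    <!-- "MyEntities" must match the name the generated context asks for -->
    <add name="MyEntities"
         connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True&quot;"
         providerName="System.Data.EntityClient" />
  </connectionStrings>
</configuration>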
Thanks.

How do I use add_to in Class::DBI?

I'm trying to use Class::DBI with a simple one parent -> many children relationship:
Data::Company->table('Companies');
Data::Company->columns(All => qw/CompanyId Name Url/);
Data::Company->has_many(offers => 'Data::Offer'=>'CompanyId'); # =>'CompanyId'
and
Data::Offer->table('Offers');
Data::Offer->columns(All => qw/OfferId CompanyId MonthlyPrice/);
Data::Offer->has_a(company => 'Data::Company'=>'CompanyId');
I try to add a new record:
my $company = Data::Company->insert({ Name => 'Test', Url => 'http://url' });
my $offer = $company->add_to_offers({ MonthlyPrice => 100 });
But I get:
Can't locate object method "add_to_offers" via package "Data::Company"
I looked at the classical Music::CD example, but I cannot figure out what I am doing wrong.
I agree with Manni: if your package declarations are in the same file, then you need to have the class with the has_a() relationship defined first. Otherwise, if they are in different source files, the documentation states:
"Class::DBI should usually be able to do the right things, as long as all classes inherit Class::DBI before 'use'ing any other classes."
As to the three-argument form, you are doing it properly. The third arg for has_many() is the column in the foreign class which is a foreign key to this class. That is, Offer has a CompanyId which points to Company's CompanyId.
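A minimal single-file sketch of that ordering (connection setup omitted; the two-argument has_a() form below is the one documented for Class::DBI, with the foreign-key column as the accessor):

package Data::Offer;                 # the has_a() side is defined first
use base 'Class::DBI';
Data::Offer->table('Offers');
Data::Offer->columns(All => qw/OfferId CompanyId MonthlyPrice/);
Data::Offer->has_a(CompanyId => 'Data::Company');

package Data::Company;               # the has_many() side comes second
use base 'Class::DBI';
Data::Company->table('Companies');
Data::Company->columns(All => qw/CompanyId Name Url/);
Data::Company->has_many(offers => 'Data::Offer', 'CompanyId');

1;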
Thank you
Well, the issue was actually not my code, but my setup. I realized that this morning, after powering on my computer:
* Apache + mod_perl on the server
* SMB mount
When I made changes to several files, not all the changes seemed to be picked up by mod_perl. Restarting Apache solved the issue. I've actually seen this kind of issue in the past when the client's and the SMB server's clocks are out of sync.
The code above works fine with one file for each module.
Thank you
I really haven't got much experience with Class::DBI, but I'll give this a shot anyway:
The documentation states that: "the class with the has_a() must be defined earlier than the class with the has_many()".
I cannot find any reference to the way you are using has_a() and has_many() with three arguments, where the third is always 'CompanyId' in your case.