AEM 6.4 upgrade - Cross-boundary resource type usage of internal marked path - aem

While upgrading to 6.4, we ran the pattern detector report and got the lines below in the ECU (Extraneous Content Usage) category. Is there any reference for fixing this issue?
Cross-boundary resource type usage of internal marked path /libs/cq/gui/components/projects/admin/projectteam referenced at
/apps/cq/core/content/projects/gadgets/xtrftranslationprojectsummary/jcr:content/content/items/form/items/fixedcolumns/items/column2/items/tabs/items/tab1/items/projectmembers
One more:
Cross-boundary resource type usage of internal marked path /libs/cq/gui/components/projects/admin/wizard/properties/thumbnail referenced at
/apps/cq/core/content/projects/wizard/xtrftranslationproject/defaultproject/items/column1/items/cover

As per the official documentation on Extraneous Content Usage, this means that your custom code uses components that are considered internal and are not part of the API. Both errors say the paths are referenced, so we're looking at simple use (rather than an overlay or inheritance based on sling:resourceSuperType). You simply have a couple of resources whose sling:resourceType values point at internal components, and that use is not officially supported or tested.
They may break at some point when you upgrade to a newer version of AEM or try to apply a hotfix.
The best way to go forward would be to stop using them and replace them with other components that are considered public and therefore supported. If no suitable replacement is available, you should consider replacing them with custom code that you control.
I'm not familiar with either cq/gui/components/projects/admin/projectteam or cq/gui/components/projects/admin/wizard/properties/thumbnail so I can't recommend any replacements. Any potential replacement should have the mixin type of either granite:PublicArea (can be used, overlaid or inherited), granite:AbstractArea (can be inherited but not overlaid or used directly) or granite:FinalArea (can be used but not inherited).
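If it helps, you can usually check how a given /libs component is classified by looking at the jcr:mixinTypes in CRXDE Lite on the component node (or one of its ancestors, since the classification typically applies to the subtree). A component that is safe to reference directly carries something like:

jcr:mixinTypes = [granite:PublicArea]

If the component you want to reference is marked granite:InternalArea, treat it as internal: copy the needed functionality into a component under /apps that you own and point your sling:resourceType there instead.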

Related

How to create a custom annotation processor for Eclipse

I'm trying to create a custom annotation processor that generates code at compilation time (as hibernate-jpamodelgen does). I've looked on the web and found custom annotation processors that work with Maven, but they do nothing when added to the Annotation Processing > Factory Path option. How can I create a processor that is compatible in this way? I have not found a tutorial that works.
My idea is, for example, to annotate an entity so that a base DTO, a base mapper, etc. are generated automatically and can be extended in the final code.
Thank you all
OK, I already found out the problem. The tutorial I had found didn't specify that, in order for the compiler to be able to apply the annotation processor, there must be a META-INF/services/javax.annotation.processing.Processor file that contains the fully qualified class name of the processor (or processors).
I created the file pointing to my processor class, generated the jar, added it to Annotation Processing > Factory Path, and everything worked correctly.
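For anyone who lands here, a minimal sketch of such a processor is shown below; the package and annotation names are made up for illustration, and the key point is the service registration file that must be packaged inside the jar.

// src/com/example/processor/DtoProcessor.java (package and names are illustrative)
package com.example.processor;

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// The jar must also contain META-INF/services/javax.annotation.processing.Processor
// with a single line: com.example.processor.DtoProcessor
@SupportedAnnotationTypes("com.example.GenerateDto")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
public class DtoProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element annotated : roundEnv.getElementsAnnotatedWith(annotation)) {
                // Generate the base DTO/mapper sources with processingEnv.getFiler() here.
                processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                        "Would generate DTO for " + annotated.getSimpleName());
            }
        }
        // Returning false leaves the annotations unclaimed, so later processors can still see them.
        return false;
    }
}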
Just be careful to keep the processors in the correct order (for example, the Hibernate model generator claims the classes, so no more generation will be done after it), and change the jar file name each time you want to replace the library (it seems Eclipse keeps a cache). These two things gave me a good headache.
Thanks all

Colliding aspects in PostSharp

I am using the PostSharp solution for INotifyPropertyChanged by decorating my business classes with the [NotifyPropertyChanged] attribute.
All works fine.
Now I wrote a custom aspect that handles property changes so that I get some custom flags set when certain special properties change. This aspect is named [HandlePropertyChanged] and works when used alone.
Now I try to use both aspects in combination. As I read on the PostSharp page, I can order them manually to ensure a fixed order by using
[NotifyPropertyChanged(AspectPriority = 0)]
[HandlePropertyChanged(AspectPriority = 1)]
In this case, I can build my solution, but because "NotifyPropertyChanged" runs before "HandlePropertyChanged", the changes on my properties are already done and the custom logic does not run correctly.
If I try this
[HandlePropertyChanged(AspectPriority = 0)]
[NotifyPropertyChanged(AspectPriority = 1)]
my build fails with the error at the bottom of this text (see below).
The best option would be to simply do what NotifyPropertyChanged does in my custom aspect and forget about the PostSharp aspect.
Is this possible?
0: Error C:\Source\WAVE\WAVE.Data.Contracts\Entities\Base\EntityBase.cs (17,16) PS0115: Conflicting aspects on "TopMotive.WAVE.Data.Contracts.Entities.Base.EntityBase`1": according to aspect dependencies, transformation "Instantiation of aspect PostSharp.Patterns.Model.NotifyPropertyChangedAttribute" should be located both before and after transformation "Instantiates binding collection for field "PostSharp.Patterns.Model.NotifyPropertyChangedAttribute/LocationBindings".".
This bug is fixed in PostSharp 5.0.52 and PostSharp 6.0.16 RC.
Try a superior and free alternative: Stephen Cleary's Calculated Properties.
https://github.com/StephenCleary/CalculatedProperties/blob/master/README.md
I have used both in production and found Calculated Properties to be far better than PostSharp's aspect.
Also from PostSharp docs:
"If a property getter calls a virtual method from its class or a delegate, or references a property of another object (without using canonical form this.field.Property), PostSharp will generate an error because it cannot resolve such a dependency at build time. The same limitations apply when your property getter contains complex data flows, such as loops, or calls to methods (except property getters) of other classes.
When this happens, you can either refactor your code so that it can be automatically analyzed by PostSharp, or you can take over the responsibility for analyzing the code"
None of those limitations apply to Calculated Properties. It can handle loops, virtual methods, LINQ to Objects, basically any runtime dependency you can imagine, no matter how indirect. The dependency graph rewires itself at runtime and just works without any ceremony. It is also fast.

Remove/Add References and Compile antique VB6 application using Powershell

I've been given the task of researching whether one can use PowerShell to automate the management of references in a VB6 application and then compile its projects afterwards.
There are 3 projects. The requirement is to remove a specific reference in each project, then compile the projects from the bottom up (server > client > interface), adding each reference back in along the way (remove the references, compile server.dll, re-add the reference to server.dll in the client project, compile client.dll, re-add the reference to client.dll in the interface project, compile interface.exe).
I'm thinking no, but I was still given the task of finding out for sure. Of course, where does one go to find this out? Why here of course, StackOverflow.
References are stored in the project .VBP files which are just text files. A given reference takes up exactly one line of the file.
For example, here is a reference to DAO database components:
Reference=*\G{00025E01-0000-0000-C000-000000000046}#5.0#0#C:\WINDOWS\SysWow64\dao360.dll#Microsoft DAO 3.6 Object Library
The most important info is everything to the left of the path, i.e. the part that contains the GUID (the unique identifier of the library, more or less). The filespec and description text are unimportant, as VB6 will update them to whatever it finds in the registry for the referenced DLL.
An alternate form of reference is for GUI controls, such as:
Object={BDC217C8-ED16-11CD-956C-0000C04E4C0A}#1.1#0; tabctl32.ocx
which for whatever reason never seem to have a path anyway. Most likely you will not need to modify this type of reference, because removing it would almost certainly break the forms in the project that rely on those controls.
So in your PowerShell script, the key task would be to add or remove the individual reference lines mentioned in the question. Unless you are using no binary compatibility at all, the GUID will remain stable, so you could essentially hardcode the strings you need to add/remove.
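As a rough, untested sketch of that line-level editing (the .vbp path below is a placeholder, and the reference line and GUID are just the DAO example from above):

# Placeholder project path; the reference line/GUID are the DAO example from above
$vbpPath = 'C:\Build\Server\Server.vbp'
$refGuid = '{00025E01-0000-0000-C000-000000000046}'
$refLine = 'Reference=*\G{00025E01-0000-0000-C000-000000000046}#5.0#0#C:\WINDOWS\SysWow64\dao360.dll#Microsoft DAO 3.6 Object Library'

# Remove the reference: keep every line that does not contain the GUID
(Get-Content $vbpPath) |
    Where-Object { $_ -notmatch [regex]::Escape($refGuid) } |
    Set-Content $vbpPath

# ... compile here, e.g. by running VB6.EXE /make against the .vbp ...

# Add the reference back, after the last existing Reference=/Object= line
$lines = [System.Collections.Generic.List[string]](Get-Content $vbpPath)
$insertAt = 0
for ($i = 0; $i -lt $lines.Count; $i++) {
    if ($lines[$i] -match '^(Reference|Object)=') { $insertAt = $i + 1 }
}
$lines.Insert($insertAt, $refLine)
Set-Content $vbpPath $lines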
Aside from all that, it's worth thinking through why you need to take this approach at all. Normally, to build a VB6 solution it is totally unnecessary to add/remove references along the way. Also, depending on your choice of deployment techniques, you are probably using either project or binary compatibility, which tends to keep the references stable.
Lastly, I'll mention that there are existing tools such as Kinook's Visual Build Pro which already know how to build groups of VB6 projects; if using a third-party tool like that is an option, it could save you a lot of work.

More than one V4L-DVB driver on the same host machine

I have a question related to V4L-DVB drivers. Following the Building/Compiling the Latest V4L-DVB Source Code link, there are 3 ways to compile. I am curious about the last one (the More "Manually Intensive" Approach). It allows me to choose the components that I wish to build and install using "make menuconfig". Some of these components (e.g. CONFIG_MEDIA_ATTACH) are used in preprocessor directives that define a function one way if the symbol is defined and another way if it is not (e.g. dvb_attach, dvb_detach) in the resulting modules (e.g. dvb_core.ko) that are loaded by most of the DVB drivers. What happens if there are two drivers (*.ko modules) on the same host machine, one that needs dvb_core.ko built with CONFIG_MEDIA_ATTACH defined and another that needs dvb_core.ko built with it undefined? Is there a clean way to handle this?
What is also not clear to me: since the V4L compilation environment seems very customizable (via the .config file), if I develop a driver using V4L-DVB structures, is there a big chance that it will conflict with other drivers, given that each driver has its own custom settings? Is my understanding correct?
Thanks!
Dave

SugarCRM $beanFiles array modification best practices

In SugarCRM, you can create your custom modules (e.g. MyModule) and they are kept in /modules just like stock objects, with any default metadata, views, language files, etc. For a custom module MyModule, you might have something like:
/modules/MyModule/MyModule.class.php
/modules/MyModule/MyModule.php
/modules/MyModule/language/
/modules/MyModule/metadata
And so on, so that everything is nicely defined and all the modules are kept together. The module is registered with the system by a file such as /custom/Extension/application/Ext/Include/MyModule.php with contents something like:
<?php
$beanList['MyModule'] = 'MyModule';
$beanFiles['MyModule'] = 'modules/MyModule/MyModule.php';
$moduleList[] = 'MyModule';
Obviously, the $beanFiles array references where we can find the base module's class, usually an extension of the SugarBean object. Recently I was advised that we can adjust that file's location for the sake of customization, and it makes sense to a degree. Setting it like $beanFiles['MyModule'] = 'custom/modules/MyModule/MyModule.php'; would allow us to access the base class via Module Loader even if the security scan tool prevents core file changes, and it would also allow us to not exactly extend, but replace, stock modules like Accounts or Calls without modifying core files and having system upgrades wipe out the changes.
So here's my question: what is the best practice here? I've been working with SugarCRM pretty intensely for several years and this is the first time I've ever been tempted to modify the $beanFiles array. My concern is that I'm deviating from best practice here, and also that somehow both files modules/MyModule/MyModule.php and custom/modules/MyModule/MyModule.php could be loaded, which would cause a class name conflict in PHP (i.e. because both classes are named MyModule). Obviously, any references to the class would need to be updated (e.g. an entryPoint that works with this module), but am I missing any potential ramifications?
Technically it should be fine, but I can see how the core version and your version could conflict if both are referenced. It all depends on the scenario, but I prefer to extend the core bean and find somewhere in the stack where I can have my custom version used in place of the core bean. I wrote up an example a couple of years ago here: https://www.sugaroutfitters.com/blog/safely-customizing-a-core-bean-in-sugarcrm
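To make that concrete, a minimal sketch of the extend-the-core-bean approach might look like this (the file locations and the CustomAccount name are illustrative; the linked write-up covers the exact registration details):

<?php
// custom/modules/Accounts/CustomAccount.php (illustrative location)
require_once 'modules/Accounts/Account.php';

class CustomAccount extends Account
{
    // Override or add behaviour here instead of copying the core bean wholesale.
    public function save($check_notify = false)
    {
        // custom pre-save logic goes here...
        return parent::save($check_notify);
    }
}

and then repoint the bean registry at the subclass:

<?php
// custom/Extension/application/Ext/Include/customAccount.php (illustrative location)
// Because the class name differs from the core Account class, both files can be
// loaded without a PHP class name collision.
$beanList['Accounts'] = 'CustomAccount';
$beanFiles['CustomAccount'] = 'custom/modules/Accounts/CustomAccount.php';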
For most use-cases, there's a way to hijack Sugar to use your bean at a given point.
If you can't get around it you can always grep to see where the core module is explicitly being included to ensure that there won't be conflict down the road.