Clone rep:policy on AEM

I am currently working on a solution to clone/copy/back up my existing rep:policy, because some of our jobs accidentally remove it. I tried the following fix, but it is failing; it reports an invalid path.
javax.jcr.security.AccessControlException: OakAccessControl0006: Isolated policy node. Parent is not of type [rep:AccessControllable]
final Workspace ws = session.getWorkspace();
ws.copy("/etc/commerce/products/abccompany/TvPackChannelMap/rep:policy","/tmp/nxt/TvPackChannelMap/rep:policy");
Are there other ways to copy the rep:policy through code?

The best way forward is to make sure that your job does not touch the permissions or the rep:policy in the first place.
The exception could also occur because /etc/commerce/products/abccompany/TvPackChannelMap/rep:policy does not exist, or because the user whose session you are using does not have read access to the node.
Make sure the path is correct; copy and paste it into CRX/DE to confirm it exists.
I have tried your code to copy a rep:policy from one node to another, and it works fine. But I would not recommend copying permissions that way; the best practice is to use the Access Control Management API for all things permissions.
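For illustration, here is a minimal sketch of that approach, re-applying the entries from the source node of the question onto the target. Note that it only copies allow entries; deny entries would need the Jackrabbit extensions, and exception handling is omitted.
import javax.jcr.security.AccessControlEntry;
import javax.jcr.security.AccessControlList;
import javax.jcr.security.AccessControlManager;
import javax.jcr.security.AccessControlPolicy;
import javax.jcr.security.AccessControlPolicyIterator;

// 'session' is the JCR session from the question; paths likewise.
AccessControlManager acm = session.getAccessControlManager();
String srcPath = "/etc/commerce/products/abccompany/TvPackChannelMap";
String dstPath = "/tmp/nxt/TvPackChannelMap";

// Read the ACL bound to the source node.
AccessControlList srcAcl = null;
for (AccessControlPolicy policy : acm.getPolicies(srcPath)) {
    if (policy instanceof AccessControlList) {
        srcAcl = (AccessControlList) policy;
    }
}

// Get an editable ACL for the target: an applicable (not yet bound)
// policy first, otherwise the one already bound to the node.
AccessControlList dstAcl = null;
AccessControlPolicyIterator it = acm.getApplicablePolicies(dstPath);
while (it.hasNext()) {
    AccessControlPolicy policy = it.nextAccessControlPolicy();
    if (policy instanceof AccessControlList) {
        dstAcl = (AccessControlList) policy;
    }
}
if (dstAcl == null) {
    for (AccessControlPolicy policy : acm.getPolicies(dstPath)) {
        if (policy instanceof AccessControlList) {
            dstAcl = (AccessControlList) policy;
        }
    }
}

if (srcAcl != null && dstAcl != null) {
    for (AccessControlEntry ace : srcAcl.getAccessControlEntries()) {
        // JCR 2.0 addAccessControlEntry only creates allow entries; use
        // JackrabbitAccessControlList.addEntry(...) if you need denies.
        dstAcl.addAccessControlEntry(ace.getPrincipal(), ace.getPrivileges());
    }
    acm.setPolicy(dstPath, dstAcl);
    session.save();
}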

You can also install and use the Access Control Tool from Netcentric. It offers a JMX interface for exporting AC entries, and possibly some APIs you could use to implement your own solution.

Another approach is to retrieve the ACL permissions through the query language.
For example, SELECT * FROM [rep:ACL] or SELECT * FROM [rep:ACE] WHERE [rep:principalName] IS NOT NULL should give you the results.
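A minimal sketch of running the second query through the JCR query API (the session must be allowed to read the ACE nodes):
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

// 'session' is an existing JCR session with read access to the ACEs.
QueryManager qm = session.getWorkspace().getQueryManager();
Query query = qm.createQuery(
        "SELECT * FROM [rep:ACE] WHERE [rep:principalName] IS NOT NULL",
        Query.JCR_SQL2);
QueryResult result = query.execute();
for (NodeIterator nodes = result.getNodes(); nodes.hasNext();) {
    Node ace = nodes.nextNode();
    System.out.println(ace.getPath() + " -> "
            + ace.getProperty("rep:principalName").getString());
}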
For more information, I would recommend checking the ACS Commons ACL Packager implementation, which is available on GitHub.
Reference Link - https://github.com/Adobe-Consulting-Services/acs-aem-commons/blob/master/bundle/src/main/java/com/adobe/acs/commons/packaging/impl/ACLPackagerServletImpl.java

Related

How to clear the whole AEM dispatcher cache in a cloud manager deployment

I'd like to configure the Adobe Cloud Manager production pipeline to invalidate the whole dispatcher cache. What paths do I have to give in the production pipeline's dispatcher invalidation configuration to have that done? Is it possible to give a pattern here that matches everything? The page-invalidate description talks about a path-pattern, but doesn't describe what exactly that means.
We work with statfileslevel=2. It seems the .stat files are very important for that, though the description given here is unfortunately not precise enough; I'm not sure I understand it right.
I tried configuring /content as the path - that just touches /mnt/var/www/html/.stat (/mnt/var/www/html is the docroot), which seems to apply to pages like /* but not like /content/*.
If I give /content/oursite, that touches /mnt/var/www/html/content/.stat too, which seems to apply to pages like /content/oursite or /content/othersite, but not to pages like /content/oursite/about - for which, if I understand it right, /mnt/var/www/html/content/oursite/.stat would be relevant.
Do I seriously have to enumerate a page in each site that has a .stat file, or is there a more sensible way to get everything invalidated? After all, a deployment could easily change the HTML of every page if a component has changed.
If you have ACS Commons installed, then you can try its dispatcher flush rules feature:
https://adobe-consulting-services.github.io/acs-aem-commons/features/dispatcher-flush-rules/index.html
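As a rough sketch, such a rule is registered via an OSGi factory configuration. The PID and property names below are taken from the ACS Commons documentation, so treat them as assumptions and verify them against the version you have installed:
# /apps/myapp/config.author/com.adobe.acs.commons.replication.dispatcher.impl.DispatcherFlushRulesImpl-oursite.config
# (PID and property names are assumptions -- check your ACS Commons version)
prop.replication-action-type="INHERIT"
# When anything under /content/oursite is replicated, additionally flush
# /content/oursite itself, which touches its .stat file.
prop.rules.hierarchical=["/content/oursite/.*=/content/oursite"]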

Using MongoDB connection string in a Github repo

This might be kind of a weird question, but I have a full-stack project that uses MongoDB for the database, and I am about to put it in a GitHub repository. Obviously the connection string contains a username & password which I would rather not make public. Does anyone know of a more secure way of doing this?
The whole purpose of this project is to add it to my portfolio, so future employers can see it and potentially try it out, which means I want it to be as hassle-free as possible. I've never done this before, so I don't even know whether someone who wants to use it would have to set up their own Mongo database just to get it to work properly, or whether my database can be used by everybody who might want to try it out.
I don't really know what I am doing here.
You need to set up environment files and add them to your .gitignore.
Then use dotenv to read the values inside the file.
Step-by-step guide: https://www.coderrocketfuel.com/article/store-mongodb-credentials-as-environment-variables-in-nodejs
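A minimal sketch of that setup (the credentials and host shown are placeholders, and the variable name is just a convention):
# .env -- listed in .gitignore, so it never reaches GitHub
MONGODB_URI=mongodb+srv://myUser:myPassword@cluster0.example.mongodb.net/myDatabase

# .gitignore
.env
Once require('dotenv').config() has run, your application reads the string from process.env.MONGODB_URI instead of hard-coding it.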
You can also use mongodb://localhost as the default connection string, committing that to the repository and using something like dotenv to override the connection string in your application at runtime.

Deactivated page is available after package replication

This is the scenario (CQ5.6). Let's say there is the following node, /content/geometrixx/articles, with articles inside it. On the author instance I create a package as a backup of that node. Then I deactivate article1 inside articles; if I try to access the page I get a 404 page, which is fine. However, if I build the backup package again and then replicate it, the deactivated page (article1) is available; that is, I do not get the 404 but the article instead.
Is there a way to replicate a package while preserving deactivated pages? That is, how do I avoid re-activation?
Replicating a package means that you are replicating everything available in the package, which means the publish environment will have the deactivated pages as well. There are several ways to handle it:
The simplest way is to add a check in the template (as the first rule): if the environment is publish and the requested resource is not activated, return a 404 page (see the sketch after this list).
Another way is to create a script that deletes all deactivated pages, and run it on publish after page activation.
Or add exclude filters to your package to exclude such pages.
I would recommend option 1, as it is a one-time change and future proof.
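For option 1, a minimal sketch of the check: it assumes the resource adapts to com.day.cq.replication.ReplicationStatus (as CQ's adapter framework provides) and that a SlingSettingsService has been injected; verify the method names against your CQ version.
import javax.servlet.http.HttpServletResponse;
import org.apache.sling.settings.SlingSettingsService;
import com.day.cq.replication.ReplicationStatus;

// Inside a filter/component with the current request and response:
boolean onPublish = slingSettings.getRunModes().contains("publish");
ReplicationStatus status = request.getResource().adaptTo(ReplicationStatus.class);
if (onPublish && status != null && status.isDeactivated()) {
    // Behave as if the page did not exist on publish.
    response.sendError(HttpServletResponse.SC_NOT_FOUND);
    return;
}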
You should use tree activation instead: http://localhost:4502/etc/replication/treeactivation.html. It is much safer, since you have three options: Only Modified, Only Activated, and Ignore Deactivated.

Get object instances for a class

I have an instance file registered for a custom MultiDataObject in the System FileSystem entry Loaders/text/custom-mime-type/Factories.
My application creates these objects when I open a project, and my LogicalView creates the nodes for the files in that project.
I need to get a list of instances of that MultiDataObject type, but I have not found a way to achieve this.
I tried to get them using Lookups.forPath, but nothing is returned.
Any clue for this issue?
With some reflection magic you can get them from DataObjectPool - a package-private class in the Data Loaders module (see openide.loaders/src/org/openide/loaders/DataObjectPool.java in the NetBeans sources). There is no official API of this kind, intentionally.
I'd say there is something wrong if you need this information. Perhaps you would get better advice if you explained in more detail what you want to achieve. Asking on the NetBeans forum / mailing list will increase your chances even further.

External access to Magento instances

I've started investigating alternatives for my project, and a few questions came up that I couldn't answer by myself.
The problem is: I want to create a web page able to access multiple Magento instances installed on the same server. Currently I have one Magento instance per client, and this project will access several Magento instances to export reports from each one (for example).
The alternatives I have thought of so far:
Have another Magento instance, and create a new module within it that changes its 'database target' before triggering operations/queries.
Questions so far:
Can I 'change the database target' of a Magento instance?
How can I access data from a Magento instance without resorting to SOAP/REST?
I want to re-use some components (grids, tabs, forms..) from Magento; that's why I'm not considering an independent project (Zend, for instance) that accesses this code from other projects. Does that make sense?
Any other ideas?
==Edited==
Thanks for the tips, and sorry for my ignorance. The comments lead me to believe that I should be able to execute something like this:
// File myScript.php
require '/home/DOMAIN1/app/Mage.php';
Mage::app('default');
// get some products from DOMAIN1
require '/home/DOMAIN2/app/Mage.php';
Mage::app('default');
// get some products from DOMAIN2
Is that right? Can I execute require twice (and override things from the first require)?
==Edited2==
I'm still trying to connect to several Magento instances from a single third-party file. Any tips? I'm facing several different errors at the moment.
The only thing I know is that I can still rely on SOAP to get the information I need, but that would be expensive.
Thanks!
The easiest way would be to include Mage.php from each shop instance. You would need to use namespaces or some other trickery to be able to load more than one.
Or, if that doesn't work, make your own API in a separate file to get what you want from one shop, and combine the results in the PHP file that calls the API.
Here's a sample of how to use Magento functionality outside of Magento:
require 'app/Mage.php';
if (!Mage::isInstalled()) {
    echo "Application is not installed yet, please complete install wizard first.";
    exit;
}
Mage::app()->setCurrentStore(Mage_Core_Model_App::ADMIN_STORE_ID);
// your custom code here, for example, get the product model..
$productModel = Mage::getModel('catalog/product');