In Agents on author, I have a static replication agent.
So what is the benefit of using a static replication agent in AEM?
The OOTB static agent (if configured properly) will produce static representations of nodes on the file system upon modification. There are only a few use cases for this:
A static representation of the repository on the file system (as the name says). This can be used for non-Dispatcher-modelled proxy servers.
Backup and versioning outside the repository. For example, if financial services regulations require static snapshots of each modification on the system for archiving purposes.
Quick extraction of media from the repository for larger asset-based projects where media (images) can be consumed by non-AEM systems directly from disk storage. A good example would be ffmpeg manipulation of videos.
None of the above is particularly useful in modern architectures/practices, as there are better ways of meeting these data-extraction and archiving requirements.
As Adobe documentation says:
This is an "Agent that stores a static representation of a node into the filesystem.".
For example with the default settings, content pages and dam assets are stored under /tmp, either as HTML or the appropriate asset format. See the Settings and Rules tabs for the configuration.
This was requested so that when the page is requested directly from the application server the content can be seen. This is a specialized agent and (probably) will not be required for most instances.
I'm developing an ASP.NET MVC project that will be hosted on Amazon AWS, but I have some questions about storage of the clients' files. The documentation from Amazon is not clear to me and I'm looking for some directions and experiences here.
1 - Each client has a few files with low disk-space requirements and low update frequency but very high access frequency (like brand images and even sensitive files like certificates). Is it appropriate to store these files in the App_Data folder on the web server?
2 - The most critical to me are the sensitive documents (from hundreds to dozens of thousands per client, mostly signed XML files). These files have a medium read-access frequency but a very high creation rate. One solution I found is MongoDB, which gives me some freedom to manage the storage policy and makes external backups easy, but I'm not sure about it. Other options are to use Amazon storage and handle all these files and GBs there in a lot of folders, or maybe to use a regular database and save the files as XML or binary.
My concerns are about the amount of data, the security, and the reliability in case of disaster, as most of these documents have legal value.
You could, but storing them locally violates the shared-nothing architecture and would limit your scaling options. Amazon S3 is a good option here. You can make some files public and serve them directly from S3 (or through CloudFront), and keep the others private, providing access via signed URLs.
Again, you can put the files on S3 and make them private. You will probably still store references to the files in your database. Generally it's not a great idea to store large blob files in a database, since databases are often not well optimized for accessing them.
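For illustration, a minimal Python (boto3) sketch of that pattern: keep the objects private, store only the key in your database, and hand out short-lived presigned URLs. The bucket name, key, and expiry here are placeholder assumptions, not a definitive setup.

# Illustrative sketch: upload a private document to S3 and hand out a
# short-lived signed URL for it. Bucket/key names are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-client-documents"  # placeholder bucket name

def store_document(local_path, key):
    # Objects stay private by default; no public ACL is applied.
    s3.upload_file(local_path, BUCKET, key,
                   ExtraArgs={"ServerSideEncryption": "AES256"})

def get_download_url(key, expires_in=300):
    # A presigned URL grants temporary read access without making the object public.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires_in,
    )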
In CQ we can create live copies either from blueprints, by choosing "New Site...", or directly through content nodes, by choosing "New Live Copy...".
In both cases inheritance is maintained and rollout works in the same way. So what is the advantage of using one over the other?
Any views?
Live Copies
Live copies can be created for a single page or a tree of pages, and may include the page and its subpages depending on the rollout configuration. A live copy can be linked to a rollout config, or it will use the system's default one.
There is no formal requirement on the source page's structure.
A live copy may reference a blueprint, but it can only reference a single blueprint.
Blueprints
Blueprints target the rollout of complete multilingual website projects and are a tool to control multiple rollout configs and live copies.
A blueprint requires a certain structure for the source site:
- One root level page
- The root's immediate children define the language branches of the site
- Each language branch contains one or more child pages.
Blueprints allow you to control multiple live copies and to centrally maintain consistent rollout configs for the blueprint's live copies.
A blueprint rollout will push modifications to all its live copies.
Usage scenarios of blueprints
Inheritance and rollout work the same way, simply because blueprints make use of live copies.
But blueprints help you organize your rollout scenarios for large multilingual sites. Just imagine a corporate website that provides a two- or even three-digit number of locales that need to be translated and kept in sync.
In such a scenario you will likely end up with a number of live copy and rollout configurations that is hard to understand and maintain.
Relying on a blueprint to, for example, standardize the rollout of a new language/market/locale gives you a higher degree of governance over your process, as the complete process is centrally manageable through the blueprint template.
But as long as you do not have such a scenario, you might be fine without the complete blueprint overhead.
A live copy is defined in the target page node with a cq:LiveSyncConfig node. It basically defines: I am a live copy of source (blueprint) page X, and the following rollout configs apply.
A Blueprint is defined in the source page node with a cq:BlueprintSyncConfig node, and this defines a target.
Essentially both achieve the same thing in the end, but I think there are a few differences: the first option can be used to create a 1:n relationship, whereas the second option is 1:1.
Also, if page nodes are copy-pasted in AEM, then relationships are copied with them (not quite sure in which way exactly, you would have to try for both scenarios). And when pages are deleted in a tree in the first scenario, AEM will add a cq:excludedPaths property to the config, which causes the page to be skipped in future rollouts; I'm not sure whether the same applies to cq:BlueprintSyncConfig.
I have been trying to find an open source or affordable platform / CMS that is distributed.
And by distributed I mean that there is a single control panel with all the content, but you can have multiple websites on multiple web hosts that query an API that holds this content. Not the usual "one install, multiple websites" as you can do with WordPress MU.
Ideally there would be an API that the website can connect to and get the data, or use push technology from the control panel once new content is added.
If there is no client side platform built but there is a sophisticated content management platform with an API that allows me to build my own client/website connecting to it, that would be fine too.
Does anyone have tips if there is such a thing?
Govento CMS is a distributed CMS that allows you to manage all projects with a single installation and present your content, dynamically and up to date, on different remote delivery platforms via push publishing.
German: http://goventocms.com
or English:
http://translate.google.de/translate?hl=en&sl=de&u=http://govento.de/&prev=search
I have been using the Metalsmith Contentful plugin. I am wondering if maybe I have the idea of static site generators wrong, but what is the purpose of this if I have to run a build every time something is changed on Contentful?
Is there a way to have Metalsmith on my server and have a build issued any time Contentful changes, or is this a bad idea?
What would be recommended for keeping a site in sync with Contentful, beyond just accessing the data with a static site generator?
If you want to automatically keep a static site built from Contentful content in sync, your best option is to use webhooks.
Contentful provides webhooks that can be fired for different types of events (publish, edit, etc.).
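As a rough sketch of that setup (assuming a small Python/Flask receiver on your server; the route, secret header, and build command are placeholder assumptions, not part of Contentful's API):

# Illustrative sketch: receive Contentful webhook calls and rebuild the site.
# The route, secret check, and build command are placeholder assumptions.
import subprocess
from flask import Flask, request, abort

app = Flask(__name__)
WEBHOOK_SECRET = "change-me"  # placeholder shared secret

@app.route("/contentful-webhook", methods=["POST"])
def rebuild():
    # Reject calls that don't carry the expected secret header.
    if request.headers.get("X-Webhook-Secret") != WEBHOOK_SECRET:
        abort(403)
    # Trigger the Metalsmith build (e.g. an npm script) to regenerate the output.
    subprocess.run(["npm", "run", "build"], check=True)
    return "rebuilt", 200

You would then configure Contentful to call this endpoint on publish/unpublish events, so the site is rebuilt only when content actually changes.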
I'm developing a wiki engine. Since this application can be useful on its own (at least for my company's private use), it should be able to run as a standalone Pyramid application, with its own graphical theme.
However, a wiki feature could also be useful as part of a bigger project, and I would like to be able to include it in other Pyramid applications.
I have already found some Pyramid features that could help me achieve this, but first I'm not sure whether it's the best way to do it, and second, some problems remain open.
Here are the potential issues I currently see:
templates: how to switch between the standalone mode and the hosted mode
host variables: even if we can reuse the host template, some variables may be needed to correctly render the templates but are not set by the guest (the wiki engine) application.
authentication: the guest app defines its own login system (based on pyramid_persona). Can the guest application reuse the host's authentication system?
My current idea is to use Pyramid's config.include() system. In the wiki engine's __init__.py I then define an includeme(config) function in addition to the main() function used for standalone mode.
In the host application I then define a variable in the .ini file which points to the template file that the guest should use (i.e. base_template = hostapp:templates/wikibase.mako).
Inside the guest application, the includeme() function reads the base_template variable and overrides some global config.
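A minimal sketch of what that includeme() could look like, assuming the base_template setting name from above and a module-level Globals holder; the route pattern and fallback template path are illustrative assumptions:

# wikiengine/__init__.py -- illustrative sketch of the setup described above
from pyramid.config import Configurator


class Globals(object):
    """Module-level holder for the base template the views render with."""
    # Fallback used in standalone mode (illustrative path)
    base_template = "wikiengine:templates/base.mako"


def includeme(config):
    """Called by the host application's config.include('wikiengine')."""
    settings = config.get_settings()
    # Let the host override the base template via its .ini file
    Globals.base_template = settings.get("base_template", Globals.base_template)
    config.add_route("display_wiki_page", "/wiki/{page}")  # illustrative pattern
    config.scan("wikiengine")


def main(global_config, **settings):
    """Standalone entry point."""
    config = Configurator(settings=settings)
    config.include(includeme)
    return config.make_wsgi_app()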
Then each guest view works like this:
from pyramid.renderers import render
from pyramid.view import view_config
from wikiengine import Globals  # shared config holder set up in includeme()

@view_config(route_name="display_wiki_page", renderer=Globals.base_template)
def view_wiki(request):
    """Returns formatted page content."""
    page = request.matchdict['page']
    content = get_raw_page_content_from_database(page)
    page_formatted = render("wikiengine:templates/page_formatting_template.mako",
                            {'request': request, 'content': content})
    return {'page_formatted': page_formatted}
So from this point the base template can be either the one from the guest or the one from the host application. Both will contain something like (in Mako): ${page_formatted | n}
But this does not solve the problem of the host variables needed for the template rendered by the guest code. For example, the host may need a hot_news variable that has to be displayed on each of the host pages, even the pages that host the wiki.
For this I plan to use the event system: add a subscriber for NewRequest or BeforeRender and set the needed variables there, inside the request object.
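For example, a minimal sketch of a BeforeRender subscriber in the host application that injects a hot_news value into every rendering; the variable and the get_hot_news() helper are hypothetical:

# hostapp/subscribers.py -- illustrative sketch; hot_news and its source are hypothetical
from pyramid.events import BeforeRender, subscriber


@subscriber(BeforeRender)
def add_global_template_vars(event):
    # Make host-specific variables available to every template,
    # including templates rendered by the included wiki views.
    request = event['request']
    event['hot_news'] = get_hot_news(request)  # hypothetical helper

Subscribers declared this way are picked up by config.scan() in the host application.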
Is this a correct approach? Are there examples of what I'm trying to do?
Pyramid's configuration mechanisms make it very easy for clients of a module to override configuration. This is one of the most powerful parts of Pyramid compared to other popular web frameworks.
config.include() is a good approach to solving the problem. It allows the caller to override anything defined within the include.
Assets can be overridden using config.override_asset().
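For example, a short sketch of a host application swapping in its own themed templates for the wiki's; the asset paths are illustrative:

# In the host application's main(), after including the wiki module
config.include('wikiengine')
# Replace the wiki's template directory with the host's themed versions
config.override_asset(to_override='wikiengine:templates/',
                      override_with='hostapp:templates/wiki/')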
Sharing user information requires your module either to provide the user information itself or to define a contract to which someone can conform, allowing them to override your model.
Anyway, this is obviously a huge topic. Highly modular apps written on top of Pyramid include substanced, kotti, ptah, bookie, etc.